
Building the Ultimate Sovereign AI Automation Hub (for Free)

How to deploy n8n, Ollama, Qdrant, and PostgreSQL on Oracle Cloud ARM Infrastructure.

Stop paying monthly subscriptions for intelligence you can own.

We’re past the point where you need a credit card and an API key to build useful agents. If you combine n8n (automation) with Ollama (local LLM inference), you get a private, self-hosted stack that costs $0/month.

The problem? Most “free tier” cloud servers are toys. AWS and Google give you 1 GB of RAM, which barely runs a web server, let alone an AI model.

Oracle Cloud is the exception. Their “Always Free” Ampere tier gives you 4 vCPUs and a massive 24 GB of RAM. That’s enough to run a production-grade Llama 3.1 model entirely in memory.

You can create your Oracle Cloud account here: https://www.oracle.com/cloud/sign-in.html

Here is the exact playbook to turn that infrastructure into a sovereign AI hub.

1. The Stack: Why This Combo?

We aren’t just throwing random tools together. This is a cohesive system.

  • n8n: The automation platform that handles the workflow logic and connects your apps.
  • PostgreSQL: The memory of our workflows. We’re ditching n8n’s default SQLite because it chokes under concurrent traffic; Postgres holds up.
  • Ollama: Runs the LLMs locally on the server’s CPU and RAM.
  • Qdrant: A vector database that lets your AI “read” your PDFs and documents (RAG).
  • Caddy: Handles HTTPS automatically so you don’t have to mess with certbot.

2. Preparing the Server (“Virtual Metal”)

Cloud servers are paranoid by default. They block everything. Before installing software, we need to smash through two specific barriers.

A. The Double-Lock Firewall

Oracle has a nasty habit of layering firewalls. If you open one and forget the other, your connection times out.

Lock 1: The Cloud Console (The Gate)

This blocks traffic before it hits your server.

  1. Go to the OCI Console > VCN > select your VCN > Security Lists > Default Security List > Security Rules.
  2. Add an Ingress Rule allowing traffic from 0.0.0.0/0 on ports 80 (HTTP) and 443 (HTTPS).

Lock 2: The OS Firewall (The Door)

Ubuntu images on Oracle ship with aggressive iptables rules: they usually REJECT everything except SSH. You need to insert your ACCEPT rules above the REJECT rule.

Run this on your server:
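Something like the following, assuming the default rule order on Oracle’s Ubuntu image; list your rules first with sudo iptables -L INPUT --line-numbers and adjust the insert position if yours differs:

```bash
# Insert ACCEPT rules for HTTP/HTTPS above the default REJECT rule
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT

# Persist the rules across reboots
# (if this command is missing, install it with: sudo apt install iptables-persistent)
sudo netfilter-persistent save
```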

B. The Data Vault (Storage)

Do not skip this step.

Your VM comes with a tiny boot volume. If you fill it with AI models and logs, the OS will crash.

Create a free 100GB Block Volume in the Oracle console and attach it to your VM. Once attached (usually as /dev/sdb), format and mount it:
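A minimal sketch, assuming the volume really did appear as /dev/sdb and you mount it at /mnt/n8n_data; confirm the device name with lsblk before you format anything:

```bash
# Check which device the new volume got (do NOT format the boot disk)
lsblk

# Format the block volume and mount it
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/n8n_data
sudo mount /dev/sdb /mnt/n8n_data

# Make the mount survive reboots (nofail keeps the VM bootable if the volume detaches)
echo '/dev/sdb /mnt/n8n_data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```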

3. Installing Docker on ARM

Here’s a trap for beginners: You are running on an ARM processor. The standard “convenience scripts” (get-docker.sh) often fail or pull incompatible plugins.

Do it manually to ensure stability.
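The steps below follow Docker’s official apt-repository install for Ubuntu, which ships native arm64 packages; treat the package list as a snapshot and cross-check the Docker docs if anything has moved:

```bash
# Add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install the engine plus the compose plugin (no legacy docker-compose binary needed)
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Let the default user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker ubuntu
```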

4. Configuration: Production Mindset

We are going to separate the Configuration (static files) from the Data (dynamic files).

Create the Hierarchy:
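Something like this; the subfolder names under /mnt/n8n_data are just my convention, not anything the tools require:

```bash
# Static config (compose file, .env, Caddyfile) lives on the boot volume
sudo mkdir -p /opt/n8n_stack

# Dynamic data (databases, models, n8n home) lives on the big block volume
sudo mkdir -p /mnt/n8n_data/{n8n_home,postgres,qdrant,ollama,caddy}

# n8n runs as UID 1000 inside the container and must own its home folder
sudo chown -R 1000:1000 /mnt/n8n_data/n8n_home
```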

The Secrets File (.env):

Never hardcode passwords in your YAML. Create /opt/n8n_stack/.env:
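A sketch of the variables the compose file in the next section expects. Every value is a placeholder; generate your own secrets (for example with openssl rand -hex 32 for the encryption key):

```
# /opt/n8n_stack/.env : placeholders only, never commit this file
POSTGRES_USER=n8n
POSTGRES_PASSWORD=change-me-to-a-long-random-string
POSTGRES_DB=n8n

# n8n uses this key to encrypt stored credentials; losing it means losing them
N8N_ENCRYPTION_KEY=change-me-openssl-rand-hex-32

# Only needed for Path A (domain + Caddy)
N8N_HOST=n8n.example.com
GENERIC_TIMEZONE=UTC
```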

5. The Master Blueprint

This docker-compose.yml orchestrates the services. It ensures the database is ready before n8n wakes up and that your data persists on the large drive.

Create /opt/n8n_stack/docker-compose.yml:
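Here’s a sketch of the full file. The image tags, the healthcheck, and the volume paths follow the folder layout from step 4 and are my defaults, not hard requirements; pin the versions you actually want to run:

```yaml
# /opt/n8n_stack/docker-compose.yml (sketch)
services:
  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - /mnt/n8n_data/postgres:/var/lib/postgresql/data
    healthcheck:
      # n8n only starts once this reports healthy
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_HOST=${N8N_HOST}
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_HOST}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
    volumes:
      - /mnt/n8n_data/n8n_home:/home/node/.n8n
    # Path B only: uncomment to expose n8n on localhost instead of using Caddy
    # ports:
    #   - "127.0.0.1:5678:5678"

  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    volumes:
      - /mnt/n8n_data/ollama:/root/.ollama

  qdrant:
    image: qdrant/qdrant:latest
    restart: unless-stopped
    volumes:
      - /mnt/n8n_data/qdrant:/qdrant/storage

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - /mnt/n8n_data/caddy:/data
```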

6. Accessing the n8n Instance

You have two ways to reach your server.

Path A: The Pro Route (Domain Name)

If you own a domain, create an A Record pointing to your Oracle IP. Then, create a Caddyfile in /opt/n8n_stack/:
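A minimal sketch; swap n8n.example.com for your real domain (it has to resolve to your Oracle IP already, or the Let’s Encrypt challenge fails):

```
# /opt/n8n_stack/Caddyfile
n8n.example.com {
    # Caddy reaches n8n over the compose network by service name
    reverse_proxy n8n:5678
}
```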

Caddy handles SSL automatically. It just works.

Path B: The Test Route (SSH Tunnel)

No domain? No problem.

  1. Remove the caddy service from the docker-compose.yml above.
  2. Expose n8n only on localhost by giving the n8n service a ports mapping of "127.0.0.1:5678:5678".
  3. Run this on your local machine to tunnel in:
    ssh -L 5678:localhost:5678 ubuntu@<YOUR-ORACLE-IP>

Now browse to http://localhost:5678

7. Launch and Model Installation

Time to deploy the whole stack.
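Assuming the Docker setup from step 3 and the files above, the launch looks like this:

```bash
cd /opt/n8n_stack
docker compose up -d

# Watch the containers come up; postgres must report (healthy) before n8n settles
docker compose ps
docker compose logs -f n8n
```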

Next, set up the brain of the deployment.

Ollama starts empty, so you need to pull the models onto the data volume (they’re loaded into RAM when a workflow calls them). We’ll use Llama 3.1 for reasoning and Nomic for vector embeddings, in case any of your use cases need RAG (Retrieval-Augmented Generation).
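The exact tags below are my suggestion for a 24 GB machine, not a requirement; any similarly sized models from the Ollama library will do:

```bash
# The 8B Llama 3.1 fits comfortably in 24 GB of RAM
docker compose exec ollama ollama pull llama3.1:8b

# Small embedding model for RAG pipelines with Qdrant
docker compose exec ollama ollama pull nomic-embed-text

# Sanity check: both models should be listed
docker compose exec ollama ollama list
```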

8. Troubleshooting

I hit a couple of issues during the deployment, so I’m sharing them here so you don’t have to troubleshoot them yourself.

  • “EACCES: permission denied”: This means your n8n container (user 1000) can’t write to the host folder.
    Fix: sudo chown -R 1000:1000 /mnt/n8n_data/n8n_home.

  • 502 Bad Gateway: Caddy is up, but n8n is dead.
    Fix: Check docker logs n8n_stack-n8n-1. It’s almost always a wrong DB password in your .env.

  • Connection Timed Out:
    Fix: You forgot one of the firewalls. Check the Oracle Cloud Console (Ingress Rules) AND the server’s iptables.

Your Move

You now have a system that SaaS companies charge hundreds a month for. It’s running on your own metal, your data never leaves the server, and it costs zero dollars.

Go build something dangerous 😉