PocketCodeIn Docs
// Service Usage Documentation · How to use each tool in the stack
📚
Service Documentation Hub
// Quick reference for using every service in the pocketcode.in stack

This site covers how to use each service after it's set up. For installation steps, see the setup guide. For source code and architecture, see the bundle README.

All Services at a Glance

💬
Open WebUI
ChatGPT-style interface for local Ollama models. RAG, file uploads, multi-user. Start here for most chat needs.
chat.pocketcode.in
🎨
Sim.ai
Visual workflow canvas for AI agents. Drag-drop nodes, connect models to tools, run pipelines.
sim.pocketcode.in
⚙️
n8n
Workflow automation. Triggers, schedules, 400+ integrations. Bridges AI and the rest of your stack.
n8n.pocketcode.in
🦞
OpenClaw
Multi-model AI agent dashboard. Coordinate Claude, GPT, and Ollama models in one UI.
openclaw.pocketcode.in
⇆
OpenRouter (LiteLLM Proxy)
OpenAI-compatible API endpoint. Routes to GPT-4, Claude, Llama, etc. Use from any tool that speaks OpenAI's API.
openrouter.pocketcode.in
🦙
Ollama API
Local LLM runtime. Direct REST API. Used internally by Open WebUI, Sim.ai, and n8n.
ollama.pocketcode.in
◆
Qdrant
Vector database for embeddings. Semantic search, RAG retrieval, similarity matching.
qdrant.pocketcode.in
🐘
pgAdmin
PostgreSQL admin GUI. Inspect databases, run queries, manage data across all services.
pgadmin.pocketcode.in
🖥️
Web Terminal
Browser-based terminal with Docker socket access. Manage the stack from any device.
terminal.pocketcode.in

Infrastructure Reference

🗄️
Databases
Shared PostgreSQL with pgvector. Five databases serve Sim.ai, n8n, OpenClaw, Open WebUI, and general use.
Internal: sim-db-1:5432
🔗
Service Comms
All services share the ai-stack Docker network. Each service is reachable by its container name.
Internal Docker DNS
📁
Shared Files
Two cross-service file paths: a Docker volume at /shared and a bind mount at /uploads.
Cross-container file exchange

How to Read This Documentation

Each service tab follows the same structure:

One login. Every service. Stays signed in. Sign in once at pocketcode.in. The gateway sets a .pocketcode.in cookie that protects every subdomain via Caddy's forward_auth. Auto-SSO goes further: the gateway also programmatically logs you into Sim.ai, Open WebUI, and n8n using stored credentials — click any card and you're already inside. A watchdog refreshes those service sessions every 2 minutes and on tab focus, so if you accidentally click a service's own "Logout" or its session expires, you're silently logged back in within seconds.
Logging out at pocketcode.in: only the gateway cookie is cleared. Service-internal sessions stay intact, but Caddy's forward_auth immediately blocks access to every *.pocketcode.in service โ€” visiting any of them redirects to the pocketcode login page. After re-logging in, every service is instantly accessible again (since their own sessions never died).
💬
Open WebUI
// ChatGPT-style interface for local Ollama models · chat.pocketcode.in

Quick Start

  1. Go to chat.pocketcode.in
  2. Sign in with your Open WebUI admin account (first signup = admin)
  3. Pick a model from the dropdown at the top center of the chat window
  4. Type a message → press Enter

Pull a New Model

Models are pulled inside the Ollama container. You can do this from Open WebUI's admin panel or via the web terminal:

via terminal.pocketcode.in
docker exec ollama ollama pull llama3.2:3b
docker exec ollama ollama pull qwen2.5:7b
docker exec ollama ollama pull nomic-embed-text   # for RAG embeddings

Or in Open WebUI: Settings → Admin Panel → Models → Pull a model from Ollama.com. Type the model name (e.g. llama3.2:3b) and click pull.

Common Tasks

Upload a document for Q&A (RAG):

  1. Click the 📎 paperclip icon in the chat input
  2. Select a PDF, DOCX, MD, or TXT file
  3. Ask questions about it — the model reads the document and answers

Build a permanent knowledge base:

  1. Profile (top right) → Workspace → Knowledge → + Create Knowledge
  2. Name it (e.g. "Company Docs") → upload multiple files
  3. Open WebUI chunks the docs, embeds them with nomic-embed-text, and stores them in ChromaDB
  4. In any chat, reference the collection with #Company Docs

Create a custom model with a pinned system prompt:

  1. Workspace → Models → + Create a model
  2. Choose a base model (e.g. llama3.2), add a system prompt, save
  3. It now appears in the chat model dropdown

Use as OpenAI-compatible API for other tools:

example โ€” curl
curl https://chat.pocketcode.in/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2:3b",
    "messages": [{"role":"user","content":"Hello"}]
  }'

The API key is in Settings → Account → API Keys. Click "Create new secret key".
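The same call can be made from Python with only the standard library; a minimal sketch (YOUR_API_KEY is a placeholder for a real key):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, user_message):
    """Build a request for Open WebUI's OpenAI-compatible chat endpoint."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{base_url}/api/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("https://chat.pocketcode.in", "YOUR_API_KEY",
                         "llama3.2:3b", "Hello")
# urllib.request.urlopen(req) sends it; the JSON response follows OpenAI's schema
```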

Useful Settings

Setting | Where | What it does
Default model | Settings → General | Picked when you start a new chat
Embedding model | Admin → Settings → Documents | Set to nomic-embed-text for RAG
System prompt | Per-chat ⚙️ icon | Pin instructions for that conversation
Temperature | Per-chat ⚙️ → Advanced | 0 = deterministic, 1 = creative
User signups | Admin → Settings → General | Enable/disable. Default is "pending approval"

This Setup's Quirks

OLLAMA_BASE_URL is preconfigured to http://ollama:11434 (internal Docker DNS). Open WebUI sees all your Ollama models automatically.
CPU mode means slower inference. On Hostinger KVM 8 (no GPU), expect 3B models at 8-15 tok/s, 7B at 3-6 tok/s. Use small models for chat, larger for one-off complex tasks.
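Those throughput figures translate directly into wait time; a back-of-envelope helper (the tok/s values are the rough ranges quoted above):

```python
def gen_seconds(tokens, toks_per_sec):
    """Rough wall-clock time to generate `tokens` at a given throughput."""
    return tokens / toks_per_sec

# A ~400-token answer: comfortable on a 3B model, sluggish on a 7B one
print(round(gen_seconds(400, 10)))  # 40  (3B model at ~10 tok/s)
print(round(gen_seconds(400, 4)))   # 100 (7B model at ~4 tok/s)
```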

Resources

🎨
Sim.ai
// Visual canvas for building AI agents and workflows · sim.pocketcode.in

Quick Start

  1. Go to sim.pocketcode.in
  2. Sign up (first time) or sign in
  3. Click + New Workflow
  4. Drag blocks from the left panel onto the canvas, connect them with lines
  5. Click Run in the top right to test

Block Types

Block | What it does | Common use
Agent | LLM call with optional tools | Most workflows start here
Function | Run JavaScript code | Data transformation between steps
API | HTTP request to any URL | Call external services
Condition | If/else branching | Route based on agent output
Loop | Iterate over an array | Process lists of items
Schedule | Cron trigger | Run workflow on a schedule
Webhook | HTTP endpoint trigger | External app calls Sim.ai

Connecting to Models

Inside any Agent block:

Build Your First Workflow — Hello World

  1. Drag Agent block onto canvas
  2. Set model: llama3.2:3b
  3. Set system prompt: "You are a helpful assistant."
  4. Set user message: "What is 2+2?"
  5. Click Run → see the output in the right panel

Add Tools to an Agent

Tools let agents take actions. Click an Agent block → Tools tab → add:

Common Tasks

Schedule a workflow to run daily:

  1. Add a Schedule block → set a cron expression (e.g. 0 9 * * * = daily at 9am)
  2. Connect it to your first Agent block
  3. Click Deploy in the top right
  4. The workflow now runs automatically

Expose workflow as a webhook (for external apps):

  1. Add a Webhook block as the trigger
  2. Deploy → copy the webhook URL
  3. Any service can POST JSON to it to trigger the workflow
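Triggering a deployed webhook from Python might look like this (the URL path is a placeholder; copy the real one Sim.ai shows after you deploy):

```python
import json
import urllib.request

# Placeholder URL; Sim.ai shows the real one after you deploy
WEBHOOK_URL = "https://sim.pocketcode.in/api/webhooks/trigger/YOUR_ID"

def webhook_request(url, payload):
    """Build a JSON POST that fires a webhook-triggered workflow."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = webhook_request(WEBHOOK_URL, {"task": "summarize", "text": "hello world"})
# urllib.request.urlopen(req) sends it and the workflow run starts
```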

This Setup's Quirks

PostgreSQL is shared. Sim.ai uses sim-db-1 (pgvector image), which also hosts databases for n8n, OpenClaw, and Open WebUI. Inspect via pgAdmin — connect to host sim-db-1, port 5432.
WebSocket realtime uses a separate subdomain. sim-realtime.pocketcode.in serves the live-collaboration WebSocket. If /workspace goes blank, check the subdomain's cert is valid and your browser cache is clear.

Resources

⚙️
n8n
// Workflow automation with 400+ integrations · n8n.pocketcode.in

Quick Start

  1. Go to n8n.pocketcode.in
  2. Sign in with your owner account (first signup = owner)
  3. Click + Add workflow
  4. Click + in the canvas → search for a trigger (e.g. "Manual")
  5. Add more nodes → connect → click Execute Workflow

Node Categories

Category | Examples | Use for
Triggers | Manual, Webhook, Schedule, Email, Slack | Starting events
AI | OpenAI, Anthropic, Ollama, OpenRouter | LLM calls
Apps | Slack, Discord, Gmail, GitHub, Notion | External service integration
Data | Postgres, HTTP Request, RSS, CSV | Read/write data
Logic | IF, Switch, Loop, Merge, Wait | Flow control
Code | Function, Code (JS/Python) | Custom transforms

Build Your First Workflow — Slack to Ollama

  1. Trigger: Slack Trigger (or Webhook for testing)
  2. Add an Ollama Chat Model node — base URL http://ollama:11434, model llama3.2:3b
  3. Add an AI Agent node — pass the Slack message as user input
  4. Add a Slack Send Message node — post the AI response back
  5. Click Activate in the top right

Common Tasks

Use Ollama directly in a workflow:

  1. Add Ollama Chat Model node
  2. Credentials → New → Base URL: http://ollama:11434
  3. Pick model from dropdown

Use cloud models (Claude/GPT) via OpenRouter:

  1. Add an OpenAI Chat Model node (yes, even for Claude — OpenRouter is OpenAI-compatible)
  2. Credentials → New → Base URL: http://openrouter-proxy:4000, API key: any value (or your LiteLLM master key)
  3. Set the model name to OpenRouter format (e.g. anthropic/claude-3.5-sonnet)

Query the shared Postgres:

  1. Add Postgres node
  2. Credentials: Host = sim-db-1, Port = 5432, DB = n8n, User/Pass from your .env
  3. Pick operation (Select, Insert, Update)

Receive webhooks from external apps:

  1. Use Webhook trigger node
  2. n8n shows you the test + production URLs
  3. For production (after Activate), URL is https://n8n.pocketcode.in/webhook/<your-path>

This Setup's Quirks

Encryption key persists in a volume. The n8n-data Docker volume holds the encryption key. All your stored credentials remain readable across container restarts. Back up this volume periodically.
WEBHOOK_URL must match the public domain. Check ~/ai-stack/n8n/run-n8n.sh — it should set WEBHOOK_URL=https://n8n.pocketcode.in/ so the webhook URLs n8n generates point to the right place.

Resources

🦞
OpenClaw
// Multi-model AI agent dashboard · openclaw.pocketcode.in

Quick Start

  1. Go to openclaw.pocketcode.in
  2. On first access you may see a "Pair device" prompt — see "Device Pairing" below
  3. Type a message in the chat input → send

Device Pairing

OpenClaw uses a device pairing system. New browsers/devices need to be approved from the CLI:

terminal.pocketcode.in or SSH
docker exec -it openclaw node /app/openclaw.mjs devices list
docker exec -it openclaw node /app/openclaw.mjs devices approve <UUID>

Common Tasks

Switch the active model:

  1. Settings (gear icon) → Models
  2. Choose from local Ollama or configured cloud providers

Configure cloud model providers:

  1. Settings → API Keys → add Anthropic / OpenAI / OpenRouter keys
  2. Models become available in the chat selector

Start a new agent session:

  1. Click + New Session (sidebar)
  2. Pick a model + system prompt template (or custom)
  3. Sessions persist in the OpenClaw database

This Setup's Quirks

Ollama is pre-configured. OpenClaw is wired to use http://ollama:11434 via the ai-stack Docker network. Models pulled into Ollama appear automatically.
Browser plugin port 18791 is not exposed in the hosted edition. Browser-side relay features require additional setup not covered here.

Resources

✦
Claude Code
// Anthropic's CLI coding agent · Used from the web terminal

Quick Start

Claude Code is a CLI tool, not a web service. Access via the web terminal or SSH:

terminal
docker exec -it claude-code bash
claude

The first run prompts for an Anthropic API key. Paste yours from console.anthropic.com → API Keys.

Basic Usage

Working with the Stack

The claude-code container mounts ~/ai-stack at /workspace. So you can:

example
docker exec -it claude-code bash
cd /workspace/sim          # Sim.ai config
claude "review the docker-compose.prod.yml and suggest improvements"

Common Tasks

Code review a file:

terminal
claude "review /workspace/manage/start-all.sh and find any race conditions"

Generate a new service config:

terminal
claude "create a docker-compose.yml for a Redis instance on the ai-stack network with persistence"

Debug a failing container:

terminal
docker logs sim-simstudio-1 --tail 100 > /tmp/logs.txt
claude "read /tmp/logs.txt and tell me why this container is restarting"

This Setup's Quirks

Costs money per token. Claude Code uses your Anthropic API key. Check /cost regularly. Sonnet 4 is ~$3/M input + $15/M output tokens.
Tab-completion for files works. When Claude prompts you for a file path, you can Tab-complete it. Files are mounted at /workspace.

Resources

⇆
OpenRouter (LiteLLM Proxy)
// OpenAI-compatible gateway to 300+ models · openrouter.pocketcode.in

What This Is

A LiteLLM proxy that exposes an OpenAI-compatible API on port 4000. You configure cloud providers (OpenAI, Anthropic, OpenRouter.ai, etc.) once, and any tool that speaks OpenAI's API can use them.

From Inside the Stack

Containers on the ai-stack network reach the proxy at:

internal URL
http://openrouter-proxy:4000

From Outside (via HTTPS)

public URL
https://openrouter.pocketcode.in

Configured Models

Check ~/ai-stack/openrouter-proxy/litellm-config.yaml on the server. Typical entries:

litellm-config.yaml
model_list:
  - model_name: claude-sonnet-4
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: llama-3.3-70b
    litellm_params:
      model: openrouter/meta-llama/llama-3.3-70b-instruct
      api_key: os.environ/OPENROUTER_API_KEY

Common Tasks

Test from terminal:

terminal
curl https://openrouter.pocketcode.in/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4",
    "messages": [{"role":"user","content":"Hello"}]
  }'

Add a new model:

  1. Edit ~/ai-stack/openrouter-proxy/litellm-config.yaml → add a new entry
  2. Restart the container: docker restart openrouter-proxy
  3. The new model_name is now usable everywhere

Use from Python:

python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.pocketcode.in/v1",
    api_key="any-string"  # disabled in default config
)

response = client.chat.completions.create(
    model="claude-sonnet-4",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)

This Setup's Quirks

API keys live in env vars. Set in ~/ai-stack/openrouter-proxy/.env: ANTHROPIC_API_KEY=, OPENAI_API_KEY=, OPENROUTER_API_KEY=. Reference them in the YAML as os.environ/VAR_NAME.
Watch the costs. No native cost limits. Use LiteLLM's built-in budget feature in the config if needed. Check /spend endpoint for usage stats.

Resources

🦙
Ollama API
// Local LLM runtime · Direct REST API · ollama.pocketcode.in

What This Is

The raw Ollama API. Most users interact with Ollama via Open WebUI (recommended for chat) or via the SDK in code. This page covers direct API usage.

Endpoints

Endpoint | Method | Purpose
/api/tags | GET | List installed models
/api/generate | POST | Single-turn generation
/api/chat | POST | Multi-turn chat
/api/embeddings | POST | Get vector embeddings
/api/pull | POST | Download a new model
/api/show | POST | Get model details

Common Calls

List installed models:

curl
curl https://ollama.pocketcode.in/api/tags

Chat completion:

curl
curl https://ollama.pocketcode.in/api/chat -d '{
  "model": "llama3.2:3b",
  "messages": [{"role":"user","content":"Hello"}],
  "stream": false
}'
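Without "stream": false, /api/chat streams one JSON object per line rather than a single body. A sketch of reassembling the reply from such a stream (the sample lines below simulate what the API emits):

```python
import json

def collect_stream(ndjson_lines):
    """Assemble the full reply from streaming NDJSON chunks.

    Each chunk is a JSON object; content arrives under message.content,
    and the final chunk carries "done": true.
    """
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        if not chunk.get("done"):
            parts.append(chunk["message"]["content"])
    return "".join(parts)

# Simulated stream, line by line:
sample = [
    '{"message":{"role":"assistant","content":"Hel"},"done":false}',
    '{"message":{"role":"assistant","content":"lo!"},"done":false}',
    '{"done":true}',
]
print(collect_stream(sample))  # Hello!
```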

Generate embeddings (for RAG):

curl
curl https://ollama.pocketcode.in/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "The text to embed"
}'
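Embeddings from this endpoint are compared by similarity, usually cosine similarity; the math is small enough to inline (toy 3-dim vectors here; real nomic-embed-text vectors are 768-dim):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity([1, 0, 0], [1, 0, 0]))            # 1.0
print(round(cosine_similarity([1, 0, 0], [0, 1, 0]), 1))  # 0.0
```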

Pull a new model (prefer the terminal over the API):

terminal
docker exec ollama ollama pull llama3.2:3b
docker exec ollama ollama pull qwen2.5:7b
docker exec ollama ollama pull nomic-embed-text

Model Recommendations (CPU mode)

Model | Size | Speed on KVM 8 | Use for
llama3.2:3b | 2.0 GB | 12-18 tok/s | Default chat, summaries
llama3.2:1b | 1.3 GB | 30-40 tok/s | Fast tasks, classification
qwen2.5:7b | 4.7 GB | 4-7 tok/s | Better reasoning
qwen2.5-coder:7b | 4.7 GB | 4-7 tok/s | Code completion
nomic-embed-text | 274 MB | Embedding only | RAG vector embeddings

This Setup's Quirks

Model data persists in a volume. Pulled models live in the ollama-data Docker volume. Survives container restarts. Inspect with docker exec ollama du -sh /root/.ollama.
CPU is the bottleneck. Concurrency is limited. Set OLLAMA_NUM_PARALLEL=2 in the run script if multiple users share the instance.

Resources

◆
Qdrant
// Vector database for embeddings · qdrant.pocketcode.in

What This Is

A vector database for semantic search and RAG. Store embeddings of text/images, query by similarity. Used by Sim.ai's knowledge tool and any custom workflow that needs vector search.

Web UI

qdrant.pocketcode.in/dashboard opens the Qdrant Web UI for browsing collections, inspecting points, running test queries.

Endpoints

Endpoint | Method | Purpose
/collections | GET | List all collections
/collections/{name} | PUT | Create a collection
/collections/{name}/points | PUT | Insert/update vectors
/collections/{name}/points/search | POST | Similarity search

Quick Workflow — Create + Query a Collection

Step 1 — Create a collection (vectors of size 768, matching nomic-embed-text):

curl
curl -X PUT https://qdrant.pocketcode.in/collections/my-docs \
  -H "Content-Type: application/json" \
  -d '{"vectors":{"size":768,"distance":"Cosine"}}'

Step 2 — Get an embedding from Ollama and insert it:

bash
# Get embedding
EMB=$(curl -s https://ollama.pocketcode.in/api/embeddings \
  -d '{"model":"nomic-embed-text","prompt":"The quick brown fox"}' \
  | jq -c .embedding)

# Insert into Qdrant
curl -X PUT https://qdrant.pocketcode.in/collections/my-docs/points \
  -H "Content-Type: application/json" \
  -d "{\"points\":[{\"id\":1,\"vector\":$EMB,\"payload\":{\"text\":\"The quick brown fox\"}}]}"

Step 3 — Search:

bash
QUERY_EMB=$(curl -s https://ollama.pocketcode.in/api/embeddings \
  -d '{"model":"nomic-embed-text","prompt":"fast animal"}' \
  | jq -c .embedding)

curl -X POST https://qdrant.pocketcode.in/collections/my-docs/points/search \
  -H "Content-Type: application/json" \
  -d "{\"vector\":$QUERY_EMB,\"limit\":5,\"with_payload\":true}"
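Search bodies can also carry a payload filter so only points matching a field are considered. A sketch of building one (the lang field is illustrative, not part of this stack's schema):

```python
import json

def search_body(vector, limit=5, must_match=None):
    """Build a Qdrant /points/search request body, optionally payload-filtered."""
    body = {"vector": vector, "limit": limit, "with_payload": True}
    if must_match is not None:
        key, value = must_match
        body["filter"] = {"must": [{"key": key, "match": {"value": value}}]}
    return json.dumps(body)

# Unfiltered search over toy 2-dim vectors
print(search_body([0.1, 0.2]))
# Only points whose payload has lang == "en"
print(search_body([0.1, 0.2], must_match=("lang", "en")))
```

POST the result to /collections/<name>/points/search as in the curl examples above.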

Common Tasks

List collections:

curl
curl https://qdrant.pocketcode.in/collections

Delete a collection:

curl
curl -X DELETE https://qdrant.pocketcode.in/collections/my-docs

This Setup's Quirks

Vector size must match your embedding model. nomic-embed-text outputs 768-dim vectors; all-minilm uses 384; OpenAI's text-embedding-3-small uses 1536. Set the right size when creating collections.
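A tiny helper keeps collection creation honest: look the size up from the model name instead of hard-coding it (dimensions as listed above):

```python
# Embedding dimensions per model: must match the collection's vector "size"
EMBED_DIMS = {
    "nomic-embed-text": 768,
    "all-minilm": 384,
    "text-embedding-3-small": 1536,
}

def collection_config(model):
    """Vector config for a Qdrant collection tied to one embedding model."""
    return {"vectors": {"size": EMBED_DIMS[model], "distance": "Cosine"}}

print(collection_config("nomic-embed-text"))
# {'vectors': {'size': 768, 'distance': 'Cosine'}}
```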

Resources

๐Ÿ˜
pgAdmin
// PostgreSQL admin GUI ยท pgadmin.pocketcode.in

Quick Start

  1. Go to pgadmin.pocketcode.in
  2. Sign in with email/password from PGADMIN_DEFAULT_EMAIL / PGADMIN_DEFAULT_PASSWORD env vars
  3. Add a new server connection (first-time only)

Connect to the Shared Postgres

In pgAdmin: right-click Servers → Register → Server. Then:

Tab | Field | Value
General | Name | ai-stack-db (anything)
Connection | Host | sim-db-1
Connection | Port | 5432
Connection | Maintenance DB | postgres
Connection | Username | From your ~/ai-stack/sim/.env (POSTGRES_USER)
Connection | Password | From your ~/ai-stack/sim/.env (POSTGRES_PASSWORD)

What Databases Are There?

Database | Used by
ailab | General-purpose (you can use freely)
simstudio | Sim.ai workflows & users
openclaw | OpenClaw sessions
n8n | n8n workflows & credentials
ollama_results | If you stash Ollama outputs (optional)

Common Tasks

Run a query:

  1. Navigate to a database in the tree
  2. Right-click → Query Tool
  3. Type SQL → press F5 to execute

Export a table to CSV:

  1. Right-click a table → Import/Export Data
  2. Choose Export, set CSV format, browse for output location

Backup the whole stack DB:

terminal
docker exec sim-db-1 pg_dumpall -U postgres > ~/ai-stack/backups/all-$(date +%Y%m%d).sql

pgvector queries (Sim.ai uses this):

sql
-- Find vectors closest to a target
SELECT id, content, embedding <-> '[0.1, 0.2, ...]'::vector AS distance
FROM documents
ORDER BY distance
LIMIT 5;

This Setup's Quirks

pgvector is pre-installed. The sim-db-1 container uses the pgvector/pgvector:pg17 image. CREATE EXTENSION vector; already done in each database.
Don't drop the postgres database. It's the maintenance DB. Use the per-service databases (ailab, simstudio, etc.) for your work.

Resources

🖥️
Web Terminal
// Browser-based bash terminal with Docker access · terminal.pocketcode.in

What This Is

A full bash terminal running in your browser. The container has the host Docker socket mounted, so you can manage every container in your stack without SSH. Perfect for quick admin from your phone, tablet, or any browser.

Quick Start

  1. Go to terminal.pocketcode.in
  2. If you're not logged in, you're bounced to pocketcode.in to log in
  3. The terminal loads — you see a root@<container-id>:~# prompt
  4. Type any command and press Enter

Keyboard Shortcuts

Shortcut | What it does
Ctrl+Shift+C | Copy selected text (browsers reserve plain Ctrl+C)
Ctrl+Shift+V | Paste from clipboard
Right-click | Context menu with paste option
Ctrl+C in terminal | Interrupt running process (e.g. stop a tail -f)
Ctrl+D | Exit current shell (reconnects automatically)

What You Can Do

Manage every container:

terminal
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
docker logs sim-simstudio-1 --tail 50
docker restart n8n
docker exec -it ollama bash

Edit any config file:

terminal
nano ~/ai-stack/caddy/Caddyfile
nano ~/ai-stack/sim/.env
docker exec caddy caddy reload --config /etc/caddy/Caddyfile

Use stack aliases (after sourcing .bashrc):

terminal · source aliases once per session
source ~/.bashrc

ai-start          # bring up all containers
ai-stop           # graceful shutdown
ai-doctor         # full diagnostic
ai-status         # quick container health snapshot
ai-logs n8n       # tail any service's logs
ai-reboot         # restart whole stack

Pull new Ollama models:

terminal
docker exec ollama ollama pull qwen2.5:7b
docker exec ollama ollama list

Run Claude Code from here:

terminal
docker exec -it claude-code bash
claude "review the start-all.sh script for race conditions"

Query the shared Postgres:

terminal
docker exec -it sim-db-1 psql -U postgres -d simstudio
\dt    -- list tables
\q     -- quit

This Setup's Quirks

You're inside a container, not the host. The terminal runs bash in the terminal container, with these mounts:
  • /var/run/docker.sock — talk to host Docker (manage all containers)
  • /root/ai-stack — edit any config file (changes affect the host)
  • /root/.bash_history — command history persists
  • /root/.ssh:ro — read-only SSH keys (for outbound scp etc.)
You can't apt install packages on the host. You're inside an Alpine-based image. For host-level changes (kernel modules, system packages, systemd services), use real SSH. 99% of stack management works fine here.
WebSocket connection drops on idle networks. If the terminal seems frozen, reload the page — the auth cookie keeps you signed in. ttyd auto-reconnects on minor blips.

Resources

🗄️
Databases
// Shared PostgreSQL + pgvector for the whole stack · sim-db-1:5432

Architecture

One PostgreSQL container — sim-db-1, running pgvector/pgvector:pg17 — hosts every database in the stack. Sim.ai brings it up (it's in the Sim.ai compose file), but pgAdmin, n8n, OpenClaw, and any custom workflow share the same instance. The container sits on two Docker networks simultaneously: sim_default (Sim.ai's own) and ai-stack (the rest of the stack), so any service can reach it.

[Diagram: Sim.ai, n8n, OpenClaw, Open WebUI (optional), and your custom workflows all connect to sim-db-1 (pgvector pg17), which hosts the simstudio, n8n, openclaw, ailab, and ollama_results databases.]

The Five Databases

Database | Owner / Primary user | What's inside
simstudio | Sim.ai | Workflows, agents, users, runs, knowledge base embeddings (pgvector)
n8n | n8n | Workflows, credentials (encrypted), executions, webhooks
openclaw | OpenClaw | Sessions, chat history, device pairings
ailab | General-purpose | Free for your own use — custom tables, prototypes, scratch
ollama_results | Optional | If you stash inference outputs from cron jobs, n8n, etc.

Connection Strings

From | How to connect
Inside Docker (any service on ai-stack) | postgres://USER:PASS@sim-db-1:5432/<db>
pgAdmin in browser | Host = sim-db-1, Port = 5432 (Sim.ai's .env has the password)
Web terminal (CLI psql) | docker exec -it sim-db-1 psql -U postgres -d <db>
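The in-Docker connection string is easy to build programmatically; a sketch (credentials come from ~/ai-stack/sim/.env, "PASS" is a placeholder):

```python
def dsn(user, password, db, host="sim-db-1", port=5432):
    """Postgres connection string for any container on the ai-stack network."""
    return f"postgres://{user}:{password}@{host}:{port}/{db}"

print(dsn("postgres", "PASS", "ailab"))
# postgres://postgres:PASS@sim-db-1:5432/ailab
```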

pgvector Extension

Already installed in every database. For small datasets, use it for semantic search inside Postgres without needing Qdrant:

sql · create a vector table
-- nomic-embed-text returns 768-dim vectors
CREATE TABLE documents (
  id BIGSERIAL PRIMARY KEY,
  content TEXT,
  embedding vector(768),
  created_at TIMESTAMPTZ DEFAULT now()
);

-- Index for fast similarity search
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- Find nearest neighbors
SELECT id, content, embedding <=> '[0.1, 0.2, ...]'::vector AS distance
FROM documents
ORDER BY distance
LIMIT 5;

Common Tasks

Get a psql shell:

terminal
docker exec -it sim-db-1 psql -U postgres -d ailab

List all databases:

terminal
docker exec sim-db-1 psql -U postgres -l

Backup a single database:

terminal
docker exec sim-db-1 pg_dump -U postgres -Fc -d n8n > ~/backups/n8n-$(date +%Y%m%d).pgcustom

Backup everything (pg_dumpall):

terminal
docker exec sim-db-1 pg_dumpall -U postgres > ~/backups/all-$(date +%Y%m%d).sql

Restore from backup:

terminal
cat ~/backups/all-20260516.sql | docker exec -i sim-db-1 psql -U postgres

Create a new database for your project:

sql
-- via pgAdmin Query Tool or psql shell
CREATE DATABASE my_project;
\c my_project
CREATE EXTENSION vector;

Connection Pool Limits

Default Postgres config allows ~100 concurrent connections. Sim.ai + n8n + OpenClaw typically use ~30. If you start running into "too many connections" errors, increase the limit:

terminal
docker exec sim-db-1 psql -U postgres -c "ALTER SYSTEM SET max_connections=200;"
docker restart sim-db-1

This Setup's Quirks

sim-db-1 lives in the Sim.ai compose file. If you run docker compose down from ~/ai-stack/sim/, the database goes down with it. ai-stop handles this gracefully (it stops dependent services first), but be aware if you ever run compose commands manually.
Dual network membership. The container is connected to both sim_default AND ai-stack. After a Sim.ai recompose, it sometimes drops the ai-stack attachment — ai-start reconnects it automatically. If you see "host sim-db-1 not found" errors from n8n or OpenClaw, run docker network connect ai-stack sim-db-1 manually.
n8n's encryption key is in a Docker volume (n8n-data), not in Postgres. Back up that volume separately if you care about preserving stored credentials across rebuilds.

Resources

🔗
Service Communication
// How services talk to each other inside Docker · ai-stack network · internal DNS

The Network Architecture

Every container in the stack is on the ai-stack Docker network. Docker provides automatic DNS resolution between containers using their container names. Sim.ai's compose stack has its own internal network sim_default, and sim-db-1 straddles both so services on either network can reach it.

[Diagram: the browser reaches Caddy (ports 80/443) on the host over HTTPS; Caddy proxies to auth-gateway:7000 and to every service on the ai-stack network (ollama:11434, open-webui:8080, openrouter-proxy:4000, openclaw:8080, n8n:5678, qdrant:6333, pgadmin:80, terminal:7681, sim-simstudio-1:3000, sim-realtime-1:3002, sim-redis-1:6379); sim-db-1:5432 sits on both ai-stack and sim_default.]

Internal Hostnames

Inside any container on ai-stack, these hostnames resolve automatically:

Hostname | Port | What it is | Common use
ollama | 11434 | Ollama API | LLM inference
open-webui | 8080 | Open WebUI | —
auth-gateway | 7000 | Gateway service | Caddy forward_auth
sim-db-1 | 5432 | PostgreSQL | Any service's database
sim-simstudio-1 | 3000 | Sim.ai web app | Caddy upstream
sim-realtime-1 | 3002 | Sim.ai WebSocket | Workspace live collab
sim-redis-1 | 6379 | Redis (Sim.ai) | Sim.ai queues, sessions
openclaw | 8080 | OpenClaw | —
n8n | 5678 | n8n | —
openrouter-proxy | 4000 | LiteLLM proxy | Cloud model gateway
qdrant | 6333 | Qdrant REST | Vector search
pgadmin | 80 | pgAdmin | —
terminal | 7681 | Web terminal (ttyd) | —
caddy | 80, 443 | Reverse proxy | —

Why Use Internal Hostnames?

Common Patterns

n8n calling Ollama: In n8n's Ollama Chat Model credentials, set Base URL to:

n8n credential value
http://ollama:11434

n8n calling OpenRouter (cloud models) via the proxy:

n8n OpenAI Chat Model credentials
Base URL: http://openrouter-proxy:4000
API Key:  any-non-empty-string

Sim.ai calling Ollama: already wired via Sim.ai's OLLAMA_URL=http://ollama:11434 env var.

Custom script calling everything from a terminal:

terminal · inside any ai-stack container
curl http://ollama:11434/api/tags
curl http://qdrant:6333/collections
curl http://openrouter-proxy:4000/v1/models

Connecting from a brand-new container you create: Add it to the ai-stack network:

terminal
docker run -d --name my-app \
  --network ai-stack \
  -e DB_URL=postgres://postgres:PASS@sim-db-1:5432/ailab \
  -e LLM_URL=http://ollama:11434 \
  my-image:latest

Adding a Service to ai-stack

  1. Run with --network ai-stack flag (or networks: [ai-stack] in compose)
  2. Once started, it can reach every other container by name
  3. If it needs to be public via HTTPS, add a Caddyfile block per Tab 13

Auto-SSO Across Services

The gateway at pocketcode.in ships with optional auto-SSO: when you sign in at the apex, hidden iframes silently log you into Sim.ai, Open WebUI, and n8n using credentials stored in ~/ai-stack/auth-gateway/.env. Three things are happening:

Phase | What runs | How it works
1. Master login | Gateway sets a pocketcode_session cookie scoped to .pocketcode.in | One cookie sent to every subdomain — Caddy's forward_auth sees it and lets you through
2. Bootstrap | home.html creates hidden iframes pointing at https://<svc>.pocketcode.in/_sso_init | Caddy proxies /_sso_init to the gateway; the gateway POSTs to the service's login API internally and forwards Set-Cookie back through Caddy → cookie scoped to <svc>.pocketcode.in
3. Watchdog | iframes re-load every 2 min and on tab focus | If you got logged out of any service in the meantime, the next refresh logs you back in. There's no way to "stay out" of a service while the master session is alive.

Which services auto-SSO works for

Service | SSO? | Why / why not
Sim.ai | ✓ | BetterAuth's POST /api/auth/sign-in/email accepts JSON, returns a session cookie
Open WebUI | ✓ | POST /api/v1/auths/signin JSON + cookie-based auth
n8n | ✓ | POST /rest/login JSON + n8n-auth session cookie
OpenClaw | — | Device pairing only, no credential-based login
pgAdmin | — | Requires a CSRF token (two-step login); skipped for simplicity. Sessions persist 30+ days though, so it's a one-time login.
Ollama / Qdrant / OpenRouter | — | No login — gated by Caddy's auth_gate directly

The /_sso_init endpoint

Each SSO-enabled subdomain has this Caddyfile block:

caddyfile · sim.pocketcode.in pattern
sim.pocketcode.in {
    import auth_gate
    handle /_sso_init {
        reverse_proxy auth-gateway:7000 {
            rewrite /sso-init/sim
        }
    }
    reverse_proxy sim-simstudio-1:3000
}

The browser loads /_sso_init in a hidden iframe → Caddy proxies to the gateway → the gateway logs into Sim.ai server-side → the response carries Set-Cookie for the user → the iframe posts a status message back to home.html.

What logging out does

Troubleshooting

"Host not found" / "getaddrinfo ENOTFOUND":

"Connection refused" but container is up:

n8n shows "Failed to load model catalog" briefly at start:

Resources

📁
Shared Files
// Two cross-service file paths · Docker volume + bind mount · /shared and /uploads

What's Mounted Where

Two file paths are mounted inside every relevant service container, so data can flow between them without HTTP roundtrips:

Path inside containers | Backed by | Type | Use for
/shared | ai-shared-data Docker volume | Managed volume | Cross-service intermediate data, persistent across restarts
/uploads | /root/ai-stack/uploads bind mount | Host bind mount | Files dropped from your Mac via scp, visible to all services

When to Use Which

Use /shared for service-to-service data. One service writes, another reads. Stays inside Docker. No host pollution. Survives reboots. Examples: n8n stashes a CSV, Sim.ai picks it up; Ollama caches an embedding, custom script reads it.
Use /uploads for files coming from you (the human). scp from your Mac, drop into ~/ai-stack/uploads/, every service sees it instantly at /uploads. Examples: PDFs for Open WebUI to ingest, CSVs for n8n to process, datasets for Qdrant to embed.
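The /shared handoff pattern can be sketched in a few lines of Python. A minimal sketch: the function names and the summary.json filename are illustrative, and the only real contract is that both services agree on a path under /shared.

```python
import json
from pathlib import Path

SHARED = Path("/shared")  # the ai-shared-data volume inside a container


def publish(name: str, payload: dict, base: Path = SHARED) -> Path:
    """Writer side: drop a JSON artifact where another service can find it."""
    out = base / name
    out.write_text(json.dumps(payload))
    return out


def consume(name: str, base: Path = SHARED) -> dict:
    """Reader side: pick up the artifact by the agreed-upon name."""
    return json.loads((base / name).read_text())


# e.g. an n8n step publishes, a Sim.ai Function block consumes:
# publish("summary.json", {"rows": 42})
# consume("summary.json")
```

Because /shared is the same volume in every container, there's no HTTP hop and no copy: the reader sees the file the instant the writer closes it.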

How to Use From Each Service

From the web terminal:

terminal.pocketcode.in
ls /shared          # Inside the terminal container
ls /uploads
ls ~/ai-stack/uploads   # Same as /uploads (the terminal's /root is mounted)

From n8n (Read/Write Files node): point at /shared/output.csv or /uploads/input.csv directly. n8n already has both mounts.

From Sim.ai workflows: use the Function block with Node.js fs:

javascript · in a Sim.ai Function block
const fs = require('fs');
const content = fs.readFileSync('/uploads/data.json', 'utf-8');
const data = JSON.parse(content);
return { items: data };

From Open WebUI: upload files through the UI (paperclip icon); they're stored separately in the open-webui-data volume. To make a host file available, copy it to /uploads first, then upload via the UI.

From OpenClaw and custom scripts: same pattern; read/write to /shared or /uploads within their containers.

Common Workflows

Drop a file from your Mac, process it in n8n:

bash · on Mac
scp ~/Documents/data.csv root@YOUR_SERVER_IP:~/ai-stack/uploads/data.csv

In n8n: Read/Write Files node → /uploads/data.csv → process. n8n sees the file immediately, no restart needed.

n8n writes a result, Sim.ai picks it up: in n8n, write to a path like /shared/result.json with the Read/Write Files node; in Sim.ai, read it back with fs.readFileSync('/shared/result.json') in a Function block. Agreeing on the filename is the whole contract.

Build a dataset for Qdrant:

  1. scp folder of PDFs to ~/ai-stack/uploads/pdfs/
  2. From terminal: docker exec -it <some-python-container> python ingest.py /uploads/pdfs
  3. Script chunks files, embeds via Ollama at http://ollama:11434, posts to Qdrant at http://qdrant:6333
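Step 2's ingest.py might look like the sketch below. This is a sketch, not the actual script: PDF text extraction is elided, and the chunk sizes, point layout, and function names are all assumptions; only the Ollama and Qdrant endpoints come from the steps above.

```python
import json
import urllib.request

OLLAMA = "http://ollama:11434"   # container names on the ai-stack network
QDRANT = "http://qdrant:6333"


def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list:
    """Fixed-size character chunks with overlap, the simplest chunking scheme."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text), 1), step):
        chunks.append(text[start:start + size])
    return chunks


def embed(text: str) -> list:
    """768-dim vector from nomic-embed-text via Ollama's embeddings API."""
    req = urllib.request.Request(
        OLLAMA + "/api/embeddings",
        data=json.dumps({"model": "nomic-embed-text", "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]


def upsert(collection: str, point_id: int, vector: list, payload: dict) -> None:
    """PUT one point into a Qdrant collection."""
    body = {"points": [{"id": point_id, "vector": vector, "payload": payload}]}
    req = urllib.request.Request(
        f"{QDRANT}/collections/{collection}/points",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req).close()


# for i, chunk in enumerate(chunk_text(extracted_pdf_text)):
#     upsert("docs", i, embed(chunk), {"text": chunk})
```

Collection size must match the embedding model: nomic-embed-text produces 768-dimensional vectors, which is why Recipe 4 below creates the collection with size 768.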

Inspecting Each From the Host

The bind-mount (/uploads) is trivially visible on the host:

terminal · on host
ls -la ~/ai-stack/uploads/
du -sh ~/ai-stack/uploads/

The Docker volume (/shared) lives under Docker's internal storage:

terminal · on host
docker volume inspect ai-shared-data
# shows: Mountpoint /var/lib/docker/volumes/ai-shared-data/_data
ls -la /var/lib/docker/volumes/ai-shared-data/_data/

Backup & Restore

Backup /uploads (host bind mount: just tar it):

terminal
tar czf ~/backups/uploads-$(date +%Y%m%d).tar.gz -C ~/ai-stack uploads/

Backup /shared (Docker volume: needs a helper container):

terminal
docker run --rm \
  -v ai-shared-data:/data \
  -v ~/backups:/backup \
  alpine tar czf /backup/shared-$(date +%Y%m%d).tar.gz -C /data .

Restore the volume:

terminal
docker run --rm \
  -v ai-shared-data:/data \
  -v ~/backups:/backup \
  alpine sh -c "cd /data && tar xzf /backup/shared-20260516.tar.gz"

Permissions Note

Containers may run as different users. Open WebUI runs as a non-root UID; n8n runs as node (UID 1000); pgAdmin runs as pgadmin (UID 5050). Files created by one service may be owned by an unfamiliar UID. If you hit permission errors, chmod -R a+rw on the affected directory from the terminal usually fixes it (these mounts aren't security boundaries, just convenience).
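If you'd rather script the fix than run chmod by hand, here is a rough Python equivalent of chmod -R a+rw. The helper name is hypothetical; it's not part of the stack.

```python
import os
import stat
from pathlib import Path


def open_up(root: str) -> int:
    """chmod -R a+rw equivalent: add read/write bits for everyone under root.
    Returns the number of entries touched."""
    rw = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP
          | stat.S_IROTH | stat.S_IWOTH)
    count = 0
    for path in [Path(root), *Path(root).rglob("*")]:
        os.chmod(path, path.stat().st_mode | rw)
        count += 1
    return count


# open_up("/shared")  # run from the web terminal container
```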

This Setup's Quirks

The volume isn't auto-cleaned. /shared grows over time. Periodically check with du -sh /var/lib/docker/volumes/ai-shared-data/_data/ from the host and clean up old files.
Not all services have both mounts. Cross-check by inspecting: docker inspect <container> --format='{{range .Mounts}}{{.Destination}} {{end}}'. If your service needs them and they're missing, edit the run script to add -v ai-shared-data:/shared -v /root/ai-stack/uploads:/uploads and recreate.

Resources

🔧
Integration Recipes
// Cross-service patterns and common workflows

Recipe 1: Slack-to-Ollama Chatbot (via n8n)

Listen for Slack messages, route through Ollama, respond.

  1. In n8n: Slack Trigger → AI Agent (Ollama, model llama3.2:3b) → Slack Send Message
  2. Use the message text as user input to the agent
  3. Pass the agent output back to Slack channel

Recipe 2: Document Q&A System (Open WebUI + Ollama)

  1. Pull embedding model: docker exec ollama ollama pull nomic-embed-text
  2. In Open WebUI: Settings → Documents → Embedding Model = nomic-embed-text
  3. Workspace → Knowledge → New Collection → upload PDFs
  4. In chat, reference with #collection-name

Recipe 3: Scheduled Web Scraping (n8n + OpenRouter)

  1. n8n: Schedule trigger (daily 9am) → HTTP Request (fetch URL) → AI Agent (summarize via OpenRouter Claude) → Send Email/Slack
  2. Claude does the summarization since it's better at structured output
  3. Costs ~$0.01-0.05 per run depending on page size

Recipe 4: Custom RAG Pipeline (Ollama + Qdrant + Sim.ai)

  1. Create a Qdrant collection (size 768 for nomic-embed-text)
  2. In Sim.ai: build a workflow that takes a query, embeds via Ollama, searches Qdrant, passes top-K results + query to an Agent block
  3. Agent uses retrieved context + query to produce grounded answer

Recipe 5: Multi-Model Comparison (LiteLLM)

Compare outputs from local Ollama vs cloud Claude vs GPT side-by-side.

terminal
PROMPT="Explain quantum computing in 2 sentences"

# Local Ollama
curl -s http://localhost:11434/api/generate \
  -d "{\"model\":\"llama3.2:3b\",\"prompt\":\"$PROMPT\",\"stream\":false}" \
  | jq -r .response

# Cloud Claude via OpenRouter proxy
curl -s http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "{\"model\":\"claude-sonnet-4\",\"messages\":[{\"role\":\"user\",\"content\":\"$PROMPT\"}]}" \
  | jq -r '.choices[0].message.content'

Recipe 6: Voice Pipeline (Open WebUI + Whisper)

Open WebUI supports voice input via Whisper. To enable:

  1. Settings → Audio → STT Engine: select "Local Whisper"
  2. Pick a Whisper model size (base / small / medium)
  3. Click 🎤 icon in chat input → speak → transcribed and sent

Recipe 7: Backup & Restore the Stack

Backup (run daily as a cron job or n8n workflow):

bash
#!/bin/bash
DATE=$(date +%Y%m%d-%H%M)
mkdir -p ~/backups/$DATE

# Postgres dump
docker exec sim-db-1 pg_dumpall -U postgres > ~/backups/$DATE/postgres.sql

# Critical volumes (n8n encryption key, Open WebUI chats, etc.)
docker run --rm -v n8n-data:/data -v ~/backups/$DATE:/backup \
  alpine tar czf /backup/n8n-data.tar.gz -C /data .

docker run --rm -v open-webui-data:/data -v ~/backups/$DATE:/backup \
  alpine tar czf /backup/open-webui-data.tar.gz -C /data .

# Caddy data (certs)
docker run --rm -v caddy_data:/data -v ~/backups/$DATE:/backup \
  alpine tar czf /backup/caddy_data.tar.gz -C /data .

# Compress everything
tar czf ~/backups/$DATE.tar.gz -C ~/backups $DATE
rm -rf ~/backups/$DATE
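A daily cron backup will fill ~/backups over time. A small retention helper could prune old archives produced by the script above; this is a hypothetical addition, though the filename pattern matches the script's DATE format (%Y%m%d-%H%M).

```python
import re
from pathlib import Path


def prune_backups(backup_dir, keep: int = 7) -> list:
    """Delete all but the newest `keep` archives named like 20260516-0930.tar.gz.
    Returns the names that were removed (zero-padded names sort chronologically)."""
    pattern = re.compile(r"^\d{8}-\d{4}\.tar\.gz$")
    archives = sorted(p for p in Path(backup_dir).iterdir()
                      if pattern.match(p.name))
    doomed = archives[:-keep] if keep else archives
    for p in doomed:
        p.unlink()
    return [p.name for p in doomed]


# prune_backups("/root/backups", keep=7)  # run after the backup script
```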

Recipe 8: Manage Stack from the Web Terminal

Everything in this guide can be done from terminal.pocketcode.in in your browser:

useful aliases
ai-start          # bring up all containers
ai-stop           # shut down cleanly
ai-doctor         # diagnostic scan
ai-logs SERVICE   # tail any service's logs
ai-status         # quick health check
Going deeper? The web terminal has full Docker socket access. docker ps, docker exec, docker logs all work. Edit any config file under ~/ai-stack/.