USE GUIDE — OLLAMA
Main takeaway: start with local chat first, then test the API, then automate.
API stands for application programming interface: an HTTP endpoint that apps use to exchange data with your local model.
Model check (Windows, macOS, Linux)
ollama list
ollama run llama3.2:3b "Say hello in one short sentence."
Expected result: the model list appears and the model responds directly in the terminal with one short sentence. The commands and the output pattern are the same on all three platforms.
Call localhost API
curl http://localhost:11434/api/generate -d '{"model":"llama3.2:3b","prompt":"Give one productivity tip.","stream":false}'
Expected result: a JSON response that includes the generated text.
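The same call can be made from Python instead of curl, which is the natural next step toward automation. A minimal sketch using only the standard library (the function names are my own; the URL and payload fields match the curl example above):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False makes Ollama return one complete JSON object
    # instead of a stream of partial chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The generated text is returned in the "response" field.
        return json.loads(resp.read())["response"]
```

Usage, with the local server running: `generate("llama3.2:3b", "Give one productivity tip.")`.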
Private knowledge draft assistant
Feed sanitized internal notes to a local model, generate draft replies, and keep sensitive details on your own machine. This is ideal for support teams, operations notes, and repetitive report writing.
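One way to sketch the draft assistant is a prompt template that wraps the sanitized notes before they go to the local endpoint. Everything here (the template wording and the function name) is an illustrative assumption, not a prescribed format:

```python
def draft_prompt(notes: list[str], request: str) -> str:
    """Build a single prompt from sanitized notes plus a drafting request.

    The notes never leave this machine: the assembled prompt is only
    ever sent to the local Ollama endpoint.
    """
    context = "\n".join(f"- {note}" for note in notes)
    return (
        "You are a drafting assistant. Use only the notes below.\n"
        f"Notes:\n{context}\n\n"
        f"Task: {request}\n"
        "Write a short, professional draft reply."
    )
```

Pass the result as the "prompt" field of an /api/generate call to get a draft reply grounded in your own notes.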
Common fixes
If ollama list fails, start the server with ollama serve.
If the model is missing, run ollama pull llama3.2:3b.
If requests to port 11434 fail, restart the Ollama service and verify no firewall is blocking the port.
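The port check in the fixes above can be done programmatically. A small sketch (the function name is my own) that probes the local Ollama port; it assumes the server's root endpoint answers plain GET requests when running:

```python
import urllib.error
import urllib.request

def ollama_reachable(host: str = "localhost", port: int = 11434,
                     timeout: float = 2.0) -> bool:
    """Return True if a server answers on the given host and port."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, or HTTP error: treat as unreachable.
        return False
```

If this returns False, work through the fixes above (start the server, then check the firewall).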
Install first
Use the setup guide if Ollama is not installed yet.
Open setup guide →
Combine tools
Wire Ollama into n8n and OpenClaw for reusable automation.
Open combine guide →