Using local Ollama with Opencode

Reading time: < 1 minute

Today we will learn how to use opencode with a local Ollama install.

By default, the API is exposed at:

http://localhost:11434

Check it out:

curl http://localhost:11434/api/tags
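That endpoint returns JSON listing your local models. As a small sketch (assuming the response shape Ollama documents, `{"models": [{"name": ...}]}`), you can extract the model names like this:

```python
import json

def model_names(tags_response: str) -> list[str]:
    """Extract model names from an Ollama /api/tags JSON response."""
    data = json.loads(tags_response)
    return [model["name"] for model in data.get("models", [])]

# Example payload with the shape /api/tags returns (trimmed to the field we use)
sample = '{"models": [{"name": "qwen2.5-coder:7b"}, {"name": "llama3:8b"}]}'
print(model_names(sample))  # ['qwen2.5-coder:7b', 'llama3:8b']
```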

Don’t have it yet? You need to install Ollama.

For Windows:

irm https://ollama.com/install.ps1 | iex

For Linux:

curl -fsSL https://ollama.com/install.sh | sh

Next, download a model; in my case I like qwen2.5-coder:7b:

ollama pull qwen2.5-coder:7b
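Once the model is pulled, you can smoke-test Ollama's OpenAI-compatible endpoint. A minimal sketch that builds a standard chat-completions request for the local server (actually sending it requires Ollama to be running, so that part is commented out):

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions request for the local Ollama server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # Ollama ignores the key's value
        },
    )

req = build_chat_request("qwen2.5-coder:7b", "Write a hello world in Go")
print(req.full_url)  # http://localhost:11434/v1/chat/completions
# To actually send it (requires Ollama running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```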

Configure OpenCode to use Ollama

OpenCode normally uses OpenAI-style providers,
but here we will point it at our local instance.

Find your config file (example paths):

On Windows:

C:\Users\YOUR_USER\.opencode\config.json

On Linux:

~/.opencode/config.json

If it does not exist, create it.

And add this:

{
  "provider": "openai",
  "openai": {
    "base_url": "http://localhost:11434/v1",
    "api_key": "ollama"
  },
  "model": "qwen2.5-coder:7b"
}
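Equivalently, you can generate the file from a script. A small sketch that writes the same config and checks it round-trips as valid JSON (the path mirrors the Linux location above):

```python
import json
from pathlib import Path

config = {
    "provider": "openai",
    "openai": {
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama",  # Ollama ignores the key, but OpenAI clients require one
    },
    "model": "qwen2.5-coder:7b",
}

config_path = Path.home() / ".opencode" / "config.json"
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))

# Sanity check: the file parses back to the same structure
assert json.loads(config_path.read_text()) == config
print(f"wrote {config_path}")
```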

Another way to do it

If you want to run it without editing any files, you can run:

ollama

And then select opencode:

 Launch OpenCode (qwen2.5-coder:7b) Anomaly's open-source coding agent

Important

Ollama exposes an OpenAI-compatible API (the /v1 endpoints), so you can also configure it through environment variables:

export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=ollama
