Fork of Qwen Code · otter CLI

Speed of Intent.

 ██████╗ ████████╗████████╗███████╗██████╗ 
██╔═══██╗╚══██╔══╝╚══██╔══╝██╔════╝██╔══██╗
██║   ██║   ██║      ██║   █████╗  ██████╔╝
██║   ██║   ██║      ██║   ██╔══╝  ██╔══██╗
╚██████╔╝   ██║      ██║   ███████╗██║  ██║
 ╚═════╝    ╚═╝      ╚═╝   ╚══════╝╚═╝  ╚═╝
>_ Otter Code (v0.14.1)
API Key | qwen3-coder (/model to change)
~/dev/my-app
> Type your message or @path/to/file
? for shortcuts
OPEN SOURCE · SKILLS & SUBAGENTS · SIMPLE MODEL CONFIG · LOCAL OPENAI-COMPATIBLE · MCP INTEGRATION · TERMINAL-FIRST · IDE-READY

Local AI agent.

otter runs locally: your repo, Skills, SubAgents, and shell. Your code stays on disk. Point models at local or hosted OpenAI-compatible APIs; everything happens in the terminal.

01. Agent on your machine

Give it plain-language tasks: it plans, edits files, and runs commands in your environment, not in a hosted IDE. A real agent loop with real tools, not a one-off web completion.
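
For instance, a session might look like this (the file path and task are illustrative):

  $ otter
  > Add retry with backoff to @src/api/client.ts, then run the test suite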

02. Add models with /addprovider

Run /addprovider (or /add-provider) for a guided setup of local or hosted OpenAI-compatible APIs, keys, and defaults. Use /model to switch models; edit ~/.qwen/settings.json if you prefer raw config.
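
After the wizard runs, the provider entry lands in ~/.qwen/settings.json. A minimal sketch, assuming a local Ollama endpoint; the exact key names may differ in your otter version:

  {
    "model": "qwen3-coder",
    "baseUrl": "http://localhost:11434/v1",
    "apiKey": "EMPTY"
  }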

03. Your repo & MCP

Ask in natural language; attach context with @ paths. Plug in MCP servers so the agent can use your tools and data from the project root.
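
As a sketch, an MCP server entry in settings.json might look like the following, using the mcpServers convention inherited from Qwen Code (the filesystem server and its args are just an example):

  {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
      }
    }
  }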

04. Terminal-native control

Run otter interactively, or otter -p for scripts and CI. Pipes, optional sandboxes, and IDE companions are all native to your shell.
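
For example (assuming otter inherits Qwen Code's stdin handling in non-interactive mode):

  # One-shot prompt for scripts and CI
  otter -p "Summarize the TODOs in @src/"

  # Pipe content in from other tools
  git diff | otter -p "Review this diff for bugs"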

Free · Open source

Your terminal. Your models.

Free (Apache-2.0), no paywall. Built for local and self-hosted models: Ollama, LM Studio, vLLM, or any OpenAI-compatible URL (see the example after the list below). Keys stay on your machine.

  • $0 to install · Apache-2.0
  • Local baseUrl providers & env keys
  • Offline when your local stack supports it
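
A minimal sketch for pointing otter at a local Ollama server, assuming the fork keeps Qwen Code's OpenAI-compatible environment variables:

  export OPENAI_BASE_URL="http://localhost:11434/v1"
  export OPENAI_API_KEY="ollama"   # local servers accept any non-empty key
  export OPENAI_MODEL="qwen3-coder"
  otter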