 ██████╗ ████████╗████████╗███████╗██████╗ 
██╔═══██╗╚══██╔══╝╚══██╔══╝██╔════╝██╔══██╗
██║   ██║   ██║      ██║   █████╗  ██████╔╝
██║   ██║   ██║      ██║   ██╔══╝  ██╔══██╗
╚██████╔╝   ██║      ██║   ███████╗██║  ██║
 ╚═════╝    ╚═╝      ╚═╝   ╚══════╝╚═╝  ╚═╝
otter runs locally: your repo, Skills, SubAgents, and shell. Code stays on disk. Point models at local or hosted OpenAI-compatible APIs; everything happens in the terminal.
Give it plain-language tasks: plan, edit files, and run commands in your own environment, not a hosted IDE. You get real tools and an agent loop, not a one-off web completion.
Run /addprovider (or /add-provider) for a guided setup of local or hosted OpenAI-compatible APIs, keys, and defaults. Use /model to switch models, or edit ~/.qwen/settings.json if you prefer raw config.
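As a rough sketch of what raw config might look like, here is a hypothetical provider entry for a local Ollama model. The key names (name, baseUrl, apiKey) are assumptions for illustration, not otter's documented schema; run /addprovider and inspect the file it writes to see the real format.

```json
{
  "model": {
    "name": "qwen2.5-coder:14b",
    "baseUrl": "http://localhost:11434/v1",
    "apiKey": "ollama"
  }
}
```

For a hosted provider, the same shape would carry the provider's HTTPS base URL and a real API key instead.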
Ask in natural language; attach context with @ paths. Plug in MCP servers so the agent can use your tools and data from the project root.
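For MCP, one common convention (assumed here, not confirmed by otter's docs) is an mcpServers map in the settings file, where each entry names a command that speaks the Model Context Protocol over stdio:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```

The @modelcontextprotocol/server-filesystem package is a real reference MCP server; the surrounding key layout is a sketch to check against otter's own MCP documentation.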
Use otter interactively, or otter -p for scripts and CI. Pipes, optional sandboxes, and IDE companions are all native to your shell.
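A sketch of non-interactive use, assuming -p takes a single prompt and piped stdin is passed to the agent as context (as described above; exact flag behavior may differ):

```shell
# Summarize staged changes from a script or CI step
git diff --cached | otter -p "Summarize this diff and flag risky changes"

# Capture output like any other command's
otter -p "List the TODO comments in @src as a checklist" > todos.md
```

Because otter is just a process reading stdin and writing stdout, it composes with the rest of your pipeline the same way grep or jq would.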
Free (Apache-2.0), no paywall. Built for local and self-hosted models: Ollama, LM Studio, vLLM, or any OpenAI-compatible URL. Keys stay on your machine.
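The local backends above expose OpenAI-compatible endpoints at well-known default ports; the sketch below shows how you might sanity-check one before pointing otter at it (defaults can vary with your setup):

```shell
# Common local OpenAI-compatible base URLs (defaults; verify yours):
#   Ollama     http://localhost:11434/v1
#   LM Studio  http://localhost:1234/v1
#   vLLM       http://localhost:8000/v1

ollama serve &                          # start Ollama in the background
curl http://localhost:11434/v1/models   # confirm the endpoint answers
```

Once the /v1/models call returns your model list, that base URL is what you hand to otter's provider setup.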