# ZeroClaw Providers Reference
This document maps provider IDs, aliases, and credential environment variables.
Last verified: February 18, 2026.
## How to List Providers

```shell
zeroclaw providers
```
## Credential Resolution Order

Runtime resolution order is:

- Explicit credential from config/CLI
- Provider-specific env var(s)
- Generic fallback env vars: ZEROCLAW_API_KEY, then API_KEY
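The resolution order above can be sketched as a small lookup function. This is an illustrative helper, not ZeroClaw's actual API; the function name and signature are assumptions.

```python
import os

def resolve_credential(explicit, provider_env_vars, env=os.environ):
    """Return the first credential found, mirroring the documented order.

    `provider_env_vars` is the provider-specific list from the catalog,
    e.g. ["ANTHROPIC_OAUTH_TOKEN", "ANTHROPIC_API_KEY"].
    """
    if explicit:                       # 1. explicit config/CLI value wins
        return explicit
    for var in provider_env_vars:      # 2. provider-specific env vars, in order
        if env.get(var):
            return env[var]
    for var in ("ZEROCLAW_API_KEY", "API_KEY"):  # 3. generic fallbacks
        if env.get(var):
            return env[var]
    return None
```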
## Provider Catalog

| Canonical ID | Aliases | Local | Provider-specific env var(s) |
|---|---|---|---|
| openrouter | — | No | OPENROUTER_API_KEY |
| anthropic | — | No | ANTHROPIC_OAUTH_TOKEN, ANTHROPIC_API_KEY |
| openai | — | No | OPENAI_API_KEY |
| ollama | — | Yes | OLLAMA_API_KEY (optional) |
| gemini | google, google-gemini | No | GEMINI_API_KEY, GOOGLE_API_KEY |
| venice | — | No | VENICE_API_KEY |
| vercel | vercel-ai | No | VERCEL_API_KEY |
| cloudflare | cloudflare-ai | No | CLOUDFLARE_API_KEY |
| moonshot | kimi | No | MOONSHOT_API_KEY |
| kimi-code | kimi_coding, kimi_for_coding | No | KIMI_CODE_API_KEY, MOONSHOT_API_KEY |
| synthetic | — | No | SYNTHETIC_API_KEY |
| opencode | opencode-zen | No | OPENCODE_API_KEY |
| zai | z.ai | No | ZAI_API_KEY |
| glm | zhipu | No | GLM_API_KEY |
| minimax | minimax-intl, minimax-io, minimax-global, minimax-cn, minimaxi, minimax-oauth, minimax-oauth-cn, minimax-portal, minimax-portal-cn | No | MINIMAX_OAUTH_TOKEN, MINIMAX_API_KEY |
| bedrock | aws-bedrock | No | (use config/API_KEY fallback) |
| qianfan | baidu | No | QIANFAN_API_KEY |
| qwen | dashscope, qwen-intl, dashscope-intl, qwen-us, dashscope-us | No | DASHSCOPE_API_KEY |
| groq | — | No | GROQ_API_KEY |
| mistral | — | No | MISTRAL_API_KEY |
| xai | grok | No | XAI_API_KEY |
| deepseek | — | No | DEEPSEEK_API_KEY |
| together | together-ai | No | TOGETHER_API_KEY |
| fireworks | fireworks-ai | No | FIREWORKS_API_KEY |
| perplexity | — | No | PERPLEXITY_API_KEY |
| cohere | — | No | COHERE_API_KEY |
| copilot | github-copilot | No | (use config/API_KEY fallback with GitHub token) |
| lmstudio | lm-studio | Yes | (optional; local by default) |
| nvidia | nvidia-nim, build.nvidia.com | No | NVIDIA_API_KEY |
## Kimi Code Notes

- Provider ID: kimi-code
- Endpoint: https://api.kimi.com/coding/v1
- Default onboarding model: kimi-for-coding (alternative: kimi-k2.5)
- Runtime auto-adds `User-Agent: KimiCLI/0.77` for compatibility.
## Custom Endpoints

- OpenAI-compatible endpoint: `default_provider = "custom:https://your-api.example.com"`
- Anthropic-compatible endpoint: `default_provider = "anthropic-custom:https://your-api.example.com"`
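The two prefixes above follow a simple convention: the part before the colon names the compatibility mode, and the rest is the base URL. A minimal sketch of that split, assuming a hypothetical `parse_provider` helper (the real parsing lives inside ZeroClaw):

```python
def parse_provider(spec):
    """Split a provider spec like "custom:https://..." into (kind, base_url).

    Returns (spec, None) for a plain provider ID with no custom endpoint.
    """
    for prefix in ("anthropic-custom:", "custom:"):
        if spec.startswith(prefix):
            return prefix.rstrip(":"), spec[len(prefix):]
    return spec, None
```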
## MiniMax OAuth Setup (config.toml)

Set the MiniMax provider and OAuth placeholder in config:

```toml
default_provider = "minimax-oauth"
api_key = "minimax-oauth"
```

Then provide one of the following credentials via environment variables:

- MINIMAX_OAUTH_TOKEN (preferred, direct access token)
- MINIMAX_API_KEY (legacy/static token)
- MINIMAX_OAUTH_REFRESH_TOKEN (auto-refreshes access token at startup)
Optional:

- MINIMAX_OAUTH_REGION=global or cn (defaults by provider alias)
- MINIMAX_OAUTH_CLIENT_ID to override the default OAuth client id
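The MiniMax credential preference above can be sketched as a small selector. This is illustrative only (the function name is an assumption, and the refresh-token exchange at startup is elided):

```python
import os

def minimax_credential(env=os.environ):
    """Pick a MiniMax credential in the documented preference order.

    Returns (env_var_name, token). For the refresh-token case, the token
    is None here because the access token is exchanged at startup.
    """
    for var in ("MINIMAX_OAUTH_TOKEN", "MINIMAX_API_KEY"):
        if env.get(var):
            return var, env[var]
    if env.get("MINIMAX_OAUTH_REFRESH_TOKEN"):
        return "MINIMAX_OAUTH_REFRESH_TOKEN", None  # exchanged at startup
    return None, None
```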
## Model Routing (hint:&lt;name&gt;)

You can route model calls by hint using [[model_routes]]:

```toml
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
model = "anthropic/claude-opus-4-20250514"

[[model_routes]]
hint = "fast"
provider = "groq"
model = "llama-3.3-70b-versatile"
```

Then call with a hint model name (for example from tool or integration paths): `hint:reasoning`
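The hint lookup amounts to matching the part after `hint:` against the configured routes. A minimal sketch, with the routes table mirrored as a list of dicts (the `route` helper is an assumption, not ZeroClaw's implementation):

```python
def route(model_name, model_routes):
    """Resolve a "hint:<name>" model name against [[model_routes]] entries.

    Returns (provider, model) for a matching hint, or None for plain
    model names and unknown hints.
    """
    if not model_name.startswith("hint:"):
        return None
    hint = model_name[len("hint:"):]
    for r in model_routes:
        if r["hint"] == hint:
            return r["provider"], r["model"]
    return None

# Routes mirroring the TOML example above.
routes = [
    {"hint": "reasoning", "provider": "openrouter",
     "model": "anthropic/claude-opus-4-20250514"},
    {"hint": "fast", "provider": "groq",
     "model": "llama-3.3-70b-versatile"},
]
```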