fix(onboard): refine nvidia nim onboarding catalogs and docs

Chummy 2026-02-18 23:08:12 +08:00
parent daef8f8094
commit dea5dcad36
3 changed files with 55 additions and 19 deletions


@@ -71,6 +71,8 @@ Last verified: **February 18, 2026**.
- `zeroclaw models refresh --provider <ID>`
- `zeroclaw models refresh --force`
`models refresh` currently supports live catalog refresh for provider IDs: `openrouter`, `openai`, `anthropic`, `groq`, `mistral`, `deepseek`, `xai`, `together-ai`, `gemini`, `ollama`, `astrai`, `venice`, `fireworks`, `cohere`, `moonshot`, `glm`, `zai`, `qwen`, and `nvidia`.
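Taken together, a typical refresh session for the newly supported `nvidia` provider might look like the transcript below. This is illustrative only, using the `zeroclaw` invocations documented above, and assumes `--force` bypasses any locally cached catalog:

```
# Refresh the live model catalog for the NVIDIA provider only.
zeroclaw models refresh --provider nvidia

# Force-refresh all supported provider catalogs (assumed to skip the cache).
zeroclaw models refresh --force
```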
### `channel`
- `zeroclaw channel list`


@@ -59,6 +59,20 @@ Runtime resolution order is:
- Default onboarding model: `kimi-for-coding` (alternative: `kimi-k2.5`)
- Runtime auto-adds `User-Agent: KimiCLI/0.77` for compatibility.
### NVIDIA NIM Notes
- Canonical provider ID: `nvidia`
- Aliases: `nvidia-nim`, `build.nvidia.com`
- Base API URL: `https://integrate.api.nvidia.com/v1`
- Model discovery: `zeroclaw models refresh --provider nvidia`
Recommended starter model IDs (verified against the NVIDIA API catalog on February 18, 2026):
- `meta/llama-3.3-70b-instruct`
- `deepseek-ai/deepseek-v3.2`
- `nvidia/llama-3.3-nemotron-super-49b-v1.5`
- `nvidia/llama-3.1-nemotron-ultra-253b-v1`
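Because NVIDIA NIM exposes an OpenAI-compatible API at the base URL above, a chat-completions call can be sketched as an ordinary JSON request. The snippet below is a minimal sketch, assuming the standard OpenAI chat-completions schema and a placeholder `NVIDIA_API_KEY`; it only constructs the request, it does not send it:

```python
import json

# Base API URL from the NVIDIA NIM notes above.
BASE_URL = "https://integrate.api.nvidia.com/v1"

# OpenAI-compatible chat payload using one of the starter model IDs.
payload = {
    "model": "meta/llama-3.3-70b-instruct",
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 64,
}

# Assembled request; "$NVIDIA_API_KEY" is a placeholder for a real key.
request = {
    "url": f"{BASE_URL}/chat/completions",
    "headers": {
        "Authorization": "Bearer $NVIDIA_API_KEY",
        "Content-Type": "application/json",
    },
    "body": json.dumps(payload),
}

print(request["url"])
```

The same payload works unchanged with any OpenAI-compatible client by pointing its base URL at `https://integrate.api.nvidia.com/v1`.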
## Custom Endpoints
- OpenAI-compatible endpoint: