docs(zai): align setup guide with runtime defaults

- remove trailing whitespace in .env.example Z.AI block
- align documented model defaults/options with current onboard/provider behavior
- keep this PR docs-focused by reverting incidental workflow edits
This commit is contained in:
Chummy 2026-02-18 15:06:10 +08:00
parent e3d6058424
commit a3eedfdc78
3 changed files with 30 additions and 23 deletions

.env.example

@@ -70,7 +70,7 @@ PROVIDER=openrouter
 # HOST_PORT=3000
 # ── Z.AI GLM Coding Plan ───────────────────────────────────────
-# Z.AI provides GLM models (glm-4.5, glm-4.6, glm-4.7, glm-5).
+# Z.AI provides GLM models through OpenAI-compatible endpoints.
 # API key format: id.secret (e.g., abc123.xyz789)
 #
 # Usage:
@@ -79,5 +79,5 @@ PROVIDER=openrouter
 # Or set the environment variable:
 # ZAI_API_KEY=your-id.secret
 #
-# Available models: glm-4.5, glm-4.5-air, glm-4.6, glm-4.7, glm-5
+# Common models: glm-5, glm-4.7, glm-4-plus, glm-4-flash
 # See docs/zai-glm-setup.md for detailed configuration.
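The `id.secret` key format called out in the diff above can be sanity-checked before use. A minimal sketch; the helper name `looks_like_zai_key` and the regex are illustrative, not part of ZeroClaw:

```python
import re

# Z.AI keys follow an "id.secret" shape, e.g. "abc123.xyz789".
# This pattern is a loose sanity check, not an official validation rule.
_ZAI_KEY_RE = re.compile(r"^[A-Za-z0-9]+\.[A-Za-z0-9]+$")

def looks_like_zai_key(key: str) -> bool:
    """Return True if the key roughly matches the documented id.secret format."""
    return bool(_ZAI_KEY_RE.fullmatch(key))

print(looks_like_zai_key("abc123.xyz789"))  # True
print(looks_like_zai_key("not-a-key"))      # False
```

Catching a malformed key locally avoids a round-trip that would otherwise fail with an authentication error.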


@@ -40,8 +40,8 @@ jobs:
       - name: Greet first-time contributors
         uses: actions/first-interaction@a1db7729b356323c7988c20ed6f0d33fe31297be # v1
         with:
-          repo_token: ${{ secrets.GITHUB_TOKEN }}
-          issue_message: |
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+          issue-message: |
            Thanks for opening this issue.
            Before maintainers triage it, please confirm:
@@ -50,7 +50,7 @@ jobs:
            - Sensitive values are redacted
            This helps us keep issue throughput high and response latency low.
-          pr_message: |
+          pr-message: |
            Thanks for contributing to ZeroClaw.
            For faster review, please ensure:

docs/zai-glm-setup.md

@@ -1,15 +1,18 @@
-# Z.AI GLM Coding Plan Setup
+# Z.AI GLM Setup
-ZeroClaw supports Z.AI's GLM models through multiple endpoints. This guide covers the recommended configuration.
+ZeroClaw supports Z.AI's GLM models through OpenAI-compatible endpoints.
+This guide covers practical setup options that match current ZeroClaw provider behavior.
 ## Overview
-Z.AI provides GLM models through two API styles:
+ZeroClaw supports these Z.AI aliases and endpoints out of the box:
-| Endpoint | API Style | Provider String |
-|----------|-----------|-----------------|
-| `/api/coding/paas/v4` | OpenAI-compatible | `zai` |
-| `/api/anthropic` | Anthropic-compatible | `anthropic-custom:https://api.z.ai/api/anthropic` |
+| Alias | Endpoint | Notes |
+|-------|----------|-------|
+| `zai` | `https://api.z.ai/api/coding/paas/v4` | Global endpoint |
+| `zai-cn` | `https://open.bigmodel.cn/api/paas/v4` | China endpoint |
+If you need a custom base URL, see `docs/custom-providers.md`.
 ## Setup
@@ -28,7 +31,7 @@ Edit `~/.zeroclaw/config.toml`:
 ```toml
 api_key = "YOUR_ZAI_API_KEY"
 default_provider = "zai"
-default_model = "glm-4.7"
+default_model = "glm-5"
 default_temperature = 0.7
 ```
@@ -36,11 +39,12 @@ default_temperature = 0.7
 | Model | Description |
 |-------|-------------|
-| `glm-4.5` | Stable release |
-| `glm-4.5-air` | Lightweight version |
-| `glm-4.6` | Improved reasoning |
-| `glm-4.7` | Current recommended |
-| `glm-5` | Latest |
+| `glm-5` | Default in onboarding; strongest reasoning |
+| `glm-4.7` | Strong general-purpose quality |
+| `glm-4-plus` | Stable baseline |
+| `glm-4-flash` | Lower-latency option |
+Model availability can vary by account/region, so use the `/models` API when in doubt.
 ## Verify Setup
@@ -52,7 +56,7 @@ curl -X POST "https://api.z.ai/api/coding/paas/v4/chat/completions" \
   -H "Authorization: Bearer YOUR_ZAI_API_KEY" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "glm-4.7",
+    "model": "glm-5",
     "messages": [{"role": "user", "content": "Hello"}]
   }'
 ```
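The same verification request can be assembled in code. This sketch only constructs the headers and JSON body without sending anything over the network; the endpoint and model come from the guide, while `build_chat_request` is an illustrative helper:

```python
import json

# Endpoint and model from the verification example in the guide.
API_URL = "https://api.z.ai/api/coding/paas/v4/chat/completions"

def build_chat_request(api_key, model="glm-5"):
    """Build (headers, body) for a minimal chat-completion smoke test."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Hello"}],
    })
    return headers, body

headers, body = build_chat_request("YOUR_ZAI_API_KEY")
print(body)
```

The payload can then be POSTed with any HTTP client; keeping construction separate from transport makes the request easy to inspect and test.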
@@ -87,10 +91,12 @@ Add to your `.env` file:
 # Z.AI API Key
 ZAI_API_KEY=your-id.secret
-# Or use the provider-specific variable
-# The key format is: id.secret (e.g., abc123.xyz789)
+# Optional generic key (used by many providers)
+# API_KEY=your-id.secret
 ```
+The key format is `id.secret` (for example: `abc123.xyz789`).
 ## Troubleshooting
 ### Rate Limiting
@@ -100,7 +106,7 @@ ZAI_API_KEY=your-id.secret
 **Solution:**
 - Wait and retry
 - Check your Z.AI plan limits
-- Try `glm-4.7` instead of `glm-5` (more stable availability)
+- Try `glm-4-flash` for lower latency and higher quota tolerance
 ### Authentication Errors
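The "wait and retry" advice above is usually automated with exponential backoff. A generic sketch with an injectable callable so it stays testable; `RateLimited` and `call_with_backoff` are illustrative names, not ZeroClaw APIs:

```python
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 response from the provider."""

def call_with_backoff(fn, retries=3, base_delay=0.01):
    """Retry fn on RateLimited, doubling the sleep between attempts."""
    for attempt in range(retries):
        try:
            return fn()
        except RateLimited:
            if attempt == retries - 1:
                raise  # out of retries: surface the rate limit to the caller
            time.sleep(base_delay * (2 ** attempt))

# Fake call that is rate-limited twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimited()
    return "ok"

result = call_with_backoff(flaky)
print(result, attempts["n"])  # ok 3
```

In production the base delay would be seconds rather than milliseconds, and honoring a `Retry-After` header when present is preferable to a fixed schedule.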
@@ -132,4 +138,5 @@ curl -s "https://api.z.ai/api/coding/paas/v4/models" \
 ## Related Documentation
 - [ZeroClaw README](../README.md)
+- [Custom Provider Endpoints](./custom-providers.md)
 - [Contributing Guide](../CONTRIBUTING.md)