docs(zai): align setup guide with runtime defaults

- remove trailing whitespace in .env.example Z.AI block
- align documented model defaults/options with current onboard/provider behavior
- keep this PR docs-focused by reverting incidental workflow edits
Chummy 2026-02-18 15:06:10 +08:00
parent e3d6058424
commit a3eedfdc78
3 changed files with 30 additions and 23 deletions

.env.example

@@ -70,14 +70,14 @@ PROVIDER=openrouter
 # HOST_PORT=3000
 # ── Z.AI GLM Coding Plan ───────────────────────────────────────
-# Z.AI provides GLM models (glm-4.5, glm-4.6, glm-4.7, glm-5).
+# Z.AI provides GLM models through OpenAI-compatible endpoints.
 # API key format: id.secret (e.g., abc123.xyz789)
-# 
+#
 # Usage:
 #   zeroclaw onboard --provider zai --api-key YOUR_ZAI_API_KEY
 #
 # Or set the environment variable:
 #   ZAI_API_KEY=your-id.secret
 #
-# Available models: glm-4.5, glm-4.5-air, glm-4.6, glm-4.7, glm-5
+# Common models: glm-5, glm-4.7, glm-4-plus, glm-4-flash
 # See docs/zai-glm-setup.md for detailed configuration.
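The `id.secret` key format documented in the block above lends itself to a quick pre-flight check before onboarding. A minimal POSIX shell sketch — the `valid_zai_key` helper is hypothetical, not part of ZeroClaw:

```shell
# Hypothetical pre-flight check for the id.secret key format
# documented above: one dot separating two non-empty parts,
# with no whitespace or extra dots.
valid_zai_key() {
  printf '%s' "$1" | grep -Eq '^[^.[:space:]]+\.[^.[:space:]]+$'
}

if valid_zai_key "abc123.xyz789"; then
  echo "key format ok"   # prints: key format ok
fi
```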


@@ -40,8 +40,8 @@ jobs:
       - name: Greet first-time contributors
         uses: actions/first-interaction@a1db7729b356323c7988c20ed6f0d33fe31297be # v1
         with:
-          repo_token: ${{ secrets.GITHUB_TOKEN }}
-          issue_message: |
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+          issue-message: |
             Thanks for opening this issue.
             Before maintainers triage it, please confirm:
@@ -50,7 +50,7 @@ jobs:
             - Sensitive values are redacted
             This helps us keep issue throughput high and response latency low.
-          pr_message: |
+          pr-message: |
             Thanks for contributing to ZeroClaw.
             For faster review, please ensure:

docs/zai-glm-setup.md

@@ -1,15 +1,18 @@
-# Z.AI GLM Coding Plan Setup
+# Z.AI GLM Setup

-ZeroClaw supports Z.AI's GLM models through multiple endpoints. This guide covers the recommended configuration.
+ZeroClaw supports Z.AI's GLM models through OpenAI-compatible endpoints.
+This guide covers practical setup options that match current ZeroClaw provider behavior.

 ## Overview

-Z.AI provides GLM models through two API styles:
+ZeroClaw supports these Z.AI aliases and endpoints out of the box:

-| Endpoint | API Style | Provider String |
-|----------|-----------|-----------------|
-| `/api/coding/paas/v4` | OpenAI-compatible | `zai` |
-| `/api/anthropic` | Anthropic-compatible | `anthropic-custom:https://api.z.ai/api/anthropic` |
+| Alias | Endpoint | Notes |
+|-------|----------|-------|
+| `zai` | `https://api.z.ai/api/coding/paas/v4` | Global endpoint |
+| `zai-cn` | `https://open.bigmodel.cn/api/paas/v4` | China endpoint |
+
+If you need a custom base URL, see `docs/custom-providers.md`.

 ## Setup
@@ -28,7 +31,7 @@ Edit `~/.zeroclaw/config.toml`:
 ```toml
 api_key = "YOUR_ZAI_API_KEY"
 default_provider = "zai"
-default_model = "glm-4.7"
+default_model = "glm-5"
 default_temperature = 0.7
 ```
@@ -36,11 +39,12 @@ default_temperature = 0.7
 | Model | Description |
 |-------|-------------|
-| `glm-4.5` | Stable release |
-| `glm-4.5-air` | Lightweight version |
-| `glm-4.6` | Improved reasoning |
-| `glm-4.7` | Current recommended |
-| `glm-5` | Latest |
+| `glm-5` | Default in onboarding; strongest reasoning |
+| `glm-4.7` | Strong general-purpose quality |
+| `glm-4-plus` | Stable baseline |
+| `glm-4-flash` | Lower-latency option |
+
+Model availability can vary by account/region, so use the `/models` API when in doubt.
## Verify Setup
@@ -52,7 +56,7 @@ curl -X POST "https://api.z.ai/api/coding/paas/v4/chat/completions" \
   -H "Authorization: Bearer YOUR_ZAI_API_KEY" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "glm-4.7",
+    "model": "glm-5",
     "messages": [{"role": "user", "content": "Hello"}]
   }'
 ```
@@ -87,10 +91,12 @@ Add to your `.env` file:
 # Z.AI API Key
 ZAI_API_KEY=your-id.secret

-# Or use the provider-specific variable
-# The key format is: id.secret (e.g., abc123.xyz789)
+# Optional generic key (used by many providers)
+# API_KEY=your-id.secret
 ```
+
+The key format is `id.secret` (for example: `abc123.xyz789`).
## Troubleshooting
### Rate Limiting
@@ -100,7 +106,7 @@ ZAI_API_KEY=your-id.secret
 **Solution:**
 - Wait and retry
 - Check your Z.AI plan limits
-- Try `glm-4.7` instead of `glm-5` (more stable availability)
+- Try `glm-4-flash` for lower latency and higher quota tolerance
### Authentication Errors
@@ -132,4 +138,5 @@ curl -s "https://api.z.ai/api/coding/paas/v4/models" \
 ## Related Documentation

 - [ZeroClaw README](../README.md)
 - [Custom Provider Endpoints](./custom-providers.md)
+- [Contributing Guide](../CONTRIBUTING.md)
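The troubleshooting advice in the diff above (fall back from `glm-5` when rate-limited) can be sketched as a small helper. A POSIX shell illustration only — the `pick_model` name and the preference order are assumptions, and in practice the availability list would come from the `/models` endpoint shown in the guide:

```shell
# Hypothetical fallback picker: print the first preferred model
# that appears in a space-separated list of available models.
pick_model() {
  available="$1"
  for candidate in glm-5 glm-4.7 glm-4-flash; do
    case " $available " in
      *" $candidate "*) printf '%s\n' "$candidate"; return 0 ;;
    esac
  done
  return 1  # none of the preferred models is available
}

pick_model "glm-4-plus glm-4-flash"   # -> glm-4-flash
```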