Commit graph

14 commits

Author SHA1 Message Date
Chummy
78e0594e5f fix(openai): align chat_with_tools with http client and strict tool parsing 2026-02-19 15:11:18 +08:00
Lucien Loiseau
f76c1226f1 fix(providers): implement chat_with_tools for OpenAiProvider
The OpenAiProvider overrode chat() with native tool support but never
overrode chat_with_tools(), the method called by run_tool_call_loop in
channel mode (IRC/Discord/etc.). The trait default for chat_with_tools()
silently drops the tools parameter and sends a plain ChatRequest with no
tools, so the model never makes native tool calls in channel mode.

Add chat_with_tools() override that deserializes tool specs, uses
convert_messages() for proper tool_call_id handling, and sends
NativeChatRequest with tools and tool_choice.

Also add Deserialize derive to NativeToolSpec and NativeToolFunctionSpec
to support deserialization from OpenAI-format JSON.
2026-02-19 15:11:18 +08:00
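A minimal sketch of the override idea, assuming simplified shapes for NativeToolSpec, NativeToolFunctionSpec, and NativeChatRequest; the real structs, the chat_with_tools() signature, and convert_messages() live in the ZeroClaw codebase and will differ. It shows only the core point: deserialize OpenAI-format tool specs (enabled by the new Deserialize derives) and carry them in the native request instead of dropping them.

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct NativeToolFunctionSpec {
    name: String,
    description: Option<String>,
    parameters: serde_json::Value,
}

#[derive(Serialize, Deserialize)]
struct NativeToolSpec {
    #[serde(rename = "type")]
    kind: String, // "function" for OpenAI-format tools
    function: NativeToolFunctionSpec,
}

#[derive(Serialize)]
struct NativeChatRequest {
    model: String,
    messages: Vec<serde_json::Value>, // the real code builds these via convert_messages()
    tools: Vec<NativeToolSpec>,
    tool_choice: String,
}

fn main() -> Result<(), serde_json::Error> {
    // Tool specs arrive as OpenAI-format JSON; Deserialize turns them into typed specs.
    let raw = serde_json::json!([{
        "type": "function",
        "function": { "name": "get_weather", "parameters": { "type": "object" } }
    }]);
    let tools: Vec<NativeToolSpec> = serde_json::from_value(raw)?;

    // Instead of the trait default (plain ChatRequest, tools dropped),
    // build the native request that actually carries the tools.
    let request = NativeChatRequest {
        model: "gpt-4o".into(),
        messages: vec![serde_json::json!({ "role": "user", "content": "hi" })],
        tools,
        tool_choice: "auto".into(),
    };
    println!("{}", serde_json::to_string_pretty(&request)?);
    Ok(())
}
```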
Chummy
ce104bed45 feat(proxy): add scoped proxy configuration and docs runbooks
- add scope-aware proxy schema and runtime wiring for providers/channels/tools
- add agent-callable proxy_config tool for fast proxy setup
- standardize docs system with index, template, and playbooks
2026-02-18 22:10:42 +08:00
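The commit above gives no schema details, so the following is only a guess at what a scope-aware proxy configuration might look like; the toml/serde structs, field names, and scope keys are all assumptions, and the agent-callable proxy_config tool is not shown.

```rust
use serde::Deserialize;
use std::collections::HashMap;

#[derive(Deserialize, Debug, Default)]
struct ProxyConfig {
    /// Fallback proxy used when no scope override matches.
    default: Option<String>,
    /// Per-scope overrides, keyed by e.g. "providers", "channels", "tools".
    #[serde(default)]
    scopes: HashMap<String, String>,
}

impl ProxyConfig {
    fn proxy_for(&self, scope: &str) -> Option<&str> {
        self.scopes.get(scope).map(String::as_str).or(self.default.as_deref())
    }
}

fn main() {
    let cfg: ProxyConfig = toml::from_str(
        r#"
default = "http://127.0.0.1:8080"

[scopes]
providers = "socks5://127.0.0.1:1080"
"#,
    )
    .unwrap();

    assert_eq!(cfg.proxy_for("providers"), Some("socks5://127.0.0.1:1080"));
    assert_eq!(cfg.proxy_for("channels"), Some("http://127.0.0.1:8080"));
    println!("scoped proxy lookup ok");
}
```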
Lucien Loiseau
6062888d1b feat(providers): add OVHcloud AI Endpoints as native provider
Route OVHcloud through OpenAiProvider (with proper tool_call_id
serialization) instead of OpenAiCompatibleProvider, fixing tool-call
round-trips against vLLM-based endpoints.

- Add base_url field and with_base_url() constructor to OpenAiProvider
- Replace all hardcoded api.openai.com URLs with self.base_url
- Pass api_url through for the openai provider arm
- Register ovhcloud/ovh provider with env var OVH_AI_ENDPOINTS_ACCESS_TOKEN
2026-02-18 20:54:49 +08:00
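A sketch of the base_url change under stated assumptions: the field and constructor names come from the commit, but the real OpenAiProvider also carries an API key and HTTP client, and the URL layout here is illustrative. The endpoint below is a placeholder, not OVHcloud's actual address.

```rust
// Illustrative only: the real OpenAiProvider has additional fields (API key,
// reqwest client, ...) and its URL handling may differ.
struct OpenAiProvider {
    base_url: String,
}

impl OpenAiProvider {
    fn new() -> Self {
        // Previous behavior: every request URL was hardcoded to api.openai.com.
        Self { base_url: "https://api.openai.com".to_string() }
    }

    // New constructor: point the same provider (with its tool_call_id
    // serialization) at any OpenAI-compatible endpoint, e.g. OVHcloud AI Endpoints.
    fn with_base_url(base_url: impl Into<String>) -> Self {
        Self { base_url: base_url.into() }
    }

    fn chat_completions_url(&self) -> String {
        format!("{}/v1/chat/completions", self.base_url.trim_end_matches('/'))
    }
}

fn main() {
    let openai = OpenAiProvider::new();
    // Placeholder URL; the real endpoint comes from provider registration/config.
    let ovh = OpenAiProvider::with_base_url("https://example-ovh-ai-endpoint.invalid");
    println!("{}", openai.chat_completions_url());
    println!("{}", ovh.chat_completions_url());
}
```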
Chummy
bc5b1a7841 fix(providers): harden reasoning_content fallback behavior 2026-02-18 17:07:38 +08:00
Vernon Stinebaker
dd4f5271d1 feat(providers): support reasoning_content fallback for thinking models
Reasoning/thinking models (Qwen3, GLM-4, DeepSeek, etc.) may return
output in `reasoning_content` instead of `content`. Add automatic
fallback for both OpenAI and OpenAI-compatible providers, including
streaming SSE support.

Changes:
- Add `reasoning_content` field to response structs in both providers
- Add `effective_content()` helper that prefers `content` but falls
  back to `reasoning_content` when content is empty/null/missing
- Update all extraction sites to use `effective_content()`
- Add streaming SSE fallback for `reasoning_content` chunks
- Add 16 focused unit tests covering all edge cases

Tested end-to-end against GLM-4.7-flash via a local LLM server.
2026-02-18 17:07:38 +08:00
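A minimal sketch of the `effective_content()` rule, assuming a stripped-down response struct; the real structs in both providers carry many more fields, and the streaming path applies the same fallback per SSE chunk.

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ChoiceMessage {
    content: Option<String>,
    reasoning_content: Option<String>,
}

impl ChoiceMessage {
    /// Prefer `content`, but fall back to `reasoning_content` when `content`
    /// is missing, null, or empty (thinking models such as Qwen3 / GLM / DeepSeek).
    fn effective_content(&self) -> Option<&str> {
        match self.content.as_deref() {
            Some(c) if !c.trim().is_empty() => Some(c),
            _ => self
                .reasoning_content
                .as_deref()
                .filter(|r| !r.trim().is_empty()),
        }
    }
}

fn main() {
    let msg: ChoiceMessage = serde_json::from_str(
        r#"{"content": null, "reasoning_content": "the answer is 42"}"#,
    )
    .unwrap();
    assert_eq!(msg.effective_content(), Some("the answer is 42"));
    println!("fallback ok");
}
```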
Edvard
1336c2f03e feat(providers): add warmup() for OpenAI, Anthropic, Gemini, Compatible, GLM
All five providers have HTTP clients but did not implement warmup(),
relying on the trait's default no-op. This adds lightweight warmup calls
to establish TLS + HTTP/2 connection pools on startup, reducing
first-request latency. Each warmup is skipped when credentials are
absent, matching the OpenRouter pattern.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 14:35:03 +08:00
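A sketch of the warmup pattern, assuming a reqwest-based provider and GET /v1/models as the cheap call; the endpoint each real provider pings, the trait signature, and error handling are the codebase's own.

```rust
use std::time::Duration;

struct OpenAiProvider {
    api_key: Option<String>,
    client: reqwest::Client,
    base_url: String,
}

impl OpenAiProvider {
    /// Fire a cheap request at startup so TLS + HTTP/2 pools are already
    /// established when the first real chat request arrives.
    async fn warmup(&self) {
        // Skip entirely when credentials are absent (matches the OpenRouter pattern).
        let Some(key) = self.api_key.as_deref() else { return };

        // Errors are ignored: warmup is best-effort and must not block startup.
        let _ = self
            .client
            .get(format!("{}/v1/models", self.base_url))
            .bearer_auth(key)
            .timeout(Duration::from_secs(5))
            .send()
            .await;
    }
}

#[tokio::main]
async fn main() {
    let provider = OpenAiProvider {
        api_key: std::env::var("OPENAI_API_KEY").ok(), // illustrative credential source
        client: reqwest::Client::new(),
        base_url: "https://api.openai.com".to_string(),
    };
    provider.warmup().await; // no-op without credentials, best-effort otherwise
}
```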
Chummy
1711f140be fix(security): remediate unassigned CodeQL findings
- harden URL/request handling for composio and whatsapp integrations
- reduce cleartext logging exposure across providers/tools/gateway
- hash and constant-time compare gateway webhook secrets
- expand nested secret encryption coverage in config
- align feature aliases and add regression tests for security paths
- fix bubblewrap all-features test invocation surfaced during deep validation
2026-02-17 19:19:06 +08:00
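The "hash and constant-time compare gateway webhook secrets" item can be sketched as below, assuming the sha2 and constant_time_eq crates; how the gateway actually stores and wires the digest is not shown in the commit.

```rust
use constant_time_eq::constant_time_eq;
use sha2::{Digest, Sha256};

/// Compare a presented webhook secret against a stored SHA-256 digest in
/// constant time, so neither the secret value nor a partial prefix match
/// leaks through response timing.
fn secret_matches(expected_digest: &[u8], presented: &str) -> bool {
    let presented_digest = Sha256::digest(presented.as_bytes());
    constant_time_eq(expected_digest, &presented_digest)
}

fn main() {
    // Stored at config time: a digest of the secret, not the secret itself.
    let expected = Sha256::digest(b"super-secret");
    assert!(secret_matches(&expected, "super-secret"));
    assert!(!secret_matches(&expected, "wrong-secret"));
    println!("constant-time secret comparison ok");
}
```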
mai1015
b341fdb368 feat: add agent structure and improve tooling for providers 2026-02-17 01:01:56 +08:00
chumyin
3b4a4de457 refactor(provider): unify Provider responses with ChatResponse
- Switch Provider trait methods to return structured ChatResponse
- Map OpenAI-compatible tool_calls into shared ToolCall type
- Update reliable/router wrappers and provider tests for new interface
- Make agent loop prefer structured tool calls with text fallback parsing
- Adapt gateway replies to structured responses with safe tool-call fallback
2026-02-16 19:16:22 +08:00
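An illustrative guess at the shapes involved; the actual ChatResponse and ToolCall definitions will differ in fields and derives. The point is the preference order the commit describes: structured tool calls first, text fallback second.

```rust
#[derive(Debug, Clone)]
struct ToolCall {
    id: String,
    name: String,
    arguments: String, // JSON-encoded arguments as returned by the model
}

#[derive(Debug, Clone)]
struct ChatResponse {
    text: Option<String>,
    tool_calls: Vec<ToolCall>,
}

/// The agent loop prefers structured tool calls and only falls back to
/// parsing the free-form text when none were returned.
fn next_action(resp: &ChatResponse) -> &'static str {
    if !resp.tool_calls.is_empty() {
        "dispatch structured tool calls"
    } else if resp.text.is_some() {
        "parse text for a fallback tool call, otherwise reply"
    } else {
        "nothing to do"
    }
}

fn main() {
    let resp = ChatResponse {
        text: None,
        tool_calls: vec![ToolCall {
            id: "call_1".into(),
            name: "shell".into(),
            arguments: r#"{"cmd":"ls"}"#.into(),
        }],
    };
    println!("{}", next_action(&resp));
}
```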
Argenis
5cc02c5813 fix: add WhatsApp webhook signature verification (X-Hub-Signature-256)
Closes #51

- Add HMAC-SHA256 signature verification for WhatsApp webhooks
- Prevents message spoofing attacks (CWE-345)
- Add whatsapp_app_secret config field with ZEROCLAW_WHATSAPP_APP_SECRET env override
- Add 13 comprehensive unit tests

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-15 06:17:24 -05:00
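A sketch of X-Hub-Signature-256 verification as described above, assuming the hmac, sha2, and hex crates; the handler wiring and the lookup of whatsapp_app_secret from config are not shown. Meta signs the raw request body with the app secret and sends the hex digest prefixed with "sha256=".

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

/// Verify the X-Hub-Signature-256 header against the raw (unparsed) body.
/// Returns false for a malformed header, bad hex, or a digest mismatch.
fn verify_whatsapp_signature(app_secret: &str, raw_body: &[u8], header: &str) -> bool {
    let Some(hex_sig) = header.strip_prefix("sha256=") else { return false };
    let Ok(expected) = hex::decode(hex_sig) else { return false };

    let mut mac = HmacSha256::new_from_slice(app_secret.as_bytes())
        .expect("HMAC-SHA256 accepts keys of any length");
    mac.update(raw_body);
    // verify_slice compares in constant time, so a forged signature cannot be
    // refined through a timing side channel (part of the CWE-345 mitigation).
    mac.verify_slice(&expected).is_ok()
}

fn main() {
    let valid = verify_whatsapp_signature(
        "app-secret-from-config",
        br#"{"object":"whatsapp_business_account"}"#,
        "sha256=0000000000000000000000000000000000000000000000000000000000000000",
    );
    println!("signature valid: {valid}"); // false: digest does not match
}
```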
argenis de la rosa
976c5bbf3c hardening: fix 7 production weaknesses found in codebase scan
Scan findings and fixes:

1. Gateway request buffer too small (8KB → 64KB)
   - Fixed: Increased request buffer from 8,192 to 65,536 bytes
   - Large POST bodies (long prompts) were silently truncated

2. Gateway slow-loris attack (no read timeout → 30s)
   - Fixed: tokio::time::timeout(30s) on stream.read()
   - Malicious clients could hold connections indefinitely

3. Webhook secret timing attack (== → constant_time_eq)
   - Fixed: Now uses constant_time_eq() for secret comparison
   - Prevents timing side-channel on webhook authentication

4. Pairing brute force (no limit → 5 attempts + 5min lockout)
   - Fixed: PairingGuard tracks failed attempts with lockout
   - Returns 429 Too Many Requests with retry_after seconds

5. Shell tool hang (no timeout → 60s kill)
   - Fixed: tokio::time::timeout(60s) on Command::output()
   - Commands that hang are killed and return error

6. Shell tool OOM (unbounded output → 1MB cap)
   - Fixed: stdout/stderr truncated at 1MB with warning
   - Prevents memory exhaustion from verbose commands

7. Provider HTTP timeout (none → 120s request + 10s connect)
   - Fixed: All 5 providers (OpenRouter, Anthropic, OpenAI,
     Ollama, Compatible) now have reqwest timeouts
   - Ollama gets 300s (local models are slower)

949 tests passing, 0 clippy warnings, cargo fmt clean
2026-02-14 01:47:08 -05:00
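Items 5 and 6 can be sketched together, assuming tokio's process API; the 60s and 1MB constants mirror the commit, while the shell invocation, error strings, and stderr handling are assumptions.

```rust
use std::time::Duration;
use tokio::process::Command;

const SHELL_TIMEOUT: Duration = Duration::from_secs(60); // item 5: kill hung commands
const MAX_OUTPUT: usize = 1024 * 1024; // item 6: 1MB cap on captured output

async fn run_shell(cmd: &str) -> Result<String, String> {
    // kill_on_drop ensures the child is killed when the timeout drops the future.
    let fut = Command::new("sh")
        .arg("-c")
        .arg(cmd)
        .kill_on_drop(true)
        .output();

    let output = tokio::time::timeout(SHELL_TIMEOUT, fut)
        .await
        .map_err(|_| format!("command timed out after {}s", SHELL_TIMEOUT.as_secs()))?
        .map_err(|e| format!("failed to run command: {e}"))?;

    // Truncate before converting so verbose commands cannot flood downstream consumers.
    let mut bytes = output.stdout;
    let truncated = bytes.len() > MAX_OUTPUT;
    if truncated {
        bytes.truncate(MAX_OUTPUT);
    }
    let mut stdout = String::from_utf8_lossy(&bytes).into_owned();
    if truncated {
        stdout.push_str("\n[output truncated at 1MB]");
    }
    Ok(stdout)
}

#[tokio::main]
async fn main() {
    match run_shell("echo hello").await {
        Ok(out) => print!("{out}"),
        Err(err) => eprintln!("{err}"),
    }
}
```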
argenis de la rosa
bc31e4389b style: cargo fmt — fix all formatting for CI
Ran cargo fmt across the entire codebase to pass CI's cargo fmt --check.
No logic changes, only whitespace/formatting.
2026-02-13 16:03:50 -05:00
argenis de la rosa
05cb353f7f feat: initial release — ZeroClaw v0.1.0
- 22 AI providers (OpenRouter, Anthropic, OpenAI, Mistral, etc.)
- 7 channels (CLI, Telegram, Discord, Slack, iMessage, Matrix, Webhook)
- 5-step onboarding wizard with Project Context personalization
- OpenClaw-aligned system prompt (SOUL.md, IDENTITY.md, USER.md, AGENTS.md, etc.)
- SQLite memory backend with auto-save
- Skills system with on-demand loading
- Security: autonomy levels, command allowlists, cost limits
- 532 tests passing, 0 clippy warnings
2026-02-13 12:19:14 -05:00