Commit graph

839 commits

Author SHA1 Message Date
Chummy
3d068c21be fix: correct Lark/Feishu channel selection index in wizard 2026-02-19 21:25:21 +08:00
Chummy
dcd0bf641d feat: add multimodal image marker support with Ollama vision 2026-02-19 21:25:21 +08:00
Chummy
63aacb09ff fix(provider): preserve full history in responses fallback 2026-02-19 21:16:55 +08:00
Chummy
48b51e7152 test(config): make tokio::test schema cases async 2026-02-19 21:05:19 +08:00
Chummy
a5d7911923 feat(runtime): add reasoning toggle for ollama 2026-02-19 21:05:19 +08:00
Chummy
8f13fee4a6 test: stabilize qwen oauth env tests and gateway fixtures 2026-02-19 20:54:20 +08:00
Chummy
bca58acdcb feat(provider): add qwen-code oauth credential support 2026-02-19 20:54:20 +08:00
Chummy
e9c280324f test(config): make schema export test async 2026-02-19 20:49:53 +08:00
Chummy
c57f3f51a0 fix(config): derive JsonSchema for embedding routes 2026-02-19 20:49:53 +08:00
Chummy
572aa77c2a feat(memory): add embedding hint routes and upgrade guidance 2026-02-19 20:49:53 +08:00
T. Budiman
2b8547b386 feat(gateway): enrich webhook and WhatsApp with workspace system prompt
Add workspace context (IDENTITY.md, AGENTS.md, etc.) to gateway webhook
and WhatsApp message handlers by using chat_with_system() with a
build_system_prompt()-generated system prompt instead of simple_chat().

This aligns gateway behavior with other channels (Telegram, Discord, etc.)
and the agent loop, which all pass system prompts via structured
ChatMessage::system() or chat_with_system().

Changes:
- handle_webhook: build system prompt and use chat_with_system()
- handle_whatsapp_message: build system prompt and use chat_with_system()

Risk: Low - uses existing build_system_prompt() function, no new dependencies
Rollback: Revert commit removes system prompt enrichment
2026-02-19 20:30:02 +08:00
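The change in this commit can be sketched as follows, using hypothetical signatures (`build_system_prompt`, `chat_with_system`, and the workspace-file layout here are illustrative, not zeroclaw's real API): the system prompt is assembled from workspace files and passed as a structured system message instead of calling a bare `simple_chat()`.

```rust
// Illustrative sketch: build a system prompt from workspace context
// files and pass it alongside the user text. Names are hypothetical.
fn build_system_prompt(workspace_files: &[(&str, &str)]) -> String {
    workspace_files
        .iter()
        .map(|(name, body)| format!("## {name}\n{body}"))
        .collect::<Vec<_>>()
        .join("\n\n")
}

// Stand-in for a provider call: a real provider would send both
// messages; here we just return them for inspection.
fn chat_with_system(system: &str, user: &str) -> (String, String) {
    (system.to_string(), user.to_string())
}

fn main() {
    let system = build_system_prompt(&[
        ("IDENTITY.md", "You are the workspace agent."),
        ("AGENTS.md", "Prefer concise replies."),
    ]);
    let (sys, user) = chat_with_system(&system, "hello");
    assert!(sys.contains("IDENTITY.md"));
    assert_eq!(user, "hello");
}
```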
Chummy
2016382f42 fix(channels): compact sender history and filter oversized memory context 2026-02-19 20:05:35 +08:00
Chummy
2c07fb1792 fix: fail fast on context-window overflow and reset channel history 2026-02-19 19:38:28 +08:00
Chummy
aa176ef881 docs(readme): add impersonation warning for openagen fork and domain 2026-02-19 19:35:21 +08:00
Chummy
b611609c30 ci(docker): publish multi-arch latest and harden release tagging path 2026-02-19 19:32:18 +08:00
Chummy
772bb15ed9 fix(tests): stabilize issue #868 model refresh regression 2026-02-19 19:15:08 +08:00
Aleksandr Prilipko
2124b1dbbd test(e2e): add multi-turn history fidelity and memory enrichment tests
Add comprehensive e2e test coverage for chat_with_history and RAG
enrichment pipeline:

- RecordingProvider mock that captures all messages sent to the provider
- StaticMemoryLoader mock that simulates RAG context injection
- e2e_multi_turn_history_fidelity: verifies growing history across 3 turns
- e2e_memory_enrichment_injects_context: verifies RAG context prepended
- e2e_multi_turn_with_memory_enrichment: combined multi-turn + enrichment
- e2e_empty_memory_context_passthrough: verifies no corruption on empty RAG
- e2e_live_openai_codex_multi_turn (#[ignore]): real API call verifying
  the model recalls facts from prior messages via chat_with_history

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 19:04:02 +08:00
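The recording-mock pattern these tests rely on can be sketched as below (std-only; `RecordingProvider` and its method signature are simplified stand-ins for the real test double): the mock captures every message it is handed so assertions can verify that history grows correctly across turns.

```rust
// Minimal sketch of a recording provider double: it records each
// (role, content) pair it receives and returns a fixed reply.
#[derive(Default)]
struct RecordingProvider {
    seen: Vec<(String, String)>, // every message, in arrival order
}

impl RecordingProvider {
    fn chat_with_history(&mut self, history: &[(&str, &str)]) -> String {
        for (role, content) in history {
            self.seen.push((role.to_string(), content.to_string()));
        }
        "stub reply".to_string()
    }
}

fn main() {
    let mut p = RecordingProvider::default();
    p.chat_with_history(&[("user", "turn 1")]);
    p.chat_with_history(&[
        ("user", "turn 1"),
        ("assistant", "stub reply"),
        ("user", "turn 2"),
    ]);
    // Growing history: 1 message on turn one, 3 on turn two.
    assert_eq!(p.seen.len(), 4);
    assert_eq!(p.seen.last().unwrap().1, "turn 2");
}
```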
Aleksandr Prilipko
5dd11e6b0f fix(provider): use output_text content type for assistant messages in Codex history
The OpenAI Responses API requires assistant messages to use content type
"output_text" while user messages use "input_text". The prior implementation
used "input_text" for both roles, causing 400 errors on multi-turn history.

Extract build_responses_input() helper for testability and add 3 unit tests
covering role→content-type mapping, default instructions, and unknown roles.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 19:04:02 +08:00
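The role-to-content-type mapping this commit describes can be sketched as follows (std-only; `ResponsesItem` and the helper name are illustrative, not the actual zeroclaw types): assistant turns are echoed back as `output_text`, while everything else is sent as `input_text`.

```rust
// Sketch of the Responses API mapping: assistant history entries use
// content type "output_text"; user/system/unknown roles use "input_text".
#[derive(Debug, PartialEq)]
struct ResponsesItem {
    role: String,
    content_type: String,
    text: String,
}

fn build_responses_input(history: &[(&str, &str)]) -> Vec<ResponsesItem> {
    history
        .iter()
        .map(|(role, text)| {
            let content_type = if *role == "assistant" {
                "output_text"
            } else {
                "input_text"
            };
            ResponsesItem {
                role: role.to_string(),
                content_type: content_type.to_string(),
                text: text.to_string(),
            }
        })
        .collect()
}

fn main() {
    let items = build_responses_input(&[
        ("user", "What is 2+2?"),
        ("assistant", "4"),
        ("user", "And times 3?"),
    ]);
    assert_eq!(items[0].content_type, "input_text");
    assert_eq!(items[1].content_type, "output_text"); // the fixed case
    assert_eq!(items[2].content_type, "input_text");
}
```

Using `input_text` for both roles, as before the fix, is what produced the 400 errors on multi-turn history.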
Aleksandr Prilipko
1b57be7223 fix(provider): implement chat_with_history for OpenAI Codex and Gemini
Both providers only implemented chat_with_system, so the default
chat_with_history trait method was discarding all conversation history
except the last user message. This caused the Telegram bot to lose
context between messages.

Changes:
- OpenAiCodexProvider: extract send_responses_request helper, add
  chat_with_history that maps full ChatMessage history to ResponsesInput
- GeminiProvider: extract send_generate_content helper, add
  chat_with_history that maps ChatMessage history to Gemini Content
  (with assistant→model role mapping)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 19:04:02 +08:00
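The assistant-to-model role mapping mentioned for the Gemini path can be sketched in a few lines (illustrative helper, not the real code); Gemini's content roles are "user" and "model", so generic chat roles must be renamed before the history is sent.

```rust
// Sketch: map generic chat roles onto Gemini content roles.
fn to_gemini_role(role: &str) -> &'static str {
    match role {
        "assistant" => "model",
        // Gemini has no "system" role inside contents; in this sketch
        // any other role is sent as "user".
        _ => "user",
    }
}

fn main() {
    assert_eq!(to_gemini_role("assistant"), "model");
    assert_eq!(to_gemini_role("user"), "user");
}
```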
Chummy
6eec888ff0 docs(config): document autonomy policy and quote-aware shell parsing 2026-02-19 19:03:20 +08:00
Chummy
67466254f0 fix(security): parse shell separators only when unquoted 2026-02-19 19:03:20 +08:00
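Quote-aware separator parsing of the kind this fix describes can be sketched as a small scanner (std-only, illustrative function name): a separator such as `;` only splits the command when it appears outside single and double quotes.

```rust
// Sketch: split on `sep` only when it is unquoted. Quote characters
// are kept in the output so the segments remain valid shell fragments.
fn split_unquoted(input: &str, sep: char) -> Vec<String> {
    let mut parts = vec![String::new()];
    let mut in_single = false;
    let mut in_double = false;
    for c in input.chars() {
        match c {
            '\'' if !in_double => {
                in_single = !in_single;
                parts.last_mut().unwrap().push(c);
            }
            '"' if !in_single => {
                in_double = !in_double;
                parts.last_mut().unwrap().push(c);
            }
            c if c == sep && !in_single && !in_double => parts.push(String::new()),
            c => parts.last_mut().unwrap().push(c),
        }
    }
    parts
}

fn main() {
    // ';' inside quotes does not split; the bare ';' does.
    assert_eq!(split_unquoted("echo 'a;b';ls", ';'), vec!["echo 'a;b'", "ls"]);
    assert_eq!(split_unquoted("echo hi", ';'), vec!["echo hi"]);
}
```

A real shell parser also has to handle backslash escapes and nesting; this sketch only shows the unquoted-separator rule.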
Chummy
a0098de28c fix(bedrock): normalize aws-bedrock alias and harden docs/tests 2026-02-19 19:01:45 +08:00
KevinZhao
0e4e0d590d feat(provider): add dedicated AWS Bedrock Converse API provider
Replace the non-functional OpenAI-compatible stub with a purpose-built
Bedrock provider that implements AWS SigV4 signing from first principles
using hmac/sha2/hex crates — no AWS SDK dependency.

Key capabilities:
- SigV4 authentication (AKSK + optional session token)
- Converse API with native tool calling support
- Prompt caching via cachePoint heuristics
- Proper URI encoding for model IDs containing colons
- Resilient response parsing with unknown block type fallback

Also updates:
- Factory wiring and credential resolution bypass for AKSK auth
- Onboard wizard with Bedrock-specific model selection and guidance
- Provider reference docs with auth, region, and model ID details

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 19:01:45 +08:00
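Two details called out above can be illustrated std-only (the real provider uses the hmac/sha2/hex crates for the actual signature, which this sketch omits): percent-encoding the `:` in Bedrock model IDs, and the newline-joined layout of a SigV4 canonical request.

```rust
// Sketch: Bedrock model IDs carry a ':' (e.g. "...-v1:0") that must be
// percent-encoded when embedded in the request path.
fn encode_model_id(model_id: &str) -> String {
    model_id.replace(':', "%3A")
}

// Sketch of the SigV4 canonical-request layout: six components joined
// by '\n'. The real flow hashes this string into the string-to-sign.
fn canonical_request(
    method: &str,
    uri: &str,
    query: &str,
    canonical_headers: &str,
    signed_headers: &str,
    payload_hash: &str,
) -> String {
    [method, uri, query, canonical_headers, signed_headers, payload_hash].join("\n")
}

fn main() {
    let path = format!("/model/{}/converse", encode_model_id("my.model-v1:0"));
    assert_eq!(path, "/model/my.model-v1%3A0/converse");
    let cr = canonical_request(
        "POST",
        &path,
        "",
        "host:bedrock-runtime.us-east-1.amazonaws.com\n",
        "host",
        "abc123",
    );
    assert!(cr.starts_with("POST\n/model/"));
}
```

Per the SigV4 spec, each canonical header line ends with its own `\n`, which is why the headers argument above carries a trailing newline.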
Chummy
9f94ad6db4 fix(config): log resolved config path source at startup 2026-02-19 18:58:41 +08:00
Chummy
e83e017062 fix(channels): preserve slack thread root ids 2026-02-19 18:52:30 +08:00
Daniel Willitzer
9afe4f28e7 feat(channels): add threading support to message channels
Add optional thread_ts field to ChannelMessage and SendMessage for
platform-specific threading (e.g. Slack threads, Discord threads).

- ChannelMessage.thread_ts captures incoming thread context
- SendMessage.thread_ts propagates thread context to replies
- SendMessage::in_thread() builder for fluent API
- Slack: send with thread_ts, capture ts from incoming messages
- All reply paths in runtime preserve thread context via in_thread()
- All other channels initialize thread_ts: None (forward-compatible)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 18:52:30 +08:00
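The fluent threading API described above can be sketched as below (hypothetical struct shape; only the fields relevant to threading are shown): an optional `thread_ts` set via an `in_thread()` builder, with `None` meaning "not threaded" for channels without thread support.

```rust
// Sketch of an outgoing message with optional thread context.
#[derive(Debug, Clone, Default)]
struct SendMessage {
    text: String,
    thread_ts: Option<String>,
}

impl SendMessage {
    fn new(text: &str) -> Self {
        Self { text: text.to_string(), thread_ts: None }
    }

    // Builder: attach the thread root timestamp captured from the
    // incoming message so replies stay in the same thread.
    fn in_thread(mut self, ts: Option<String>) -> Self {
        self.thread_ts = ts;
        self
    }
}

fn main() {
    let reply = SendMessage::new("hi").in_thread(Some("1700000000.000100".into()));
    assert_eq!(reply.thread_ts.as_deref(), Some("1700000000.000100"));

    // Channels without threading simply pass None through.
    let plain = SendMessage::new("hi").in_thread(None);
    assert!(plain.thread_ts.is_none());
    assert_eq!(plain.text, "hi");
}
```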
Chummy
adc998429e test(channel): harden Lark WS heartbeat activity handling 2026-02-19 18:43:49 +08:00
wonder_land
3108ffe3e7 fix(channel): update last_recv on WS Ping/Pong frames in Lark channel
Feishu WebSocket server sends native WS Ping frames as keep-alive probes.
ZeroClaw correctly replied with Pong but did not update last_recv, so the
heartbeat watchdog (WS_HEARTBEAT_TIMEOUT = 300s) triggered a forced
reconnect every 5 minutes even when the connection was healthy.

Two fixes:
- WsMsg::Ping: update last_recv before sending Pong
- WsMsg::Pong: handle explicitly and update last_recv (was silently
  swallowed by the wildcard arm)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 18:43:49 +08:00
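The watchdog behavior fixed here can be sketched as follows (hypothetical names; the real channel drives this from a WebSocket event loop): any frame from the server, including native WS Ping/Pong, must refresh `last_recv` so a healthy connection is not force-reconnected.

```rust
use std::time::{Duration, Instant};

// Sketch of incoming frame kinds and the per-connection watchdog state.
enum WsMsg {
    Text(String),
    Ping(Vec<u8>),
    Pong(Vec<u8>),
}

const WS_HEARTBEAT_TIMEOUT: Duration = Duration::from_secs(300);

struct Conn {
    last_recv: Instant,
}

impl Conn {
    fn on_frame(&mut self, msg: &WsMsg, now: Instant) {
        match msg {
            WsMsg::Text(_) => self.last_recv = now,
            // Fix 1: refresh last_recv before replying with Pong.
            WsMsg::Ping(_) => {
                self.last_recv = now;
                // ...send Pong reply here...
            }
            // Fix 2: handle Pong explicitly; a wildcard arm used to
            // swallow it without touching last_recv.
            WsMsg::Pong(_) => self.last_recv = now,
        }
    }

    fn is_stale(&self, now: Instant) -> bool {
        now.duration_since(self.last_recv) > WS_HEARTBEAT_TIMEOUT
    }
}

fn main() {
    let t0 = Instant::now();
    let later = t0 + Duration::from_secs(400);
    let mut conn = Conn { last_recv: t0 };
    assert!(conn.is_stale(later)); // 400s of "silence": watchdog fires
    conn.on_frame(&WsMsg::Pong(Vec::new()), later);
    assert!(!conn.is_stale(later)); // Pong now counts as activity
}
```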
Chummy
1bf5582c83 docs(provider): clarify credential resolution for fallback chains 2026-02-19 18:43:45 +08:00
Chummy
ba018a38ef chore(provider): normalize fallback test comments to ASCII punctuation 2026-02-19 18:43:45 +08:00
Chummy
435c33d408 fix(provider): preserve fallback runtime options when resolving credentials 2026-02-19 18:43:45 +08:00
Vernon Stinebaker
bb22bdc8fb fix(provider): resolve fallback provider credentials independently
Fallback providers in create_resilient_provider_with_options() were
created via create_provider_with_options() which passed the primary
provider's api_key as credential_override. This caused
resolve_provider_credential() to short-circuit on the override and
never check the fallback provider's own env var (e.g. DEEPSEEK_API_KEY
for a deepseek fallback), resulting in auth failures (401) when the
primary and fallback use different API services.

Switch to create_provider_with_url(fallback, None, None) so each
fallback resolves its own credential via provider-specific env vars.
This also enables custom: URL prefixes (e.g.
custom:http://host.docker.internal:1234/v1) to work as fallback
entries, which was previously impossible through the options path.

Add three focused tests covering independent credential resolution,
custom URL fallbacks, and mixed fallback chains.
2026-02-19 18:43:45 +08:00
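The credential-resolution bug this commit fixes can be modeled std-only (the environment is a map here, and the function names mirror but do not reproduce zeroclaw's real API): an explicit override wins unconditionally, which is exactly why forwarding the primary's key to a fallback provider short-circuited its own env-var lookup.

```rust
use std::collections::HashMap;

// Sketch: override beats env lookup; each provider otherwise resolves
// its own provider-specific environment variable.
fn resolve_provider_credential(
    provider: &str,
    credential_override: Option<&str>,
    env: &HashMap<String, String>,
) -> Option<String> {
    if let Some(key) = credential_override {
        return Some(key.to_string()); // short-circuits env resolution
    }
    let var = match provider {
        "openai" => "OPENAI_API_KEY",
        "deepseek" => "DEEPSEEK_API_KEY",
        _ => return None,
    };
    env.get(var).cloned()
}

fn main() {
    let mut env = HashMap::new();
    env.insert("OPENAI_API_KEY".to_string(), "sk-primary".to_string());
    env.insert("DEEPSEEK_API_KEY".to_string(), "sk-fallback".to_string());

    // Buggy path: the fallback inherits the primary's key -> 401.
    let buggy = resolve_provider_credential("deepseek", Some("sk-primary"), &env);
    assert_eq!(buggy.as_deref(), Some("sk-primary"));

    // Fixed path: no override, so the fallback finds DEEPSEEK_API_KEY.
    let fixed = resolve_provider_credential("deepseek", None, &env);
    assert_eq!(fixed.as_deref(), Some("sk-fallback"));
}
```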
Chummy
f9e1ffe634 style: format schema provider override logic 2026-02-19 18:04:55 +08:00
Chummy
916c0c823b fix: sync gateway pairing persistence and proxy null clears 2026-02-19 18:04:55 +08:00
Jayson Reis
f1ca73d3d2 chore: Remove more blocking io calls 2026-02-19 18:04:55 +08:00
Chummy
1aec9ad9c0 fix(rebase): resolve duplicate tests and gateway AppState fields 2026-02-19 18:03:09 +08:00
Chummy
268a1dee09 style: apply rustfmt after rebase 2026-02-19 18:03:09 +08:00
Chummy
b1ebd4b579 fix(whatsapp): complete wa-rs channel behavior and storage correctness 2026-02-19 18:03:09 +08:00
mmacedoeu
c2a1eb1088 feat(channels): implement WhatsApp Web channel with wa-rs integration
- Add wa-rs dependencies with custom rusqlite storage backend
- Implement functional WhatsApp Web channel using wa-rs Bot
- Integrate TokioWebSocketTransportFactory and UreqHttpClient
- Add message handling via Bot event loop with proper shutdown
- Create WhatsApp storage trait implementations for wa-rs
- Add WhatsApp config schema and onboarding support
- Implement Meta webhook verification for WhatsApp Cloud API
- Add webhook signature verification for security
- Generate unique message keys for WhatsApp conversations
- Remove unused Node.js whatsapp-web-bridge stub

Supersedes: baileys-based bridge approach in favor of native Rust wa-rs
2026-02-19 18:03:09 +08:00
Chummy
9381e4451a fix(config): preserve explicit custom provider against legacy PROVIDER override 2026-02-19 17:54:25 +08:00
Chummy
d6dca4b890 fix(provider): align native tool system-flattening and add regressions 2026-02-19 17:44:07 +08:00
YubinghanBai
48eb1d1f30 fix(agent): inject full datetime into system prompt and allow date command
Three related agent UX issues found during MiniMax channel testing:

1. DateTimeSection injected only timezone, not the actual date/time.
   Models have no reliable way to know the current date from training
   data alone, causing wrong or hallucinated dates in responses.
   Fix: include full timestamp (YYYY-MM-DD HH:MM:SS TZ) in the prompt.

2. The `date` shell command was absent from the security policy
   allowed_commands default list. When a model tried to call
   shell("date") to get the current time, it received a policy
   rejection and told the user it was "blocked by security policy".
   Fix: add "date" to the default allowed_commands list. The command
   is read-only, side-effect-free, and carries no security risk.

3. (Context) The datetime prompt fix makes the date command fallback
   largely unnecessary, but the allowlist addition ensures the tool
   works correctly if models choose to call it anyway.

Non-goals:
- Not changing the autonomy model or risk classification
- Not adding new config keys

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 17:44:07 +08:00
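The allowlist change in point 2 reduces to a membership check; a minimal sketch (the real policy engine classifies risk and handles arguments, which this omits):

```rust
// Sketch: a command passes the security policy only if it appears in
// the allowed_commands list.
fn is_allowed(cmd: &str, allowed: &[&str]) -> bool {
    allowed.contains(&cmd)
}

fn main() {
    // "date" newly added: read-only and side-effect free.
    let allowed = ["ls", "cat", "date"];
    assert!(is_allowed("date", &allowed));
    assert!(!is_allowed("rm", &allowed));
}
```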
Chummy
c9a0893fc8 fix(bootstrap): support --model in onboard passthrough 2026-02-19 17:36:20 +08:00
cbigger
3c60b6bc2d feat(onboard): add optional --model flag to quick setup and channels-only guard 2026-02-19 17:36:20 +08:00
Chummy
ff254b4bb3 fix(provider): harden think-tag fallback and add edge-case tests 2026-02-19 16:54:52 +08:00
YubinghanBai
db7b24b319 fix(provider): strip <think> tags and merge system messages for MiniMax
MiniMax API rejects role: system in the messages array with error
2013 (invalid message role: system). In channel mode, the history
builder prepends a system message and optionally appends a second
one for delivery instructions, causing 400 errors on every channel
turn.

Additionally, MiniMax reasoning models embed chain-of-thought in
the content field as <think>...</think> blocks rather than using
the separate reasoning_content field, causing raw thinking output
to leak into user-visible responses.

Changes:
- Add merge_system_into_user flag to OpenAiCompatibleProvider;
  when set, all system messages are concatenated and prepended to
  the first user message before sending to the API
- Add new_merge_system_into_user() constructor used by MiniMax
- Add strip_think_tags() helper that removes <think>...</think>
  blocks from response content before returning to the caller
- Apply strip_think_tags in effective_content() and
  effective_content_optional() so all non-streaming paths are covered
- Update MiniMax factory registration to use new_merge_system_into_user
- Fix pre-existing rustfmt violation on apply_auth_header call

All other providers continue to use the default path unchanged.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 16:54:52 +08:00
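The two MiniMax workarounds described above can be sketched std-only (signatures are illustrative; the real `strip_think_tags` and merge flag live on the provider): one helper removes `<think>...</think>` blocks from response content, the other folds system text into the first user message because the API rejects role `system`.

```rust
// Sketch: remove every <think>...</think> block; an unclosed tag drops
// the remainder rather than leaking raw chain-of-thought to the user.
fn strip_think_tags(content: &str) -> String {
    let mut out = String::new();
    let mut rest = content;
    while let Some(start) = rest.find("<think>") {
        out.push_str(&rest[..start]);
        match rest[start..].find("</think>") {
            Some(end) => rest = &rest[start + end + "</think>".len()..],
            None => rest = "",
        }
    }
    out.push_str(rest);
    out.trim().to_string()
}

// Sketch: concatenate all system messages and prepend them to the
// first user message, since MiniMax rejects role "system" (error 2013).
fn merge_system_into_user(system: &[&str], first_user: &str) -> String {
    if system.is_empty() {
        first_user.to_string()
    } else {
        format!("{}\n\n{}", system.join("\n\n"), first_user)
    }
}

fn main() {
    assert_eq!(strip_think_tags("<think>plan...</think>Final answer"), "Final answer");
    assert_eq!(strip_think_tags("no tags here"), "no tags here");
    assert_eq!(merge_system_into_user(&["Be terse."], "hi"), "Be terse.\n\nhi");
}
```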
Chummy
d33eadea75 docs(config): document schema command and add schema test 2026-02-19 16:41:21 +08:00
s04
282fbe0e95 style: fix cargo fmt formatting in config schema handler 2026-02-19 16:41:21 +08:00
s04
996f66b6a7 feat: add zeroclaw config schema for JSON Schema export
Add a `config schema` subcommand that dumps the full configuration
schema as JSON Schema (draft 2020-12) to stdout. This enables
downstream consumers (like PankoAgent) to programmatically validate
configs, generate forms, and stay in sync with zeroclaw's evolving
config surface without hand-maintaining copies of the schema.

- Add schemars 1.2 dependency and derive JsonSchema on all config
  structs/enums (schema.rs, policy.rs, email_channel.rs)
- Add `Config` subcommand group with `Schema` sub-command
- Output is valid JSON Schema with $defs for all 56 config types
2026-02-19 16:41:21 +08:00
Jayson Reis
d44dc5a048 chore: Add nix files for easy on-boarding on the project 2026-02-19 16:29:32 +08:00