ChannelMessage.sender was used both for display (username) and as the
reply target in Channel::send(). For Telegram, sender is the username
(e.g. "unknown") while send() requires the numeric chat_id, causing
"Bad Request: chat not found" errors.
Add a dedicated reply_to field to ChannelMessage that stores the
channel-specific reply address (Telegram chat_id, Discord channel_id,
Slack channel, etc.). Update all channel implementations and dispatch
code to use reply_to for send/start_typing/stop_typing calls.
This also fixes the same latent bug in Discord and Slack channels where
sender (user ID) was incorrectly passed as the reply target.
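A minimal sketch of the split between display identity and reply address; field names other than `sender` and `reply_to` are assumptions for illustration:

```rust
// Hypothetical shape of ChannelMessage after the fix: `sender` is for
// display only, `reply_to` carries the channel-specific reply address.
#[derive(Debug, Clone)]
struct ChannelMessage {
    sender: String,   // display name (e.g. Telegram username)
    reply_to: String, // reply address (e.g. Telegram numeric chat_id)
    body: String,
}

// Dispatch code addresses replies via reply_to, never sender.
fn reply_target(msg: &ChannelMessage) -> &str {
    &msg.reply_to
}

fn main() {
    let msg = ChannelMessage {
        sender: "unknown".to_string(),
        reply_to: "123456789".to_string(),
        body: "hello".to_string(),
    };
    assert_ne!(msg.sender, reply_target(&msg));
    println!("reply target: {}", reply_target(&msg));
}
```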
`cargo install` places the binary in ~/.cargo/bin, which may not be in
the user's PATH by default. This adds an explicit export step so new
users don't hit a "not found" error after install.
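The added step amounts to something like the following (bash shown; the exact wording in the docs may differ):

```shell
# Put cargo's install directory on PATH for the current session.
export PATH="$HOME/.cargo/bin:$PATH"
# Confirm the directory is now searched:
case ":$PATH:" in
  *":$HOME/.cargo/bin:"*) echo "cargo bin on PATH" ;;
  *) echo "cargo bin missing from PATH" ;;
esac
```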
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add ProviderCapabilities struct to enable runtime detection of
provider-specific features, starting with native tool calling support.
This is a foundational change that enables future PRs to implement
intelligent tool calling mode selection (native vs prompt-guided).
Changes:
- Add ProviderCapabilities struct with native_tool_calling field
- Add capabilities() method to Provider trait with default impl
- Add unit tests for capabilities equality and defaults
Why:
- Current design cannot distinguish providers with native tool calling
- Needed to enable Gemini/Anthropic/OpenAI native function calling
- Fully backward compatible (all providers inherit default)
What did NOT change:
- No existing Provider methods modified
- No behavior changes for existing code
- Zero breaking changes
Testing:
- cargo test: all tests passed
- cargo fmt: pass
- cargo clippy: pass
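The additive pattern described above can be sketched as follows; `ProviderCapabilities`, `capabilities()`, and `native_tool_calling` are the names from this commit, but the exact field set and trait shape are assumptions:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub struct ProviderCapabilities {
    pub native_tool_calling: bool,
}

pub trait Provider {
    // Default impl: existing providers inherit "no native tool calling",
    // which is why the change is fully backward compatible.
    fn capabilities(&self) -> ProviderCapabilities {
        ProviderCapabilities::default()
    }
}

struct LegacyProvider;
impl Provider for LegacyProvider {} // inherits the default

struct ToolAwareProvider;
impl Provider for ToolAwareProvider {
    fn capabilities(&self) -> ProviderCapabilities {
        ProviderCapabilities { native_tool_calling: true }
    }
}

fn main() {
    assert!(!LegacyProvider.capabilities().native_tool_calling);
    assert!(ToolAwareProvider.capabilities().native_tool_calling);
}
```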
Replace bare .body() call with .singlepart(SinglePart::plain()) to ensure
outgoing emails have explicit Content-Type: text/plain; charset=utf-8
header. This fixes recipients seeing raw quoted-printable encoding
(e.g., =E2=80=99) instead of properly decoded UTF-8 characters.
- Move ZEROCLAW_WORKSPACE check to the start of load_or_init()
- Use custom workspace for both config and workspace directories
- Fixes issue where env var was applied AFTER config loading
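The ordering fix can be sketched as below; `ZEROCLAW_WORKSPACE` is the real variable name, while the fallback path and function shape are illustrative assumptions:

```rust
use std::path::PathBuf;

// Resolve the workspace directory, applying the env override BEFORE any
// config loading so it governs both config and workspace directories.
fn resolve_workspace(env_override: Option<&str>) -> PathBuf {
    match env_override {
        Some(dir) if !dir.is_empty() => PathBuf::from(dir),
        _ => PathBuf::from(".zeroclaw"), // hypothetical default location
    }
}

fn main() {
    // In load_or_init() this would be std::env::var("ZEROCLAW_WORKSPACE").ok()
    assert_eq!(resolve_workspace(Some("/tmp/ws")), PathBuf::from("/tmp/ws"));
    assert_eq!(resolve_workspace(None), PathBuf::from(".zeroclaw"));
}
```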
Fixes #417
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove unused import AsyncBufReadExt in compatible.rs
- Remove unused mut keywords from response and tx
- Remove unused variable 'name'
- Prefix unused parameters with _ in traits.rs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
browser.rs:
- Extract parse_browser_action() from Tool::execute, removing one
#[allow(clippy::too_many_lines)] suppression
irc.rs:
- Replace 10-parameter IrcChannel::new() with IrcChannelConfig struct,
removing #[allow(clippy::too_many_arguments)] suppression
- Update all call sites (mod.rs and tests)
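A sketch of the parameter-struct refactor; the field names here are assumptions (the real `IrcChannelConfig` has ten fields):

```rust
// One config struct instead of ten positional parameters: no
// #[allow(clippy::too_many_arguments)] suppression needed, and call
// sites become self-documenting.
#[derive(Debug, Clone)]
pub struct IrcChannelConfig {
    pub server: String,
    pub port: u16,
    pub nickname: String,
    pub channel: String,
    pub use_tls: bool,
}

pub struct IrcChannel {
    config: IrcChannelConfig,
}

impl IrcChannel {
    pub fn new(config: IrcChannelConfig) -> Self {
        Self { config }
    }
}

fn main() {
    let chan = IrcChannel::new(IrcChannelConfig {
        server: "irc.libera.chat".into(),
        port: 6697,
        nickname: "zeroclaw".into(),
        channel: "#rust".into(),
        use_tls: true,
    });
    assert_eq!(chan.config.port, 6697);
}
```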
Closes #366
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- feat(streaming): add streaming support for LLM responses (fixes #211)
- security(deps): remove vulnerable xmas-elf dependency via embuild (fixes #399)
- fix: resolve merge conflicts and integrate chat_with_tools from main
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit fixes compilation errors when running tests by:
1. Adding `futures = "0.3"` dependency to Cargo.toml
2. Adding proper import `use futures_util::{stream, StreamExt};`
3. Replacing `futures::stream` with `stream` (using imported module)
The `futures_util` dependency was already present (with the `sink`
feature enabled) but did not expose the stream-related types needed here.
Adding the full `futures` crate provides the complete stream API required
by the streaming chat functionality.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Implement Server-Sent Events (SSE) streaming for OpenAI-compatible providers:
- Add StreamChunk, StreamOptions, and StreamError types to traits module
- Add supports_streaming() and stream_chat_with_system() to Provider trait
- Implement SSE parser for OpenAI streaming responses (data: {...} format)
- Add streaming support to OpenAiCompatibleProvider
- Add streaming support to ReliableProvider with error propagation
- Add futures dependency for async stream support
Features:
- Token-by-token streaming for real-time feedback
- Token counting option (estimated ~4 chars per token)
- Graceful error handling and logging
- Channel-based stream bridging for async compatibility
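The `data: {...}` framing can be illustrated with a toy line parser; real code would deserialize the JSON payload (serde), which is omitted here:

```rust
// Parse one SSE line from an OpenAI-style stream: keep the payload of
// "data: ..." lines, treat "data: [DONE]" as end-of-stream, and skip
// everything else (blank lines, comments).
fn parse_sse_line(line: &str) -> Option<&str> {
    let payload = line.strip_prefix("data: ")?;
    if payload == "[DONE]" {
        None // end-of-stream sentinel
    } else {
        Some(payload)
    }
}

fn main() {
    let lines = [
        "data: {\"choices\":[{\"delta\":{\"content\":\"Hi\"}}]}",
        "",
        "data: [DONE]",
    ];
    let chunks: Vec<&str> = lines.iter().filter_map(|l| parse_sse_line(l)).collect();
    assert_eq!(chunks.len(), 1);
}
```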
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Removes the unused "elf" feature from the embuild dependency in
firmware/zeroclaw-esp32/Cargo.toml.
Vulnerability Details:
- Advisory: GHSA-9cc5-2pq7-hfj8
- Package: xmas-elf < 0.10.0
- Severity: Moderate (insufficient bounds checks in HashTable access)
Root Cause:
- The embuild dependency (version < 0.33) relies on xmas-elf ~0.9.1
- The "elf" feature was enabled but not actually used
Fix:
- Removed features = ["elf"] from embuild dependency
- The build.rs only uses embuild::espidf::sysenv, which doesn't require elf
- xmas-elf dependency is now completely eliminated from Cargo.lock
Verification:
- cargo build passes successfully
- grep "xmas-elf" firmware/zeroclaw-esp32/Cargo.lock confirms removal
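The fix amounts to a one-line dependency change along these lines (the embuild version shown is illustrative, not the exact pinned version):

```toml
# firmware/zeroclaw-esp32/Cargo.toml
# Before: the "elf" feature pulled in xmas-elf transitively.
# embuild = { version = "0.32", features = ["elf"] }
# After: build.rs only uses embuild::espidf::sysenv, so no features needed.
embuild = "0.32"
```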
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add chat_with_tools() to the Provider trait with a default fallback to
chat_with_history(). Implement native tool calling in OpenRouterProvider,
reusing existing NativeChatRequest/NativeChatResponse structs. Wire the
agent loop to use native tool calls when the provider supports them,
falling back to XML-based parsing otherwise.
Changes are purely additive to traits.rs and openrouter.rs. The only
deletions (36 lines) are within run_tool_call_loop() in loop_.rs where
the LLM call section was replaced with a branching if/else for native
vs XML tool calling.
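The default-fallback mechanism can be sketched as follows; the real signatures are async and use message/tool types, simplified here to sync strings:

```rust
// Additive trait method: chat_with_tools() falls back to the existing
// chat_with_history() path, so providers without native tool calling
// keep working unchanged while capable providers can override it.
pub trait Provider {
    fn chat_with_history(&self, prompt: &str) -> String;

    fn chat_with_tools(&self, prompt: &str, _tools: &[&str]) -> String {
        self.chat_with_history(prompt) // XML-parsing path
    }
}

struct XmlOnlyProvider;
impl Provider for XmlOnlyProvider {
    fn chat_with_history(&self, prompt: &str) -> String {
        format!("xml:{prompt}")
    }
}

struct OpenRouterLike;
impl Provider for OpenRouterLike {
    fn chat_with_history(&self, prompt: &str) -> String {
        format!("plain:{prompt}")
    }
    // Native tool calling override.
    fn chat_with_tools(&self, prompt: &str, tools: &[&str]) -> String {
        format!("native:{prompt}:{}", tools.len())
    }
}

fn main() {
    assert_eq!(XmlOnlyProvider.chat_with_tools("hi", &["t"]), "xml:hi");
    assert_eq!(OpenRouterLike.chat_with_tools("hi", &["t"]), "native:hi:1");
}
```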
Includes 5 new tests covering:
- chat_with_tools error path (missing API key)
- NativeChatResponse deserialization (tool calls only, mixed)
- parse_native_response conversion to ChatResponse
- tools_to_openai_format schema validation