Compare commits

...

200 commits

Author SHA1 Message Date
pluginmd
c185261909 fix(i18n): rename README.vn.md to README.vi.md
Use correct ISO 639-1 language code (vi) instead of country code (vn),
consistent with existing translations (zh-CN, ja, ru).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 19:01:38 +08:00
pluginmd
4abd1b4471 docs(i18n): add Vietnamese README translation
Add full Vietnamese (Tiếng Việt) translation of README.md and update
language selector links across existing README files.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 19:01:38 +08:00
dependabot[bot]
b23c2e7ae6
chore(deps): bump rand from 0.9.2 to 0.10.0 (#1075)
* chore(deps): bump rand from 0.9.2 to 0.10.0

Bumps [rand](https://github.com/rust-random/rand) from 0.9.2 to 0.10.0.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/rand_core-0.9.2...0.10.0)

---
updated-dependencies:
- dependency-name: rand
  dependency-version: 0.10.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix(security): keep token generation compatible with rand 0.10

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Will Sarg <12886992+willsarg@users.noreply.github.com>
2026-02-20 05:29:23 -05:00
dependabot[bot]
bd7b59151a
chore(deps): bump actions/download-artifact from 4.3.0 to 7.0.0 (#1073)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.3.0 to 7.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](d3f86a106a...37930b1c2a)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 05:27:25 -05:00
dependabot[bot]
e04d114814
chore(deps): bump toml from 0.8.23 to 1.0.1+spec-1.1.0 (#1074)
Bumps [toml](https://github.com/toml-rs/toml) from 0.8.23 to 1.0.1+spec-1.1.0.
- [Commits](https://github.com/toml-rs/toml/compare/toml-v0.8.23...toml-v1.0.1)

---
updated-dependencies:
- dependency-name: toml
  dependency-version: 1.0.1+spec-1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 05:25:41 -05:00
dependabot[bot]
ee7c437061
chore(deps): bump probe-rs from 0.30.0 to 0.31.0 (#1076)
Bumps [probe-rs](https://github.com/probe-rs/probe-rs) from 0.30.0 to 0.31.0.
- [Release notes](https://github.com/probe-rs/probe-rs/releases)
- [Changelog](https://github.com/probe-rs/probe-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/probe-rs/probe-rs/compare/v0.30.0...v0.31.0)

---
updated-dependencies:
- dependency-name: probe-rs
  dependency-version: 0.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 05:25:38 -05:00
fettpl
c649ced585
fix(security): enforce cron agent autonomy and rate gates (#626) 2026-02-20 05:23:20 -05:00
Edvard Schøyen
861137b2b3
fix(security): deny unapproved tool calls on non-CLI channels (#998)
When autonomy is set to "supervised", the approval gate only prompted
interactively on CLI. On Telegram and other channels, all tool calls
were silently auto-approved with ApprovalResponse::Yes, including
high-risk tools like shell — completely bypassing supervised mode.

On non-CLI channels where interactive prompting is not possible, deny
tool calls that require approval instead of auto-approving. Users can
expand the auto_approve list in config to explicitly allow specific
tools on non-interactive channels.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 05:22:56 -05:00
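The fail-closed behavior this commit describes can be sketched as follows. This is an illustrative reconstruction, not the project's actual API: `ApprovalResponse`, `gate_tool_call`, and the parameter names are hypothetical.

```rust
// Hypothetical sketch of the supervised-mode gate: on channels that cannot
// prompt interactively, deny instead of silently auto-approving.
#[derive(Debug, PartialEq)]
enum ApprovalResponse {
    Yes,
    Denied,
}

fn gate_tool_call(
    tool: &str,
    channel_is_interactive: bool,
    auto_approve: &[&str],
) -> ApprovalResponse {
    if auto_approve.contains(&tool) {
        // Explicitly allow-listed in config: fine on any channel.
        return ApprovalResponse::Yes;
    }
    if channel_is_interactive {
        // A real implementation would prompt the operator here (CLI path).
        ApprovalResponse::Yes
    } else {
        // Non-CLI channel (Telegram etc.): no way to ask, so fail closed.
        ApprovalResponse::Denied
    }
}

fn main() {
    assert_eq!(gate_tool_call("shell", false, &[]), ApprovalResponse::Denied);
    assert_eq!(gate_tool_call("shell", false, &["shell"]), ApprovalResponse::Yes);
}
```

The key property is the last branch: before the fix, that arm effectively returned `Yes` for every tool, including `shell`.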
Andy Tian
9fdc4c36b1
fix(observability): use blocking OTLP HTTP exporter client (#975) (#1032) 2026-02-20 05:08:33 -05:00
Alex Gorevski
2c407f6a55
refactor(lib): restrict internal module visibility to pub(crate) (#985)
Restrict 19 internal-only modules from pub to pub(crate) in lib.rs,
reducing the public API surface of the library crate.

Modules kept pub (used by integration tests, benchmarks, or are
documented extension points per AGENTS.md):
  agent, channels, config, gateway, memory, observability,
  peripherals, providers, rag, runtime, tools

Modules restricted to pub(crate) (not imported via zeroclaw:: by any
external consumer):
  approval, auth, cost, cron, daemon, doctor, hardware, health,
  heartbeat, identity, integrations, migration, multimodal, onboard,
  security, service, skills, tunnel, util

Also restrict 6 command enums (ServiceCommands, ChannelCommands,
SkillCommands, MigrateCommands, CronCommands, IntegrationCommands)
to pub(crate) — main.rs defines its own copies and does not import
these from the library crate. HardwareCommands and PeripheralCommands
remain pub as main.rs imports them via zeroclaw::.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-20 05:06:41 -05:00
Edvard Schøyen
f35a365d83
fix(agent): implement actual concurrent tool execution (#1001)
When parallel_tools is enabled, both code branches in execute_tools()
ran the same sequential for loop. The parallel path was a no-op.

Use futures::future::join_all to execute tool calls concurrently when
parallel_tools is true. The futures crate is already a dependency.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 05:05:33 -05:00
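The commit uses `futures::future::join_all`; the std-thread sketch below shows the same shape without the async runtime: dispatch every call, then collect all results in order. `execute_tool` is a stand-in for the real tool dispatcher.

```rust
use std::thread;

// Stand-in for the real tool dispatcher.
fn execute_tool(name: &str) -> String {
    format!("{name}: ok")
}

// Concurrent analogue of the fixed path: spawn all calls, then join them.
// Before the fix, both branches ran the equivalent of a sequential for loop.
fn execute_tools_parallel(calls: &[&str]) -> Vec<String> {
    thread::scope(|s| {
        let handles: Vec<_> = calls
            .iter()
            .map(|c| s.spawn(move || execute_tool(c)))
            .collect();
        // Join in order so results line up with the original call order.
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let out = execute_tools_parallel(&["read_file", "web_search"]);
    assert_eq!(out, vec!["read_file: ok", "web_search: ok"]);
}
```

With `join_all` the structure is identical: map each call to a future, then await the combined future for the full result vector.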
Edvard Schøyen
2ae12578f0
fix(channel): use per-recipient typing handles in Discord (#1005)
Replace the single shared typing_handle with a HashMap keyed by
recipient channel ID. Previously, concurrent messages would fight
over one handle — starting typing for message B would cancel message
A's indicator, and stopping one would kill the other's.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 05:02:39 -05:00
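A minimal sketch of the per-recipient handle map, assuming a `TypingHandle` stand-in for the task handle that keeps a typing indicator alive (names illustrative). Keying by channel ID means stopping one recipient's indicator can no longer cancel another's.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the handle that keeps a typing indicator running.
#[allow(dead_code)]
struct TypingHandle {
    channel_id: u64,
}

#[derive(Default)]
struct TypingState {
    // The fix: one handle per recipient channel, not a single shared slot.
    handles: HashMap<u64, TypingHandle>,
}

impl TypingState {
    fn start(&mut self, channel_id: u64) {
        // Replacing an entry only restarts typing for *this* channel.
        self.handles.insert(channel_id, TypingHandle { channel_id });
    }
    fn stop(&mut self, channel_id: u64) {
        // Only the matching recipient's handle is dropped.
        self.handles.remove(&channel_id);
    }
    fn active(&self) -> usize {
        self.handles.len()
    }
}

fn main() {
    let mut state = TypingState::default();
    state.start(1);
    state.start(2);
    state.stop(1); // message A finishes; B's indicator keeps running
    assert_eq!(state.active(), 1);
}
```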
Edvard Schøyen
e2c507664c
fix(provider): surface API key rotation as ineffective warning (#1000)
rotate_key() selects the next key in the round-robin but never applies
it to the underlying provider (Provider trait has no set_api_key
method). The previous info-level log implied rotation was working.

Change to warn-level and explicitly state the key is not applied,
making the limitation visible to operators instead of silently
pretending rotation works.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 05:00:26 -05:00
Alex Gorevski
1a3be5e54f
fix(config): change web_search.enabled default to false for explicit opt-in (#986)
Network access (web search via DuckDuckGo) should require explicit user
consent rather than being enabled by default. This aligns with the
least-surprise principle and the project's secure-by-default policy:
users must opt in to external network requests.

Changes:
- WebSearchConfig::default() now sets enabled: false
- Serde default for enabled field changed from default_true to default
  (bool defaults to false)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-20 04:58:19 -05:00
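The default change reduces to this: the struct's `Default` now disables web search, so network access requires explicit opt-in. The field name follows the commit; the struct body is an illustrative sketch (serde wiring omitted).

```rust
#[derive(Debug)]
struct WebSearchConfig {
    enabled: bool,
}

impl Default for WebSearchConfig {
    fn default() -> Self {
        // Was `enabled: true` before #986. bool::default() is false, which is
        // also what a missing field now deserializes to under serde's default.
        Self { enabled: false }
    }
}

fn main() {
    assert!(!WebSearchConfig::default().enabled);
}
```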
Jayson Reis
75772cc3a7
chore: Fix pull request template's merge conflict (#892) 2026-02-20 04:57:29 -05:00
dependabot[bot]
b76c757400
chore(deps): bump criterion from 0.5.1 to 0.8.2 (#1070)
Bumps [criterion](https://github.com/criterion-rs/criterion.rs) from 0.5.1 to 0.8.2.
- [Release notes](https://github.com/criterion-rs/criterion.rs/releases)
- [Changelog](https://github.com/criterion-rs/criterion.rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/criterion-rs/criterion.rs/compare/0.5.1...criterion-v0.8.2)

---
updated-dependencies:
- dependency-name: criterion
  dependency-version: 0.8.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 04:51:09 -05:00
dependabot[bot]
7875a08100
chore(deps): bump directories from 5.0.1 to 6.0.0 (#1069)
Bumps [directories](https://github.com/soc/directories-rs) from 5.0.1 to 6.0.0.
- [Commits](https://github.com/soc/directories-rs/commits)

---
updated-dependencies:
- dependency-name: directories
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 04:51:06 -05:00
dependabot[bot]
d82350d847
chore(deps): bump the rust-all group with 3 updates (#1068)
Bumps the rust-all group with 3 updates: [clap](https://github.com/clap-rs/clap), [anyhow](https://github.com/dtolnay/anyhow) and [nusb](https://github.com/kevinmehall/nusb).


Updates `clap` from 4.5.58 to 4.5.60
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.58...clap_complete-v4.5.60)

Updates `anyhow` from 1.0.101 to 1.0.102
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.101...1.0.102)

Updates `nusb` from 0.2.1 to 0.2.2
- [Release notes](https://github.com/kevinmehall/nusb/releases)
- [Commits](https://github.com/kevinmehall/nusb/compare/v0.2.1...v0.2.2)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.60
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: anyhow
  dependency-version: 1.0.102
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: nusb
  dependency-version: 0.2.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 04:51:03 -05:00
dependabot[bot]
12fd87623a
chore(deps): bump sigstore/cosign-installer from 3.8.2 to 4.0.0 (#1067)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.2 to 4.0.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](3454372f43...faadad0cce)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 04:39:29 -05:00
Will Sarg
c96ea79ac0
feat(installer): add guided zeroclaw installer and distro hardening (#887)
* feat(installer): add guided zeroclaw installer entrypoint

- add top-level POSIX wrapper (zeroclaw_install.sh) that ensures bash is present

- route bootstrap/install compatibility scripts through the new installer entrypoint

- improve Linux dependency handling for Alpine/Fedora/Arch, including pacman container fallback

* fix(ci): resolve dependabot config conflict and run daily

- remove duplicate docker ecosystem entry with overlapping directory/target-branch

- switch cargo, github-actions, and docker schedules from monthly to daily
2026-02-20 04:34:14 -05:00
Chummy
a2e9c0d1e1 fix(skills): make open-skills sync opt-in and configurable 2026-02-20 16:45:50 +08:00
Chummy
d0674c4b98 fix(channels): harden whatsapp web mode and document dual backend 2026-02-20 16:45:16 +08:00
Chummy
70f12e5df9 test(onboard): add regression coverage for quick setup model override 2026-02-20 16:22:03 +08:00
Chummy
bbaf55eb3b fix(config): harden sync_directory async signature across platforms 2026-02-20 16:21:47 +08:00
Chummy
654f822430 fix(memory): avoid tokio runtime panic when initializing postgres backend 2026-02-20 16:21:25 +08:00
Chummy
7c2c370180 fix(channel): preserve interrupted user context in cached turn normalization 2026-02-20 16:21:24 +08:00
Chummy
e7ccb573fa fix(observability): prevent otel reactor panic in non-tokio contexts 2026-02-20 16:07:50 +08:00
xj
2d6205ee58 fix(channel): use native tool calling to preserve conversation context
AnthropicProvider declared supports_native_tools() = true but did not
override chat_with_tools(). The default trait implementation drops all
conversation history (sends only system + last user message), breaking
multi-turn conversations on Telegram and other channels.

Changes:
- Override chat_with_tools() in AnthropicProvider: converts OpenAI-format
  tool JSON to ToolSpec and delegates to chat() which preserves full
  message history
- Skip build_tool_instructions() XML protocol when provider supports
  native tools (saves ~12k chars in system prompt)
- Remove duplicate Tool Use Protocol section from build_system_prompt()
  for native-tool providers
- Update Your Task section to encourage conversational follow-ups
  instead of XML tool_call tags when using native tools
- Add tracing::warn for malformed tool definitions in chat_with_tools
2026-02-20 13:58:27 +08:00
xj
8c826e581c fix(channel): store raw user message and skip memory recall with history
Two fixes for conversation history quality:

1. Store raw msg.content in ConversationHistoryMap instead of
   enriched_message — memory context is ephemeral per-request and
   pollutes future turns when persisted.

2. Skip memory recall when conversation history exists — prior turns
   already provide context. Memory recall adds noise and can mislead
   the model (e.g. old 'seen' entries overshadowing a code variable
   named seen in the current conversation).
2026-02-20 13:58:27 +08:00
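Both fixes can be sketched in one function: persist the raw user text rather than the enriched prompt, and gate memory recall on whether history already exists. `Turn` and `build_prompt` are hypothetical names, not the project's actual API.

```rust
#[allow(dead_code)]
struct Turn {
    role: &'static str,
    content: String,
}

fn build_prompt(history: &mut Vec<Turn>, raw: &str, memory_context: &str) -> String {
    // Fix 2: skip recall once prior turns already provide context.
    let recall = if history.is_empty() { memory_context } else { "" };
    // Fix 1: history stores the raw message; enrichment stays per-request.
    history.push(Turn { role: "user", content: raw.to_string() });
    if recall.is_empty() {
        raw.to_string()
    } else {
        format!("[Memory context]\n{recall}\n\n{raw}")
    }
}

fn main() {
    let mut history = Vec::new();
    let first = build_prompt(&mut history, "what is seen?", "seen: old entry");
    assert!(first.starts_with("[Memory context]"));
    let second = build_prompt(&mut history, "and now?", "seen: old entry");
    assert_eq!(second, "and now?");             // no recall once history exists
    assert_eq!(history[1].content, "and now?"); // raw text is what persists
}
```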
Chummy
8cafeb02e8
fix(composio): request latest v3 tool versions by default (#1039) 2026-02-19 23:29:09 -05:00
Chummy
f274fd5757
fix(channel): prevent false timeout during multi-turn tool loops (#1037) 2026-02-19 23:28:05 -05:00
Chummy
178bb108da
fix(gemini): correct Gemini CLI OAuth cloudcode payload/response handling (#1040)
* fix(gemini): align OAuth cloudcode payload and response parsing

* docs(gemini): document OAuth vs API key endpoint behavior
2026-02-19 23:27:00 -05:00
Chummy
db2d9acd22
fix(skills): support SSH git remotes for skills install (#1035) 2026-02-19 23:25:47 -05:00
Chummy
f10bb998e0
fix(build): unblock low-resource installs and release binaries (#1041)
* fix(build): unblock low-resource installs and release binaries

* fix(ci): use supported intel macOS runner label
2026-02-19 23:24:43 -05:00
Chummy
5c1d6fcba6 fix(channel): align runtime defaults with current model id and test context 2026-02-20 11:05:41 +08:00
Chummy
740eb17d76 fix(channel): hot-apply runtime config updates for running channel service 2026-02-20 11:05:41 +08:00
Chummy
95ec5922d1 fix(channel): robust tool context summary extraction 2026-02-20 10:59:18 +08:00
Edvard
61530520b3 fix(channel): preserve tool context in conversation history
After run_tool_call_loop, only the final text response was saved to
per-sender conversation history. All intermediate tool calls and results
were discarded, so on the next turn the LLM had no awareness of what
tools it used or what it discovered — causing poor follow-up ability.

Record the history length before the tool loop, then scan new messages
for tool names after it completes. Prepend a compact [Used tools: ...]
annotation to the assistant message saved in history, giving the LLM
context about its own actions on subsequent turns.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 10:59:18 +08:00
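The annotation step described above can be sketched as: scan every message appended since the recorded length for tool names, then prepend a compact summary to the assistant turn saved in history. `Message` and `annotate_assistant_reply` are illustrative names.

```rust
#[allow(dead_code)]
struct Message {
    tool_name: Option<String>,
    text: String,
}

fn annotate_assistant_reply(messages: &[Message], before_len: usize, reply: &str) -> String {
    // Collect tool names from messages added during the tool-call loop.
    let used: Vec<&str> = messages[before_len..]
        .iter()
        .filter_map(|m| m.tool_name.as_deref())
        .collect();
    if used.is_empty() {
        reply.to_string()
    } else {
        // Compact annotation so the next turn knows what was done.
        format!("[Used tools: {}] {}", used.join(", "), reply)
    }
}

fn main() {
    let msgs = vec![
        Message { tool_name: None, text: "hi".into() },
        Message { tool_name: Some("web_search".into()), text: "results".into() },
    ];
    assert_eq!(
        annotate_assistant_reply(&msgs, 1, "Found it."),
        "[Used tools: web_search] Found it."
    );
}
```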
Chummy
b2c5d611be fix(channel): preserve memory enrichment for current call while storing raw user turn 2026-02-20 10:48:18 +08:00
Edvard
6cbdef8c16 fix(channel): save original user text to conversation history
Previously, the memory-enriched message (with [Memory context] block
prepended) was saved to per-sender conversation history. On subsequent
turns the LLM saw stale memory fragments with raw keys baked into
prior "user" messages, creating compounding noise.

Save the original msg.content instead. Memory context is still injected
for the current LLM call but no longer persists across turns.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 10:48:18 +08:00
Edvard
ea2ff7c53b fix(memory): add minimum-length filter for auto-save messages
Every user message was auto-saved to memory regardless of length,
flooding the store with trivial entries like "ok", "thanks", "hi".
These noise entries competed with real memories during recall, degrading
relevance — especially with keyword-only search.

Skip auto-saving messages shorter than 20 characters. Applied to both
the channel path (channels/mod.rs) and CLI agent path (agent/loop_.rs).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 10:26:31 +08:00
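The filter is a one-line predicate applied before auto-save. The 20-character threshold is the one stated in the commit; `MIN_AUTOSAVE_LEN` and `should_autosave` are illustrative names.

```rust
const MIN_AUTOSAVE_LEN: usize = 20;

// Skip trivial acknowledgements ("ok", "thanks", "hi") so they never enter
// the memory store and compete with real memories during recall.
fn should_autosave(message: &str) -> bool {
    message.trim().chars().count() >= MIN_AUTOSAVE_LEN
}

fn main() {
    assert!(!should_autosave("ok"));
    assert!(!should_autosave("thanks"));
    assert!(should_autosave("remember that the staging DB lives on host db-2"));
}
```

Counting `chars()` rather than bytes keeps the threshold meaningful for non-ASCII messages.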
Chummy
63a59e3735 test(channels): assert single tool protocol block in final prompt 2026-02-20 10:25:48 +08:00
Edvard
35a3520621 fix(channel): remove duplicated tool protocol from system prompt
build_system_prompt() included a "## Tool Use Protocol" section with
the tag format and usage instructions. build_tool_instructions() then
appended another identical "## Tool Use Protocol" section with full
JSON schemas. This wasted ~1-2K tokens on every API call.

Remove the duplicate protocol block from build_system_prompt(), keeping
only the compact tool name/description list. The complete protocol with
schemas is provided by build_tool_instructions().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 10:25:48 +08:00
Edvard
3a8a1754ef fix(channel): replace hardcoded Discord bot text with generic channel text
The Channel Capabilities section in build_system_prompt() was hardcoded
to say "You are running as a Discord bot" for ALL channels, including
Telegram. This caused the LLM to misidentify itself and reference
Discord-specific features regardless of the actual channel.

Replace with generic "messaging bot" text. Per-channel delivery
instructions already exist via channel_delivery_instructions().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 10:25:07 +08:00
Ken Simpson
e0ca73336a feat(bootstrap): add docker onboarding bootstrap mode 2026-02-20 10:20:18 +08:00
Ken Simpson
2fc0504545 chore(dev): auto-load env and hide compose secrets 2026-02-20 10:20:18 +08:00
Alex Gorevski
9de77df235
Merge pull request #1020 from zeroclaw-labs/fix/code-scanning-alerts
fix(security): address CodeQL code-scanning alerts
2026-02-19 16:36:29 -08:00
Alex Gorevski
36f971a3d0 fix(security): address CodeQL code-scanning alerts
- Extract hard-coded test vector keys into named constants in bedrock.rs
  and linq.rs to resolve rust/hard-coded-cryptographic-value alerts
- Replace derived Debug impls with manual impls that redact sensitive
  fields (access_token, refresh_token, credential, api_key) on
  QwenOauthCredentials, QwenOauthProviderContext, and
  ResolvedEmbeddingConfig to resolve rust/cleartext-logging alerts
- Redact Matrix user_id and device_id hints in tracing::warn! diagnostic
  messages via crate::security::redact() to resolve cleartext-logging
  alert in matrix.rs

Addresses CodeQL alerts: #77, #95-106

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 16:31:03 -08:00
Alex Gorevski
0f69464a1f
Merge pull request #1018 from zeroclaw-labs/test/fuzz-target-expansion
test(fuzz): add webhook, provider response, and command validation fuzz targets
2026-02-19 16:17:14 -08:00
Alex Gorevski
7d945aea6a
Merge pull request #1017 from zeroclaw-labs/test/peripherals-unit-tests
test(peripherals): add unit tests for peripheral module configuration and listing
2026-02-19 16:17:07 -08:00
Alex Gorevski
9d0ff54037
Merge pull request #1016 from zeroclaw-labs/test/improve-test-assertions
test(quality): replace bare .unwrap() with .expect() in agent and shell tests
2026-02-19 16:16:42 -08:00
Alex Gorevski
1708243470
Merge pull request #1015 from zeroclaw-labs/test/gateway-idempotency-tests
test(gateway): add edge-case idempotency store tests
2026-02-19 16:16:28 -08:00
Alex Gorevski
2a106d051a
Merge pull request #1013 from zeroclaw-labs/fix/docs-inline-code-comments
docs(code): add decision-point comments to agent loop, security policy, and reliable provider
2026-02-19 16:01:19 -08:00
Alex Gorevski
88a036304d
Merge pull request #1012 from zeroclaw-labs/fix/docs-collection-indexes
docs: enhance getting-started, hardware, and project collection indexes
2026-02-19 16:00:56 -08:00
Alex Gorevski
7d7362439e
Merge pull request #1011 from zeroclaw-labs/fix/docs-config-struct-fields
docs(code): add comprehensive doc comments to config schema public fields
2026-02-19 16:00:34 -08:00
Alex Gorevski
200ce0d6fd
Merge pull request #1010 from zeroclaw-labs/fix/docs-trait-doc-comments
docs(code): expand doc comments on security, observability, runtime, and peripheral traits
2026-02-19 15:59:56 -08:00
Alex Gorevski
9f93b8ef89
Merge pull request #1009 from zeroclaw-labs/fix/docs-multilingual-readme-parity
docs: add architecture, subscription auth, and memory system sections to multilingual READMEs
2026-02-19 15:59:25 -08:00
Alex Gorevski
c6de02b93b
Merge pull request #1008 from zeroclaw-labs/fix/docs-module-level-docs
docs(code): add module-level doc blocks to providers, channels, tools, and security
2026-02-19 15:58:56 -08:00
Argenis
96d5ae0c43
fix(composio): pick first usable account when multiple exist, add connected_accounts alias (#1003)
Root cause of #959: resolve_connected_account_ref returned None when the entity had more than one connected account for an app, silently dropping auto-resolve and causing every execute call to fail with 'cannot find connected account'. The LLM then looped re-issuing the OAuth URL even though the account was already connected.

- resolve_connected_account_ref now picks the first usable account (ordered by updated_at DESC from the API) instead of returning None when multiple accounts exist
- Add 'connected_accounts' as a dispatch alias for 'list_accounts' in handler, schema enum, and description
- 8 new regression tests

Closes #959
2026-02-19 17:19:04 -05:00
Alex Gorevski
867a7a5cbd test(gateway): add edge-case idempotency store tests
Add five new idempotency store tests covering: different-key acceptance,
max_keys clamping to minimum of 1, rapid duplicate rejection, TTL-based
key expiry and re-acceptance, and eviction preserving newest entries.
Addresses audit finding on weak gateway idempotency test coverage.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:28:24 -08:00
Alex Gorevski
673697a43e test(peripherals): add unit tests for peripheral module configuration and listing
Add tests for list_configured_boards() covering enabled/disabled states and
empty/non-empty board configurations. Add test verifying create_peripheral_tools()
returns empty when peripherals are disabled. Addresses audit finding CRITICAL-1
for the untested peripherals module — covers all non-hardware-gated logic paths.

Fix pre-existing Windows build errors in config/schema.rs: make non-unix
sync_directory async and gate unix-only imports behind #[cfg(unix)].

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:28:22 -08:00
Alex Gorevski
22bd03c65a test(quality): replace bare .unwrap() with .expect() in agent and shell tests
Replace bare .unwrap() calls with descriptive .expect() messages in
src/agent/agent.rs and src/tools/shell.rs test modules. Adds meaningful
failure context for memory creation, agent builder, and tool execution
assertions. Addresses audit finding on test assertion quality (§5.2).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:23:33 -08:00
Alex Gorevski
d407eb61f0 test(fuzz): add webhook, provider response, and command validation fuzz targets
Add three new fuzz targets expanding coverage from 2 to 5 targets:
- fuzz_webhook_payload: fuzzes webhook body JSON deserialization
- fuzz_provider_response: fuzzes provider API response parsing
- fuzz_command_validation: fuzzes security policy command validation
Addresses audit findings for critical fuzz coverage gaps in gateway,
provider, and security subsystems.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:19:56 -08:00
Alex Gorevski
dd541bd7e4 docs(code): add decision-point comments to agent loop, security policy, and reliable provider
Adds section markers and decision-point comments to the three most complex
control-flow modules. Comments explain loop invariants, retry/fallback
strategy, security policy precedence rules, and error handling rationale.

This improves maintainability by making the reasoning behind complex
branches explicit for reviewers and future contributors.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:19:53 -08:00
Alex Gorevski
eae8a99584 docs(code): add comprehensive doc comments to config schema public fields
Every public field in the Config struct hierarchy now has a /// doc comment
explaining its purpose, default value, and usage context. This ensures
operators and extension developers can understand config options directly
from rustdoc without cross-referencing the config reference documentation.

Comments are consistent with docs/config-reference.md descriptions.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:19:52 -08:00
Alex Gorevski
25fd10a538 docs(code): expand doc comments on security, observability, runtime, and peripheral traits
The four underdocumented core trait files now include trait-level doc blocks
explaining purpose and architecture role, method-level documentation with
parameter/return/error descriptions, and public struct/enum documentation.

This brings parity with the well-documented provider, channel, tool, and
memory traits, giving extension developers clear guidance for implementing
these core extension points.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:19:46 -08:00
Alex Gorevski
6d4bfb73ba docs: add architecture, subscription auth, and memory system sections to multilingual READMEs
The English README contains architecture overview (diagram + trait table),
subscription auth setup (OAuth flow + examples), and memory system design
(vector + FTS5 hybrid search) sections that were missing from the Chinese,
Japanese, and Russian translations.

This closes the content parity gap identified in the documentation audit,
ensuring non-English speakers have access to the same critical architectural
context and setup guidance.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:19:46 -08:00
Alex Gorevski
4a7dff6ef1 docs(code): add module-level doc blocks to providers, channels, tools, and security
Each major subsystem mod.rs now includes a //! doc block explaining the
subsystem purpose, trait-driven architecture, factory registration pattern,
and extension guidance. This improves the generated rustdoc experience for
developers navigating ZeroClaw's modular architecture.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:19:46 -08:00
Alex Gorevski
3b471e74b7 docs: enhance getting-started, hardware, and project collection indexes
Adds onboarding decision tree to getting-started/README.md so users can
quickly identify the right setup command for their situation.

Adds hardware vision overview to hardware/README.md explaining the
Peripheral trait and supported board types.

Expands project/README.md with scope explanation describing the purpose
of project snapshots and how they relate to documentation maintenance.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 13:04:11 -08:00
Alex Gorevski
bec1dc7b8c
Merge pull request #994 from zeroclaw-labs/algore/merge_fix
fix: resolve merge conflict in pull request template
2026-02-19 12:55:23 -08:00
Alex Gorevski
d22adb21e6 fix: resolve merge conflict in pull request template
Remove merge conflict markers in .github/pull_request_template.md,
keeping the spaced module label format (<module>: <component>)
from the chore/labeler-spacing-trusted-tier branch.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 12:54:42 -08:00
Alex Gorevski
835d51d7e1
Merge pull request #971 from zeroclaw-labs/docs/social-telegram-cn-ru-channels
docs(readme): add Telegram CN/RU channels to media matrix
2026-02-19 12:51:04 -08:00
Alex Gorevski
229f826656
Merge pull request #969 from zeroclaw-labs/docs/homebrew-install-readme
docs(readme): add Homebrew install instructions
2026-02-19 12:50:41 -08:00
Alex Gorevski
141d483aa4
Merge pull request #987 from ecschoye/fix/openrouter-embedding-provider
fix(memory): add openrouter as recognized embedding provider
2026-02-19 12:47:25 -08:00
Edvard
832facf5ef fix(memory): add openrouter as recognized embedding provider
The embedding provider factory only recognized "openai" and "custom:*",
causing "openrouter" to silently fall through to NoopEmbedding. This
made vector/semantic search completely non-functional — memory recall
fell back to BM25 keyword-only matching, with 70% of the hybrid score
always returning zero.

Route "openrouter" through OpenAiEmbedding with the OpenRouter API base
URL (https://openrouter.ai/api/v1), which is OpenAI-compatible.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 15:10:25 -05:00
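The routing change amounts to one new arm in the factory's provider match: "openrouter" now maps to the OpenAI-compatible embedding client with the OpenRouter base URL instead of falling through to the no-op backend. Enum and function names below are illustrative (the commit notes the real factory also handles `custom:*`, omitted here).

```rust
#[derive(Debug, PartialEq)]
enum EmbeddingBackend {
    OpenAiCompatible { base_url: &'static str },
    Noop,
}

fn resolve_embedding_backend(provider: &str) -> EmbeddingBackend {
    match provider {
        "openai" => EmbeddingBackend::OpenAiCompatible {
            base_url: "https://api.openai.com/v1",
        },
        // The fix: OpenRouter is OpenAI-compatible, so reuse the same client.
        "openrouter" => EmbeddingBackend::OpenAiCompatible {
            base_url: "https://openrouter.ai/api/v1",
        },
        // Previously "openrouter" landed here, silently disabling vector search.
        _ => EmbeddingBackend::Noop,
    }
}

fn main() {
    assert_eq!(
        resolve_embedding_backend("openrouter"),
        EmbeddingBackend::OpenAiCompatible {
            base_url: "https://openrouter.ai/api/v1"
        }
    );
}
```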
Alex Gorevski
007e9fa7ea
Merge pull request #984 from zeroclaw-labs/fix/improve-config-error-messages
fix(errors): improve config error messages with section paths and remediation hints
2026-02-19 11:56:45 -08:00
Alex Gorevski
9ab2b7be61
Merge pull request #983 from zeroclaw-labs/fix/env-example-missing-vars
docs(env): add missing environment variables to .env.example
2026-02-19 11:55:36 -08:00
Alex Gorevski
b6f99c31d1
Merge pull request #982 from zeroclaw-labs/fix/cli-help-text-improvements
docs(cli): add detailed help text and examples to complex subcommands
2026-02-19 11:54:38 -08:00
Alex Gorevski
f308353ab2
Merge pull request #981 from zeroclaw-labs/fix/config-validation-on-load
fix(config): add startup validation to catch invalid config values early
2026-02-19 11:52:57 -08:00
Alex Gorevski
63f3c5fe6d
Merge pull request #980 from zeroclaw-labs/fix/config-reference-missing-sections
docs(config): add missing config sections to config-reference.md
2026-02-19 11:51:31 -08:00
Alex Gorevski
b84f0e1956
Merge pull request #979 from zeroclaw-labs/fix/cli-argument-range-validation
fix(cli): add range validation for temperature argument
2026-02-19 11:50:08 -08:00
Alex Gorevski
39a09f007b fix(cli): add range validation for temperature argument
Add a custom value_parser for the --temperature CLI argument to enforce
the documented 0.0-2.0 range at parse time. Previously, the comment
stated the valid range but clap did not reject out-of-range values,
allowing invalid temperatures to propagate to provider API calls.

- Add parse_temperature() validator that rejects values outside 0.0..=2.0
- Wire it into the Agent subcommand's temperature arg via value_parser

Addresses API surface audit §2.3 (CLI argument range validation).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 11:45:12 -08:00
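A parse-time validator of the kind this commit wires in can be sketched as below. clap accepts any `fn(&str) -> Result<T, E>` as a `value_parser`; the function name and error wording here are assumptions, not the exact code.

```rust
// Sketch of a clap-compatible value parser enforcing the documented
// 0.0..=2.0 temperature range at argument-parse time.
fn parse_temperature(s: &str) -> Result<f64, String> {
    let v: f64 = s
        .parse()
        .map_err(|_| format!("`{s}` is not a valid number"))?;
    if (0.0..=2.0).contains(&v) {
        Ok(v)
    } else {
        Err(format!("temperature must be between 0.0 and 2.0, got {v}"))
    }
}
```

It would be attached to the argument via `.value_parser(parse_temperature)`, so out-of-range values are rejected before they can reach a provider API call.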
Alex Gorevski
cc07cb66c3 fix(errors): improve config error messages with section paths and remediation hints
Improve vague error messages in channel initialization and tool setup
to include specific config key paths and remediation steps, matching
the quality standard set by proxy validation errors.

Changes:
- telegram.rs: Include [channels.telegram] section path and required
  fields (bot_token, allowed_users) in missing-config error; add
  onboard hint; specify channels.telegram.allowed_users in pairing
  message; improve parse error context
- whatsapp.rs: Specify channels.whatsapp.allowed_numbers key path
  in unauthorized-number warning
- linq.rs: Specify channels.linq.allowed_senders key path in
  unauthorized-sender warning; add onboard hint
- web_search_tool.rs: Include tools.web_search.provider config path
  and valid values in unknown-provider error

Addresses API surface audit §8.2 (config context in error messages).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 11:44:04 -08:00
Alex Gorevski
9f1a306962 docs(cli): add detailed help text and examples to complex subcommands
Add long_about attributes with usage examples to the following commands:

src/main.rs (binary CLI):
- Agent: interactive/single-message modes, provider/peripheral options
- Gateway: port/host binding with examples
- Daemon: full runtime explanation with service install reference
- Cron: cron expression format, timezone handling, all scheduling modes
- Channel: supported types, JSON config format, bind-telegram
- Hardware: discover, introspect, info subcommands
- Peripheral: add, flash, board types
- Config: schema export

src/lib.rs (library enums):
- CronCommands::Add: cron syntax and timezone examples
- CronCommands::AddAt: RFC 3339 timestamp format
- CronCommands::AddEvery: interval in milliseconds
- CronCommands::Once: human-readable duration syntax
- CronCommands::Update: partial field update
- ChannelCommands::Add: JSON config and supported types
- ChannelCommands::BindTelegram: username/numeric ID format
- HardwareCommands::Discover, Introspect, Info: device paths and chip names
- PeripheralCommands::Add: board types and transport paths
- PeripheralCommands::Flash: serial port options

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 11:42:31 -08:00
Alex Gorevski
99cf2fdfee fix(config): add startup validation to catch invalid config values early
Add Config::validate() called from load_or_init() after env overrides
are applied. This catches obviously invalid configuration values at
startup instead of allowing them to silently cause runtime failures.

Validated fields:
- gateway.host: must not be empty
- autonomy.max_actions_per_hour: must be > 0
- scheduler.max_concurrent: must be > 0
- scheduler.max_tasks: must be > 0
- model_routes[*]: hint, provider, model must not be empty
- embedding_routes[*]: hint, provider, model must not be empty
- proxy: delegates to existing ProxyConfig::validate()

Previously, ProxyConfig::validate() was only called during
apply_env_overrides() and only warned/disabled on failure. The new
Config::validate() runs it as a hard error after all overrides are
resolved, ensuring proxy misconfiguration is surfaced early.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 11:37:30 -08:00
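The shape of such a startup check can be sketched as below. The struct and field names are reduced stand-ins for the real config types; the point is a single `validate()` that runs after all overrides resolve and returns a hard error.

```rust
// Illustrative sketch of startup config validation; field and type names
// are assumptions, not the actual zeroclaw structs.
struct GatewayConfig { host: String }
struct SchedulerConfig { max_concurrent: usize, max_tasks: usize }
struct Config { gateway: GatewayConfig, scheduler: SchedulerConfig }

impl Config {
    fn validate(&self) -> Result<(), String> {
        if self.gateway.host.trim().is_empty() {
            return Err("gateway.host must not be empty".into());
        }
        if self.scheduler.max_concurrent == 0 {
            return Err("scheduler.max_concurrent must be > 0".into());
        }
        if self.scheduler.max_tasks == 0 {
            return Err("scheduler.max_tasks must be > 0".into());
        }
        Ok(())
    }
}
```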
Alex Gorevski
753e90e0e7 docs(config): add missing config sections to config-reference.md
Add documentation for config schema sections that were undocumented:

- [cost] — daily/monthly spending limits and cost tracking
- [identity] — AIEOS / OpenClaw identity format
- [hardware] — hardware wizard config (STM32, serial, probe)
- [peripherals] — peripheral board configurations (STM32, RPi GPIO)
- [browser] — browser automation backend config
- [browser.computer_use] — computer-use sidecar endpoint config
- [http_request] — HTTP request tool config
- [agents.<name>] — delegate sub-agent configurations
- [query_classification] — automatic model hint routing

Also expanded existing sections:
- [agent] — added compact_context, max_history_messages, parallel_tools, tool_dispatcher
- [[model_routes]] — added field reference table
- [[embedding_routes]] — added field reference table

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 11:34:48 -08:00
Alex Gorevski
7feb57ad53 docs(env): add missing environment variables to .env.example
Add env vars from apply_env_overrides() that were absent from .env.example:

- ZEROCLAW_REASONING_ENABLED / REASONING_ENABLED (reasoning mode)
- ZEROCLAW_STORAGE_PROVIDER (storage backend override)
- ZEROCLAW_STORAGE_DB_URL (remote storage connection URL)
- ZEROCLAW_STORAGE_CONNECT_TIMEOUT_SECS (storage connect timeout)
- ZEROCLAW_PROXY_ENABLED (proxy toggle)
- ZEROCLAW_HTTP_PROXY (HTTP proxy URL)
- ZEROCLAW_HTTPS_PROXY (HTTPS proxy URL)
- ZEROCLAW_ALL_PROXY (SOCKS/universal proxy URL)
- ZEROCLAW_NO_PROXY (proxy bypass list)
- ZEROCLAW_PROXY_SCOPE (proxy scope: environment|zeroclaw|services)
- ZEROCLAW_PROXY_SERVICES (service selector for scoped proxy)

Resolves audit finding §6.3.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 11:31:43 -08:00
Alex Gorevski
77609777ab
Merge pull request #951 from zeroclaw-labs/fix/per-client-pairing-lockout
fix(security): change pairing lockout to per-client accounting
2026-02-19 11:26:46 -08:00
Chummy
f7076183b9 docs(readme): add Telegram CN and RU channels to media matrix 2026-02-20 02:19:05 +08:00
Chummy
3733856093 Fix skill instruction/tool injection in system prompts 2026-02-20 02:16:41 +08:00
Nikolay Vyahhi
315985199b docs(readme): add Homebrew install instructions 2026-02-19 13:03:32 -05:00
Chummy
f2ffd653de fix(channel): preserve trailing user turn in normalization 2026-02-20 02:01:42 +08:00
Chummy
c5834b1077 fix(channel): normalize telegram history for MiniMax 2026-02-20 02:01:42 +08:00
Chummy
7173045f1c docs(readme): sync social badges to translated READMEs 2026-02-20 01:56:41 +08:00
Chummy
132a6b70e0 docs(readme): add X, Xiaohongshu, and Telegram media badges 2026-02-20 01:56:41 +08:00
Chummy
13dce49a5e docs(readme): add official Reddit badge and channel link 2026-02-20 01:56:41 +08:00
Chummy
4531c342f5 fix(onboard): remove fragile numeric channel dispatch
Use enum-backed channel menu dispatch to prevent duplicated match-arm indices and unreachable-pattern warnings (issue #913).

Also switch OpenAI native tool spec parsing to owned serde structs so tool-schema validation compiles.
2026-02-20 01:56:41 +08:00
Chummy
ef82c7dbcd fix(channels): interrupt in-flight telegram requests on newer sender messages 2026-02-20 01:54:07 +08:00
Chummy
d9a94fc763 fix(skills): escape inlined skill XML content 2026-02-20 01:28:49 +08:00
Edvard
8a4da141d6 fix(skills): inject skill prompts and tools into agent system prompt
Skill prompts and tool definitions from SKILL.toml were parsed and stored
correctly but never included in the agent's system prompt. Both prompt-building
paths (channels/mod.rs and agent/prompt.rs) only emitted skill metadata (name,
description, location), telling the LLM to "read" the SKILL.toml on demand.
This caused the agent to attempt manual file reads that often failed, leaving
skills effectively ignored.

Now both paths inline <instructions> and <tools> blocks inside each <skill>
XML element, so the agent receives full skill context without extra tool calls.

Closes #877

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 01:28:49 +08:00
Chummy
14fb3fbcae fix(composio): resolve connected account refs after OAuth 2026-02-20 01:28:19 +08:00
Chummy
d714d3984e fix(memory): stop autosaving assistant summaries and filter legacy entries 2026-02-20 01:14:08 +08:00
Chummy
6d745e9cb3 fix(openai): deserialize native tool specs with owned fields 2026-02-20 00:07:28 +08:00
Chummy
4c249c579f fix(composio): repair v3 execute path and enable alias 2026-02-20 00:07:28 +08:00
argenis de la rosa
a03ddc3ace fix: gate nusb/hardware discovery to Linux/macOS/Windows only
Android (Termux) reports target_os="android" which is not supported
by nusb::list_devices(). This caused E0425 and E0282 compile errors
when building on Termux.

Changes:
- Cargo.toml: move nusb to a target-gated dependency block so it is
  only compiled on linux/macos/windows
- src/hardware/discover.rs: add #![cfg(...)] file-level gate matching
  the nusb platform support matrix
- src/hardware/mod.rs: gate discover/introspect module declarations,
  discover_hardware() call, handle_command() dispatch, and all helper
  fns on the same platform set; add a clear user-facing message on
  unsupported platforms
- src/security/pairing.rs: replace deprecated rand::thread_rng() with
  rand::rng() to keep clippy -D warnings clean

Fixes #880
2026-02-20 00:02:01 +08:00
Alex Gorevski
56af0d169e fix(security): change pairing lockout to per-client accounting
Replace global failed-attempt counter with per-client HashMap keyed by
client identity (IP address for gateway, chat_id for Telegram).  This
prevents a single attacker from locking out all legitimate clients.

Bounded state: entries are evicted after lockout expiry, and the map is
capped at 1024 tracked clients.

Closes #603

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 07:33:11 -08:00
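The per-client accounting this commit describes can be modeled as below. Thresholds, eviction policy, and names are simplified assumptions; the real implementation may differ in detail, but the structure (HashMap keyed by client identity, expiry-based eviction, capped size) matches the commit message.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Simplified sketch of per-client failed-attempt accounting; constants
// here are illustrative, not the production values.
const MAX_FAILURES: u32 = 5;
const LOCKOUT: Duration = Duration::from_secs(300);
const MAX_TRACKED_CLIENTS: usize = 1024;

struct PairingGuard {
    // Keyed by client identity: IP for gateway, chat_id for Telegram.
    failures: HashMap<String, (u32, Instant)>,
}

impl PairingGuard {
    fn new() -> Self {
        Self { failures: HashMap::new() }
    }

    fn is_locked_out(&mut self, client: &str) -> bool {
        match self.failures.get(client) {
            Some((count, since)) if *count >= MAX_FAILURES => {
                if since.elapsed() >= LOCKOUT {
                    // Lockout expired: evict the entry (bounded state).
                    self.failures.remove(client);
                    false
                } else {
                    true
                }
            }
            _ => false,
        }
    }

    fn record_failure(&mut self, client: &str) {
        // Cap the map so an attacker cannot grow tracked state unboundedly.
        if self.failures.len() >= MAX_TRACKED_CLIENTS
            && !self.failures.contains_key(client)
        {
            return;
        }
        let entry = self
            .failures
            .entry(client.to_string())
            .or_insert((0, Instant::now()));
        entry.0 += 1;
        entry.1 = Instant::now();
    }
}
```

One attacker hammering pairing from a single IP now locks out only that IP, not every legitimate client.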
Alex Gorevski
ba500a606e
Merge pull request #948 from zeroclaw-labs/fix/wizard-channel-default-index
fix(onboard): correct channel selector default to 'Done' item
2026-02-19 07:21:10 -08:00
Alex Gorevski
8f8641d9fb fix(onboard): correct channel selector default to 'Done' item
The hardcoded .default(11) became stale when Lark/Feishu was
added at index 11, shifting 'Done — finish setup' to index 12.
The wizard now pre-selects the wrong channel instead of 'Done'.

Use options.len() - 1 so the default always tracks the last
item regardless of how many channels exist.

Fixes #913

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 07:19:20 -08:00
Alex Gorevski
3a19d6cd98
Merge pull request #872 from agorevski/perf/eliminate-unnecessary-heap-allocations
perf: eliminate unnecessary heap allocations across agent loop, memory and channels
2026-02-19 07:11:55 -08:00
Alex Gorevski
a4b27d2afe perf: eliminate unnecessary heap allocations across agent loop, memory, and channels
- Replace clone()+clear() with std::mem::take() in chunker (items 1, 6)
- Add Vec::with_capacity() hints in chunker split functions (item 2)
- Replace collect::<Vec<_>>().join() with direct iteration in IRC and
  email channels (item 3)
- Share heading strings via Rc<str> instead of cloning per chunk (item 5)
- Use borrowed references in provider tool spec types to avoid cloning
  name/description/parameters per tool per request (item 7)

Closes #712

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-19 07:06:27 -08:00
Alex Gorevski
dce7280812
Merge pull request #865 from agorevski/feat/systematic-test-coverage-852
test: add systematic test coverage for 7 bug pattern groups (#852)
2026-02-19 07:02:20 -08:00
reidliu41
88e7e5480a chore: ignore macOS AppleDouble files (._*) 2026-02-19 22:59:21 +08:00
Alex Gorevski
fedfd6ae01
Merge pull request #847 from agorevski/algore/cicd-descript-release-matrix
perf(ci): reduce GitHub Actions costs ~60-65% across all workflows
2026-02-19 06:54:40 -08:00
Argenis
9af91effc6
legal: add dual MIT+Apache 2.0 license, trademark policy, and CLA (#941)
- LICENSE: add trademark notice and dual-license reference to MIT file
- LICENSE-APACHE: add full Apache 2.0 license text with ZeroClaw
  trademark clause in section 6
- TRADEMARK.md: define permitted/prohibited uses of the ZeroClaw name,
  list known unauthorized forks (openagen/zeroclaw), and document
  contributor trademark protections
- CLA.md: add Contributor License Agreement granting rights under both
  MIT and Apache 2.0, with explicit patent grant and attribution
  guarantees; contributors retain copyright ownership
- NOTICE: add official repository notice, dual-license summary, and
  contributor protection statement
- README.md: add impersonation warning section with official repo link,
  replace single MIT badge with dual-license table, add trademark and
  contributor protection summary, link CLA.md from Contributing section
2026-02-19 09:20:03 -05:00
Chummy
7b4fe96c8a fix(provider): align qwen oauth alias with qwen base-url mapping 2026-02-19 21:46:48 +08:00
Chummy
05404c6e7a perf(build): gate Matrix channel for faster iteration 2026-02-19 21:29:53 +08:00
Chummy
87dcda638c fix: resolve post-rebase config and ollama test regressions 2026-02-19 21:25:21 +08:00
Chummy
ce6ba36f4e test: account for ellipsis when compacting channel history 2026-02-19 21:25:21 +08:00
Chummy
3d068c21be fix: correct Lark/Feishu channel selection index in wizard 2026-02-19 21:25:21 +08:00
Chummy
dcd0bf641d feat: add multimodal image marker support with Ollama vision 2026-02-19 21:25:21 +08:00
Chummy
63aacb09ff fix(provider): preserve full history in responses fallback 2026-02-19 21:16:55 +08:00
Chummy
48b51e7152 test(config): make tokio::test schema cases async 2026-02-19 21:05:19 +08:00
Chummy
a5d7911923 feat(runtime): add reasoning toggle for ollama 2026-02-19 21:05:19 +08:00
Chummy
8f13fee4a6 test: stabilize qwen oauth env tests and gateway fixtures 2026-02-19 20:54:20 +08:00
Chummy
bca58acdcb feat(provider): add qwen-code oauth credential support 2026-02-19 20:54:20 +08:00
Chummy
e9c280324f test(config): make schema export test async 2026-02-19 20:49:53 +08:00
Chummy
c57f3f51a0 fix(config): derive JsonSchema for embedding routes 2026-02-19 20:49:53 +08:00
Chummy
572aa77c2a feat(memory): add embedding hint routes and upgrade guidance 2026-02-19 20:49:53 +08:00
T. Budiman
2b8547b386 feat(gateway): enrich webhook and WhatsApp with workspace system prompt
Add workspace context (IDENTITY.md, AGENTS.md, etc.) to gateway webhook
and WhatsApp message handlers by using chat_with_system() with a
build_system_prompt()-generated system prompt instead of simple_chat().

This aligns gateway behavior with other channels (Telegram, Discord, etc.)
and the agent loop, which all pass system prompts via structured
ChatMessage::system() or chat_with_system().

Changes:
- handle_webhook: build system prompt and use chat_with_system()
- handle_whatsapp_message: build system prompt and use chat_with_system()

Risk: Low - uses existing build_system_prompt() function, no new dependencies
Rollback: Revert commit removes system prompt enrichment
2026-02-19 20:30:02 +08:00
Chummy
2016382f42 fix(channels): compact sender history and filter oversized memory context 2026-02-19 20:05:35 +08:00
Chummy
2c07fb1792 fix: fail fast on context-window overflow and reset channel history 2026-02-19 19:38:28 +08:00
Chummy
aa176ef881 docs(readme): add impersonation warning for openagen fork and domain 2026-02-19 19:35:21 +08:00
Chummy
b611609c30 ci(docker): publish multi-arch latest and harden release tagging path 2026-02-19 19:32:18 +08:00
Chummy
772bb15ed9 fix(tests): stabilize issue #868 model refresh regression 2026-02-19 19:15:08 +08:00
Aleksandr Prilipko
2124b1dbbd test(e2e): add multi-turn history fidelity and memory enrichment tests
Add comprehensive e2e test coverage for chat_with_history and RAG
enrichment pipeline:

- RecordingProvider mock that captures all messages sent to the provider
- StaticMemoryLoader mock that simulates RAG context injection
- e2e_multi_turn_history_fidelity: verifies growing history across 3 turns
- e2e_memory_enrichment_injects_context: verifies RAG context prepended
- e2e_multi_turn_with_memory_enrichment: combined multi-turn + enrichment
- e2e_empty_memory_context_passthrough: verifies no corruption on empty RAG
- e2e_live_openai_codex_multi_turn (#[ignore]): real API call verifying
  the model recalls facts from prior messages via chat_with_history

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 19:04:02 +08:00
Aleksandr Prilipko
5dd11e6b0f fix(provider): use output_text content type for assistant messages in Codex history
The OpenAI Responses API requires assistant messages to use content type
"output_text" while user messages use "input_text". The prior implementation
used "input_text" for both roles, causing 400 errors on multi-turn history.

Extract build_responses_input() helper for testability and add 3 unit tests
covering role→content-type mapping, default instructions, and unknown roles.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 19:04:02 +08:00
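The role-to-content-type mapping this fix establishes is small enough to sketch directly; the function name is illustrative. Assistant turns must be sent as `output_text`, while user, system, and unknown roles go out as `input_text`.

```rust
// Sketch of the Responses API role -> content-type mapping described above.
fn responses_content_type(role: &str) -> &'static str {
    match role {
        "assistant" => "output_text",
        // user, system, and unknown roles are all model input
        _ => "input_text",
    }
}
```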
Aleksandr Prilipko
1b57be7223 fix(provider): implement chat_with_history for OpenAI Codex and Gemini
Both providers only implemented chat_with_system, so the default
chat_with_history trait method was discarding all conversation history
except the last user message. This caused the Telegram bot to lose
context between messages.

Changes:
- OpenAiCodexProvider: extract send_responses_request helper, add
  chat_with_history that maps full ChatMessage history to ResponsesInput
- GeminiProvider: extract send_generate_content helper, add
  chat_with_history that maps ChatMessage history to Gemini Content
  (with assistant→model role mapping)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 19:04:02 +08:00
Chummy
6eec888ff0 docs(config): document autonomy policy and quote-aware shell parsing 2026-02-19 19:03:20 +08:00
Chummy
67466254f0 fix(security): parse shell separators only when unquoted 2026-02-19 19:03:20 +08:00
Chummy
a0098de28c fix(bedrock): normalize aws-bedrock alias and harden docs/tests 2026-02-19 19:01:45 +08:00
KevinZhao
0e4e0d590d feat(provider): add dedicated AWS Bedrock Converse API provider
Replace the non-functional OpenAI-compatible stub with a purpose-built
Bedrock provider that implements AWS SigV4 signing from first principles
using hmac/sha2/hex crates — no AWS SDK dependency.

Key capabilities:
- SigV4 authentication (AKSK + optional session token)
- Converse API with native tool calling support
- Prompt caching via cachePoint heuristics
- Proper URI encoding for model IDs containing colons
- Resilient response parsing with unknown block type fallback

Also updates:
- Factory wiring and credential resolution bypass for AKSK auth
- Onboard wizard with Bedrock-specific model selection and guidance
- Provider reference docs with auth, region, and model ID details

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 19:01:45 +08:00
Chummy
9f94ad6db4 fix(config): log resolved config path source at startup 2026-02-19 18:58:41 +08:00
Chummy
e83e017062 fix(channels): preserve slack thread root ids 2026-02-19 18:52:30 +08:00
Daniel Willitzer
9afe4f28e7 feat(channels): add threading support to message channels
Add optional thread_ts field to ChannelMessage and SendMessage for
platform-specific threading (e.g. Slack threads, Discord threads).

- ChannelMessage.thread_ts captures incoming thread context
- SendMessage.thread_ts propagates thread context to replies
- SendMessage::in_thread() builder for fluent API
- Slack: send with thread_ts, capture ts from incoming messages
- All reply paths in runtime preserve thread context via in_thread()
- All other channels initialize thread_ts: None (forward-compatible)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 18:52:30 +08:00
Chummy
adc998429e test(channel): harden Lark WS heartbeat activity handling 2026-02-19 18:43:49 +08:00
wonder_land
3108ffe3e7 fix(channel): update last_recv on WS Ping/Pong frames in Lark channel
Feishu WebSocket server sends native WS Ping frames as keep-alive probes.
ZeroClaw correctly replied with Pong but did not update last_recv, so the
heartbeat watchdog (WS_HEARTBEAT_TIMEOUT = 300s) triggered a forced
reconnect every 5 minutes even when the connection was healthy.

Two fixes:
- WsMsg::Ping: update last_recv before sending Pong
- WsMsg::Pong: handle explicitly and update last_recv (was silently
  swallowed by the wildcard arm)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 18:43:49 +08:00
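The watchdog fix can be modeled minimally as below; `WsMsg` is a stand-in for the real websocket frame type, and the Pong reply is elided. The essential change is that Ping and Pong frames refresh `last_recv` instead of being ignored by a wildcard arm.

```rust
use std::time::Instant;

// Minimal model of the heartbeat fix: any frame from the server proves
// the connection is alive, so refresh last_recv for Ping/Pong too.
enum WsMsg {
    Ping,
    Pong,
    Text(String),
}

fn handle_frame(msg: &WsMsg, last_recv: &mut Instant) {
    match msg {
        WsMsg::Ping => {
            *last_recv = Instant::now();
            // ...then reply with a Pong frame
        }
        // Previously swallowed by a wildcard arm without updating last_recv.
        WsMsg::Pong => {
            *last_recv = Instant::now();
        }
        WsMsg::Text(_) => {
            *last_recv = Instant::now();
            // ...dispatch the payload
        }
    }
}
```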
Chummy
1bf5582c83 docs(provider): clarify credential resolution for fallback chains 2026-02-19 18:43:45 +08:00
Chummy
ba018a38ef chore(provider): normalize fallback test comments to ASCII punctuation 2026-02-19 18:43:45 +08:00
Chummy
435c33d408 fix(provider): preserve fallback runtime options when resolving credentials 2026-02-19 18:43:45 +08:00
Vernon Stinebaker
bb22bdc8fb fix(provider): resolve fallback provider credentials independently
Fallback providers in create_resilient_provider_with_options() were
created via create_provider_with_options() which passed the primary
provider's api_key as credential_override.  This caused
resolve_provider_credential() to short-circuit on the override and
never check the fallback provider's own env var (e.g. DEEPSEEK_API_KEY
for a deepseek fallback), resulting in auth failures (401) when the
primary and fallback use different API services.

Switch to create_provider_with_url(fallback, None, None) so each
fallback resolves its own credential via provider-specific env vars.
This also enables custom: URL prefixes (e.g.
custom:http://host.docker.internal:1234/v1) to work as fallback
entries, which was previously impossible through the options path.

Add three focused tests covering independent credential resolution,
custom URL fallbacks, and mixed fallback chains.
2026-02-19 18:43:45 +08:00
Chummy
f9e1ffe634 style: format schema provider override logic 2026-02-19 18:04:55 +08:00
Chummy
916c0c823b fix: sync gateway pairing persistence and proxy null clears 2026-02-19 18:04:55 +08:00
Jayson Reis
f1ca73d3d2 chore: Remove more blocking io calls 2026-02-19 18:04:55 +08:00
Chummy
1aec9ad9c0 fix(rebase): resolve duplicate tests and gateway AppState fields 2026-02-19 18:03:09 +08:00
Chummy
268a1dee09 style: apply rustfmt after rebase 2026-02-19 18:03:09 +08:00
Chummy
b1ebd4b579 fix(whatsapp): complete wa-rs channel behavior and storage correctness 2026-02-19 18:03:09 +08:00
mmacedoeu
c2a1eb1088 feat(channels): implement WhatsApp Web channel with wa-rs integration
- Add wa-rs dependencies with custom rusqlite storage backend
- Implement functional WhatsApp Web channel using wa-rs Bot
- Integrate TokioWebSocketTransportFactory and UreqHttpClient
- Add message handling via Bot event loop with proper shutdown
- Create WhatsApp storage trait implementations for wa-rs
- Add WhatsApp config schema and onboarding support
- Implement Meta webhook verification for WhatsApp Cloud API
- Add webhook signature verification for security
- Generate unique message keys for WhatsApp conversations
- Remove unused Node.js whatsapp-web-bridge stub

Supersedes: baileys-based bridge approach in favor of native Rust wa-rs
2026-02-19 18:03:09 +08:00
Chummy
9381e4451a fix(config): preserve explicit custom provider against legacy PROVIDER override 2026-02-19 17:54:25 +08:00
Chummy
d6dca4b890 fix(provider): align native tool system-flattening and add regressions 2026-02-19 17:44:07 +08:00
YubinghanBai
48eb1d1f30 fix(agent): inject full datetime into system prompt and allow date command
Three related agent UX issues found during MiniMax channel testing:

1. DateTimeSection injected only timezone, not the actual date/time.
   Models have no reliable way to know the current date from training
   data alone, causing wrong or hallucinated dates in responses.
   Fix: include full timestamp (YYYY-MM-DD HH:MM:SS TZ) in the prompt.

2. The `date` shell command was absent from the security policy
   allowed_commands default list. When a model tried to call
   shell("date") to get the current time, it received a policy
   rejection and told the user it was "blocked by security policy".
   Fix: add "date" to the default allowed_commands list. The command
   is read-only, side-effect-free, and carries no security risk.

3. (Context) The datetime prompt fix makes the date command fallback
   largely unnecessary, but the allowlist addition ensures the tool
   works correctly if models choose to call it anyway.

Non-goals:
- Not changing the autonomy model or risk classification
- Not adding new config keys

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 17:44:07 +08:00
Chummy
c9a0893fc8 fix(bootstrap): support --model in onboard passthrough 2026-02-19 17:36:20 +08:00
cbigger
3c60b6bc2d feat(onboard): add optional --model flag to quick setup and channels-only guard 2026-02-19 17:36:20 +08:00
Chummy
ff254b4bb3 fix(provider): harden think-tag fallback and add edge-case tests 2026-02-19 16:54:52 +08:00
YubinghanBai
db7b24b319 fix(provider): strip <think> tags and merge system messages for MiniMax
MiniMax API rejects role: system in the messages array with error
2013 (invalid message role: system). In channel mode, the history
builder prepends a system message and optionally appends a second
one for delivery instructions, causing 400 errors on every channel
turn.

Additionally, MiniMax reasoning models embed chain-of-thought in
the content field as <think>...</think> blocks rather than using
the separate reasoning_content field, causing raw thinking output
to leak into user-visible responses.

Changes:
- Add merge_system_into_user flag to OpenAiCompatibleProvider;
  when set, all system messages are concatenated and prepended to
  the first user message before sending to the API
- Add new_merge_system_into_user() constructor used by MiniMax
- Add strip_think_tags() helper that removes <think>...</think>
  blocks from response content before returning to the caller
- Apply strip_think_tags in effective_content() and
  effective_content_optional() so all non-streaming paths are covered
- Update MiniMax factory registration to use new_merge_system_into_user
- Fix pre-existing rustfmt violation on apply_auth_header call

All other providers continue to use the default path unchanged.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 16:54:52 +08:00
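A dependency-free sketch of the tag-stripping helper described above; the real implementation may handle additional edge cases (streaming chunks, nesting), but the core removal of `<think>...</think>` blocks looks like this:

```rust
// Remove <think>...</think> blocks from model output before returning it
// to the caller; an unclosed tag drops the remainder of the string.
fn strip_think_tags(content: &str) -> String {
    let mut out = String::with_capacity(content.len());
    let mut rest = content;
    while let Some(start) = rest.find("<think>") {
        out.push_str(&rest[..start]);
        match rest[start..].find("</think>") {
            Some(end) => rest = &rest[start + end + "</think>".len()..],
            None => {
                // Unclosed tag: treat everything after it as hidden reasoning.
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out.trim().to_string()
}
```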
Chummy
d33eadea75 docs(config): document schema command and add schema test 2026-02-19 16:41:21 +08:00
s04
282fbe0e95 style: fix cargo fmt formatting in config schema handler 2026-02-19 16:41:21 +08:00
s04
996f66b6a7 feat: add zeroclaw config schema for JSON Schema export
Add a `config schema` subcommand that dumps the full configuration
schema as JSON Schema (draft 2020-12) to stdout. This enables
downstream consumers (like PankoAgent) to programmatically validate
configs, generate forms, and stay in sync with zeroclaw's evolving
config surface without hand-maintaining copies of the schema.

- Add schemars 1.2 dependency and derive JsonSchema on all config
  structs/enums (schema.rs, policy.rs, email_channel.rs)
- Add `Config` subcommand group with `Schema` sub-command
- Output is valid JSON Schema with $defs for all 56 config types
2026-02-19 16:41:21 +08:00
Jayson Reis
d44dc5a048 chore: Add nix files for easy on-boarding on the project 2026-02-19 16:29:32 +08:00
Chummy
1461b00ad1 fix(provider): fallback to responses on chat transport errors 2026-02-19 15:42:38 +08:00
Devin AI
44fa7f3d3d fix(agent): include workspace files when AIEOS identity is configured
Remove early return in IdentitySection::build() that caused AGENTS.md,
SOUL.md, and other workspace files to be silently skipped when AIEOS
identity loaded successfully. Both AIEOS identity and workspace files
now coexist in the system prompt.

Closes zeroclaw-labs/zeroclaw#856

Co-Authored-By: Kristofer Mondlane <kmondlane@gmail.com>
2026-02-19 15:24:58 +08:00
bhagwan
c405cdf19a fix(channel/signal): route UUID senders as direct recipients
Privacy-enabled Signal users have no sourceNumber, so sender()
falls back to their UUID from the source field.  Previously
parse_recipient_target() treated non-E.164 strings without the
group: prefix as group IDs, causing signal-cli to reject the
UUID as an invalid base64 group ID.

Add is_uuid() helper using the already-imported uuid crate and
recognise valid UUIDs as Direct targets alongside E.164 numbers.
2026-02-19 15:19:41 +08:00
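The commit uses the already-imported uuid crate for parsing; the dependency-free sketch below checks the same canonical 8-4-4-4-12 hyphenated hex layout to illustrate the routing decision. `is_direct_target` is a hypothetical helper name, not the actual function.

```rust
// Check the canonical hyphenated UUID layout (8-4-4-4-12 hex groups).
fn is_uuid(s: &str) -> bool {
    let parts: Vec<&str> = s.split('-').collect();
    let lens = [8, 4, 4, 4, 12];
    parts.len() == 5
        && parts
            .iter()
            .zip(lens.iter())
            .all(|(p, &l)| p.len() == l && p.chars().all(|c| c.is_ascii_hexdigit()))
}

// A sender that is not E.164 and has no group: prefix may still be a
// privacy-enabled Signal user identified only by UUID; route those as
// direct recipients rather than group IDs.
fn is_direct_target(sender: &str) -> bool {
    sender.starts_with('+') || is_uuid(sender)
}
```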
Edvard
8b4607a1ef feat(cron): add cron update CLI subcommand for in-place job updates
Add Update variant to CronCommands in both main.rs and lib.rs, with
handler in cron/mod.rs that constructs a CronJobPatch and calls
update_job(). Includes security policy check for command changes.

Fixes from review feedback:
- --tz alone now correctly updates timezone (fetches existing schedule)
- --expression alone preserves existing timezone instead of clearing it
- All-None patch (no flags) now returns an error
- Output uses consistent emoji prefix

Tests exercise handle_command directly to cover schedule construction.

Closes #809

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 15:11:37 +08:00
Chummy
0910f2a710 fix(config): ignore future channel fields in channel destructuring 2026-02-19 15:11:18 +08:00
Chummy
78e0594e5f fix(openai): align chat_with_tools with http client and strict tool parsing 2026-02-19 15:11:18 +08:00
Lucien Loiseau
f76c1226f1 fix(providers): implement chat_with_tools for OpenAiProvider
The OpenAiProvider overrode chat() with native tool support but never
overrode chat_with_tools(), which is the method called by
run_tool_call_loop in channel mode (IRC/Discord/etc). The trait default
for chat_with_tools() silently drops the tools parameter, sending plain
ChatRequest with no tools — causing the model to never use native tool
calls in channel mode.

Add chat_with_tools() override that deserializes tool specs, uses
convert_messages() for proper tool_call_id handling, and sends
NativeChatRequest with tools and tool_choice.

Also add Deserialize derive to NativeToolSpec and NativeToolFunctionSpec
to support deserialization from OpenAI-format JSON.
2026-02-19 15:11:18 +08:00
Chummy
d8409b0878 fix(channels): include mattermost in launch/list checks 2026-02-19 14:53:58 +08:00
Inu-Dial
af2510879e fix(daemon): add missing items and turn to let binding 2026-02-19 14:53:58 +08:00
Chummy
275d3e7791 style: apply rustfmt to async fs updates 2026-02-19 14:52:29 +08:00
Jayson Reis
b9af601943 chore: Remove blocking read strings 2026-02-19 14:52:29 +08:00
Chummy
bc0be9a3c1 fix(linq): accept prefixed and uppercase webhook signatures 2026-02-19 14:49:52 +08:00
George McCain
361e750576 feat(channels): add Linq channel for iMessage/RCS/SMS support
The existing iMessage channel relies on AppleScript and only works on macOS.
Linq provides a REST API for iMessage, RCS, and SMS — this gives ZeroClaw
native iMessage support on any platform via webhooks.

Implements LinqChannel following the same patterns as WhatsAppChannel:
- Channel trait impl (send, listen, health_check, typing indicators)
- Webhook handler with HMAC-SHA256 signature verification
- Sender allowlist filtering
- Onboarding wizard step with connection testing
- 18 unit tests covering parsing, auth, and signature verification

Resolves #656 — the prior issue was closed without a merged PR, so this
is the actual implementation.
2026-02-19 14:49:52 +08:00
Chummy
e23edde44b docs(readme): add multilingual announcement board and oauth warning 2026-02-19 14:39:27 +08:00
Chummy
cf476a81c1 fix(provider): preserve native Ollama tool history structure 2026-02-19 14:32:43 +08:00
reidliu41
cd59dc65c4 fix(provider): enable native tool calling for OllamaProvider 2026-02-19 14:32:43 +08:00
Chummy
d548caa5f3 fix(channel): clamp configurable timeout to minimum 30s 2026-02-19 14:19:49 +08:00
ZeroClaw Contributor
41a6ed30dd feat(channel): make message timeout configurable via channels_config.message_timeout_secs
Add configurable timeout for processing channel messages (LLM + tools).
Default: 300s (optimized for on-device LLMs like Ollama).
Can be overridden in config.toml:

[channels_config]
message_timeout_secs = 600
2026-02-19 14:19:49 +08:00
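Combined with the follow-up commit that clamps the configurable timeout to a 30s minimum, the resolution logic is roughly this (a minimal sketch — the constant and function names are illustrative, not taken from the ZeroClaw source):

```rust
use std::time::Duration;

const MIN_TIMEOUT_SECS: u64 = 30; // floor from the clamp fix
const DEFAULT_TIMEOUT_SECS: u64 = 300; // default tuned for on-device LLMs

/// Resolve the effective message timeout: fall back to the default when
/// `message_timeout_secs` is unset, and raise anything below the floor
/// up to the 30s minimum so a typo in config.toml can't break channels.
fn effective_timeout(configured: Option<u64>) -> Duration {
    let secs = configured
        .unwrap_or(DEFAULT_TIMEOUT_SECS)
        .max(MIN_TIMEOUT_SECS);
    Duration::from_secs(secs)
}

fn main() {
    assert_eq!(effective_timeout(None), Duration::from_secs(300));
    assert_eq!(effective_timeout(Some(600)), Duration::from_secs(600));
    assert_eq!(effective_timeout(Some(5)), Duration::from_secs(30)); // clamped
    println!("ok");
}
```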
Alex Gorevski
3abadc4574 remove cost optimization analysis doc 2026-02-18 21:30:09 -08:00
Alex Gorevski
00c0995213 fix(ci): restore broken YAML structure in 3 workflows, revert aggressive STALE_HOURS
- pr-auto-response.yml: restore permissions, steps, and checkout in
  contributor-tier-issues job (broken by runner swap)
- pr-check-stale.yml: restore steps block and step name
- pr-intake-checks.yml: restore steps block, checkout, and timeout
- pr-check-status.yml: revert STALE_HOURS from 4 to 48 (not a cost
  optimization; 4h is too aggressive), switch to ubuntu-latest per
  PR description

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-18 21:26:14 -08:00
wonder_land
4ecaf6070c fix(tools): remove non-string enum from pushover priority for Gemini compat
The pushover tool priority parameter schema used integer enum values
[-2, -1, 0, 1, 2]. OpenAI-compatible APIs accept this, but the Gemini
API (and Gemini-relay proxies) strictly require all enum values to be
strings, rejecting the request with 400 Bad Request.

This causes every agent turn to fail with a non_retryable error when
using Gemini models, regardless of user message content, because tool
schemas are included in every request.

Fix: remove the enum constraint, keeping integer type and description
documenting the valid range. This is valid for both OpenAI and Gemini
providers and requires no changes to execute() which already uses
as_i64() with range validation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 13:24:23 +08:00
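The shape of that schema change looks roughly like the following (a sketch of the parameter schema only — the exact description text in the repo may differ). Before, with integer enum values that Gemini rejects:

```json
{
  "type": "integer",
  "enum": [-2, -1, 0, 1, 2],
  "description": "Message priority"
}
```

After, keeping the integer type but documenting the range in prose instead of an enum:

```json
{
  "type": "integer",
  "description": "Message priority from -2 (lowest) to 2 (emergency)"
}
```

Range enforcement then lives entirely in `execute()`'s existing `as_i64()` validation, which both OpenAI-compatible and Gemini providers accept.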
Alex Gorevski
8a2d7fe0a6 Merge branch 'algore/cicd-descript-release-matrix' of https://github.com/agorevski/zeroclaw into algore/cicd-descript-release-matrix 2026-02-18 21:23:42 -08:00
Alex Gorevski
a17c35679e add params to actions data 2026-02-18 21:23:31 -08:00
Alex Gorevski
825f42071c
Merge branch 'main' into algore/cicd-descript-release-matrix 2026-02-18 21:15:51 -08:00
Alex Gorevski
44725da08c perf(ci): reduce GitHub Actions costs ~60-65% across all workflows
Analysis of Feb 17 data showed 400+ workflow runs/day consuming ~398 billable minutes (~200 hours/month projected). Implemented targeted optimizations:

High-impact changes:

- sec-audit.yml: add path filters (Cargo.toml, src/**, crates/**, deny.toml); skip docs-only PRs

- test-benchmarks.yml: move from every-push-to-main to weekly schedule; retention 30d -> 7d

- pub-docker-img.yml: tighten PR smoke build path filters to Docker-specific files only

- sec-codeql.yml: reduce from twice-daily (14 runs/week) to weekly

Medium-impact changes:

- ci-run.yml: merge lint + lint-strict-delta into single job; drop --release from smoke build

- feature-matrix.yml: remove push trigger (weekly-only); remove redundant cargo test step

- dependabot.yml: monthly instead of weekly; reduce PR limits from 11 to 5/month; group all deps

Runner cost savings:

- Switch 6 lightweight API-only workflows to ubuntu-latest (PR Labeler, Intake, Auto Responder, Check Stale, Check Status, Sync Contributors)

- pr-check-status.yml: reduce from every 12h to daily

New files:

- docs/ci-cost-optimization.md: comprehensive analysis and revised architecture documentation

- scripts/ci/fetch_actions_data.py: reusable GitHub Actions cost analysis script

Estimated impact: daily billable minutes ~400 -> ~120-150 (60-65% reduction), monthly hours ~200 -> ~60-75, Dependabot PRs ~44/month -> ~5 (89% reduction)
2026-02-18 21:14:47 -08:00
Alex Gorevski
52dc9fd9e9
Merge pull request #883 from agorevski/fix/cleartext-logging-sensitive-data
fix(security): prevent cleartext logging of sensitive data
2026-02-18 21:11:31 -08:00
Alex Gorevski
bbbcd06cca
Merge pull request #882 from agorevski/fix/hardcoded-crypto-test-values-v2
fix(security): replace hard-coded crypto test values with runtime-generated secrets
2026-02-18 21:11:23 -08:00
Alex Gorevski
5f9d5a019d
Merge pull request #881 from agorevski/fix/cleartext-transmission-https-enforcement
fix(security): enforce HTTPS for sensitive data transmission
2026-02-18 21:11:18 -08:00
Alex Gorevski
4a9fc9b6cc fix(security): prevent cleartext logging of sensitive data
Address CodeQL rust/cleartext-logging alerts by breaking data-flow taint
chains from sensitive variables (api_key, credential, session_id, user_id)
to log/print sinks. Changes include:

- Replace tainted profile IDs in println! with untainted local variables
- Add redact() helper for safe logging of sensitive values
- Redact account identifiers in auth status output
- Rename session_id locals in memory backends to break name-based taint
- Rename user_id/user_id_hint in channels to break name-based taint
- Custom Debug impl for ComputerUseConfig to redact api_key field
- Break taint chain in provider credential factory via string reconstruction
- Remove client IP from gateway rate-limit log messages
- Break taint on auth token extraction and wizard credential flow
- Rename composio account ref variable to break name-based taint

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-18 20:12:45 -08:00
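The `redact()` helper mentioned above is the key pattern: log a short correlatable prefix, mask the rest. This is a hedged sketch under assumed truncation policy — the helper's name appears in the commit, but its exact signature and prefix length are illustrative:

```rust
/// Redact a sensitive value for logging: keep a short prefix so operators
/// can correlate log entries, mask everything else. Very short values are
/// masked entirely so the prefix cannot reconstruct them.
fn redact(value: &str) -> String {
    const VISIBLE: usize = 4;
    if value.len() <= VISIBLE {
        return "****".to_string();
    }
    let prefix: String = value.chars().take(VISIBLE).collect();
    format!("{prefix}****")
}

fn main() {
    assert_eq!(redact("sk-or-v1-abcdef123456"), "sk-o****");
    assert_eq!(redact("abc"), "****");
    println!("ok");
}
```

Logging `redact(api_key)` instead of `api_key` also breaks the data-flow taint chain that CodeQL tracks from the sensitive variable to the log sink.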
Alex Gorevski
9a784954f6 fix(security): replace hard-coded crypto test values with runtime-generated secrets
Replace hard-coded string literals used as cryptographic keys/secrets in
gateway webhook and WhatsApp signature verification tests with runtime-
generated random values. This resolves CodeQL rust/hard-coded-cryptographic-value
alerts while maintaining identical test coverage.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-18 20:03:38 -08:00
Alex Gorevski
925a352454 fix(security): enforce HTTPS for sensitive data transmission
Add URL scheme validation before HTTP requests that transmit sensitive
data (account IDs, phone numbers, user IDs). All endpoints already use
HTTPS URLs, but this explicit check satisfies CodeQL rust/cleartext-
transmission analysis and prevents future regressions if URLs are
changed.

Affected files: composio.rs, whatsapp.rs, qq.rs

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-18 20:03:02 -08:00
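The explicit scheme check is simple but worth showing, since it is what converts an implicit "all our URLs happen to be HTTPS" assumption into an enforced invariant. The function name here is illustrative, not the repo's actual API:

```rust
/// Reject non-HTTPS endpoints before sending sensitive payloads.
/// A plain scheme check like this satisfies cleartext-transmission
/// analyzers and guards against a future http:// regression.
fn ensure_https(url: &str) -> Result<(), String> {
    if url.starts_with("https://") {
        Ok(())
    } else {
        Err(format!(
            "refusing to send sensitive data over non-HTTPS URL: {url}"
        ))
    }
}

fn main() {
    assert!(ensure_https("https://api.example.com/v1/messages").is_ok());
    assert!(ensure_https("http://api.example.com/v1/messages").is_err());
    println!("ok");
}
```

Calling `ensure_https(&endpoint)?` at the top of each request helper is enough; no behavior changes for the existing HTTPS endpoints.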
Alex Gorevski
7f03ab77a9 test: add systematic test coverage for 7 bug pattern groups (#852)
Add ~105 test cases across 7 test groups identified in issue #852:

TG1 - Provider resolution (27 tests): Factory resolution, alias mapping,
      custom URLs, auth styles, credential wiring
TG2 - Config persistence (18 tests): Config defaults, TOML roundtrip,
      agent/memory config, workspace dirs
TG3 - Channel routing (14 tests): ChannelMessage identity contracts,
      SendMessage construction, Channel trait send/listen roundtrip
TG4 - Agent loop robustness (12 integration + 14 inline tests): Malformed
      tool calls, failing tools, iteration limits, empty responses, unicode
TG5 - Memory restart (14 tests): Dedup on same key, restart persistence,
      session scoping, recall, concurrent stores, categories
TG6 - Channel message splitting (8+8 inline tests): Code blocks at boundary,
      long words, emoji, CJK chars, whitespace edge cases
TG7 - Provider schema (21 tests): ChatMessage/ToolCall/ChatResponse
      serialization, tool_call_id preservation, auth style variants

Also fixes a bug in split_message_for_telegram() where byte-based indexing
could panic on multi-byte characters (emoji, CJK). Now uses char_indices()
consistent with the Discord split implementation.

Closes #852

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-18 15:28:34 -08:00
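The `split_message_for_telegram()` bug class above is worth illustrating: slicing a `&str` at an arbitrary byte offset (`&s[..limit]`) panics when the offset lands inside a multi-byte character, while `char_indices()` only ever yields valid boundaries. A simplified sketch of the boundary-safe approach (the real splitter also handles code blocks and whitespace, which this omits):

```rust
/// Split a message into chunks of at most `limit` bytes without cutting
/// through a multi-byte character (emoji, CJK). A single character wider
/// than `limit` is emitted as its own oversized chunk rather than split.
fn split_at_char_boundaries(s: &str, limit: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut start = 0; // byte offset where the current chunk begins
    let mut last_ok = 0; // last valid boundary inside the current chunk
    for (idx, ch) in s.char_indices() {
        let end = idx + ch.len_utf8();
        if end - start > limit && last_ok > start {
            chunks.push(s[start..last_ok].to_string());
            start = last_ok;
        }
        last_ok = end;
    }
    if start < s.len() {
        chunks.push(s[start..].to_string());
    }
    chunks
}

fn main() {
    // Each thumbs-up emoji is 4 bytes; byte-offset slicing at 4 would be
    // fine here, but at limit 3 it would panic. char_indices never does.
    let chunks = split_at_char_boundaries("👍👍👍", 4);
    assert_eq!(chunks.len(), 3);
    assert!(chunks.iter().all(|c| c == "👍"));
    println!("ok");
}
```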
156 changed files with 23511 additions and 2415 deletions


@ -21,6 +21,10 @@ PROVIDER=openrouter
# Workspace directory override # Workspace directory override
# ZEROCLAW_WORKSPACE=/path/to/workspace # ZEROCLAW_WORKSPACE=/path/to/workspace
# Reasoning mode (enables extended thinking for supported models)
# ZEROCLAW_REASONING_ENABLED=false
# REASONING_ENABLED=false
# ── Provider-Specific API Keys ──────────────────────────────── # ── Provider-Specific API Keys ────────────────────────────────
# OpenRouter # OpenRouter
# OPENROUTER_API_KEY=sk-or-v1-... # OPENROUTER_API_KEY=sk-or-v1-...
@ -63,6 +67,22 @@ PROVIDER=openrouter
# ZEROCLAW_GATEWAY_HOST=127.0.0.1 # ZEROCLAW_GATEWAY_HOST=127.0.0.1
# ZEROCLAW_ALLOW_PUBLIC_BIND=false # ZEROCLAW_ALLOW_PUBLIC_BIND=false
# ── Storage ─────────────────────────────────────────────────
# Backend override for persistent storage (default: sqlite)
# ZEROCLAW_STORAGE_PROVIDER=sqlite
# ZEROCLAW_STORAGE_DB_URL=postgres://localhost/zeroclaw
# ZEROCLAW_STORAGE_CONNECT_TIMEOUT_SECS=5
# ── Proxy ──────────────────────────────────────────────────
# Forward provider/service traffic through an HTTP(S) proxy.
# ZEROCLAW_PROXY_ENABLED=false
# ZEROCLAW_HTTP_PROXY=http://proxy.example.com:8080
# ZEROCLAW_HTTPS_PROXY=http://proxy.example.com:8080
# ZEROCLAW_ALL_PROXY=socks5://proxy.example.com:1080
# ZEROCLAW_NO_PROXY=localhost,127.0.0.1
# ZEROCLAW_PROXY_SCOPE=zeroclaw # environment|zeroclaw|services
# ZEROCLAW_PROXY_SERVICES=openai,anthropic
# ── Optional Integrations ──────────────────────────────────── # ── Optional Integrations ────────────────────────────────────
# Pushover notifications (`pushover` tool) # Pushover notifications (`pushover` tool)
# PUSHOVER_TOKEN=your-pushover-app-token # PUSHOVER_TOKEN=your-pushover-app-token

1
.envrc Normal file

@ -0,0 +1 @@
use flake


@ -4,13 +4,13 @@ updates:
- package-ecosystem: cargo - package-ecosystem: cargo
directory: "/" directory: "/"
schedule: schedule:
interval: weekly interval: daily
target-branch: main target-branch: main
open-pull-requests-limit: 5 open-pull-requests-limit: 3
labels: labels:
- "dependencies" - "dependencies"
groups: groups:
rust-minor-patch: rust-all:
patterns: patterns:
- "*" - "*"
update-types: update-types:
@ -20,14 +20,14 @@ updates:
- package-ecosystem: github-actions - package-ecosystem: github-actions
directory: "/" directory: "/"
schedule: schedule:
interval: weekly interval: daily
target-branch: main target-branch: main
open-pull-requests-limit: 3 open-pull-requests-limit: 1
labels: labels:
- "ci" - "ci"
- "dependencies" - "dependencies"
groups: groups:
actions-minor-patch: actions-all:
patterns: patterns:
- "*" - "*"
update-types: update-types:
@ -37,14 +37,14 @@ updates:
- package-ecosystem: docker - package-ecosystem: docker
directory: "/" directory: "/"
schedule: schedule:
interval: weekly interval: daily
target-branch: main target-branch: main
open-pull-requests-limit: 3 open-pull-requests-limit: 1
labels: labels:
- "ci" - "ci"
- "dependencies" - "dependencies"
groups: groups:
docker-minor-patch: docker-all:
patterns: patterns:
- "*" - "*"
update-types: update-types:


@ -12,11 +12,7 @@ Describe this PR in 2-5 bullets:
- Risk label (`risk: low|medium|high`): - Risk label (`risk: low|medium|high`):
- Size label (`size: XS|S|M|L|XL`, auto-managed/read-only): - Size label (`size: XS|S|M|L|XL`, auto-managed/read-only):
- Scope labels (`core|agent|channel|config|cron|daemon|doctor|gateway|health|heartbeat|integration|memory|observability|onboard|provider|runtime|security|service|skillforge|skills|tool|tunnel|docs|dependencies|ci|tests|scripts|dev`, comma-separated): - Scope labels (`core|agent|channel|config|cron|daemon|doctor|gateway|health|heartbeat|integration|memory|observability|onboard|provider|runtime|security|service|skillforge|skills|tool|tunnel|docs|dependencies|ci|tests|scripts|dev`, comma-separated):
<<<<<<< chore/labeler-spacing-trusted-tier
- Module labels (`<module>: <component>`, for example `channel: telegram`, `provider: kimi`, `tool: shell`): - Module labels (`<module>: <component>`, for example `channel: telegram`, `provider: kimi`, `tool: shell`):
=======
- Module labels (`<module>:<component>`, for example `channel:telegram`, `provider:kimi`, `tool:shell`):
>>>>>>> main
- Contributor tier label (`trusted contributor|experienced contributor|principal contributor|distinguished contributor`, auto-managed/read-only; author merged PRs >=5/10/20/50): - Contributor tier label (`trusted contributor|experienced contributor|principal contributor|distinguished contributor`, auto-managed/read-only; author merged PRs >=5/10/20/50):
- If any auto-label is incorrect, note requested correction: - If any auto-label is incorrect, note requested correction:


@ -41,25 +41,7 @@ jobs:
run: ./scripts/ci/detect_change_scope.sh run: ./scripts/ci/detect_change_scope.sh
lint: lint:
name: Lint Gate (Format + Clippy) name: Lint Gate (Format + Clippy + Strict Delta)
needs: [changes]
if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full'))
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 20
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: rustfmt, clippy
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run rust quality gate
run: ./scripts/ci/rust_quality_gate.sh
lint-strict-delta:
name: Lint Gate (Strict Delta)
needs: [changes] needs: [changes]
if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full')) if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full'))
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: blacksmith-2vcpu-ubuntu-2404
@ -71,8 +53,10 @@ jobs:
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with: with:
toolchain: 1.92.0 toolchain: 1.92.0
components: clippy components: rustfmt, clippy
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3 - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run rust quality gate
run: ./scripts/ci/rust_quality_gate.sh
- name: Run strict lint delta gate - name: Run strict lint delta gate
env: env:
BASE_SHA: ${{ needs.changes.outputs.base_sha }} BASE_SHA: ${{ needs.changes.outputs.base_sha }}
@ -80,8 +64,8 @@ jobs:
test: test:
name: Test name: Test
needs: [changes, lint, lint-strict-delta] needs: [changes, lint]
if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full')) && needs.lint.result == 'success' && needs.lint-strict-delta.result == 'success' if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full')) && needs.lint.result == 'success'
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 30 timeout-minutes: 30
steps: steps:
@ -106,8 +90,8 @@ jobs:
with: with:
toolchain: 1.92.0 toolchain: 1.92.0
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3 - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Build release binary - name: Build binary (smoke check)
run: cargo build --release --locked --verbose run: cargo build --locked --verbose
docs-only: docs-only:
name: Docs-Only Fast Path name: Docs-Only Fast Path
@ -185,7 +169,7 @@ jobs:
lint-feedback: lint-feedback:
name: Lint Feedback name: Lint Feedback
if: github.event_name == 'pull_request' if: github.event_name == 'pull_request'
needs: [changes, lint, lint-strict-delta, docs-quality] needs: [changes, lint, docs-quality]
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: blacksmith-2vcpu-ubuntu-2404
permissions: permissions:
contents: read contents: read
@ -201,7 +185,7 @@ jobs:
RUST_CHANGED: ${{ needs.changes.outputs.rust_changed }} RUST_CHANGED: ${{ needs.changes.outputs.rust_changed }}
DOCS_CHANGED: ${{ needs.changes.outputs.docs_changed }} DOCS_CHANGED: ${{ needs.changes.outputs.docs_changed }}
LINT_RESULT: ${{ needs.lint.result }} LINT_RESULT: ${{ needs.lint.result }}
LINT_DELTA_RESULT: ${{ needs.lint-strict-delta.result }} LINT_DELTA_RESULT: ${{ needs.lint.result }}
DOCS_RESULT: ${{ needs.docs-quality.result }} DOCS_RESULT: ${{ needs.docs-quality.result }}
with: with:
script: | script: |
@ -231,7 +215,7 @@ jobs:
ci-required: ci-required:
name: CI Required Gate name: CI Required Gate
if: always() if: always()
needs: [changes, lint, lint-strict-delta, test, build, docs-only, non-rust, docs-quality, lint-feedback, workflow-owner-approval] needs: [changes, lint, test, build, docs-only, non-rust, docs-quality, lint-feedback, workflow-owner-approval]
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: blacksmith-2vcpu-ubuntu-2404
steps: steps:
- name: Enforce required status - name: Enforce required status
@ -276,7 +260,7 @@ jobs:
fi fi
lint_result="${{ needs.lint.result }}" lint_result="${{ needs.lint.result }}"
lint_strict_delta_result="${{ needs.lint-strict-delta.result }}" lint_strict_delta_result="${{ needs.lint.result }}"
test_result="${{ needs.test.result }}" test_result="${{ needs.test.result }}"
build_result="${{ needs.build.result }}" build_result="${{ needs.build.result }}"


@ -1,12 +1,6 @@
name: Feature Matrix name: Feature Matrix
on: on:
push:
branches: [main]
paths:
- "Cargo.toml"
- "Cargo.lock"
- "src/**"
schedule: schedule:
- cron: "30 4 * * 1" # Weekly Monday 4:30am UTC - cron: "30 4 * * 1" # Weekly Monday 4:30am UTC
workflow_dispatch: workflow_dispatch:
@ -61,6 +55,3 @@ jobs:
- name: Check feature combination - name: Check feature combination
run: cargo check --locked ${{ matrix.args }} run: cargo check --locked ${{ matrix.args }}
- name: Test feature combination
run: cargo test --locked ${{ matrix.args }}


@ -143,7 +143,7 @@ Workflow: `.github/workflows/pub-docker-img.yml`
- `latest` + SHA tag (`sha-<12 chars>`) for `main` - `latest` + SHA tag (`sha-<12 chars>`) for `main`
- semantic tag from pushed git tag (`vX.Y.Z`) + SHA tag for tag pushes - semantic tag from pushed git tag (`vX.Y.Z`) + SHA tag for tag pushes
- branch name + SHA tag for non-`main` manual dispatch refs - branch name + SHA tag for non-`main` manual dispatch refs
5. Multi-platform publish is used for tag pushes (`linux/amd64,linux/arm64`), while `main` publish stays `linux/amd64`. 5. Multi-platform publish is used for both `main` and tag pushes (`linux/amd64,linux/arm64`).
6. Typical runtime in recent sample: ~139.9s. 6. Typical runtime in recent sample: ~139.9s.
7. Result: pushed image tags under `ghcr.io/<owner>/<repo>`. 7. Result: pushed image tags under `ghcr.io/<owner>/<repo>`.


@ -15,7 +15,7 @@ jobs:
(github.event.action == 'opened' || github.event.action == 'reopened' || github.event.action == 'labeled' || github.event.action == 'unlabeled')) || (github.event.action == 'opened' || github.event.action == 'reopened' || github.event.action == 'labeled' || github.event.action == 'unlabeled')) ||
(github.event_name == 'pull_request_target' && (github.event_name == 'pull_request_target' &&
(github.event.action == 'labeled' || github.event.action == 'unlabeled')) (github.event.action == 'labeled' || github.event.action == 'unlabeled'))
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: ubuntu-latest
permissions: permissions:
contents: read contents: read
issues: write issues: write
@ -34,7 +34,7 @@ jobs:
await script({ github, context, core }); await script({ github, context, core });
first-interaction: first-interaction:
if: github.event.action == 'opened' if: github.event.action == 'opened'
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: ubuntu-latest
permissions: permissions:
issues: write issues: write
pull-requests: write pull-requests: write
@ -65,7 +65,7 @@ jobs:
labeled-routes: labeled-routes:
if: github.event.action == 'labeled' if: github.event.action == 'labeled'
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: ubuntu-latest
permissions: permissions:
contents: read contents: read
issues: write issues: write


@ -12,7 +12,7 @@ jobs:
permissions: permissions:
issues: write issues: write
pull-requests: write pull-requests: write
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: ubuntu-latest
steps: steps:
- name: Mark stale issues and pull requests - name: Mark stale issues and pull requests
uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0 uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0


@ -2,7 +2,7 @@ name: PR Check Status
on: on:
schedule: schedule:
- cron: "15 */12 * * *" - cron: "15 8 * * *" # Once daily at 8:15am UTC
workflow_dispatch: workflow_dispatch:
permissions: {} permissions: {}
@ -13,13 +13,13 @@ concurrency:
jobs: jobs:
nudge-stale-prs: nudge-stale-prs:
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: ubuntu-latest
permissions: permissions:
contents: read contents: read
pull-requests: write pull-requests: write
issues: write issues: write
env: env:
STALE_HOURS: "4" STALE_HOURS: "48"
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4 uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4


@ -16,7 +16,7 @@ permissions:
jobs: jobs:
intake: intake:
name: Intake Checks name: Intake Checks
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: ubuntu-latest
timeout-minutes: 10 timeout-minutes: 10
steps: steps:
- name: Checkout repository - name: Checkout repository


@ -25,8 +25,7 @@ permissions:
jobs: jobs:
label: label:
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: ubuntu-latest
timeout-minutes: 10
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4 uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4


@ -21,13 +21,8 @@ on:
paths: paths:
- "Dockerfile" - "Dockerfile"
- ".dockerignore" - ".dockerignore"
- "Cargo.toml" - "docker-compose.yml"
- "Cargo.lock"
- "rust-toolchain.toml" - "rust-toolchain.toml"
- "src/**"
- "crates/**"
- "benches/**"
- "firmware/**"
- "dev/config.template.toml" - "dev/config.template.toml"
- ".github/workflows/pub-docker-img.yml" - ".github/workflows/pub-docker-img.yml"
workflow_dispatch: workflow_dispatch:
@ -75,6 +70,8 @@ jobs:
tags: zeroclaw-pr-smoke:latest tags: zeroclaw-pr-smoke:latest
labels: ${{ steps.meta.outputs.labels || '' }} labels: ${{ steps.meta.outputs.labels || '' }}
platforms: linux/amd64 platforms: linux/amd64
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Verify image - name: Verify image
run: docker run --rm zeroclaw-pr-smoke:latest --version run: docker run --rm zeroclaw-pr-smoke:latest --version
@ -83,7 +80,7 @@ jobs:
name: Build and Push Docker Image name: Build and Push Docker Image
if: (github.event_name == 'workflow_dispatch' || (github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v')))) && github.repository == 'zeroclaw-labs/zeroclaw' if: (github.event_name == 'workflow_dispatch' || (github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v')))) && github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 25 timeout-minutes: 45
permissions: permissions:
contents: read contents: read
packages: write packages: write
@ -128,7 +125,9 @@ jobs:
context: . context: .
push: true push: true
tags: ${{ steps.meta.outputs.tags }} tags: ${{ steps.meta.outputs.tags }}
platforms: ${{ startsWith(github.ref, 'refs/tags/v') && 'linux/amd64,linux/arm64' || 'linux/amd64' }} platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Set GHCR package visibility to public - name: Set GHCR package visibility to public
shell: bash shell: bash


@ -27,15 +27,45 @@ jobs:
- os: ubuntu-latest - os: ubuntu-latest
target: x86_64-unknown-linux-gnu target: x86_64-unknown-linux-gnu
artifact: zeroclaw artifact: zeroclaw
- os: macos-latest archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: ubuntu-latest
target: armv7-unknown-linux-gnueabihf
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: gcc-arm-linux-gnueabihf
linker_env: CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER
linker: arm-linux-gnueabihf-gcc
- os: macos-15-intel
target: x86_64-apple-darwin target: x86_64-apple-darwin
artifact: zeroclaw artifact: zeroclaw
- os: macos-latest archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: macos-14
target: aarch64-apple-darwin target: aarch64-apple-darwin
artifact: zeroclaw artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: windows-latest - os: windows-latest
target: x86_64-pc-windows-msvc target: x86_64-pc-windows-msvc
artifact: zeroclaw.exe artifact: zeroclaw.exe
archive_ext: zip
cross_compiler: ""
linker_env: ""
linker: ""
steps: steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4 - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
@ -46,20 +76,41 @@ jobs:
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3 - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Install cross-compilation toolchain (Linux)
if: runner.os == 'Linux' && matrix.cross_compiler != ''
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Build release - name: Build release
run: cargo build --release --locked --target ${{ matrix.target }} env:
LINKER_ENV: ${{ matrix.linker_env }}
LINKER: ${{ matrix.linker }}
run: |
if [ -n "$LINKER_ENV" ] && [ -n "$LINKER" ]; then
echo "Using linker override: $LINKER_ENV=$LINKER"
export "$LINKER_ENV=$LINKER"
fi
cargo build --release --locked --target ${{ matrix.target }}
- name: Check binary size (Unix) - name: Check binary size (Unix)
if: runner.os != 'Windows' if: runner.os != 'Windows'
run: | run: |
SIZE=$(stat -f%z target/${{ matrix.target }}/release/${{ matrix.artifact }} 2>/dev/null || stat -c%s target/${{ matrix.target }}/release/${{ matrix.artifact }}) BIN="target/${{ matrix.target }}/release/${{ matrix.artifact }}"
if [ ! -f "$BIN" ]; then
echo "::error::Expected binary not found: $BIN"
exit 1
fi
SIZE=$(stat -f%z "$BIN" 2>/dev/null || stat -c%s "$BIN")
SIZE_MB=$((SIZE / 1024 / 1024)) SIZE_MB=$((SIZE / 1024 / 1024))
echo "Binary size: ${SIZE_MB}MB ($SIZE bytes)" echo "Binary size: ${SIZE_MB}MB ($SIZE bytes)"
echo "### Binary Size: ${{ matrix.target }}" >> "$GITHUB_STEP_SUMMARY" echo "### Binary Size: ${{ matrix.target }}" >> "$GITHUB_STEP_SUMMARY"
echo "- Size: ${SIZE_MB}MB ($SIZE bytes)" >> "$GITHUB_STEP_SUMMARY" echo "- Size: ${SIZE_MB}MB ($SIZE bytes)" >> "$GITHUB_STEP_SUMMARY"
if [ "$SIZE" -gt 15728640 ]; then if [ "$SIZE" -gt 41943040 ]; then
echo "::error::Binary exceeds 15MB hard limit (${SIZE_MB}MB)" echo "::error::Binary exceeds 40MB safeguard (${SIZE_MB}MB)"
exit 1 exit 1
elif [ "$SIZE" -gt 15728640 ]; then
echo "::warning::Binary exceeds 15MB advisory target (${SIZE_MB}MB)"
elif [ "$SIZE" -gt 5242880 ]; then elif [ "$SIZE" -gt 5242880 ]; then
echo "::warning::Binary exceeds 5MB target (${SIZE_MB}MB)" echo "::warning::Binary exceeds 5MB target (${SIZE_MB}MB)"
else else
@ -70,19 +121,19 @@ jobs:
if: runner.os != 'Windows' if: runner.os != 'Windows'
run: | run: |
cd target/${{ matrix.target }}/release cd target/${{ matrix.target }}/release
tar czf ../../../zeroclaw-${{ matrix.target }}.tar.gz ${{ matrix.artifact }} tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }} ${{ matrix.artifact }}
- name: Package (Windows) - name: Package (Windows)
if: runner.os == 'Windows' if: runner.os == 'Windows'
run: | run: |
cd target/${{ matrix.target }}/release cd target/${{ matrix.target }}/release
7z a ../../../zeroclaw-${{ matrix.target }}.zip ${{ matrix.artifact }} 7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }} ${{ matrix.artifact }}
- name: Upload artifact - name: Upload artifact
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6 uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with: with:
name: zeroclaw-${{ matrix.target }} name: zeroclaw-${{ matrix.target }}
path: zeroclaw-${{ matrix.target }}.* path: zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }}
retention-days: 7 retention-days: 7
publish: publish:
@ -94,7 +145,7 @@ jobs:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4 - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Download all artifacts - name: Download all artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4 uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with: with:
path: artifacts path: artifacts
@ -119,7 +170,7 @@ jobs:
cat SHA256SUMS cat SHA256SUMS
- name: Install cosign - name: Install cosign
uses: sigstore/cosign-installer@3454372f43399081ed03b604cb2d021dabca52bb # v3.8.2 uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
- name: Sign artifacts with cosign (keyless) - name: Sign artifacts with cosign (keyless)
run: | run: |


@ -3,8 +3,20 @@ name: Sec Audit
on: on:
push: push:
branches: [main] branches: [main]
paths:
- "Cargo.toml"
- "Cargo.lock"
- "src/**"
- "crates/**"
- "deny.toml"
pull_request: pull_request:
branches: [main] branches: [main]
paths:
- "Cargo.toml"
- "Cargo.lock"
- "src/**"
- "crates/**"
- "deny.toml"
schedule: schedule:
- cron: "0 6 * * 1" # Weekly on Monday 6am UTC - cron: "0 6 * * 1" # Weekly on Monday 6am UTC


@ -2,7 +2,7 @@ name: Sec CodeQL
on: on:
schedule: schedule:
- cron: "0 6,18 * * *" # Twice daily at 6am and 6pm UTC - cron: "0 6 * * 1" # Weekly Monday 6am UTC
workflow_dispatch: workflow_dispatch:
concurrency: concurrency:


@ -17,7 +17,7 @@ permissions:
jobs: jobs:
update-notice: update-notice:
name: Update NOTICE with new contributors name: Update NOTICE with new contributors
runs-on: blacksmith-2vcpu-ubuntu-2404 runs-on: ubuntu-latest
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4 uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4


@ -1,8 +1,8 @@
name: Test Benchmarks name: Test Benchmarks
on: on:
push: schedule:
branches: [main] - cron: "0 3 * * 1" # Weekly Monday 3am UTC
workflow_dispatch: workflow_dispatch:
concurrency: concurrency:
@ -39,7 +39,7 @@ jobs:
path: | path: |
target/criterion/ target/criterion/
benchmark_output.txt benchmark_output.txt
retention-days: 30 retention-days: 7
- name: Post benchmark summary on PR - name: Post benchmark summary on PR
if: github.event_name == 'pull_request' if: github.event_name == 'pull_request'

1
.gitignore vendored

@ -3,6 +3,7 @@ firmware/*/target
*.db *.db
*.db-journal *.db-journal
.DS_Store .DS_Store
._*
.wt-pr37/ .wt-pr37/
__pycache__/ __pycache__/
*.pyc *.pyc


@@ -26,6 +26,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - `enc:` prefix for encrypted secrets — Use `enc2:` (ChaCha20-Poly1305) instead.
   Legacy values are still decrypted for backward compatibility but should be migrated.
+### Fixed
+- **Onboarding channel menu dispatch** now uses an enum-backed selector instead of hard-coded
+  numeric match arms, preventing duplicated pattern arms and related `unreachable pattern`
+  compiler warnings in `src/onboard/wizard.rs`.
+- **OpenAI native tool spec parsing** now uses owned serializable/deserializable structs,
+  fixing a compile-time type mismatch when validating tool schemas before API calls.
 ## [0.1.0] - 2026-02-13
 ### Added
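The `enc:` → `enc2:` deprecation noted in the CHANGELOG implies a prefix-based dispatch when reading stored secrets. A minimal sketch of that routing (function name and return values are illustrative assumptions, not ZeroClaw's actual code):

```rust
// Hypothetical sketch: route a stored secret by prefix, per the deprecation
// note — `enc2:` values use ChaCha20-Poly1305; legacy `enc:` values still
// decrypt for backward compatibility but should be migrated.
fn secret_scheme(stored: &str) -> &'static str {
    if stored.starts_with("enc2:") {
        "chacha20-poly1305" // current scheme
    } else if stored.starts_with("enc:") {
        "legacy (migrate to enc2:)" // still accepted, deprecated
    } else {
        "plaintext" // no encryption prefix present
    }
}

fn main() {
    assert_eq!(secret_scheme("enc2:abcd"), "chacha20-poly1305");
    assert_eq!(secret_scheme("enc:abcd"), "legacy (migrate to enc2:)");
    assert_eq!(secret_scheme("hunter2"), "plaintext");
}
```

Checking `enc2:` before `enc:` matters, since `enc2:` also starts with `enc`.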

CLA.md (new file, 132 lines)

@@ -0,0 +1,132 @@
# ZeroClaw Contributor License Agreement (CLA)
**Version 1.0 — February 2026**
**ZeroClaw Labs**
---
## Purpose
This Contributor License Agreement ("CLA") clarifies the intellectual
property rights granted by contributors to ZeroClaw Labs. This agreement
protects both contributors and users of the ZeroClaw project.
By submitting a contribution (pull request, patch, issue with code, or any
other form of code submission) to the ZeroClaw repository, you agree to the
terms of this CLA.
---
## 1. Definitions
- **"Contribution"** means any original work of authorship, including any
modifications or additions to existing work, submitted to ZeroClaw Labs
for inclusion in the ZeroClaw project.
- **"You"** means the individual or legal entity submitting a Contribution.
- **"ZeroClaw Labs"** means the maintainers and organization responsible
for the ZeroClaw project at https://github.com/zeroclaw-labs/zeroclaw.
---
## 2. Grant of Copyright License
You grant ZeroClaw Labs and recipients of software distributed by ZeroClaw
Labs a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to:
- Reproduce, prepare derivative works of, publicly display, publicly
perform, sublicense, and distribute your Contributions and derivative
works under **both the MIT License and the Apache License 2.0**.
---
## 3. Grant of Patent License
You grant ZeroClaw Labs and recipients of software distributed by ZeroClaw
Labs a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable patent license to make, have made, use, offer to sell, sell,
import, and otherwise transfer your Contributions.
This patent license applies only to patent claims licensable by you that
are necessarily infringed by your Contribution alone or in combination with
the ZeroClaw project.
**This protects you:** if a third party files a patent claim against
ZeroClaw that covers your Contribution, your patent license to the project
is not revoked.
---
## 4. You Retain Your Rights
This CLA does **not** transfer ownership of your Contribution to ZeroClaw
Labs. You retain full copyright ownership of your Contribution. You are
free to use your Contribution in any other project under any license.
---
## 5. Original Work
You represent that:
1. Each Contribution is your original creation, or you have sufficient
rights to submit it under this CLA.
2. Your Contribution does not knowingly infringe any third-party patent,
copyright, trademark, or other intellectual property right.
3. If your employer has rights to intellectual property you create, you
have received permission to submit the Contribution, or your employer
has signed a corporate CLA with ZeroClaw Labs.
---
## 6. No Trademark Rights
This CLA does not grant you any rights to use the ZeroClaw name,
trademarks, service marks, or logos. See TRADEMARK.md for trademark policy.
---
## 7. Attribution
ZeroClaw Labs will maintain attribution to contributors in the repository
commit history and NOTICE file. Your contributions are permanently and
publicly recorded.
---
## 8. Dual-License Commitment
All Contributions accepted into the ZeroClaw project are licensed under
both:
- **MIT License** — permissive open-source use
- **Apache License 2.0** — patent protection and stronger IP guarantees
This dual-license model ensures maximum compatibility and protection for
the entire contributor community.
---
## 9. How to Agree
By opening a pull request or submitting a patch to the ZeroClaw repository,
you indicate your agreement to this CLA. No separate signature is required
for individual contributors.
For **corporate contributors** (submitting on behalf of a company or
organization), please open an issue titled "Corporate CLA — [Company Name]"
and a maintainer will follow up.
---
## 10. Questions
If you have questions about this CLA, open an issue at:
https://github.com/zeroclaw-labs/zeroclaw/issues
---
*This CLA is based on the Apache Individual Contributor License Agreement
v2.0, adapted for the ZeroClaw dual-license model.*

Cargo.lock (generated, 1083 lines changed)

File diff suppressed because it is too large.

Cargo.toml

@@ -26,7 +26,7 @@ tokio-util = { version = "0.7", default-features = false }
 reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls", "blocking", "multipart", "stream", "socks"] }
 # Matrix client + E2EE decryption
-matrix-sdk = { version = "0.16", default-features = false, features = ["e2e-encryption", "rustls-tls", "markdown"] }
+matrix-sdk = { version = "0.16", optional = true, default-features = false, features = ["e2e-encryption", "rustls-tls", "markdown"] }
 # Serialization
 serde = { version = "1.0", default-features = false, features = ["derive"] }
@@ -37,6 +37,9 @@ directories = "6.0"
 toml = "1.0"
 shellexpand = "3.1"
+# JSON Schema generation for config export
+schemars = "1.2"
 # Logging - minimal
 tracing = { version = "0.1", default-features = false }
 tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt", "ansi", "env-filter"] }
@@ -69,7 +72,10 @@ sha2 = "0.10"
 hex = "0.4"
 # CSPRNG for secure token generation
-rand = "0.9"
+rand = "0.10"
+# serde-big-array for wa-rs storage (large array serialization)
+serde-big-array = { version = "0.5", optional = true }
 # Fast mutexes that don't poison on panic
 parking_lot = "0.12"
@@ -97,8 +103,8 @@ console = "0.16"
 # Hardware discovery (device path globbing)
 glob = "0.3"
-# Discord WebSocket gateway
-tokio-tungstenite = { version = "0.24", features = ["rustls-tls-webpki-roots"] }
+# WebSocket client channels (Discord/Lark/DingTalk)
+tokio-tungstenite = { version = "0.28", features = ["rustls-tls-webpki-roots"] }
 futures-util = { version = "0.3", default-features = false, features = ["sink"] }
 futures = "0.3"
 regex = "1.10"
@@ -114,27 +120,42 @@ mail-parser = "0.11.2"
 async-imap = { version = "0.11", features = ["runtime-tokio"], default-features = false }
 # HTTP server (gateway) — replaces raw TCP for proper HTTP/1.1 compliance
-axum = { version = "0.8", default-features = false, features = ["http1", "json", "tokio", "query", "ws"] }
+axum = { version = "0.8", default-features = false, features = ["http1", "json", "tokio", "query", "ws", "macros"] }
 tower = { version = "0.5", default-features = false }
 tower-http = { version = "0.6", default-features = false, features = ["limit", "timeout"] }
 http-body-util = "0.1"
-# OpenTelemetry — OTLP trace + metrics export
+# OpenTelemetry — OTLP trace + metrics export.
+# Use the blocking HTTP exporter client to avoid Tokio-reactor panics in
+# OpenTelemetry background batch threads when ZeroClaw emits spans/metrics from
+# non-Tokio contexts.
 opentelemetry = { version = "0.31", default-features = false, features = ["trace", "metrics"] }
 opentelemetry_sdk = { version = "0.31", default-features = false, features = ["trace", "metrics"] }
-opentelemetry-otlp = { version = "0.31", default-features = false, features = ["trace", "metrics", "http-proto", "reqwest-client", "reqwest-rustls-webpki-roots"] }
+opentelemetry-otlp = { version = "0.31", default-features = false, features = ["trace", "metrics", "http-proto", "reqwest-blocking-client", "reqwest-rustls-webpki-roots"] }
-# USB device enumeration (hardware discovery)
-nusb = { version = "0.2", default-features = false, optional = true }
 # Serial port for peripheral communication (STM32, etc.)
 tokio-serial = { version = "5", default-features = false, optional = true }
+# USB device enumeration (hardware discovery) — only on platforms nusb supports
+# (Linux, macOS, Windows). Android/Termux uses target_os="android" and is excluded.
+[target.'cfg(any(target_os = "linux", target_os = "macos", target_os = "windows"))'.dependencies]
+nusb = { version = "0.2", default-features = false, optional = true }
 # probe-rs for STM32/Nucleo memory read (Phase B)
-probe-rs = { version = "0.30", optional = true }
+probe-rs = { version = "0.31", optional = true }
 # PDF extraction for datasheet RAG (optional, enable with --features rag-pdf)
 pdf-extract = { version = "0.10", optional = true }
+tokio-stream = { version = "0.1.18", features = ["full"] }
+# WhatsApp Web client (wa-rs) — optional, enable with --features whatsapp-web
+# Uses wa-rs for Bot and Client, wa-rs-core for storage traits, custom rusqlite backend avoids Diesel conflict.
+wa-rs = { version = "0.2", optional = true, default-features = false }
+wa-rs-core = { version = "0.2", optional = true, default-features = false }
+wa-rs-binary = { version = "0.2", optional = true, default-features = false }
+wa-rs-proto = { version = "0.2", optional = true, default-features = false }
+wa-rs-ureq-http = { version = "0.2", optional = true }
+wa-rs-tokio-transport = { version = "0.2", optional = true, default-features = false }
 # Raspberry Pi GPIO / Landlock (Linux only) — target-specific to avoid compile failure on macOS
 [target.'cfg(target_os = "linux")'.dependencies]
@@ -142,8 +163,9 @@ rppal = { version = "0.22", optional = true }
 landlock = { version = "0.4", optional = true }
 [features]
-default = ["hardware"]
+default = ["hardware", "channel-matrix"]
 hardware = ["nusb", "tokio-serial"]
+channel-matrix = ["dep:matrix-sdk"]
 peripheral-rpi = ["rppal"]
 # Browser backend feature alias used by cfg(feature = "browser-native")
 browser-native = ["dep:fantoccini"]
@@ -158,6 +180,9 @@ landlock = ["sandbox-landlock"]
 probe = ["dep:probe-rs"]
 # rag-pdf = PDF ingestion for datasheet RAG
 rag-pdf = ["dep:pdf-extract"]
+# whatsapp-web = Native WhatsApp Web client with custom rusqlite storage backend
+whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "serde-big-array"]
 [profile.release]
 opt-level = "z"  # Optimize for size
 lto = "thin"     # Lower memory use during release builds
@@ -181,7 +206,7 @@ panic = "abort"
 [dev-dependencies]
 tempfile = "3.14"
-criterion = { version = "0.5", features = ["async_tokio"] }
+criterion = { version = "0.8", features = ["async_tokio"] }
 [[bench]]
 name = "agent_benchmarks"

LICENSE (29 lines changed)

@@ -22,7 +22,34 @@ SOFTWARE.
 ================================================================================
+TRADEMARK NOTICE
+This license does not grant permission to use the trade names, trademarks,
+service marks, or product names of ZeroClaw Labs, including "ZeroClaw",
+"zeroclaw-labs", or associated logos, except as required for reasonable and
+customary use in describing the origin of the Software.
+Unauthorized use of the ZeroClaw name or branding to imply endorsement,
+affiliation, or origin is strictly prohibited. See TRADEMARK.md for details.
+================================================================================
+DUAL LICENSE NOTICE
+This software is available under a dual-license model:
+1. MIT License (this file) — for open-source, research, academic, and
+   personal use. See LICENSE (this file).
+2. Apache License 2.0 — for contributors and deployments requiring explicit
+   patent grants and stronger IP protection. See LICENSE-APACHE.
+You may choose either license for your use. Contributors submitting patches
+grant rights under both licenses. See CLA.md for the contributor agreement.
+================================================================================
 This product includes software developed by ZeroClaw Labs and contributors:
 https://github.com/zeroclaw-labs/zeroclaw/graphs/contributors
-See NOTICE file for full contributor attribution.
+See NOTICE for full contributor attribution.

LICENSE-APACHE (new file, 186 lines)

@@ -0,0 +1,186 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship made available under
the License, as indicated by a copyright notice that is included in
or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean, as defined in Section 5, any work of
authorship, including the original version of the Work and any
modifications or additions to that Work or Derivative Works of the
Work, that is intentionally submitted to the Licensor for inclusion
in the Work by the copyright owner or by an individual or Legal Entity
authorized to submit on behalf of the copyright owner. For the purposes
of this definition, "submitted" means any form of electronic, verbal,
or written communication sent to the Licensor or its representatives,
including but not limited to communication on electronic mailing lists,
source code control systems, and issue tracking systems that are managed
by, or on behalf of, the Licensor for the purpose of discussing and
improving the Work, but excluding communication that is conspicuously
marked or designated in writing by the copyright owner as "Not a
Contribution."
"Contributor" shall mean Licensor and any Legal Entity on behalf of
whom a Contribution has been received by the Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a cross-claim
or counterclaim in a lawsuit) alleging that the Work or any Contribution
incorporated within the Work constitutes direct or contributory patent
infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or Derivative
Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, You must include a readable copy of the
attribution notices contained within such NOTICE file, in
at least one of the following places: within a NOTICE text
file distributed as part of the Derivative Works; within
the Source form or documentation, if provided along with the
Derivative Works; or, within a display generated by the
Derivative Works, if and wherever such third-party notices
normally appear. The contents of the NOTICE file are for
informational purposes only and do not modify the License.
You may add Your own attribution notices within Derivative
Works that You distribute, alongside or as an addendum to
the NOTICE text from the Work, provided that such additional
attribution notices cannot be construed as modifying the License.
You may add Your own license statement for Your modifications and
may provide additional grant of rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the
Contribution, either on its own or as part of the Work.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
including "ZeroClaw", "zeroclaw-labs", or associated logos, except
as required for reasonable and customary use in describing the origin
of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or exemplary damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or all other
commercial damages or losses), even if such Contributor has been
advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may offer such
obligations only on Your own behalf and on Your sole responsibility,
not on behalf of any other Contributor, and only if You agree to
indemnify, defend, and hold each Contributor harmless for any
liability incurred by, or claims asserted against, such Contributor
by reason of your accepting any warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2025 ZeroClaw Labs
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

NOTICE (24 lines changed)

@@ -3,6 +3,26 @@ Copyright 2025 ZeroClaw Labs
 This product includes software developed at ZeroClaw Labs (https://github.com/zeroclaw-labs).
+Official Repository
+===================
+The only official ZeroClaw repository is:
+https://github.com/zeroclaw-labs/zeroclaw
+Any other repository claiming to be ZeroClaw is unauthorized.
+See TRADEMARK.md for the full trademark policy.
+License
+=======
+This software is available under a dual-license model:
+1. MIT License — see LICENSE
+2. Apache License 2.0 — see LICENSE-APACHE
+You may use either license. Contributors grant rights under both.
+See CLA.md for the contributor license agreement.
 Contributors
 ============
@@ -10,6 +30,10 @@ This NOTICE file is maintained by repository automation.
 For the latest contributor list, see the repository contributors page:
 https://github.com/zeroclaw-labs/zeroclaw/graphs/contributors
+All contributors retain copyright ownership of their contributions.
+Contributions are permanently attributed in the repository commit history.
+Patent rights are protected for all contributors under Apache License 2.0.
 Third-Party Dependencies
 ========================

README.ja.md

@@ -8,6 +8,15 @@
<strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong>
</p>

<p align="center">
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
<a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>

<p align="center">
🌐 言語: <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a>
</p>

@@ -33,7 +42,17 @@
>
> コマンド名、設定キー、API パス、Trait 名などの技術識別子は英語のまま維持しています。
>
-> 最終同期日: **2026-02-18**
+> 最終同期日: **2026-02-19**
## 📢 お知らせボード
重要なお知らせ(互換性破壊変更、セキュリティ告知、メンテナンス時間、リリース阻害事項など)をここに掲載します。
| 日付 (UTC) | レベル | お知らせ | 対応 |
|---|---|---|---|
| 2026-02-19 | _緊急_ | 私たちは `openagen/zeroclaw` および `zeroclaw.org` とは**一切関係ありません**。`zeroclaw.org` は現在 `openagen/zeroclaw` の fork を指しており、そのドメイン/リポジトリは当プロジェクトの公式サイト・公式プロジェクトを装っています。 | これらの情報源による案内、バイナリ、資金調達情報、公式発表は信頼しないでください。必ず本リポジトリと認証済み公式SNSのみを参照してください。 |
| 2026-02-19 | _重要_ | 公式サイトは**まだ公開しておらず**、なりすましの試みを確認しています。ZeroClaw 名義の投資・資金調達などの活動には参加しないでください。 | 情報は本リポジトリを最優先で確認し、[X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Telegram CN(@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn)、[Telegram RU(@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru) と [小紅書アカウント](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) で公式更新を確認してください。 |
| 2026-02-19 | _重要_ | Anthropic は 2026-02-19 に Authentication and Credential Use を更新しました。条文では、OAuth authentication(Free/Pro/Max)は Claude Code と Claude.ai 専用であり、Claude Free/Pro/Max で取得した OAuth トークンを他の製品・ツール・サービス(Agent SDK を含む)で使用することは許可されず、Consumer Terms of Service 違反に該当すると明記されています。 | 損失回避のため、当面は Claude Code OAuth 連携を試さないでください。原文: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use)。 |
## 概要

@@ -100,6 +119,12 @@ cd zeroclaw
## クイックスタート

### Homebrew(macOS/Linux)

```bash
brew install zeroclaw
```

```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
@@ -117,6 +142,106 @@ zeroclaw gateway
zeroclaw daemon
```
## Subscription Auth(OpenAI Codex / Claude Code)
ZeroClaw はサブスクリプションベースのネイティブ認証プロファイルをサポートしています(マルチアカウント対応、保存時暗号化)。
- 保存先: `~/.zeroclaw/auth-profiles.json`
- 暗号化キー: `~/.zeroclaw/.secret_key`
- Profile ID 形式: `<provider>:<profile_name>`(例: `openai-codex:work`)

OpenAI Codex OAuth(ChatGPT サブスクリプション):
```bash
# サーバー/ヘッドレス環境向け推奨
zeroclaw auth login --provider openai-codex --device-code
# ブラウザ/コールバックフロー(ペーストフォールバック付き)
zeroclaw auth login --provider openai-codex --profile default
zeroclaw auth paste-redirect --provider openai-codex --profile default
# 確認 / リフレッシュ / プロファイル切替
zeroclaw auth status
zeroclaw auth refresh --provider openai-codex --profile default
zeroclaw auth use --provider openai-codex --profile work
```
Claude Code / Anthropic setup-token:
```bash
# サブスクリプション/setup token の貼り付け(Authorization header モード)
zeroclaw auth paste-token --provider anthropic --profile default --auth-kind authorization
# エイリアスコマンド
zeroclaw auth setup-token --provider anthropic --profile default
```
Subscription auth で agent を実行:
```bash
zeroclaw agent --provider openai-codex -m "hello"
zeroclaw agent --provider openai-codex --auth-profile openai-codex:work -m "hello"
# Anthropic は API key と auth token の両方の環境変数をサポート:
# ANTHROPIC_AUTH_TOKEN, ANTHROPIC_OAUTH_TOKEN, ANTHROPIC_API_KEY
zeroclaw agent --provider anthropic -m "hello"
```
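The `<provider>:<profile_name>` profile ID format documented above splits on the first colon. A short sketch of that parse (the helper name is an illustration, not ZeroClaw's actual API):

```rust
// Hypothetical helper illustrating the documented auth profile ID format
// `<provider>:<profile_name>`, e.g. `openai-codex:work`. Not ZeroClaw code.
fn parse_profile_id(id: &str) -> Option<(&str, &str)> {
    // split_once splits on the first ':' only, so any later colons stay
    // inside the profile name.
    id.split_once(':')
}

fn main() {
    assert_eq!(parse_profile_id("openai-codex:work"), Some(("openai-codex", "work")));
    assert_eq!(parse_profile_id("anthropic:default"), Some(("anthropic", "default")));
    assert_eq!(parse_profile_id("no-colon"), None);
}
```

This matches the `--auth-profile openai-codex:work` usage shown in the examples above.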
## アーキテクチャ
すべてのサブシステムは **Trait** — 設定変更だけで実装を差し替え可能、コード変更不要。
<p align="center">
<img src="docs/architecture.svg" alt="ZeroClaw アーキテクチャ" width="900" />
</p>
| サブシステム | Trait | 内蔵実装 | 拡張方法 |
|-------------|-------|----------|----------|
| **AI モデル** | `Provider` | `zeroclaw providers` で確認(現在 28 個の組み込み + エイリアス、カスタムエンドポイント対応) | `custom:https://your-api.com`OpenAI 互換)または `anthropic-custom:https://your-api.com` |
| **チャネル** | `Channel` | CLI, Telegram, Discord, Slack, Mattermost, iMessage, Matrix, Signal, WhatsApp, Email, IRC, Lark, DingTalk, QQ, Webhook | 任意のメッセージ API |
| **メモリ** | `Memory` | SQLite ハイブリッド検索, PostgreSQL バックエンド, Lucid ブリッジ, Markdown ファイル, 明示的 `none` バックエンド, スナップショット/復元, オプション応答キャッシュ | 任意の永続化バックエンド |
| **ツール** | `Tool` | shell/file/memory, cron/schedule, git, pushover, browser, http_request, screenshot/image_info, composio (opt-in), delegate, ハードウェアツール | 任意の機能 |
| **オブザーバビリティ** | `Observer` | Noop, Log, Multi | Prometheus, OTel |
| **ランタイム** | `RuntimeAdapter` | Native, Docker(サンドボックス) | adapter 経由で追加可能;未対応の kind は即座にエラー |
| **セキュリティ** | `SecurityPolicy` | Gateway ペアリング, サンドボックス, allowlist, レート制限, ファイルシステムスコープ, 暗号化シークレット | — |
| **アイデンティティ** | `IdentityConfig` | OpenClaw (markdown), AIEOS v1.1 (JSON) | 任意の ID フォーマット |
| **トンネル** | `Tunnel` | None, Cloudflare, Tailscale, ngrok, Custom | 任意のトンネルバイナリ |
| **ハートビート** | Engine | HEARTBEAT.md 定期タスク | — |
| **スキル** | Loader | TOML マニフェスト + SKILL.md インストラクション | コミュニティスキルパック |
| **インテグレーション** | Registry | 9 カテゴリ、70 件以上の連携 | プラグインシステム |
### ランタイムサポート(現状)
- ✅ 現在サポート: `runtime.kind = "native"` または `runtime.kind = "docker"`
- 🚧 計画中(未実装): WASM / エッジランタイム
未対応の `runtime.kind` が設定された場合、ZeroClaw は native へのサイレントフォールバックではなく、明確なエラーで終了します。
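最小構成の例です(`runtime.kind` は本文どおりのキーですが、TOML のテーブル表記は想定です):

```toml
[runtime]
kind = "native"   # もしくは "docker"。それ以外の値は明確なエラーで終了します
```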
### メモリシステム(フルスタック検索エンジン)
すべて自社実装、外部依存ゼロ — Pinecone、Elasticsearch、LangChain 不要:
| レイヤー | 実装 |
|---------|------|
| **ベクトル DB** | Embeddings を SQLite に BLOB として保存、コサイン類似度検索 |
| **キーワード検索** | FTS5 仮想テーブル、BM25 スコアリング |
| **ハイブリッドマージ** | カスタム重み付きマージ関数(`vector.rs`) |
| **Embeddings** | `EmbeddingProvider` trait — OpenAI、カスタム URL、または noop |
| **チャンキング** | 行ベースの Markdown チャンカー(見出し構造保持) |
| **キャッシュ** | SQLite `embedding_cache` テーブル、LRU エビクション |
| **安全な再インデックス** | FTS5 再構築 + 欠落ベクトルの再埋め込みをアトミックに実行 |
Agent はツール経由でメモリの呼び出し・保存・管理を自動的に行います。
```toml
[memory]
backend = "sqlite" # "sqlite", "lucid", "postgres", "markdown", "none"
auto_save = true
embedding_provider = "none" # "none", "openai", "custom:https://..."
vector_weight = 0.7
keyword_weight = 0.3
```
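上記のハイブリッドマージ(`vector_weight` / `keyword_weight` による重み付き統合)の考え方を示す最小の Python スケッチです。実際の `vector.rs` の実装そのものではなく、各スコアが [0, 1] に正規化済みという想定に基づく説明用の例です:

```python
def hybrid_merge(vector_hits: dict, keyword_hits: dict,
                 vector_weight: float = 0.7, keyword_weight: float = 0.3):
    """2 つのランキング(doc_id -> 正規化スコア)を重み付きで統合する。

    vector_hits はコサイン類似度、keyword_hits は正規化済み BM25 スコアを想定。
    """
    merged: dict = {}
    for doc_id, score in vector_hits.items():
        merged[doc_id] = merged.get(doc_id, 0.0) + vector_weight * score
    for doc_id, score in keyword_hits.items():
        merged[doc_id] = merged.get(doc_id, 0.0) + keyword_weight * score
    # スコア降順に並べ替えて返す
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
```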
## セキュリティのデフォルト
- Gateway の既定バインド: `127.0.0.1:3000`
README.md
@@ -13,13 +13,19 @@
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License: MIT" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
<a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
Built by students and members of the Harvard, MIT, and Sundai.Club communities.
</p>
<p align="center">
🌐 <strong>Languages:</strong> <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a> · <a href="README.vi.md">Tiếng Việt</a>
</p>
<p align="center">
@@ -46,6 +52,16 @@ Built by students and members of the Harvard, MIT, and Sundai.Club communities.
<p align="center"><code>Trait-driven architecture · secure-by-default runtime · provider/channel/tool swappable · pluggable everything</code></p>
### 📢 Announcements
Use this board for important notices (breaking changes, security advisories, maintenance windows, and release blockers).
| Date (UTC) | Level | Notice | Action |
|---|---|---|---|
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository are impersonating our official website/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only this repository and our verified social accounts. |
| 2026-02-19 | _Important_ | We have **not** launched an official website yet, and we are seeing impersonation attempts. Do **not** join any investment or fundraising activity claiming the ZeroClaw name. | Use this repository as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Telegram CN (@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn), [Telegram RU (@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated the Authentication and Credential Use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is intended exclusively for Claude Code and Claude.ai; using OAuth tokens from Claude Free/Pro/Max in any other product, tool, or service (including Agent SDK) is not permitted and may violate the Consumer Terms of Service. | Please temporarily avoid Claude Code OAuth integrations to prevent potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
- 🏎️ **Lean Runtime by Default:** Common CLI and status workflows run in a few-megabyte memory envelope on release builds.
@@ -72,7 +88,7 @@ Local machine quick benchmark (macOS arm64, Feb 2026) normalized for 0.8GHz edge
| **Binary Size** | ~28MB (dist) | N/A (Scripts) | ~8MB | **3.4 MB** |
| **Cost** | Mac Mini $599 | Linux SBC ~$50 | Linux Board $10 | **Any hardware $10** |
> Notes: ZeroClaw results are measured on release builds using `/usr/bin/time -l`. OpenClaw requires Node.js runtime (typically ~390MB additional memory overhead), while NanoBot requires Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.
<p align="center">
<img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw Comparison" width="800" />
@@ -157,17 +173,44 @@ Or skip the steps above and install everything (system deps, Rust, ZeroClaw) in
curl -LsSf https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts/install.sh | bash
```
#### Compilation resource requirements
Building from source needs more resources than running the resulting binary:
| Resource | Minimum | Recommended |
|---|---|---|
| **RAM + swap** | 2 GB | 4 GB+ |
| **Free disk** | 6 GB | 10 GB+ |
If your host is below the minimum, use pre-built binaries:
```bash
./bootstrap.sh --prefer-prebuilt
```
To require binary-only install with no source fallback:
```bash
./bootstrap.sh --prebuilt-only
```
#### Optional
- **Docker** — required only if using the [Docker sandboxed runtime](#runtime-support-current) (`runtime.kind = "docker"`). Install via your package manager or [docker.com](https://docs.docker.com/engine/install/).
> **Note:** The default `cargo build --release` uses `codegen-units=1` to lower peak compile pressure. For faster builds on powerful machines, use `cargo build --profile release-fast`.
</details>
## Quick Start
### Homebrew (macOS/Linuxbrew)
```bash
brew install zeroclaw
```
### One-click bootstrap
```bash
@@ -179,8 +222,17 @@ cd zeroclaw
# Optional: bootstrap dependencies + Rust on fresh machines
./bootstrap.sh --install-system-deps --install-rust
# Optional: pre-built binary first (recommended on low-RAM/low-disk hosts)
./bootstrap.sh --prefer-prebuilt
# Optional: binary-only install (no source build fallback)
./bootstrap.sh --prebuilt-only
# Optional: run onboarding in the same flow
./bootstrap.sh --onboard --api-key "sk-..." --provider openrouter [--model "openrouter/auto"]
# Optional: run bootstrap + onboarding fully in Docker
./bootstrap.sh --docker
```
Remote one-liner (review first in security-sensitive environments):
@@ -191,6 +243,25 @@ curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts
Details: [`docs/one-click-bootstrap.md`](docs/one-click-bootstrap.md) (toolchain mode may request `sudo` for system packages).
### Pre-built binaries
Release assets are published for:
- Linux: `x86_64`, `aarch64`, `armv7`
- macOS: `x86_64`, `aarch64`
- Windows: `x86_64`
Download the latest assets from:
<https://github.com/zeroclaw-labs/zeroclaw/releases/latest>
Example (ARM64 Linux):
```bash
curl -fsSLO https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-aarch64-unknown-linux-gnu.tar.gz
tar xzf zeroclaw-aarch64-unknown-linux-gnu.tar.gz
install -m 0755 zeroclaw "$HOME/.cargo/bin/zeroclaw"
```
```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
@@ -200,8 +271,8 @@ cargo install --path . --force --locked
# Ensure ~/.cargo/bin is in your PATH
export PATH="$HOME/.cargo/bin:$PATH"
# Quick setup (no prompts, optional model specification)
zeroclaw onboard --api-key sk-... --provider openrouter [--model "openrouter/auto"]
# Or interactive wizard
zeroclaw onboard --interactive
@@ -244,6 +315,7 @@ zeroclaw integrations info Telegram
# Manage background service
zeroclaw service install
zeroclaw service status
zeroclaw service restart
# Migrate memory from OpenClaw (safe preview first)
zeroclaw migrate openclaw --dry-run
```
@@ -452,7 +524,37 @@ For non-text replies, ZeroClaw can send Telegram attachments when the assistant
Paths can be local files (for example `/tmp/screenshot.png`) or HTTPS URLs.
### WhatsApp Setup
ZeroClaw supports two WhatsApp backends:
- **WhatsApp Web mode** (QR / pair code, no Meta Business API required)
- **WhatsApp Business Cloud API mode** (official Meta webhook flow)
#### WhatsApp Web mode (recommended for personal/self-hosted use)
1. **Build with WhatsApp Web support:**
```bash
cargo build --features whatsapp-web
```
2. **Configure ZeroClaw:**
```toml
[channels_config.whatsapp]
session_path = "~/.zeroclaw/state/whatsapp-web/session.db"
pair_phone = "15551234567" # optional; omit to use QR flow
pair_code = "" # optional custom pair code
allowed_numbers = ["+1234567890"] # E.164 format, or ["*"] for all
```
3. **Start channels/daemon and link device:**
- Run `zeroclaw channel start` (or `zeroclaw daemon`).
- Follow terminal pairing output (QR or pair code).
- In WhatsApp on phone: **Settings → Linked Devices**.
4. **Test:** Send a message from an allowed number and verify the agent replies.
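The `allowed_numbers` semantics above (explicit E.164 entries, or `["*"]` to allow everyone) can be sketched as a tiny check. This is a hypothetical illustration of the documented behavior, not ZeroClaw's actual matching code:

```python
def number_allowed(sender: str, allowed_numbers: list) -> bool:
    """Return True if this sender may reach the agent.

    allowed_numbers holds E.164 strings such as "+1234567890",
    or ["*"] to allow every sender (illustrative sketch only).
    """
    return "*" in allowed_numbers or sender in allowed_numbers
```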
#### WhatsApp Business Cloud API mode
WhatsApp uses Meta's Cloud API with webhooks (push-based, not polling):
@@ -493,6 +595,10 @@ WhatsApp uses Meta's Cloud API with webhooks (push-based, not polling):
Config: `~/.zeroclaw/config.toml` (created by `onboard`)
When `zeroclaw channel start` is already running, changes to `default_provider`,
`default_model`, `default_temperature`, `api_key`, `api_url`, and `reliability.*`
are hot-applied on the next inbound channel message.
```toml
api_key = "sk-..."
default_provider = "openrouter"
@@ -591,6 +697,8 @@ window_allowlist = [] # optional window title/process allowlist hints
enabled = false # opt-in: 1000+ OAuth apps via composio.dev
# api_key = "cmp_..." # optional: stored encrypted when [secrets].encrypt = true
entity_id = "default" # default user_id for Composio tool calls
# Runtime tip: if execute asks for connected_account_id, run composio with
# action='list_accounts' and app='gmail' (or your toolkit) to retrieve account IDs.
[identity]
format = "openclaw" # "openclaw" (default, markdown files) or "aieos" (JSON)
```
@@ -767,7 +875,7 @@ See [aieos.org](https://aieos.org) for the full schema and live examples.
| `service` | Manage user-level background service |
| `doctor` | Diagnose daemon/scheduler/channel freshness |
| `status` | Show full system status |
| `cron` | Manage scheduled tasks (`list/add/add-at/add-every/once/remove/update/pause/resume`) |
| `models` | Refresh provider model catalogs (`models refresh`) |
| `providers` | List supported providers and aliases |
| `channel` | List/start/doctor channels and bind Telegram identities |
@@ -779,6 +887,18 @@ See [aieos.org](https://aieos.org) for the full schema and live examples.
For a task-oriented command guide, see [`docs/commands-reference.md`](docs/commands-reference.md).
### Open-Skills Opt-In
Community `open-skills` sync is disabled by default. Enable it explicitly in `config.toml`:
```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # optional
```
You can also override at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`.
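As an illustration of that precedence (environment variable over `config.toml`), here is a hypothetical helper. The variable name comes from the text above, but the truthy-value parsing is an assumption for illustration, not ZeroClaw's actual implementation:

```python
import os


def open_skills_enabled(config_value: bool) -> bool:
    """Resolve the open-skills flag: the env var wins over config.toml.

    Truthy parsing ("1", "true", "yes", "on") is an assumption here;
    check ZeroClaw's source for the real rule.
    """
    env = os.environ.get("ZEROCLAW_OPEN_SKILLS_ENABLED")
    if env is not None:
        return env.strip().lower() in ("1", "true", "yes", "on")
    return config_value
```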
## Development
@@ -869,13 +989,42 @@ A heartfelt thank you to the communities and institutions that inspire and fuel
We're building in the open because the best ideas come from everywhere. If you're reading this, you're part of it. Welcome. 🦀❤️
## ⚠️ Official Repository & Impersonation Warning
**This is the only official ZeroClaw repository:**
> https://github.com/zeroclaw-labs/zeroclaw
Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and not affiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).
If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
---
## License
ZeroClaw is dual-licensed for maximum openness and contributor protection:
| License | Use case |
|---|---|
| [MIT](LICENSE) | Open-source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |
You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
### Trademark
The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
### Contributor Protections
- You **retain copyright** of your contributions
- **Patent grant** (Apache 2.0) shields you from patent claims by other contributors
- Your contributions are **permanently attributed** in commit history and [NOTICE](NOTICE)
- No trademark rights are transferred by contributing
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
@@ -8,6 +8,15 @@
<strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong>
</p>
<p align="center">
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
<a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
🌐 Языки: <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a>
</p>
@@ -33,7 +42,17 @@
>
> Технические идентификаторы (команды, ключи конфигурации, API-пути, имена Trait) сохранены на английском.
>
> Последняя синхронизация: **2026-02-19**.
## 📢 Доска объявлений
Публикуйте здесь важные уведомления (breaking changes, security advisories, окна обслуживания и блокеры релиза).
| Дата (UTC) | Уровень | Объявление | Действие |
|---|---|---|---|
| 2026-02-19 | _Срочно_ | Мы **не аффилированы** с `openagen/zeroclaw` и `zeroclaw.org`. Домен `zeroclaw.org` сейчас указывает на fork `openagen/zeroclaw`, и этот домен/репозиторий выдают себя за наш официальный сайт и проект. | Не доверяйте информации, бинарникам, сборам средств и «официальным» объявлениям из этих источников. Используйте только этот репозиторий и наши верифицированные соцсети. |
| 2026-02-19 | _Важно_ | Официальный сайт пока **не запущен**, и мы уже видим попытки выдавать себя за ZeroClaw. Пожалуйста, не участвуйте в инвестициях, сборах средств или похожих активностях от имени ZeroClaw. | Ориентируйтесь только на этот репозиторий; также следите за [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Telegram CN (@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn), [Telegram RU (@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru) и [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) для официальных обновлений. |
| 2026-02-19 | _Важно_ | Anthropic обновил раздел Authentication and Credential Use 2026-02-19. В нем указано, что OAuth authentication (Free/Pro/Max) предназначена только для Claude Code и Claude.ai; использование OAuth-токенов, полученных через Claude Free/Pro/Max, в любых других продуктах, инструментах или сервисах (включая Agent SDK), не допускается и может считаться нарушением Consumer Terms of Service. | Чтобы избежать потерь, временно не используйте Claude Code OAuth-интеграции. Оригинал: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
## О проекте
@@ -100,6 +119,12 @@ cd zeroclaw
## Быстрый старт
### Homebrew (macOS/Linuxbrew)
```bash
brew install zeroclaw
```
```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
@@ -117,6 +142,106 @@ zeroclaw gateway
zeroclaw daemon
```
## Subscription Auth (OpenAI Codex / Claude Code)
ZeroClaw поддерживает нативные профили авторизации на основе подписки (мультиаккаунт, шифрование при хранении).
- Файл хранения: `~/.zeroclaw/auth-profiles.json`
- Ключ шифрования: `~/.zeroclaw/.secret_key`
- Формат Profile ID: `<provider>:<profile_name>` (пример: `openai-codex:work`)
OpenAI Codex OAuth (подписка ChatGPT):
```bash
# Рекомендуется для серверов/headless-окружений
zeroclaw auth login --provider openai-codex --device-code
# Браузерный/callback-поток с paste-фолбэком
zeroclaw auth login --provider openai-codex --profile default
zeroclaw auth paste-redirect --provider openai-codex --profile default
# Проверка / обновление / переключение профиля
zeroclaw auth status
zeroclaw auth refresh --provider openai-codex --profile default
zeroclaw auth use --provider openai-codex --profile work
```
Claude Code / Anthropic setup-token:
```bash
# Вставка subscription/setup token (режим Authorization header)
zeroclaw auth paste-token --provider anthropic --profile default --auth-kind authorization
# Команда-алиас
zeroclaw auth setup-token --provider anthropic --profile default
```
Запуск agent с subscription auth:
```bash
zeroclaw agent --provider openai-codex -m "hello"
zeroclaw agent --provider openai-codex --auth-profile openai-codex:work -m "hello"
# Anthropic поддерживает и API key, и auth token через переменные окружения:
# ANTHROPIC_AUTH_TOKEN, ANTHROPIC_OAUTH_TOKEN, ANTHROPIC_API_KEY
zeroclaw agent --provider anthropic -m "hello"
```
## Архитектура
Каждая подсистема — это **Trait**: меняйте реализации через конфигурацию, без изменения кода.
<p align="center">
<img src="docs/architecture.svg" alt="Архитектура ZeroClaw" width="900" />
</p>
| Подсистема | Trait | Встроенные реализации | Расширение |
|-----------|-------|---------------------|------------|
| **AI-модели** | `Provider` | Каталог через `zeroclaw providers` (сейчас 28 встроенных + алиасы, плюс пользовательские endpoint) | `custom:https://your-api.com` (OpenAI-совместимый) или `anthropic-custom:https://your-api.com` |
| **Каналы** | `Channel` | CLI, Telegram, Discord, Slack, Mattermost, iMessage, Matrix, Signal, WhatsApp, Email, IRC, Lark, DingTalk, QQ, Webhook | Любой messaging API |
| **Память** | `Memory` | SQLite гибридный поиск, PostgreSQL-бэкенд, Lucid-мост, Markdown-файлы, явный `none`-бэкенд, snapshot/hydrate, опциональный кэш ответов | Любой persistence-бэкенд |
| **Инструменты** | `Tool` | shell/file/memory, cron/schedule, git, pushover, browser, http_request, screenshot/image_info, composio (opt-in), delegate, аппаратные инструменты | Любая функциональность |
| **Наблюдаемость** | `Observer` | Noop, Log, Multi | Prometheus, OTel |
| **Runtime** | `RuntimeAdapter` | Native, Docker (sandbox) | Через adapter; неподдерживаемые kind завершаются с ошибкой |
| **Безопасность** | `SecurityPolicy` | Gateway pairing, sandbox, allowlist, rate limits, scoping файловой системы, шифрование секретов | — |
| **Идентификация** | `IdentityConfig` | OpenClaw (markdown), AIEOS v1.1 (JSON) | Любой формат идентификации |
| **Туннели** | `Tunnel` | None, Cloudflare, Tailscale, ngrok, Custom | Любой tunnel-бинарник |
| **Heartbeat** | Engine | HEARTBEAT.md — периодические задачи | — |
| **Навыки** | Loader | TOML-манифесты + SKILL.md-инструкции | Пакеты навыков сообщества |
| **Интеграции** | Registry | 70+ интеграций в 9 категориях | Плагинная система |
### Поддержка runtime (текущая)
- ✅ Поддерживается сейчас: `runtime.kind = "native"` или `runtime.kind = "docker"`
- 🚧 Запланировано, но ещё не реализовано: WASM / edge-runtime
При указании неподдерживаемого `runtime.kind` ZeroClaw завершается с явной ошибкой, а не молча откатывается к native.
### Система памяти (полнофункциональный поисковый движок)
Полностью собственная реализация, ноль внешних зависимостей — без Pinecone, Elasticsearch, LangChain:
| Уровень | Реализация |
|---------|-----------|
| **Векторная БД** | Embeddings хранятся как BLOB в SQLite, поиск по косинусному сходству |
| **Поиск по ключевым словам** | Виртуальные таблицы FTS5 со скорингом BM25 |
| **Гибридное слияние** | Пользовательская взвешенная функция слияния (`vector.rs`) |
| **Embeddings** | Trait `EmbeddingProvider` — OpenAI, пользовательский URL или noop |
| **Чанкинг** | Построчный Markdown-чанкер с сохранением заголовков |
| **Кэширование** | Таблица `embedding_cache` в SQLite с LRU-вытеснением |
| **Безопасная переиндексация** | Атомарная перестройка FTS5 + повторное встраивание отсутствующих векторов |
Agent автоматически вспоминает, сохраняет и управляет памятью через инструменты.
```toml
[memory]
backend = "sqlite" # "sqlite", "lucid", "postgres", "markdown", "none"
auto_save = true
embedding_provider = "none" # "none", "openai", "custom:https://..."
vector_weight = 0.7
keyword_weight = 0.3
```
## Важные security-дефолты
- Gateway по умолчанию: `127.0.0.1:3000`
README.vi.md (new file)
File diff suppressed because it is too large
@@ -8,6 +8,15 @@
<strong>零开销、零妥协;随处部署、万物可换。</strong>
</p>
<p align="center">
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
<a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
🌐 语言:<a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a>
</p>
@@ -33,7 +42,17 @@
>
> 技术标识(命令、配置键、API 路径、Trait 名称)保持英文,避免语义漂移。
>
> 最后对齐时间:**2026-02-19**。
## 📢 公告板
用于发布重要通知(破坏性变更、安全通告、维护窗口、版本阻塞问题等)。
| 日期(UTC) | 级别 | 通知 | 处理建议 |
|---|---|---|---|
| 2026-02-19 | _紧急_ | 我们与 `openagen/zeroclaw` 和 `zeroclaw.org` **没有任何关系**。`zeroclaw.org` 当前会指向 `openagen/zeroclaw` 这个 fork,并且该域名/仓库正在冒充我们的官网与官方项目。 | 请不要相信上述来源发布的任何信息、二进制、募资活动或官方声明。请仅以本仓库和已验证官方社媒为准。 |
| 2026-02-19 | _重要_ | 我们目前**尚未发布官方正式网站**,且已发现有人尝试冒充我们。请勿参与任何打着 ZeroClaw 名义进行的投资、募资或类似活动。 | 一切信息请以本仓库为准;也可关注 [X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Telegram 中文频道(@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn)、[Telegram 俄语频道(@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru) 与 [小红书账号](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) 获取官方最新动态。 |
| 2026-02-19 | _重要_ | Anthropic 于 2026-02-19 更新了 Authentication and Credential Use 条款。条款明确:OAuth authentication(用于 Free、Pro、Max)仅适用于 Claude Code 与 Claude.ai;将 Claude Free/Pro/Max 账号获得的 OAuth token 用于其他任何产品、工具或服务(包括 Agent SDK)不被允许,并可能构成对 Consumer Terms of Service 的违规。 | 为避免损失,请暂时不要尝试 Claude Code OAuth 集成;原文见:[Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use)。 |
## 项目简介
@@ -100,6 +119,12 @@ cd zeroclaw
## 快速开始
### Homebrew(macOS/Linuxbrew)
```bash
brew install zeroclaw
```
```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
@@ -122,6 +147,106 @@ zeroclaw gateway
zeroclaw daemon
```
## Subscription Auth (OpenAI Codex / Claude Code)

ZeroClaw now supports native subscription-based auth profiles (multiple accounts, encrypted at rest).

- Profile store: `~/.zeroclaw/auth-profiles.json`
- Encryption key: `~/.zeroclaw/.secret_key`
- Profile ID format: `<provider>:<profile_name>` (e.g. `openai-codex:work`)

OpenAI Codex OAuth (ChatGPT subscription):
```bash
# Recommended for servers / headless environments
zeroclaw auth login --provider openai-codex --device-code
# Browser/callback flow, with paste fallback
zeroclaw auth login --provider openai-codex --profile default
zeroclaw auth paste-redirect --provider openai-codex --profile default
# Check / refresh / switch profiles
zeroclaw auth status
zeroclaw auth refresh --provider openai-codex --profile default
zeroclaw auth use --provider openai-codex --profile work
```
Claude Code / Anthropic setup-token:
```bash
# Paste a subscription/setup token (Authorization-header mode)
zeroclaw auth paste-token --provider anthropic --profile default --auth-kind authorization
# Alias command
zeroclaw auth setup-token --provider anthropic --profile default
```
Run the agent with subscription auth:
```bash
zeroclaw agent --provider openai-codex -m "hello"
zeroclaw agent --provider openai-codex --auth-profile openai-codex:work -m "hello"
# Anthropic supports both API-key and auth-token environment variables:
# ANTHROPIC_AUTH_TOKEN, ANTHROPIC_OAUTH_TOKEN, ANTHROPIC_API_KEY
zeroclaw agent --provider anthropic -m "hello"
```
## Architecture

Every subsystem is a **trait**: swap implementations by changing config, with no code changes.

<p align="center">
  <img src="docs/architecture.svg" alt="ZeroClaw architecture diagram" width="900" />
</p>
| Subsystem | Trait | Built-in Implementations | Extend With |
|--------|-------|----------|----------|
| **AI models** | `Provider` | Inspect via `zeroclaw providers` (currently 28 built-ins + aliases, plus custom endpoints) | `custom:https://your-api.com` (OpenAI-compatible) or `anthropic-custom:https://your-api.com` |
| **Channels** | `Channel` | CLI, Telegram, Discord, Slack, Mattermost, iMessage, Matrix, Signal, WhatsApp, Email, IRC, Lark, DingTalk, QQ, Webhook | Any messaging API |
| **Memory** | `Memory` | SQLite hybrid search, PostgreSQL backend, Lucid bridge, Markdown files, explicit `none` backend, snapshot/restore, optional response cache | Any persistence backend |
| **Tools** | `Tool` | shell/file/memory, cron/schedule, git, pushover, browser, http_request, screenshot/image_info, composio (opt-in), delegate, hardware tools | Any capability |
| **Observability** | `Observer` | Noop, Log, Multi | Prometheus, OTel |
| **Runtime** | `RuntimeAdapter` | Native, Docker (sandboxed) | Add via adapter; unsupported kinds fail fast |
| **Security** | `SecurityPolicy` | Gateway pairing, sandboxing, allowlists, rate limits, filesystem scoping, encrypted keys | — |
| **Identity** | `IdentityConfig` | OpenClaw (markdown), AIEOS v1.1 (JSON) | Any identity format |
| **Tunnels** | `Tunnel` | None, Cloudflare, Tailscale, ngrok, Custom | Any tunnel tool |
| **Heartbeat** | Engine | HEARTBEAT.md periodic tasks | — |
| **Skills** | Loader | TOML manifest + SKILL.md instructions | Community skill packs |
| **Integrations** | Registry | 70+ integrations across 9 categories | Plugin system |
### Runtime Support (Current)

- ✅ Supported today: `runtime.kind = "native"` and `runtime.kind = "docker"`
- 🚧 Planned, not yet implemented: WASM / edge runtimes

If an unsupported `runtime.kind` is configured, ZeroClaw exits with a clear error instead of silently falling back to native.
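A minimal sketch of selecting the runtime in `config.toml`, assuming only the `runtime.kind` values shown above:

```toml
# Runtime selection; an unsupported kind makes ZeroClaw exit with an error
# rather than silently falling back to native.
[runtime]
kind = "docker"   # "native" (default) or "docker"
```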
### Memory System (A Full-Stack Search Engine)

Built entirely in-house with zero external dependencies: no Pinecone, Elasticsearch, or LangChain required.

| Layer | Implementation |
|------|------|
| **Vector DB** | Embeddings stored as BLOBs in SQLite; cosine-similarity search |
| **Keyword search** | FTS5 virtual tables with BM25 scoring |
| **Hybrid merge** | Custom weighted merge function (`vector.rs`) |
| **Embeddings** | `EmbeddingProvider` trait: OpenAI, custom URL, or noop |
| **Chunking** | Line-based Markdown chunker that preserves heading structure |
| **Cache** | SQLite `embedding_cache` with LRU eviction |
| **Safe reindexing** | Atomic FTS5 rebuild + re-embedding of missing vectors |

The agent recalls, saves, and manages memories automatically through its tools.
```toml
[memory]
backend = "sqlite" # "sqlite", "lucid", "postgres", "markdown", "none"
auto_save = true
embedding_provider = "none" # "none", "openai", "custom:https://..."
vector_weight = 0.7
keyword_weight = 0.3
```
## Secure Defaults (Critical)

- Gateway default bind: `127.0.0.1:3000`

TRADEMARK.md (new file)

@ -0,0 +1,129 @@
# ZeroClaw Trademark Policy
**Effective date:** February 2026
**Maintained by:** ZeroClaw Labs
---
## Our Trademarks
The following are trademarks of ZeroClaw Labs:
- **ZeroClaw** (word mark)
- **zeroclaw-labs** (organization name)
- The ZeroClaw logo and associated visual identity
These marks identify the official ZeroClaw project and distinguish it from
unauthorized forks, derivatives, or impersonators.
---
## Official Repository
The **only** official ZeroClaw repository is:
> https://github.com/zeroclaw-labs/zeroclaw
Any other repository, organization, domain, or product claiming to be
"ZeroClaw" or implying affiliation with ZeroClaw Labs is unauthorized and
may constitute trademark infringement.
**Known unauthorized forks:**
- `openagen/zeroclaw` — not affiliated with ZeroClaw Labs
If you encounter an unauthorized use, please report it by opening an issue
at https://github.com/zeroclaw-labs/zeroclaw/issues.
---
## Permitted Uses
You **may** use the ZeroClaw name and marks in the following ways without
prior written permission:
1. **Attribution** — stating that your software is based on or derived from
ZeroClaw, provided it is clear your project is not the official ZeroClaw.
2. **Descriptive reference** — referring to ZeroClaw in documentation,
articles, blog posts, or presentations to accurately describe the software.
3. **Community discussion** — using the name in forums, issues, or social
media to discuss the project.
4. **Fork identification** — identifying your fork as "a fork of ZeroClaw"
with a clear link to the official repository.
---
## Prohibited Uses
You **may not** use the ZeroClaw name or marks in ways that:
1. **Imply official endorsement** — suggest your project, product, or
organization is officially affiliated with or endorsed by ZeroClaw Labs.
2. **Cause brand confusion** — use "ZeroClaw" as the primary name of a
competing or derivative product in a way that could confuse users about
the source.
3. **Impersonate the project** — create repositories, domains, packages,
or accounts that could be mistaken for the official ZeroClaw project.
4. **Misrepresent origin** — remove or obscure attribution to ZeroClaw Labs
while distributing the software or derivatives.
5. **Commercial trademark use** — use the marks in commercial products,
services, or marketing without prior written permission from ZeroClaw Labs.
---
## Fork Guidelines
Forks are welcome under the terms of the MIT and Apache 2.0 licenses. If
you fork ZeroClaw, you must:
- Clearly state your project is a fork of ZeroClaw
- Link back to the official repository
- Not use "ZeroClaw" as the primary name of your fork
- Not imply your fork is the official or original project
- Retain all copyright, license, and attribution notices
---
## Contributor Protections
Contributors to the official ZeroClaw repository are protected under the
dual MIT + Apache 2.0 license model:
- **Patent grant** (Apache 2.0) — your contributions are protected from
patent claims by other contributors.
- **Attribution** — your contributions are permanently recorded in the
repository history and NOTICE file.
- **No trademark transfer** — contributing code does not transfer any
trademark rights to third parties.
---
## Reporting Infringement
If you believe someone is infringing ZeroClaw trademarks:
1. Open an issue at https://github.com/zeroclaw-labs/zeroclaw/issues
2. Include the URL of the infringing content
3. Describe how it violates this policy
For serious or commercial infringement, contact the maintainers directly
through the repository.
---
## Changes to This Policy
ZeroClaw Labs reserves the right to update this policy at any time. Changes
will be committed to the official repository with a clear commit message.
---
*This trademark policy is separate from and in addition to the MIT and
Apache 2.0 software licenses. The licenses govern use of the source code;
this policy governs use of the ZeroClaw name and brand.*


@ -1,5 +1,5 @@
#!/usr/bin/env bash
set -euo pipefail

ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" >/dev/null 2>&1 && pwd || pwd)"
exec "$ROOT_DIR/zeroclaw_install.sh" "$@"


@ -30,7 +30,7 @@ tokio = { version = "1.42", features = ["rt-multi-thread", "macros", "time", "sy
# Serialization
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
toml = "1.0"

# HTTP client (for Ollama vision)
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
@ -52,7 +52,7 @@ tracing = "0.1"
chrono = { version = "0.4", features = ["clock", "std"] }

# User directories
directories = "6.0"

[target.'cfg(target_os = "linux")'.dependencies]


@ -14,6 +14,11 @@ else
fi

COMPOSE_FILE="$BASE_DIR/docker-compose.yml"
if [ "$BASE_DIR" = "dev" ]; then
ENV_FILE=".env"
else
ENV_FILE="../.env"
fi
# Colors
GREEN='\033[0;32m'
@ -21,6 +26,15 @@ YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
function load_env {
if [ -f "$ENV_FILE" ]; then
# Auto-export variables from .env for docker compose passthrough.
set -a
source "$ENV_FILE"
set +a
fi
}
function ensure_config {
    CONFIG_DIR="$HOST_TARGET_DIR/.zeroclaw"
    CONFIG_FILE="$CONFIG_DIR/config.toml"
@ -55,6 +69,8 @@ if [ -z "$1" ]; then
    exit 1
fi
load_env
case "$1" in
    up)
        ensure_config


@ -20,11 +20,20 @@ services:
    container_name: zeroclaw-dev
    restart: unless-stopped
    environment:
      - API_KEY
      - PROVIDER
      - ZEROCLAW_MODEL
      - ZEROCLAW_GATEWAY_PORT=3000
      - SANDBOX_HOST=zeroclaw-sandbox
    secrets:
      - source: zeroclaw_env
        target: zeroclaw_env
    entrypoint: ["/bin/bash", "-lc"]
    command:
      - |
        if [ -f /run/secrets/zeroclaw_env ]; then
          set -a
          . /run/secrets/zeroclaw_env
          set +a
        fi
        exec zeroclaw gateway --port "${ZEROCLAW_GATEWAY_PORT:-3000}" --host "[::]"
    volumes:
      # Mount single config file (avoids shadowing other files in .zeroclaw)
      - ../target/.zeroclaw/config.toml:/zeroclaw-data/.zeroclaw/config.toml
@ -57,3 +66,7 @@ services:
networks:
  dev-net:
    driver: bridge

secrets:
  zeroclaw_env:
    file: ../.env


@ -51,8 +51,43 @@ Notes:
- Model cache previews come from `zeroclaw models refresh --provider <ID>`.
- These are runtime chat commands, not CLI subcommands.
## Inbound Image Marker Protocol
ZeroClaw supports multimodal input through inline message markers:
- Syntax: ``[IMAGE:<source>]``
- `<source>` can be:
- Local file path
- Data URI (`data:image/...;base64,...`)
- Remote URL only when `[multimodal].allow_remote_fetch = true`
Operational notes:
- Marker parsing applies to user-role messages before provider calls.
- Provider capability is enforced at runtime: if the selected provider does not support vision, the request fails with a structured capability error (`capability=vision`).
- Linq webhook `media` parts with `image/*` MIME type are automatically converted to this marker format.
## Channel Matrix
### Build Feature Toggle (`channel-matrix`)
Matrix support is controlled at compile time by the `channel-matrix` Cargo feature.
- Default builds include Matrix support (`default = ["hardware", "channel-matrix"]`).
- For faster local iteration when Matrix is not needed:
```bash
cargo check --no-default-features --features hardware
```
- To explicitly enable Matrix support in custom feature sets:
```bash
cargo check --no-default-features --features hardware,channel-matrix
```
If `[channels_config.matrix]` is present but the binary was built without `channel-matrix`, `zeroclaw channel list`, `zeroclaw channel doctor`, and `zeroclaw channel start` will log that Matrix is intentionally skipped for this build.
---

## 2. Delivery Modes at a Glance
@ -66,7 +101,7 @@ Notes:
| Mattermost | polling | No |
| Matrix | sync API (supports E2EE) | No |
| Signal | signal-cli HTTP bridge | No (local bridge endpoint) |
| WhatsApp | webhook (Cloud API) or websocket (Web mode) | Cloud API: Yes (public HTTPS callback), Web mode: No |
| Webhook | gateway endpoint (`/webhook`) | Usually yes |
| Email | IMAP polling + SMTP send | No |
| IRC | IRC socket | No |
@ -103,8 +138,17 @@ Field names differ by channel:
[channels_config.telegram]
bot_token = "123456:telegram-token"
allowed_users = ["*"]
stream_mode = "off" # optional: off | partial
draft_update_interval_ms = 1000 # optional: edit throttle for partial streaming
mention_only = false # optional: require @mention in groups
interrupt_on_new_message = false # optional: cancel in-flight same-sender same-chat request
```
Telegram notes:
- `interrupt_on_new_message = true` preserves interrupted user turns in conversation history, then restarts generation on the newest message.
- Interruption scope is strict: same sender in the same chat. Messages from different chats are processed independently.
### 4.2 Discord

```toml
@ -164,6 +208,13 @@ ignore_stories = true
### 4.7 WhatsApp
ZeroClaw supports two WhatsApp backends:
- **Cloud API mode** (`phone_number_id` + `access_token` + `verify_token`)
- **WhatsApp Web mode** (`session_path`, requires build flag `--features whatsapp-web`)
Cloud API mode:
```toml
[channels_config.whatsapp]
access_token = "EAAB..."
@ -173,6 +224,22 @@ app_secret = "your-app-secret" # optional but recommended
allowed_numbers = ["*"]
```
WhatsApp Web mode:
```toml
[channels_config.whatsapp]
session_path = "~/.zeroclaw/state/whatsapp-web/session.db"
pair_phone = "15551234567" # optional; omit to use QR flow
pair_code = "" # optional custom pair code
allowed_numbers = ["*"]
```
Notes:
- Build with `cargo build --features whatsapp-web` (or equivalent run command).
- Keep `session_path` on persistent storage to avoid relinking after restart.
- Reply routing uses the originating chat JID, so direct and group replies work correctly.
### 4.8 Webhook Channel Config (Gateway)

`channels_config.webhook` enables webhook-specific gateway behavior.
@ -331,7 +398,7 @@ rg -n "Matrix|Telegram|Discord|Slack|Mattermost|Signal|WhatsApp|Email|IRC|Lark|D
| Mattermost | `Mattermost channel listening on` | `Mattermost: ignoring message from unauthorized user:` | `Mattermost poll error:` / `Mattermost parse error:` |
| Matrix | `Matrix channel listening on room` / `Matrix room ... is encrypted; E2EE decryption is enabled via matrix-sdk.` | `Matrix whoami failed; falling back to configured session hints for E2EE session restore:` / `Matrix whoami failed while resolving listener user_id; using configured user_id hint:` | `Matrix sync error: ... retrying...` |
| Signal | `Signal channel listening via SSE on` | (allowlist checks are enforced by `allowed_from`) | `Signal SSE returned ...` / `Signal SSE connect error:` |
| WhatsApp (channel) | `WhatsApp channel active (webhook mode).` / `WhatsApp Web connected successfully` | `WhatsApp: ignoring message from unauthorized number:` / `WhatsApp Web: message from ... not in allowed list` | `WhatsApp send failed:` / `WhatsApp Web stream error:` |
| Webhook / WhatsApp (gateway) | `WhatsApp webhook verified successfully` | `Webhook: rejected — not paired / invalid bearer token` / `Webhook: rejected request — invalid or missing X-Webhook-Secret` / `WhatsApp webhook verification failed — token mismatch` | `Webhook JSON parse error:` |
| Email | `Email polling every ...` / `Email sent to ...` | `Blocked email from ...` | `Email poll failed:` / `Email poll task panicked:` |
| IRC | `IRC channel connecting to ...` / `IRC registered as ...` | (allowlist checks are enforced by `allowed_users`) | `IRC SASL authentication failed (...)` / `IRC server does not support SASL...` / `IRC nickname ... is in use, trying ...` |
@ -349,4 +416,3 @@ If a specific channel task crashes or exits, the channel supervisor in `channels
- `Channel message worker crashed:`

These messages indicate automatic restart behavior is active, and you should inspect preceding logs for root cause.


@ -2,7 +2,7 @@
This reference is derived from the current CLI surface (`zeroclaw --help`).

Last verified: **February 19, 2026**.

## Top-Level Commands
@ -22,6 +22,7 @@ Last verified: **February 18, 2026**.
| `integrations` | Inspect integration details |
| `skills` | List/install/remove skills |
| `migrate` | Import from external runtimes (currently OpenClaw) |
| `config` | Export machine-readable config schema |
| `hardware` | Discover and introspect USB hardware |
| `peripheral` | Configure and flash peripherals |
@ -33,6 +34,7 @@ Last verified: **February 18, 2026**.
- `zeroclaw onboard --interactive`
- `zeroclaw onboard --channels-only`
- `zeroclaw onboard --api-key <KEY> --provider <ID> --memory <sqlite|lucid|markdown|none>`
- `zeroclaw onboard --api-key <KEY> --provider <ID> --model <MODEL_ID> --memory <sqlite|lucid|markdown|none>`
### `agent`
@ -51,6 +53,7 @@ Last verified: **February 18, 2026**.
- `zeroclaw service install`
- `zeroclaw service start`
- `zeroclaw service stop`
- `zeroclaw service restart`
- `zeroclaw service status`
- `zeroclaw service uninstall`
@ -89,6 +92,13 @@ Runtime in-chat commands (Telegram/Discord while channel server is running):
- `/model`
- `/model <model-id>`
Channel runtime also watches `config.toml` and hot-applies updates to:
- `default_provider`
- `default_model`
- `default_temperature`
- `api_key` / `api_url` (for the default provider)
- `reliability.*` provider retry settings
`add/remove` currently route you back to managed setup/manual config paths (not full declarative mutators yet).

### `integrations`
@ -101,10 +111,20 @@ Runtime in-chat commands (Telegram/Discord while channel server is running):
- `zeroclaw skills install <source>`
- `zeroclaw skills remove <name>`
`<source>` accepts git remotes (`https://...`, `http://...`, `ssh://...`, and `git@host:owner/repo.git`) or a local filesystem path.
Skill manifests (`SKILL.toml`) support `prompts` and `[[tools]]`; both are injected into the agent system prompt at runtime, so the model can follow skill instructions without manually reading skill files.
### `migrate`

- `zeroclaw migrate openclaw [--source <path>] [--dry-run]`
### `config`
- `zeroclaw config schema`
`config schema` prints a JSON Schema (draft 2020-12) for the full `config.toml` contract to stdout.
### `hardware`

- `zeroclaw hardware discover`


@ -2,11 +2,21 @@
This is a high-signal reference for common config sections and defaults.

Last verified: **February 19, 2026**.

Config path resolution at startup:

1. `ZEROCLAW_WORKSPACE` override (if set)
2. persisted `~/.zeroclaw/active_workspace.toml` marker (if present)
3. default `~/.zeroclaw/config.toml`
ZeroClaw logs the resolved config on startup at `INFO` level:
- `Config loaded` with fields: `path`, `workspace`, `source`, `initialized`
Schema export command:
- `zeroclaw config schema` (prints JSON Schema draft 2020-12 to stdout)
## Core Keys
@ -16,17 +26,216 @@ Config file path:
| `default_model` | `anthropic/claude-sonnet-4-6` | model routed through selected provider |
| `default_temperature` | `0.7` | model temperature |
## `[observability]`
| Key | Default | Purpose |
|---|---|---|
| `backend` | `none` | Observability backend: `none`, `noop`, `log`, `prometheus`, `otel`, `opentelemetry`, or `otlp` |
| `otel_endpoint` | `http://localhost:4318` | OTLP HTTP endpoint used when backend is `otel` |
| `otel_service_name` | `zeroclaw` | Service name emitted to OTLP collector |
Notes:
- `backend = "otel"` uses OTLP HTTP export with a blocking exporter client so spans and metrics can be emitted safely from non-Tokio contexts.
- Alias values `opentelemetry` and `otlp` map to the same OTel backend.
Example:
```toml
[observability]
backend = "otel"
otel_endpoint = "http://localhost:4318"
otel_service_name = "zeroclaw"
```
## Environment Provider Overrides
Provider selection can also be controlled by environment variables. Precedence is:
1. `ZEROCLAW_PROVIDER` (explicit override, always wins when non-empty)
2. `PROVIDER` (legacy fallback, only applied when config provider is unset or still `openrouter`)
3. `default_provider` in `config.toml`
Operational note for container users:
- If your `config.toml` sets an explicit custom provider like `custom:https://.../v1`, a default `PROVIDER=openrouter` from Docker/container env will no longer replace it.
- Use `ZEROCLAW_PROVIDER` when you intentionally want runtime env to override a non-default configured provider.
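The env-side ordering can be illustrated with plain shell parameter expansion (the provider values are hypothetical, and no ZeroClaw binary is invoked):

```shell
# Hypothetical values illustrating the override order.
export PROVIDER="openrouter"                                # legacy fallback
export ZEROCLAW_PROVIDER="custom:https://llm.internal/v1"   # explicit override

# ZEROCLAW_PROVIDER is non-empty, so it wins over PROVIDER and config.toml.
effective="${ZEROCLAW_PROVIDER:-${PROVIDER:-openrouter}}"
echo "$effective"
```

Note that this sketch only shows the environment side of the ordering; the real resolution also checks whether the configured provider is unset or still `openrouter` before applying the legacy `PROVIDER` fallback.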
## `[agent]`

| Key | Default | Purpose |
|---|---|---|
| `compact_context` | `false` | When true: bootstrap_max_chars=6000, rag_chunk_limit=2. Use for 13B or smaller models |
| `max_tool_iterations` | `10` | Maximum tool-call loop turns per user message across CLI, gateway, and channels |
| `max_history_messages` | `50` | Maximum conversation history messages retained per session |
| `parallel_tools` | `false` | Enable parallel tool execution within a single iteration |
| `tool_dispatcher` | `auto` | Tool dispatch strategy |
Notes:

- Setting `max_tool_iterations = 0` falls back to safe default `10`.
- If a channel message exceeds this value, the runtime returns: `Agent exceeded maximum tool iterations (<value>)`.
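The keys above can be combined into a sketch like this (values are the documented defaults; illustrative only):

```toml
[agent]
compact_context = false     # set true for 13B-or-smaller models
max_tool_iterations = 10    # 0 falls back to the safe default of 10
max_history_messages = 50
parallel_tools = false
tool_dispatcher = "auto"
```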
## `[agents.<name>]`
Delegate sub-agent configurations. Each key under `[agents]` defines a named sub-agent that the primary agent can delegate to.
| Key | Default | Purpose |
|---|---|---|
| `provider` | _required_ | Provider name (e.g. `"ollama"`, `"openrouter"`, `"anthropic"`) |
| `model` | _required_ | Model name for the sub-agent |
| `system_prompt` | unset | Optional system prompt override for the sub-agent |
| `api_key` | unset | Optional API key override (stored encrypted when `secrets.encrypt = true`) |
| `temperature` | unset | Temperature override for the sub-agent |
| `max_depth` | `3` | Max recursion depth for nested delegation |
```toml
[agents.researcher]
provider = "openrouter"
model = "anthropic/claude-sonnet-4-6"
system_prompt = "You are a research assistant."
max_depth = 2
[agents.coder]
provider = "ollama"
model = "qwen2.5-coder:32b"
temperature = 0.2
```
## `[runtime]`
| Key | Default | Purpose |
|---|---|---|
| `reasoning_enabled` | unset (`None`) | Global reasoning/thinking override for providers that support explicit controls |
Notes:
- `reasoning_enabled = false` explicitly disables provider-side reasoning for supported providers (currently `ollama`, via request field `think: false`).
- `reasoning_enabled = true` explicitly requests reasoning for supported providers (`think: true` on `ollama`).
- Unset keeps provider defaults.
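For example, to explicitly disable provider-side reasoning (per the notes above, currently honored by `ollama` via `think: false`):

```toml
[runtime]
reasoning_enabled = false  # omit the key entirely to keep provider defaults
```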
## `[skills]`
| Key | Default | Purpose |
|---|---|---|
| `open_skills_enabled` | `false` | Opt-in loading/sync of community `open-skills` repository |
| `open_skills_dir` | unset | Optional local path for `open-skills` (defaults to `$HOME/open-skills` when enabled) |
Notes:
- Security-first default: ZeroClaw does **not** clone or sync `open-skills` unless `open_skills_enabled = true`.
- Environment overrides:
- `ZEROCLAW_OPEN_SKILLS_ENABLED` accepts `1/0`, `true/false`, `yes/no`, `on/off`.
- `ZEROCLAW_OPEN_SKILLS_DIR` overrides the repository path when non-empty.
- Precedence for enable flag: `ZEROCLAW_OPEN_SKILLS_ENABLED``skills.open_skills_enabled` in `config.toml` → default `false`.
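An opt-in sketch (the directory value is illustrative):

```toml
[skills]
open_skills_enabled = true                 # default is false; nothing is cloned until opted in
open_skills_dir = "/home/user/open-skills" # illustrative; defaults to $HOME/open-skills
```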
## `[composio]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable Composio managed OAuth tools |
| `api_key` | unset | Composio API key used by the `composio` tool |
| `entity_id` | `default` | Default `user_id` sent on connect/execute calls |
Notes:
- Backward compatibility: legacy `enable = true` is accepted as an alias for `enabled = true`.
- If `enabled = false` or `api_key` is missing, the `composio` tool is not registered.
- ZeroClaw requests Composio v3 tools with `toolkit_versions=latest` and executes tools with `version="latest"` to avoid stale default tool revisions.
- Typical flow: call `connect`, complete browser OAuth, then run `execute` for the desired tool action.
- If Composio returns a missing connected-account reference error, call `list_accounts` (optionally with `app`) and pass the returned `connected_account_id` to `execute`.
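A minimal sketch enabling the tool (the API key is a placeholder):

```toml
[composio]
enabled = true                 # legacy `enable = true` is accepted as an alias
api_key = "YOUR_COMPOSIO_KEY"  # placeholder; without it the tool is not registered
entity_id = "default"
```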
## `[cost]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable cost tracking |
| `daily_limit_usd` | `10.00` | Daily spending limit in USD |
| `monthly_limit_usd` | `100.00` | Monthly spending limit in USD |
| `warn_at_percent` | `80` | Warn when spending reaches this percentage of limit |
| `allow_override` | `false` | Allow requests to exceed budget with `--override` flag |
Notes:
- When `enabled = true`, the runtime tracks per-request cost estimates and enforces daily/monthly limits.
- At `warn_at_percent` threshold, a warning is emitted but requests continue.
- When a limit is reached, requests are rejected unless `allow_override = true` and the `--override` flag is passed.
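A sketch using the documented defaults with tracking switched on:

```toml
[cost]
enabled = true
daily_limit_usd = 10.00
monthly_limit_usd = 100.00
warn_at_percent = 80    # warn at 80% of a limit, but keep serving requests
allow_override = false  # true permits exceeding limits with the --override flag
```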
## `[identity]`
| Key | Default | Purpose |
|---|---|---|
| `format` | `openclaw` | Identity format: `"openclaw"` (default) or `"aieos"` |
| `aieos_path` | unset | Path to AIEOS JSON file (relative to workspace) |
| `aieos_inline` | unset | Inline AIEOS JSON (alternative to file path) |
Notes:
- Use `format = "aieos"` with either `aieos_path` or `aieos_inline` to load an AIEOS / OpenClaw identity document.
- Only one of `aieos_path` or `aieos_inline` should be set; `aieos_path` takes precedence.
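A sketch of the file-based variant (the path is illustrative and workspace-relative):

```toml
[identity]
format = "aieos"
aieos_path = "identity/aieos.json"  # illustrative path
# aieos_inline = '{ ... }'          # alternative; aieos_path wins if both are set
```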
## `[multimodal]`
| Key | Default | Purpose |
|---|---|---|
| `max_images` | `4` | Maximum image markers accepted per request |
| `max_image_size_mb` | `5` | Per-image size limit before base64 encoding |
| `allow_remote_fetch` | `false` | Allow fetching `http(s)` image URLs from markers |
Notes:
- Runtime accepts image markers in user messages with syntax: ``[IMAGE:<source>]``.
- Supported sources:
- Local file path (for example ``[IMAGE:/tmp/screenshot.png]``)
- Data URI (for example ``[IMAGE:data:image/png;base64,...]``)
- Remote URL only when `allow_remote_fetch = true`
- Allowed MIME types: `image/png`, `image/jpeg`, `image/webp`, `image/gif`, `image/bmp`.
- When the active provider does not support vision, requests fail with a structured capability error (`capability=vision`) instead of silently dropping images.
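The documented defaults as a config sketch:

```toml
[multimodal]
max_images = 4
max_image_size_mb = 5
allow_remote_fetch = false  # keep false unless remote [IMAGE:https://...] markers are needed
```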
## `[browser]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable `browser_open` tool (opens URLs without scraping) |
| `allowed_domains` | `[]` | Allowed domains for `browser_open` (exact or subdomain match) |
| `session_name` | unset | Browser session name (for agent-browser automation) |
| `backend` | `agent_browser` | Browser automation backend: `"agent_browser"`, `"rust_native"`, `"computer_use"`, or `"auto"` |
| `native_headless` | `true` | Headless mode for rust-native backend |
| `native_webdriver_url` | `http://127.0.0.1:9515` | WebDriver endpoint URL for rust-native backend |
| `native_chrome_path` | unset | Optional Chrome/Chromium executable path for rust-native backend |
### `[browser.computer_use]`
| Key | Default | Purpose |
|---|---|---|
| `endpoint` | `http://127.0.0.1:8787/v1/actions` | Sidecar endpoint for computer-use actions (OS-level mouse/keyboard/screenshot) |
| `api_key` | unset | Optional bearer token for computer-use sidecar (stored encrypted) |
| `timeout_ms` | `15000` | Per-action request timeout in milliseconds |
| `allow_remote_endpoint` | `false` | Allow remote/public endpoint for computer-use sidecar |
| `window_allowlist` | `[]` | Optional window title/process allowlist forwarded to sidecar policy |
| `max_coordinate_x` | unset | Optional X-axis boundary for coordinate-based actions |
| `max_coordinate_y` | unset | Optional Y-axis boundary for coordinate-based actions |
Notes:
- When `backend = "computer_use"`, the agent delegates browser actions to the sidecar at `computer_use.endpoint`.
- `allow_remote_endpoint = false` (default) rejects any non-loopback endpoint to prevent accidental public exposure.
- Use `window_allowlist` to restrict which OS windows the sidecar can interact with.
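A minimal sketch combining both tables (the domain and window title are illustrative placeholders, not defaults):

```toml
[browser]
enabled = true
allowed_domains = ["docs.example.com"]   # exact or subdomain match
backend = "computer_use"

[browser.computer_use]
endpoint = "http://127.0.0.1:8787/v1/actions"  # default loopback sidecar
timeout_ms = 15000
allow_remote_endpoint = false                  # non-loopback endpoints are rejected
window_allowlist = ["Documentation - Chrome"]  # illustrative window title
```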
## `[http_request]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable `http_request` tool for API interactions |
| `allowed_domains` | `[]` | Allowed domains for HTTP requests (exact or subdomain match) |
| `max_response_size` | `1000000` | Maximum response size in bytes (default: 1 MB) |
| `timeout_secs` | `30` | Request timeout in seconds |
Notes:
- Deny-by-default: if `allowed_domains` is empty, all HTTP requests are rejected.
- Use exact domain or subdomain matching (e.g. `"api.example.com"`, `"example.com"`).
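For example, to allow only one API host with the default limits (the domain is illustrative):

```toml
[http_request]
enabled = true
allowed_domains = ["api.example.com"]  # empty list would deny all requests
max_response_size = 1000000            # 1 MB
timeout_secs = 30
```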
## `[gateway]`
| Key | Default | Purpose |
|---|---|---|
| `require_pairing` | `true` | require pairing before bearer auth |
| `allow_public_bind` | `false` | block accidental public exposure |
## `[autonomy]`
| Key | Default | Purpose |
|---|---|---|
| `level` | `supervised` | `read_only`, `supervised`, or `full` |
| `workspace_only` | `true` | restrict writes/command paths to workspace scope |
| `allowed_commands` | _required for shell execution_ | allowlist of executable names |
| `forbidden_paths` | `[]` | explicit path denylist |
| `max_actions_per_hour` | `100` | per-policy action budget |
| `max_cost_per_day_cents` | `1000` | per-policy spend guardrail |
| `require_approval_for_medium_risk` | `true` | approval gate for medium-risk commands |
| `block_high_risk_commands` | `true` | hard block for high-risk commands |
| `auto_approve` | `[]` | tool operations always auto-approved |
| `always_ask` | `[]` | tool operations that always require approval |
Notes:
- `level = "full"` skips medium-risk approval gating for shell execution, while still enforcing configured guardrails.
- Shell separator/operator parsing is quote-aware. Characters like `;` inside quoted arguments are treated as literals, not command separators.
- Unquoted shell chaining/operators are still enforced by policy checks (`;`, `|`, `&&`, `||`, background chaining, and redirects).
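The quote-aware rule can be illustrated with a simplified sketch. This is not the actual policy implementation (ZeroClaw's operator and escape handling is more complete); it only shows the principle that quoted separators are literals:

```rust
/// Returns true if `cmd` contains a shell separator/operator outside quotes.
/// Simplified illustration: quoted `;`, `|`, `&`, redirects are literals;
/// unquoted occurrences count as chaining operators.
fn contains_unquoted_operator(cmd: &str) -> bool {
    let (mut in_single, mut in_double) = (false, false);
    let mut escaped = false;
    for c in cmd.chars() {
        if escaped {
            // Previous char was a backslash: this char is literal.
            escaped = false;
            continue;
        }
        match c {
            '\\' if !in_single => escaped = true,
            '\'' if !in_double => in_single = !in_single,
            '"' if !in_single => in_double = !in_double,
            ';' | '|' | '&' | '>' | '<' if !in_single && !in_double => return true,
            _ => {}
        }
    }
    false
}
```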
## `[memory]`
| Key | Default | Purpose |
|---|---|---|
| `backend` | `sqlite` | `sqlite`, `lucid`, `markdown`, `none` |
| `auto_save` | `true` | persist user-stated inputs only (assistant outputs are excluded) |
| `embedding_provider` | `none` | `none`, `openai`, or custom endpoint |
| `embedding_model` | `text-embedding-3-small` | embedding model ID, or `hint:<name>` route |
| `embedding_dimensions` | `1536` | expected vector size for selected embedding model |
| `vector_weight` | `0.7` | hybrid ranking vector weight |
| `keyword_weight` | `0.3` | hybrid ranking keyword weight |
Notes:
- Memory context injection ignores legacy `assistant_resp*` auto-save keys to prevent old model-authored summaries from being treated as facts.
## `[[model_routes]]` and `[[embedding_routes]]`
Use route hints so integrations can keep stable names while model IDs evolve.
### `[[model_routes]]`
| Key | Default | Purpose |
|---|---|---|
| `hint` | _required_ | Task hint name (e.g. `"reasoning"`, `"fast"`, `"code"`, `"summarize"`) |
| `provider` | _required_ | Provider to route to (must match a known provider name) |
| `model` | _required_ | Model to use with that provider |
| `api_key` | unset | Optional API key override for this route's provider |
### `[[embedding_routes]]`
| Key | Default | Purpose |
|---|---|---|
| `hint` | _required_ | Route hint name (e.g. `"semantic"`, `"archive"`, `"faq"`) |
| `provider` | _required_ | Embedding provider (`"none"`, `"openai"`, or `"custom:<url>"`) |
| `model` | _required_ | Embedding model to use with that provider |
| `dimensions` | unset | Optional embedding dimension override for this route |
| `api_key` | unset | Optional API key override for this route's provider |
```toml
[memory]
embedding_model = "hint:semantic"
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
model = "provider/model-id"
[[embedding_routes]]
hint = "semantic"
provider = "openai"
model = "text-embedding-3-small"
dimensions = 1536
```
Upgrade strategy:
1. Keep hints stable (`hint:reasoning`, `hint:semantic`).
2. Update only `model = "...new-version..."` in the route entries.
3. Validate with `zeroclaw doctor` before restart/rollout.
## `[query_classification]`
Automatic model hint routing — maps user messages to `[[model_routes]]` hints based on content patterns.
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable automatic query classification |
| `rules` | `[]` | Classification rules (evaluated in priority order) |
Each rule in `rules`:
| Key | Default | Purpose |
|---|---|---|
| `hint` | _required_ | Must match a `[[model_routes]]` hint value |
| `keywords` | `[]` | Case-insensitive substring matches |
| `patterns` | `[]` | Case-sensitive literal matches (for code fences, keywords like `"fn "`) |
| `min_length` | unset | Only match if message length ≥ N chars |
| `max_length` | unset | Only match if message length ≤ N chars |
| `priority` | `0` | Higher priority rules are checked first |
```toml
[query_classification]
enabled = true
[[query_classification.rules]]
hint = "reasoning"
keywords = ["explain", "analyze", "why"]
min_length = 200
priority = 10
[[query_classification.rules]]
hint = "fast"
keywords = ["hi", "hello", "thanks"]
max_length = 50
priority = 5
```
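The rule evaluation described above can be sketched as follows. This is a simplified illustration, not the actual implementation: `patterns` matching is omitted, and `example_rules` mirrors the TOML example:

```rust
struct Rule {
    hint: &'static str,
    keywords: Vec<&'static str>,
    min_length: Option<usize>,
    max_length: Option<usize>,
    priority: i32,
}

/// Returns the hint of the first matching rule, checking higher priority first.
fn classify(rules: &[Rule], msg: &str) -> Option<&'static str> {
    let lower = msg.to_lowercase();
    let mut ordered: Vec<&Rule> = rules.iter().collect();
    ordered.sort_by(|a, b| b.priority.cmp(&a.priority)); // higher priority first
    for rule in ordered {
        if rule.min_length.is_some_and(|n| msg.chars().count() < n) {
            continue;
        }
        if rule.max_length.is_some_and(|n| msg.chars().count() > n) {
            continue;
        }
        // Case-insensitive substring match on keywords.
        if rule.keywords.iter().any(|k| lower.contains(&k.to_lowercase())) {
            return Some(rule.hint);
        }
    }
    None
}

/// Rules mirroring the TOML example above.
fn example_rules() -> Vec<Rule> {
    vec![
        Rule { hint: "reasoning", keywords: vec!["explain", "analyze", "why"],
               min_length: Some(200), max_length: None, priority: 10 },
        Rule { hint: "fast", keywords: vec!["hi", "hello", "thanks"],
               min_length: None, max_length: Some(50), priority: 5 },
    ]
}
```

A short greeting falls through the `reasoning` rule (too short) and matches `fast`; a long analytical message matches `reasoning` first because of its higher priority.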
## `[channels_config]`
Top-level channel options are configured under `channels_config`.
| Key | Default | Purpose |
|---|---|---|
| `message_timeout_secs` | `300` | Base timeout in seconds for channel message processing; runtime scales this with tool-loop depth (up to 4x) |
Examples:
- `[channels_config.telegram]`
- `[channels_config.whatsapp]`
- `[channels_config.email]`
Notes:
- Default `300s` is optimized for on-device LLMs (Ollama) which are slower than cloud APIs.
- Runtime timeout budget is `message_timeout_secs * scale`, where `scale = min(max_tool_iterations, 4)` and a minimum of `1`.
- This scaling avoids false timeouts when the first LLM turn is slow/retried but later tool-loop turns still need to complete.
- If using cloud APIs (OpenAI, Anthropic, etc.), you can reduce this to `60` or lower.
- Values below `30` are clamped to `30` to avoid immediate timeout churn.
- When a timeout occurs, users receive: `⚠️ Request timed out while waiting for the model. Please try again.`
- Telegram-only interruption behavior is controlled with `channels_config.telegram.interrupt_on_new_message` (default `false`).
When enabled, a newer message from the same sender in the same chat cancels the in-flight request and preserves interrupted user context.
- While `zeroclaw channel start` is running, updates to `default_provider`, `default_model`, `default_temperature`, `api_key`, `api_url`, and `reliability.*` are hot-applied from `config.toml` on the next inbound message.
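The clamp and scaling rules above can be sketched as a small function (an illustration of the stated rules, not the actual runtime code):

```rust
/// Effective per-message timeout: base clamped to >= 30s, scaled by
/// min(max_tool_iterations, 4) with a minimum scale of 1.
fn effective_timeout_secs(message_timeout_secs: u64, max_tool_iterations: u64) -> u64 {
    let base = message_timeout_secs.max(30); // values below 30 are clamped to 30
    let scale = max_tool_iterations.clamp(1, 4);
    base * scale
}
```

With the default `300` and a tool loop of 6 iterations, the budget is `300 * 4 = 1200` seconds.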
See detailed channel matrix and allowlist behavior in [channels-reference.md](channels-reference.md).
### `[channels_config.whatsapp]`
WhatsApp supports two backends under one config table.
Cloud API mode (Meta webhook):
| Key | Required | Purpose |
|---|---|---|
| `access_token` | Yes | Meta Cloud API bearer token |
| `phone_number_id` | Yes | Meta phone number ID |
| `verify_token` | Yes | Webhook verification token |
| `app_secret` | Optional | Enables webhook signature verification (`X-Hub-Signature-256`) |
| `allowed_numbers` | Recommended | Allowed inbound numbers (`[]` = deny all, `"*"` = allow all) |
WhatsApp Web mode (native client):
| Key | Required | Purpose |
|---|---|---|
| `session_path` | Yes | Persistent SQLite session path |
| `pair_phone` | Optional | Pair-code flow phone number (digits only) |
| `pair_code` | Optional | Custom pair code (otherwise auto-generated) |
| `allowed_numbers` | Recommended | Allowed inbound numbers (`[]` = deny all, `"*"` = allow all) |
Notes:
- WhatsApp Web requires build flag `whatsapp-web`.
- If both Cloud and Web fields are present, Cloud mode wins for backward compatibility.
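A minimal Web-mode example (the session path and phone number are illustrative; requires the `whatsapp-web` build flag):

```toml
[channels_config.whatsapp]
session_path = "whatsapp-session.db"   # persistent SQLite session
allowed_numbers = ["15551234567"]      # [] would deny all inbound messages
```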
## `[hardware]`
Hardware wizard configuration for physical-world access (STM32, probe, serial).
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Whether hardware access is enabled |
| `transport` | `none` | Transport mode: `"none"`, `"native"`, `"serial"`, or `"probe"` |
| `serial_port` | unset | Serial port path (e.g. `"/dev/ttyACM0"`) |
| `baud_rate` | `115200` | Serial baud rate |
| `probe_target` | unset | Probe target chip (e.g. `"STM32F401RE"`) |
| `workspace_datasheets` | `false` | Enable workspace datasheet RAG (index PDF schematics for AI pin lookups) |
Notes:
- Use `transport = "serial"` with `serial_port` for USB-serial connections.
- Use `transport = "probe"` with `probe_target` for debug-probe flashing (e.g. ST-Link).
- See [hardware-peripherals-design.md](hardware-peripherals-design.md) for protocol details.
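For example, a USB-serial STM32 Nucleo setup might look like this (device path is the common Linux default and may differ on your host):

```toml
[hardware]
enabled = true
transport = "serial"
serial_port = "/dev/ttyACM0"
baud_rate = 115200
```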
## `[peripherals]`
Higher-level peripheral board configuration. Boards become agent tools when enabled.
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable peripheral support (boards become agent tools) |
| `boards` | `[]` | Board configurations |
| `datasheet_dir` | unset | Path to datasheet docs (relative to workspace) for RAG retrieval |
Each entry in `boards`:
| Key | Default | Purpose |
|---|---|---|
| `board` | _required_ | Board type: `"nucleo-f401re"`, `"rpi-gpio"`, `"esp32"`, etc. |
| `transport` | `serial` | Transport: `"serial"`, `"native"`, `"websocket"` |
| `path` | unset | Path for serial: `"/dev/ttyACM0"`, `"/dev/ttyUSB0"` |
| `baud` | `115200` | Baud rate for serial |
```toml
[peripherals]
enabled = true
datasheet_dir = "docs/datasheets"
[[peripherals.boards]]
board = "nucleo-f401re"
transport = "serial"
path = "/dev/ttyACM0"
baud = 115200
[[peripherals.boards]]
board = "rpi-gpio"
transport = "native"
```
Notes:
- Place `.md`/`.txt` datasheet files named by board (e.g. `nucleo-f401re.md`, `rpi-gpio.md`) in `datasheet_dir` for RAG retrieval.
- See [hardware-peripherals-design.md](hardware-peripherals-design.md) for board protocol and firmware notes.
## Security-Relevant Defaults
- deny-by-default channel allowlists (`[]` means deny all)
After editing config:
```bash
zeroclaw status
zeroclaw doctor
zeroclaw channel doctor
zeroclaw service restart
```
## Related Docs


        security: SecurityConfig::autodetect(), // Silent!
    };
    config.save().await?;
    Ok(config)
}
```


2. One-click setup and dual bootstrap mode: [../one-click-bootstrap.md](../one-click-bootstrap.md)
3. Find commands by tasks: [../commands-reference.md](../commands-reference.md)
## Choose Your Path
| Scenario | Command |
|----------|---------|
| I have an API key, want fastest setup | `zeroclaw onboard --api-key sk-... --provider openrouter` |
| I want guided prompts | `zeroclaw onboard --interactive` |
| Config exists, just fix channels | `zeroclaw onboard --channels-only` |
| Using subscription auth | See [Subscription Auth](../../README.md#subscription-auth-openai-codex--claude-code) |
## Onboarding and Validation
- Quick onboarding: `zeroclaw onboard --api-key "sk-..." --provider openrouter`


For board integration, firmware flow, and peripheral architecture.
ZeroClaw's hardware subsystem enables direct control of microcontrollers and peripherals via the `Peripheral` trait. Each board exposes tools for GPIO, ADC, and sensor operations, allowing agent-driven hardware interaction on boards like STM32 Nucleo, Raspberry Pi, and ESP32. See [hardware-peripherals-design.md](../hardware-peripherals-design.md) for the full architecture.
## Entry Points
- Architecture and peripheral model: [../hardware-peripherals-design.md](../hardware-peripherals-design.md)


This page defines the fastest supported path to install and initialize ZeroClaw.
Last verified: **February 20, 2026**.
## Option 0: Homebrew (macOS/Linuxbrew)
```bash
brew install zeroclaw
```
## Option A (Recommended): Clone + local script
What it does by default:
1. `cargo build --release --locked`
2. `cargo install --path . --force --locked`
### Resource preflight and pre-built flow
Source builds typically require at least:
- **2 GB RAM + swap**
- **6 GB free disk**
When resources are constrained, bootstrap now attempts a pre-built binary first.
```bash
./bootstrap.sh --prefer-prebuilt
```
To require binary-only installation and fail if no compatible release asset exists:
```bash
./bootstrap.sh --prebuilt-only
```
To bypass pre-built flow and force source compilation:
```bash
./bootstrap.sh --force-source-build
```
## Dual-mode bootstrap
Default behavior is **app-only** (build/install ZeroClaw) and expects existing Rust toolchain.
Notes:
- `--install-system-deps` installs compiler/build prerequisites (may require `sudo`).
- `--install-rust` installs Rust via `rustup` when missing.
- `--prefer-prebuilt` tries release binary download first, then falls back to source build.
- `--prebuilt-only` disables source fallback.
- `--force-source-build` disables pre-built flow entirely.
## Option B: Remote one-liner
## Optional onboarding modes
### Containerized onboarding (Docker)
```bash
./bootstrap.sh --docker
```
This builds a local ZeroClaw image and launches onboarding inside a container while
persisting config/workspace to `./.zeroclaw-docker`.
### Quick onboarding (non-interactive)
```bash


Time-bound project status snapshots for planning documentation and operations work.
## Scope
Project snapshots are time-bound assessments of open PRs, issues, and documentation health. Use these to:
- Identify documentation gaps driven by feature work
- Prioritize docs maintenance alongside code changes
- Track evolving PR/issue pressure over time
For stable documentation classification (not time-bound), use [docs-inventory.md](../docs-inventory.md).


This document maps provider IDs, aliases, and credential environment variables.
Last verified: **February 19, 2026**.
## How to List Providers
Runtime resolution order is:
2. Provider-specific env var(s)
3. Generic fallback env vars: `ZEROCLAW_API_KEY` then `API_KEY`
For resilient fallback chains (`reliability.fallback_providers`), each fallback
provider resolves credentials independently. The primary provider's explicit
credential is not reused for fallback providers.
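A sketch of a fallback chain (provider names are illustrative; the table layout assumes `reliability.fallback_providers` lives under a `[reliability]` table, per this document's key path):

```toml
api_key = "sk-primary"                   # used by the primary provider only

[reliability]
fallback_providers = ["groq", "ollama"]
# groq resolves its own credential (e.g. GROQ_API_KEY); ollama is local.
```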
## Provider Catalog
| Canonical ID | Aliases | Local | Provider-specific env var(s) |
|---|---|---|---|
| `zai` | `z.ai` | No | `ZAI_API_KEY` |
| `glm` | `zhipu` | No | `GLM_API_KEY` |
| `minimax` | `minimax-intl`, `minimax-io`, `minimax-global`, `minimax-cn`, `minimaxi`, `minimax-oauth`, `minimax-oauth-cn`, `minimax-portal`, `minimax-portal-cn` | No | `MINIMAX_OAUTH_TOKEN`, `MINIMAX_API_KEY` |
| `bedrock` | `aws-bedrock` | No | `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` (optional: `AWS_REGION`) |
| `qianfan` | `baidu` | No | `QIANFAN_API_KEY` |
| `qwen` | `dashscope`, `qwen-intl`, `dashscope-intl`, `qwen-us`, `dashscope-us`, `qwen-code`, `qwen-oauth`, `qwen_oauth` | No | `QWEN_OAUTH_TOKEN`, `DASHSCOPE_API_KEY` |
| `groq` | — | No | `GROQ_API_KEY` |
| `mistral` | — | No | `MISTRAL_API_KEY` |
| `xai` | `grok` | No | `XAI_API_KEY` |
| `lmstudio` | `lm-studio` | Yes | (optional; local by default) |
| `nvidia` | `nvidia-nim`, `build.nvidia.com` | No | `NVIDIA_API_KEY` |
### Gemini Notes
- Provider ID: `gemini` (aliases: `google`, `google-gemini`)
- Auth can come from `GEMINI_API_KEY`, `GOOGLE_API_KEY`, or Gemini CLI OAuth cache (`~/.gemini/oauth_creds.json`)
- API key requests use `generativelanguage.googleapis.com/v1beta`
- Gemini CLI OAuth requests use `cloudcode-pa.googleapis.com/v1internal` with Code Assist request envelope semantics
### Ollama Vision Notes
- Provider ID: `ollama`
- Vision input is supported through user message image markers: ``[IMAGE:<source>]``.
- After multimodal normalization, ZeroClaw sends image payloads through Ollama's native `messages[].images` field.
- If a non-vision provider is selected, ZeroClaw returns a structured capability error instead of silently ignoring images.
### Bedrock Notes
- Provider ID: `bedrock` (alias: `aws-bedrock`)
- API: [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html)
- Authentication: AWS AKSK (not a single API key). Set `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` environment variables.
- Optional: `AWS_SESSION_TOKEN` for temporary/STS credentials, `AWS_REGION` or `AWS_DEFAULT_REGION` (default: `us-east-1`).
- Default onboarding model: `anthropic.claude-sonnet-4-5-20250929-v1:0`
- Supports native tool calling and prompt caching (`cachePoint`).
- Cross-region inference profiles supported (e.g., `us.anthropic.claude-*`).
- Model IDs use Bedrock format: `anthropic.claude-sonnet-4-6`, `anthropic.claude-opus-4-6-v1`, etc.
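A minimal config sketch (credentials come from the AWS environment, not `api_key`):

```toml
default_provider = "bedrock"
default_model = "anthropic.claude-sonnet-4-5-20250929-v1:0"
# Required in the environment:
#   AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
# Optional: AWS_SESSION_TOKEN, AWS_REGION (default: us-east-1)
```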
### Ollama Reasoning Toggle
You can control Ollama reasoning/thinking behavior from `config.toml`:
```toml
[runtime]
reasoning_enabled = false
```
Behavior:
- `false`: sends `think: false` to Ollama `/api/chat` requests.
- `true`: sends `think: true`.
- Unset: omits `think` and keeps Ollama/model defaults.
### Kimi Code Notes
- Provider ID: `kimi-code`
Optional:
- `MINIMAX_OAUTH_REGION=global` or `cn` (defaults by provider alias)
- `MINIMAX_OAUTH_CLIENT_ID` to override the default OAuth client id
Channel compatibility note:
- For MiniMax-backed channel conversations, runtime history is normalized to keep valid `user`/`assistant` turn order.
- Channel-specific delivery guidance (for example Telegram attachment markers) is merged into the leading system prompt instead of being appended as a trailing `system` turn.
## Qwen Code OAuth Setup (config.toml)
Set Qwen Code OAuth mode in config:
```toml
default_provider = "qwen-code"
api_key = "qwen-oauth"
```
Credential resolution for `qwen-code`:
1. Explicit `api_key` value (if not the placeholder `qwen-oauth`)
2. `QWEN_OAUTH_TOKEN`
3. `~/.qwen/oauth_creds.json` (reuses Qwen Code cached OAuth credentials)
4. Optional refresh via `QWEN_OAUTH_REFRESH_TOKEN` (or cached refresh token)
5. If no OAuth placeholder is used, `DASHSCOPE_API_KEY` can still be used as fallback
Optional endpoint override:
- `QWEN_OAUTH_RESOURCE_URL` (normalized to `https://.../v1` if needed)
- If unset, `resource_url` from cached OAuth credentials is used when available
## Model Routing (`hint:<name>`)
You can route model calls by hint using `[[model_routes]]`:
Then call with a hint model name (for example from tool or integration paths):
```text
hint:reasoning
```
## Embedding Routing (`hint:<name>`)
You can route embedding calls with the same hint pattern using `[[embedding_routes]]`.
Set `[memory].embedding_model` to a `hint:<name>` value to activate routing.
```toml
[memory]
embedding_model = "hint:semantic"
[[embedding_routes]]
hint = "semantic"
provider = "openai"
model = "text-embedding-3-small"
dimensions = 1536
[[embedding_routes]]
hint = "archive"
provider = "custom:https://embed.example.com/v1"
model = "your-embedding-model-id"
dimensions = 1024
```
Supported embedding providers:
- `none`
- `openai`
- `custom:<url>` (OpenAI-compatible embeddings endpoint)
Optional per-route key override:
```toml
[[embedding_routes]]
hint = "semantic"
provider = "openai"
model = "text-embedding-3-small"
api_key = "sk-route-specific"
```
## Upgrading Models Safely
Use stable hints and update only route targets when providers deprecate model IDs.
Recommended workflow:
1. Keep call sites stable (`hint:reasoning`, `hint:semantic`).
2. Change only the target model under `[[model_routes]]` or `[[embedding_routes]]`.
3. Run:
- `zeroclaw doctor`
- `zeroclaw status`
4. Smoke test one representative flow (chat + memory retrieval) before rollout.
This minimizes breakage because integrations and prompts do not need to change when model IDs are upgraded.


This guide focuses on common setup/runtime failures and fast resolution paths.
Last verified: **February 20, 2026**.
## Installation / Bootstrap
Fix:
```bash
./bootstrap.sh --install-system-deps
```
### Build fails on low-RAM / low-disk hosts
Symptoms:
- `cargo build --release` is killed (`signal: 9`, OOM killer, or `cannot allocate memory`)
- Build crashes after adding swap because disk space runs out
Why this happens:
- Runtime memory (<5MB for common operations) is not the same as compile-time memory.
- Full source build can require **2 GB RAM + swap** and **6+ GB free disk**.
- Enabling swap on a tiny disk can avoid RAM OOM but still fail due to disk exhaustion.
Preferred path for constrained machines:
```bash
./bootstrap.sh --prefer-prebuilt
```
Binary-only mode (no source fallback):
```bash
./bootstrap.sh --prebuilt-only
```
If you must compile from source on constrained hosts:
1. Add swap only if you also have enough free disk for both swap + build output.
1. Limit cargo parallelism:
```bash
CARGO_BUILD_JOBS=1 cargo build --release --locked
```
1. Reduce heavy features when Matrix is not required:
```bash
cargo build --release --locked --no-default-features --features hardware
```
1. Cross-compile on a stronger machine and copy the binary to the target host.
### Build is very slow or appears stuck
Symptoms:
- `cargo check` / `cargo build` appears stuck at `Checking zeroclaw` for a long time
- repeated `Blocking waiting for file lock on package cache` or `build directory`
Why this happens in ZeroClaw:
- Matrix E2EE stack (`matrix-sdk`, `ruma`, `vodozemac`) is large and expensive to type-check.
- TLS + crypto native build scripts (`aws-lc-sys`, `ring`) add noticeable compile time.
- `rusqlite` with bundled SQLite compiles C code locally.
- Running multiple cargo jobs/worktrees in parallel causes lock contention.
Fast checks:
```bash
cargo check --timings
cargo tree -d
```
The timing report is written to `target/cargo-timings/cargo-timing.html`.
Faster local iteration (when Matrix channel is not needed):
```bash
cargo check --no-default-features --features hardware
```
This skips `channel-matrix` and can significantly reduce compile time.
To build with Matrix support explicitly enabled:
```bash
cargo check --no-default-features --features hardware,channel-matrix
```
Lock-contention mitigation:
```bash
pgrep -af "cargo (check|build|test)|cargo check|cargo build|cargo test"
```
Stop unrelated cargo jobs before running your own build.
### `zeroclaw` command not found after install
Symptom:

flake.lock (generated, 99 lines):
{
"nodes": {
"fenix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1771398736,
"narHash": "sha256-pjV3C7VJHN0o2SvE3O6xiwraLt7bnlWIF3o7Q0BC1jk=",
"owner": "nix-community",
"repo": "fenix",
"rev": "0f608091816de13d92e1f4058b501028b782dddd",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1771369470,
"narHash": "sha256-0NBlEBKkN3lufyvFegY4TYv5mCNHbi5OmBDrzihbBMQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "0182a361324364ae3f436a63005877674cf45efb",
"type": "github"
},
"original": {
"id": "nixpkgs",
"ref": "nixos-unstable",
"type": "indirect"
}
},
"root": {
"inputs": {
"fenix": "fenix",
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1771353660,
"narHash": "sha256-yp1y55kXgaa08g/gR3CNiUdkg1JRjPYfkKtEIRNE6S8=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "09f2d468eda25a5f06ae70046357c70ae5cd77c7",
"type": "github"
},
"original": {
"owner": "rust-lang",
"ref": "nightly",
"repo": "rust-analyzer",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

flake.nix (new file, 61 lines):
{
inputs = {
flake-utils.url = "github:numtide/flake-utils";
fenix = {
url = "github:nix-community/fenix";
inputs.nixpkgs.follows = "nixpkgs";
};
nixpkgs.url = "nixpkgs/nixos-unstable";
};
outputs = { flake-utils, fenix, nixpkgs, ... }:
let
nixosModule = { pkgs, ... }: {
nixpkgs.overlays = [ fenix.overlays.default ];
environment.systemPackages = [
(pkgs.fenix.stable.withComponents [
"cargo"
"clippy"
"rust-src"
"rustc"
"rustfmt"
])
pkgs.rust-analyzer
];
};
in
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = import nixpkgs {
inherit system;
overlays = [ fenix.overlays.default ];
};
rustToolchain = pkgs.fenix.stable.withComponents [
"cargo"
"clippy"
"rust-src"
"rustc"
"rustfmt"
];
in {
packages.default = fenix.packages.${system}.stable.toolchain;
devShells.default = pkgs.mkShell {
packages = [
rustToolchain
pkgs.rust-analyzer
];
};
}) // {
nixosConfigurations = {
nixos = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [ nixosModule ];
};
nixos-aarch64 = nixpkgs.lib.nixosSystem {
system = "aarch64-linux";
modules = [ nixosModule ];
};
};
};
}


name = "fuzz_tool_params"
path = "fuzz_targets/fuzz_tool_params.rs"
test = false
doc = false
[[bin]]
name = "fuzz_webhook_payload"
path = "fuzz_targets/fuzz_webhook_payload.rs"
test = false
doc = false
[[bin]]
name = "fuzz_provider_response"
path = "fuzz_targets/fuzz_provider_response.rs"
test = false
doc = false
[[bin]]
name = "fuzz_command_validation"
path = "fuzz_targets/fuzz_command_validation.rs"
test = false
doc = false

fuzz_targets/fuzz_command_validation.rs (new file):
#![no_main]
use libfuzzer_sys::fuzz_target;
use zeroclaw::security::SecurityPolicy;
fuzz_target!(|data: &[u8]| {
if let Ok(s) = std::str::from_utf8(data) {
let policy = SecurityPolicy::default();
let _ = policy.validate_command_execution(s, false);
}
});

fuzz_targets/fuzz_provider_response.rs (new file):
#![no_main]
use libfuzzer_sys::fuzz_target;
fuzz_target!(|data: &[u8]| {
if let Ok(s) = std::str::from_utf8(data) {
// Fuzz provider API response deserialization
let _ = serde_json::from_str::<serde_json::Value>(s);
}
});

fuzz_targets/fuzz_webhook_payload.rs (new file):
#![no_main]
use libfuzzer_sys::fuzz_target;
fuzz_target!(|data: &[u8]| {
if let Ok(s) = std::str::from_utf8(data) {
// Fuzz webhook body deserialization
let _ = serde_json::from_str::<serde_json::Value>(s);
}
});

usage() {
cat <<'USAGE'
ZeroClaw installer bootstrap engine

Usage:
  ./zeroclaw_install.sh [options]
  ./bootstrap.sh [options]   # compatibility entrypoint

Modes:
  Default mode installs/builds ZeroClaw only (requires existing Rust toolchain).
  Guided mode asks setup questions and configures options interactively.
  Optional bootstrap mode can also install system dependencies and Rust.

Options:
  --guided                 Run interactive guided installer
  --no-guided              Disable guided installer
  --docker                 Run bootstrap in Docker and launch onboarding inside the container
  --install-system-deps    Install build dependencies (Linux/macOS)
  --install-rust           Install Rust via rustup if missing
  --prefer-prebuilt        Try latest release binary first; fallback to source build on miss
  --prebuilt-only          Install only from latest release binary (no source build fallback)
  --force-source-build     Disable prebuilt flow and always build from source
  --onboard                Run onboarding after install
  --interactive-onboard    Run interactive onboarding (implies --onboard)
  --api-key <key>          API key for non-interactive onboarding
  --provider <id>          Provider for non-interactive onboarding (default: openrouter)
  --model <id>             Model for non-interactive onboarding (optional)
  --build-first            Alias for explicitly enabling separate `cargo build --release --locked`
  --skip-build             Skip `cargo build --release --locked`
  --skip-install           Skip `cargo install --path . --force --locked`
  -h, --help               Show help

Examples:
  ./zeroclaw_install.sh
  ./zeroclaw_install.sh --guided
  ./zeroclaw_install.sh --install-system-deps --install-rust
  ./zeroclaw_install.sh --prefer-prebuilt
  ./zeroclaw_install.sh --prebuilt-only
  ./zeroclaw_install.sh --onboard --api-key "sk-..." --provider openrouter [--model "openrouter/auto"]
./zeroclaw_install.sh --interactive-onboard
# Compatibility entrypoint:
./bootstrap.sh --docker
# Remote one-liner # Remote one-liner
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts/bootstrap.sh | bash curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts/bootstrap.sh | bash
Environment: Environment:
ZEROCLAW_DOCKER_DATA_DIR Host path for Docker config/workspace persistence
ZEROCLAW_DOCKER_IMAGE Docker image tag to build/run (default: zeroclaw-bootstrap:local)
ZEROCLAW_API_KEY Used when --api-key is not provided ZEROCLAW_API_KEY Used when --api-key is not provided
ZEROCLAW_PROVIDER Used when --provider is not provided (default: openrouter) ZEROCLAW_PROVIDER Used when --provider is not provided (default: openrouter)
ZEROCLAW_MODEL Used when --model is not provided
ZEROCLAW_BOOTSTRAP_MIN_RAM_MB Minimum RAM threshold for source build preflight (default: 2048)
ZEROCLAW_BOOTSTRAP_MIN_DISK_MB Minimum free disk threshold for source build preflight (default: 6144)
ZEROCLAW_DISABLE_ALPINE_AUTO_DEPS
Set to 1 to disable Alpine auto-install of missing prerequisites
USAGE USAGE
} }
@@ -54,6 +77,155 @@ have_cmd() {
command -v "$1" >/dev/null 2>&1
}
get_total_memory_mb() {
case "$(uname -s)" in
Linux)
if [[ -r /proc/meminfo ]]; then
awk '/MemTotal:/ {printf "%d\n", $2 / 1024}' /proc/meminfo
fi
;;
Darwin)
if have_cmd sysctl; then
local bytes
bytes="$(sysctl -n hw.memsize 2>/dev/null || true)"
if [[ "$bytes" =~ ^[0-9]+$ ]]; then
echo $((bytes / 1024 / 1024))
fi
fi
;;
esac
}
get_available_disk_mb() {
local path="${1:-.}"
local free_kb
free_kb="$(df -Pk "$path" 2>/dev/null | awk 'NR==2 {print $4}')"
if [[ "$free_kb" =~ ^[0-9]+$ ]]; then
echo $((free_kb / 1024))
fi
}
detect_release_target() {
local os arch
os="$(uname -s)"
arch="$(uname -m)"
case "$os:$arch" in
Linux:x86_64)
echo "x86_64-unknown-linux-gnu"
;;
Linux:aarch64|Linux:arm64)
echo "aarch64-unknown-linux-gnu"
;;
Linux:armv7l|Linux:armv6l)
echo "armv7-unknown-linux-gnueabihf"
;;
Darwin:x86_64)
echo "x86_64-apple-darwin"
;;
Darwin:arm64|Darwin:aarch64)
echo "aarch64-apple-darwin"
;;
*)
return 1
;;
esac
}
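The case table above is a straight platform-to-triple mapping. A minimal Python sketch of the same lookup (a hypothetical helper for illustration, not part of the repo; the triples are the ones the script emits) shows the supported matrix at a glance:

```python
# Maps (uname -s, uname -m) pairs to the Rust release target triples
# that detect_release_target resolves.
RELEASE_TARGETS = {
    ("Linux", "x86_64"): "x86_64-unknown-linux-gnu",
    ("Linux", "aarch64"): "aarch64-unknown-linux-gnu",
    ("Linux", "arm64"): "aarch64-unknown-linux-gnu",
    ("Linux", "armv7l"): "armv7-unknown-linux-gnueabihf",
    ("Linux", "armv6l"): "armv7-unknown-linux-gnueabihf",
    ("Darwin", "x86_64"): "x86_64-apple-darwin",
    ("Darwin", "arm64"): "aarch64-apple-darwin",
    ("Darwin", "aarch64"): "aarch64-apple-darwin",
}

def detect_release_target(os_name: str, arch: str):
    """Return the release target triple, or None for unsupported platforms
    (the shell version signals the same case with a non-zero return)."""
    return RELEASE_TARGETS.get((os_name, arch))
```

An unsupported pair returning None corresponds to the shell `return 1`, which makes the installer fall back to a source build.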
should_attempt_prebuilt_for_resources() {
local workspace="${1:-.}"
local min_ram_mb min_disk_mb total_ram_mb free_disk_mb low_resource
min_ram_mb="${ZEROCLAW_BOOTSTRAP_MIN_RAM_MB:-2048}"
min_disk_mb="${ZEROCLAW_BOOTSTRAP_MIN_DISK_MB:-6144}"
total_ram_mb="$(get_total_memory_mb || true)"
free_disk_mb="$(get_available_disk_mb "$workspace" || true)"
low_resource=false
if [[ "$total_ram_mb" =~ ^[0-9]+$ && "$total_ram_mb" -lt "$min_ram_mb" ]]; then
low_resource=true
fi
if [[ "$free_disk_mb" =~ ^[0-9]+$ && "$free_disk_mb" -lt "$min_disk_mb" ]]; then
low_resource=true
fi
if [[ "$low_resource" == true ]]; then
warn "Source build preflight indicates constrained resources."
if [[ "$total_ram_mb" =~ ^[0-9]+$ ]]; then
warn "Detected RAM: ${total_ram_mb}MB (recommended >= ${min_ram_mb}MB for local source builds)."
else
warn "Unable to detect total RAM automatically."
fi
if [[ "$free_disk_mb" =~ ^[0-9]+$ ]]; then
warn "Detected free disk: ${free_disk_mb}MB (recommended >= ${min_disk_mb}MB)."
else
warn "Unable to detect free disk space automatically."
fi
return 0
fi
return 1
}
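The preflight decision reduces to two threshold checks, where an undetectable value never triggers the low-resource path. A small Python sketch of that logic (a hypothetical `is_low_resource` helper, assuming detection already produced MB figures or None):

```python
def is_low_resource(total_ram_mb, free_disk_mb,
                    min_ram_mb=2048, min_disk_mb=6144):
    """Mirror of should_attempt_prebuilt_for_resources: flag a constrained
    machine when either detected RAM or free disk falls below its threshold.
    Undetectable values (None) do not trigger the flag, matching the shell
    numeric-regex guard."""
    low = False
    if total_ram_mb is not None and total_ram_mb < min_ram_mb:
        low = True
    if free_disk_mb is not None and free_disk_mb < min_disk_mb:
        low = True
    return low
```

With the defaults, a 1 GiB machine with plenty of disk is still flagged, which is what pushes the installer toward the pre-built binary first.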
install_prebuilt_binary() {
local target archive_url temp_dir archive_path extracted_bin install_dir
if ! have_cmd curl; then
warn "curl is required for pre-built binary installation."
return 1
fi
if ! have_cmd tar; then
warn "tar is required for pre-built binary installation."
return 1
fi
target="$(detect_release_target || true)"
if [[ -z "$target" ]]; then
warn "No pre-built binary target mapping for $(uname -s)/$(uname -m)."
return 1
fi
archive_url="https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-${target}.tar.gz"
temp_dir="$(mktemp -d -t zeroclaw-prebuilt-XXXXXX)"
archive_path="$temp_dir/zeroclaw-${target}.tar.gz"
info "Attempting pre-built binary install for target: $target"
if ! curl -fsSL "$archive_url" -o "$archive_path"; then
warn "Could not download release asset: $archive_url"
rm -rf "$temp_dir"
return 1
fi
if ! tar -xzf "$archive_path" -C "$temp_dir"; then
warn "Failed to extract pre-built archive."
rm -rf "$temp_dir"
return 1
fi
extracted_bin="$temp_dir/zeroclaw"
if [[ ! -x "$extracted_bin" ]]; then
extracted_bin="$(find "$temp_dir" -maxdepth 2 -type f -name zeroclaw -perm -u+x | head -n 1 || true)"
fi
if [[ -z "$extracted_bin" || ! -x "$extracted_bin" ]]; then
warn "Archive did not contain an executable zeroclaw binary."
rm -rf "$temp_dir"
return 1
fi
install_dir="$HOME/.cargo/bin"
mkdir -p "$install_dir"
install -m 0755 "$extracted_bin" "$install_dir/zeroclaw"
rm -rf "$temp_dir"
info "Installed pre-built binary to $install_dir/zeroclaw"
if [[ ":$PATH:" != *":$install_dir:"* ]]; then
warn "$install_dir is not in PATH for this shell."
warn "Run: export PATH=\"$install_dir:\$PATH\""
fi
return 0
}
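The download URL above is built mechanically from the target triple. A one-function Python sketch of that construction (illustrative helper name, not part of the repo):

```python
def release_asset_url(target: str) -> str:
    """Build the GitHub latest-release asset URL that
    install_prebuilt_binary downloads for a given target triple."""
    return ("https://github.com/zeroclaw-labs/zeroclaw/releases/latest/"
            f"download/zeroclaw-{target}.tar.gz")
```

Using the `releases/latest/download/` redirect means the script never has to query the GitHub API for the newest tag; a missing asset simply makes `curl -f` fail and the installer falls back to a source build.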
run_privileged() {
if [[ "$(id -u)" -eq 0 ]]; then
"$@"
@@ -65,19 +237,152 @@ run_privileged() {
fi
}
is_container_runtime() {
if [[ -f /.dockerenv || -f /run/.containerenv ]]; then
return 0
fi
if [[ -r /proc/1/cgroup ]] && grep -Eq '(docker|containerd|kubepods|podman|lxc)' /proc/1/cgroup; then
return 0
fi
return 1
}
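Besides the `/.dockerenv` and `/run/.containerenv` marker files, the helper greps `/proc/1/cgroup` for well-known runtime names. The same heuristic in Python (hypothetical helper, same pattern as the shell `grep -Eq`):

```python
import re

# Same alternation the shell helper greps for in /proc/1/cgroup.
_CONTAINER_RE = re.compile(r"(docker|containerd|kubepods|podman|lxc)")

def looks_like_container(cgroup_text: str) -> bool:
    """Heuristic mirror of is_container_runtime's cgroup check."""
    return bool(_CONTAINER_RE.search(cgroup_text))
```

It is only a heuristic: cgroup v2 hosts often show a bare `0::/` path inside containers too, which is why the marker-file checks come first.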
run_pacman() {
if ! have_cmd pacman; then
error "pacman is not available."
return 1
fi
if ! is_container_runtime; then
run_privileged pacman "$@"
return $?
fi
local pacman_cfg_tmp=""
local pacman_rc=0
pacman_cfg_tmp="$(mktemp /tmp/zeroclaw-pacman.XXXXXX.conf)"
cp /etc/pacman.conf "$pacman_cfg_tmp"
if ! grep -Eq '^[[:space:]]*DisableSandboxSyscalls([[:space:]]|$)' "$pacman_cfg_tmp"; then
printf '\nDisableSandboxSyscalls\n' >> "$pacman_cfg_tmp"
fi
if run_privileged pacman --config "$pacman_cfg_tmp" "$@"; then
pacman_rc=0
else
pacman_rc=$?
fi
rm -f "$pacman_cfg_tmp"
return "$pacman_rc"
}
ALPINE_PREREQ_PACKAGES=(
bash
build-base
pkgconf
git
curl
openssl-dev
perl
ca-certificates
)
ALPINE_MISSING_PKGS=()
find_missing_alpine_prereqs() {
ALPINE_MISSING_PKGS=()
if ! have_cmd apk; then
return 0
fi
local pkg=""
for pkg in "${ALPINE_PREREQ_PACKAGES[@]}"; do
if ! apk info -e "$pkg" >/dev/null 2>&1; then
ALPINE_MISSING_PKGS+=("$pkg")
fi
done
}
bool_to_word() {
if [[ "$1" == true ]]; then
echo "yes"
else
echo "no"
fi
}
prompt_yes_no() {
local question="$1"
local default_answer="$2"
local prompt=""
local answer=""
if [[ "$default_answer" == "yes" ]]; then
prompt="[Y/n]"
else
prompt="[y/N]"
fi
while true; do
if ! read -r -p "$question $prompt " answer; then
error "guided installer input was interrupted."
exit 1
fi
answer="${answer:-$default_answer}"
case "$(printf '%s' "$answer" | tr '[:upper:]' '[:lower:]')" in
y|yes)
return 0
;;
n|no)
return 1
;;
*)
echo "Please answer yes or no."
;;
esac
done
}
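The prompt loop's answer handling is: empty input takes the default, matching is case-insensitive, and anything else re-prompts. A compact Python sketch of that normalization (hypothetical helper, returning None for "ask again"):

```python
def normalize_answer(raw: str, default: str):
    """Mirror prompt_yes_no's matching: empty input takes the default,
    answers are case-insensitive, and anything unrecognized returns None
    (the shell loop re-prompts in that case)."""
    answer = (raw or default).lower()
    if answer in ("y", "yes"):
        return True
    if answer in ("n", "no"):
        return False
    return None
```

This keeps the `[Y/n]` / `[y/N]` convention honest: pressing Enter always selects whichever side is capitalized.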
install_system_deps() {
info "Installing system dependencies"
case "$(uname -s)" in
Linux)
-if have_cmd apt-get; then
+if have_cmd apk; then
+find_missing_alpine_prereqs
+if [[ ${#ALPINE_MISSING_PKGS[@]} -eq 0 ]]; then
+info "Alpine prerequisites already installed"
+else
+info "Installing Alpine prerequisites: ${ALPINE_MISSING_PKGS[*]}"
+run_privileged apk add --no-cache "${ALPINE_MISSING_PKGS[@]}"
+fi
+elif have_cmd apt-get; then
run_privileged apt-get update -qq
run_privileged apt-get install -y build-essential pkg-config git curl
elif have_cmd dnf; then
-run_privileged dnf group install -y development-tools
-run_privileged dnf install -y pkg-config git curl
+run_privileged dnf install -y \
+gcc \
+gcc-c++ \
+make \
+pkgconf-pkg-config \
+git \
+curl \
+openssl-devel \
+perl
+elif have_cmd pacman; then
+run_pacman -Sy --noconfirm
+run_pacman -S --noconfirm --needed \
+gcc \
+make \
+pkgconf \
+git \
+curl \
+openssl \
+perl \
+ca-certificates
else
-warn "Unsupported Linux distribution. Install compiler toolchain + pkg-config + git + curl manually."
+warn "Unsupported Linux distribution. Install compiler toolchain + pkg-config + git + curl + OpenSSL headers + perl manually."
fi
;;
Darwin)
@@ -126,22 +431,236 @@ install_rust_toolchain() {
fi
}
run_guided_installer() {
local os_name="$1"
local provider_input=""
local model_input=""
local api_key_input=""
echo
echo "ZeroClaw guided installer"
echo "Answer a few questions, then the installer will run automatically."
echo
if [[ "$os_name" == "Linux" ]]; then
if prompt_yes_no "Install Linux build dependencies (toolchain/pkg-config/git/curl)?" "yes"; then
INSTALL_SYSTEM_DEPS=true
fi
else
if prompt_yes_no "Install system dependencies for $os_name?" "no"; then
INSTALL_SYSTEM_DEPS=true
fi
fi
if have_cmd cargo && have_cmd rustc; then
info "Detected Rust toolchain: $(rustc --version)"
else
if prompt_yes_no "Rust toolchain not found. Install Rust via rustup now?" "yes"; then
INSTALL_RUST=true
fi
fi
if prompt_yes_no "Run a separate prebuild before install?" "yes"; then
SKIP_BUILD=false
else
SKIP_BUILD=true
fi
if prompt_yes_no "Install zeroclaw into cargo bin now?" "yes"; then
SKIP_INSTALL=false
else
SKIP_INSTALL=true
fi
if prompt_yes_no "Run onboarding after install?" "no"; then
RUN_ONBOARD=true
if prompt_yes_no "Use interactive onboarding?" "yes"; then
INTERACTIVE_ONBOARD=true
else
INTERACTIVE_ONBOARD=false
if ! read -r -p "Provider [$PROVIDER]: " provider_input; then
error "guided installer input was interrupted."
exit 1
fi
if [[ -n "$provider_input" ]]; then
PROVIDER="$provider_input"
fi
if ! read -r -p "Model [${MODEL:-leave empty}]: " model_input; then
error "guided installer input was interrupted."
exit 1
fi
if [[ -n "$model_input" ]]; then
MODEL="$model_input"
fi
if [[ -z "$API_KEY" ]]; then
if ! read -r -s -p "API key (hidden, leave empty to switch to interactive onboarding): " api_key_input; then
echo
error "guided installer input was interrupted."
exit 1
fi
echo
if [[ -n "$api_key_input" ]]; then
API_KEY="$api_key_input"
else
warn "No API key entered. Using interactive onboarding instead."
INTERACTIVE_ONBOARD=true
fi
fi
fi
fi
echo
info "Installer plan"
local install_binary=true
local build_first=false
if [[ "$SKIP_INSTALL" == true ]]; then
install_binary=false
fi
if [[ "$SKIP_BUILD" == false ]]; then
build_first=true
fi
echo " docker-mode: $(bool_to_word "$DOCKER_MODE")"
echo " install-system-deps: $(bool_to_word "$INSTALL_SYSTEM_DEPS")"
echo " install-rust: $(bool_to_word "$INSTALL_RUST")"
echo " build-first: $(bool_to_word "$build_first")"
echo " install-binary: $(bool_to_word "$install_binary")"
echo " onboard: $(bool_to_word "$RUN_ONBOARD")"
if [[ "$RUN_ONBOARD" == true ]]; then
echo " interactive-onboard: $(bool_to_word "$INTERACTIVE_ONBOARD")"
if [[ "$INTERACTIVE_ONBOARD" == false ]]; then
echo " provider: $PROVIDER"
if [[ -n "$MODEL" ]]; then
echo " model: $MODEL"
fi
fi
fi
echo
if ! prompt_yes_no "Proceed with this install plan?" "yes"; then
info "Installation canceled by user."
exit 0
fi
}
ensure_docker_ready() {
if ! have_cmd docker; then
error "docker is not installed."
cat <<'MSG' >&2
Install Docker first, then re-run with:
./zeroclaw_install.sh --docker
MSG
exit 1
fi
if ! docker info >/dev/null 2>&1; then
error "Docker daemon is not reachable."
error "Start Docker and re-run bootstrap."
exit 1
fi
}
run_docker_bootstrap() {
local docker_image docker_data_dir default_data_dir
docker_image="${ZEROCLAW_DOCKER_IMAGE:-zeroclaw-bootstrap:local}"
if [[ "$TEMP_CLONE" == true ]]; then
default_data_dir="$HOME/.zeroclaw-docker"
else
default_data_dir="$WORK_DIR/.zeroclaw-docker"
fi
docker_data_dir="${ZEROCLAW_DOCKER_DATA_DIR:-$default_data_dir}"
DOCKER_DATA_DIR="$docker_data_dir"
mkdir -p "$docker_data_dir/.zeroclaw" "$docker_data_dir/workspace"
if [[ "$SKIP_INSTALL" == true ]]; then
warn "--skip-install has no effect with --docker."
fi
if [[ "$SKIP_BUILD" == false ]]; then
info "Building Docker image ($docker_image)"
docker build --target release -t "$docker_image" "$WORK_DIR"
else
info "Skipping Docker image build"
fi
info "Docker data directory: $docker_data_dir"
local onboard_cmd=()
if [[ "$INTERACTIVE_ONBOARD" == true ]]; then
info "Launching interactive onboarding in container"
onboard_cmd=(onboard --interactive)
else
if [[ -z "$API_KEY" ]]; then
cat <<'MSG'
==> Onboarding requested, but API key not provided.
Use either:
--api-key "sk-..."
or:
ZEROCLAW_API_KEY="sk-..." ./zeroclaw_install.sh --docker
or run interactive:
./zeroclaw_install.sh --docker --interactive-onboard
MSG
exit 1
fi
if [[ -n "$MODEL" ]]; then
info "Launching quick onboarding in container (provider: $PROVIDER, model: $MODEL)"
else
info "Launching quick onboarding in container (provider: $PROVIDER)"
fi
onboard_cmd=(onboard --api-key "$API_KEY" --provider "$PROVIDER")
if [[ -n "$MODEL" ]]; then
onboard_cmd+=(--model "$MODEL")
fi
fi
docker run --rm -it \
--user "$(id -u):$(id -g)" \
-e HOME=/zeroclaw-data \
-e ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace \
-v "$docker_data_dir/.zeroclaw:/zeroclaw-data/.zeroclaw" \
-v "$docker_data_dir/workspace:/zeroclaw-data/workspace" \
"$docker_image" \
"${onboard_cmd[@]}"
}
SCRIPT_PATH="${BASH_SOURCE[0]:-$0}"
SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_PATH")" >/dev/null 2>&1 && pwd || pwd)"
ROOT_DIR="$(cd "$SCRIPT_DIR/.." >/dev/null 2>&1 && pwd || pwd)"
REPO_URL="https://github.com/zeroclaw-labs/zeroclaw.git"
+ORIGINAL_ARG_COUNT=$#
+GUIDED_MODE="auto"
+DOCKER_MODE=false
INSTALL_SYSTEM_DEPS=false
INSTALL_RUST=false
+PREFER_PREBUILT=false
+PREBUILT_ONLY=false
+FORCE_SOURCE_BUILD=false
RUN_ONBOARD=false
INTERACTIVE_ONBOARD=false
SKIP_BUILD=false
SKIP_INSTALL=false
+PREBUILT_INSTALLED=false
API_KEY="${ZEROCLAW_API_KEY:-}"
PROVIDER="${ZEROCLAW_PROVIDER:-openrouter}"
+MODEL="${ZEROCLAW_MODEL:-}"
while [[ $# -gt 0 ]]; do
case "$1" in
+--guided)
+GUIDED_MODE="on"
+shift
+;;
+--no-guided)
+GUIDED_MODE="off"
+shift
+;;
+--docker)
+DOCKER_MODE=true
+shift
+;;
--install-system-deps)
INSTALL_SYSTEM_DEPS=true
shift
@@ -150,6 +669,18 @@ while [[ $# -gt 0 ]]; do
INSTALL_RUST=true
shift
;;
--prefer-prebuilt)
PREFER_PREBUILT=true
shift
;;
--prebuilt-only)
PREBUILT_ONLY=true
shift
;;
--force-source-build)
FORCE_SOURCE_BUILD=true
shift
;;
--onboard)
RUN_ONBOARD=true
shift
@@ -175,6 +706,18 @@ while [[ $# -gt 0 ]]; do
}
shift 2
;;
--model)
MODEL="${2:-}"
[[ -n "$MODEL" ]] || {
error "--model requires a value"
exit 1
}
shift 2
;;
--build-first)
SKIP_BUILD=false
shift
;;
--skip-build)
SKIP_BUILD=true
shift
@@ -196,22 +739,48 @@ while [[ $# -gt 0 ]]; do
esac
done
-if [[ "$INSTALL_SYSTEM_DEPS" == true ]]; then
-install_system_deps
-fi
-if [[ "$INSTALL_RUST" == true ]]; then
-install_rust_toolchain
-fi
-if ! have_cmd cargo; then
-error "cargo is not installed."
-cat <<'MSG' >&2
-Install Rust first: https://rustup.rs/
-or re-run with:
-./bootstrap.sh --install-rust
-MSG
-exit 1
-fi
+OS_NAME="$(uname -s)"
+if [[ "$GUIDED_MODE" == "auto" ]]; then
+if [[ "$OS_NAME" == "Linux" && "$ORIGINAL_ARG_COUNT" -eq 0 && -t 0 && -t 1 ]]; then
+GUIDED_MODE="on"
+else
+GUIDED_MODE="off"
+fi
+fi
+if [[ "$DOCKER_MODE" == true && "$GUIDED_MODE" == "on" ]]; then
+warn "--guided is ignored with --docker."
+GUIDED_MODE="off"
+fi
+if [[ "$GUIDED_MODE" == "on" ]]; then
+run_guided_installer "$OS_NAME"
+fi
+
+if [[ "$DOCKER_MODE" == true ]]; then
+if [[ "$INSTALL_SYSTEM_DEPS" == true ]]; then
+warn "--install-system-deps is ignored with --docker."
+fi
+if [[ "$INSTALL_RUST" == true ]]; then
+warn "--install-rust is ignored with --docker."
+fi
+else
+if [[ "$OS_NAME" == "Linux" && -z "${ZEROCLAW_DISABLE_ALPINE_AUTO_DEPS:-}" ]] && have_cmd apk; then
+find_missing_alpine_prereqs
+if [[ ${#ALPINE_MISSING_PKGS[@]} -gt 0 && "$INSTALL_SYSTEM_DEPS" == false ]]; then
+info "Detected Alpine with missing prerequisites: ${ALPINE_MISSING_PKGS[*]}"
+info "Auto-enabling system dependency installation (set ZEROCLAW_DISABLE_ALPINE_AUTO_DEPS=1 to disable)."
+INSTALL_SYSTEM_DEPS=true
+fi
+fi
+if [[ "$INSTALL_SYSTEM_DEPS" == true ]]; then
+install_system_deps
+fi
+if [[ "$INSTALL_RUST" == true ]]; then
+install_rust_toolchain
+fi
+fi
WORK_DIR="$ROOT_DIR"
@@ -254,6 +823,73 @@ echo "  workspace: $WORK_DIR"
cd "$WORK_DIR"
if [[ "$FORCE_SOURCE_BUILD" == true ]]; then
PREFER_PREBUILT=false
PREBUILT_ONLY=false
fi
if [[ "$PREBUILT_ONLY" == true ]]; then
PREFER_PREBUILT=true
fi
if [[ "$DOCKER_MODE" == true ]]; then
ensure_docker_ready
if [[ "$RUN_ONBOARD" == false ]]; then
RUN_ONBOARD=true
if [[ -z "$API_KEY" ]]; then
INTERACTIVE_ONBOARD=true
fi
fi
run_docker_bootstrap
cat <<'DONE'
✅ Docker bootstrap complete.
Your containerized ZeroClaw data is persisted under:
DONE
echo " $DOCKER_DATA_DIR"
cat <<'DONE'
Next steps:
./zeroclaw_install.sh --docker --interactive-onboard
./zeroclaw_install.sh --docker --api-key "sk-..." --provider openrouter
DONE
exit 0
fi
if [[ "$FORCE_SOURCE_BUILD" == false ]]; then
if [[ "$PREFER_PREBUILT" == false && "$PREBUILT_ONLY" == false ]]; then
if should_attempt_prebuilt_for_resources "$WORK_DIR"; then
info "Attempting pre-built binary first due to resource preflight."
PREFER_PREBUILT=true
fi
fi
if [[ "$PREFER_PREBUILT" == true ]]; then
if install_prebuilt_binary; then
PREBUILT_INSTALLED=true
SKIP_BUILD=true
SKIP_INSTALL=true
elif [[ "$PREBUILT_ONLY" == true ]]; then
error "Pre-built-only mode requested, but no compatible release asset is available."
error "Try again later, or run with --force-source-build on a machine with enough RAM/disk."
exit 1
else
warn "Pre-built install unavailable; falling back to source build."
fi
fi
fi
if [[ "$PREBUILT_INSTALLED" == false && ( "$SKIP_BUILD" == false || "$SKIP_INSTALL" == false ) ]] && ! have_cmd cargo; then
error "cargo is not installed."
cat <<'MSG' >&2
Install Rust first: https://rustup.rs/
or re-run with:
./zeroclaw_install.sh --install-rust
MSG
exit 1
fi
if [[ "$SKIP_BUILD" == false ]]; then
info "Building release binary"
cargo build --release --locked
@@ -271,6 +907,8 @@ fi
ZEROCLAW_BIN=""
if have_cmd zeroclaw; then
ZEROCLAW_BIN="zeroclaw"
+elif [[ -x "$HOME/.cargo/bin/zeroclaw" ]]; then
+ZEROCLAW_BIN="$HOME/.cargo/bin/zeroclaw"
elif [[ -x "$WORK_DIR/target/release/zeroclaw" ]]; then
ZEROCLAW_BIN="$WORK_DIR/target/release/zeroclaw"
fi
@@ -292,14 +930,22 @@ if [[ "$RUN_ONBOARD" == true ]]; then
Use either:
--api-key "sk-..."
or:
-ZEROCLAW_API_KEY="sk-..." ./bootstrap.sh --onboard
+ZEROCLAW_API_KEY="sk-..." ./zeroclaw_install.sh --onboard
or run interactive:
-./bootstrap.sh --interactive-onboard
+./zeroclaw_install.sh --interactive-onboard
MSG
exit 1
fi
-info "Running quick onboarding (provider: $PROVIDER)"
-"$ZEROCLAW_BIN" onboard --api-key "$API_KEY" --provider "$PROVIDER"
+if [[ -n "$MODEL" ]]; then
+info "Running quick onboarding (provider: $PROVIDER, model: $MODEL)"
+else
+info "Running quick onboarding (provider: $PROVIDER)"
+fi
+ONBOARD_CMD=("$ZEROCLAW_BIN" onboard --api-key "$API_KEY" --provider "$PROVIDER")
+if [[ -n "$MODEL" ]]; then
+ONBOARD_CMD+=(--model "$MODEL")
+fi
+"${ONBOARD_CMD[@]}"
fi
fi

View file

@@ -0,0 +1,209 @@
#!/usr/bin/env python3
"""Fetch GitHub Actions workflow runs for a given date and summarize costs.
Usage:
python fetch_actions_data.py [OPTIONS]
Options:
--date YYYY-MM-DD Date to query (default: yesterday)
--mode brief|full Output mode (default: full)
brief: billable minutes/hours table only
full: detailed breakdown with per-run list
--repo OWNER/NAME Repository (default: zeroclaw-labs/zeroclaw)
-h, --help Show this help message
"""
import argparse
import json
import subprocess
from datetime import datetime, timedelta, timezone
def parse_args():
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="Fetch GitHub Actions workflow runs and summarize costs.",
)
yesterday = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%d")
parser.add_argument(
"--date",
default=yesterday,
help="Date to query in YYYY-MM-DD format (default: yesterday)",
)
parser.add_argument(
"--mode",
choices=["brief", "full"],
default="full",
help="Output mode: 'brief' for billable hours only, 'full' for detailed breakdown (default: full)",
)
parser.add_argument(
"--repo",
default="zeroclaw-labs/zeroclaw",
help="Repository in OWNER/NAME format (default: zeroclaw-labs/zeroclaw)",
)
return parser.parse_args()
def fetch_runs(repo, date_str, page=1, per_page=100):
"""Fetch completed workflow runs for a given date."""
url = (
f"https://api.github.com/repos/{repo}/actions/runs"
f"?created={date_str}&per_page={per_page}&page={page}"
)
result = subprocess.run(
["curl", "-sS", "-H", "Accept: application/vnd.github+json", url],
capture_output=True, text=True
)
return json.loads(result.stdout)
def fetch_jobs(repo, run_id):
"""Fetch jobs for a specific run."""
url = f"https://api.github.com/repos/{repo}/actions/runs/{run_id}/jobs?per_page=100"
result = subprocess.run(
["curl", "-sS", "-H", "Accept: application/vnd.github+json", url],
capture_output=True, text=True
)
return json.loads(result.stdout)
def parse_duration(started, completed):
"""Return duration in seconds between two ISO timestamps."""
if not started or not completed:
return 0
try:
s = datetime.fromisoformat(started.replace("Z", "+00:00"))
c = datetime.fromisoformat(completed.replace("Z", "+00:00"))
return max(0, (c - s).total_seconds())
except Exception:
return 0
def main():
args = parse_args()
repo = args.repo
date_str = args.date
brief = args.mode == "brief"
print(f"Fetching workflow runs for {repo} on {date_str}...")
print("=" * 100)
all_runs = []
for page in range(1, 5): # up to 400 runs
data = fetch_runs(repo, date_str, page=page)
runs = data.get("workflow_runs", [])
if not runs:
break
all_runs.extend(runs)
if len(runs) < 100:
break
print(f"Total workflow runs found: {len(all_runs)}")
print()
# Group by workflow name
workflow_stats = {}
for run in all_runs:
name = run.get("name", "Unknown")
event = run.get("event", "unknown")
conclusion = run.get("conclusion", "unknown")
run_id = run.get("id")
if name not in workflow_stats:
workflow_stats[name] = {
"count": 0,
"events": {},
"conclusions": {},
"total_job_seconds": 0,
"total_jobs": 0,
"run_ids": [],
}
workflow_stats[name]["count"] += 1
workflow_stats[name]["events"][event] = workflow_stats[name]["events"].get(event, 0) + 1
workflow_stats[name]["conclusions"][conclusion] = workflow_stats[name]["conclusions"].get(conclusion, 0) + 1
workflow_stats[name]["run_ids"].append(run_id)
# For each workflow, sample up to 3 runs to get job-level timing
print("Sampling job-level timing (up to 3 runs per workflow)...")
print()
for name, stats in workflow_stats.items():
sample_ids = stats["run_ids"][:3]
for run_id in sample_ids:
jobs_data = fetch_jobs(repo, run_id)
jobs = jobs_data.get("jobs", [])
for job in jobs:
started = job.get("started_at")
completed = job.get("completed_at")
duration = parse_duration(started, completed)
stats["total_job_seconds"] += duration
stats["total_jobs"] += 1
# Extrapolate: if we sampled N runs but there are M total, scale up
sampled = len(sample_ids)
total = stats["count"]
if sampled > 0 and sampled < total:
scale = total / sampled
stats["estimated_total_seconds"] = stats["total_job_seconds"] * scale
else:
stats["estimated_total_seconds"] = stats["total_job_seconds"]
# Print summary sorted by estimated cost (descending)
sorted_workflows = sorted(
workflow_stats.items(),
key=lambda x: x[1]["estimated_total_seconds"],
reverse=True
)
if brief:
# Brief mode: compact billable hours table
print(f"{'Workflow':<40} {'Runs':>5} {'Est.Mins':>9} {'Est.Hours':>10}")
print("-" * 68)
grand_total_minutes = 0
for name, stats in sorted_workflows:
est_mins = stats["estimated_total_seconds"] / 60
grand_total_minutes += est_mins
print(f"{name:<40} {stats['count']:>5} {est_mins:>9.1f} {est_mins/60:>10.2f}")
print("-" * 68)
print(f"{'TOTAL':<40} {len(all_runs):>5} {grand_total_minutes:>9.0f} {grand_total_minutes/60:>10.1f}")
print(f"\nProjected monthly: ~{grand_total_minutes/60*30:.0f} hours")
else:
# Full mode: detailed breakdown with per-run list
print("=" * 100)
print(f"{'Workflow':<40} {'Runs':>5} {'SampledJobs':>12} {'SampledMins':>12} {'Est.TotalMins':>14} {'Events'}")
print("-" * 100)
grand_total_minutes = 0
for name, stats in sorted_workflows:
sampled_mins = stats["total_job_seconds"] / 60
est_total_mins = stats["estimated_total_seconds"] / 60
grand_total_minutes += est_total_mins
events_str = ", ".join(f"{k}={v}" for k, v in stats["events"].items())
conclusions_str = ", ".join(f"{k}={v}" for k, v in stats["conclusions"].items())
print(
f"{name:<40} {stats['count']:>5} {stats['total_jobs']:>12} "
f"{sampled_mins:>12.1f} {est_total_mins:>14.1f} {events_str}"
)
print(f"{'':>40} {'':>5} {'':>12} {'':>12} {'':>14} outcomes: {conclusions_str}")
print("-" * 100)
print(f"{'GRAND TOTAL':>40} {len(all_runs):>5} {'':>12} {'':>12} {grand_total_minutes:>14.1f}")
print(f"\nEstimated total billable minutes on {date_str}: {grand_total_minutes:.0f} min ({grand_total_minutes/60:.1f} hours)")
print()
# Also show raw run list
print("\n" + "=" * 100)
print("DETAILED RUN LIST")
print("=" * 100)
for run in all_runs:
name = run.get("name", "Unknown")
event = run.get("event", "unknown")
conclusion = run.get("conclusion", "unknown")
run_id = run.get("id")
started = run.get("run_started_at", "?")
print(f" [{run_id}] {name:<40} conclusion={conclusion:<12} event={event:<20} started={started}")
if __name__ == "__main__":
main()

View file
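The cost estimate in the script above rests on one extrapolation: sample job timing from up to 3 runs per workflow, then scale by the ratio of total runs to sampled runs. The arithmetic, isolated as a small sketch (hypothetical helper name):

```python
def estimate_total_seconds(sampled_seconds, sampled_runs, total_runs):
    """Scale sampled job time up to the full run count, as the script does:
    if 3 of 12 runs were sampled, multiply the sampled total by 4.
    When every run was sampled (or nothing was), return the raw total."""
    if sampled_runs and sampled_runs < total_runs:
        return sampled_seconds * (total_runs / sampled_runs)
    return sampled_seconds

print(estimate_total_seconds(900, 3, 12))  # 3600.0
```

The estimate assumes the sampled runs are representative; a workflow whose early runs are unusually short or long will skew its extrapolated minutes accordingly.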

@@ -2,10 +2,15 @@
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" >/dev/null 2>&1 && pwd || pwd)"
+INSTALLER_LOCAL="$(cd "$SCRIPT_DIR/.." >/dev/null 2>&1 && pwd || pwd)/zeroclaw_install.sh"
BOOTSTRAP_LOCAL="$SCRIPT_DIR/bootstrap.sh"
REPO_URL="https://github.com/zeroclaw-labs/zeroclaw.git"
-echo "[deprecated] scripts/install.sh -> bootstrap.sh" >&2
+echo "[deprecated] scripts/install.sh -> ./zeroclaw_install.sh" >&2
+if [[ -x "$INSTALLER_LOCAL" ]]; then
+exec "$INSTALLER_LOCAL" "$@"
+fi
if [[ -f "$BOOTSTRAP_LOCAL" ]]; then
exec "$BOOTSTRAP_LOCAL" "$@"
@@ -24,35 +29,15 @@ trap cleanup EXIT
git clone --depth 1 "$REPO_URL" "$TEMP_DIR" >/dev/null 2>&1
+if [[ -x "$TEMP_DIR/zeroclaw_install.sh" ]]; then
+exec "$TEMP_DIR/zeroclaw_install.sh" "$@"
+fi
if [[ -x "$TEMP_DIR/scripts/bootstrap.sh" ]]; then
-"$TEMP_DIR/scripts/bootstrap.sh" "$@"
-exit 0
+exec "$TEMP_DIR/scripts/bootstrap.sh" "$@"
fi
-echo "[deprecated] cloned revision has no bootstrap.sh; falling back to legacy source install flow" >&2
-if [[ "${1:-}" == "--help" || "${1:-}" == "-h" ]]; then
-cat <<'USAGE'
-Legacy install.sh fallback mode
-Behavior:
-- Clone repository
-- cargo build --release --locked
-- cargo install --path <clone> --force --locked
-For the new dual-mode installer, use:
-./bootstrap.sh --help
-USAGE
-exit 0
-fi
-if ! command -v cargo >/dev/null 2>&1; then
-echo "error: cargo is required for legacy install.sh fallback mode" >&2
-echo "Install Rust first: https://rustup.rs/" >&2
-exit 1
-fi
-cargo build --release --locked --manifest-path "$TEMP_DIR/Cargo.toml"
-cargo install --path "$TEMP_DIR" --force --locked
-echo "Legacy source install completed." >&2
+echo "error: zeroclaw_install.sh/bootstrap.sh was not found in the fetched revision." >&2
+echo "Run the local bootstrap directly when possible:" >&2
+echo "  ./zeroclaw_install.sh --help" >&2
+exit 1

View file

@@ -10,7 +10,6 @@ use crate::providers::{self, ChatMessage, ChatRequest, ConversationMessage, Prov
use crate::runtime;
use crate::security::SecurityPolicy;
use crate::tools::{self, Tool, ToolSpec};
-use crate::util::truncate_with_ellipsis;
use anyhow::Result;
use std::io::Write as IoWrite;
use std::sync::Arc;
@@ -229,8 +228,9 @@ impl Agent {
&config.workspace_dir,
));
-let memory: Arc<dyn Memory> = Arc::from(memory::create_memory_with_storage(
+let memory: Arc<dyn Memory> = Arc::from(memory::create_memory_with_storage_and_routes(
&config.memory,
+&config.embedding_routes,
Some(&config.storage.provider.config),
&config.workspace_dir,
config.api_key.as_deref(),
@@ -308,7 +308,10 @@ impl Agent {
.classification_config(config.query_classification.clone())
.available_hints(available_hints)
.identity_config(config.identity.clone())
-.skills(crate::skills::load_skills(&config.workspace_dir))
+.skills(crate::skills::load_skills_with_config(
+&config.workspace_dir,
+config,
+))
.auto_save(config.memory.auto_save)
.build()
}
@ -400,11 +403,8 @@ impl Agent {
return results; return results;
} }
let mut results = Vec::with_capacity(calls.len()); let futs: Vec<_> = calls.iter().map(|call| self.execute_tool_call(call)).collect();
for call in calls { futures::future::join_all(futs).await
results.push(self.execute_tool_call(call).await);
}
results
} }
fn classify_model(&self, user_message: &str) -> String { fn classify_model(&self, user_message: &str) -> String {
@ -486,14 +486,6 @@ impl Agent {
))); )));
self.trim_history(); self.trim_history();
if self.auto_save {
let summary = truncate_with_ellipsis(&final_text, 100);
let _ = self
.memory
.store("assistant_resp", &summary, MemoryCategory::Daily, None)
.await;
}
return Ok(final_text); return Ok(final_text);
} }
@ -686,7 +678,8 @@ mod tests {
..crate::config::MemoryConfig::default() ..crate::config::MemoryConfig::default()
}; };
let mem: Arc<dyn Memory> = Arc::from( let mem: Arc<dyn Memory> = Arc::from(
crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None).unwrap(), crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None)
.expect("memory creation should succeed with valid config"),
); );
let observer: Arc<dyn Observer> = Arc::from(crate::observability::NoopObserver {}); let observer: Arc<dyn Observer> = Arc::from(crate::observability::NoopObserver {});
@ -698,7 +691,7 @@ mod tests {
.tool_dispatcher(Box::new(XmlToolDispatcher)) .tool_dispatcher(Box::new(XmlToolDispatcher))
.workspace_dir(std::path::PathBuf::from("/tmp")) .workspace_dir(std::path::PathBuf::from("/tmp"))
.build() .build()
.unwrap(); .expect("agent builder should succeed with valid config");
let response = agent.turn("hi").await.unwrap(); let response = agent.turn("hi").await.unwrap();
assert_eq!(response, "hello"); assert_eq!(response, "hello");
@ -728,7 +721,8 @@ mod tests {
..crate::config::MemoryConfig::default() ..crate::config::MemoryConfig::default()
}; };
let mem: Arc<dyn Memory> = Arc::from( let mem: Arc<dyn Memory> = Arc::from(
crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None).unwrap(), crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None)
.expect("memory creation should succeed with valid config"),
); );
let observer: Arc<dyn Observer> = Arc::from(crate::observability::NoopObserver {}); let observer: Arc<dyn Observer> = Arc::from(crate::observability::NoopObserver {});
@ -740,7 +734,7 @@ mod tests {
.tool_dispatcher(Box::new(NativeToolDispatcher)) .tool_dispatcher(Box::new(NativeToolDispatcher))
.workspace_dir(std::path::PathBuf::from("/tmp")) .workspace_dir(std::path::PathBuf::from("/tmp"))
.build() .build()
.unwrap(); .expect("agent builder should succeed with valid config");
let response = agent.turn("hi").await.unwrap(); let response = agent.turn("hi").await.unwrap();
assert_eq!(response, "done"); assert_eq!(response, "done");


@@ -1,8 +1,11 @@
 use crate::approval::{ApprovalManager, ApprovalRequest, ApprovalResponse};
 use crate::config::Config;
 use crate::memory::{self, Memory, MemoryCategory};
+use crate::multimodal;
 use crate::observability::{self, Observer, ObserverEvent};
-use crate::providers::{self, ChatMessage, ChatRequest, Provider, ToolCall};
+use crate::providers::{
+    self, ChatMessage, ChatRequest, Provider, ProviderCapabilityError, ToolCall,
+};
 use crate::runtime;
 use crate::security::SecurityPolicy;
 use crate::tools::{self, Tool};
@@ -13,6 +16,7 @@ use std::fmt::Write;
 use std::io::Write as _;
 use std::sync::{Arc, LazyLock};
 use std::time::Instant;
+use tokio_util::sync::CancellationToken;
 use uuid::Uuid;
 /// Minimum characters per chunk when relaying LLM text to a streaming draft.
@@ -22,6 +26,10 @@ const STREAM_CHUNK_MIN_CHARS: usize = 80;
 /// Used as a safe fallback when `max_tool_iterations` is unset or configured as zero.
 const DEFAULT_MAX_TOOL_ITERATIONS: usize = 10;

+/// Minimum user-message length (in chars) for auto-save to memory.
+/// Matches the channel-side constant in `channels/mod.rs`.
+const AUTOSAVE_MIN_MESSAGE_CHARS: usize = 20;

 static SENSITIVE_KEY_PATTERNS: LazyLock<RegexSet> = LazyLock::new(|| {
     RegexSet::new([
         r"(?i)token",
@@ -223,9 +231,16 @@ async fn build_context(mem: &dyn Memory, user_msg: &str, min_relevance_score: f6
     if !relevant.is_empty() {
         context.push_str("[Memory context]\n");
         for entry in &relevant {
+            if memory::is_assistant_autosave_key(&entry.key) {
+                continue;
+            }
             let _ = writeln!(context, "- {}: {}", entry.key, entry.content);
         }
-        context.push('\n');
+        if context != "[Memory context]\n" {
+            context.push('\n');
+        } else {
+            context.clear();
+        }
     }
 }
@@ -579,6 +594,17 @@ fn parse_glm_style_tool_calls(text: &str) -> Vec<(String, serde_json::Value, Opt
     calls
 }
// ── Tool-Call Parsing ─────────────────────────────────────────────────────
// LLM responses may contain tool calls in multiple formats depending on
// the provider. Parsing follows a priority chain:
// 1. OpenAI-style JSON with `tool_calls` array (native API)
// 2. XML tags: <tool_call>, <toolcall>, <tool-call>, <invoke>
// 3. Markdown code blocks with `tool_call` language
// 4. GLM-style line-based format (e.g. `shell/command>ls`)
// SECURITY: We never fall back to extracting arbitrary JSON from the
// response body, because that would enable prompt-injection attacks where
// malicious content in emails/files/web pages mimics a tool call.
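The tag-gating rule in the comment above can be sketched in isolation. This is an illustrative, std-only fragment (the function name and exact tag handling are assumptions, not ZeroClaw's actual parser): JSON is only treated as a candidate tool call when it sits inside explicit `<tool_call>…</tool_call>` delimiters, and bare JSON in prose is never extracted.

```rust
// Sketch of the tag-gated extraction rule: only payloads inside matched
// <tool_call>...</tool_call> tags are candidates; bare JSON is ignored.
// Names here are illustrative, not the project's real API.

/// Extract the raw JSON payloads of tagged tool calls, in order.
fn extract_tagged_payloads(response: &str) -> Vec<String> {
    let mut payloads = Vec::new();
    let mut rest = response;
    while let Some(start) = rest.find("<tool_call>") {
        let after = &rest[start + "<tool_call>".len()..];
        // Without a matching close tag the fragment stays plain text.
        let Some(end) = after.find("</tool_call>") else { break };
        payloads.push(after[..end].trim().to_string());
        rest = &after[end + "</tool_call>".len()..];
    }
    payloads
}

fn main() {
    let tagged = r#"Sure. <tool_call>{"name":"shell"}</tool_call>"#;
    let bare = r#"Here is data: {"name":"shell"} end."#;
    println!("{:?}", extract_tagged_payloads(tagged)); // one payload
    println!("{:?}", extract_tagged_payloads(bare));   // empty: bare JSON ignored
}
```

The security property follows directly: injected text that merely *contains* tool-call-shaped JSON never reaches the executor unless the model itself emitted the delimiters.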
 /// Parse tool calls from an LLM response that uses XML-style function calling.
 ///
 /// Expected format (common with system-prompt-guided tool use):
@@ -813,6 +839,21 @@ struct ParsedToolCall {
     arguments: serde_json::Value,
 }
#[derive(Debug)]
pub(crate) struct ToolLoopCancelled;
impl std::fmt::Display for ToolLoopCancelled {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str("tool loop cancelled")
}
}
impl std::error::Error for ToolLoopCancelled {}
pub(crate) fn is_tool_loop_cancelled(err: &anyhow::Error) -> bool {
err.chain().any(|source| source.is::<ToolLoopCancelled>())
}
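The marker-error pattern added above (a unit error type plus a chain walk) can be reproduced with the standard library alone. The diff uses anyhow's `err.chain()`; this sketch mirrors it with `std::error::Error::source()` and per-link downcasting. `TurnFailed` is a hypothetical wrapper invented for the example.

```rust
// Std-only sketch of the cancellation-marker pattern: a unit error type,
// an illustrative wrapper, and a helper that walks the source() chain.
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct ToolLoopCancelled;

impl fmt::Display for ToolLoopCancelled {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("tool loop cancelled")
    }
}
impl Error for ToolLoopCancelled {}

/// Hypothetical wrapper standing in for anyhow's context layers.
#[derive(Debug)]
struct TurnFailed(Box<dyn Error>);

impl fmt::Display for TurnFailed {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "turn failed: {}", self.0)
    }
}
impl Error for TurnFailed {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(self.0.as_ref())
    }
}

/// True if any link in the chain (including the head) is ToolLoopCancelled.
fn is_cancelled(err: &(dyn Error + 'static)) -> bool {
    let mut current: Option<&(dyn Error + 'static)> = Some(err);
    while let Some(e) = current {
        if e.is::<ToolLoopCancelled>() {
            return true;
        }
        current = e.source();
    }
    false
}

fn main() {
    let wrapped = TurnFailed(Box::new(ToolLoopCancelled));
    println!("{}", is_cancelled(&wrapped)); // true
}
```

The point of checking the whole chain rather than only the outermost error is that callers may have wrapped the cancellation with context; the marker must still be detectable.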
 /// Execute a single turn of the agent loop: send messages, parse tool calls,
 /// execute tools, and loop until the LLM produces a final text response.
 /// When `silent` is true, suppresses stdout (for channel use).
@@ -826,6 +867,7 @@ pub(crate) async fn agent_turn(
     model: &str,
     temperature: f64,
     silent: bool,
+    multimodal_config: &crate::config::MultimodalConfig,
     max_tool_iterations: usize,
 ) -> Result<String> {
     run_tool_call_loop(
@@ -839,12 +881,26 @@ pub(crate) async fn agent_turn(
         silent,
         None,
         "channel",
+        multimodal_config,
         max_tool_iterations,
         None,
+        None,
     )
     .await
 }
// ── Agent Tool-Call Loop ──────────────────────────────────────────────────
// Core agentic iteration: send conversation to the LLM, parse any tool
// calls from the response, execute them, append results to history, and
// repeat until the LLM produces a final text-only answer.
//
// Loop invariant: at the start of each iteration, `history` contains the
// full conversation so far (system prompt + user messages + prior tool
// results). The loop exits when:
// • the LLM returns no tool calls (final answer), or
// • max_iterations is reached (runaway safety), or
// • the cancellation token fires (external abort).
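The loop shape and exit conditions described above can be sketched synchronously and dependency-free. Everything here is illustrative (the real loop is async and parses provider responses); `llm` stands in for the provider call and returns either tool calls to execute or a final answer.

```rust
// Synchronous sketch of the agentic loop: bounded iterations, cancellation
// check at the top of each pass, tool results appended back into history.
enum LlmStep {
    ToolCalls(Vec<String>), // names of tools the model wants to run
    Final(String),          // final text-only answer
}

fn tool_loop(
    mut llm: impl FnMut(&[String]) -> LlmStep,
    mut run_tool: impl FnMut(&str) -> String,
    max_iterations: usize,
    cancelled: impl Fn() -> bool,
) -> Result<String, &'static str> {
    let mut history: Vec<String> = vec!["user: hi".to_string()];
    for _ in 0..max_iterations {
        if cancelled() {
            return Err("cancelled"); // external abort
        }
        match llm(&history) {
            LlmStep::Final(text) => return Ok(text), // no tool calls: done
            LlmStep::ToolCalls(calls) => {
                // Invariant: results are appended so the next iteration
                // sees the full conversation so far.
                for name in calls {
                    let result = run_tool(&name);
                    history.push(format!("tool {name}: {result}"));
                }
            }
        }
    }
    Err("max iterations reached") // runaway safety
}

fn main() {
    let mut step = 0;
    let out = tool_loop(
        |_history| {
            step += 1;
            if step == 1 {
                LlmStep::ToolCalls(vec!["clock".to_string()])
            } else {
                LlmStep::Final("done".to_string())
            }
        },
        |_| "12:00".to_string(),
        10,
        || false,
    );
    println!("{out:?}"); // Ok("done")
}
```

Putting the cancellation check first means a cancelled turn never pays for another LLM round trip, matching the early-return placement in the diff.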
 /// Execute a single turn of the agent loop: send messages, parse tool calls,
 /// execute tools, and loop until the LLM produces a final text response.
 #[allow(clippy::too_many_arguments)]
@@ -859,7 +915,9 @@ pub(crate) async fn run_tool_call_loop(
     silent: bool,
     approval: Option<&ApprovalManager>,
     channel_name: &str,
+    multimodal_config: &crate::config::MultimodalConfig,
     max_tool_iterations: usize,
+    cancellation_token: Option<CancellationToken>,
     on_delta: Option<tokio::sync::mpsc::Sender<String>>,
 ) -> Result<String> {
     let max_iterations = if max_tool_iterations == 0 {
@@ -873,6 +931,28 @@ pub(crate) async fn run_tool_call_loop(
     let use_native_tools = provider.supports_native_tools() && !tool_specs.is_empty();

     for _iteration in 0..max_iterations {
if cancellation_token
.as_ref()
.is_some_and(CancellationToken::is_cancelled)
{
return Err(ToolLoopCancelled.into());
}
let image_marker_count = multimodal::count_image_markers(history);
if image_marker_count > 0 && !provider.supports_vision() {
return Err(ProviderCapabilityError {
provider: provider_name.to_string(),
capability: "vision".to_string(),
message: format!(
"received {image_marker_count} image marker(s), but this provider does not support vision input"
),
}
.into());
}
let prepared_messages =
multimodal::prepare_messages_for_provider(history, multimodal_config).await?;
         observer.record_event(&ObserverEvent::LlmRequest {
             provider: provider_name.to_string(),
             model: model.to_string(),
@@ -889,18 +969,26 @@ pub(crate) async fn run_tool_call_loop(
             None
         };

+        let chat_future = provider.chat(
+            ChatRequest {
+                messages: &prepared_messages.messages,
+                tools: request_tools,
+            },
+            model,
+            temperature,
+        );
+        let chat_result = if let Some(token) = cancellation_token.as_ref() {
+            tokio::select! {
+                () = token.cancelled() => return Err(ToolLoopCancelled.into()),
+                result = chat_future => result,
+            }
+        } else {
+            chat_future.await
+        };
+
         let (response_text, parsed_text, tool_calls, assistant_history_content, native_tool_calls) =
-            match provider
-                .chat(
-                    ChatRequest {
-                        messages: history,
-                        tools: request_tools,
-                    },
-                    model,
-                    temperature,
-                )
-                .await
-            {
+            match chat_result {
                 Ok(resp) => {
                     observer.record_event(&ObserverEvent::LlmResponse {
                         provider: provider_name.to_string(),
@@ -911,6 +999,10 @@ pub(crate) async fn run_tool_call_loop(
                     });
                     let response_text = resp.text_or_empty().to_string();

+                    // First try native structured tool calls (OpenAI-format).
+                    // Fall back to text-based parsing (XML tags, markdown blocks,
+                    // GLM format) only if the provider returned no native calls —
+                    // this ensures we support both native and prompt-guided models.
                     let mut calls = parse_structured_tool_calls(&resp.tool_calls);
                     let mut parsed_text = String::new();
@@ -966,6 +1058,12 @@ pub(crate) async fn run_tool_call_loop(
                         // STREAM_CHUNK_MIN_CHARS characters for progressive draft updates.
                         let mut chunk = String::new();
                         for word in display_text.split_inclusive(char::is_whitespace) {
+                            if cancellation_token
+                                .as_ref()
+                                .is_some_and(CancellationToken::is_cancelled)
+                            {
+                                return Err(ToolLoopCancelled.into());
+                            }
                             chunk.push_str(word);
                             if chunk.len() >= STREAM_CHUNK_MIN_CHARS
                                 && tx.send(std::mem::take(&mut chunk)).await.is_err()
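The chunking policy in the hunk above accumulates whitespace-delimited words until a chunk reaches the minimum size, then flushes it. A dependency-free sketch (the real loop sends chunks over an mpsc channel and checks cancellation; here chunks are simply collected, and the constant value is taken from the diff):

```rust
// Sketch of word-boundary chunking for streaming drafts: never split a
// word, flush once a chunk reaches the minimum size, flush the remainder.
const STREAM_CHUNK_MIN_CHARS: usize = 80;

fn chunk_text(text: &str, min_chars: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut chunk = String::new();
    for word in text.split_inclusive(char::is_whitespace) {
        chunk.push_str(word);
        if chunk.len() >= min_chars {
            chunks.push(std::mem::take(&mut chunk));
        }
    }
    if !chunk.is_empty() {
        chunks.push(chunk); // flush the final partial chunk
    }
    chunks
}

fn main() {
    let text = "one two three four ".repeat(6);
    let chunks = chunk_text(&text, STREAM_CHUNK_MIN_CHARS);
    // Every chunk except possibly the last meets the minimum size,
    // and concatenating the chunks reproduces the input exactly.
    assert!(chunks[..chunks.len() - 1].iter().all(|c| c.len() >= STREAM_CHUNK_MIN_CHARS));
    assert_eq!(chunks.concat(), text);
    println!("{} chunks", chunks.len());
}
```

Using `split_inclusive` keeps the trailing whitespace with each word, so no characters are dropped or duplicated across chunk boundaries.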
@@ -1001,11 +1099,13 @@ pub(crate) async fn run_tool_call_loop(
                                 arguments: call.arguments.clone(),
                             };

-                            // Only prompt interactively on CLI; auto-approve on other channels.
+                            // On CLI, prompt interactively. On other channels where
+                            // interactive approval is not possible, deny the call to
+                            // respect the supervised autonomy setting.
                             let decision = if channel_name == "cli" {
                                 mgr.prompt_cli(&request)
                             } else {
-                                ApprovalResponse::Yes
+                                ApprovalResponse::No
                             };
                             mgr.record_decision(&call.name, &call.arguments, decision, channel_name);
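The behavioral change in the hunk above (auto-approve flipped to deny on non-interactive channels) reduces to a small fail-closed decision. A sketch with illustrative types, standing in for the real `ApprovalManager`:

```rust
// Channel-aware approval decision: interactive prompting is only possible
// on the CLI; any other channel fails closed instead of auto-approving.
#[derive(Debug, PartialEq)]
enum ApprovalResponse {
    Yes,
    No,
}

fn decide(channel_name: &str, prompt_cli: impl Fn() -> ApprovalResponse) -> ApprovalResponse {
    if channel_name == "cli" {
        prompt_cli() // interactive prompt is available
    } else {
        ApprovalResponse::No // fail closed on non-interactive channels
    }
}

fn main() {
    let approved = decide("cli", || ApprovalResponse::Yes);
    let denied = decide("telegram", || ApprovalResponse::Yes);
    println!("{approved:?} {denied:?}"); // Yes No
}
```

Fail-closed is the safer default here: a remote channel cannot ask the supervising user, so silently approving would defeat the supervised-autonomy setting the user opted into.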
@@ -1028,7 +1128,17 @@ pub(crate) async fn run_tool_call_loop(
                         });
                         let start = Instant::now();
                         let result = if let Some(tool) = find_tool(tools_registry, &call.name) {
-                            match tool.execute(call.arguments.clone()).await {
+                            let tool_future = tool.execute(call.arguments.clone());
+                            let tool_result = if let Some(token) = cancellation_token.as_ref() {
+                                tokio::select! {
+                                    () = token.cancelled() => return Err(ToolLoopCancelled.into()),
+                                    result = tool_future => result,
+                                }
+                            } else {
+                                tool_future.await
+                            };
+                            match tool_result {
                                 Ok(r) => {
                                     observer.record_event(&ObserverEvent::ToolCall {
                                         tool: call.name.clone(),
@@ -1113,6 +1223,12 @@ pub(crate) fn build_tool_instructions(tools_registry: &[Box<dyn Tool>]) -> Strin
     instructions
 }
// ── CLI Entrypoint ───────────────────────────────────────────────────────
// Wires up all subsystems (observer, runtime, security, memory, tools,
// provider, hardware RAG, peripherals) and enters either single-shot or
// interactive REPL mode. The interactive loop manages history compaction
// and hard trimming to keep the context window bounded.
 #[allow(clippy::too_many_lines)]
 pub async fn run(
     config: Config,
@@ -1191,13 +1307,21 @@ pub async fn run(
         .or(config.default_model.as_deref())
         .unwrap_or("anthropic/claude-sonnet-4");

-    let provider: Box<dyn Provider> = providers::create_routed_provider(
+    let provider_runtime_options = providers::ProviderRuntimeOptions {
+        auth_profile_override: None,
+        zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
+        secrets_encrypt: config.secrets.encrypt,
+        reasoning_enabled: config.runtime.reasoning_enabled,
+    };
+    let provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
         provider_name,
         config.api_key.as_deref(),
         config.api_url.as_deref(),
         &config.reliability,
         &config.model_routes,
         model_name,
+        &provider_runtime_options,
     )?;

     observer.record_event(&ObserverEvent::AgentStart {
@@ -1226,7 +1350,7 @@ pub async fn run(
         .collect();

     // ── Build system prompt from workspace MD files (OpenClaw framework) ──
-    let skills = crate::skills::load_skills(&config.workspace_dir);
+    let skills = crate::skills::load_skills_with_config(&config.workspace_dir, &config);
     let mut tool_descs: Vec<(&str, &str)> = vec![
         (
             "shell",
@@ -1336,17 +1460,21 @@ pub async fn run(
     } else {
         None
     };

-    let mut system_prompt = crate::channels::build_system_prompt(
+    let native_tools = provider.supports_native_tools();
+    let mut system_prompt = crate::channels::build_system_prompt_with_mode(
         &config.workspace_dir,
         model_name,
         &tool_descs,
         &skills,
         Some(&config.identity),
         bootstrap_max_chars,
+        native_tools,
     );
-    // Append structured tool-use instructions with schemas
-    system_prompt.push_str(&build_tool_instructions(&tools_registry));
+    // Append structured tool-use instructions with schemas (only for non-native providers)
+    if !native_tools {
+        system_prompt.push_str(&build_tool_instructions(&tools_registry));
+    }

     // ── Approval manager (supervised mode) ───────────────────────
     let approval_manager = ApprovalManager::from_config(&config.autonomy);
@@ -1357,8 +1485,8 @@ pub async fn run(
     let mut final_output = String::new();

     if let Some(msg) = message {
-        // Auto-save user message to memory
-        if config.memory.auto_save {
+        // Auto-save user message to memory (skip short/trivial messages)
+        if config.memory.auto_save && msg.chars().count() >= AUTOSAVE_MIN_MESSAGE_CHARS {
             let user_key = autosave_memory_key("user_msg");
             let _ = mem
                 .store(&user_key, &msg, MemoryCategory::Conversation, None)
@@ -1396,22 +1524,15 @@ pub async fn run(
             false,
             Some(&approval_manager),
             "cli",
+            &config.multimodal,
             config.agent.max_tool_iterations,
             None,
+            None,
         )
         .await?;
         final_output = response.clone();
         println!("{response}");
         observer.record_event(&ObserverEvent::TurnComplete);
-
-        // Auto-save assistant response to daily log
-        if config.memory.auto_save {
-            let summary = truncate_with_ellipsis(&response, 100);
-            let response_key = autosave_memory_key("assistant_resp");
-            let _ = mem
-                .store(&response_key, &summary, MemoryCategory::Daily, None)
-                .await;
-        }
     } else {
         println!("🦀 ZeroClaw Interactive Mode");
         println!("Type /help for commands.\n");
@@ -1486,8 +1607,10 @@ pub async fn run(
                 _ => {}
             }

-            // Auto-save conversation turns
-            if config.memory.auto_save {
+            // Auto-save conversation turns (skip short/trivial messages)
+            if config.memory.auto_save
+                && user_input.chars().count() >= AUTOSAVE_MIN_MESSAGE_CHARS
+            {
                 let user_key = autosave_memory_key("user_msg");
                 let _ = mem
                     .store(&user_key, &user_input, MemoryCategory::Conversation, None)
@@ -1522,8 +1645,10 @@ pub async fn run(
                 false,
                 Some(&approval_manager),
                 "cli",
+                &config.multimodal,
                 config.agent.max_tool_iterations,
                 None,
+                None,
             )
             .await
             {
@@ -1560,14 +1685,6 @@ pub async fn run(
                 // Hard cap as a safety net.
                 trim_history(&mut history, config.agent.max_history_messages);
-
-                if config.memory.auto_save {
-                    let summary = truncate_with_ellipsis(&response, 100);
-                    let response_key = autosave_memory_key("assistant_resp");
-                    let _ = mem
-                        .store(&response_key, &summary, MemoryCategory::Daily, None)
-                        .await;
-                }
             }
         }
@@ -1632,13 +1749,20 @@ pub async fn process_message(config: Config, message: &str) -> Result<String> {
         .default_model
         .clone()
         .unwrap_or_else(|| "anthropic/claude-sonnet-4-20250514".into());

-    let provider: Box<dyn Provider> = providers::create_routed_provider(
+    let provider_runtime_options = providers::ProviderRuntimeOptions {
+        auth_profile_override: None,
+        zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
+        secrets_encrypt: config.secrets.encrypt,
+        reasoning_enabled: config.runtime.reasoning_enabled,
+    };
+    let provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
         provider_name,
         config.api_key.as_deref(),
         config.api_url.as_deref(),
         &config.reliability,
         &config.model_routes,
         &model_name,
+        &provider_runtime_options,
     )?;

     let hardware_rag: Option<crate::rag::HardwareRag> = config
@@ -1656,7 +1780,7 @@ pub async fn process_message(config: Config, message: &str) -> Result<String> {
         .map(|b| b.board.clone())
         .collect();

-    let skills = crate::skills::load_skills(&config.workspace_dir);
+    let skills = crate::skills::load_skills_with_config(&config.workspace_dir, &config);
     let mut tool_descs: Vec<(&str, &str)> = vec![
         ("shell", "Execute terminal commands."),
         ("file_read", "Read file contents."),
@@ -1705,15 +1829,19 @@ pub async fn process_message(config: Config, message: &str) -> Result<String> {
     } else {
         None
     };

-    let mut system_prompt = crate::channels::build_system_prompt(
+    let native_tools = provider.supports_native_tools();
+    let mut system_prompt = crate::channels::build_system_prompt_with_mode(
         &config.workspace_dir,
         &model_name,
         &tool_descs,
         &skills,
         Some(&config.identity),
         bootstrap_max_chars,
+        native_tools,
     );
-    system_prompt.push_str(&build_tool_instructions(&tools_registry));
+    if !native_tools {
+        system_prompt.push_str(&build_tool_instructions(&tools_registry));
+    }

     let mem_context = build_context(mem.as_ref(), message, config.memory.min_relevance_score).await;
     let rag_limit = if config.agent.compact_context { 2 } else { 5 };
@@ -1742,6 +1870,7 @@ pub async fn process_message(config: Config, message: &str) -> Result<String> {
         &model_name,
         config.default_temperature,
         true,
+        &config.multimodal,
         config.agent.max_tool_iterations,
     )
     .await
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
use async_trait::async_trait;
use base64::{engine::general_purpose::STANDARD, Engine as _};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
#[test] #[test]
fn test_scrub_credentials() { fn test_scrub_credentials() {
@ -1770,8 +1903,194 @@ mod tests {
assert!(scrubbed.contains("public")); assert!(scrubbed.contains("public"));
} }
use crate::memory::{Memory, MemoryCategory, SqliteMemory}; use crate::memory::{Memory, MemoryCategory, SqliteMemory};
use crate::observability::NoopObserver;
use crate::providers::traits::ProviderCapabilities;
use crate::providers::ChatResponse;
use tempfile::TempDir; use tempfile::TempDir;
struct NonVisionProvider {
calls: Arc<AtomicUsize>,
}
#[async_trait]
impl Provider for NonVisionProvider {
async fn chat_with_system(
&self,
_system_prompt: Option<&str>,
_message: &str,
_model: &str,
_temperature: f64,
) -> anyhow::Result<String> {
self.calls.fetch_add(1, Ordering::SeqCst);
Ok("ok".to_string())
}
}
struct VisionProvider {
calls: Arc<AtomicUsize>,
}
#[async_trait]
impl Provider for VisionProvider {
fn capabilities(&self) -> ProviderCapabilities {
ProviderCapabilities {
native_tool_calling: false,
vision: true,
}
}
async fn chat_with_system(
&self,
_system_prompt: Option<&str>,
_message: &str,
_model: &str,
_temperature: f64,
) -> anyhow::Result<String> {
self.calls.fetch_add(1, Ordering::SeqCst);
Ok("ok".to_string())
}
async fn chat(
&self,
request: ChatRequest<'_>,
_model: &str,
_temperature: f64,
) -> anyhow::Result<ChatResponse> {
self.calls.fetch_add(1, Ordering::SeqCst);
let marker_count = crate::multimodal::count_image_markers(request.messages);
if marker_count == 0 {
anyhow::bail!("expected image markers in request messages");
}
if request.tools.is_some() {
anyhow::bail!("no tools should be attached for this test");
}
Ok(ChatResponse {
text: Some("vision-ok".to_string()),
tool_calls: Vec::new(),
})
}
}
#[tokio::test]
async fn run_tool_call_loop_returns_structured_error_for_non_vision_provider() {
let calls = Arc::new(AtomicUsize::new(0));
let provider = NonVisionProvider {
calls: Arc::clone(&calls),
};
let mut history = vec![ChatMessage::user(
"please inspect [IMAGE:data:image/png;base64,iVBORw0KGgo=]".to_string(),
)];
let tools_registry: Vec<Box<dyn Tool>> = Vec::new();
let observer = NoopObserver;
let err = run_tool_call_loop(
&provider,
&mut history,
&tools_registry,
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"cli",
&crate::config::MultimodalConfig::default(),
3,
None,
None,
)
.await
.expect_err("provider without vision support should fail");
assert!(err.to_string().contains("provider_capability_error"));
assert!(err.to_string().contains("capability=vision"));
assert_eq!(calls.load(Ordering::SeqCst), 0);
}
#[tokio::test]
async fn run_tool_call_loop_rejects_oversized_image_payload() {
let calls = Arc::new(AtomicUsize::new(0));
let provider = VisionProvider {
calls: Arc::clone(&calls),
};
let oversized_payload = STANDARD.encode(vec![0_u8; (1024 * 1024) + 1]);
let mut history = vec![ChatMessage::user(format!(
"[IMAGE:data:image/png;base64,{oversized_payload}]"
))];
let tools_registry: Vec<Box<dyn Tool>> = Vec::new();
let observer = NoopObserver;
let multimodal = crate::config::MultimodalConfig {
max_images: 4,
max_image_size_mb: 1,
allow_remote_fetch: false,
};
let err = run_tool_call_loop(
&provider,
&mut history,
&tools_registry,
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"cli",
&multimodal,
3,
None,
None,
)
.await
.expect_err("oversized payload must fail");
assert!(err
.to_string()
.contains("multimodal image size limit exceeded"));
assert_eq!(calls.load(Ordering::SeqCst), 0);
}
#[tokio::test]
async fn run_tool_call_loop_accepts_valid_multimodal_request_flow() {
let calls = Arc::new(AtomicUsize::new(0));
let provider = VisionProvider {
calls: Arc::clone(&calls),
};
let mut history = vec![ChatMessage::user(
"Analyze this [IMAGE:data:image/png;base64,iVBORw0KGgo=]".to_string(),
)];
let tools_registry: Vec<Box<dyn Tool>> = Vec::new();
let observer = NoopObserver;
let result = run_tool_call_loop(
&provider,
&mut history,
&tools_registry,
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"cli",
&crate::config::MultimodalConfig::default(),
3,
None,
None,
)
.await
.expect("valid multimodal payload should pass");
assert_eq!(result, "vision-ok");
assert_eq!(calls.load(Ordering::SeqCst), 1);
}
     #[test]
     fn parse_tool_calls_extracts_single_call() {
         let response = r#"Let me check that.
@@ -2215,6 +2534,33 @@ Done."#;
         assert!(recalled.iter().any(|entry| entry.content.contains("45")));
     }
#[tokio::test]
async fn build_context_ignores_legacy_assistant_autosave_entries() {
let tmp = TempDir::new().unwrap();
let mem = SqliteMemory::new(tmp.path()).unwrap();
mem.store(
"assistant_resp_poisoned",
"User suffered a fabricated event",
MemoryCategory::Daily,
None,
)
.await
.unwrap();
mem.store(
"user_msg_real",
"User asked for concise status updates",
MemoryCategory::Conversation,
None,
)
.await
.unwrap();
let context = build_context(&mem, "status updates", 0.0).await;
assert!(context.contains("user_msg_real"));
assert!(!context.contains("assistant_resp_poisoned"));
assert!(!context.contains("fabricated event"));
}
     // ═══════════════════════════════════════════════════════════════════════
     // Recovery Tests - Tool Call Parsing Edge Cases
     // ═══════════════════════════════════════════════════════════════════════
@@ -2511,4 +2857,195 @@ browser_open/url>https://example.com"#;
         assert_eq!(calls[0].arguments["command"], "pwd");
         assert_eq!(text, "Done");
     }
// ─────────────────────────────────────────────────────────────────────
// TG4 (inline): parse_tool_calls robustness — malformed/edge-case inputs
// Prevents: Pattern 4 issues #746, #418, #777, #848
// ─────────────────────────────────────────────────────────────────────
#[test]
fn parse_tool_calls_empty_input_returns_empty() {
let (text, calls) = parse_tool_calls("");
assert!(calls.is_empty(), "empty input should produce no tool calls");
assert!(text.is_empty(), "empty input should produce no text");
}
#[test]
fn parse_tool_calls_whitespace_only_returns_empty_calls() {
let (text, calls) = parse_tool_calls(" \n\t ");
assert!(calls.is_empty());
assert!(text.is_empty() || text.trim().is_empty());
}
#[test]
fn parse_tool_calls_nested_xml_tags_handled() {
// Double-wrapped tool call should still parse the inner call
let response = r#"<tool_call><tool_call>{"name":"echo","arguments":{"msg":"hi"}}</tool_call></tool_call>"#;
let (_text, calls) = parse_tool_calls(response);
// Should find at least one tool call
assert!(
!calls.is_empty(),
"nested XML tags should still yield at least one tool call"
);
}
#[test]
fn parse_tool_calls_truncated_json_no_panic() {
// Incomplete JSON inside tool_call tags
let response = r#"<tool_call>{"name":"shell","arguments":{"command":"ls"</tool_call>"#;
let (_text, _calls) = parse_tool_calls(response);
// Should not panic — graceful handling of truncated JSON
}
#[test]
fn parse_tool_calls_empty_json_object_in_tag() {
let response = "<tool_call>{}</tool_call>";
let (_text, calls) = parse_tool_calls(response);
// Empty JSON object has no name field — should not produce valid tool call
assert!(
calls.is_empty(),
"empty JSON object should not produce a tool call"
);
}
#[test]
fn parse_tool_calls_closing_tag_only_returns_text() {
let response = "Some text </tool_call> more text";
let (text, calls) = parse_tool_calls(response);
assert!(
calls.is_empty(),
"closing tag only should not produce calls"
);
assert!(
!text.is_empty(),
"text around orphaned closing tag should be preserved"
);
}
#[test]
fn parse_tool_calls_very_large_arguments_no_panic() {
let large_arg = "x".repeat(100_000);
let response = format!(
r#"<tool_call>{{"name":"echo","arguments":{{"message":"{}"}}}}</tool_call>"#,
large_arg
);
let (_text, calls) = parse_tool_calls(&response);
assert_eq!(calls.len(), 1, "large arguments should still parse");
assert_eq!(calls[0].name, "echo");
}
#[test]
fn parse_tool_calls_special_characters_in_arguments() {
let response = r#"<tool_call>{"name":"echo","arguments":{"message":"hello \"world\" <>&'\n\t"}}</tool_call>"#;
let (_text, calls) = parse_tool_calls(response);
assert_eq!(calls.len(), 1);
assert_eq!(calls[0].name, "echo");
}
#[test]
fn parse_tool_calls_text_with_embedded_json_not_extracted() {
// Raw JSON without any tags should NOT be extracted as a tool call
let response = r#"Here is some data: {"name":"echo","arguments":{"message":"hi"}} end."#;
let (_text, calls) = parse_tool_calls(response);
assert!(
calls.is_empty(),
"raw JSON in text without tags should not be extracted"
);
}
#[test]
fn parse_tool_calls_multiple_formats_mixed() {
// Mix of text and properly tagged tool call
let response = r#"I'll help you with that.
<tool_call>
{"name":"shell","arguments":{"command":"echo hello"}}
</tool_call>
Let me check the result."#;
let (text, calls) = parse_tool_calls(response);
assert_eq!(
calls.len(),
1,
"should extract one tool call from mixed content"
);
assert_eq!(calls[0].name, "shell");
assert!(
text.contains("help you"),
"text before tool call should be preserved"
);
}
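The tag-scanning behavior these tests pin down (orphaned closing tags kept as text, unclosed tags handled without panicking, untagged JSON left alone, doubled open tags tolerated) can be sketched as a plain string scan. This is a hypothetical sketch, not the crate's actual `parse_tool_calls`; `extract_tagged_bodies` and its `(text, bodies)` return shape are illustrative names only.

```rust
// Hypothetical sketch of the extraction loop the tests above exercise:
// scan for <tool_call>...</tool_call> pairs, collect each body for later
// JSON parsing, and keep all surrounding text as plain output.
fn extract_tagged_bodies(response: &str) -> (String, Vec<String>) {
    const OPEN: &str = "<tool_call>";
    const CLOSE: &str = "</tool_call>";
    let mut text = String::new();
    let mut bodies = Vec::new();
    let mut rest = response;
    while let Some(open) = rest.find(OPEN) {
        text.push_str(&rest[..open]);
        let after = &rest[open + OPEN.len()..];
        match after.find(CLOSE) {
            Some(close) => {
                // Tolerate doubled open tags by stripping any inner ones.
                let body = after[..close].trim().trim_start_matches(OPEN).trim();
                bodies.push(body.to_string());
                rest = &after[close + CLOSE.len()..];
            }
            None => {
                // Unclosed tag: keep the remainder as plain text and stop.
                text.push_str(&rest[open..]);
                rest = "";
            }
        }
    }
    text.push_str(rest);
    (text, bodies)
}
```

An orphaned `</tool_call>` is never matched by the `find(OPEN)` loop, so it simply stays in the text output, which is the behavior the closing-tag-only test asserts.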
// ─────────────────────────────────────────────────────────────────────
// TG4 (inline): scrub_credentials edge cases
// ─────────────────────────────────────────────────────────────────────
#[test]
fn scrub_credentials_empty_input() {
let result = scrub_credentials("");
assert_eq!(result, "");
}
#[test]
fn scrub_credentials_no_sensitive_data() {
let input = "normal text without any secrets";
let result = scrub_credentials(input);
assert_eq!(
result, input,
"non-sensitive text should pass through unchanged"
);
}
#[test]
fn scrub_credentials_short_values_not_redacted() {
// Values shorter than 8 chars should not be redacted
let input = r#"api_key="short""#;
let result = scrub_credentials(input);
assert_eq!(result, input, "short values should not be redacted");
}
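A minimal scrubber consistent with the three tests above (empty passthrough, no-op on non-sensitive text, no redaction under 8 characters) might look like the sketch below. The key list and the `[REDACTED]` marker are assumptions for illustration, not the crate's real rules.

```rust
// Hypothetical sketch: redact quoted values that follow known credential
// keys, but only when the value is at least 8 characters long.
fn scrub_credentials(input: &str) -> String {
    const KEYS: [&str; 3] = ["api_key", "token", "password"]; // assumed key list
    let mut out = input.to_string();
    for key in KEYS {
        let pat = format!("{key}=\"");
        let mut search_from = 0;
        while let Some(rel) = out[search_from..].find(&pat) {
            let val_start = search_from + rel + pat.len();
            match out[val_start..].find('"') {
                // Long enough to look like a secret: replace the value.
                Some(len) if len >= 8 => {
                    out.replace_range(val_start..val_start + len, "[REDACTED]");
                    search_from = val_start + "[REDACTED]".len() + 1;
                }
                // Short values pass through unchanged.
                Some(len) => search_from = val_start + len + 1,
                None => break,
            }
        }
    }
    out
}
```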
// ─────────────────────────────────────────────────────────────────────
// TG4 (inline): trim_history edge cases
// ─────────────────────────────────────────────────────────────────────
#[test]
fn trim_history_empty_history() {
let mut history: Vec<crate::providers::ChatMessage> = vec![];
trim_history(&mut history, 10);
assert!(history.is_empty());
}
#[test]
fn trim_history_system_only() {
let mut history = vec![crate::providers::ChatMessage::system("system prompt")];
trim_history(&mut history, 10);
assert_eq!(history.len(), 1);
assert_eq!(history[0].role, "system");
}
#[test]
fn trim_history_exactly_at_limit() {
let mut history = vec![
crate::providers::ChatMessage::system("system"),
crate::providers::ChatMessage::user("msg 1"),
crate::providers::ChatMessage::assistant("reply 1"),
];
trim_history(&mut history, 2); // 2 non-system messages = exactly at limit
assert_eq!(history.len(), 3, "should not trim when exactly at limit");
}
#[test]
fn trim_history_removes_oldest_non_system() {
let mut history = vec![
crate::providers::ChatMessage::system("system"),
crate::providers::ChatMessage::user("old msg"),
crate::providers::ChatMessage::assistant("old reply"),
crate::providers::ChatMessage::user("new msg"),
crate::providers::ChatMessage::assistant("new reply"),
];
trim_history(&mut history, 2);
assert_eq!(history.len(), 3); // system + 2 kept
assert_eq!(history[0].role, "system");
assert_eq!(history[1].content, "new msg");
}
}
}
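The trimming policy the tests above encode — system messages are always kept, and only the newest `max` non-system messages survive — can be sketched with plain `(role, content)` tuples standing in for `ChatMessage`; the helper name is hypothetical.

```rust
// Sketch of the trim policy: drop the oldest non-system messages until at
// most `max` non-system messages remain; system messages are never dropped.
fn trim_history_sketch(history: &mut Vec<(String, String)>, max: usize) {
    let non_system = history.iter().filter(|(r, _)| r != "system").count();
    let mut to_drop = non_system.saturating_sub(max);
    history.retain(|(role, _)| {
        if role == "system" || to_drop == 0 {
            true
        } else {
            to_drop -= 1; // retain visits front-to-back, so oldest go first
            false
        }
    });
}
```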


@@ -1,4 +1,4 @@
-use crate::memory::Memory;
+use crate::memory::{self, Memory};
 use async_trait::async_trait;
 use std::fmt::Write;
@@ -45,6 +45,9 @@ impl MemoryLoader for DefaultMemoryLoader {
         let mut context = String::from("[Memory context]\n");
         for entry in entries {
+            if memory::is_assistant_autosave_key(&entry.key) {
+                continue;
+            }
             if let Some(score) = entry.score {
                 if score < self.min_relevance_score {
                     continue;
@@ -67,8 +70,12 @@ impl MemoryLoader for DefaultMemoryLoader {
 mod tests {
     use super::*;
     use crate::memory::{Memory, MemoryCategory, MemoryEntry};
+    use std::sync::Arc;
     struct MockMemory;
+    struct MockMemoryWithEntries {
+        entries: Arc<Vec<MemoryEntry>>,
+    }
     #[async_trait]
     impl Memory for MockMemory {
@@ -131,6 +138,56 @@ mod tests {
         }
     }
+    #[async_trait]
+    impl Memory for MockMemoryWithEntries {
+        async fn store(
+            &self,
+            _key: &str,
+            _content: &str,
+            _category: MemoryCategory,
+            _session_id: Option<&str>,
+        ) -> anyhow::Result<()> {
+            Ok(())
+        }
+        async fn recall(
+            &self,
+            _query: &str,
+            _limit: usize,
+            _session_id: Option<&str>,
+        ) -> anyhow::Result<Vec<MemoryEntry>> {
+            Ok(self.entries.as_ref().clone())
+        }
+        async fn get(&self, _key: &str) -> anyhow::Result<Option<MemoryEntry>> {
+            Ok(None)
+        }
+        async fn list(
+            &self,
+            _category: Option<&MemoryCategory>,
+            _session_id: Option<&str>,
+        ) -> anyhow::Result<Vec<MemoryEntry>> {
+            Ok(vec![])
+        }
+        async fn forget(&self, _key: &str) -> anyhow::Result<bool> {
+            Ok(true)
+        }
+        async fn count(&self) -> anyhow::Result<usize> {
+            Ok(self.entries.len())
+        }
+        async fn health_check(&self) -> bool {
+            true
+        }
+        fn name(&self) -> &str {
+            "mock-with-entries"
+        }
+    }
     #[tokio::test]
     async fn default_loader_formats_context() {
         let loader = DefaultMemoryLoader::default();
@@ -138,4 +195,36 @@ mod tests {
         assert!(context.contains("[Memory context]"));
         assert!(context.contains("- k: v"));
     }
+    #[tokio::test]
+    async fn default_loader_skips_legacy_assistant_autosave_entries() {
+        let loader = DefaultMemoryLoader::new(5, 0.0);
+        let memory = MockMemoryWithEntries {
+            entries: Arc::new(vec![
+                MemoryEntry {
+                    id: "1".into(),
+                    key: "assistant_resp_legacy".into(),
+                    content: "fabricated detail".into(),
+                    category: MemoryCategory::Daily,
+                    timestamp: "now".into(),
+                    session_id: None,
+                    score: Some(0.95),
+                },
+                MemoryEntry {
+                    id: "2".into(),
+                    key: "user_fact".into(),
+                    content: "User prefers concise answers".into(),
+                    category: MemoryCategory::Conversation,
+                    timestamp: "now".into(),
+                    session_id: None,
+                    score: Some(0.9),
+                },
+            ]),
+        };
+        let context = loader.load_context(&memory, "answer style").await.unwrap();
+        assert!(context.contains("user_fact"));
+        assert!(!context.contains("assistant_resp_legacy"));
+        assert!(!context.contains("fabricated detail"));
+    }
 }


@@ -77,21 +77,25 @@ impl PromptSection for IdentitySection {
     fn build(&self, ctx: &PromptContext<'_>) -> Result<String> {
         let mut prompt = String::from("## Project Context\n\n");
+        let mut has_aieos = false;
         if let Some(config) = ctx.identity_config {
             if identity::is_aieos_configured(config) {
                 if let Ok(Some(aieos)) = identity::load_aieos_identity(config, ctx.workspace_dir) {
                     let rendered = identity::aieos_to_system_prompt(&aieos);
                     if !rendered.is_empty() {
                         prompt.push_str(&rendered);
-                        return Ok(prompt);
+                        prompt.push_str("\n\n");
+                        has_aieos = true;
                     }
                 }
             }
         }
-        prompt.push_str(
-            "The following workspace files define your identity, behavior, and context.\n\n",
-        );
+        if !has_aieos {
+            prompt.push_str(
+                "The following workspace files define your identity, behavior, and context.\n\n",
+            );
+        }
         for file in [
             "AGENTS.md",
             "SOUL.md",
@@ -149,28 +153,10 @@ impl PromptSection for SkillsSection {
     }
     fn build(&self, ctx: &PromptContext<'_>) -> Result<String> {
-        if ctx.skills.is_empty() {
-            return Ok(String::new());
-        }
-        let mut prompt = String::from("## Available Skills\n\n<available_skills>\n");
-        for skill in ctx.skills {
-            let location = skill.location.clone().unwrap_or_else(|| {
-                ctx.workspace_dir
-                    .join("skills")
-                    .join(&skill.name)
-                    .join("SKILL.md")
-            });
-            let _ = writeln!(
-                prompt,
-                " <skill>\n <name>{}</name>\n <description>{}</description>\n <location>{}</location>\n </skill>",
-                skill.name,
-                skill.description,
-                location.display()
-            );
-        }
-        prompt.push_str("</available_skills>");
-        Ok(prompt)
+        Ok(crate::skills::skills_to_prompt(
+            ctx.skills,
+            ctx.workspace_dir,
+        ))
     }
 }
@@ -211,7 +197,8 @@ impl PromptSection for DateTimeSection {
     fn build(&self, _ctx: &PromptContext<'_>) -> Result<String> {
         let now = Local::now();
         Ok(format!(
-            "## Current Date & Time\n\nTimezone: {}",
+            "## Current Date & Time\n\n{} ({})",
+            now.format("%Y-%m-%d %H:%M:%S"),
             now.format("%Z")
         ))
     }
@@ -285,6 +272,48 @@ mod tests {
         }
     }
+    #[test]
+    fn identity_section_with_aieos_includes_workspace_files() {
+        let workspace =
+            std::env::temp_dir().join(format!("zeroclaw_prompt_test_{}", uuid::Uuid::new_v4()));
+        std::fs::create_dir_all(&workspace).unwrap();
+        std::fs::write(
+            workspace.join("AGENTS.md"),
+            "Always respond with: AGENTS_MD_LOADED",
+        )
+        .unwrap();
+        let identity_config = crate::config::IdentityConfig {
+            format: "aieos".into(),
+            aieos_path: None,
+            aieos_inline: Some(r#"{"identity":{"names":{"first":"Nova"}}}"#.into()),
+        };
+        let tools: Vec<Box<dyn Tool>> = vec![];
+        let ctx = PromptContext {
+            workspace_dir: &workspace,
+            model_name: "test-model",
+            tools: &tools,
+            skills: &[],
+            identity_config: Some(&identity_config),
+            dispatcher_instructions: "",
+        };
+        let section = IdentitySection;
+        let output = section.build(&ctx).unwrap();
+        assert!(
+            output.contains("Nova"),
+            "AIEOS identity should be present in prompt"
+        );
+        assert!(
+            output.contains("AGENTS_MD_LOADED"),
+            "AGENTS.md content should be present even when AIEOS is configured"
+        );
+        let _ = std::fs::remove_dir_all(workspace);
+    }
     #[test]
     fn prompt_builder_assembles_sections() {
         let tools: Vec<Box<dyn Tool>> = vec![Box::new(TestTool)];
@@ -301,4 +330,105 @@ mod tests {
         assert!(prompt.contains("test_tool"));
         assert!(prompt.contains("instr"));
     }
+    #[test]
+    fn skills_section_includes_instructions_and_tools() {
+        let tools: Vec<Box<dyn Tool>> = vec![];
+        let skills = vec![crate::skills::Skill {
+            name: "deploy".into(),
+            description: "Release safely".into(),
+            version: "1.0.0".into(),
+            author: None,
+            tags: vec![],
+            tools: vec![crate::skills::SkillTool {
+                name: "release_checklist".into(),
+                description: "Validate release readiness".into(),
+                kind: "shell".into(),
+                command: "echo ok".into(),
+                args: std::collections::HashMap::new(),
+            }],
+            prompts: vec!["Run smoke tests before deploy.".into()],
+            location: None,
+        }];
+        let ctx = PromptContext {
+            workspace_dir: Path::new("/tmp"),
+            model_name: "test-model",
+            tools: &tools,
+            skills: &skills,
+            identity_config: None,
+            dispatcher_instructions: "",
+        };
+        let output = SkillsSection.build(&ctx).unwrap();
+        assert!(output.contains("<available_skills>"));
+        assert!(output.contains("<name>deploy</name>"));
+        assert!(output.contains("<instruction>Run smoke tests before deploy.</instruction>"));
+        assert!(output.contains("<name>release_checklist</name>"));
+        assert!(output.contains("<kind>shell</kind>"));
+    }
+    #[test]
+    fn datetime_section_includes_timestamp_and_timezone() {
+        let tools: Vec<Box<dyn Tool>> = vec![];
+        let ctx = PromptContext {
+            workspace_dir: Path::new("/tmp"),
+            model_name: "test-model",
+            tools: &tools,
+            skills: &[],
+            identity_config: None,
+            dispatcher_instructions: "instr",
+        };
+        let rendered = DateTimeSection.build(&ctx).unwrap();
+        assert!(rendered.starts_with("## Current Date & Time\n\n"));
+        let payload = rendered.trim_start_matches("## Current Date & Time\n\n");
+        assert!(payload.chars().any(|c| c.is_ascii_digit()));
+        assert!(payload.contains(" ("));
+        assert!(payload.ends_with(')'));
+    }
+    #[test]
+    fn prompt_builder_inlines_and_escapes_skills() {
+        let tools: Vec<Box<dyn Tool>> = vec![];
+        let skills = vec![crate::skills::Skill {
+            name: "code<review>&".into(),
+            description: "Review \"unsafe\" and 'risky' bits".into(),
+            version: "1.0.0".into(),
+            author: None,
+            tags: vec![],
+            tools: vec![crate::skills::SkillTool {
+                name: "run\"linter\"".into(),
+                description: "Run <lint> & report".into(),
+                kind: "shell&exec".into(),
+                command: "cargo clippy".into(),
+                args: std::collections::HashMap::new(),
+            }],
+            prompts: vec!["Use <tool_call> and & keep output \"safe\"".into()],
+            location: None,
+        }];
+        let ctx = PromptContext {
+            workspace_dir: Path::new("/tmp/workspace"),
+            model_name: "test-model",
+            tools: &tools,
+            skills: &skills,
+            identity_config: None,
+            dispatcher_instructions: "",
+        };
+        let prompt = SystemPromptBuilder::with_defaults().build(&ctx).unwrap();
+        assert!(prompt.contains("<available_skills>"));
+        assert!(prompt.contains("<name>code&lt;review&gt;&amp;</name>"));
+        assert!(prompt.contains(
+            "<description>Review &quot;unsafe&quot; and &apos;risky&apos; bits</description>"
+        ));
+        assert!(prompt.contains("<name>run&quot;linter&quot;</name>"));
+        assert!(prompt.contains("<description>Run &lt;lint&gt; &amp; report</description>"));
+        assert!(prompt.contains("<kind>shell&amp;exec</kind>"));
+        assert!(prompt.contains(
+            "<instruction>Use &lt;tool_call&gt; and &amp; keep output &quot;safe&quot;</instruction>"
+        ));
+    }
 }
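The escaping assertions above imply a small XML text escaper somewhere behind `skills_to_prompt`. A minimal sketch covering the five entities the test checks (the function name here is hypothetical, not the crate's API):

```rust
// Escape the five XML-special characters the prompt-builder test expects:
// & < > " ' become &amp; &lt; &gt; &quot; &apos;.
// Ampersand handling is safe because each input char is visited once.
fn escape_xml(raw: &str) -> String {
    let mut out = String::with_capacity(raw.len());
    for ch in raw.chars() {
        match ch {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&apos;"),
            other => out.push(other),
        }
    }
    out
}
```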


@@ -624,7 +624,7 @@ async fn history_trims_after_max_messages() {
 // ═══════════════════════════════════════════════════════════════════════════
 #[tokio::test]
-async fn auto_save_stores_messages_in_memory() {
+async fn auto_save_stores_only_user_messages_in_memory() {
     let (mem, _tmp) = make_sqlite_memory();
     let provider = Box::new(ScriptedProvider::new(vec![text_response(
         "I remember everything",
@@ -639,11 +639,25 @@ async fn auto_save_stores_messages_in_memory() {
     let _ = agent.turn("Remember this fact").await.unwrap();
-    // Both user message and assistant response should be saved
+    // Auto-save only persists user-stated input, never assistant-generated summaries.
     let count = mem.count().await.unwrap();
+    assert_eq!(
+        count, 1,
+        "Expected exactly 1 user memory entry, got {count}"
+    );
+    let stored = mem.get("user_msg").await.unwrap();
+    assert!(stored.is_some(), "Expected user_msg key to be present");
+    assert_eq!(
+        stored.unwrap().content,
+        "Remember this fact",
+        "Stored memory should match the original user message"
+    );
+    let assistant = mem.get("assistant_resp").await.unwrap();
     assert!(
-        count >= 2,
-        "Expected at least 2 memory entries, got {count}"
+        assistant.is_none(),
+        "assistant_resp should not be auto-saved anymore"
     );
 }


@@ -121,12 +121,12 @@ impl AuthService {
         return Ok(None);
     };
-    let token = match profile.kind {
+    let credential = match profile.kind {
         AuthProfileKind::Token => profile.token,
         AuthProfileKind::OAuth => profile.token_set.map(|t| t.access_token),
     };
-    Ok(token.filter(|t| !t.trim().is_empty()))
+    Ok(credential.filter(|t| !t.trim().is_empty()))
 }
 pub async fn get_valid_openai_access_token(


@@ -626,8 +626,8 @@ mod tests {
         assert!(!token_set.is_expiring_within(Duration::from_secs(1)));
     }
-    #[test]
-    fn store_roundtrip_with_encryption() {
+    #[tokio::test]
+    async fn store_roundtrip_with_encryption() {
         let tmp = TempDir::new().unwrap();
         let store = AuthProfilesStore::new(tmp.path(), true);
@@ -661,14 +661,14 @@ mod tests {
             Some("refresh-123")
         );
-        let raw = fs::read_to_string(store.path()).unwrap();
+        let raw = tokio::fs::read_to_string(store.path()).await.unwrap();
         assert!(raw.contains("enc2:"));
         assert!(!raw.contains("refresh-123"));
         assert!(!raw.contains("access-123"));
     }
-    #[test]
-    fn atomic_write_replaces_file() {
+    #[tokio::test]
+    async fn atomic_write_replaces_file() {
         let tmp = TempDir::new().unwrap();
         let store = AuthProfilesStore::new(tmp.path(), false);
@@ -678,7 +678,7 @@ mod tests {
         let path = store.path().to_path_buf();
         assert!(path.exists());
-        let contents = fs::read_to_string(path).unwrap();
+        let contents = tokio::fs::read_to_string(path).await.unwrap();
         assert!(contents.contains("\"schema_version\": 1"));
     }
 }


@@ -47,6 +47,7 @@ impl Channel for CliChannel {
             .duration_since(std::time::UNIX_EPOCH)
             .unwrap_or_default()
             .as_secs(),
+        thread_ts: None,
     };
     if tx.send(msg).await.is_err() {
@@ -74,6 +75,7 @@ mod tests {
             content: "hello".into(),
             recipient: "user".into(),
             subject: None,
+            thread_ts: None,
         })
         .await;
     assert!(result.is_ok());
@@ -87,6 +89,7 @@ mod tests {
             content: String::new(),
             recipient: String::new(),
             subject: None,
+            thread_ts: None,
         })
         .await;
     assert!(result.is_ok());
@@ -107,6 +110,7 @@ mod tests {
         content: "hello".into(),
         channel: "cli".into(),
         timestamp: 1_234_567_890,
+        thread_ts: None,
     };
     assert_eq!(msg.id, "test-id");
     assert_eq!(msg.sender, "user");
@@ -125,6 +129,7 @@ mod tests {
         content: "c".into(),
         channel: "ch".into(),
         timestamp: 0,
+        thread_ts: None,
     };
     let cloned = msg.clone();
     assert_eq!(cloned.id, msg.id);


@@ -169,7 +169,7 @@ impl Channel for DingTalkChannel {
             _ => continue,
         };
-        let frame: serde_json::Value = match serde_json::from_str(&msg) {
+        let frame: serde_json::Value = match serde_json::from_str(msg.as_ref()) {
             Ok(v) => v,
             Err(_) => continue,
         };
@@ -195,7 +195,7 @@ impl Channel for DingTalkChannel {
             "data": "",
         });
-        if let Err(e) = write.send(Message::Text(pong.to_string())).await {
+        if let Err(e) = write.send(Message::Text(pong.to_string().into())).await {
             tracing::warn!("DingTalk: failed to send pong: {e}");
             break;
         }
@@ -262,7 +262,7 @@ impl Channel for DingTalkChannel {
             "message": "OK",
             "data": "",
         });
-        let _ = write.send(Message::Text(ack.to_string())).await;
+        let _ = write.send(Message::Text(ack.to_string().into())).await;
         let channel_msg = ChannelMessage {
             id: Uuid::new_v4().to_string(),
@@ -274,6 +274,7 @@ impl Channel for DingTalkChannel {
                 .duration_since(std::time::UNIX_EPOCH)
                 .unwrap_or_default()
                 .as_secs(),
+            thread_ts: None,
         };
         if tx.send(channel_msg).await.is_err() {


@@ -3,6 +3,7 @@ use async_trait::async_trait;
 use futures_util::{SinkExt, StreamExt};
 use parking_lot::Mutex;
 use serde_json::json;
+use std::collections::HashMap;
 use tokio_tungstenite::tungstenite::Message;
 use uuid::Uuid;
@@ -13,7 +14,7 @@ pub struct DiscordChannel {
     allowed_users: Vec<String>,
     listen_to_bots: bool,
     mention_only: bool,
-    typing_handle: Mutex<Option<tokio::task::JoinHandle<()>>>,
+    typing_handles: Mutex<HashMap<String, tokio::task::JoinHandle<()>>>,
 }
 impl DiscordChannel {
@@ -30,7 +31,7 @@ impl DiscordChannel {
         allowed_users,
         listen_to_bots,
         mention_only,
-        typing_handle: Mutex::new(None),
+        typing_handles: Mutex::new(HashMap::new()),
     }
 }
@@ -272,7 +273,9 @@ impl Channel for DiscordChannel {
             }
         }
     });
-    write.send(Message::Text(identify.to_string())).await?;
+    write
+        .send(Message::Text(identify.to_string().into()))
+        .await?;
     tracing::info!("Discord: connected and identified");
@@ -301,7 +304,7 @@ impl Channel for DiscordChannel {
     _ = hb_rx.recv() => {
         let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
         let hb = json!({"op": 1, "d": d});
-        if write.send(Message::Text(hb.to_string())).await.is_err() {
+        if write.send(Message::Text(hb.to_string().into())).await.is_err() {
            break;
        }
     }
@@ -312,7 +315,7 @@ impl Channel for DiscordChannel {
         _ => continue,
     };
-    let event: serde_json::Value = match serde_json::from_str(&msg) {
+    let event: serde_json::Value = match serde_json::from_str(msg.as_ref()) {
         Ok(e) => e,
         Err(_) => continue,
     };
@@ -329,7 +332,7 @@ impl Channel for DiscordChannel {
     1 => {
         let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
         let hb = json!({"op": 1, "d": d});
-        if write.send(Message::Text(hb.to_string())).await.is_err() {
+        if write.send(Message::Text(hb.to_string().into())).await.is_err() {
            break;
        }
        continue;
@@ -413,6 +416,7 @@ impl Channel for DiscordChannel {
             .duration_since(std::time::UNIX_EPOCH)
             .unwrap_or_default()
             .as_secs(),
+        thread_ts: None,
     };
     if tx.send(channel_msg).await.is_err() {
@@ -454,15 +458,15 @@ impl Channel for DiscordChannel {
             }
         });
-        let mut guard = self.typing_handle.lock();
-        *guard = Some(handle);
+        let mut guard = self.typing_handles.lock();
+        guard.insert(recipient.to_string(), handle);
         Ok(())
     }
-    async fn stop_typing(&self, _recipient: &str) -> anyhow::Result<()> {
-        let mut guard = self.typing_handle.lock();
-        if let Some(handle) = guard.take() {
+    async fn stop_typing(&self, recipient: &str) -> anyhow::Result<()> {
+        let mut guard = self.typing_handles.lock();
+        if let Some(handle) = guard.remove(recipient) {
             handle.abort();
         }
         Ok(())
@@ -751,18 +755,18 @@ mod tests {
     }
     #[test]
-    fn typing_handle_starts_as_none() {
+    fn typing_handles_start_empty() {
         let ch = DiscordChannel::new("fake".into(), None, vec![], false, false);
-        let guard = ch.typing_handle.lock();
-        assert!(guard.is_none());
+        let guard = ch.typing_handles.lock();
+        assert!(guard.is_empty());
     }
     #[tokio::test]
     async fn start_typing_sets_handle() {
         let ch = DiscordChannel::new("fake".into(), None, vec![], false, false);
         let _ = ch.start_typing("123456").await;
-        let guard = ch.typing_handle.lock();
-        assert!(guard.is_some());
+        let guard = ch.typing_handles.lock();
+        assert!(guard.contains_key("123456"));
     }
     #[tokio::test]
@@ -770,8 +774,8 @@ mod tests {
         let ch = DiscordChannel::new("fake".into(), None, vec![], false, false);
         let _ = ch.start_typing("123456").await;
         let _ = ch.stop_typing("123456").await;
-        let guard = ch.typing_handle.lock();
-        assert!(guard.is_none());
+        let guard = ch.typing_handles.lock();
+        assert!(!guard.contains_key("123456"));
     }
     #[tokio::test]
@@ -782,12 +786,21 @@ mod tests {
     }
     #[tokio::test]
-    async fn start_typing_replaces_existing_task() {
+    async fn concurrent_typing_handles_are_independent() {
         let ch = DiscordChannel::new("fake".into(), None, vec![], false, false);
         let _ = ch.start_typing("111").await;
         let _ = ch.start_typing("222").await;
-        let guard = ch.typing_handle.lock();
-        assert!(guard.is_some());
+        {
+            let guard = ch.typing_handles.lock();
+            assert_eq!(guard.len(), 2);
+            assert!(guard.contains_key("111"));
+            assert!(guard.contains_key("222"));
+        }
+        // Stopping one does not affect the other
+        let _ = ch.stop_typing("111").await;
+        let guard = ch.typing_handles.lock();
+        assert_eq!(guard.len(), 1);
+        assert!(guard.contains_key("222"));
     }
     // ── Message ID edge cases ─────────────────────────────────────
@@ -840,4 +853,113 @@ mod tests {
         // Should have UUID dashes
         assert!(id.contains('-'));
     }
+    // ─────────────────────────────────────────────────────────────────────
+    // TG6: Channel platform limit edge cases for Discord (2000 char limit)
+    // Prevents: Pattern 6 — issues #574, #499
+    // ─────────────────────────────────────────────────────────────────────
+    #[test]
+    fn split_message_code_block_at_boundary() {
+        // Code block that spans the split boundary
+        let mut msg = String::new();
+        msg.push_str("```rust\n");
+        msg.push_str(&"x".repeat(1990));
+        msg.push_str("\n```\nMore text after code block");
+        let parts = split_message_for_discord(&msg);
+        assert!(
+            parts.len() >= 2,
+            "code block spanning boundary should split"
+        );
+        for part in &parts {
+            assert!(
+                part.len() <= DISCORD_MAX_MESSAGE_LENGTH,
+                "each part must be <= {DISCORD_MAX_MESSAGE_LENGTH}, got {}",
+                part.len()
+            );
+        }
+    }
+    #[test]
+    fn split_message_single_long_word_exceeds_limit() {
+        // A single word longer than 2000 chars must be hard-split
+        let long_word = "a".repeat(2500);
+        let parts = split_message_for_discord(&long_word);
+        assert!(parts.len() >= 2, "word exceeding limit must be split");
+        for part in &parts {
+            assert!(
+                part.len() <= DISCORD_MAX_MESSAGE_LENGTH,
+                "hard-split part must be <= {DISCORD_MAX_MESSAGE_LENGTH}, got {}",
+                part.len()
+            );
+        }
+        // Reassembled content should match original
+        let reassembled: String = parts.join("");
+        assert_eq!(reassembled, long_word);
+    }
+    #[test]
+    fn split_message_exactly_at_limit_no_split() {
+        let msg = "a".repeat(DISCORD_MAX_MESSAGE_LENGTH);
+        let parts = split_message_for_discord(&msg);
+        assert_eq!(parts.len(), 1, "message exactly at limit should not split");
+        assert_eq!(parts[0].len(), DISCORD_MAX_MESSAGE_LENGTH);
+    }
+    #[test]
+    fn split_message_one_over_limit_splits() {
+        let msg = "a".repeat(DISCORD_MAX_MESSAGE_LENGTH + 1);
+        let parts = split_message_for_discord(&msg);
+        assert!(parts.len() >= 2, "message 1 char over limit must split");
+    }
+    #[test]
+    fn split_message_many_short_lines() {
+        // Many short lines should be batched into chunks under the limit
+        let msg: String = (0..500).map(|i| format!("line {i}\n")).collect();
+        let parts = split_message_for_discord(&msg);
+        for part in &parts {
+            assert!(
+                part.len() <= DISCORD_MAX_MESSAGE_LENGTH,
+                "short-line batch must be <= limit"
+            );
+        }
+        // All content should be preserved
+        let reassembled: String = parts.join("");
+        assert_eq!(reassembled.trim(), msg.trim());
+    }
+    #[test]
+    fn split_message_only_whitespace() {
+        let msg = "  \n\n\t  ";
+        let parts = split_message_for_discord(msg);
+        // Should handle gracefully without panic
+        assert!(parts.len() <= 1);
+    }
+    #[test]
+    fn split_message_emoji_at_boundary() {
+        // Emoji are multi-byte; ensure we don't split mid-emoji
+        let mut msg = "a".repeat(1998);
+        msg.push_str("🎉🎊"); // 2 emoji at the boundary (2000 chars total)
+        let parts = split_message_for_discord(&msg);
+        for part in &parts {
+            // The function splits on character count, not byte count
+            assert!(
+                part.chars().count() <= DISCORD_MAX_MESSAGE_LENGTH,
+                "emoji boundary split must respect limit"
+            );
+        }
+    }
+    #[test]
+    fn split_message_consecutive_newlines_at_boundary() {
+        let mut msg = "a".repeat(1995);
+        msg.push_str("\n\n\n\n\n");
+        msg.push_str(&"b".repeat(100));
+        let parts = split_message_for_discord(&msg);
+        for part in &parts {
+            assert!(part.len() <= DISCORD_MAX_MESSAGE_LENGTH);
+        }
+    }
 }
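The core invariant these Discord tests check — no chunk over the limit, splits counted in characters so multi-byte emoji are never bisected, content preserved on reassembly — can be sketched with a plain hard-splitter. This is not the channel's actual `split_message_for_discord`, which additionally prefers newline and code-fence boundaries; `hard_split` and the `DISCORD_LIMIT` constant are illustrative, mirroring the real `DISCORD_MAX_MESSAGE_LENGTH` of 2000.

```rust
const DISCORD_LIMIT: usize = 2000; // mirrors Discord's documented message cap

/// Hard-split on character count so multi-byte characters are never bisected.
fn hard_split(message: &str, limit: usize) -> Vec<String> {
    let mut parts = Vec::new();
    let mut current = String::new();
    let mut count = 0;
    for ch in message.chars() {
        if count == limit {
            // Flush a full chunk and start the next one.
            parts.push(std::mem::take(&mut current));
            count = 0;
        }
        current.push(ch);
        count += 1;
    }
    if !current.is_empty() {
        parts.push(current);
    }
    parts
}
```

Because the counter tracks `char`s rather than bytes, a chunk boundary can never land inside an emoji's UTF-8 encoding, which is exactly what the emoji-at-boundary test asserts.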


@@ -20,6 +20,7 @@ use lettre::{Message, SmtpTransport, Transport};
 use mail_parser::{MessageParser, MimeHeaders};
 use rustls::{ClientConfig, RootCertStore};
 use rustls_pki_types::DnsName;
+use schemars::JsonSchema;
 use serde::{Deserialize, Serialize};
 use std::collections::HashSet;
 use std::sync::Arc;
@@ -35,7 +36,7 @@ use uuid::Uuid;
 use super::traits::{Channel, ChannelMessage, SendMessage};
 /// Email channel configuration
-#[derive(Debug, Clone, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
 pub struct EmailConfig {
     /// IMAP server hostname
     pub imap_host: String,
@@ -153,7 +154,14 @@ impl EmailChannel {
             _ => {}
         }
     }
-    result.split_whitespace().collect::<Vec<_>>().join(" ")
+    let mut normalized = String::with_capacity(result.len());
+    for word in result.split_whitespace() {
+        if !normalized.is_empty() {
+            normalized.push(' ');
+        }
+        normalized.push_str(word);
+    }
+    normalized
 }
 /// Extract the sender address from a parsed email
@@ -442,6 +450,7 @@ impl EmailChannel {
         content: email.content,
         channel: "email".to_string(),
         timestamp: email.timestamp,
+        thread_ts: None,
     };
     if tx.send(msg).await.is_err() {


@@ -231,6 +231,7 @@ end tell"#
             .duration_since(std::time::UNIX_EPOCH)
             .unwrap_or_default()
             .as_secs(),
+        thread_ts: None,
     };
     if tx.send(msg).await.is_err() {


@@ -163,12 +163,17 @@ fn split_message(message: &str, max_bytes: usize) -> Vec<String> {
     // Guard against max_bytes == 0 to prevent infinite loop
     if max_bytes == 0 {
-        let full: String = message
-            .lines()
-            .map(|l| l.trim_end_matches('\r'))
-            .filter(|l| !l.is_empty())
-            .collect::<Vec<_>>()
-            .join(" ");
+        let mut full = String::new();
+        for l in message
+            .lines()
+            .map(|l| l.trim_end_matches('\r'))
+            .filter(|l| !l.is_empty())
+        {
+            if !full.is_empty() {
+                full.push(' ');
+            }
+            full.push_str(l);
+        }
         if full.is_empty() {
             chunks.push(String::new());
         } else {
@@ -455,6 +460,7 @@ impl Channel for IrcChannel {
     "AUTHENTICATE" => {
         // Server sends "AUTHENTICATE +" to request credentials
         if sasl_pending && msg.params.first().is_some_and(|p| p == "+") {
+            // sasl_password is loaded from runtime config, not hard-coded
            if let Some(password) = self.sasl_password.as_deref() {
                let encoded = encode_sasl_plain(&current_nick, password);
                let mut guard = self.writer.lock().await;
@@ -573,6 +579,7 @@ impl Channel for IrcChannel {
             .duration_since(std::time::UNIX_EPOCH)
             .unwrap_or_default()
             .as_secs(),
+        thread_ts: None,
     };
     if tx.send(channel_msg).await.is_err() {

View file

@@ -127,6 +127,12 @@ struct LarkMessage {
/// If no binary frame (pong or event) is received within this window, reconnect.
const WS_HEARTBEAT_TIMEOUT: Duration = Duration::from_secs(300);

+/// Returns true when the WebSocket frame indicates live traffic that should
+/// refresh the heartbeat watchdog.
+fn should_refresh_last_recv(msg: &WsMsg) -> bool {
+    matches!(msg, WsMsg::Binary(_) | WsMsg::Ping(_) | WsMsg::Pong(_))
+}
+
/// Lark/Feishu channel.
///
/// Supports two receive modes (configured via `receive_mode` in config):
@@ -282,7 +288,7 @@ impl LarkChannel {
            payload: None,
        };
        if write
-           .send(WsMsg::Binary(initial_ping.encode_to_vec()))
+           .send(WsMsg::Binary(initial_ping.encode_to_vec().into()))
            .await
            .is_err()
        {
@@ -303,7 +309,7 @@ impl LarkChannel {
            headers: vec![PbHeader { key: "type".into(), value: "ping".into() }],
            payload: None,
        };
-       if write.send(WsMsg::Binary(ping.encode_to_vec())).await.is_err() {
+       if write.send(WsMsg::Binary(ping.encode_to_vec().into())).await.is_err() {
            tracing::warn!("Lark: ping failed, reconnecting");
            break;
        }
@@ -321,11 +327,20 @@ impl LarkChannel {
        msg = read.next() => {
            let raw = match msg {
-               Some(Ok(WsMsg::Binary(b))) => { last_recv = Instant::now(); b }
-               Some(Ok(WsMsg::Ping(d))) => { let _ = write.send(WsMsg::Pong(d)).await; continue; }
-               Some(Ok(WsMsg::Close(_))) | None => { tracing::info!("Lark: WS closed — reconnecting"); break; }
+               Some(Ok(ws_msg)) => {
+                   if should_refresh_last_recv(&ws_msg) {
+                       last_recv = Instant::now();
+                   }
+                   match ws_msg {
+                       WsMsg::Binary(b) => b,
+                       WsMsg::Ping(d) => { let _ = write.send(WsMsg::Pong(d)).await; continue; }
+                       WsMsg::Pong(_) => continue,
+                       WsMsg::Close(_) => { tracing::info!("Lark: WS closed — reconnecting"); break; }
+                       _ => continue,
+                   }
+               }
+               None => { tracing::info!("Lark: WS closed — reconnecting"); break; }
                Some(Err(e)) => { tracing::error!("Lark: WS read error: {e}"); break; }
-               _ => continue,
            };

            let frame = match PbFrame::decode(&raw[..]) {
@@ -363,7 +378,7 @@ impl LarkChannel {
            let mut ack = frame.clone();
            ack.payload = Some(br#"{"code":200,"headers":{},"data":[]}"#.to_vec());
            ack.headers.push(PbHeader { key: "biz_rt".into(), value: "0".into() });
-           let _ = write.send(WsMsg::Binary(ack.encode_to_vec())).await;
+           let _ = write.send(WsMsg::Binary(ack.encode_to_vec().into())).await;
        }

        // Fragment reassembly
@@ -459,6 +474,7 @@ impl LarkChannel {
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap_or_default()
                .as_secs(),
+           thread_ts: None,
        };

        tracing::debug!("Lark WS: message in {}", lark_msg.chat_id);
@@ -620,6 +636,7 @@ impl LarkChannel {
            content: text,
            channel: "lark".to_string(),
            timestamp,
+           thread_ts: None,
        });

        messages
@@ -898,6 +915,21 @@ mod tests {
        assert_eq!(ch.name(), "lark");
    }

+   #[test]
+   fn lark_ws_activity_refreshes_heartbeat_watchdog() {
+       assert!(should_refresh_last_recv(&WsMsg::Binary(
+           vec![1, 2, 3].into()
+       )));
+       assert!(should_refresh_last_recv(&WsMsg::Ping(vec![9, 9].into())));
+       assert!(should_refresh_last_recv(&WsMsg::Pong(vec![8, 8].into())));
+   }
+
+   #[test]
+   fn lark_ws_non_activity_frames_do_not_refresh_heartbeat_watchdog() {
+       assert!(!should_refresh_last_recv(&WsMsg::Text("hello".into())));
+       assert!(!should_refresh_last_recv(&WsMsg::Close(None)));
+   }
+
    #[test]
    fn lark_user_allowed_exact() {
        let ch = make_channel();

793
src/channels/linq.rs Normal file
View file

@@ -0,0 +1,793 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use async_trait::async_trait;
use uuid::Uuid;
/// Linq channel — uses the Linq Partner V3 API for iMessage, RCS, and SMS.
///
/// This channel operates in webhook mode (push-based) rather than polling.
/// Messages are received via the gateway's `/linq` webhook endpoint.
/// The `listen` method here is a keepalive placeholder; actual message handling
/// happens in the gateway when Linq sends webhook events.
pub struct LinqChannel {
api_token: String,
from_phone: String,
allowed_senders: Vec<String>,
client: reqwest::Client,
}
const LINQ_API_BASE: &str = "https://api.linqapp.com/api/partner/v3";
impl LinqChannel {
pub fn new(api_token: String, from_phone: String, allowed_senders: Vec<String>) -> Self {
Self {
api_token,
from_phone,
allowed_senders,
client: reqwest::Client::new(),
}
}
/// Check if a sender phone number is allowed (E.164 format: +1234567890)
fn is_sender_allowed(&self, phone: &str) -> bool {
self.allowed_senders.iter().any(|n| n == "*" || n == phone)
}
/// Get the bot's phone number
pub fn phone_number(&self) -> &str {
&self.from_phone
}
fn media_part_to_image_marker(part: &serde_json::Value) -> Option<String> {
let source = part
.get("url")
.or_else(|| part.get("value"))
.and_then(|value| value.as_str())
.map(str::trim)
.filter(|value| !value.is_empty())?;
let mime_type = part
.get("mime_type")
.and_then(|value| value.as_str())
.map(str::trim)
.unwrap_or_default()
.to_ascii_lowercase();
if !mime_type.starts_with("image/") {
return None;
}
Some(format!("[IMAGE:{source}]"))
}
/// Parse an incoming webhook payload from Linq and extract messages.
///
/// Linq webhook envelope:
/// ```json
/// {
/// "api_version": "v3",
/// "event_type": "message.received",
/// "event_id": "...",
/// "created_at": "...",
/// "trace_id": "...",
/// "data": {
/// "chat_id": "...",
/// "from": "+1...",
/// "recipient_phone": "+1...",
/// "is_from_me": false,
/// "service": "iMessage",
/// "message": {
/// "id": "...",
/// "parts": [{ "type": "text", "value": "..." }]
/// }
/// }
/// }
/// ```
pub fn parse_webhook_payload(&self, payload: &serde_json::Value) -> Vec<ChannelMessage> {
let mut messages = Vec::new();
// Only handle message.received events
let event_type = payload
.get("event_type")
.and_then(|e| e.as_str())
.unwrap_or("");
if event_type != "message.received" {
tracing::debug!("Linq: skipping non-message event: {event_type}");
return messages;
}
let Some(data) = payload.get("data") else {
return messages;
};
// Skip messages sent by the bot itself
if data
.get("is_from_me")
.and_then(|v| v.as_bool())
.unwrap_or(false)
{
tracing::debug!("Linq: skipping is_from_me message");
return messages;
}
// Get sender phone number
let Some(from) = data.get("from").and_then(|f| f.as_str()) else {
return messages;
};
// Normalize to E.164 format
let normalized_from = if from.starts_with('+') {
from.to_string()
} else {
format!("+{from}")
};
// Check allowlist
if !self.is_sender_allowed(&normalized_from) {
tracing::warn!(
"Linq: ignoring message from unauthorized sender: {normalized_from}. \
Add to channels.linq.allowed_senders in config.toml, \
or run `zeroclaw onboard --channels-only` to configure interactively."
);
return messages;
}
// Get chat_id for reply routing
let chat_id = data
.get("chat_id")
.and_then(|c| c.as_str())
.unwrap_or("")
.to_string();
// Extract text from message parts
let Some(message) = data.get("message") else {
return messages;
};
let Some(parts) = message.get("parts").and_then(|p| p.as_array()) else {
return messages;
};
let content_parts: Vec<String> = parts
.iter()
.filter_map(|part| {
let part_type = part.get("type").and_then(|t| t.as_str())?;
match part_type {
"text" => part
.get("value")
.and_then(|v| v.as_str())
.map(ToString::to_string),
"media" | "image" => {
if let Some(marker) = Self::media_part_to_image_marker(part) {
Some(marker)
} else {
tracing::debug!("Linq: skipping unsupported {part_type} part");
None
}
}
_ => {
tracing::debug!("Linq: skipping {part_type} part");
None
}
}
})
.collect();
if content_parts.is_empty() {
return messages;
}
let content = content_parts.join("\n").trim().to_string();
if content.is_empty() {
return messages;
}
// Get timestamp from created_at or use current time
let timestamp = payload
.get("created_at")
.and_then(|t| t.as_str())
.and_then(|t| {
chrono::DateTime::parse_from_rfc3339(t)
.ok()
.map(|dt| dt.timestamp().cast_unsigned())
})
.unwrap_or_else(|| {
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs()
});
// Use chat_id as reply_target so replies go to the right conversation
let reply_target = if chat_id.is_empty() {
normalized_from.clone()
} else {
chat_id
};
messages.push(ChannelMessage {
id: Uuid::new_v4().to_string(),
reply_target,
sender: normalized_from,
content,
channel: "linq".to_string(),
timestamp,
thread_ts: None,
});
messages
}
}
#[async_trait]
impl Channel for LinqChannel {
fn name(&self) -> &str {
"linq"
}
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
// If reply_target looks like a chat_id, send to existing chat.
// Otherwise create a new chat with the recipient phone number.
let recipient = &message.recipient;
let body = serde_json::json!({
"message": {
"parts": [{
"type": "text",
"value": message.content
}]
}
});
// Try sending to existing chat (recipient is chat_id)
let url = format!("{LINQ_API_BASE}/chats/{recipient}/messages");
let resp = self
.client
.post(&url)
.bearer_auth(&self.api_token)
.header("Content-Type", "application/json")
.json(&body)
.send()
.await?;
if resp.status().is_success() {
return Ok(());
}
// If the chat_id-based send failed with 404, try creating a new chat
if resp.status() == reqwest::StatusCode::NOT_FOUND {
let new_chat_body = serde_json::json!({
"from": self.from_phone,
"to": [recipient],
"message": {
"parts": [{
"type": "text",
"value": message.content
}]
}
});
let create_resp = self
.client
.post(format!("{LINQ_API_BASE}/chats"))
.bearer_auth(&self.api_token)
.header("Content-Type", "application/json")
.json(&new_chat_body)
.send()
.await?;
if !create_resp.status().is_success() {
let status = create_resp.status();
let error_body = create_resp.text().await.unwrap_or_default();
tracing::error!("Linq create chat failed: {status} — {error_body}");
anyhow::bail!("Linq API error: {status}");
}
return Ok(());
}
let status = resp.status();
let error_body = resp.text().await.unwrap_or_default();
tracing::error!("Linq send failed: {status} — {error_body}");
anyhow::bail!("Linq API error: {status}");
}
async fn listen(&self, _tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> anyhow::Result<()> {
// Linq uses webhooks (push-based), not polling.
// Messages are received via the gateway's /linq endpoint.
tracing::info!(
"Linq channel active (webhook mode). \
Configure Linq webhook to POST to your gateway's /linq endpoint."
);
// Keep the task alive — it will be cancelled when the channel shuts down
loop {
tokio::time::sleep(std::time::Duration::from_secs(3600)).await;
}
}
async fn health_check(&self) -> bool {
// Check if we can reach the Linq API
let url = format!("{LINQ_API_BASE}/phonenumbers");
self.client
.get(&url)
.bearer_auth(&self.api_token)
.send()
.await
.map(|r| r.status().is_success())
.unwrap_or(false)
}
async fn start_typing(&self, recipient: &str) -> anyhow::Result<()> {
let url = format!("{LINQ_API_BASE}/chats/{recipient}/typing");
let resp = self
.client
.post(&url)
.bearer_auth(&self.api_token)
.send()
.await?;
if !resp.status().is_success() {
tracing::debug!("Linq start_typing failed: {}", resp.status());
}
Ok(())
}
async fn stop_typing(&self, recipient: &str) -> anyhow::Result<()> {
let url = format!("{LINQ_API_BASE}/chats/{recipient}/typing");
let resp = self
.client
.delete(&url)
.bearer_auth(&self.api_token)
.send()
.await?;
if !resp.status().is_success() {
tracing::debug!("Linq stop_typing failed: {}", resp.status());
}
Ok(())
}
}
/// Verify a Linq webhook signature.
///
/// Linq signs webhooks with HMAC-SHA256 over `"{timestamp}.{body}"`.
/// The signature is sent in `X-Webhook-Signature` (hex-encoded) and the
/// timestamp in `X-Webhook-Timestamp`. Reject timestamps older than 300s.
pub fn verify_linq_signature(secret: &str, body: &str, timestamp: &str, signature: &str) -> bool {
use hmac::{Hmac, Mac};
use sha2::Sha256;
// Reject stale timestamps (>300s old)
if let Ok(ts) = timestamp.parse::<i64>() {
let now = chrono::Utc::now().timestamp();
if (now - ts).unsigned_abs() > 300 {
tracing::warn!("Linq: rejecting stale webhook timestamp ({ts}, now={now})");
return false;
}
} else {
tracing::warn!("Linq: invalid webhook timestamp: {timestamp}");
return false;
}
// Compute HMAC-SHA256 over "{timestamp}.{body}"
let message = format!("{timestamp}.{body}");
let Ok(mut mac) = Hmac::<Sha256>::new_from_slice(secret.as_bytes()) else {
return false;
};
mac.update(message.as_bytes());
let signature_hex = signature
.trim()
.strip_prefix("sha256=")
.unwrap_or(signature);
let Ok(provided) = hex::decode(signature_hex.trim()) else {
tracing::warn!("Linq: invalid webhook signature format");
return false;
};
// Constant-time comparison via HMAC verify.
mac.verify_slice(&provided).is_ok()
}
#[cfg(test)]
mod tests {
use super::*;
fn make_channel() -> LinqChannel {
LinqChannel::new(
"test-token".into(),
"+15551234567".into(),
vec!["+1234567890".into()],
)
}
#[test]
fn linq_channel_name() {
let ch = make_channel();
assert_eq!(ch.name(), "linq");
}
#[test]
fn linq_sender_allowed_exact() {
let ch = make_channel();
assert!(ch.is_sender_allowed("+1234567890"));
assert!(!ch.is_sender_allowed("+9876543210"));
}
#[test]
fn linq_sender_allowed_wildcard() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
assert!(ch.is_sender_allowed("+1234567890"));
assert!(ch.is_sender_allowed("+9999999999"));
}
#[test]
fn linq_sender_allowed_empty() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec![]);
assert!(!ch.is_sender_allowed("+1234567890"));
}
#[test]
fn linq_parse_valid_text_message() {
let ch = make_channel();
let payload = serde_json::json!({
"api_version": "v3",
"event_type": "message.received",
"event_id": "evt-123",
"created_at": "2025-01-15T12:00:00Z",
"trace_id": "trace-456",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"recipient_phone": "+15551234567",
"is_from_me": false,
"service": "iMessage",
"message": {
"id": "msg-abc",
"parts": [{
"type": "text",
"value": "Hello ZeroClaw!"
}]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
assert_eq!(msgs[0].sender, "+1234567890");
assert_eq!(msgs[0].content, "Hello ZeroClaw!");
assert_eq!(msgs[0].channel, "linq");
assert_eq!(msgs[0].reply_target, "chat-789");
}
#[test]
fn linq_parse_skip_is_from_me() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": true,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "My own message" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "is_from_me messages should be skipped");
}
#[test]
fn linq_parse_skip_non_message_event() {
let ch = make_channel();
let payload = serde_json::json!({
"event_type": "message.delivered",
"data": {
"chat_id": "chat-789",
"message_id": "msg-abc"
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "Non-message events should be skipped");
}
#[test]
fn linq_parse_unauthorized_sender() {
let ch = make_channel();
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+9999999999",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "Spam" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "Unauthorized senders should be filtered");
}
#[test]
fn linq_parse_empty_payload() {
let ch = make_channel();
let payload = serde_json::json!({});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty());
}
#[test]
fn linq_parse_media_only_translated_to_image_marker() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{
"type": "media",
"url": "https://example.com/image.jpg",
"mime_type": "image/jpeg"
}]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
assert_eq!(msgs[0].content, "[IMAGE:https://example.com/image.jpg]");
}
#[test]
fn linq_parse_media_non_image_still_skipped() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{
"type": "media",
"url": "https://example.com/sound.mp3",
"mime_type": "audio/mpeg"
}]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "Non-image media should still be skipped");
}
#[test]
fn linq_parse_multiple_text_parts() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [
{ "type": "text", "value": "First part" },
{ "type": "text", "value": "Second part" }
]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
assert_eq!(msgs[0].content, "First part\nSecond part");
}
/// Fixture secret used exclusively in signature-verification unit tests (not a real credential).
const TEST_WEBHOOK_SECRET: &str = "test_webhook_secret";
#[test]
fn linq_signature_verification_valid() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
let now = chrono::Utc::now().timestamp().to_string();
// Compute expected signature
use hmac::{Hmac, Mac};
use sha2::Sha256;
let message = format!("{now}.{body}");
let mut mac = Hmac::<Sha256>::new_from_slice(secret.as_bytes()).unwrap();
mac.update(message.as_bytes());
let signature = hex::encode(mac.finalize().into_bytes());
assert!(verify_linq_signature(secret, body, &now, &signature));
}
#[test]
fn linq_signature_verification_invalid() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
let now = chrono::Utc::now().timestamp().to_string();
assert!(!verify_linq_signature(
secret,
body,
&now,
"deadbeefdeadbeefdeadbeef"
));
}
#[test]
fn linq_signature_verification_stale_timestamp() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
// 10 minutes ago — stale
let stale_ts = (chrono::Utc::now().timestamp() - 600).to_string();
// Even with correct signature, stale timestamp should fail
use hmac::{Hmac, Mac};
use sha2::Sha256;
let message = format!("{stale_ts}.{body}");
let mut mac = Hmac::<Sha256>::new_from_slice(secret.as_bytes()).unwrap();
mac.update(message.as_bytes());
let signature = hex::encode(mac.finalize().into_bytes());
assert!(
!verify_linq_signature(secret, body, &stale_ts, &signature),
"Stale timestamps (>300s) should be rejected"
);
}
#[test]
fn linq_signature_verification_accepts_sha256_prefix() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
let now = chrono::Utc::now().timestamp().to_string();
use hmac::{Hmac, Mac};
use sha2::Sha256;
let message = format!("{now}.{body}");
let mut mac = Hmac::<Sha256>::new_from_slice(secret.as_bytes()).unwrap();
mac.update(message.as_bytes());
let signature = format!("sha256={}", hex::encode(mac.finalize().into_bytes()));
assert!(verify_linq_signature(secret, body, &now, &signature));
}
#[test]
fn linq_signature_verification_accepts_uppercase_hex() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
let now = chrono::Utc::now().timestamp().to_string();
use hmac::{Hmac, Mac};
use sha2::Sha256;
let message = format!("{now}.{body}");
let mut mac = Hmac::<Sha256>::new_from_slice(secret.as_bytes()).unwrap();
mac.update(message.as_bytes());
let signature = hex::encode(mac.finalize().into_bytes()).to_ascii_uppercase();
assert!(verify_linq_signature(secret, body, &now, &signature));
}
#[test]
fn linq_parse_normalizes_phone_with_plus() {
let ch = LinqChannel::new(
"tok".into(),
"+15551234567".into(),
vec!["+1234567890".into()],
);
// API sends without +, normalize to +
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "Hi" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
assert_eq!(msgs[0].sender, "+1234567890");
}
#[test]
fn linq_parse_missing_data() {
let ch = make_channel();
let payload = serde_json::json!({
"event_type": "message.received"
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty());
}
#[test]
fn linq_parse_missing_message_parts() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc"
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty());
}
#[test]
fn linq_parse_empty_text_value() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "Empty text should be skipped");
}
#[test]
fn linq_parse_fallback_reply_target_when_no_chat_id() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "Hi" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
// Falls back to sender phone number when no chat_id
assert_eq!(msgs[0].reply_target, "+1234567890");
}
#[test]
fn linq_phone_number_accessor() {
let ch = make_channel();
assert_eq!(ch.phone_number(), "+15551234567");
}
}
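The 300-second freshness window enforced by `verify_linq_signature` above can be exercised on its own; a minimal stdlib-only sketch (the function name `is_fresh` is an illustrative stand-in, not part of the channel API):

```rust
/// Mirrors the stale-timestamp guard in `verify_linq_signature`:
/// a webhook timestamp is accepted only within `window` seconds of `now`,
/// in either direction (covers stale replays and far-future clock skew alike).
fn is_fresh(ts: i64, now: i64, window: u64) -> bool {
    (now - ts).unsigned_abs() <= window
}

fn main() {
    let now = 1_700_000_000_i64;
    assert!(is_fresh(now - 120, now, 300)); // 2 minutes old: accepted
    assert!(!is_fresh(now - 600, now, 300)); // 10 minutes old: rejected
    assert!(!is_fresh(now + 600, now, 300)); // far in the future: rejected
    println!("ok");
}
```

Note that `unsigned_abs` makes the window symmetric, which is why the stale-timestamp test in this file fails verification even with a correct signature.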

View file

@@ -24,7 +24,7 @@ pub struct MatrixChannel {
    access_token: String,
    room_id: String,
    allowed_users: Vec<String>,
-   session_user_id_hint: Option<String>,
+   session_owner_hint: Option<String>,
    session_device_id_hint: Option<String>,
    resolved_room_id_cache: Arc<RwLock<Option<String>>>,
    sdk_client: Arc<OnceCell<MatrixSdkClient>>,
@@ -108,7 +108,7 @@ impl MatrixChannel {
        access_token: String,
        room_id: String,
        allowed_users: Vec<String>,
-       user_id_hint: Option<String>,
+       owner_hint: Option<String>,
        device_id_hint: Option<String>,
    ) -> Self {
        let homeserver = homeserver.trim_end_matches('/').to_string();
@@ -125,7 +125,7 @@ impl MatrixChannel {
        access_token,
        room_id,
        allowed_users,
-       session_user_id_hint: Self::normalize_optional_field(user_id_hint),
+       session_owner_hint: Self::normalize_optional_field(owner_hint),
        session_device_id_hint: Self::normalize_optional_field(device_id_hint),
        resolved_room_id_cache: Arc::new(RwLock::new(None)),
        sdk_client: Arc::new(OnceCell::new()),
@@ -245,7 +245,7 @@ impl MatrixChannel {
        let whoami = match identity {
            Ok(whoami) => Some(whoami),
            Err(error) => {
-               if self.session_user_id_hint.is_some() && self.session_device_id_hint.is_some()
+               if self.session_owner_hint.is_some() && self.session_device_id_hint.is_some()
                {
                    tracing::warn!(
                        "Matrix whoami failed; falling back to configured session hints for E2EE session restore: {error}"
@@ -258,18 +258,18 @@ impl MatrixChannel {
        };

        let resolved_user_id = if let Some(whoami) = whoami.as_ref() {
-           if let Some(hinted) = self.session_user_id_hint.as_ref() {
+           if let Some(hinted) = self.session_owner_hint.as_ref() {
                if hinted != &whoami.user_id {
                    tracing::warn!(
                        "Matrix configured user_id '{}' does not match whoami '{}'; using whoami.",
-                       hinted,
-                       whoami.user_id
+                       crate::security::redact(hinted),
+                       crate::security::redact(&whoami.user_id)
                    );
                }
            }
            whoami.user_id.clone()
        } else {
-           self.session_user_id_hint.clone().ok_or_else(|| {
+           self.session_owner_hint.clone().ok_or_else(|| {
                anyhow::anyhow!(
                    "Matrix session restore requires user_id when whoami is unavailable"
                )
@@ -282,8 +282,8 @@ impl MatrixChannel {
                if whoami_device_id != hinted {
                    tracing::warn!(
                        "Matrix configured device_id '{}' does not match whoami '{}'; using whoami.",
-                       hinted,
-                       whoami_device_id
+                       crate::security::redact(hinted),
+                       crate::security::redact(whoami_device_id)
                    );
                }
                whoami_device_id.clone()
@@ -513,7 +513,7 @@ impl Channel for MatrixChannel {
        let my_user_id: OwnedUserId = match self.get_my_user_id().await {
            Ok(user_id) => user_id.parse()?,
            Err(error) => {
-               if let Some(hinted) = self.session_user_id_hint.as_ref() {
+               if let Some(hinted) = self.session_owner_hint.as_ref() {
                    tracing::warn!(
                        "Matrix whoami failed while resolving listener user_id; using configured user_id hint: {error}"
                    );
@@ -596,6 +596,7 @@ impl Channel for MatrixChannel {
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap_or_default()
                .as_secs(),
+           thread_ts: None,
        };

        let _ = tx.send(msg).await;
@@ -714,7 +715,7 @@ mod tests {
            Some(" DEVICE123 ".to_string()),
        );

-       assert_eq!(ch.session_user_id_hint.as_deref(), Some("@bot:matrix.org"));
+       assert_eq!(ch.session_owner_hint.as_deref(), Some("@bot:matrix.org"));
        assert_eq!(ch.session_device_id_hint.as_deref(), Some("DEVICE123"));
    }

@@ -726,10 +727,10 @@ mod tests {
            "!r:m".to_string(),
            vec![],
            Some(" ".to_string()),
-           Some("".to_string()),
+           Some(String::new()),
        );

-       assert!(ch.session_user_id_hint.is_none());
+       assert!(ch.session_owner_hint.is_none());
        assert!(ch.session_device_id_hint.is_none());
    }

View file

@@ -321,6 +321,7 @@ impl MattermostChannel {
            channel: "mattermost".to_string(),
            #[allow(clippy::cast_sign_loss)]
            timestamp: (create_at / 1000) as u64,
+           thread_ts: None,
        })
    }
}

File diff suppressed because it is too large

View file

@ -11,6 +11,15 @@ use uuid::Uuid;
const QQ_API_BASE: &str = "https://api.sgroup.qq.com"; const QQ_API_BASE: &str = "https://api.sgroup.qq.com";
const QQ_AUTH_URL: &str = "https://bots.qq.com/app/getAppAccessToken"; const QQ_AUTH_URL: &str = "https://bots.qq.com/app/getAppAccessToken";
fn ensure_https(url: &str) -> anyhow::Result<()> {
if !url.starts_with("https://") {
anyhow::bail!(
"Refusing to transmit sensitive data over non-HTTPS URL: URL scheme must be https"
);
}
Ok(())
}
/// Deduplication set capacity — evict half of entries when full. /// Deduplication set capacity — evict half of entries when full.
 const DEDUP_CAPACITY: usize = 10_000;
@@ -196,6 +205,8 @@ impl Channel for QQChannel {
         )
     };
+    ensure_https(&url)?;
     let resp = self
         .http_client()
         .post(&url)
@@ -252,7 +263,9 @@ impl Channel for QQChannel {
         }
     }
 });
-write.send(Message::Text(identify.to_string())).await?;
+write
+    .send(Message::Text(identify.to_string().into()))
+    .await?;
 tracing::info!("QQ: connected and identified");
@@ -276,7 +289,11 @@ impl Channel for QQChannel {
 _ = hb_rx.recv() => {
     let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
     let hb = json!({"op": 1, "d": d});
-    if write.send(Message::Text(hb.to_string())).await.is_err() {
+    if write
+        .send(Message::Text(hb.to_string().into()))
+        .await
+        .is_err()
+    {
         break;
     }
 }
@@ -287,7 +304,7 @@ impl Channel for QQChannel {
     _ => continue,
 };
-let event: serde_json::Value = match serde_json::from_str(&msg) {
+let event: serde_json::Value = match serde_json::from_str(msg.as_ref()) {
     Ok(e) => e,
     Err(_) => continue,
 };
@@ -304,7 +321,11 @@ impl Channel for QQChannel {
 1 => {
     let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
     let hb = json!({"op": 1, "d": d});
-    if write.send(Message::Text(hb.to_string())).await.is_err() {
+    if write
+        .send(Message::Text(hb.to_string().into()))
+        .await
+        .is_err()
+    {
         break;
     }
     continue;
@@ -366,6 +387,7 @@ impl Channel for QQChannel {
         .duration_since(std::time::UNIX_EPOCH)
         .unwrap_or_default()
         .as_secs(),
+    thread_ts: None,
 };
 if tx.send(channel_msg).await.is_err() {
@@ -404,6 +426,7 @@ impl Channel for QQChannel {
         .duration_since(std::time::UNIX_EPOCH)
         .unwrap_or_default()
         .as_secs(),
+    thread_ts: None,
 };
 if tx.send(channel_msg).await.is_err() {
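The `identify.to_string()` → `identify.to_string().into()` change in this hunk suggests the websocket library's `Message::Text` variant no longer accepts a plain `String` but a dedicated UTF-8 payload type (as in newer tungstenite releases), so `.into()` performs the conversion. A minimal stand-in sketch of that pattern — `WsMessage` and `Utf8Payload` are hypothetical names for illustration, not the real websocket API:

```rust
// Hypothetical stand-in for a websocket text payload type that is
// constructed from `String` via `From`, mirroring why `.into()` is needed.
#[derive(Debug, PartialEq)]
struct Utf8Payload(String);

impl From<String> for Utf8Payload {
    fn from(s: String) -> Self {
        Utf8Payload(s)
    }
}

#[derive(Debug, PartialEq)]
enum WsMessage {
    Text(Utf8Payload),
}

// Build a text frame the way the patched QQ channel does: serialize JSON
// to `String`, then let `.into()` perform the String -> Utf8Payload step.
fn text_frame(json: &str) -> WsMessage {
    WsMessage::Text(json.to_string().into())
}
```

Because the payload type implements `From<String>`, call sites only gain the `.into()`; the rest of the send pipeline is unchanged.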


@@ -119,12 +119,18 @@ impl SignalChannel {
     (2..=15).contains(&number.len()) && number.chars().all(|c| c.is_ascii_digit())
 }

+/// Check whether a string is a valid UUID (signal-cli uses these for
+/// privacy-enabled users who have opted out of sharing their phone number).
+fn is_uuid(s: &str) -> bool {
+    Uuid::parse_str(s).is_ok()
+}
+
 fn parse_recipient_target(recipient: &str) -> RecipientTarget {
     if let Some(group_id) = recipient.strip_prefix(GROUP_TARGET_PREFIX) {
         return RecipientTarget::Group(group_id.to_string());
     }
-    if Self::is_e164(recipient) {
+    if Self::is_e164(recipient) || Self::is_uuid(recipient) {
         RecipientTarget::Direct(recipient.to_string())
     } else {
         RecipientTarget::Group(recipient.to_string())
@@ -259,6 +265,7 @@ impl SignalChannel {
     content: text.to_string(),
     channel: "signal".to_string(),
     timestamp: timestamp / 1000, // millis → secs
+    thread_ts: None,
 })
 }
 }
@@ -653,6 +660,15 @@ mod tests {
     );
 }

+#[test]
+fn parse_recipient_target_uuid_is_direct() {
+    let uuid = "a1b2c3d4-e5f6-7890-abcd-ef1234567890";
+    assert_eq!(
+        SignalChannel::parse_recipient_target(uuid),
+        RecipientTarget::Direct(uuid.to_string())
+    );
+}
+
 #[test]
 fn parse_recipient_target_non_e164_plus_is_group() {
     assert_eq!(
@@ -661,6 +677,24 @@ mod tests {
     );
 }

+#[test]
+fn is_uuid_valid() {
+    assert!(SignalChannel::is_uuid(
+        "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
+    ));
+    assert!(SignalChannel::is_uuid(
+        "00000000-0000-0000-0000-000000000000"
+    ));
+}
+
+#[test]
+fn is_uuid_invalid() {
+    assert!(!SignalChannel::is_uuid("+1234567890"));
+    assert!(!SignalChannel::is_uuid("not-a-uuid"));
+    assert!(!SignalChannel::is_uuid("group:abc123"));
+    assert!(!SignalChannel::is_uuid(""));
+}
+
 #[test]
 fn sender_prefers_source_number() {
     let env = Envelope {
@@ -685,6 +719,73 @@ mod tests {
     assert_eq!(SignalChannel::sender(&env), Some("uuid-123".to_string()));
 }

+#[test]
+fn process_envelope_uuid_sender_dm() {
+    let uuid = "a1b2c3d4-e5f6-7890-abcd-ef1234567890";
+    let ch = SignalChannel::new(
+        "http://127.0.0.1:8686".to_string(),
+        "+1234567890".to_string(),
+        None,
+        vec!["*".to_string()],
+        false,
+        false,
+    );
+    let env = Envelope {
+        source: Some(uuid.to_string()),
+        source_number: None,
+        data_message: Some(DataMessage {
+            message: Some("Hello from privacy user".to_string()),
+            timestamp: Some(1_700_000_000_000),
+            group_info: None,
+            attachments: None,
+        }),
+        story_message: None,
+        timestamp: Some(1_700_000_000_000),
+    };
+    let msg = ch.process_envelope(&env).unwrap();
+    assert_eq!(msg.sender, uuid);
+    assert_eq!(msg.reply_target, uuid);
+    assert_eq!(msg.content, "Hello from privacy user");
+    // Verify reply routing: UUID sender in DM should route as Direct
+    let target = SignalChannel::parse_recipient_target(&msg.reply_target);
+    assert_eq!(target, RecipientTarget::Direct(uuid.to_string()));
+}
+
+#[test]
+fn process_envelope_uuid_sender_in_group() {
+    let uuid = "a1b2c3d4-e5f6-7890-abcd-ef1234567890";
+    let ch = SignalChannel::new(
+        "http://127.0.0.1:8686".to_string(),
+        "+1234567890".to_string(),
+        Some("testgroup".to_string()),
+        vec!["*".to_string()],
+        false,
+        false,
+    );
+    let env = Envelope {
+        source: Some(uuid.to_string()),
+        source_number: None,
+        data_message: Some(DataMessage {
+            message: Some("Group msg from privacy user".to_string()),
+            timestamp: Some(1_700_000_000_000),
+            group_info: Some(GroupInfo {
+                group_id: Some("testgroup".to_string()),
+            }),
+            attachments: None,
+        }),
+        story_message: None,
+        timestamp: Some(1_700_000_000_000),
+    };
+    let msg = ch.process_envelope(&env).unwrap();
+    assert_eq!(msg.sender, uuid);
+    assert_eq!(msg.reply_target, "group:testgroup");
+    // Verify reply routing: group message should still route as Group
+    let target = SignalChannel::parse_recipient_target(&msg.reply_target);
+    assert_eq!(target, RecipientTarget::Group("testgroup".to_string()));
+}
+
 #[test]
 fn sender_none_when_both_missing() {
     let env = Envelope {

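The routing rule pinned down by the Signal tests above can be sketched standalone. This version hand-rolls the UUID shape check (8-4-4-4-12 hex groups) instead of calling `Uuid::parse_str`, and inlines a simplified E.164 check, so it is an illustration of the rule only, not the crate-backed implementation:

```rust
#[derive(Debug, PartialEq)]
enum Target {
    Direct(String),
    Group(String),
}

// Hand-rolled check for the canonical 8-4-4-4-12 hyphenated UUID shape.
fn looks_like_uuid(s: &str) -> bool {
    let groups: Vec<&str> = s.split('-').collect();
    groups.len() == 5
        && groups
            .iter()
            .zip([8, 4, 4, 4, 12])
            .all(|(g, len)| g.len() == len && g.chars().all(|c| c.is_ascii_hexdigit()))
}

// Simplified E.164 check mirroring the channel: '+' then 2..=15 digits.
fn looks_like_e164(s: &str) -> bool {
    match s.strip_prefix('+') {
        Some(digits) => {
            (2..=15).contains(&digits.len()) && digits.chars().all(|c| c.is_ascii_digit())
        }
        None => false,
    }
}

// Route like the patched parse_recipient_target: explicit group prefix wins,
// then E.164 numbers and UUIDs go direct, everything else is a group id.
fn route(recipient: &str) -> Target {
    if let Some(group_id) = recipient.strip_prefix("group:") {
        return Target::Group(group_id.to_string());
    }
    if looks_like_e164(recipient) || looks_like_uuid(recipient) {
        Target::Direct(recipient.to_string())
    } else {
        Target::Group(recipient.to_string())
    }
}
```

The point of the commit is the `|| looks_like_uuid(...)` arm: before it, a privacy-mode sender identified only by UUID fell through to the group branch and replies were misrouted.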

@@ -45,6 +45,15 @@ impl SlackChannel {
     .and_then(|u| u.as_str())
     .map(String::from)
 }

+/// Resolve the thread identifier for inbound Slack messages.
+/// Replies carry `thread_ts` (root thread id); top-level messages only have `ts`.
+fn inbound_thread_ts(msg: &serde_json::Value, ts: &str) -> Option<String> {
+    msg.get("thread_ts")
+        .and_then(|t| t.as_str())
+        .or(if ts.is_empty() { None } else { Some(ts) })
+        .map(str::to_string)
+}
 }

 #[async_trait]
@@ -54,11 +63,15 @@ impl Channel for SlackChannel {
 }

 async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
-    let body = serde_json::json!({
+    let mut body = serde_json::json!({
         "channel": message.recipient,
         "text": message.content
     });
+    if let Some(ref ts) = message.thread_ts {
+        body["thread_ts"] = serde_json::json!(ts);
+    }
     let resp = self
         .http_client()
         .post("https://slack.com/api/chat.postMessage")
@@ -170,6 +183,7 @@ impl Channel for SlackChannel {
         .duration_since(std::time::UNIX_EPOCH)
         .unwrap_or_default()
         .as_secs(),
+    thread_ts: Self::inbound_thread_ts(msg, ts),
 };
 if tx.send(channel_msg).await.is_err() {
@@ -303,4 +317,33 @@ mod tests {
     assert!(!id.contains('-')); // No UUID dashes
     assert!(id.starts_with("slack_"));
 }

+#[test]
+fn inbound_thread_ts_prefers_explicit_thread_ts() {
+    let msg = serde_json::json!({
+        "ts": "123.002",
+        "thread_ts": "123.001"
+    });
+    let thread_ts = SlackChannel::inbound_thread_ts(&msg, "123.002");
+    assert_eq!(thread_ts.as_deref(), Some("123.001"));
+}
+
+#[test]
+fn inbound_thread_ts_falls_back_to_ts() {
+    let msg = serde_json::json!({
+        "ts": "123.001"
+    });
+    let thread_ts = SlackChannel::inbound_thread_ts(&msg, "123.001");
+    assert_eq!(thread_ts.as_deref(), Some("123.001"));
+}
+
+#[test]
+fn inbound_thread_ts_none_when_ts_missing() {
+    let msg = serde_json::json!({});
+    let thread_ts = SlackChannel::inbound_thread_ts(&msg, "");
+    assert_eq!(thread_ts, None);
+}
 }
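The fallback order in `inbound_thread_ts` (explicit `thread_ts`, else the message's own `ts`, else nothing) can be shown without `serde_json`; in this sketch the already-extracted fields stand in for the JSON lookups:

```rust
// Mirror of the Slack fallback: a reply carries the root thread's id in
// `thread_ts`; a top-level message only has its own `ts`, which becomes
// the thread root for any replies we post.
fn resolve_thread_ts(explicit_thread_ts: Option<&str>, ts: &str) -> Option<String> {
    explicit_thread_ts
        .or(if ts.is_empty() { None } else { Some(ts) })
        .map(str::to_string)
}
```

Treating a top-level message's `ts` as its thread id means replies always thread under the triggering message instead of flooding the channel.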


@@ -6,10 +6,10 @@
 use async_trait::async_trait;
 use directories::UserDirs;
 use parking_lot::Mutex;
 use reqwest::multipart::{Form, Part};
-use std::fs;
 use std::path::Path;
 use std::sync::{Arc, RwLock};
 use std::time::Duration;
+use tokio::fs;

 /// Telegram's maximum message length for text messages
 const TELEGRAM_MAX_MESSAGE_LENGTH: usize = 4096;
@@ -18,7 +18,7 @@ const TELEGRAM_BIND_COMMAND: &str = "/bind";
 /// Split a message into chunks that respect Telegram's 4096 character limit.
 /// Tries to split at word boundaries when possible, and handles continuation.
 fn split_message_for_telegram(message: &str) -> Vec<String> {
-    if message.len() <= TELEGRAM_MAX_MESSAGE_LENGTH {
+    if message.chars().count() <= TELEGRAM_MAX_MESSAGE_LENGTH {
         return vec![message.to_string()];
     }
@@ -26,29 +26,32 @@ fn split_message_for_telegram(message: &str) -> Vec<String> {
     let mut remaining = message;

     while !remaining.is_empty() {
-        let chunk_end = if remaining.len() <= TELEGRAM_MAX_MESSAGE_LENGTH {
-            remaining.len()
+        // Find the byte offset for the Nth character boundary.
+        let hard_split = remaining
+            .char_indices()
+            .nth(TELEGRAM_MAX_MESSAGE_LENGTH)
+            .map_or(remaining.len(), |(idx, _)| idx);
+        let chunk_end = if hard_split == remaining.len() {
+            hard_split
         } else {
             // Try to find a good break point (newline, then space)
-            let search_area = &remaining[..TELEGRAM_MAX_MESSAGE_LENGTH];
+            let search_area = &remaining[..hard_split];

             // Prefer splitting at newline
             if let Some(pos) = search_area.rfind('\n') {
                 // Don't split if the newline is too close to the start
-                if pos >= TELEGRAM_MAX_MESSAGE_LENGTH / 2 {
+                if search_area[..pos].chars().count() >= TELEGRAM_MAX_MESSAGE_LENGTH / 2 {
                     pos + 1
                 } else {
                     // Try space as fallback
-                    search_area
-                        .rfind(' ')
-                        .unwrap_or(TELEGRAM_MAX_MESSAGE_LENGTH)
-                        + 1
+                    search_area.rfind(' ').unwrap_or(hard_split) + 1
                 }
             } else if let Some(pos) = search_area.rfind(' ') {
                 pos + 1
             } else {
-                // Hard split at the limit
-                TELEGRAM_MAX_MESSAGE_LENGTH
+                // Hard split at character boundary
+                hard_split
             }
         };
@@ -373,7 +376,7 @@ impl TelegramChannel {
     .collect()
 }

-fn load_config_without_env() -> anyhow::Result<Config> {
+async fn load_config_without_env() -> anyhow::Result<Config> {
     let home = UserDirs::new()
         .map(|u| u.home_dir().to_path_buf())
         .context("Could not find home directory")?;
@@ -381,18 +384,23 @@ impl TelegramChannel {
     let config_path = zeroclaw_dir.join("config.toml");

     let contents = fs::read_to_string(&config_path)
+        .await
         .with_context(|| format!("Failed to read config file: {}", config_path.display()))?;
     let mut config: Config = toml::from_str(&contents)
-        .context("Failed to parse config file for Telegram binding")?;
+        .context("Failed to parse config.toml — check [channels.telegram] section for syntax errors")?;
     config.config_path = config_path;
     config.workspace_dir = zeroclaw_dir.join("workspace");
     Ok(config)
 }

-fn persist_allowed_identity_blocking(identity: &str) -> anyhow::Result<()> {
-    let mut config = Self::load_config_without_env()?;
+async fn persist_allowed_identity(&self, identity: &str) -> anyhow::Result<()> {
+    let mut config = Self::load_config_without_env().await?;
     let Some(telegram) = config.channels_config.telegram.as_mut() else {
-        anyhow::bail!("Telegram channel config is missing in config.toml");
+        anyhow::bail!(
+            "Missing [channels.telegram] section in config.toml. \
+             Add bot_token and allowed_users under [channels.telegram], \
+             or run `zeroclaw onboard --channels-only` to configure interactively"
+        );
     };

     let normalized = Self::normalize_identity(identity);
@@ -404,20 +412,13 @@ impl TelegramChannel {
         telegram.allowed_users.push(normalized);
         config
             .save()
+            .await
             .context("Failed to persist Telegram allowlist to config.toml")?;
     }
     Ok(())
 }

-async fn persist_allowed_identity(&self, identity: &str) -> anyhow::Result<()> {
-    let identity = identity.to_string();
-    tokio::task::spawn_blocking(move || Self::persist_allowed_identity_blocking(&identity))
-        .await
-        .map_err(|e| anyhow::anyhow!("Failed to join Telegram bind save task: {e}"))??;
-    Ok(())
-}
-
 fn add_allowed_identity_runtime(&self, identity: &str) {
     let normalized = Self::normalize_identity(identity);
     if normalized.is_empty() {
@@ -600,12 +601,12 @@ impl TelegramChannel {
     let username = username_opt.unwrap_or("unknown");
     let normalized_username = Self::normalize_identity(username);

-    let user_id = message
+    let sender_id = message
         .get("from")
         .and_then(|from| from.get("id"))
         .and_then(serde_json::Value::as_i64);
-    let user_id_str = user_id.map(|id| id.to_string());
-    let normalized_user_id = user_id_str.as_deref().map(Self::normalize_identity);
+    let sender_id_str = sender_id.map(|id| id.to_string());
+    let normalized_sender_id = sender_id_str.as_deref().map(Self::normalize_identity);

     let chat_id = message
         .get("chat")
@@ -619,7 +620,7 @@ impl TelegramChannel {
     };

     let mut identities = vec![normalized_username.as_str()];
-    if let Some(ref id) = normalized_user_id {
+    if let Some(ref id) = normalized_sender_id {
         identities.push(id.as_str());
     }
@@ -629,9 +630,9 @@ impl TelegramChannel {
     if let Some(code) = Self::extract_bind_code(text) {
         if let Some(pairing) = self.pairing.as_ref() {
-            match pairing.try_pair(code) {
+            match pairing.try_pair(code, &chat_id).await {
                 Ok(Some(_token)) => {
-                    let bind_identity = normalized_user_id.clone().or_else(|| {
+                    let bind_identity = normalized_sender_id.clone().or_else(|| {
                         if normalized_username.is_empty() || normalized_username == "unknown" {
                             None
                         } else {
@@ -694,7 +695,7 @@ impl TelegramChannel {
     } else {
         let _ = self
             .send(&SendMessage::new(
-                " Telegram pairing is not active. Ask operator to update allowlist in config.toml.",
+                " Telegram pairing is not active. Ask operator to add your user ID to channels.telegram.allowed_users in config.toml.",
                 &chat_id,
             ))
             .await;
@@ -703,12 +704,12 @@ impl TelegramChannel {
 }

 tracing::warn!(
-    "Telegram: ignoring message from unauthorized user: username={username}, user_id={}. \
+    "Telegram: ignoring message from unauthorized user: username={username}, sender_id={}. \
 Allowlist Telegram username (without '@') or numeric user ID.",
-    user_id_str.as_deref().unwrap_or("unknown")
+    sender_id_str.as_deref().unwrap_or("unknown")
 );

-let suggested_identity = normalized_user_id
+let suggested_identity = normalized_sender_id
     .clone()
     .or_else(|| {
         if normalized_username.is_empty() || normalized_username == "unknown" {
@@ -750,20 +751,20 @@ Allowlist Telegram username (without '@') or numeric user ID.",
     .unwrap_or("unknown")
     .to_string();

-let user_id = message
+let sender_id = message
     .get("from")
     .and_then(|from| from.get("id"))
     .and_then(serde_json::Value::as_i64)
     .map(|id| id.to_string());

 let sender_identity = if username == "unknown" {
-    user_id.clone().unwrap_or_else(|| "unknown".to_string())
+    sender_id.clone().unwrap_or_else(|| "unknown".to_string())
 } else {
     username.clone()
 };

 let mut identities = vec![username.as_str()];
-if let Some(id) = user_id.as_deref() {
+if let Some(id) = sender_id.as_deref() {
     identities.push(id);
 }
@@ -825,6 +826,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
     .duration_since(std::time::UNIX_EPOCH)
     .unwrap_or_default()
     .as_secs(),
+    thread_ts: None,
 })
 }
@@ -1631,6 +1633,37 @@ impl Channel for TelegramChannel {
     .await
 }

+async fn cancel_draft(&self, recipient: &str, message_id: &str) -> anyhow::Result<()> {
+    let (chat_id, _) = Self::parse_reply_target(recipient);
+    self.last_draft_edit.lock().remove(&chat_id);
+
+    let message_id = match message_id.parse::<i64>() {
+        Ok(id) => id,
+        Err(e) => {
+            tracing::debug!("Invalid Telegram draft message_id '{message_id}': {e}");
+            return Ok(());
+        }
+    };
+
+    let response = self
+        .client
+        .post(self.api_url("deleteMessage"))
+        .json(&serde_json::json!({
+            "chat_id": chat_id,
+            "message_id": message_id,
+        }))
+        .send()
+        .await?;
+
+    if !response.status().is_success() {
+        let status = response.status();
+        let body = response.text().await.unwrap_or_default();
+        tracing::debug!("Telegram deleteMessage failed ({status}): {body}");
+    }
+    Ok(())
+}
+
 async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
     // Strip tool_call tags before processing to prevent Markdown parsing failures
     let content = strip_tool_call_tags(&message.content);
@@ -2830,4 +2863,103 @@ mod tests {
     let ch_disabled = TelegramChannel::new("token".into(), vec!["*".into()], false);
     assert!(!ch_disabled.mention_only);
 }

+// ─────────────────────────────────────────────────────────────────────
+// TG6: Channel platform limit edge cases for Telegram (4096 char limit)
+// Prevents: Pattern 6 — issues #574, #499
+// ─────────────────────────────────────────────────────────────────────
+
+#[test]
+fn telegram_split_code_block_at_boundary() {
+    let mut msg = String::new();
+    msg.push_str("```python\n");
+    msg.push_str(&"x".repeat(4085));
+    msg.push_str("\n```\nMore text after code block");
+    let parts = split_message_for_telegram(&msg);
+    assert!(
+        parts.len() >= 2,
+        "code block spanning boundary should split"
+    );
+    for part in &parts {
+        assert!(
+            part.len() <= TELEGRAM_MAX_MESSAGE_LENGTH,
+            "each part must be <= {TELEGRAM_MAX_MESSAGE_LENGTH}, got {}",
+            part.len()
+        );
+    }
+}
+
+#[test]
+fn telegram_split_single_long_word() {
+    let long_word = "a".repeat(5000);
+    let parts = split_message_for_telegram(&long_word);
+    assert!(parts.len() >= 2, "word exceeding limit must be split");
+    for part in &parts {
+        assert!(
+            part.len() <= TELEGRAM_MAX_MESSAGE_LENGTH,
+            "hard-split part must be <= {TELEGRAM_MAX_MESSAGE_LENGTH}, got {}",
+            part.len()
+        );
+    }
+    let reassembled: String = parts.join("");
+    assert_eq!(reassembled, long_word);
+}
+
+#[test]
+fn telegram_split_exactly_at_limit_no_split() {
+    let msg = "a".repeat(TELEGRAM_MAX_MESSAGE_LENGTH);
+    let parts = split_message_for_telegram(&msg);
+    assert_eq!(parts.len(), 1, "message exactly at limit should not split");
+}
+
+#[test]
+fn telegram_split_one_over_limit() {
+    let msg = "a".repeat(TELEGRAM_MAX_MESSAGE_LENGTH + 1);
+    let parts = split_message_for_telegram(&msg);
+    assert!(parts.len() >= 2, "message 1 char over limit must split");
+}
+
+#[test]
+fn telegram_split_many_short_lines() {
+    let msg: String = (0..1000).map(|i| format!("line {i}\n")).collect();
+    let parts = split_message_for_telegram(&msg);
+    for part in &parts {
+        assert!(
+            part.len() <= TELEGRAM_MAX_MESSAGE_LENGTH,
+            "short-line batch must be <= limit"
+        );
+    }
+}
+
+#[test]
+fn telegram_split_only_whitespace() {
+    let msg = "   \n\n\t  ";
+    let parts = split_message_for_telegram(msg);
+    assert!(parts.len() <= 1);
+}
+
+#[test]
+fn telegram_split_emoji_at_boundary() {
+    let mut msg = "a".repeat(4094);
+    msg.push_str("🎉🎊"); // 4096 chars total
+    let parts = split_message_for_telegram(&msg);
+    for part in &parts {
+        // The function splits on character count, not byte count
+        assert!(
+            part.chars().count() <= TELEGRAM_MAX_MESSAGE_LENGTH,
+            "emoji boundary split must respect limit"
+        );
+    }
+}
+
+#[test]
+fn telegram_split_consecutive_newlines() {
+    let mut msg = "a".repeat(4090);
+    msg.push_str("\n\n\n\n\n\n");
+    msg.push_str(&"b".repeat(100));
+    let parts = split_message_for_telegram(&msg);
+    for part in &parts {
+        assert!(part.len() <= TELEGRAM_MAX_MESSAGE_LENGTH);
+    }
+}
 }
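The core of the Telegram fix is indexing by character count rather than byte length: `&s[..4096]` panics whenever byte 4096 falls inside a multi-byte character. A reduced sketch of just the hard-split step (a simplified stand-in for `split_message_for_telegram`, without the newline/space heuristics):

```rust
// Split `message` into chunks of at most `limit` characters, always cutting
// on a char boundary. `char_indices().nth(limit)` yields the byte offset of
// the first character past the limit, so slicing there can never panic.
fn hard_split(message: &str, limit: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut remaining = message;
    while !remaining.is_empty() {
        let cut = remaining
            .char_indices()
            .nth(limit)
            .map_or(remaining.len(), |(idx, _)| idx);
        chunks.push(remaining[..cut].to_string());
        remaining = &remaining[cut..];
    }
    chunks
}
```

With two-byte characters such as `é`, a byte-based split at index 4096 could land mid-character and panic; the char-index version cannot.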


@@ -9,6 +9,9 @@ pub struct ChannelMessage {
     pub content: String,
     pub channel: String,
     pub timestamp: u64,
+    /// Platform thread identifier (e.g. Slack `ts`, Discord thread ID).
+    /// When set, replies should be posted as threaded responses.
+    pub thread_ts: Option<String>,
 }

 /// Message to send through a channel
@@ -17,6 +20,8 @@ pub struct SendMessage {
     pub content: String,
     pub recipient: String,
     pub subject: Option<String>,
+    /// Platform thread identifier for threaded replies (e.g. Slack `thread_ts`).
+    pub thread_ts: Option<String>,
 }

 impl SendMessage {
@@ -26,6 +31,7 @@ impl SendMessage {
         content: content.into(),
         recipient: recipient.into(),
         subject: None,
+        thread_ts: None,
     }
 }
@@ -39,8 +45,15 @@ impl SendMessage {
         content: content.into(),
         recipient: recipient.into(),
         subject: Some(subject.into()),
+        thread_ts: None,
     }
 }

+/// Set the thread identifier for threaded replies.
+pub fn in_thread(mut self, thread_ts: Option<String>) -> Self {
+    self.thread_ts = thread_ts;
+    self
+}
 }

 /// Core channel trait — implement for any messaging platform
@@ -100,6 +113,11 @@ pub trait Channel: Send + Sync {
 ) -> anyhow::Result<()> {
     Ok(())
 }

+/// Cancel and remove a previously sent draft message if the channel supports it.
+async fn cancel_draft(&self, _recipient: &str, _message_id: &str) -> anyhow::Result<()> {
+    Ok(())
+}
 }

 #[cfg(test)]
@@ -129,6 +147,7 @@ mod tests {
     content: "hello".into(),
     channel: "dummy".into(),
     timestamp: 123,
+    thread_ts: None,
 })
 .await
 .map_err(|e| anyhow::anyhow!(e.to_string()))
@@ -144,6 +163,7 @@ mod tests {
     content: "ping".into(),
     channel: "dummy".into(),
     timestamp: 999,
+    thread_ts: None,
 };

 let cloned = message.clone();
@@ -183,6 +203,7 @@ mod tests {
     .finalize_draft("bob", "msg_1", "final text")
     .await
     .is_ok());
+assert!(channel.cancel_draft("bob", "msg_1").await.is_ok());
 }

 #[tokio::test]

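The new `thread_ts` plumbing composes through the `in_thread` builder added to the traits module. A self-contained sketch reproducing the `SendMessage` shape from this diff, to show the intended call pattern (the real struct lives in the channel traits module alongside `subject` and the `with_subject` constructor):

```rust
// Minimal reproduction of the SendMessage shape from this diff.
#[derive(Debug, Clone, PartialEq)]
struct SendMessage {
    content: String,
    recipient: String,
    subject: Option<String>,
    thread_ts: Option<String>,
}

impl SendMessage {
    fn new(content: impl Into<String>, recipient: impl Into<String>) -> Self {
        Self {
            content: content.into(),
            recipient: recipient.into(),
            subject: None,
            thread_ts: None,
        }
    }

    /// Set the thread identifier for threaded replies.
    fn in_thread(mut self, thread_ts: Option<String>) -> Self {
        self.thread_ts = thread_ts;
        self
    }
}
```

An inbound `ChannelMessage` carrying `thread_ts: Some(ts)` can then be answered with `SendMessage::new(reply, recipient).in_thread(msg.thread_ts.clone())`; taking `Option<String>` rather than `String` lets the same call site degrade to a plain top-level message when no thread id exists.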

@@ -8,6 +8,20 @@ use uuid::Uuid;
 /// Messages are received via the gateway's `/whatsapp` webhook endpoint.
 /// The `listen` method here is a no-op placeholder; actual message handling
 /// happens in the gateway when Meta sends webhook events.
+fn ensure_https(url: &str) -> anyhow::Result<()> {
+    if !url.starts_with("https://") {
+        anyhow::bail!(
+            "Refusing to transmit sensitive data over non-HTTPS URL: URL scheme must be https"
+        );
+    }
+    Ok(())
+}
+
+///
+/// # Runtime Negotiation
+///
+/// This Cloud API channel is automatically selected when `phone_number_id` is set in the config.
+/// Use `WhatsAppWebChannel` (with `session_path`) for native Web mode.
 pub struct WhatsAppChannel {
     access_token: String,
     endpoint_id: String,
@@ -85,7 +99,8 @@ impl WhatsAppChannel {
 if !self.is_number_allowed(&normalized_from) {
     tracing::warn!(
         "WhatsApp: ignoring message from unauthorized number: {normalized_from}. \
-         Add to allowed_numbers in config.toml, then run `zeroclaw onboard --channels-only`."
+         Add to channels.whatsapp.allowed_numbers in config.toml, \
+         or run `zeroclaw onboard --channels-only` to configure interactively."
     );
     continue;
 }
@@ -126,6 +141,7 @@ impl WhatsAppChannel {
     content,
     channel: "whatsapp".to_string(),
     timestamp,
+    thread_ts: None,
 });
 }
 }
@@ -165,6 +181,8 @@ impl Channel for WhatsAppChannel {
 }
 });

+ensure_https(&url)?;
+
 let resp = self
     .http_client()
     .post(&url)
@@ -203,6 +221,10 @@ impl Channel for WhatsAppChannel {
 // Check if we can reach the WhatsApp API
 let url = format!("https://graph.facebook.com/v18.0/{}", self.endpoint_id);

+if ensure_https(&url).is_err() {
+    return false;
+}
+
 self.http_client()
     .get(&url)
     .bearer_auth(&self.access_token)
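`ensure_https` is a plain prefix check performed before a bearer token is attached to any request. Replicated standalone (returning `Result<(), String>` instead of `anyhow::Result` to stay dependency-free) so its behavior on edge cases is explicit:

```rust
// Reject any outbound URL that is not https:// before attaching a bearer
// token, mirroring the guard added to the WhatsApp Cloud channel.
fn ensure_https(url: &str) -> Result<(), String> {
    if !url.starts_with("https://") {
        return Err("URL scheme must be https".to_string());
    }
    Ok(())
}
```

Note the check is on the scheme prefix only; it deliberately does not parse or validate the rest of the URL.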

(File diff suppressed because it is too large.)


@@ -0,0 +1,564 @@
//! WhatsApp Web channel using wa-rs (native Rust implementation)
//!
//! This channel provides direct WhatsApp Web integration with:
//! - QR code and pair code linking
//! - End-to-end encryption via Signal Protocol
//! - Full Baileys parity (groups, media, presence, reactions, editing/deletion)
//!
//! # Feature Flag
//!
//! This channel requires the `whatsapp-web` feature flag:
//! ```sh
//! cargo build --features whatsapp-web
//! ```
//!
//! # Configuration
//!
//! ```toml
//! [channels_config.whatsapp]
//! session_path = "~/.zeroclaw/whatsapp-session.db" # Required for Web mode
//! pair_phone = "15551234567" # Optional: for pair code linking
//! allowed_numbers = ["+1234567890", "*"] # Same as Cloud API
//! ```
//!
//! # Runtime Negotiation
//!
//! This channel is automatically selected when `session_path` is set in the config.
//! The Cloud API channel is used when `phone_number_id` is set.
use super::traits::{Channel, ChannelMessage, SendMessage};
use super::whatsapp_storage::RusqliteStore;
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use parking_lot::Mutex;
use std::sync::Arc;
use tokio::select;
/// WhatsApp Web channel using wa-rs with custom rusqlite storage
///
/// # Status: Functional Implementation
///
/// This implementation uses the wa-rs Bot with our custom RusqliteStore backend.
///
/// # Configuration
///
/// ```toml
/// [channels_config.whatsapp]
/// session_path = "~/.zeroclaw/whatsapp-session.db"
/// pair_phone = "15551234567" # Optional
/// allowed_numbers = ["+1234567890", "*"]
/// ```
#[cfg(feature = "whatsapp-web")]
pub struct WhatsAppWebChannel {
/// Session database path
session_path: String,
/// Phone number for pair code linking (optional)
pair_phone: Option<String>,
/// Custom pair code (optional)
pair_code: Option<String>,
/// Allowed phone numbers (E.164 format) or "*" for all
allowed_numbers: Vec<String>,
/// Bot handle for shutdown
bot_handle: Arc<Mutex<Option<tokio::task::JoinHandle<()>>>>,
/// Client handle for sending messages and typing indicators
client: Arc<Mutex<Option<Arc<wa_rs::Client>>>>,
/// Message sender channel
tx: Arc<Mutex<Option<tokio::sync::mpsc::Sender<ChannelMessage>>>>,
}
impl WhatsAppWebChannel {
/// Create a new WhatsApp Web channel
///
/// # Arguments
///
/// * `session_path` - Path to the SQLite session database
/// * `pair_phone` - Optional phone number for pair code linking (format: "15551234567")
/// * `pair_code` - Optional custom pair code (leave empty for auto-generated)
/// * `allowed_numbers` - Phone numbers allowed to interact (E.164 format) or "*" for all
#[cfg(feature = "whatsapp-web")]
pub fn new(
session_path: String,
pair_phone: Option<String>,
pair_code: Option<String>,
allowed_numbers: Vec<String>,
) -> Self {
Self {
session_path,
pair_phone,
pair_code,
allowed_numbers,
bot_handle: Arc::new(Mutex::new(None)),
client: Arc::new(Mutex::new(None)),
tx: Arc::new(Mutex::new(None)),
}
}
/// Check if a phone number is allowed (E.164 format: +1234567890)
#[cfg(feature = "whatsapp-web")]
fn is_number_allowed(&self, phone: &str) -> bool {
self.allowed_numbers.iter().any(|n| n == "*" || n == phone)
}
/// Normalize phone number to E.164 format
#[cfg(feature = "whatsapp-web")]
fn normalize_phone(&self, phone: &str) -> String {
let trimmed = phone.trim();
let user_part = trimmed
.split_once('@')
.map(|(user, _)| user)
.unwrap_or(trimmed);
let normalized_user = user_part.trim_start_matches('+');
        // Both branches were identical: a single leading '+' is always prepended.
        format!("+{normalized_user}")
}
/// Whether the recipient string is a WhatsApp JID (contains a domain suffix).
#[cfg(feature = "whatsapp-web")]
fn is_jid(recipient: &str) -> bool {
recipient.trim().contains('@')
}
/// Convert a recipient to a wa-rs JID.
///
/// Supports:
/// - Full JIDs (e.g. "12345@s.whatsapp.net")
/// - E.164-like numbers (e.g. "+1234567890")
#[cfg(feature = "whatsapp-web")]
fn recipient_to_jid(&self, recipient: &str) -> Result<wa_rs_binary::jid::Jid> {
let trimmed = recipient.trim();
if trimmed.is_empty() {
anyhow::bail!("Recipient cannot be empty");
}
if trimmed.contains('@') {
return trimmed
.parse::<wa_rs_binary::jid::Jid>()
.map_err(|e| anyhow!("Invalid WhatsApp JID `{trimmed}`: {e}"));
}
let digits: String = trimmed.chars().filter(|c| c.is_ascii_digit()).collect();
if digits.is_empty() {
anyhow::bail!("Recipient `{trimmed}` does not contain a valid phone number");
}
Ok(wa_rs_binary::jid::Jid::pn(digits))
}
}
#[cfg(feature = "whatsapp-web")]
#[async_trait]
impl Channel for WhatsAppWebChannel {
fn name(&self) -> &str {
"whatsapp"
}
async fn send(&self, message: &SendMessage) -> Result<()> {
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
// Validate recipient allowlist only for direct phone-number targets.
if !Self::is_jid(&message.recipient) {
let normalized = self.normalize_phone(&message.recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: recipient {} not in allowed list",
message.recipient
);
return Ok(());
}
}
let to = self.recipient_to_jid(&message.recipient)?;
let outgoing = wa_rs_proto::whatsapp::Message {
conversation: Some(message.content.clone()),
..Default::default()
};
let message_id = client.send_message(to, outgoing).await?;
tracing::debug!(
"WhatsApp Web: sent message to {} (id: {})",
message.recipient,
message_id
);
Ok(())
}
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
// Store the sender channel for incoming messages
*self.tx.lock() = Some(tx.clone());
use wa_rs::bot::Bot;
use wa_rs::pair_code::PairCodeOptions;
use wa_rs::store::{Device, DeviceStore};
use wa_rs_binary::jid::JidExt as _;
use wa_rs_core::proto_helpers::MessageExt;
use wa_rs_core::types::events::Event;
use wa_rs_tokio_transport::TokioWebSocketTransportFactory;
use wa_rs_ureq_http::UreqHttpClient;
tracing::info!(
"WhatsApp Web channel starting (session: {})",
self.session_path
);
// Initialize storage backend
let storage = RusqliteStore::new(&self.session_path)?;
let backend = Arc::new(storage);
// Check if we have a saved device to load
let mut device = Device::new(backend.clone());
if backend.exists().await? {
tracing::info!("WhatsApp Web: found existing session, loading device");
if let Some(core_device) = backend.load().await? {
device.load_from_serializable(core_device);
} else {
anyhow::bail!("Device exists but failed to load");
}
} else {
tracing::info!(
"WhatsApp Web: no existing session, new device will be created during pairing"
);
};
// Create transport factory
let mut transport_factory = TokioWebSocketTransportFactory::new();
if let Ok(ws_url) = std::env::var("WHATSAPP_WS_URL") {
transport_factory = transport_factory.with_url(ws_url);
}
// Create HTTP client for media operations
let http_client = UreqHttpClient::new();
// Build the bot
let tx_clone = tx.clone();
let allowed_numbers = self.allowed_numbers.clone();
let mut builder = Bot::builder()
.with_backend(backend)
.with_transport_factory(transport_factory)
.with_http_client(http_client)
.on_event(move |event, _client| {
let tx_inner = tx_clone.clone();
let allowed_numbers = allowed_numbers.clone();
async move {
match event {
Event::Message(msg, info) => {
// Extract message content
let text = msg.text_content().unwrap_or("");
let sender = info.source.sender.user().to_string();
let chat = info.source.chat.to_string();
tracing::info!(
"WhatsApp Web message from {} in {}: {}",
sender,
chat,
text
);
// Check if sender is allowed
let normalized = if sender.starts_with('+') {
sender.clone()
} else {
format!("+{sender}")
};
if allowed_numbers.iter().any(|n| n == "*" || n == &normalized) {
let trimmed = text.trim();
if trimmed.is_empty() {
tracing::debug!(
"WhatsApp Web: ignoring empty or non-text message from {}",
normalized
);
return;
}
if let Err(e) = tx_inner
.send(ChannelMessage {
id: uuid::Uuid::new_v4().to_string(),
channel: "whatsapp".to_string(),
sender: normalized.clone(),
// Reply to the originating chat JID (DM or group).
reply_target: chat,
content: trimmed.to_string(),
timestamp: chrono::Utc::now().timestamp() as u64,
thread_ts: None,
})
.await
{
tracing::error!("Failed to send message to channel: {}", e);
}
} else {
tracing::warn!("WhatsApp Web: message from {} not in allowed list", normalized);
}
}
Event::Connected(_) => {
tracing::info!("WhatsApp Web connected successfully");
}
Event::LoggedOut(_) => {
tracing::warn!("WhatsApp Web was logged out");
}
Event::StreamError(stream_error) => {
tracing::error!("WhatsApp Web stream error: {:?}", stream_error);
}
Event::PairingCode { code, .. } => {
tracing::info!("WhatsApp Web pair code received: {}", code);
tracing::info!(
"Link your phone by entering this code in WhatsApp > Linked Devices"
);
}
Event::PairingQrCode { code, .. } => {
tracing::info!(
"WhatsApp Web QR code received (scan with WhatsApp > Linked Devices)"
);
tracing::debug!("QR code: {}", code);
}
_ => {}
}
}
});
// Configure pair-code flow when a phone number is provided.
if let Some(ref phone) = self.pair_phone {
tracing::info!("WhatsApp Web: pair-code flow enabled for configured phone number");
builder = builder.with_pair_code(PairCodeOptions {
phone_number: phone.clone(),
custom_code: self.pair_code.clone(),
..Default::default()
});
} else if self.pair_code.is_some() {
tracing::warn!(
"WhatsApp Web: pair_code is set but pair_phone is missing; pair code config is ignored"
);
}
let mut bot = builder.build().await?;
*self.client.lock() = Some(bot.client());
// Run the bot
let bot_handle = bot.run().await?;
// Store the bot handle for later shutdown
*self.bot_handle.lock() = Some(bot_handle);
// Wait for shutdown signal
let (_shutdown_tx, mut shutdown_rx) = tokio::sync::broadcast::channel::<()>(1);
select! {
_ = shutdown_rx.recv() => {
tracing::info!("WhatsApp Web channel shutting down");
}
_ = tokio::signal::ctrl_c() => {
tracing::info!("WhatsApp Web channel received Ctrl+C");
}
}
*self.client.lock() = None;
if let Some(handle) = self.bot_handle.lock().take() {
handle.abort();
}
Ok(())
}
async fn health_check(&self) -> bool {
let bot_handle_guard = self.bot_handle.lock();
bot_handle_guard.is_some()
}
async fn start_typing(&self, recipient: &str) -> Result<()> {
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
if !Self::is_jid(recipient) {
let normalized = self.normalize_phone(recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: typing target {} not in allowed list",
recipient
);
return Ok(());
}
}
let to = self.recipient_to_jid(recipient)?;
client
.chatstate()
.send_composing(&to)
.await
.map_err(|e| anyhow!("Failed to send typing state (composing): {e}"))?;
tracing::debug!("WhatsApp Web: start typing for {}", recipient);
Ok(())
}
async fn stop_typing(&self, recipient: &str) -> Result<()> {
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
if !Self::is_jid(recipient) {
let normalized = self.normalize_phone(recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: typing target {} not in allowed list",
recipient
);
return Ok(());
}
}
let to = self.recipient_to_jid(recipient)?;
client
.chatstate()
.send_paused(&to)
.await
.map_err(|e| anyhow!("Failed to send typing state (paused): {e}"))?;
tracing::debug!("WhatsApp Web: stop typing for {}", recipient);
Ok(())
}
}
// Stub implementation when feature is not enabled
#[cfg(not(feature = "whatsapp-web"))]
pub struct WhatsAppWebChannel {
_private: (),
}
#[cfg(not(feature = "whatsapp-web"))]
impl WhatsAppWebChannel {
pub fn new(
_session_path: String,
_pair_phone: Option<String>,
_pair_code: Option<String>,
_allowed_numbers: Vec<String>,
) -> Self {
Self { _private: () }
}
}
#[cfg(not(feature = "whatsapp-web"))]
#[async_trait]
impl Channel for WhatsAppWebChannel {
fn name(&self) -> &str {
"whatsapp"
}
async fn send(&self, _message: &SendMessage) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn listen(&self, _tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn health_check(&self) -> bool {
false
}
async fn start_typing(&self, _recipient: &str) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn stop_typing(&self, _recipient: &str) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[cfg(feature = "whatsapp-web")]
fn make_channel() -> WhatsAppWebChannel {
WhatsAppWebChannel::new(
"/tmp/test-whatsapp.db".into(),
None,
None,
vec!["+1234567890".into()],
)
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_channel_name() {
let ch = make_channel();
assert_eq!(ch.name(), "whatsapp");
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_number_allowed_exact() {
let ch = make_channel();
assert!(ch.is_number_allowed("+1234567890"));
assert!(!ch.is_number_allowed("+9876543210"));
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_number_allowed_wildcard() {
let ch = WhatsAppWebChannel::new("/tmp/test.db".into(), None, None, vec!["*".into()]);
assert!(ch.is_number_allowed("+1234567890"));
assert!(ch.is_number_allowed("+9999999999"));
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_number_denied_empty() {
let ch = WhatsAppWebChannel::new("/tmp/test.db".into(), None, None, vec![]);
// Empty allowlist means "deny all" (matches channel-wide allowlist policy).
assert!(!ch.is_number_allowed("+1234567890"));
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_normalize_phone_adds_plus() {
let ch = make_channel();
assert_eq!(ch.normalize_phone("1234567890"), "+1234567890");
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_normalize_phone_preserves_plus() {
let ch = make_channel();
assert_eq!(ch.normalize_phone("+1234567890"), "+1234567890");
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_normalize_phone_from_jid() {
let ch = make_channel();
assert_eq!(
ch.normalize_phone("1234567890@s.whatsapp.net"),
"+1234567890"
);
}
#[tokio::test]
#[cfg(feature = "whatsapp-web")]
async fn whatsapp_web_health_check_disconnected() {
let ch = make_channel();
assert!(!ch.health_check().await);
}
}
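The recipient-handling rules implemented by `recipient_to_jid` and exercised in the tests above (JID passthrough for inputs containing `@`, digit extraction plus a `+` prefix for everything else) can be sketched standalone. This is an illustrative sketch, not the crate's real API; `normalize_phone` here is a hypothetical free function combining the channel's normalization steps.

```rust
// Standalone sketch of the channel's recipient normalization: inputs with a
// JID suffix such as "@s.whatsapp.net" keep only the user part, then the
// remaining text is reduced to ASCII digits and prefixed with '+'.
fn normalize_phone(recipient: &str) -> Option<String> {
    let trimmed = recipient.trim();
    // Keep only the part before '@' (the JID "user" component, if any).
    let user_part = trimmed.split('@').next().unwrap_or(trimmed);
    let digits: String = user_part.chars().filter(|c| c.is_ascii_digit()).collect();
    if digits.is_empty() {
        None // No usable phone number in the input.
    } else {
        Some(format!("+{digits}"))
    }
}

fn main() {
    assert_eq!(
        normalize_phone("+1 (234) 567-890"),
        Some("+1234567890".to_string())
    );
    assert_eq!(
        normalize_phone("1234567890@s.whatsapp.net"),
        Some("+1234567890".to_string())
    );
    assert_eq!(normalize_phone("not-a-number"), None);
    println!("ok");
}
```

Returning `Option` mirrors the real helper's "bail on no digits" behavior, which is what makes the allowlist comparison against normalized `+`-prefixed numbers reliable.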


@@ -6,14 +6,14 @@ pub use schema::{
build_runtime_proxy_client_with_timeouts, runtime_proxy_config, set_runtime_proxy_config,
AgentConfig, AuditConfig, AutonomyConfig, BrowserComputerUseConfig, BrowserConfig,
ChannelsConfig, ClassificationRule, ComposioConfig, Config, CostConfig, CronConfig,
-DelegateAgentConfig, DiscordConfig, DockerRuntimeConfig, GatewayConfig, HardwareConfig,
-HardwareTransport, HeartbeatConfig, HttpRequestConfig, IMessageConfig, IdentityConfig,
-LarkConfig, MatrixConfig, MemoryConfig, ModelRouteConfig, ObservabilityConfig,
-PeripheralBoardConfig, PeripheralsConfig, ProxyConfig, ProxyScope, QueryClassificationConfig,
-ReliabilityConfig, ResourceLimitsConfig, RuntimeConfig, SandboxBackend, SandboxConfig,
-SchedulerConfig, SecretsConfig, SecurityConfig, SlackConfig, StorageConfig,
-StorageProviderConfig, StorageProviderSection, StreamMode, TelegramConfig, TunnelConfig,
-WebSearchConfig, WebhookConfig,
+DelegateAgentConfig, DiscordConfig, DockerRuntimeConfig, EmbeddingRouteConfig, GatewayConfig,
+HardwareConfig, HardwareTransport, HeartbeatConfig, HttpRequestConfig, IMessageConfig,
+IdentityConfig, LarkConfig, MatrixConfig, MemoryConfig, ModelRouteConfig, MultimodalConfig,
+ObservabilityConfig, PeripheralBoardConfig, PeripheralsConfig, ProxyConfig, ProxyScope,
+QueryClassificationConfig, ReliabilityConfig, ResourceLimitsConfig, RuntimeConfig,
+SandboxBackend, SandboxConfig, SchedulerConfig, SecretsConfig, SecurityConfig, SkillsConfig,
+SlackConfig, StorageConfig, StorageProviderConfig, StorageProviderSection, StreamMode,
+TelegramConfig, TunnelConfig, WebSearchConfig, WebhookConfig,
};

#[cfg(test)]
@@ -36,6 +36,7 @@ mod tests {
allowed_users: vec!["alice".into()],
stream_mode: StreamMode::default(),
draft_update_interval_ms: 1000,
+interrupt_on_new_message: false,
mention_only: false,
};

File diff suppressed because it is too large.


@@ -1,5 +1,6 @@
use crate::config::Config;
-use anyhow::Result;
+use crate::security::SecurityPolicy;
+use anyhow::{bail, Result};

mod schedule;
mod store;

@@ -96,6 +97,58 @@ pub fn handle_command(command: crate::CronCommands, config: &Config) -> Result<()> {
println!(" Cmd : {}", job.command);
Ok(())
}
crate::CronCommands::Update {
id,
expression,
tz,
command,
name,
} => {
if expression.is_none() && tz.is_none() && command.is_none() && name.is_none() {
bail!("At least one of --expression, --tz, --command, or --name must be provided");
}
// Merge expression/tz with the existing schedule so that
// --tz alone updates the timezone and --expression alone
// preserves the existing timezone.
let schedule = if expression.is_some() || tz.is_some() {
let existing = get_job(config, &id)?;
let (existing_expr, existing_tz) = match existing.schedule {
Schedule::Cron {
expr,
tz: existing_tz,
} => (expr, existing_tz),
_ => bail!("Cannot update expression/tz on a non-cron schedule"),
};
Some(Schedule::Cron {
expr: expression.unwrap_or(existing_expr),
tz: tz.or(existing_tz),
})
} else {
None
};
if let Some(ref cmd) = command {
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
if !security.is_command_allowed(cmd) {
bail!("Command blocked by security policy: {cmd}");
}
}
let patch = CronJobPatch {
schedule,
command,
name,
..CronJobPatch::default()
};
let job = update_job(config, &id, patch)?;
println!("\u{2705} Updated cron job {}", job.id);
println!(" Expr: {}", job.expression);
println!(" Next: {}", job.next_run.to_rfc3339());
println!(" Cmd : {}", job.command);
Ok(())
}
crate::CronCommands::Remove { id } => remove_job(config, &id),
crate::CronCommands::Pause { id } => {
pause_job(config, &id)?;

@@ -167,3 +220,197 @@ fn parse_delay(input: &str) -> Result<chrono::Duration> {
};
Ok(duration)
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
fn test_config(tmp: &TempDir) -> Config {
let config = Config {
workspace_dir: tmp.path().join("workspace"),
config_path: tmp.path().join("config.toml"),
..Config::default()
};
std::fs::create_dir_all(&config.workspace_dir).unwrap();
config
}
fn make_job(config: &Config, expr: &str, tz: Option<&str>, cmd: &str) -> CronJob {
add_shell_job(
config,
None,
Schedule::Cron {
expr: expr.into(),
tz: tz.map(Into::into),
},
cmd,
)
.unwrap()
}
fn run_update(
config: &Config,
id: &str,
expression: Option<&str>,
tz: Option<&str>,
command: Option<&str>,
name: Option<&str>,
) -> Result<()> {
handle_command(
crate::CronCommands::Update {
id: id.into(),
expression: expression.map(Into::into),
tz: tz.map(Into::into),
command: command.map(Into::into),
name: name.map(Into::into),
},
config,
)
}
#[test]
fn update_changes_command_via_handler() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo original");
run_update(&config, &job.id, None, None, Some("echo updated"), None).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(updated.command, "echo updated");
assert_eq!(updated.id, job.id);
}
#[test]
fn update_changes_expression_via_handler() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo test");
run_update(&config, &job.id, Some("0 9 * * *"), None, None, None).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(updated.expression, "0 9 * * *");
}
#[test]
fn update_changes_name_via_handler() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo test");
run_update(&config, &job.id, None, None, None, Some("new-name")).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(updated.name.as_deref(), Some("new-name"));
}
#[test]
fn update_tz_alone_sets_timezone() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo test");
run_update(
&config,
&job.id,
None,
Some("America/Los_Angeles"),
None,
None,
)
.unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(
updated.schedule,
Schedule::Cron {
expr: "*/5 * * * *".into(),
tz: Some("America/Los_Angeles".into()),
}
);
}
#[test]
fn update_expression_preserves_existing_tz() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(
&config,
"*/5 * * * *",
Some("America/Los_Angeles"),
"echo test",
);
run_update(&config, &job.id, Some("0 9 * * *"), None, None, None).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(
updated.schedule,
Schedule::Cron {
expr: "0 9 * * *".into(),
tz: Some("America/Los_Angeles".into()),
}
);
}
#[test]
fn update_preserves_unchanged_fields() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = add_shell_job(
&config,
Some("original-name".into()),
Schedule::Cron {
expr: "*/5 * * * *".into(),
tz: None,
},
"echo original",
)
.unwrap();
run_update(&config, &job.id, None, None, Some("echo changed"), None).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(updated.command, "echo changed");
assert_eq!(updated.name.as_deref(), Some("original-name"));
assert_eq!(updated.expression, "*/5 * * * *");
}
#[test]
fn update_no_flags_fails() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo test");
let result = run_update(&config, &job.id, None, None, None, None);
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("At least one of"));
}
#[test]
fn update_nonexistent_job_fails() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let result = run_update(
&config,
"nonexistent-id",
None,
None,
Some("echo test"),
None,
);
assert!(result.is_err());
}
#[test]
fn update_security_allows_safe_command() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
assert!(security.is_command_allowed("echo safe"));
}
}
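The `cron update` handler and its tests above hinge on one merge rule: a partial patch is combined with the stored schedule so that `--tz` alone keeps the existing expression and `--expression` alone keeps the existing timezone. A minimal sketch of that rule, using simplified stand-in types rather than the real `CronJob` schema:

```rust
// Simplified stand-in for the cron schedule being patched.
#[derive(Debug, PartialEq, Clone)]
struct CronSchedule {
    expr: String,
    tz: Option<String>,
}

// Merge rule from the update handler: a supplied flag wins, an omitted
// flag falls back to the stored value.
fn merge_schedule(
    existing: &CronSchedule,
    expression: Option<String>,
    tz: Option<String>,
) -> CronSchedule {
    CronSchedule {
        // `--expression` wins if given, otherwise keep the stored expression.
        expr: expression.unwrap_or_else(|| existing.expr.clone()),
        // `--tz` wins if given, otherwise keep the stored timezone.
        tz: tz.or_else(|| existing.tz.clone()),
    }
}

fn main() {
    let existing = CronSchedule {
        expr: "*/5 * * * *".into(),
        tz: Some("America/Los_Angeles".into()),
    };
    // Updating only the expression preserves the existing timezone,
    // matching the update_expression_preserves_existing_tz test above.
    let merged = merge_schedule(&existing, Some("0 9 * * *".into()), None);
    assert_eq!(merged.expr, "0 9 * * *");
    assert_eq!(merged.tz.as_deref(), Some("America/Los_Angeles"));
    println!("ok");
}
```

The `Option::or` / `unwrap_or` combination is what makes each flag independently optional while rejecting the all-`None` case separately, as the handler does with `bail!`.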


@@ -61,7 +61,7 @@ async fn execute_job_with_retry(
for attempt in 0..=retries {
let (success, output) = match job.job_type {
JobType::Shell => run_job_command(config, security, job).await,
-JobType::Agent => run_agent_job(config, job).await,
+JobType::Agent => run_agent_job(config, security, job).await,
};

last_output = output;

@@ -116,7 +116,31 @@ async fn execute_and_persist_job(
(job.id.clone(), success)
}

-async fn run_agent_job(config: &Config, job: &CronJob) -> (bool, String) {
+async fn run_agent_job(
+config: &Config,
+security: &SecurityPolicy,
+job: &CronJob,
+) -> (bool, String) {
if !security.can_act() {
return (
false,
"blocked by security policy: autonomy is read-only".to_string(),
);
}
if security.is_rate_limited() {
return (
false,
"blocked by security policy: rate limit exceeded".to_string(),
);
}
if !security.record_action() {
return (
false,
"blocked by security policy: action budget exhausted".to_string(),
);
}
let name = job.name.clone().unwrap_or_else(|| "cron-job".to_string());
let prompt = job.prompt.clone().unwrap_or_default();
let prefixed_prompt = format!("[cron:{} {name}] {prompt}", job.id);
@@ -475,13 +499,15 @@ mod tests {
use chrono::{Duration as ChronoDuration, Utc};
use tempfile::TempDir;

-fn test_config(tmp: &TempDir) -> Config {
+async fn test_config(tmp: &TempDir) -> Config {
let config = Config {
workspace_dir: tmp.path().join("workspace"),
config_path: tmp.path().join("config.toml"),
..Config::default()
};
-std::fs::create_dir_all(&config.workspace_dir).unwrap();
+tokio::fs::create_dir_all(&config.workspace_dir)
+.await
+.unwrap();
config
}
@@ -513,7 +539,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_success() {
let tmp = TempDir::new().unwrap();
-let config = test_config(&tmp);
+let config = test_config(&tmp).await;
let job = test_job("echo scheduler-ok");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -526,7 +552,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_failure() {
let tmp = TempDir::new().unwrap();
-let config = test_config(&tmp);
+let config = test_config(&tmp).await;
let job = test_job("ls definitely_missing_file_for_scheduler_test");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -539,7 +565,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_times_out() {
let tmp = TempDir::new().unwrap();
-let mut config = test_config(&tmp);
+let mut config = test_config(&tmp).await;
config.autonomy.allowed_commands = vec!["sleep".into()];
let job = test_job("sleep 1");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -553,7 +579,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_blocks_disallowed_command() {
let tmp = TempDir::new().unwrap();
-let mut config = test_config(&tmp);
+let mut config = test_config(&tmp).await;
config.autonomy.allowed_commands = vec!["echo".into()];
let job = test_job("curl https://evil.example");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -567,7 +593,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_blocks_forbidden_path_argument() {
let tmp = TempDir::new().unwrap();
-let mut config = test_config(&tmp);
+let mut config = test_config(&tmp).await;
config.autonomy.allowed_commands = vec!["cat".into()];
let job = test_job("cat /etc/passwd");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -582,7 +608,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_blocks_readonly_mode() {
let tmp = TempDir::new().unwrap();
-let mut config = test_config(&tmp);
+let mut config = test_config(&tmp).await;
config.autonomy.level = crate::security::AutonomyLevel::ReadOnly;
let job = test_job("echo should-not-run");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -596,7 +622,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_blocks_rate_limited() {
let tmp = TempDir::new().unwrap();
-let mut config = test_config(&tmp);
+let mut config = test_config(&tmp).await;
config.autonomy.max_actions_per_hour = 0;
let job = test_job("echo should-not-run");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -610,16 +636,17 @@ mod tests {
#[tokio::test]
async fn execute_job_with_retry_recovers_after_first_failure() {
let tmp = TempDir::new().unwrap();
-let mut config = test_config(&tmp);
+let mut config = test_config(&tmp).await;
config.reliability.scheduler_retries = 1;
config.reliability.provider_backoff_ms = 1;
config.autonomy.allowed_commands = vec!["sh".into()];
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

-std::fs::write(
+tokio::fs::write(
config.workspace_dir.join("retry-once.sh"),
"#!/bin/sh\nif [ -f retry-ok.flag ]; then\n echo recovered\n exit 0\nfi\ntouch retry-ok.flag\nexit 1\n",
)
+.await
.unwrap();

let job = test_job("sh ./retry-once.sh");
@@ -631,7 +658,7 @@ mod tests {
#[tokio::test]
async fn execute_job_with_retry_exhausts_attempts() {
let tmp = TempDir::new().unwrap();
-let mut config = test_config(&tmp);
+let mut config = test_config(&tmp).await;
config.reliability.scheduler_retries = 1;
config.reliability.provider_backoff_ms = 1;
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -646,23 +673,53 @@ mod tests {
#[tokio::test]
async fn run_agent_job_returns_error_without_provider_key() {
let tmp = TempDir::new().unwrap();
-let config = test_config(&tmp);
+let config = test_config(&tmp).await;
let mut job = test_job("");
job.job_type = JobType::Agent;
job.prompt = Some("Say hello".into());
+let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

-let (success, output) = run_agent_job(&config, &job).await;
-assert!(!success, "Agent job without provider key should fail");
-assert!(
-!output.is_empty(),
-"Expected non-empty error output from failed agent job"
-);
+let (success, output) = run_agent_job(&config, &security, &job).await;
+assert!(!success);
+assert!(output.contains("agent job failed:"));
}

+#[tokio::test]
async fn run_agent_job_blocks_readonly_mode() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
config.autonomy.level = crate::security::AutonomyLevel::ReadOnly;
let mut job = test_job("");
job.job_type = JobType::Agent;
job.prompt = Some("Say hello".into());
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
let (success, output) = run_agent_job(&config, &security, &job).await;
assert!(!success);
assert!(output.contains("blocked by security policy"));
assert!(output.contains("read-only"));
}
#[tokio::test]
async fn run_agent_job_blocks_rate_limited() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
config.autonomy.max_actions_per_hour = 0;
let mut job = test_job("");
job.job_type = JobType::Agent;
job.prompt = Some("Say hello".into());
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
let (success, output) = run_agent_job(&config, &security, &job).await;
assert!(!success);
assert!(output.contains("blocked by security policy"));
assert!(output.contains("rate limit exceeded"));
} }
#[tokio::test]
async fn persist_job_result_records_run_and_reschedules_shell_job() {
let tmp = TempDir::new().unwrap();
-let config = test_config(&tmp);
+let config = test_config(&tmp).await;
let job = cron::add_job(&config, "*/5 * * * *", "echo ok").unwrap();
let started = Utc::now();
let finished = started + ChronoDuration::milliseconds(10);
@@ -679,7 +736,7 @@ mod tests {
#[tokio::test]
async fn persist_job_result_success_deletes_one_shot() {
let tmp = TempDir::new().unwrap();
-let config = test_config(&tmp);
+let config = test_config(&tmp).await;
let at = Utc::now() + ChronoDuration::minutes(10);
let job = cron::add_agent_job(
&config,
@@ -704,7 +761,7 @@ mod tests {
#[tokio::test]
async fn persist_job_result_failure_disables_one_shot() {
let tmp = TempDir::new().unwrap();
-let config = test_config(&tmp);
+let config = test_config(&tmp).await;
let at = Utc::now() + ChronoDuration::minutes(10);
let job = cron::add_agent_job(
&config,
@@ -730,7 +787,7 @@ mod tests {
#[tokio::test]
async fn deliver_if_configured_handles_none_and_invalid_channel() {
let tmp = TempDir::new().unwrap();
-let config = test_config(&tmp);
+let config = test_config(&tmp).await;
let mut job = test_job("echo ok");

assert!(deliver_if_configured(&config, &job, "x").await.is_ok());


@@ -209,17 +209,40 @@ async fn run_heartbeat_worker(config: Config) -> Result<()> {
}

fn has_supervised_channels(config: &Config) -> bool {
-config.channels_config.telegram.is_some()
-|| config.channels_config.discord.is_some()
-|| config.channels_config.slack.is_some()
-|| config.channels_config.imessage.is_some()
-|| config.channels_config.matrix.is_some()
-|| config.channels_config.signal.is_some()
-|| config.channels_config.whatsapp.is_some()
-|| config.channels_config.email.is_some()
-|| config.channels_config.irc.is_some()
-|| config.channels_config.lark.is_some()
-|| config.channels_config.dingtalk.is_some()
+let crate::config::ChannelsConfig {
+cli: _, // `cli` is used only when running the CLI manually
+webhook: _, // Managed by the gateway
+telegram,
+discord,
+slack,
+mattermost,
+imessage,
+matrix,
+signal,
+whatsapp,
+email,
+irc,
+lark,
+dingtalk,
+linq,
+qq,
+..
+} = &config.channels_config;
+telegram.is_some()
+|| discord.is_some()
+|| slack.is_some()
+|| mattermost.is_some()
+|| imessage.is_some()
+|| matrix.is_some()
+|| signal.is_some()
+|| whatsapp.is_some()
+|| email.is_some()
+|| irc.is_some()
+|| lark.is_some()
+|| dingtalk.is_some()
+|| linq.is_some()
+|| qq.is_some()
}
#[cfg(test)]
@@ -298,6 +321,7 @@ mod tests {
allowed_users: vec![],
stream_mode: crate::config::StreamMode::default(),
draft_update_interval_ms: 1000,
+interrupt_on_new_message: false,
mention_only: false,
});
assert!(has_supervised_channels(&config));

@@ -313,4 +337,29 @@ mod tests {
});
assert!(has_supervised_channels(&config));
}
#[test]
fn detects_mattermost_as_supervised_channel() {
let mut config = Config::default();
config.channels_config.mattermost = Some(crate::config::schema::MattermostConfig {
url: "https://mattermost.example.com".into(),
bot_token: "token".into(),
channel_id: Some("channel-id".into()),
allowed_users: vec!["*".into()],
thread_replies: Some(true),
mention_only: Some(false),
});
assert!(has_supervised_channels(&config));
}
#[test]
fn detects_qq_as_supervised_channel() {
let mut config = Config::default();
config.channels_config.qq = Some(crate::config::schema::QQConfig {
app_id: "app-id".into(),
app_secret: "app-secret".into(),
allowed_users: vec!["*".into()],
});
assert!(has_supervised_channels(&config));
}
} }


@@ -344,6 +344,58 @@ fn check_config_semantics(config: &Config, items: &mut Vec<DiagItem>) {
         }
     }
 
+    // Embedding routes validation
+    for route in &config.embedding_routes {
+        if route.hint.trim().is_empty() {
+            items.push(DiagItem::warn(cat, "embedding route with empty hint"));
+        }
+        if let Some(reason) = embedding_provider_validation_error(&route.provider) {
+            items.push(DiagItem::warn(
+                cat,
+                format!(
+                    "embedding route \"{}\" uses invalid provider \"{}\": {}",
+                    route.hint, route.provider, reason
+                ),
+            ));
+        }
+        if route.model.trim().is_empty() {
+            items.push(DiagItem::warn(
+                cat,
+                format!("embedding route \"{}\" has empty model", route.hint),
+            ));
+        }
+        if route.dimensions.is_some_and(|value| value == 0) {
+            items.push(DiagItem::warn(
+                cat,
+                format!(
+                    "embedding route \"{}\" has invalid dimensions=0",
+                    route.hint
+                ),
+            ));
+        }
+    }
+
+    if let Some(hint) = config
+        .memory
+        .embedding_model
+        .strip_prefix("hint:")
+        .map(str::trim)
+        .filter(|value| !value.is_empty())
+    {
+        if !config
+            .embedding_routes
+            .iter()
+            .any(|route| route.hint.trim() == hint)
+        {
+            items.push(DiagItem::warn(
+                cat,
+                format!(
+                    "memory.embedding_model uses hint \"{hint}\" but no matching [[embedding_routes]] entry exists"
+                ),
+            ));
+        }
+    }
+
     // Channel: at least one configured
     let cc = &config.channels_config;
     let has_channel = cc.telegram.is_some()
@@ -396,6 +448,31 @@ fn provider_validation_error(name: &str) -> Option<String> {
     }
 }
 
+fn embedding_provider_validation_error(name: &str) -> Option<String> {
+    let normalized = name.trim();
+    if normalized.eq_ignore_ascii_case("none") || normalized.eq_ignore_ascii_case("openai") {
+        return None;
+    }
+    let Some(url) = normalized.strip_prefix("custom:") else {
+        return Some("supported values: none, openai, custom:<url>".into());
+    };
+    let url = url.trim();
+    if url.is_empty() {
+        return Some("custom provider requires a non-empty URL after 'custom:'".into());
+    }
+    match reqwest::Url::parse(url) {
+        Ok(parsed) if matches!(parsed.scheme(), "http" | "https") => None,
+        Ok(parsed) => Some(format!(
+            "custom provider URL must use http/https, got '{}'",
+            parsed.scheme()
+        )),
+        Err(err) => Some(format!("invalid custom provider URL: {err}")),
+    }
+}
+
 // ── Workspace integrity ──────────────────────────────────────────
 fn check_workspace(config: &Config, items: &mut Vec<DiagItem>) {
@@ -891,6 +968,62 @@ mod tests {
         assert_eq!(route_item.unwrap().severity, Severity::Warn);
     }
 
+    #[test]
+    fn config_validation_warns_empty_embedding_route_model() {
+        let mut config = Config::default();
+        config.embedding_routes = vec![crate::config::EmbeddingRouteConfig {
+            hint: "semantic".into(),
+            provider: "openai".into(),
+            model: String::new(),
+            dimensions: Some(1536),
+            api_key: None,
+        }];
+        let mut items = Vec::new();
+        check_config_semantics(&config, &mut items);
+        let route_item = items.iter().find(|item| {
+            item.message
+                .contains("embedding route \"semantic\" has empty model")
+        });
+        assert!(route_item.is_some());
+        assert_eq!(route_item.unwrap().severity, Severity::Warn);
+    }
+
+    #[test]
+    fn config_validation_warns_invalid_embedding_route_provider() {
+        let mut config = Config::default();
+        config.embedding_routes = vec![crate::config::EmbeddingRouteConfig {
+            hint: "semantic".into(),
+            provider: "groq".into(),
+            model: "text-embedding-3-small".into(),
+            dimensions: None,
+            api_key: None,
+        }];
+        let mut items = Vec::new();
+        check_config_semantics(&config, &mut items);
+        let route_item = items
+            .iter()
+            .find(|item| item.message.contains("uses invalid provider \"groq\""));
+        assert!(route_item.is_some());
+        assert_eq!(route_item.unwrap().severity, Severity::Warn);
+    }
+
+    #[test]
+    fn config_validation_warns_missing_embedding_hint_target() {
+        let mut config = Config::default();
+        config.memory.embedding_model = "hint:semantic".into();
+        let mut items = Vec::new();
+        check_config_semantics(&config, &mut items);
+        let route_item = items.iter().find(|item| {
+            item.message
+                .contains("no matching [[embedding_routes]] entry exists")
+        });
+        assert!(route_item.is_some());
+        assert_eq!(route_item.unwrap().severity, Severity::Warn);
+    }
+
     #[test]
     fn environment_check_finds_git() {
         let mut items = Vec::new();
@@ -910,8 +1043,8 @@ mod tests {
     #[test]
     fn truncate_for_display_preserves_utf8_boundaries() {
-        let preview = truncate_for_display("版本号-alpha-build", 3);
-        assert_eq!(preview, "版本号");
+        let preview = truncate_for_display("🙂example-alpha-build", 3);
+        assert_eq!(preview, "🙂ex");
     }
 
     #[test]


@@ -7,10 +7,10 @@
 //! - Request timeouts (30s) to prevent slow-loris attacks
 //! - Header sanitization (handled by axum/hyper)
 
-use crate::channels::{Channel, SendMessage, WhatsAppChannel};
+use crate::channels::{Channel, LinqChannel, SendMessage, WhatsAppChannel};
 use crate::config::Config;
 use crate::memory::{self, Memory, MemoryCategory};
-use crate::providers::{self, Provider};
+use crate::providers::{self, ChatMessage, Provider, ProviderCapabilityError};
 use crate::runtime;
 use crate::security::pairing::{constant_time_eq, is_public_bind, PairingGuard};
 use crate::security::SecurityPolicy;
@@ -53,6 +53,10 @@ fn whatsapp_memory_key(msg: &crate::channels::traits::ChannelMessage) -> String
     format!("whatsapp_{}_{}", msg.sender, msg.id)
 }
 
+fn linq_memory_key(msg: &crate::channels::traits::ChannelMessage) -> String {
+    format!("linq_{}_{}", msg.sender, msg.id)
+}
+
 fn hash_webhook_secret(value: &str) -> String {
     use sha2::{Digest, Sha256};
@@ -274,6 +278,9 @@ pub struct AppState {
     pub whatsapp: Option<Arc<WhatsAppChannel>>,
     /// `WhatsApp` app secret for webhook signature verification (`X-Hub-Signature-256`)
     pub whatsapp_app_secret: Option<Arc<str>>,
+    pub linq: Option<Arc<LinqChannel>>,
+    /// Linq webhook signing secret for signature verification
+    pub linq_signing_secret: Option<Arc<str>>,
     /// Observability backend for metrics scraping
     pub observer: Arc<dyn crate::observability::Observer>,
 }
@@ -306,6 +313,7 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
            auth_profile_override: None,
            zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
            secrets_encrypt: config.secrets.encrypt,
+           reasoning_enabled: config.runtime.reasoning_enabled,
        },
    )?);
 
    let model = config
@@ -360,12 +368,16 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
    });
 
    // WhatsApp channel (if configured)
-   let whatsapp_channel: Option<Arc<WhatsAppChannel>> =
-       config.channels_config.whatsapp.as_ref().map(|wa| {
-           Arc::new(WhatsAppChannel::new(
-               wa.access_token.clone(),
-               wa.phone_number_id.clone(),
-               wa.verify_token.clone(),
-               wa.allowed_numbers.clone(),
-           ))
-       });
+   let whatsapp_channel: Option<Arc<WhatsAppChannel>> = config
+       .channels_config
+       .whatsapp
+       .as_ref()
+       .filter(|wa| wa.is_cloud_config())
+       .map(|wa| {
+           Arc::new(WhatsAppChannel::new(
+               wa.access_token.clone().unwrap_or_default(),
+               wa.phone_number_id.clone().unwrap_or_default(),
+               wa.verify_token.clone().unwrap_or_default(),
+               wa.allowed_numbers.clone(),
+           ))
+       });
@@ -389,6 +401,34 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
        })
        .map(Arc::from);
 
+   // Linq channel (if configured)
+   let linq_channel: Option<Arc<LinqChannel>> = config.channels_config.linq.as_ref().map(|lq| {
+       Arc::new(LinqChannel::new(
+           lq.api_token.clone(),
+           lq.from_phone.clone(),
+           lq.allowed_senders.clone(),
+       ))
+   });
+
+   // Linq signing secret for webhook signature verification
+   // Priority: environment variable > config file
+   let linq_signing_secret: Option<Arc<str>> = std::env::var("ZEROCLAW_LINQ_SIGNING_SECRET")
+       .ok()
+       .and_then(|secret| {
+           let secret = secret.trim();
+           (!secret.is_empty()).then(|| secret.to_owned())
+       })
+       .or_else(|| {
+           config.channels_config.linq.as_ref().and_then(|lq| {
+               lq.signing_secret
+                   .as_deref()
+                   .map(str::trim)
+                   .filter(|secret| !secret.is_empty())
+                   .map(ToOwned::to_owned)
+           })
+       })
+       .map(Arc::from);
+
    // ── Pairing guard ──────────────────────────────────────
    let pairing = Arc::new(PairingGuard::new(
        config.gateway.require_pairing,
@@ -440,6 +480,9 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
        println!(" GET /whatsapp — Meta webhook verification");
        println!(" POST /whatsapp — WhatsApp message webhook");
    }
+   if linq_channel.is_some() {
+       println!(" POST /linq — Linq message webhook (iMessage/RCS/SMS)");
+   }
    println!(" GET /health — health check");
    println!(" GET /metrics — Prometheus metrics");
    if let Some(code) = pairing.pairing_code() {
@@ -476,6 +519,8 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
        idempotency_store,
        whatsapp: whatsapp_channel,
        whatsapp_app_secret,
+       linq: linq_channel,
+       linq_signing_secret,
        observer,
    };
@@ -487,6 +532,7 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
        .route("/webhook", post(handle_webhook))
        .route("/whatsapp", get(handle_whatsapp_verify))
        .route("/whatsapp", post(handle_whatsapp_message))
+       .route("/linq", post(handle_linq_webhook))
        .with_state(state)
        .layer(RequestBodyLimitLayer::new(MAX_BODY_SIZE))
        .layer(TimeoutLayer::with_status_code(
@@ -542,15 +588,16 @@ async fn handle_metrics(State(state): State<AppState>) -> impl IntoResponse {
 }
 
 /// POST /pair — exchange one-time code for bearer token
+#[axum::debug_handler]
 async fn handle_pair(
    State(state): State<AppState>,
    ConnectInfo(peer_addr): ConnectInfo<SocketAddr>,
    headers: HeaderMap,
 ) -> impl IntoResponse {
-   let client_key =
+   let rate_key =
        client_key_from_request(Some(peer_addr), &headers, state.trust_forwarded_headers);
-   if !state.rate_limiter.allow_pair(&client_key) {
-       tracing::warn!("/pair rate limit exceeded for key: {client_key}");
+   if !state.rate_limiter.allow_pair(&rate_key) {
+       tracing::warn!("/pair rate limit exceeded");
        let err = serde_json::json!({
            "error": "Too many pairing requests. Please retry later.",
            "retry_after": RATE_LIMIT_WINDOW_SECS,
@@ -563,10 +610,10 @@ async fn handle_pair(
        .and_then(|v| v.to_str().ok())
        .unwrap_or("");
 
-   match state.pairing.try_pair(code) {
+   match state.pairing.try_pair(code, &rate_key).await {
        Ok(Some(token)) => {
            tracing::info!("🔐 New client paired successfully");
-           if let Err(err) = persist_pairing_tokens(&state.config, &state.pairing) {
+           if let Err(err) = persist_pairing_tokens(state.config.clone(), &state.pairing).await {
                tracing::error!("🔐 Pairing succeeded but token persistence failed: {err:#}");
                let body = serde_json::json!({
                    "paired": true,
@@ -603,12 +650,66 @@ async fn handle_pair(
    }
 }
 
-fn persist_pairing_tokens(config: &Arc<Mutex<Config>>, pairing: &PairingGuard) -> Result<()> {
+async fn persist_pairing_tokens(config: Arc<Mutex<Config>>, pairing: &PairingGuard) -> Result<()> {
    let paired_tokens = pairing.tokens();
-   let mut cfg = config.lock();
-   cfg.gateway.paired_tokens = paired_tokens;
-   cfg.save()
-       .context("Failed to persist paired tokens to config.toml")
+   // parking_lot's guard is not Send, so we clone the inner config instead of
+   // holding the lock across .await; remove this once async mutexes are used everywhere.
+   let mut updated_cfg = { config.lock().clone() };
+   updated_cfg.gateway.paired_tokens = paired_tokens;
+   updated_cfg
+       .save()
+       .await
+       .context("Failed to persist paired tokens to config.toml")?;
+
+   // Keep shared runtime config in sync with persisted tokens.
+   *config.lock() = updated_cfg;
+
+   Ok(())
+}
+
+async fn run_gateway_chat_with_multimodal(
+   state: &AppState,
+   provider_label: &str,
+   message: &str,
+) -> anyhow::Result<String> {
+   let user_messages = vec![ChatMessage::user(message)];
+   let image_marker_count = crate::multimodal::count_image_markers(&user_messages);
+   if image_marker_count > 0 && !state.provider.supports_vision() {
+       return Err(ProviderCapabilityError {
+           provider: provider_label.to_string(),
+           capability: "vision".to_string(),
+           message: format!(
+               "received {image_marker_count} image marker(s), but this provider does not support vision input"
+           ),
+       }
+       .into());
+   }
+
+   // Keep webhook/gateway prompts aligned with channel behavior by injecting
+   // workspace-aware system context before model invocation.
+   let system_prompt = {
+       let config_guard = state.config.lock();
+       crate::channels::build_system_prompt(
+           &config_guard.workspace_dir,
+           &state.model,
+           &[], // tools - empty for simple chat
+           &[], // skills
+           Some(&config_guard.identity),
+           None, // bootstrap_max_chars - use default
+       )
+   };
+
+   let mut messages = Vec::with_capacity(1 + user_messages.len());
+   messages.push(ChatMessage::system(system_prompt));
+   messages.extend(user_messages);
+
+   let multimodal_config = state.config.lock().multimodal.clone();
+   let prepared =
+       crate::multimodal::prepare_messages_for_provider(&messages, &multimodal_config).await?;
+
+   state
+       .provider
+       .chat_with_history(&prepared.messages, &state.model, state.temperature)
+       .await
 }
 /// Webhook request body
@@ -624,10 +725,10 @@ async fn handle_webhook(
    headers: HeaderMap,
    body: Result<Json<WebhookBody>, axum::extract::rejection::JsonRejection>,
 ) -> impl IntoResponse {
-   let client_key =
+   let rate_key =
        client_key_from_request(Some(peer_addr), &headers, state.trust_forwarded_headers);
-   if !state.rate_limiter.allow_webhook(&client_key) {
-       tracing::warn!("/webhook rate limit exceeded for key: {client_key}");
+   if !state.rate_limiter.allow_webhook(&rate_key) {
+       tracing::warn!("/webhook rate limit exceeded");
        let err = serde_json::json!({
            "error": "Too many webhook requests. Please retry later.",
            "retry_after": RATE_LIMIT_WINDOW_SECS,
@@ -732,11 +833,7 @@ async fn handle_webhook(
        messages_count: 1,
    });
 
-   match state
-       .provider
-       .simple_chat(message, &state.model, state.temperature)
-       .await
-   {
+   match run_gateway_chat_with_multimodal(&state, &provider_label, message).await {
        Ok(response) => {
            let duration = started_at.elapsed();
            state
@@ -920,6 +1017,12 @@ async fn handle_whatsapp_message(
    }
 
    // Process each message
+   let provider_label = state
+       .config
+       .lock()
+       .default_provider
+       .clone()
+       .unwrap_or_else(|| "unknown".to_string());
    for msg in &messages {
        tracing::info!(
            "WhatsApp message from {}: {}",
@@ -936,12 +1039,7 @@ async fn handle_whatsapp_message(
                .await;
        }
 
-       // Call the LLM
-       match state
-           .provider
-           .simple_chat(&msg.content, &state.model, state.temperature)
-           .await
-       {
+       match run_gateway_chat_with_multimodal(&state, &provider_label, &msg.content).await {
            Ok(response) => {
                // Send reply via WhatsApp
                if let Err(e) = wa
@@ -967,6 +1065,120 @@ async fn handle_whatsapp_message(
    (StatusCode::OK, Json(serde_json::json!({"status": "ok"})))
 }
 
+/// POST /linq — incoming message webhook (iMessage/RCS/SMS via Linq)
+async fn handle_linq_webhook(
+   State(state): State<AppState>,
+   headers: HeaderMap,
+   body: Bytes,
+) -> impl IntoResponse {
+   let Some(ref linq) = state.linq else {
+       return (
+           StatusCode::NOT_FOUND,
+           Json(serde_json::json!({"error": "Linq not configured"})),
+       );
+   };
+
+   let body_str = String::from_utf8_lossy(&body);
+
+   // ── Security: Verify X-Webhook-Signature if signing_secret is configured ──
+   if let Some(ref signing_secret) = state.linq_signing_secret {
+       let timestamp = headers
+           .get("X-Webhook-Timestamp")
+           .and_then(|v| v.to_str().ok())
+           .unwrap_or("");
+       let signature = headers
+           .get("X-Webhook-Signature")
+           .and_then(|v| v.to_str().ok())
+           .unwrap_or("");
+       if !crate::channels::linq::verify_linq_signature(
+           signing_secret,
+           &body_str,
+           timestamp,
+           signature,
+       ) {
+           tracing::warn!(
+               "Linq webhook signature verification failed (signature: {})",
+               if signature.is_empty() {
+                   "missing"
+               } else {
+                   "invalid"
+               }
+           );
+           return (
+               StatusCode::UNAUTHORIZED,
+               Json(serde_json::json!({"error": "Invalid signature"})),
+           );
+       }
+   }
+
+   // Parse JSON body
+   let Ok(payload) = serde_json::from_slice::<serde_json::Value>(&body) else {
+       return (
+           StatusCode::BAD_REQUEST,
+           Json(serde_json::json!({"error": "Invalid JSON payload"})),
+       );
+   };
+
+   // Parse messages from the webhook payload
+   let messages = linq.parse_webhook_payload(&payload);
+   if messages.is_empty() {
+       // Acknowledge the webhook even if no messages (could be status/delivery events)
+       return (StatusCode::OK, Json(serde_json::json!({"status": "ok"})));
+   }
+
+   // Process each message
+   let provider_label = state
+       .config
+       .lock()
+       .default_provider
+       .clone()
+       .unwrap_or_else(|| "unknown".to_string());
+   for msg in &messages {
+       tracing::info!(
+           "Linq message from {}: {}",
+           msg.sender,
+           truncate_with_ellipsis(&msg.content, 50)
+       );
+
+       // Auto-save to memory
+       if state.auto_save {
+           let key = linq_memory_key(msg);
+           let _ = state
+               .mem
+               .store(&key, &msg.content, MemoryCategory::Conversation, None)
+               .await;
+       }
+
+       // Call the LLM
+       match run_gateway_chat_with_multimodal(&state, &provider_label, &msg.content).await {
+           Ok(response) => {
+               // Send reply via Linq
+               if let Err(e) = linq
+                   .send(&SendMessage::new(response, &msg.reply_target))
+                   .await
+               {
+                   tracing::error!("Failed to send Linq reply: {e}");
+               }
+           }
+           Err(e) => {
+               tracing::error!("LLM error for Linq message: {e:#}");
+               let _ = linq
+                   .send(&SendMessage::new(
+                       "Sorry, I couldn't process your message right now.",
+                       &msg.reply_target,
+                   ))
+                   .await;
+           }
+       }
+   }
+
+   // Acknowledge the webhook
+   (StatusCode::OK, Json(serde_json::json!({"status": "ok"})))
+}
 #[cfg(test)]
 mod tests {
    use super::*;
@@ -980,6 +1192,13 @@ mod tests {
    use parking_lot::Mutex;
    use std::sync::atomic::{AtomicUsize, Ordering};
 
+   /// Generate a random hex secret at runtime to avoid hard-coded cryptographic values.
+   fn generate_test_secret() -> String {
+       use rand::Rng;
+       let bytes: [u8; 32] = rand::rng().random();
+       hex::encode(bytes)
+   }
+
    #[test]
    fn security_body_limit_is_64kb() {
        assert_eq!(MAX_BODY_SIZE, 65_536);
@@ -1034,6 +1253,8 @@ mod tests {
            idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
            whatsapp: None,
            whatsapp_app_secret: None,
+           linq: None,
+           linq_signing_secret: None,
            observer: Arc::new(crate::observability::NoopObserver),
        };
@@ -1075,6 +1296,8 @@ mod tests {
            idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
            whatsapp: None,
            whatsapp_app_secret: None,
+           linq: None,
+           linq_signing_secret: None,
            observer,
        };
@@ -1221,8 +1444,8 @@ mod tests {
        assert_eq!(normalize_max_keys(1, 10_000), 1);
    }
 
-   #[test]
-   fn persist_pairing_tokens_writes_config_tokens() {
+   #[tokio::test]
+   async fn persist_pairing_tokens_writes_config_tokens() {
        let temp = tempfile::tempdir().unwrap();
        let config_path = temp.path().join("config.toml");
        let workspace_path = temp.path().join("workspace");
@@ -1230,22 +1453,28 @@ mod tests {
        let mut config = Config::default();
        config.config_path = config_path.clone();
        config.workspace_dir = workspace_path;
-       config.save().unwrap();
+       config.save().await.unwrap();
 
        let guard = PairingGuard::new(true, &[]);
        let code = guard.pairing_code().unwrap();
-       let token = guard.try_pair(&code).unwrap().unwrap();
+       let token = guard.try_pair(&code, "test_client").await.unwrap().unwrap();
        assert!(guard.is_authenticated(&token));
 
        let shared_config = Arc::new(Mutex::new(config));
-       persist_pairing_tokens(&shared_config, &guard).unwrap();
+       persist_pairing_tokens(shared_config.clone(), &guard)
+           .await
+           .unwrap();
 
-       let saved = std::fs::read_to_string(config_path).unwrap();
+       let saved = tokio::fs::read_to_string(config_path).await.unwrap();
        let parsed: Config = toml::from_str(&saved).unwrap();
        assert_eq!(parsed.gateway.paired_tokens.len(), 1);
        let persisted = &parsed.gateway.paired_tokens[0];
        assert_eq!(persisted.len(), 64);
        assert!(persisted.chars().all(|c| c.is_ascii_hexdigit()));
+
+       let in_memory = shared_config.lock();
+       assert_eq!(in_memory.gateway.paired_tokens.len(), 1);
+       assert_eq!(&in_memory.gateway.paired_tokens[0], persisted);
    }
    #[test]
@@ -1267,6 +1496,7 @@ mod tests {
            content: "hello".into(),
            channel: "whatsapp".into(),
            timestamp: 1,
+           thread_ts: None,
        };
 
        let key = whatsapp_memory_key(&msg);
@@ -1426,6 +1656,8 @@ mod tests {
            idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
            whatsapp: None,
            whatsapp_app_secret: None,
+           linq: None,
+           linq_signing_secret: None,
            observer: Arc::new(crate::observability::NoopObserver),
        };
@@ -1482,6 +1714,8 @@ mod tests {
            idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
            whatsapp: None,
            whatsapp_app_secret: None,
+           linq: None,
+           linq_signing_secret: None,
            observer: Arc::new(crate::observability::NoopObserver),
        };
@@ -1518,9 +1752,11 @@ mod tests {
    #[test]
    fn webhook_secret_hash_is_deterministic_and_nonempty() {
-       let one = hash_webhook_secret("secret-value");
-       let two = hash_webhook_secret("secret-value");
-       let other = hash_webhook_secret("other-value");
+       let secret_a = generate_test_secret();
+       let secret_b = generate_test_secret();
+       let one = hash_webhook_secret(&secret_a);
+       let two = hash_webhook_secret(&secret_a);
+       let other = hash_webhook_secret(&secret_b);
 
        assert_eq!(one, two);
        assert_ne!(one, other);
@@ -1532,6 +1768,7 @@ mod tests {
        let provider_impl = Arc::new(MockProvider::default());
        let provider: Arc<dyn Provider> = provider_impl.clone();
        let memory: Arc<dyn Memory> = Arc::new(MockMemory);
+       let secret = generate_test_secret();
 
        let state = AppState {
            config: Arc::new(Mutex::new(Config::default())),
@@ -1540,13 +1777,15 @@ mod tests {
            temperature: 0.0,
            mem: memory,
            auto_save: false,
-           webhook_secret_hash: Some(Arc::from(hash_webhook_secret("super-secret"))),
+           webhook_secret_hash: Some(Arc::from(hash_webhook_secret(&secret))),
            pairing: Arc::new(PairingGuard::new(false, &[])),
            trust_forwarded_headers: false,
            rate_limiter: Arc::new(GatewayRateLimiter::new(100, 100, 100)),
            idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
            whatsapp: None,
            whatsapp_app_secret: None,
+           linq: None,
+           linq_signing_secret: None,
            observer: Arc::new(crate::observability::NoopObserver),
        };
@@ -1570,6 +1809,8 @@ mod tests {
        let provider_impl = Arc::new(MockProvider::default());
        let provider: Arc<dyn Provider> = provider_impl.clone();
        let memory: Arc<dyn Memory> = Arc::new(MockMemory);
+       let valid_secret = generate_test_secret();
+       let wrong_secret = generate_test_secret();
 
        let state = AppState {
            config: Arc::new(Mutex::new(Config::default())),
@@ -1578,18 +1819,23 @@ mod tests {
            temperature: 0.0,
            mem: memory,
            auto_save: false,
-           webhook_secret_hash: Some(Arc::from(hash_webhook_secret("super-secret"))),
+           webhook_secret_hash: Some(Arc::from(hash_webhook_secret(&valid_secret))),
            pairing: Arc::new(PairingGuard::new(false, &[])),
            trust_forwarded_headers: false,
            rate_limiter: Arc::new(GatewayRateLimiter::new(100, 100, 100)),
            idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
            whatsapp: None,
            whatsapp_app_secret: None,
+           linq: None,
+           linq_signing_secret: None,
            observer: Arc::new(crate::observability::NoopObserver),
        };
 
        let mut headers = HeaderMap::new();
-       headers.insert("X-Webhook-Secret", HeaderValue::from_static("wrong-secret"));
+       headers.insert(
+           "X-Webhook-Secret",
+           HeaderValue::from_str(&wrong_secret).unwrap(),
+       );
 
        let response = handle_webhook(
            State(state),
@@ -1611,6 +1857,7 @@ mod tests {
        let provider_impl = Arc::new(MockProvider::default());
        let provider: Arc<dyn Provider> = provider_impl.clone();
        let memory: Arc<dyn Memory> = Arc::new(MockMemory);
+       let secret = generate_test_secret();
 
        let state = AppState {
            config: Arc::new(Mutex::new(Config::default())),
@@ -1619,18 +1866,20 @@ mod tests {
            temperature: 0.0,
            mem: memory,
            auto_save: false,
-           webhook_secret_hash: Some(Arc::from(hash_webhook_secret("super-secret"))),
+           webhook_secret_hash: Some(Arc::from(hash_webhook_secret(&secret))),
            pairing: Arc::new(PairingGuard::new(false, &[])),
            trust_forwarded_headers: false,
            rate_limiter: Arc::new(GatewayRateLimiter::new(100, 100, 100)),
            idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
            whatsapp: None,
            whatsapp_app_secret: None,
+           linq: None,
+           linq_signing_secret: None,
            observer: Arc::new(crate::observability::NoopObserver),
        };
 
        let mut headers = HeaderMap::new();
-       headers.insert("X-Webhook-Secret", HeaderValue::from_static("super-secret"));
+       headers.insert("X-Webhook-Secret", HeaderValue::from_str(&secret).unwrap());
 
        let response = handle_webhook(
            State(state),
@ -1666,14 +1915,13 @@ mod tests {
#[test]
fn whatsapp_signature_valid() {
let app_secret = generate_test_secret();
let body = b"test body content";
let signature_header = compute_whatsapp_signature_header(&app_secret, body);
assert!(verify_whatsapp_signature(
&app_secret,
body,
&signature_header
));
@@ -1681,14 +1929,14 @@ mod tests {
#[test]
fn whatsapp_signature_invalid_wrong_secret() {
let app_secret = generate_test_secret();
let wrong_secret = generate_test_secret();
let body = b"test body content";
let signature_header = compute_whatsapp_signature_header(&wrong_secret, body);
assert!(!verify_whatsapp_signature(
&app_secret,
body,
&signature_header
));
@@ -1696,15 +1944,15 @@ mod tests {
#[test]
fn whatsapp_signature_invalid_wrong_body() {
let app_secret = generate_test_secret();
let original_body = b"original body";
let tampered_body = b"tampered body";
let signature_header = compute_whatsapp_signature_header(&app_secret, original_body);
// Verify with tampered body should fail
assert!(!verify_whatsapp_signature(
&app_secret,
tampered_body,
&signature_header
));
@@ -1712,14 +1960,14 @@ mod tests {
#[test]
fn whatsapp_signature_missing_prefix() {
let app_secret = generate_test_secret();
let body = b"test body";
// Signature without "sha256=" prefix
let signature_header = "abc123def456";
assert!(!verify_whatsapp_signature(
&app_secret,
body,
signature_header
));
@@ -1727,22 +1975,22 @@ mod tests {
#[test]
fn whatsapp_signature_empty_header() {
let app_secret = generate_test_secret();
let body = b"test body";
assert!(!verify_whatsapp_signature(&app_secret, body, ""));
}
#[test]
fn whatsapp_signature_invalid_hex() {
let app_secret = generate_test_secret();
let body = b"test body";
// Invalid hex characters
let signature_header = "sha256=not_valid_hex_zzz";
assert!(!verify_whatsapp_signature(
&app_secret,
body,
signature_header
));
@@ -1750,13 +1998,13 @@ mod tests {
#[test]
fn whatsapp_signature_empty_body() {
let app_secret = generate_test_secret();
let body = b"";
let signature_header = compute_whatsapp_signature_header(&app_secret, body);
assert!(verify_whatsapp_signature(
&app_secret,
body,
&signature_header
));
@@ -1764,13 +2012,13 @@ mod tests {
#[test]
fn whatsapp_signature_unicode_body() {
let app_secret = generate_test_secret();
let body = "Hello 🦀 World".as_bytes();
let signature_header = compute_whatsapp_signature_header(&app_secret, body);
assert!(verify_whatsapp_signature(
&app_secret,
body,
&signature_header
));
@@ -1778,13 +2026,13 @@ mod tests {
#[test]
fn whatsapp_signature_json_payload() {
let app_secret = generate_test_secret();
let body = br#"{"entry":[{"changes":[{"value":{"messages":[{"from":"1234567890","text":{"body":"Hello"}}]}}]}]}"#;
let signature_header = compute_whatsapp_signature_header(&app_secret, body);
assert!(verify_whatsapp_signature(
&app_secret,
body,
&signature_header
));
@@ -1792,31 +2040,35 @@ mod tests {
#[test]
fn whatsapp_signature_case_sensitive_prefix() {
let app_secret = generate_test_secret();
let body = b"test body";
let hex_sig = compute_whatsapp_signature_hex(&app_secret, body);
// Wrong case prefix should fail
let wrong_prefix = format!("SHA256={hex_sig}");
assert!(!verify_whatsapp_signature(&app_secret, body, &wrong_prefix));
// Correct prefix should pass
let correct_prefix = format!("sha256={hex_sig}");
assert!(verify_whatsapp_signature(
&app_secret,
body,
&correct_prefix
));
}
#[test]
fn whatsapp_signature_truncated_hex() {
let app_secret = generate_test_secret();
let body = b"test body";
let hex_sig = compute_whatsapp_signature_hex(&app_secret, body);
let truncated = &hex_sig[..32]; // Only half the signature
let signature_header = format!("sha256={truncated}");
assert!(!verify_whatsapp_signature(
&app_secret,
body,
&signature_header
));
@@ -1824,17 +2076,65 @@ mod tests {
#[test]
fn whatsapp_signature_extra_bytes() {
let app_secret = generate_test_secret();
let body = b"test body";
let hex_sig = compute_whatsapp_signature_hex(&app_secret, body);
let extended = format!("{hex_sig}deadbeef");
let signature_header = format!("sha256={extended}");
assert!(!verify_whatsapp_signature(
&app_secret,
body,
&signature_header
));
}
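The truncated-hex, extra-bytes, and case-sensitive-prefix tests above all hinge on two behaviors: strict parsing of the lowercase `sha256=` header prefix, and a length-checked, constant-time digest comparison. Here is a minimal dependency-free sketch of that parsing and comparison logic; the function names are illustrative (the real `verify_whatsapp_signature` also recomputes an HMAC-SHA256 of the body with the app secret, omitted here).

```rust
/// Compare two byte slices in constant time: scan every byte so timing
/// does not leak the position of the first mismatch. A length mismatch
/// (truncated or extended signature) fails outright.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulate differences without early exit
    }
    diff == 0
}

/// Strip the mandatory lowercase "sha256=" prefix and decode the hex digest.
/// Returns None for a wrong-case prefix, odd-length hex, or invalid hex chars.
fn parse_signature_header(header: &str) -> Option<Vec<u8>> {
    let hex = header.strip_prefix("sha256=")?; // case-sensitive, as tested
    if hex.len() % 2 != 0 {
        return None;
    }
    (0..hex.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).ok())
        .collect()
}

fn main() {
    assert!(parse_signature_header("SHA256=deadbeef").is_none()); // wrong-case prefix
    assert!(parse_signature_header("sha256=zz").is_none()); // invalid hex
    let sig = parse_signature_header("sha256=deadbeef").unwrap();
    assert!(constant_time_eq(&sig, &[0xde, 0xad, 0xbe, 0xef]));
    assert!(!constant_time_eq(&sig, &[0xde, 0xad])); // truncated signature
}
```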
// ══════════════════════════════════════════════════════════
// IdempotencyStore Edge-Case Tests
// ══════════════════════════════════════════════════════════
#[test]
fn idempotency_store_allows_different_keys() {
let store = IdempotencyStore::new(Duration::from_secs(60), 100);
assert!(store.record_if_new("key-a"));
assert!(store.record_if_new("key-b"));
assert!(store.record_if_new("key-c"));
assert!(store.record_if_new("key-d"));
}
#[test]
fn idempotency_store_max_keys_clamped_to_one() {
let store = IdempotencyStore::new(Duration::from_secs(60), 0);
assert!(store.record_if_new("only-key"));
assert!(!store.record_if_new("only-key"));
}
#[test]
fn idempotency_store_rapid_duplicate_rejected() {
let store = IdempotencyStore::new(Duration::from_secs(300), 100);
assert!(store.record_if_new("rapid"));
assert!(!store.record_if_new("rapid"));
}
#[test]
fn idempotency_store_accepts_after_ttl_expires() {
let store = IdempotencyStore::new(Duration::from_millis(1), 100);
assert!(store.record_if_new("ttl-key"));
std::thread::sleep(Duration::from_millis(10));
assert!(store.record_if_new("ttl-key"));
}
#[test]
fn idempotency_store_eviction_preserves_newest() {
let store = IdempotencyStore::new(Duration::from_secs(300), 1);
assert!(store.record_if_new("old-key"));
std::thread::sleep(Duration::from_millis(2));
assert!(store.record_if_new("new-key"));
let keys = store.keys.lock();
assert_eq!(keys.len(), 1);
assert!(!keys.contains_key("old-key"));
assert!(keys.contains_key("new-key"));
}
}
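The IdempotencyStore tests pin down a contract: `record_if_new` returns true only for keys unseen within the TTL, a `max_keys` of 0 is clamped to 1, and eviction at capacity keeps the newest key. A minimal std-only sketch of a store satisfying that contract (the real implementation may differ in locking and eviction details):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// TTL + capacity-bounded idempotency store (illustrative sketch).
struct IdempotencyStore {
    keys: Mutex<HashMap<String, Instant>>,
    ttl: Duration,
    max_keys: usize,
}

impl IdempotencyStore {
    fn new(ttl: Duration, max_keys: usize) -> Self {
        Self {
            keys: Mutex::new(HashMap::new()),
            ttl,
            max_keys: max_keys.max(1), // clamp 0 to 1, as the tests expect
        }
    }

    /// Record the key and return true if it was not seen within the TTL;
    /// return false for a duplicate (the caller drops the event).
    fn record_if_new(&self, key: &str) -> bool {
        let now = Instant::now();
        let mut keys = self.keys.lock().unwrap();
        keys.retain(|_, seen| now.duration_since(*seen) < self.ttl); // expire old entries
        if keys.contains_key(key) {
            return false;
        }
        if keys.len() >= self.max_keys {
            // Evict the oldest entry so the newest key is preserved.
            if let Some(oldest) = keys
                .iter()
                .min_by_key(|(_, seen)| **seen)
                .map(|(k, _)| k.clone())
            {
                keys.remove(&oldest);
            }
        }
        keys.insert(key.to_string(), now);
        true
    }
}

fn main() {
    let store = IdempotencyStore::new(Duration::from_millis(50), 1);
    assert!(store.record_if_new("a"));
    assert!(!store.record_if_new("a")); // rapid duplicate rejected
    assert!(store.record_if_new("b")); // capacity 1: "a" evicted, newest kept
    std::thread::sleep(Duration::from_millis(80));
    assert!(store.record_if_new("b")); // accepted again after TTL expiry
}
```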


@@ -1,4 +1,10 @@
//! USB device discovery — enumerate devices and enrich with board registry.
//!
//! USB enumeration via `nusb` is only supported on Linux, macOS, and Windows.
//! On Android (Termux) and other unsupported platforms this module is excluded
//! from compilation; callers in `hardware/mod.rs` fall back to an empty result.
#![cfg(any(target_os = "linux", target_os = "macos", target_os = "windows"))]
use super::registry;
use anyhow::Result;


@@ -4,10 +4,10 @@
pub mod registry;
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
pub mod discover;
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
pub mod introspect;
use crate::config::Config;
@@ -28,8 +28,9 @@ pub struct DiscoveredDevice {
/// Auto-discover connected hardware devices.
/// Returns an empty vec on platforms without hardware support.
pub fn discover_hardware() -> Vec<DiscoveredDevice> {
// USB/serial discovery is behind the "hardware" feature gate and only
// available on platforms where nusb supports device enumeration.
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
{
if let Ok(devices) = discover::list_usb_devices() {
return devices
@@ -102,7 +103,15 @@ pub fn handle_command(cmd: crate::HardwareCommands, _config: &Config) -> Result<
return Ok(());
}
#[cfg(all(feature = "hardware", not(any(target_os = "linux", target_os = "macos", target_os = "windows"))))]
{
let _ = &cmd;
println!("Hardware USB discovery is not supported on this platform.");
println!("Supported platforms: Linux, macOS, Windows.");
return Ok(());
}
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
match cmd {
crate::HardwareCommands::Discover => run_discover(),
crate::HardwareCommands::Introspect { path } => run_introspect(&path),
@@ -110,7 +119,7 @@ pub fn handle_command(cmd: crate::HardwareCommands, _config: &Config) -> Result<
}
}
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
fn run_discover() -> Result<()> {
let devices = discover::list_usb_devices()?;
@@ -138,7 +147,7 @@ fn run_discover() -> Result<()> {
Ok(())
}
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
fn run_introspect(path: &str) -> Result<()> {
let result = introspect::introspect_device(path)?;
@@ -160,7 +169,7 @@ fn run_introspect(path: &str) -> Result<()> {
Ok(())
}
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
fn run_info(chip: &str) -> Result<()> {
#[cfg(feature = "probe")]
{
@@ -192,7 +201,7 @@ fn run_info(chip: &str) -> Result<()> {
}
}
#[cfg(all(feature = "hardware", feature = "probe", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
fn info_via_probe(chip: &str) -> anyhow::Result<()> {
use probe_rs::config::MemoryRegion;
use probe_rs::{Session, SessionConfig};
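The pattern in the hunks above gates each item behind `cfg(all(feature = "hardware", any(target_os = ...)))` and pairs it with a `cfg(not(...))` fallback, so exactly one definition is compiled for any target and callers need no runtime platform checks. A minimal self-contained sketch of the same idea (the feature name mirrors the diff; the bodies are illustrative, not the real `nusb` enumeration):

```rust
// Real builds with the "hardware" feature on a supported OS get this branch.
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
fn list_devices() -> Vec<String> {
    // Placeholder for actual USB enumeration via nusb.
    vec!["usb:0483:374b".to_string()]
}

// Android/Termux and feature-less builds compile the empty fallback instead.
#[cfg(not(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows"))))]
fn list_devices() -> Vec<String> {
    Vec::new()
}

fn main() {
    // Exactly one definition exists at compile time, so this always links;
    // without the cargo feature enabled, the fallback returns an empty vec.
    println!("{} device(s)", list_devices().len());
}
```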


@@ -790,6 +790,7 @@ mod tests {
allowed_users: vec!["user".into()],
stream_mode: StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
});
let entries = all_integrations();


@@ -39,46 +39,49 @@ use clap::Subcommand;
use serde::{Deserialize, Serialize};
pub mod agent;
pub(crate) mod approval;
pub(crate) mod auth;
pub mod channels;
pub mod config;
pub(crate) mod cost;
pub(crate) mod cron;
pub(crate) mod daemon;
pub(crate) mod doctor;
pub mod gateway;
pub(crate) mod hardware;
pub(crate) mod health;
pub(crate) mod heartbeat;
pub(crate) mod identity;
pub(crate) mod integrations;
pub mod memory;
pub(crate) mod migration;
pub(crate) mod multimodal;
pub mod observability;
pub(crate) mod onboard;
pub mod peripherals;
pub mod providers;
pub mod rag;
pub mod runtime;
pub(crate) mod security;
pub(crate) mod service;
pub(crate) mod skills;
pub mod tools;
pub(crate) mod tunnel;
pub(crate) mod util;
pub use config::Config;
/// Service management subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum ServiceCommands {
/// Install daemon service unit for auto-start and restart
Install,
/// Start daemon service
Start,
/// Stop daemon service
Stop,
/// Restart daemon service to apply latest config
Restart,
/// Check daemon service status
Status,
/// Uninstall daemon service unit
@@ -87,7 +90,7 @@ pub enum ServiceCommands {
/// Channel management subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum ChannelCommands {
/// List all configured channels
List,
/// Start all configured channels (handled in main.rs for async)
@@ -95,6 +98,17 @@ pub enum ChannelCommands {
/// Run health checks for configured channels (handled in main.rs for async)
Doctor,
/// Add a new channel configuration
#[command(long_about = "\
Add a new channel configuration.
Provide the channel type and a JSON object with the required \
configuration keys for that channel type.
Supported types: telegram, discord, slack, whatsapp, matrix, imessage, email.
Examples:
zeroclaw channel add telegram '{\"bot_token\":\"...\",\"name\":\"my-bot\"}'
zeroclaw channel add discord '{\"bot_token\":\"...\",\"name\":\"my-discord\"}'")]
Add {
/// Channel type (telegram, discord, slack, whatsapp, matrix, imessage, email)
channel_type: String,
@@ -107,6 +121,16 @@ pub enum ChannelCommands {
name: String,
},
/// Bind a Telegram identity (username or numeric user ID) into allowlist
#[command(long_about = "\
Bind a Telegram identity into the allowlist.
Adds a Telegram username (without the '@' prefix) or numeric user \
ID to the channel allowlist so the agent will respond to messages \
from that identity.
Examples:
zeroclaw channel bind-telegram zeroclaw_user
zeroclaw channel bind-telegram 123456789")]
BindTelegram {
/// Telegram identity to allow (username without '@' or numeric user ID)
identity: String,
@@ -115,12 +139,12 @@ pub enum ChannelCommands {
/// Skills management subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum SkillCommands {
/// List all installed skills
List,
/// Install a new skill from a git URL (HTTPS/SSH) or local path
Install {
/// Source git URL (HTTPS/SSH) or local path
source: String,
},
/// Remove an installed skill
@@ -132,7 +156,7 @@ pub enum SkillCommands {
/// Migration subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum MigrateCommands {
/// Import memory from an `OpenClaw` workspace into this `ZeroClaw` workspace
Openclaw {
/// Optional path to `OpenClaw` workspace (defaults to ~/.openclaw/workspace)
@@ -147,10 +171,20 @@ pub enum MigrateCommands {
/// Cron subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum CronCommands {
/// List all scheduled tasks
List,
/// Add a new scheduled task
#[command(long_about = "\
Add a new recurring scheduled task.
Uses standard 5-field cron syntax: 'min hour day month weekday'. \
Times are evaluated in UTC by default; use --tz with an IANA \
timezone name to override.
Examples:
zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York
zeroclaw cron add '*/30 * * * *' 'Check system health'")]
Add {
/// Cron expression
expression: String,
@@ -161,6 +195,14 @@ pub enum CronCommands {
command: String,
},
/// Add a one-shot scheduled task at an RFC3339 timestamp
#[command(long_about = "\
Add a one-shot task that fires at a specific UTC timestamp.
The timestamp must be in RFC 3339 format (e.g. 2025-01-15T14:00:00Z).
Examples:
zeroclaw cron add-at 2025-01-15T14:00:00Z 'Send reminder'
zeroclaw cron add-at 2025-12-31T23:59:00Z 'Happy New Year!'")]
AddAt {
/// One-shot timestamp in RFC3339 format
at: String,
@@ -168,6 +210,14 @@ pub enum CronCommands {
command: String,
},
/// Add a fixed-interval scheduled task
#[command(long_about = "\
Add a task that repeats at a fixed interval.
Interval is specified in milliseconds. For example, 60000 = 1 minute.
Examples:
zeroclaw cron add-every 60000 'Ping heartbeat' # every minute
zeroclaw cron add-every 3600000 'Hourly report' # every hour")]
AddEvery {
/// Interval in milliseconds
every_ms: u64,
@@ -175,6 +225,16 @@ pub enum CronCommands {
command: String,
},
/// Add a one-shot delayed task (e.g. "30m", "2h", "1d")
#[command(long_about = "\
Add a one-shot task that fires after a delay from now.
Accepts human-readable durations: s (seconds), m (minutes), \
h (hours), d (days).
Examples:
zeroclaw cron once 30m 'Run backup in 30 minutes'
zeroclaw cron once 2h 'Follow up on deployment'
zeroclaw cron once 1d 'Daily check'")]
Once {
/// Delay duration
delay: String,
@@ -186,6 +246,32 @@ pub enum CronCommands {
/// Task ID
id: String,
},
/// Update a scheduled task
#[command(long_about = "\
Update one or more fields of an existing scheduled task.
Only the fields you specify are changed; others remain unchanged.
Examples:
zeroclaw cron update <task-id> --expression '0 8 * * *'
zeroclaw cron update <task-id> --tz Europe/London --name 'Morning check'
zeroclaw cron update <task-id> --command 'Updated message'")]
Update {
/// Task ID
id: String,
/// New cron expression
#[arg(long)]
expression: Option<String>,
/// New IANA timezone
#[arg(long)]
tz: Option<String>,
/// New command to run
#[arg(long)]
command: Option<String>,
/// New job name
#[arg(long)]
name: Option<String>,
},
/// Pause a scheduled task
Pause {
/// Task ID
@@ -200,7 +286,7 @@ pub enum CronCommands {
/// Integration subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum IntegrationCommands {
/// Show details about a specific integration
Info {
/// Integration name
@@ -212,13 +298,39 @@ pub enum IntegrationCommands {
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub enum HardwareCommands {
/// Enumerate USB devices (VID/PID) and show known boards
#[command(long_about = "\
Enumerate USB devices and show known boards.
Scans connected USB devices by VID/PID and matches them against \
known development boards (STM32 Nucleo, Arduino, ESP32).
Examples:
zeroclaw hardware discover")]
Discover,
/// Introspect a device by path (e.g. /dev/ttyACM0)
#[command(long_about = "\
Introspect a device by its serial or device path.
Opens the specified device path and queries for board information, \
firmware version, and supported capabilities.
Examples:
zeroclaw hardware introspect /dev/ttyACM0
zeroclaw hardware introspect COM3")]
Introspect {
/// Serial or device path
path: String,
},
/// Get chip info via USB (probe-rs over ST-Link). No firmware needed on target.
#[command(long_about = "\
Get chip info via USB using probe-rs over ST-Link.
Queries the target MCU directly through the debug probe without \
requiring any firmware on the target board.
Examples:
zeroclaw hardware info
zeroclaw hardware info --chip STM32F401RETx")]
Info {
/// Chip name (e.g. STM32F401RETx). Default: STM32F401RETx for Nucleo-F401RE
#[arg(long, default_value = "STM32F401RETx")]
@@ -232,6 +344,19 @@ pub enum PeripheralCommands {
/// List configured peripherals
List,
/// Add a peripheral (board path, e.g. nucleo-f401re /dev/ttyACM0)
#[command(long_about = "\
Add a peripheral by board type and transport path.
Registers a hardware board so the agent can use its tools (GPIO, \
sensors, actuators). Use 'native' as path for local GPIO on \
single-board computers like Raspberry Pi.
Supported boards: nucleo-f401re, rpi-gpio, esp32, arduino-uno.
Examples:
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0
zeroclaw peripheral add rpi-gpio native
zeroclaw peripheral add esp32 /dev/ttyUSB0")]
Add {
/// Board type (nucleo-f401re, rpi-gpio, esp32)
board: String,
@@ -239,6 +364,16 @@ pub enum PeripheralCommands {
path: String,
},
/// Flash ZeroClaw firmware to Arduino (creates .ino, installs arduino-cli if needed, uploads)
#[command(long_about = "\
Flash ZeroClaw firmware to an Arduino board.
Generates the .ino sketch, installs arduino-cli if it is not \
already available, compiles, and uploads the firmware.
Examples:
zeroclaw peripheral flash
zeroclaw peripheral flash --port /dev/cu.usbmodem12345
zeroclaw peripheral flash -p COM3")]
Flash {
/// Serial port (e.g. /dev/cu.usbmodem12345). If omitted, uses first arduino-uno from config.
#[arg(short, long)]


@@ -39,6 +39,14 @@ use serde::{Deserialize, Serialize};
use tracing::{info, warn};
use tracing_subscriber::{fmt, EnvFilter};
fn parse_temperature(s: &str) -> std::result::Result<f64, String> {
let t: f64 = s.parse().map_err(|e| format!("{e}"))?;
if !(0.0..=2.0).contains(&t) {
return Err("temperature must be between 0.0 and 2.0".to_string());
}
Ok(t)
}
mod agent; mod agent;
mod approval; mod approval;
mod auth; mod auth;
@ -58,6 +66,7 @@ mod identity;
mod integrations; mod integrations;
mod memory; mod memory;
mod migration; mod migration;
mod multimodal;
mod observability; mod observability;
mod onboard; mod onboard;
mod peripherals; mod peripherals;
@ -95,6 +104,8 @@ enum ServiceCommands {
Start, Start,
/// Stop daemon service /// Stop daemon service
Stop, Stop,
/// Restart daemon service to apply latest config
Restart,
/// Check daemon service status /// Check daemon service status
Status, Status,
/// Uninstall daemon service unit /// Uninstall daemon service unit
@ -120,13 +131,26 @@ enum Commands {
/// Provider name (used in quick mode, default: openrouter) /// Provider name (used in quick mode, default: openrouter)
#[arg(long)] #[arg(long)]
provider: Option<String>, provider: Option<String>,
/// Model ID override (used in quick mode)
#[arg(long)]
model: Option<String>,
/// Memory backend (sqlite, lucid, markdown, none) - used in quick mode, default: sqlite /// Memory backend (sqlite, lucid, markdown, none) - used in quick mode, default: sqlite
#[arg(long)] #[arg(long)]
memory: Option<String>, memory: Option<String>,
}, },
/// Start the AI agent loop /// Start the AI agent loop
#[command(long_about = "\
Start the AI agent loop.
Launches an interactive chat session with the configured AI provider. \
Use --message for single-shot queries without entering interactive mode.
Examples:
zeroclaw agent # interactive session
zeroclaw agent -m \"Summarize today's logs\" # single message
zeroclaw agent -p anthropic --model claude-sonnet-4-20250514
zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0")]
Agent { Agent {
/// Single message mode (don't enter interactive mode) /// Single message mode (don't enter interactive mode)
#[arg(short, long)] #[arg(short, long)]
@ -141,7 +165,7 @@ enum Commands {
model: Option<String>, model: Option<String>,
/// Temperature (0.0 - 2.0) /// Temperature (0.0 - 2.0)
#[arg(short, long, default_value = "0.7")] #[arg(short, long, default_value = "0.7", value_parser = parse_temperature)]
temperature: f64, temperature: f64,
/// Attach a peripheral (board:path, e.g. nucleo-f401re:/dev/ttyACM0) /// Attach a peripheral (board:path, e.g. nucleo-f401re:/dev/ttyACM0)
@ -150,6 +174,18 @@ enum Commands {
}, },
/// Start the gateway server (webhooks, websockets) /// Start the gateway server (webhooks, websockets)
#[command(long_about = "\
Start the gateway server (webhooks, websockets).
Runs the HTTP/WebSocket gateway that accepts incoming webhook events \
and WebSocket connections. Bind address defaults to the values in \
your config file (gateway.host / gateway.port).
Examples:
zeroclaw gateway # use config defaults
zeroclaw gateway -p 8080 # listen on port 8080
zeroclaw gateway --host 0.0.0.0 # bind to all interfaces
zeroclaw gateway -p 0 # random available port")]
Gateway { Gateway {
/// Port to listen on (use 0 for random available port); defaults to config gateway.port /// Port to listen on (use 0 for random available port); defaults to config gateway.port
#[arg(short, long)] #[arg(short, long)]
@ -161,6 +197,21 @@ enum Commands {
}, },
/// Start long-running autonomous runtime (gateway + channels + heartbeat + scheduler) /// Start long-running autonomous runtime (gateway + channels + heartbeat + scheduler)
#[command(long_about = "\
Start the long-running autonomous daemon.
Launches the full ZeroClaw runtime: gateway server, all configured \
channels (Telegram, Discord, Slack, etc.), heartbeat monitor, and \
the cron scheduler. This is the recommended way to run ZeroClaw in \
production or as an always-on assistant.
Use 'zeroclaw service install' to register the daemon as an OS \
service (systemd/launchd) for auto-start on boot.
Examples:
zeroclaw daemon # use config defaults
zeroclaw daemon -p 9090 # gateway on port 9090
zeroclaw daemon --host 127.0.0.1 # localhost only")]
Daemon { Daemon {
/// Port to listen on (use 0 for random available port); defaults to config gateway.port /// Port to listen on (use 0 for random available port); defaults to config gateway.port
#[arg(short, long)] #[arg(short, long)]
@ -187,6 +238,25 @@ enum Commands {
Status, Status,
/// Configure and manage scheduled tasks /// Configure and manage scheduled tasks
#[command(long_about = "\
Configure and manage scheduled tasks.
Schedule recurring, one-shot, or interval-based tasks using cron \
expressions, RFC 3339 timestamps, durations, or fixed intervals.
Cron expressions use the standard 5-field format: \
'min hour day month weekday'. Timezones default to UTC; \
override with --tz and an IANA timezone name.
Examples:
zeroclaw cron list
zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York
zeroclaw cron add '*/30 * * * *' 'Check system health'
zeroclaw cron add-at 2025-01-15T14:00:00Z 'Send reminder'
zeroclaw cron add-every 60000 'Ping heartbeat'
zeroclaw cron once 30m 'Run backup in 30 minutes'
zeroclaw cron pause <task-id>
zeroclaw cron update <task-id> --expression '0 8 * * *' --tz Europe/London")]
Cron {
#[command(subcommand)]
cron_command: CronCommands,
@@ -202,6 +272,19 @@ enum Commands {
Providers,
/// Manage channels (telegram, discord, slack)
#[command(long_about = "\
Manage communication channels.
Add, remove, list, and health-check channels that connect ZeroClaw \
to messaging platforms. Supported channel types: telegram, discord, \
slack, whatsapp, matrix, imessage, email.
Examples:
zeroclaw channel list
zeroclaw channel doctor
zeroclaw channel add telegram '{\"bot_token\":\"...\",\"name\":\"my-bot\"}'
zeroclaw channel remove my-bot
zeroclaw channel bind-telegram zeroclaw_user")]
Channel {
#[command(subcommand)]
channel_command: ChannelCommands,
@@ -232,16 +315,62 @@ enum Commands {
},
/// Discover and introspect USB hardware
#[command(long_about = "\
Discover and introspect USB hardware.
Enumerate connected USB devices, identify known development boards \
(STM32 Nucleo, Arduino, ESP32), and retrieve chip information via \
probe-rs / ST-Link.
Examples:
zeroclaw hardware discover
zeroclaw hardware introspect /dev/ttyACM0
zeroclaw hardware info --chip STM32F401RETx")]
Hardware {
#[command(subcommand)]
hardware_command: zeroclaw::HardwareCommands,
},
/// Manage hardware peripherals (STM32, RPi GPIO, etc.)
#[command(long_about = "\
Manage hardware peripherals.
Add, list, flash, and configure hardware boards that expose tools \
to the agent (GPIO, sensors, actuators). Supported boards: \
nucleo-f401re, rpi-gpio, esp32, arduino-uno.
Examples:
zeroclaw peripheral list
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0
zeroclaw peripheral add rpi-gpio native
zeroclaw peripheral flash --port /dev/cu.usbmodem12345
zeroclaw peripheral flash-nucleo")]
Peripheral {
#[command(subcommand)]
peripheral_command: zeroclaw::PeripheralCommands,
},
/// Manage configuration
#[command(long_about = "\
Manage ZeroClaw configuration.
Inspect and export configuration settings. Use 'schema' to dump \
the full JSON Schema for the config file, which documents every \
available key, type, and default value.
Examples:
zeroclaw config schema # print JSON Schema to stdout
zeroclaw config schema > schema.json")]
Config {
#[command(subcommand)]
config_command: ConfigCommands,
},
}
#[derive(Subcommand, Debug)]
enum ConfigCommands {
/// Dump the full configuration JSON Schema to stdout
Schema,
}
#[derive(Subcommand, Debug)]
@@ -381,6 +510,23 @@ enum CronCommands {
/// Task ID
id: String,
},
/// Update a scheduled task
Update {
/// Task ID
id: String,
/// New cron expression
#[arg(long)]
expression: Option<String>,
/// New IANA timezone
#[arg(long)]
tz: Option<String>,
/// New command to run
#[arg(long)]
command: Option<String>,
/// New job name
#[arg(long)]
name: Option<String>,
},
/// Pause a scheduled task
Pause {
/// Task ID
@@ -452,9 +598,9 @@ enum ChannelCommands {
enum SkillCommands {
/// List installed skills
List,
/// Install a skill from a git URL (HTTPS/SSH) or local path
Install {
/// Git URL (HTTPS/SSH) or local path
source: String,
},
/// Remove an installed skill
@@ -503,6 +649,7 @@ async fn main() -> Result<()> {
channels_only,
api_key,
provider,
model,
memory,
} = &cli.command
{
@@ -510,25 +657,30 @@ async fn main() -> Result<()> {
let channels_only = *channels_only;
let api_key = api_key.clone();
let provider = provider.clone();
let model = model.clone();
let memory = memory.clone();
if interactive && channels_only {
bail!("Use either --interactive or --channels-only, not both");
}
if channels_only
&& (api_key.is_some() || provider.is_some() || model.is_some() || memory.is_some())
{
bail!("--channels-only does not accept --api-key, --provider, --model, or --memory");
}
let config = if channels_only {
onboard::run_channels_repair_wizard().await
} else if interactive {
onboard::run_wizard().await
} else {
onboard::run_quick_setup(
api_key.as_deref(),
provider.as_deref(),
model.as_deref(),
memory.as_deref(),
)
.await
}?;
// Auto-start channels if user said yes during wizard
if std::env::var("ZEROCLAW_AUTOSTART_CHANNELS").as_deref() == Ok("1") {
channels::start_channels(config).await?;
@@ -537,7 +689,7 @@ async fn main() -> Result<()> {
}
// All other commands need config loaded first
let mut config = Config::load_or_init().await?;
config.apply_env_overrides();
match cli.command {
@@ -725,16 +877,14 @@ async fn main() -> Result<()> {
Commands::Channel { channel_command } => match channel_command {
ChannelCommands::Start => channels::start_channels(config).await,
ChannelCommands::Doctor => channels::doctor_channels(config).await,
other => channels::handle_command(other, &config).await,
},
Commands::Integrations {
integration_command,
} => integrations::handle_command(integration_command, &config),
Commands::Skills { skill_command } => skills::handle_command(skill_command, &config),
Commands::Migrate { migrate_command } => {
migration::handle_command(migrate_command, &config).await
@@ -747,8 +897,19 @@ async fn main() -> Result<()> {
}
Commands::Peripheral { peripheral_command } => {
peripherals::handle_command(peripheral_command.clone(), &config).await
}
Commands::Config { config_command } => match config_command {
ConfigCommands::Schema => {
let schema = schemars::schema_for!(config::Config);
println!(
"{}",
serde_json::to_string_pretty(&schema).expect("failed to serialize JSON Schema")
);
Ok(())
}
},
}
}
@@ -934,12 +1095,11 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
let account_id =
extract_openai_account_id_for_profile(&token_set.access_token);
auth_service.store_openai_tokens(&profile, token_set, account_id, true)?;
clear_pending_openai_login(config);
println!("Saved profile {profile}");
println!("Active profile for openai-codex: {profile}");
return Ok(());
}
Err(e) => {
@@ -985,11 +1145,11 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
auth::openai_oauth::exchange_code_for_tokens(&client, &code, &pkce).await?;
let account_id = extract_openai_account_id_for_profile(&token_set.access_token);
auth_service.store_openai_tokens(&profile, token_set, account_id, true)?;
clear_pending_openai_login(config);
println!("Saved profile {profile}");
println!("Active profile for openai-codex: {profile}");
Ok(())
}
@@ -1038,11 +1198,11 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
auth::openai_oauth::exchange_code_for_tokens(&client, &code, &pkce).await?;
let account_id = extract_openai_account_id_for_profile(&token_set.access_token);
auth_service.store_openai_tokens(&profile, token_set, account_id, true)?;
clear_pending_openai_login(config);
println!("Saved profile {profile}");
println!("Active profile for openai-codex: {profile}");
Ok(())
}
@@ -1068,10 +1228,9 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
kind.as_metadata_value().to_string(),
);
auth_service.store_provider_token(&provider, &profile, &token, metadata, true)?;
println!("Saved profile {profile}");
println!("Active profile for {provider}: {profile}");
Ok(())
}
@@ -1089,10 +1248,9 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
kind.as_metadata_value().to_string(),
);
auth_service.store_provider_token(&provider, &profile, &token, metadata, true)?;
println!("Saved profile {profile}");
println!("Active profile for {provider}: {profile}");
Ok(())
}
@@ -1131,8 +1289,8 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
AuthCommands::Use { provider, profile } => {
let provider = auth::normalize_provider(&provider)?;
auth_service.set_active_profile(&provider, &profile)?;
println!("Active profile for {provider}: {profile}");
Ok(())
}
@@ -1173,15 +1331,15 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
marker,
id,
profile.kind,
crate::security::redact(profile.account_id.as_deref().unwrap_or("unknown")),
format_expiry(profile)
);
}
println!();
println!("Active profiles:");
for (provider, profile_id) in &data.active_profiles {
println!(" {provider}: {profile_id}");
}
Ok(())
@@ -1192,10 +1350,61 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
#[cfg(test)]
mod tests {
use super::*;
use clap::{CommandFactory, Parser};
#[test]
fn cli_definition_has_no_flag_conflicts() {
Cli::command().debug_assert();
}
#[test]
fn onboard_help_includes_model_flag() {
let cmd = Cli::command();
let onboard = cmd
.get_subcommands()
.find(|subcommand| subcommand.get_name() == "onboard")
.expect("onboard subcommand must exist");
let has_model_flag = onboard
.get_arguments()
.any(|arg| arg.get_id().as_str() == "model" && arg.get_long() == Some("model"));
assert!(
has_model_flag,
"onboard help should include --model for quick setup overrides"
);
}
#[test]
fn onboard_cli_accepts_model_provider_and_api_key_in_quick_mode() {
let cli = Cli::try_parse_from([
"zeroclaw",
"onboard",
"--provider",
"openrouter",
"--model",
"custom-model-946",
"--api-key",
"sk-issue946",
])
.expect("quick onboard invocation should parse");
match cli.command {
Commands::Onboard {
interactive,
channels_only,
api_key,
provider,
model,
..
} => {
assert!(!interactive);
assert!(!channels_only);
assert_eq!(provider.as_deref(), Some("openrouter"));
assert_eq!(model.as_deref(), Some("custom-model-946"));
assert_eq!(api_key.as_deref(), Some("sk-issue946"));
}
other => panic!("expected onboard command, got {other:?}"),
}
}
}
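The cron `long_about` above documents the standard 5-field expression format (`min hour day month weekday`). As a minimal std-only sketch of that field structure (illustrative only — ZeroClaw's scheduler has its own parser), a field-count sanity check looks like:

```rust
// Illustrative sketch, not ZeroClaw's parser: a standard cron expression
// has exactly five whitespace-separated fields.
fn cron_field_count(expr: &str) -> usize {
    expr.split_whitespace().count()
}

fn main() {
    assert_eq!(cron_field_count("0 9 * * 1-5"), 5);
    assert_eq!(cron_field_count("*/30 * * * *"), 5);
}
```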


@@ -3,12 +3,14 @@
// Splits on markdown headings and paragraph boundaries, respecting
// a max token limit per chunk. Preserves heading context.
use std::rc::Rc;
/// A single chunk of text with metadata.
#[derive(Debug, Clone)]
pub struct Chunk {
pub index: usize,
pub content: String,
pub heading: Option<Rc<str>>,
}
/// Split markdown text into chunks, each under `max_tokens` approximate tokens.
@@ -26,9 +28,10 @@ pub fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<Chunk> {
let max_chars = max_tokens * 4;
let sections = split_on_headings(text);
let mut chunks = Vec::with_capacity(sections.len());
for (heading, body) in sections {
let heading: Option<Rc<str>> = heading.map(Rc::from);
let full = if let Some(ref h) = heading {
format!("{h}\n{body}")
} else {
@@ -45,7 +48,7 @@ pub fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<Chunk> {
// Split on paragraphs (blank lines)
let paragraphs = split_on_blank_lines(&body);
let mut current = heading
.as_deref()
.map_or_else(String::new, |h| format!("{h}\n"));
for para in paragraphs {
@@ -56,7 +59,7 @@ pub fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<Chunk> {
heading: heading.clone(),
});
current = heading
.as_deref()
.map_or_else(String::new, |h| format!("{h}\n"));
}
@@ -69,7 +72,7 @@ pub fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<Chunk> {
heading: heading.clone(),
});
current = heading
.as_deref()
.map_or_else(String::new, |h| format!("{h}\n"));
}
for line_chunk in split_on_lines(&para, max_chars) {
@@ -115,8 +118,7 @@ fn split_on_headings(text: &str) -> Vec<(Option<String>, String)> {
for line in text.lines() {
if line.starts_with("# ") || line.starts_with("## ") || line.starts_with("### ") {
if !current_body.trim().is_empty() || current_heading.is_some() {
sections.push((current_heading.take(), std::mem::take(&mut current_body)));
}
current_heading = Some(line.to_string());
} else {
@@ -140,8 +142,7 @@ fn split_on_blank_lines(text: &str) -> Vec<String> {
for line in text.lines() {
if line.trim().is_empty() {
if !current.trim().is_empty() {
paragraphs.push(std::mem::take(&mut current));
}
} else {
current.push_str(line);
@@ -158,13 +159,12 @@
/// Split text on line boundaries to fit within `max_chars`
fn split_on_lines(text: &str, max_chars: usize) -> Vec<String> {
let mut chunks = Vec::with_capacity(text.len() / max_chars.max(1) + 1);
let mut current = String::new();
for line in text.lines() {
if current.len() + line.len() + 1 > max_chars && !current.is_empty() {
chunks.push(std::mem::take(&mut current));
}
current.push_str(line);
current.push('\n');
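Two refactors in this chunker are worth calling out: `Rc<str>` turns the per-chunk heading clone into a reference-count bump, and `std::mem::take` replaces the clone-then-clear pattern with a move. A self-contained sketch of both:

```rust
use std::rc::Rc;

fn main() {
    // Rc<str>: clones share one allocation instead of copying the string.
    let heading: Rc<str> = Rc::from("# Intro");
    let per_chunk = Rc::clone(&heading);
    assert_eq!(Rc::strong_count(&heading), 2);
    assert_eq!(&*per_chunk, "# Intro");

    // std::mem::take: move the buffer out, leaving an empty String behind,
    // instead of cloning it and then clearing the original.
    let mut current = String::from("paragraph text");
    let pushed = std::mem::take(&mut current);
    assert_eq!(pushed, "paragraph text");
    assert!(current.is_empty());
}
```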


@@ -172,6 +172,15 @@ pub fn create_embedding_provider(
dims,
))
}
"openrouter" => {
let key = api_key.unwrap_or("");
Box::new(OpenAiEmbedding::new(
"https://openrouter.ai/api/v1",
key,
model,
dims,
))
}
name if name.starts_with("custom:") => {
let base_url = name.strip_prefix("custom:").unwrap_or("");
let key = api_key.unwrap_or("");
@@ -212,6 +221,18 @@ mod tests {
assert_eq!(p.dimensions(), 1536);
}
#[test]
fn factory_openrouter() {
let p = create_embedding_provider(
"openrouter",
Some("sk-or-test"),
"openai/text-embedding-3-small",
1536,
);
assert_eq!(p.name(), "openai"); // uses OpenAiEmbedding internally
assert_eq!(p.dimensions(), 1536);
}
#[test]
fn factory_custom_url() {
let p = create_embedding_provider("custom:http://localhost:1234", None, "model", 768);
@@ -281,6 +302,20 @@ mod tests {
assert_eq!(p.dimensions(), 384);
}
#[test]
fn embeddings_url_openrouter() {
let p = OpenAiEmbedding::new(
"https://openrouter.ai/api/v1",
"key",
"openai/text-embedding-3-small",
1536,
);
assert_eq!(
p.embeddings_url(),
"https://openrouter.ai/api/v1/embeddings"
);
}
#[test]
fn embeddings_url_standard_openai() {
let p = OpenAiEmbedding::new("https://api.openai.com", "key", "model", 1536);


@@ -608,7 +608,7 @@ exit 1
.iter()
.any(|e| e.content.contains("Rust should stay local-first")));
let context_calls = tokio::fs::read_to_string(&marker).await.unwrap_or_default();
assert!(
context_calls.trim().is_empty(),
"Expected local-hit short-circuit; got calls: {context_calls}"
@@ -669,7 +669,7 @@ exit 1
assert!(first.is_empty());
assert!(second.is_empty());
let calls = tokio::fs::read_to_string(&marker).await.unwrap_or_default();
assert_eq!(calls.lines().count(), 1);
}
}


@@ -229,7 +229,6 @@ impl Memory for MarkdownMemory {
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
fn temp_workspace() -> (TempDir, MarkdownMemory) {
@@ -256,7 +255,7 @@ mod tests {
mem.store("pref", "User likes Rust", MemoryCategory::Core, None)
.await
.unwrap();
let content = fs::read_to_string(mem.core_path()).await.unwrap();
assert!(content.contains("User likes Rust"));
}
@@ -267,7 +266,7 @@ mod tests {
.await
.unwrap();
let path = mem.daily_path();
let content = fs::read_to_string(path).await.unwrap();
assert!(content.contains("Finished tests"));
}


@@ -27,7 +27,7 @@ pub use traits::Memory;
#[allow(unused_imports)]
pub use traits::{MemoryCategory, MemoryEntry};
use crate::config::{EmbeddingRouteConfig, MemoryConfig, StorageProviderConfig};
use anyhow::Context;
use std::path::Path;
use std::sync::Arc;
@ -75,13 +75,101 @@ pub fn effective_memory_backend_name(
memory_backend.trim().to_ascii_lowercase()
}
/// Legacy auto-save key used for model-authored assistant summaries.
/// These entries are treated as untrusted context and should not be re-injected.
pub fn is_assistant_autosave_key(key: &str) -> bool {
let normalized = key.trim().to_ascii_lowercase();
normalized == "assistant_resp" || normalized.starts_with("assistant_resp_")
}
#[derive(Clone, PartialEq, Eq)]
struct ResolvedEmbeddingConfig {
provider: String,
model: String,
dimensions: usize,
api_key: Option<String>,
}
impl std::fmt::Debug for ResolvedEmbeddingConfig {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ResolvedEmbeddingConfig")
.field("provider", &self.provider)
.field("model", &self.model)
.field("dimensions", &self.dimensions)
.field("api_key", &self.api_key.as_ref().map(|_| "[REDACTED]"))
.finish()
}
}
fn resolve_embedding_config(
config: &MemoryConfig,
embedding_routes: &[EmbeddingRouteConfig],
api_key: Option<&str>,
) -> ResolvedEmbeddingConfig {
let fallback_api_key = api_key
.map(str::trim)
.filter(|value| !value.is_empty())
.map(str::to_string);
let fallback = ResolvedEmbeddingConfig {
provider: config.embedding_provider.trim().to_string(),
model: config.embedding_model.trim().to_string(),
dimensions: config.embedding_dimensions,
api_key: fallback_api_key.clone(),
};
let Some(hint) = config
.embedding_model
.strip_prefix("hint:")
.map(str::trim)
.filter(|value| !value.is_empty())
else {
return fallback;
};
let Some(route) = embedding_routes
.iter()
.find(|route| route.hint.trim() == hint)
else {
tracing::warn!(
hint,
"Unknown embedding route hint; falling back to [memory] embedding settings"
);
return fallback;
};
let provider = route.provider.trim();
let model = route.model.trim();
let dimensions = route.dimensions.unwrap_or(config.embedding_dimensions);
if provider.is_empty() || model.is_empty() || dimensions == 0 {
tracing::warn!(
hint,
"Invalid embedding route configuration; falling back to [memory] embedding settings"
);
return fallback;
}
let routed_api_key = route
.api_key
.as_deref()
.map(str::trim)
.filter(|value: &&str| !value.is_empty())
.map(|value| value.to_string());
ResolvedEmbeddingConfig {
provider: provider.to_string(),
model: model.to_string(),
dimensions,
api_key: routed_api_key.or(fallback_api_key),
}
}
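`resolve_embedding_config` above keys routing off a `hint:` prefix in the model string, falling back to the base settings when the hint is unknown or the route is invalid. A stripped-down std-only sketch of that lookup (names illustrative, not the real types):

```rust
// Illustrative: "hint:<name>" selects a named route's model; anything else,
// or an unknown hint, falls back to the base model.
fn resolve<'a>(model: &'a str, routes: &'a [(&'a str, &'a str)], fallback: &'a str) -> &'a str {
    model
        .strip_prefix("hint:")
        .map(str::trim)
        .filter(|h| !h.is_empty())
        .and_then(|hint| {
            routes
                .iter()
                .find(|(name, _)| *name == hint)
                .map(|(_, routed)| *routed)
        })
        .unwrap_or(fallback)
}

fn main() {
    let routes = [("semantic", "custom-embed-v2")];
    assert_eq!(resolve("hint:semantic", &routes, "base-model"), "custom-embed-v2");
    assert_eq!(resolve("hint:unknown", &routes, "base-model"), "base-model");
    assert_eq!(resolve("base-model", &routes, "base-model"), "base-model");
}
```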
/// Factory: create the right memory backend from config
pub fn create_memory(
config: &MemoryConfig,
workspace_dir: &Path,
api_key: Option<&str>,
) -> anyhow::Result<Box<dyn Memory>> {
create_memory_with_storage_and_routes(config, &[], None, workspace_dir, api_key)
}
/// Factory: create memory with optional storage-provider override.
@@ -90,9 +178,21 @@ pub fn create_memory_with_storage(
storage_provider: Option<&StorageProviderConfig>,
workspace_dir: &Path,
api_key: Option<&str>,
) -> anyhow::Result<Box<dyn Memory>> {
create_memory_with_storage_and_routes(config, &[], storage_provider, workspace_dir, api_key)
}
/// Factory: create memory with optional storage-provider override and embedding routes.
pub fn create_memory_with_storage_and_routes(
config: &MemoryConfig,
embedding_routes: &[EmbeddingRouteConfig],
storage_provider: Option<&StorageProviderConfig>,
workspace_dir: &Path,
api_key: Option<&str>,
) -> anyhow::Result<Box<dyn Memory>> {
let backend_name = effective_memory_backend_name(&config.backend, storage_provider);
let backend_kind = classify_memory_backend(&backend_name);
let resolved_embedding = resolve_embedding_config(config, embedding_routes, api_key);
// Best-effort memory hygiene/retention pass (throttled by state file).
if let Err(e) = hygiene::run_if_due(config, workspace_dir) {
@@ -137,14 +237,14 @@
fn build_sqlite_memory(
config: &MemoryConfig,
workspace_dir: &Path,
resolved_embedding: &ResolvedEmbeddingConfig,
) -> anyhow::Result<SqliteMemory> {
let embedder: Arc<dyn embeddings::EmbeddingProvider> =
Arc::from(embeddings::create_embedding_provider(
&resolved_embedding.provider,
resolved_embedding.api_key.as_deref(),
&resolved_embedding.model,
resolved_embedding.dimensions,
));
#[allow(clippy::cast_possible_truncation)]
@@ -184,7 +284,7 @@
create_memory_with_builders(
&backend_name,
workspace_dir,
|| build_sqlite_memory(config, workspace_dir, &resolved_embedding),
|| build_postgres_memory(storage_provider),
"",
)
@@ -247,7 +347,7 @@ pub fn create_response_cache(config: &MemoryConfig, workspace_dir: &Path) -> Opt
#[cfg(test)]
mod tests {
use super::*;
use crate::config::{EmbeddingRouteConfig, StorageProviderConfig};
use tempfile::TempDir;
#[test]
@@ -261,6 +361,15 @@
assert_eq!(mem.name(), "sqlite");
}
#[test]
fn assistant_autosave_key_detection_matches_legacy_patterns() {
assert!(is_assistant_autosave_key("assistant_resp"));
assert!(is_assistant_autosave_key("assistant_resp_1234"));
assert!(is_assistant_autosave_key("ASSISTANT_RESP_abcd"));
assert!(!is_assistant_autosave_key("assistant_response"));
assert!(!is_assistant_autosave_key("user_msg_1234"));
}
#[test]
fn factory_markdown() {
let tmp = TempDir::new().unwrap();
@@ -353,4 +462,102 @@
.expect("postgres without db_url should be rejected");
assert!(error.to_string().contains("db_url"));
}
#[test]
fn resolve_embedding_config_uses_base_config_when_model_is_not_hint() {
let cfg = MemoryConfig {
embedding_provider: "openai".into(),
embedding_model: "text-embedding-3-small".into(),
embedding_dimensions: 1536,
..MemoryConfig::default()
};
        let resolved = resolve_embedding_config(&cfg, &[], Some("base-key"));
        assert_eq!(
            resolved,
            ResolvedEmbeddingConfig {
                provider: "openai".into(),
                model: "text-embedding-3-small".into(),
                dimensions: 1536,
                api_key: Some("base-key".into()),
            }
        );
    }

    #[test]
    fn resolve_embedding_config_uses_matching_route_with_api_key_override() {
        let cfg = MemoryConfig {
            embedding_provider: "none".into(),
            embedding_model: "hint:semantic".into(),
            embedding_dimensions: 1536,
            ..MemoryConfig::default()
        };
        let routes = vec![EmbeddingRouteConfig {
            hint: "semantic".into(),
            provider: "custom:https://api.example.com/v1".into(),
            model: "custom-embed-v2".into(),
            dimensions: Some(1024),
            api_key: Some("route-key".into()),
        }];
        let resolved = resolve_embedding_config(&cfg, &routes, Some("base-key"));
        assert_eq!(
            resolved,
            ResolvedEmbeddingConfig {
                provider: "custom:https://api.example.com/v1".into(),
                model: "custom-embed-v2".into(),
                dimensions: 1024,
                api_key: Some("route-key".into()),
            }
        );
    }

    #[test]
    fn resolve_embedding_config_falls_back_when_hint_is_missing() {
        let cfg = MemoryConfig {
            embedding_provider: "openai".into(),
            embedding_model: "hint:semantic".into(),
            embedding_dimensions: 1536,
            ..MemoryConfig::default()
        };
        let resolved = resolve_embedding_config(&cfg, &[], Some("base-key"));
        assert_eq!(
            resolved,
            ResolvedEmbeddingConfig {
                provider: "openai".into(),
                model: "hint:semantic".into(),
                dimensions: 1536,
                api_key: Some("base-key".into()),
            }
        );
    }

    #[test]
    fn resolve_embedding_config_falls_back_when_route_is_invalid() {
        let cfg = MemoryConfig {
            embedding_provider: "openai".into(),
            embedding_model: "hint:semantic".into(),
            embedding_dimensions: 1536,
            ..MemoryConfig::default()
        };
        let routes = vec![EmbeddingRouteConfig {
            hint: "semantic".into(),
            provider: String::new(),
            model: "text-embedding-3-small".into(),
            dimensions: Some(0),
            api_key: None,
        }];
        let resolved = resolve_embedding_config(&cfg, &routes, Some("base-key"));
        assert_eq!(
            resolved,
            ResolvedEmbeddingConfig {
                provider: "openai".into(),
                model: "hint:semantic".into(),
                dimensions: 1536,
                api_key: Some("base-key".into()),
            }
        );
    }
}


@@ -30,24 +30,16 @@ impl PostgresMemory {
         validate_identifier(schema, "storage schema")?;
         validate_identifier(table, "storage table")?;

-        let mut config: postgres::Config = db_url
-            .parse()
-            .context("invalid PostgreSQL connection URL")?;
-        if let Some(timeout_secs) = connect_timeout_secs {
-            let bounded = timeout_secs.min(POSTGRES_CONNECT_TIMEOUT_CAP_SECS);
-            config.connect_timeout(Duration::from_secs(bounded));
-        }
-        let mut client = config
-            .connect(NoTls)
-            .context("failed to connect to PostgreSQL memory backend")?;
         let schema_ident = quote_identifier(schema);
         let table_ident = quote_identifier(table);
         let qualified_table = format!("{schema_ident}.{table_ident}");
-        Self::init_schema(&mut client, &schema_ident, &qualified_table)?;
+        let client = Self::initialize_client(
+            db_url.to_string(),
+            connect_timeout_secs,
+            schema_ident.clone(),
+            qualified_table.clone(),
+        )?;

         Ok(Self {
             client: Arc::new(Mutex::new(client)),
@@ -55,6 +47,40 @@ impl PostgresMemory {
         })
     }

+    fn initialize_client(
+        db_url: String,
+        connect_timeout_secs: Option<u64>,
+        schema_ident: String,
+        qualified_table: String,
+    ) -> Result<Client> {
+        let init_handle = std::thread::Builder::new()
+            .name("postgres-memory-init".to_string())
+            .spawn(move || -> Result<Client> {
+                let mut config: postgres::Config = db_url
+                    .parse()
+                    .context("invalid PostgreSQL connection URL")?;
+                if let Some(timeout_secs) = connect_timeout_secs {
+                    let bounded = timeout_secs.min(POSTGRES_CONNECT_TIMEOUT_CAP_SECS);
+                    config.connect_timeout(Duration::from_secs(bounded));
+                }
+                let mut client = config
+                    .connect(NoTls)
+                    .context("failed to connect to PostgreSQL memory backend")?;
+                Self::init_schema(&mut client, &schema_ident, &qualified_table)?;
+                Ok(client)
+            })
+            .context("failed to spawn PostgreSQL initializer thread")?;
+        let init_result = init_handle
+            .join()
+            .map_err(|_| anyhow::anyhow!("PostgreSQL initializer thread panicked"))?;
+        init_result
+    }
+
     fn init_schema(client: &mut Client, schema_ident: &str, qualified_table: &str) -> Result<()> {
         client.batch_execute(&format!(
             "
@@ -157,7 +183,7 @@ impl Memory for PostgresMemory {
         let key = key.to_string();
         let content = content.to_string();
         let category = Self::category_to_str(&category);
-        let session_id = session_id.map(str::to_string);
+        let sid = session_id.map(str::to_string);

         tokio::task::spawn_blocking(move || -> Result<()> {
             let now = Utc::now();
@@ -177,10 +203,7 @@ impl Memory for PostgresMemory {
             );

             let id = Uuid::new_v4().to_string();
-            client.execute(
-                &stmt,
-                &[&id, &key, &content, &category, &now, &now, &session_id],
-            )?;
+            client.execute(&stmt, &[&id, &key, &content, &category, &now, &now, &sid])?;
             Ok(())
         })
         .await?
@@ -195,7 +218,7 @@ impl Memory for PostgresMemory {
         let client = self.client.clone();
         let qualified_table = self.qualified_table.clone();
         let query = query.trim().to_string();
-        let session_id = session_id.map(str::to_string);
+        let sid = session_id.map(str::to_string);

         tokio::task::spawn_blocking(move || -> Result<Vec<MemoryEntry>> {
             let mut client = client.lock();
@@ -217,7 +240,7 @@ impl Memory for PostgresMemory {
             #[allow(clippy::cast_possible_wrap)]
             let limit_i64 = limit as i64;

-            let rows = client.query(&stmt, &[&query, &session_id, &limit_i64])?;
+            let rows = client.query(&stmt, &[&query, &sid, &limit_i64])?;
             rows.iter()
                 .map(Self::row_to_entry)
                 .collect::<Result<Vec<MemoryEntry>>>()
@@ -255,7 +278,7 @@ impl Memory for PostgresMemory {
         let client = self.client.clone();
         let qualified_table = self.qualified_table.clone();
         let category = category.map(Self::category_to_str);
-        let session_id = session_id.map(str::to_string);
+        let sid = session_id.map(str::to_string);

         tokio::task::spawn_blocking(move || -> Result<Vec<MemoryEntry>> {
             let mut client = client.lock();
@@ -270,7 +293,7 @@ impl Memory for PostgresMemory {
             );

             let category_ref = category.as_deref();
-            let session_ref = session_id.as_deref();
+            let session_ref = sid.as_deref();
             let rows = client.query(&stmt, &[&category_ref, &session_ref])?;
             rows.iter()
                 .map(Self::row_to_entry)
@@ -349,4 +372,22 @@ mod tests {
             MemoryCategory::Custom("custom_notes".into())
         );
     }
+
+    #[tokio::test(flavor = "current_thread")]
+    async fn new_does_not_panic_inside_tokio_runtime() {
+        let outcome = std::panic::catch_unwind(|| {
+            PostgresMemory::new(
+                "postgres://zeroclaw:password@127.0.0.1:1/zeroclaw",
+                "public",
+                "memories",
+                Some(1),
+            )
+        });
+        assert!(outcome.is_ok(), "PostgresMemory::new should not panic");
+        assert!(
+            outcome.unwrap().is_err(),
+            "PostgresMemory::new should return a connect error for an unreachable endpoint"
+        );
+    }
 }


@@ -452,7 +452,7 @@ impl Memory for SqliteMemory {
         let conn = self.conn.clone();
         let key = key.to_string();
         let content = content.to_string();
-        let session_id = session_id.map(String::from);
+        let sid = session_id.map(String::from);

         tokio::task::spawn_blocking(move || -> anyhow::Result<()> {
             let conn = conn.lock();
@@ -469,7 +469,7 @@ impl Memory for SqliteMemory {
                     embedding = excluded.embedding,
                     updated_at = excluded.updated_at,
                     session_id = excluded.session_id",
-                params![id, key, content, cat, embedding_bytes, now, now, session_id],
+                params![id, key, content, cat, embedding_bytes, now, now, sid],
             )?;
             Ok(())
         })
@@ -491,13 +491,13 @@ impl Memory for SqliteMemory {
         let conn = self.conn.clone();
         let query = query.to_string();
-        let session_id = session_id.map(String::from);
+        let sid = session_id.map(String::from);
         let vector_weight = self.vector_weight;
         let keyword_weight = self.keyword_weight;

         tokio::task::spawn_blocking(move || -> anyhow::Result<Vec<MemoryEntry>> {
             let conn = conn.lock();
-            let session_ref = session_id.as_deref();
+            let session_ref = sid.as_deref();

             // FTS5 BM25 keyword search
             let keyword_results = Self::fts5_search(&conn, &query, limit * 2).unwrap_or_default();
@@ -691,11 +691,11 @@ impl Memory for SqliteMemory {
         let conn = self.conn.clone();
         let category = category.cloned();
-        let session_id = session_id.map(String::from);
+        let sid = session_id.map(String::from);

         tokio::task::spawn_blocking(move || -> anyhow::Result<Vec<MemoryEntry>> {
             let conn = conn.lock();
-            let session_ref = session_id.as_deref();
+            let session_ref = sid.as_deref();

             let mut results = Vec::new();
             let row_mapper = |row: &rusqlite::Row| -> rusqlite::Result<MemoryEntry> {

Some files were not shown because too many files have changed in this diff.