Compare commits

..

14 commits

Author SHA1 Message Date
harald
a8afe0cbc1 cargo fmt
Some checks failed: all 24 workflow runs for this push were cancelled (CI Run, Sec Audit, Feature Matrix, Pub Docker Img, PR Label Policy Check, Test Benchmarks, Test E2E / Integration, Workflow Sanity).
2026-02-25 17:12:20 +01:00
harald
876635b0b3 fix: resolve all cargo clippy warnings
- daemon: Box::pin large future in heartbeat worker
- wizard: remove redundant match arms with identical bodies, fix stale
  test that expected venice to be unsupported
- proxy_config: allow clippy::option_option on intentional partial-update
  return type
- matrix: use String::new() instead of "".to_string()
- reliable: return expression directly instead of let-and-return
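As a hedged illustration (hypothetical functions, not the project's actual code), the `String::new()` and let-and-return bullets correspond to a cleanup like:

```rust
// Illustrative sketch only: hypothetical names, not zeroclaw source.

// Before: triggers clippy::let_and_return and builds "" via to_string().
fn render_before(ok: bool) -> String {
    let out = if ok { "ok".to_string() } else { "".to_string() };
    out
}

// After: return the expression directly; use String::new() for empties.
fn render_after(ok: bool) -> String {
    if ok { "ok".to_string() } else { String::new() }
}
```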

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 17:11:49 +01:00
harald
5cdf1b74f3 feat(tools): refactor pushover into conditional notify tool with Telegram fallback
Replace the always-registered PushoverTool with a NotifyTool that
auto-selects its backend at startup: Pushover if .env credentials exist,
otherwise Telegram (using bot_token + first allowed_users entry as
chat_id). If neither backend is available, the tool is not registered,
saving a tool slot and avoiding agent confusion.
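A minimal sketch of that startup selection, assuming credentials arrive as plain tuples (`NotifyBackend` and `select_backend` are hypothetical names standing in for the real types):

```rust
// Hypothetical sketch of the backend auto-selection described above.
#[derive(Debug, PartialEq)]
enum NotifyBackend {
    Pushover { token: String, user: String },
    Telegram { bot_token: String, chat_id: String },
}

fn select_backend(
    pushover: Option<(String, String)>,
    telegram: Option<(String, Vec<String>)>,
) -> Option<NotifyBackend> {
    // Prefer Pushover when its .env credentials exist.
    if let Some((token, user)) = pushover {
        return Some(NotifyBackend::Pushover { token, user });
    }
    // Fall back to Telegram: bot_token + first allowed_users entry as chat_id.
    if let Some((bot_token, users)) = telegram {
        if let Some(chat_id) = users.into_iter().next() {
            return Some(NotifyBackend::Telegram { bot_token, chat_id });
        }
    }
    // Neither backend configured: the tool is simply not registered.
    None
}
```

Returning `None` is what keeps the tool unregistered when neither credential set is present.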

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 17:01:28 +01:00
harald
6e8c799af5 chore: apply cargo fmt formatting
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 16:49:47 +01:00
harald
7ca71f500a feat(channels): add /clear, /system, /status, /help slash commands + Telegram menu
Add four new runtime commands for Telegram/Discord channels:
- /clear — clears per-sender conversation history
- /system — shows current system prompt (truncated to 2000 chars)
- /status — shows provider, model, temperature, tools, memory, limits
- /help — lists all available slash commands with descriptions

Register commands with Telegram's setMyCommands API on listener startup
so they appear in the bot's "/" autocomplete menu.

Includes 9 new tests covering parsing, bot-mention suffix stripping,
and handler behavior for /clear, /help, and /status.
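The parsing described above can be sketched roughly like this (hypothetical helper; the real signatures may differ):

```rust
// Hypothetical sketch: recognize a slash command and strip a Telegram
// bot-mention suffix such as "/status@MyBot" in group chats.
fn parse_slash_command(text: &str) -> Option<&str> {
    let first = text.trim().split_whitespace().next()?;
    let cmd = first.strip_prefix('/')?;
    // "/status@MyBot" -> "status"
    let cmd = cmd.split('@').next().unwrap_or(cmd);
    matches!(cmd, "clear" | "system" | "status" | "help").then_some(cmd)
}
```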

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 16:49:24 +01:00
harald
0e5215e1ef fix(security): allow 2>/dev/null and 2>&1 in shell commands, add policy logging
The redirect blocker was rejecting safe stderr patterns like
2>/dev/null and 2>&1. Strip these before operator checks so they
don't trigger the generic > or & blockers.

Also adds debug/trace logging to all early rejection paths in
is_command_allowed for audit visibility.
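A rough sketch of that stripping order (hypothetical helpers; the real policy checks more operators than shown):

```rust
// Hypothetical sketch: remove the safe stderr redirect patterns before
// the generic '>' and '&' blockers run, as the commit describes.
fn strip_safe_stderr_redirects(cmd: &str) -> String {
    cmd.replace("2>/dev/null", " ").replace("2>&1", " ")
}

fn has_blocked_operator(cmd: &str) -> bool {
    let sanitized = strip_safe_stderr_redirects(cmd);
    sanitized.contains('>') || sanitized.contains('&')
}
```

With the stripping in place, `ls -la 2>/dev/null` passes while genuine redirections and backgrounding are still rejected.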

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 13:45:01 +01:00
harald
5b896f3378 feat(observability): add debug/trace logging to shell tool and command policy
Shell tool now logs at debug level: command invocations, policy
allow/block decisions with reasons, exit codes, and output sizes.
Trace level adds full stdout/stderr content and risk assessment details.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 13:13:19 +01:00
harald
05e1102af9 feat(security): support wildcard "*" in allowed_commands
Allow `allowed_commands = ["*"]` to bypass the command allowlist check.
Hardcoded safety blocks (subshell operators, redirections, tee,
background &) still apply regardless of wildcard.
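Sketched under the assumption that the allowlist is a simple slice (hypothetical function name); the hardcoded operator blocks run separately and are unaffected:

```rust
// Hypothetical sketch: "*" bypasses the per-program allowlist check only.
fn command_allowlisted(allowed: &[&str], program: &str) -> bool {
    allowed.contains(&"*") || allowed.contains(&program)
}
```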

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 12:21:04 +01:00
harald
a7590f9fdc fix(channel): merge delivery instructions into initial system message
Some models (e.g. Qwen 3.5) enforce that all system messages must appear
at the beginning of the conversation. The Telegram delivery instructions
were appended as a separate system message after the user message,
causing a Jinja template error. Merge them into the first system message
instead.
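A minimal sketch of the merge, with a hypothetical `Msg` type standing in for the real message struct:

```rust
// Hypothetical sketch: append delivery instructions to the first system
// message instead of emitting a second system message mid-conversation.
#[derive(Clone, Debug, PartialEq)]
struct Msg {
    role: &'static str,
    content: String,
}

fn merge_delivery_instructions(mut msgs: Vec<Msg>, instructions: &str) -> Vec<Msg> {
    if let Some(idx) = msgs.iter().position(|m| m.role == "system") {
        msgs[idx].content.push_str("\n\n");
        msgs[idx].content.push_str(instructions);
    } else {
        // No system message yet: create one at the front.
        msgs.insert(0, Msg { role: "system", content: instructions.to_string() });
    }
    msgs
}
```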

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 12:19:59 +01:00
harald
6a69b47b8a feat(http_request): support wildcard "*" in allowed_domains
Allow ["*"] in http_request.allowed_domains to permit all public
domains without listing each one individually. Private/localhost
hosts remain blocked regardless.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 10:15:28 +01:00
harald
0027b4d746 fix(telegram): treat "message is not modified" as success in finalize_draft
Telegram returns 400 with "message is not modified" when editMessageText
is called with content identical to the current message. This happens
when streaming deltas have already updated the draft to the final text.
Previously this triggered a fallback to sendMessage, producing a
duplicate message.
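The fix can be sketched as a success predicate over Telegram's response (hypothetical helper; the real code inspects the Bot API error payload):

```rust
// Hypothetical sketch: count the "message is not modified" 400 from
// editMessageText as success, since the draft already shows the final text.
fn edit_result_is_success(status: u16, description: &str) -> bool {
    if status == 200 {
        return true;
    }
    // Previously this case fell through to sendMessage and duplicated
    // the reply; now it is treated as delivered.
    status == 400 && description.contains("message is not modified")
}
```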

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 08:24:31 +01:00
harald
f426edfc17 feat(agent): emit tool status events from run_tool_call_loop
Add ToolStatusEvent enum (Thinking, ToolStart) and extract_tool_detail
helper to the agent loop. run_tool_call_loop now accepts an optional
on_tool_status sender and emits events before LLM calls and tool
executions. CLI callers pass None; the channel orchestrator uses it
for real-time draft updates.

Includes unit tests for extract_tool_detail covering all tool types.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 07:39:50 +01:00
harald
7df2102d9d feat(channel): add tool status display and configurable message timeout
Show real-time tool execution status in channels with draft support
(e.g. Telegram with stream_mode=partial). During processing, the draft
message shows "Thinking..." and progressively adds tool lines like
"🔧 shell(ls -la)" as tools execute. The final response replaces
all status lines cleanly via finalize_draft.

Also makes the channel message timeout configurable via
agent.channel_message_timeout_secs (default 300s), replacing the
previously hardcoded constant.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 07:39:39 +01:00
harald
61a998cae3 fix(cron): correct false high-frequency warning for daily cron jobs
The frequency check compared two consecutive calls to
next_run_for_schedule with now and now+1s, which returned the same
next occurrence for daily schedules — making the interval appear as
0 minutes. Compare two consecutive occurrences instead.
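A toy model of the bug and the fix, using a simple periodic schedule in place of real cron parsing (all names hypothetical):

```rust
// Toy schedule: a job that runs at second `offset` of every `period`.
fn next_run(after: i64, period: i64, offset: i64) -> i64 {
    // Smallest t > after with t % period == offset.
    let mut t = (after / period) * period + offset;
    if t <= after {
        t += period;
    }
    t
}

// Buggy check: evaluate at `now` and `now + 1s`. Both usually resolve to
// the same next occurrence, so the interval appears to be ~0.
fn buggy_interval_secs(now: i64, period: i64, offset: i64) -> i64 {
    next_run(now + 1, period, offset) - next_run(now, period, offset)
}

// Fixed check: compare two consecutive occurrences.
fn fixed_interval_secs(now: i64, period: i64, offset: i64) -> i64 {
    let first = next_run(now, period, offset);
    next_run(first, period, offset) - first
}
```

With `period = 86_400` (daily), the buggy form reports a 0-second interval almost always, while the fixed form reports the true daily spacing.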

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 07:36:03 +01:00
158 changed files with 3696 additions and 23936 deletions


@@ -21,10 +21,6 @@ PROVIDER=openrouter
# Workspace directory override
# ZEROCLAW_WORKSPACE=/path/to/workspace
# Reasoning mode (enables extended thinking for supported models)
# ZEROCLAW_REASONING_ENABLED=false
# REASONING_ENABLED=false
# ── Provider-Specific API Keys ────────────────────────────────
# OpenRouter
# OPENROUTER_API_KEY=sk-or-v1-...
@@ -67,22 +63,6 @@ PROVIDER=openrouter
# ZEROCLAW_GATEWAY_HOST=127.0.0.1
# ZEROCLAW_ALLOW_PUBLIC_BIND=false
# ── Storage ─────────────────────────────────────────────────
# Backend override for persistent storage (default: sqlite)
# ZEROCLAW_STORAGE_PROVIDER=sqlite
# ZEROCLAW_STORAGE_DB_URL=postgres://localhost/zeroclaw
# ZEROCLAW_STORAGE_CONNECT_TIMEOUT_SECS=5
# ── Proxy ──────────────────────────────────────────────────
# Forward provider/service traffic through an HTTP(S) proxy.
# ZEROCLAW_PROXY_ENABLED=false
# ZEROCLAW_HTTP_PROXY=http://proxy.example.com:8080
# ZEROCLAW_HTTPS_PROXY=http://proxy.example.com:8080
# ZEROCLAW_ALL_PROXY=socks5://proxy.example.com:1080
# ZEROCLAW_NO_PROXY=localhost,127.0.0.1
# ZEROCLAW_PROXY_SCOPE=zeroclaw # environment|zeroclaw|services
# ZEROCLAW_PROXY_SERVICES=openai,anthropic
# ── Optional Integrations ────────────────────────────────────
# Pushover notifications (`pushover` tool)
# PUSHOVER_TOKEN=your-pushover-app-token

.envrc

@@ -1 +0,0 @@
use flake


@@ -4,13 +4,13 @@ updates:
- package-ecosystem: cargo
directory: "/"
schedule:
interval: daily
interval: weekly
target-branch: main
open-pull-requests-limit: 3
open-pull-requests-limit: 5
labels:
- "dependencies"
groups:
rust-all:
rust-minor-patch:
patterns:
- "*"
update-types:
@@ -20,14 +20,14 @@ updates:
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: daily
interval: weekly
target-branch: main
open-pull-requests-limit: 1
open-pull-requests-limit: 3
labels:
- "ci"
- "dependencies"
groups:
actions-all:
actions-minor-patch:
patterns:
- "*"
update-types:
@@ -37,14 +37,14 @@ updates:
- package-ecosystem: docker
directory: "/"
schedule:
interval: daily
interval: weekly
target-branch: main
open-pull-requests-limit: 1
open-pull-requests-limit: 3
labels:
- "ci"
- "dependencies"
groups:
docker-all:
docker-minor-patch:
patterns:
- "*"
update-types:


@@ -12,7 +12,11 @@ Describe this PR in 2-5 bullets:
- Risk label (`risk: low|medium|high`):
- Size label (`size: XS|S|M|L|XL`, auto-managed/read-only):
- Scope labels (`core|agent|channel|config|cron|daemon|doctor|gateway|health|heartbeat|integration|memory|observability|onboard|provider|runtime|security|service|skillforge|skills|tool|tunnel|docs|dependencies|ci|tests|scripts|dev`, comma-separated):
<<<<<<< chore/labeler-spacing-trusted-tier
- Module labels (`<module>: <component>`, for example `channel: telegram`, `provider: kimi`, `tool: shell`):
=======
- Module labels (`<module>:<component>`, for example `channel:telegram`, `provider:kimi`, `tool:shell`):
>>>>>>> main
- Contributor tier label (`trusted contributor|experienced contributor|principal contributor|distinguished contributor`, auto-managed/read-only; author merged PRs >=5/10/20/50):
- If any auto-label is incorrect, note requested correction:


@@ -41,11 +41,11 @@ jobs:
run: ./scripts/ci/detect_change_scope.sh
lint:
name: Lint Gate (Format + Clippy + Strict Delta)
name: Lint Gate (Format + Clippy)
needs: [changes]
if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full'))
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 25
timeout-minutes: 20
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
@@ -57,6 +57,22 @@ jobs:
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run rust quality gate
run: ./scripts/ci/rust_quality_gate.sh
lint-strict-delta:
name: Lint Gate (Strict Delta)
needs: [changes]
if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full'))
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: clippy
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run strict lint delta gate
env:
BASE_SHA: ${{ needs.changes.outputs.base_sha }}
@@ -64,8 +80,8 @@ jobs:
test:
name: Test
needs: [changes, lint]
if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full')) && needs.lint.result == 'success'
needs: [changes, lint, lint-strict-delta]
if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full')) && needs.lint.result == 'success' && needs.lint-strict-delta.result == 'success'
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 30
steps:
@@ -90,8 +106,8 @@ jobs:
with:
toolchain: 1.92.0
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Build binary (smoke check)
run: cargo build --locked --verbose
- name: Build release binary
run: cargo build --release --locked --verbose
docs-only:
name: Docs-Only Fast Path
@@ -169,7 +185,7 @@ jobs:
lint-feedback:
name: Lint Feedback
if: github.event_name == 'pull_request'
needs: [changes, lint, docs-quality]
needs: [changes, lint, lint-strict-delta, docs-quality]
runs-on: blacksmith-2vcpu-ubuntu-2404
permissions:
contents: read
@@ -185,7 +201,7 @@ jobs:
RUST_CHANGED: ${{ needs.changes.outputs.rust_changed }}
DOCS_CHANGED: ${{ needs.changes.outputs.docs_changed }}
LINT_RESULT: ${{ needs.lint.result }}
LINT_DELTA_RESULT: ${{ needs.lint.result }}
LINT_DELTA_RESULT: ${{ needs.lint-strict-delta.result }}
DOCS_RESULT: ${{ needs.docs-quality.result }}
with:
script: |
@@ -215,7 +231,7 @@ jobs:
ci-required:
name: CI Required Gate
if: always()
needs: [changes, lint, test, build, docs-only, non-rust, docs-quality, lint-feedback, workflow-owner-approval]
needs: [changes, lint, lint-strict-delta, test, build, docs-only, non-rust, docs-quality, lint-feedback, workflow-owner-approval]
runs-on: blacksmith-2vcpu-ubuntu-2404
steps:
- name: Enforce required status
@@ -260,7 +276,7 @@ jobs:
fi
lint_result="${{ needs.lint.result }}"
lint_strict_delta_result="${{ needs.lint.result }}"
lint_strict_delta_result="${{ needs.lint-strict-delta.result }}"
test_result="${{ needs.test.result }}"
build_result="${{ needs.build.result }}"


@@ -1,6 +1,12 @@
name: Feature Matrix
on:
push:
branches: [main]
paths:
- "Cargo.toml"
- "Cargo.lock"
- "src/**"
schedule:
- cron: "30 4 * * 1" # Weekly Monday 4:30am UTC
workflow_dispatch:
@@ -55,3 +61,6 @@ jobs:
- name: Check feature combination
run: cargo check --locked ${{ matrix.args }}
- name: Test feature combination
run: cargo test --locked ${{ matrix.args }}


@@ -143,7 +143,7 @@ Workflow: `.github/workflows/pub-docker-img.yml`
- `latest` + SHA tag (`sha-<12 chars>`) for `main`
- semantic tag from pushed git tag (`vX.Y.Z`) + SHA tag for tag pushes
- branch name + SHA tag for non-`main` manual dispatch refs
5. Multi-platform publish is used for both `main` and tag pushes (`linux/amd64,linux/arm64`).
5. Multi-platform publish is used for tag pushes (`linux/amd64,linux/arm64`), while `main` publish stays `linux/amd64`.
6. Typical runtime in recent sample: ~139.9s.
7. Result: pushed image tags under `ghcr.io/<owner>/<repo>`.


@@ -15,7 +15,7 @@ jobs:
(github.event.action == 'opened' || github.event.action == 'reopened' || github.event.action == 'labeled' || github.event.action == 'unlabeled')) ||
(github.event_name == 'pull_request_target' &&
(github.event.action == 'labeled' || github.event.action == 'unlabeled'))
runs-on: ubuntu-latest
runs-on: blacksmith-2vcpu-ubuntu-2404
permissions:
contents: read
issues: write
@@ -34,7 +34,7 @@ jobs:
await script({ github, context, core });
first-interaction:
if: github.event.action == 'opened'
runs-on: ubuntu-latest
runs-on: blacksmith-2vcpu-ubuntu-2404
permissions:
issues: write
pull-requests: write
@@ -65,7 +65,7 @@ jobs:
labeled-routes:
if: github.event.action == 'labeled'
runs-on: ubuntu-latest
runs-on: blacksmith-2vcpu-ubuntu-2404
permissions:
contents: read
issues: write


@@ -12,7 +12,7 @@ jobs:
permissions:
issues: write
pull-requests: write
runs-on: ubuntu-latest
runs-on: blacksmith-2vcpu-ubuntu-2404
steps:
- name: Mark stale issues and pull requests
uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0


@@ -2,7 +2,7 @@ name: PR Check Status
on:
schedule:
- cron: "15 8 * * *" # Once daily at 8:15am UTC
- cron: "15 */12 * * *"
workflow_dispatch:
permissions: {}
@@ -13,13 +13,13 @@
jobs:
nudge-stale-prs:
runs-on: ubuntu-latest
runs-on: blacksmith-2vcpu-ubuntu-2404
permissions:
contents: read
pull-requests: write
issues: write
env:
STALE_HOURS: "48"
STALE_HOURS: "4"
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4


@@ -16,7 +16,7 @@ permissions:
jobs:
intake:
name: Intake Checks
runs-on: ubuntu-latest
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
steps:
- name: Checkout repository


@@ -25,7 +25,8 @@ permissions:
jobs:
label:
runs-on: ubuntu-latest
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4


@@ -21,8 +21,13 @@ on:
paths:
- "Dockerfile"
- ".dockerignore"
- "docker-compose.yml"
- "Cargo.toml"
- "Cargo.lock"
- "rust-toolchain.toml"
- "src/**"
- "crates/**"
- "benches/**"
- "firmware/**"
- "dev/config.template.toml"
- ".github/workflows/pub-docker-img.yml"
workflow_dispatch:
@@ -70,8 +75,6 @@ jobs:
tags: zeroclaw-pr-smoke:latest
labels: ${{ steps.meta.outputs.labels || '' }}
platforms: linux/amd64
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Verify image
run: docker run --rm zeroclaw-pr-smoke:latest --version
@@ -80,7 +83,7 @@
name: Build and Push Docker Image
if: (github.event_name == 'workflow_dispatch' || (github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v')))) && github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 45
timeout-minutes: 25
permissions:
contents: read
packages: write
@@ -125,9 +128,7 @@ jobs:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: ${{ startsWith(github.ref, 'refs/tags/v') && 'linux/amd64,linux/arm64' || 'linux/amd64' }}
- name: Set GHCR package visibility to public
shell: bash


@@ -27,45 +27,15 @@ jobs:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: ubuntu-latest
target: armv7-unknown-linux-gnueabihf
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: gcc-arm-linux-gnueabihf
linker_env: CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER
linker: arm-linux-gnueabihf-gcc
- os: macos-15-intel
- os: macos-latest
target: x86_64-apple-darwin
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: macos-14
- os: macos-latest
target: aarch64-apple-darwin
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: windows-latest
target: x86_64-pc-windows-msvc
artifact: zeroclaw.exe
archive_ext: zip
cross_compiler: ""
linker_env: ""
linker: ""
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
@@ -76,41 +46,20 @@ jobs:
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Install cross-compilation toolchain (Linux)
if: runner.os == 'Linux' && matrix.cross_compiler != ''
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Build release
env:
LINKER_ENV: ${{ matrix.linker_env }}
LINKER: ${{ matrix.linker }}
run: |
if [ -n "$LINKER_ENV" ] && [ -n "$LINKER" ]; then
echo "Using linker override: $LINKER_ENV=$LINKER"
export "$LINKER_ENV=$LINKER"
fi
cargo build --release --locked --target ${{ matrix.target }}
run: cargo build --release --locked --target ${{ matrix.target }}
- name: Check binary size (Unix)
if: runner.os != 'Windows'
run: |
BIN="target/${{ matrix.target }}/release/${{ matrix.artifact }}"
if [ ! -f "$BIN" ]; then
echo "::error::Expected binary not found: $BIN"
exit 1
fi
SIZE=$(stat -f%z "$BIN" 2>/dev/null || stat -c%s "$BIN")
SIZE=$(stat -f%z target/${{ matrix.target }}/release/${{ matrix.artifact }} 2>/dev/null || stat -c%s target/${{ matrix.target }}/release/${{ matrix.artifact }})
SIZE_MB=$((SIZE / 1024 / 1024))
echo "Binary size: ${SIZE_MB}MB ($SIZE bytes)"
echo "### Binary Size: ${{ matrix.target }}" >> "$GITHUB_STEP_SUMMARY"
echo "- Size: ${SIZE_MB}MB ($SIZE bytes)" >> "$GITHUB_STEP_SUMMARY"
if [ "$SIZE" -gt 41943040 ]; then
echo "::error::Binary exceeds 40MB safeguard (${SIZE_MB}MB)"
if [ "$SIZE" -gt 15728640 ]; then
echo "::error::Binary exceeds 15MB hard limit (${SIZE_MB}MB)"
exit 1
elif [ "$SIZE" -gt 15728640 ]; then
echo "::warning::Binary exceeds 15MB advisory target (${SIZE_MB}MB)"
elif [ "$SIZE" -gt 5242880 ]; then
echo "::warning::Binary exceeds 5MB target (${SIZE_MB}MB)"
else
@@ -121,19 +70,19 @@ jobs:
if: runner.os != 'Windows'
run: |
cd target/${{ matrix.target }}/release
tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }} ${{ matrix.artifact }}
tar czf ../../../zeroclaw-${{ matrix.target }}.tar.gz ${{ matrix.artifact }}
- name: Package (Windows)
if: runner.os == 'Windows'
run: |
cd target/${{ matrix.target }}/release
7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }} ${{ matrix.artifact }}
7z a ../../../zeroclaw-${{ matrix.target }}.zip ${{ matrix.artifact }}
- name: Upload artifact
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with:
name: zeroclaw-${{ matrix.target }}
path: zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }}
path: zeroclaw-${{ matrix.target }}.*
retention-days: 7
publish:
@@ -145,7 +94,7 @@ jobs:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Download all artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
path: artifacts
@@ -170,7 +119,7 @@ jobs:
cat SHA256SUMS
- name: Install cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@3454372f43399081ed03b604cb2d021dabca52bb # v3.8.2
- name: Sign artifacts with cosign (keyless)
run: |


@@ -3,20 +3,8 @@ name: Sec Audit
on:
push:
branches: [main]
paths:
- "Cargo.toml"
- "Cargo.lock"
- "src/**"
- "crates/**"
- "deny.toml"
pull_request:
branches: [main]
paths:
- "Cargo.toml"
- "Cargo.lock"
- "src/**"
- "crates/**"
- "deny.toml"
schedule:
- cron: "0 6 * * 1" # Weekly on Monday 6am UTC


@@ -2,7 +2,7 @@ name: Sec CodeQL
on:
schedule:
- cron: "0 6 * * 1" # Weekly Monday 6am UTC
- cron: "0 6,18 * * *" # Twice daily at 6am and 6pm UTC
workflow_dispatch:
concurrency:


@@ -17,7 +17,7 @@ permissions:
jobs:
update-notice:
name: Update NOTICE with new contributors
runs-on: ubuntu-latest
runs-on: blacksmith-2vcpu-ubuntu-2404
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4


@@ -1,8 +1,8 @@
name: Test Benchmarks
on:
schedule:
- cron: "0 3 * * 1" # Weekly Monday 3am UTC
push:
branches: [main]
workflow_dispatch:
concurrency:
@@ -39,7 +39,7 @@ jobs:
path: |
target/criterion/
benchmark_output.txt
retention-days: 7
retention-days: 30
- name: Post benchmark summary on PR
if: github.event_name == 'pull_request'

.gitignore

@@ -3,7 +3,6 @@ firmware/*/target
*.db
*.db-journal
.DS_Store
._*
.wt-pr37/
__pycache__/
*.pyc


@@ -26,13 +26,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `enc:` prefix for encrypted secrets — Use `enc2:` (ChaCha20-Poly1305) instead.
Legacy values are still decrypted for backward compatibility but should be migrated.
### Fixed
- **Onboarding channel menu dispatch** now uses an enum-backed selector instead of hard-coded
numeric match arms, preventing duplicated pattern arms and related `unreachable pattern`
compiler warnings in `src/onboard/wizard.rs`.
- **OpenAI native tool spec parsing** now uses owned serializable/deserializable structs,
fixing a compile-time type mismatch when validating tool schemas before API calls.
## [0.1.0] - 2026-02-13
### Added

CLA.md

@@ -1,132 +0,0 @@
# ZeroClaw Contributor License Agreement (CLA)
**Version 1.0 — February 2026**
**ZeroClaw Labs**
---
## Purpose
This Contributor License Agreement ("CLA") clarifies the intellectual
property rights granted by contributors to ZeroClaw Labs. This agreement
protects both contributors and users of the ZeroClaw project.
By submitting a contribution (pull request, patch, issue with code, or any
other form of code submission) to the ZeroClaw repository, you agree to the
terms of this CLA.
---
## 1. Definitions
- **"Contribution"** means any original work of authorship, including any
modifications or additions to existing work, submitted to ZeroClaw Labs
for inclusion in the ZeroClaw project.
- **"You"** means the individual or legal entity submitting a Contribution.
- **"ZeroClaw Labs"** means the maintainers and organization responsible
for the ZeroClaw project at https://github.com/zeroclaw-labs/zeroclaw.
---
## 2. Grant of Copyright License
You grant ZeroClaw Labs and recipients of software distributed by ZeroClaw
Labs a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to:
- Reproduce, prepare derivative works of, publicly display, publicly
perform, sublicense, and distribute your Contributions and derivative
works under **both the MIT License and the Apache License 2.0**.
---
## 3. Grant of Patent License
You grant ZeroClaw Labs and recipients of software distributed by ZeroClaw
Labs a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable patent license to make, have made, use, offer to sell, sell,
import, and otherwise transfer your Contributions.
This patent license applies only to patent claims licensable by you that
are necessarily infringed by your Contribution alone or in combination with
the ZeroClaw project.
**This protects you:** if a third party files a patent claim against
ZeroClaw that covers your Contribution, your patent license to the project
is not revoked.
---
## 4. You Retain Your Rights
This CLA does **not** transfer ownership of your Contribution to ZeroClaw
Labs. You retain full copyright ownership of your Contribution. You are
free to use your Contribution in any other project under any license.
---
## 5. Original Work
You represent that:
1. Each Contribution is your original creation, or you have sufficient
rights to submit it under this CLA.
2. Your Contribution does not knowingly infringe any third-party patent,
copyright, trademark, or other intellectual property right.
3. If your employer has rights to intellectual property you create, you
have received permission to submit the Contribution, or your employer
has signed a corporate CLA with ZeroClaw Labs.
---
## 6. No Trademark Rights
This CLA does not grant you any rights to use the ZeroClaw name,
trademarks, service marks, or logos. See TRADEMARK.md for trademark policy.
---
## 7. Attribution
ZeroClaw Labs will maintain attribution to contributors in the repository
commit history and NOTICE file. Your contributions are permanently and
publicly recorded.
---
## 8. Dual-License Commitment
All Contributions accepted into the ZeroClaw project are licensed under
both:
- **MIT License** — permissive open-source use
- **Apache License 2.0** — patent protection and stronger IP guarantees
This dual-license model ensures maximum compatibility and protection for
the entire contributor community.
---
## 9. How to Agree
By opening a pull request or submitting a patch to the ZeroClaw repository,
you indicate your agreement to this CLA. No separate signature is required
for individual contributors.
For **corporate contributors** (submitting on behalf of a company or
organization), please open an issue titled "Corporate CLA — [Company Name]"
and a maintainer will follow up.
---
## 10. Questions
If you have questions about this CLA, open an issue at:
https://github.com/zeroclaw-labs/zeroclaw/issues
---
*This CLA is based on the Apache Individual Contributor License Agreement
v2.0, adapted for the ZeroClaw dual-license model.*

Cargo.lock (generated)

File diff suppressed because it is too large

@@ -26,7 +26,7 @@ tokio-util = { version = "0.7", default-features = false }
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls", "blocking", "multipart", "stream", "socks"] }
# Matrix client + E2EE decryption
matrix-sdk = { version = "0.16", optional = true, default-features = false, features = ["e2e-encryption", "rustls-tls", "markdown"] }
matrix-sdk = { version = "0.16", default-features = false, features = ["e2e-encryption", "rustls-tls", "markdown"] }
# Serialization
serde = { version = "1.0", default-features = false, features = ["derive"] }
@@ -37,9 +37,6 @@ directories = "6.0"
toml = "1.0"
shellexpand = "3.1"
# JSON Schema generation for config export
schemars = "1.2"
# Logging - minimal
tracing = { version = "0.1", default-features = false }
tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt", "ansi", "env-filter"] }
@@ -72,10 +69,7 @@ sha2 = "0.10"
hex = "0.4"
# CSPRNG for secure token generation
rand = "0.10"
# serde-big-array for wa-rs storage (large array serialization)
serde-big-array = { version = "0.5", optional = true }
rand = "0.9"
# Fast mutexes that don't poison on panic
parking_lot = "0.12"
@@ -103,8 +97,8 @@ console = "0.16"
# Hardware discovery (device path globbing)
glob = "0.3"
# WebSocket client channels (Discord/Lark/DingTalk)
tokio-tungstenite = { version = "0.28", features = ["rustls-tls-webpki-roots"] }
# Discord WebSocket gateway
tokio-tungstenite = { version = "0.24", features = ["rustls-tls-webpki-roots"] }
futures-util = { version = "0.3", default-features = false, features = ["sink"] }
futures = "0.3"
regex = "1.10"
@@ -120,42 +114,27 @@ mail-parser = "0.11.2"
async-imap = { version = "0.11",features = ["runtime-tokio"], default-features = false }
# HTTP server (gateway) — replaces raw TCP for proper HTTP/1.1 compliance
axum = { version = "0.8", default-features = false, features = ["http1", "json", "tokio", "query", "ws", "macros"] }
axum = { version = "0.8", default-features = false, features = ["http1", "json", "tokio", "query", "ws"] }
tower = { version = "0.5", default-features = false }
tower-http = { version = "0.6", default-features = false, features = ["limit", "timeout"] }
http-body-util = "0.1"
# OpenTelemetry — OTLP trace + metrics export.
# Use the blocking HTTP exporter client to avoid Tokio-reactor panics in
# OpenTelemetry background batch threads when ZeroClaw emits spans/metrics from
# non-Tokio contexts.
# OpenTelemetry — OTLP trace + metrics export
opentelemetry = { version = "0.31", default-features = false, features = ["trace", "metrics"] }
opentelemetry_sdk = { version = "0.31", default-features = false, features = ["trace", "metrics"] }
opentelemetry-otlp = { version = "0.31", default-features = false, features = ["trace", "metrics", "http-proto", "reqwest-blocking-client", "reqwest-rustls-webpki-roots"] }
opentelemetry-otlp = { version = "0.31", default-features = false, features = ["trace", "metrics", "http-proto", "reqwest-client", "reqwest-rustls-webpki-roots"] }
# USB device enumeration (hardware discovery)
nusb = { version = "0.2", default-features = false, optional = true }
# Serial port for peripheral communication (STM32, etc.)
tokio-serial = { version = "5", default-features = false, optional = true }
# USB device enumeration (hardware discovery) — only on platforms nusb supports
# (Linux, macOS, Windows). Android/Termux uses target_os="android" and is excluded.
[target.'cfg(any(target_os = "linux", target_os = "macos", target_os = "windows"))'.dependencies]
nusb = { version = "0.2", default-features = false, optional = true }
# probe-rs for STM32/Nucleo memory read (Phase B)
probe-rs = { version = "0.31", optional = true }
probe-rs = { version = "0.30", optional = true }
# PDF extraction for datasheet RAG (optional, enable with --features rag-pdf)
pdf-extract = { version = "0.10", optional = true }
tokio-stream = { version = "0.1.18", features = ["full"] }
# WhatsApp Web client (wa-rs) — optional, enable with --features whatsapp-web
# Uses wa-rs for Bot and Client, wa-rs-core for storage traits, custom rusqlite backend avoids Diesel conflict.
wa-rs = { version = "0.2", optional = true, default-features = false }
wa-rs-core = { version = "0.2", optional = true, default-features = false }
wa-rs-binary = { version = "0.2", optional = true, default-features = false }
wa-rs-proto = { version = "0.2", optional = true, default-features = false }
wa-rs-ureq-http = { version = "0.2", optional = true }
wa-rs-tokio-transport = { version = "0.2", optional = true, default-features = false }
# Raspberry Pi GPIO / Landlock (Linux only) — target-specific to avoid compile failure on macOS
[target.'cfg(target_os = "linux")'.dependencies]
@@ -163,9 +142,8 @@ rppal = { version = "0.22", optional = true }
landlock = { version = "0.4", optional = true }
[features]
default = ["hardware", "channel-matrix"]
default = ["hardware"]
hardware = ["nusb", "tokio-serial"]
channel-matrix = ["dep:matrix-sdk"]
peripheral-rpi = ["rppal"]
# Browser backend feature alias used by cfg(feature = "browser-native")
browser-native = ["dep:fantoccini"]
@@ -180,9 +158,6 @@ landlock = ["sandbox-landlock"]
probe = ["dep:probe-rs"]
# rag-pdf = PDF ingestion for datasheet RAG
rag-pdf = ["dep:pdf-extract"]
# whatsapp-web = Native WhatsApp Web client with custom rusqlite storage backend
whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "serde-big-array"]
[profile.release]
opt-level = "z" # Optimize for size
lto = "thin" # Lower memory use during release builds
@@ -206,7 +181,7 @@ panic = "abort"
[dev-dependencies]
tempfile = "3.14"
criterion = { version = "0.8", features = ["async_tokio"] }
criterion = { version = "0.5", features = ["async_tokio"] }
[[bench]]
name = "agent_benchmarks"

LICENSE

@@ -22,34 +22,7 @@ SOFTWARE.
================================================================================
TRADEMARK NOTICE
This license does not grant permission to use the trade names, trademarks,
service marks, or product names of ZeroClaw Labs, including "ZeroClaw",
"zeroclaw-labs", or associated logos, except as required for reasonable and
customary use in describing the origin of the Software.
Unauthorized use of the ZeroClaw name or branding to imply endorsement,
affiliation, or origin is strictly prohibited. See TRADEMARK.md for details.
================================================================================
DUAL LICENSE NOTICE
This software is available under a dual-license model:
1. MIT License (this file) — for open-source, research, academic, and
personal use. See LICENSE (this file).
2. Apache License 2.0 — for contributors and deployments requiring explicit
patent grants and stronger IP protection. See LICENSE-APACHE.
You may choose either license for your use. Contributors submitting patches
grant rights under both licenses. See CLA.md for the contributor agreement.
================================================================================
This product includes software developed by ZeroClaw Labs and contributors:
https://github.com/zeroclaw-labs/zeroclaw/graphs/contributors
See NOTICE for full contributor attribution.
See NOTICE file for full contributor attribution.


@@ -1,186 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship made available under
the License, as indicated by a copyright notice that is included in
or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean, as defined in Section 5, any work of
authorship, including the original version of the Work and any
modifications or additions to that Work or Derivative Works of the
Work, that is intentionally submitted to the Licensor for inclusion
in the Work by the copyright owner or by an individual or Legal Entity
authorized to submit on behalf of the copyright owner. For the purposes
of this definition, "submitted" means any form of electronic, verbal,
or written communication sent to the Licensor or its representatives,
including but not limited to communication on electronic mailing lists,
source code control systems, and issue tracking systems that are managed
by, or on behalf of, the Licensor for the purpose of discussing and
improving the Work, but excluding communication that is conspicuously
marked or designated in writing by the copyright owner as "Not a
Contribution."
"Contributor" shall mean Licensor and any Legal Entity on behalf of
whom a Contribution has been received by the Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a cross-claim
or counterclaim in a lawsuit) alleging that the Work or any Contribution
incorporated within the Work constitutes direct or contributory patent
infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or Derivative
Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, You must include a readable copy of the
attribution notices contained within such NOTICE file, in
at least one of the following places: within a NOTICE text
file distributed as part of the Derivative Works; within
the Source form or documentation, if provided along with the
Derivative Works; or, within a display generated by the
Derivative Works, if and wherever such third-party notices
normally appear. The contents of the NOTICE file are for
informational purposes only and do not modify the License.
You may add Your own attribution notices within Derivative
Works that You distribute, alongside or as an addendum to
the NOTICE text from the Work, provided that such additional
attribution notices cannot be construed as modifying the License.
You may add Your own license statement for Your modifications and
may provide additional grant of rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the
Contribution, either on its own or as part of the Work.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
including "ZeroClaw", "zeroclaw-labs", or associated logos, except
as required for reasonable and customary use in describing the origin
of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or exemplary damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or all other
commercial damages or losses), even if such Contributor has been
advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may offer such
obligations only on Your own behalf and on Your sole responsibility,
not on behalf of any other Contributor, and only if You agree to
indemnify, defend, and hold each Contributor harmless for any
liability incurred by, or claims asserted against, such Contributor
by reason of your accepting any warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2025 ZeroClaw Labs
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

NOTICE

@@ -3,26 +3,6 @@ Copyright 2025 ZeroClaw Labs
This product includes software developed at ZeroClaw Labs (https://github.com/zeroclaw-labs).
Official Repository
===================
The only official ZeroClaw repository is:
https://github.com/zeroclaw-labs/zeroclaw
Any other repository claiming to be ZeroClaw is unauthorized.
See TRADEMARK.md for the full trademark policy.
License
=======
This software is available under a dual-license model:
1. MIT License — see LICENSE
2. Apache License 2.0 — see LICENSE-APACHE
You may use either license. Contributors grant rights under both.
See CLA.md for the contributor license agreement.
Contributors
============
@@ -30,10 +10,6 @@ This NOTICE file is maintained by repository automation.
For the latest contributor list, see the repository contributors page:
https://github.com/zeroclaw-labs/zeroclaw/graphs/contributors
All contributors retain copyright ownership of their contributions.
Contributions are permanently attributed in the repository commit history.
Patent rights are protected for all contributors under Apache License 2.0.
Third-Party Dependencies
========================


@@ -8,15 +8,6 @@
<strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong>
</p>
<p align="center">
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
<a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
🌐 言語: <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a>
</p>
@@ -42,17 +33,7 @@
>
> コマンド名、設定キー、API パス、Trait 名などの技術識別子は英語のまま維持しています。
>
> 最終同期日: **2026-02-19**
## 📢 お知らせボード
重要なお知らせ(互換性破壊変更、セキュリティ告知、メンテナンス時間、リリース阻害事項など)をここに掲載します。
| 日付 (UTC) | レベル | お知らせ | 対応 |
|---|---|---|---|
| 2026-02-19 | _緊急_ | 私たちは `openagen/zeroclaw` および `zeroclaw.org` とは**一切関係ありません**。`zeroclaw.org` は現在 `openagen/zeroclaw` の fork を指しており、そのドメイン/リポジトリは当プロジェクトの公式サイト・公式プロジェクトを装っています。 | これらの情報源による案内、バイナリ、資金調達情報、公式発表は信頼しないでください。必ず本リポジトリと認証済み公式SNSのみを参照してください。 |
| 2026-02-19 | _重要_ | 公式サイトは**まだ公開しておらず**、なりすましの試みを確認しています。ZeroClaw 名義の投資・資金調達などの活動には参加しないでください。 | 情報は本リポジトリを最優先で確認し、[X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/)、[Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs)、[Telegram CN (@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn)、[Telegram RU (@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru) と [小紅書アカウント](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) で公式更新を確認してください。 |
| 2026-02-19 | _重要_ | Anthropic は 2026-02-19 に Authentication and Credential Use を更新しました。条文では、OAuth authentication(Free/Pro/Max)は Claude Code と Claude.ai 専用であり、Claude Free/Pro/Max で取得した OAuth トークンを他の製品・ツール・サービス(Agent SDK を含む)で使用することは許可されず、Consumer Terms of Service 違反に該当すると明記されています。 | 損失回避のため、当面は Claude Code OAuth 連携を試さないでください。原文: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use)。 |
> 最終同期日: **2026-02-18**
## 概要
@@ -119,12 +100,6 @@ cd zeroclaw
## クイックスタート
### Homebrew(macOS/Linuxbrew)
```bash
brew install zeroclaw
```
```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
@@ -142,106 +117,6 @@ zeroclaw gateway
zeroclaw daemon
```
## Subscription AuthOpenAI Codex / Claude Code
ZeroClaw はサブスクリプションベースのネイティブ認証プロファイルをサポートしています(マルチアカウント対応、保存時暗号化)。
- 保存先: `~/.zeroclaw/auth-profiles.json`
- 暗号化キー: `~/.zeroclaw/.secret_key`
- Profile ID 形式: `<provider>:<profile_name>`(例: `openai-codex:work`)
OpenAI Codex OAuth(ChatGPT サブスクリプション):
```bash
# サーバー/ヘッドレス環境向け推奨
zeroclaw auth login --provider openai-codex --device-code
# ブラウザ/コールバックフロー(ペーストフォールバック付き)
zeroclaw auth login --provider openai-codex --profile default
zeroclaw auth paste-redirect --provider openai-codex --profile default
# 確認 / リフレッシュ / プロファイル切替
zeroclaw auth status
zeroclaw auth refresh --provider openai-codex --profile default
zeroclaw auth use --provider openai-codex --profile work
```
Claude Code / Anthropic setup-token:
```bash
# サブスクリプション/setup token の貼り付け(Authorization header モード)
zeroclaw auth paste-token --provider anthropic --profile default --auth-kind authorization
# エイリアスコマンド
zeroclaw auth setup-token --provider anthropic --profile default
```
Subscription auth で agent を実行:
```bash
zeroclaw agent --provider openai-codex -m "hello"
zeroclaw agent --provider openai-codex --auth-profile openai-codex:work -m "hello"
# Anthropic は API key と auth token の両方の環境変数をサポート:
# ANTHROPIC_AUTH_TOKEN, ANTHROPIC_OAUTH_TOKEN, ANTHROPIC_API_KEY
zeroclaw agent --provider anthropic -m "hello"
```
## アーキテクチャ
すべてのサブシステムは **Trait** ベース — 設定変更だけで実装を差し替え可能、コード変更不要。
<p align="center">
<img src="docs/architecture.svg" alt="ZeroClaw アーキテクチャ" width="900" />
</p>
| サブシステム | Trait | 内蔵実装 | 拡張方法 |
|-------------|-------|----------|----------|
| **AI モデル** | `Provider` | `zeroclaw providers` で確認(現在 28 個の組み込み + エイリアス、カスタムエンドポイント対応) | `custom:https://your-api.com`OpenAI 互換)または `anthropic-custom:https://your-api.com` |
| **チャネル** | `Channel` | CLI, Telegram, Discord, Slack, Mattermost, iMessage, Matrix, Signal, WhatsApp, Email, IRC, Lark, DingTalk, QQ, Webhook | 任意のメッセージ API |
| **メモリ** | `Memory` | SQLite ハイブリッド検索, PostgreSQL バックエンド, Lucid ブリッジ, Markdown ファイル, 明示的 `none` バックエンド, スナップショット/復元, オプション応答キャッシュ | 任意の永続化バックエンド |
| **ツール** | `Tool` | shell/file/memory, cron/schedule, git, pushover, browser, http_request, screenshot/image_info, composio (opt-in), delegate, ハードウェアツール | 任意の機能 |
| **オブザーバビリティ** | `Observer` | Noop, Log, Multi | Prometheus, OTel |
| **ランタイム** | `RuntimeAdapter` | Native, Docker(サンドボックス) | adapter 経由で追加可能;未対応の kind は即座にエラー |
| **セキュリティ** | `SecurityPolicy` | Gateway ペアリング, サンドボックス, allowlist, レート制限, ファイルシステムスコープ, 暗号化シークレット | — |
| **アイデンティティ** | `IdentityConfig` | OpenClaw (markdown), AIEOS v1.1 (JSON) | 任意の ID フォーマット |
| **トンネル** | `Tunnel` | None, Cloudflare, Tailscale, ngrok, Custom | 任意のトンネルバイナリ |
| **ハートビート** | Engine | HEARTBEAT.md 定期タスク | — |
| **スキル** | Loader | TOML マニフェスト + SKILL.md インストラクション | コミュニティスキルパック |
| **インテグレーション** | Registry | 9 カテゴリ、70 件以上の連携 | プラグインシステム |
### ランタイムサポート(現状)
- ✅ 現在サポート: `runtime.kind = "native"` または `runtime.kind = "docker"`
- 🚧 計画中(未実装): WASM / エッジランタイム
未対応の `runtime.kind` が設定された場合、ZeroClaw は native へのサイレントフォールバックではなく、明確なエラーで終了します。
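この挙動に対応する設定は、たとえば次のようになります(値は説明用の例です):

```toml
[runtime]
kind = "docker"  # サポート対象は "native" と "docker" のみ。それ以外は明確なエラーで終了
```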
### メモリシステム(フルスタック検索エンジン)
すべて自社実装、外部依存ゼロ — Pinecone、Elasticsearch、LangChain 不要:
| レイヤー | 実装 |
|---------|------|
| **ベクトル DB** | Embeddings を SQLite に BLOB として保存、コサイン類似度検索 |
| **キーワード検索** | FTS5 仮想テーブル、BM25 スコアリング |
| **ハイブリッドマージ** | カスタム重み付きマージ関数(`vector.rs`) |
| **Embeddings** | `EmbeddingProvider` trait — OpenAI、カスタム URL、または noop |
| **チャンキング** | 行ベースの Markdown チャンカー(見出し構造保持) |
| **キャッシュ** | SQLite `embedding_cache` テーブル、LRU エビクション |
| **安全な再インデックス** | FTS5 再構築 + 欠落ベクトルの再埋め込みをアトミックに実行 |
Agent はツール経由でメモリの呼び出し・保存・管理を自動的に行います。
```toml
[memory]
backend = "sqlite" # "sqlite", "lucid", "postgres", "markdown", "none"
auto_save = true
embedding_provider = "none" # "none", "openai", "custom:https://..."
vector_weight = 0.7
keyword_weight = 0.3
```
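上記の重み付きマージは、概念的には次の計算に相当します(関数名・構成は説明用の仮定で、実際の `vector.rs` の実装そのものではありません):

```rust
// ベクトル類似度と BM25 キーワードスコアの重み付き統合の概念スケッチ。
// 両スコアは 0.0..=1.0 に正規化済みという前提の簡略版です。
fn hybrid_score(vector_sim: f64, keyword_score: f64, vector_weight: f64, keyword_weight: f64) -> f64 {
    vector_weight * vector_sim + keyword_weight * keyword_score
}

fn main() {
    // 上記設定の既定値 vector_weight = 0.7 / keyword_weight = 0.3 を使用
    let score = hybrid_score(0.9, 0.5, 0.7, 0.3);
    println!("hybrid score = {score:.2}"); // 0.7*0.9 + 0.3*0.5 = 0.78
}
```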
## セキュリティのデフォルト
- Gateway の既定バインド: `127.0.0.1:3000`

README.md

@@ -13,19 +13,13 @@
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License: MIT" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
<a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
Built by students and members of the Harvard, MIT, and Sundai.Club communities.
</p>
<p align="center">
🌐 <strong>Languages:</strong> <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a> · <a href="README.vi.md">Tiếng Việt</a>
🌐 <strong>Languages:</strong> <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a>
</p>
<p align="center">
@@ -52,16 +46,6 @@ Built by students and members of the Harvard, MIT, and Sundai.Club communities.
<p align="center"><code>Trait-driven architecture · secure-by-default runtime · provider/channel/tool swappable · pluggable everything</code></p>
### 📢 Announcements
Use this board for important notices (breaking changes, security advisories, maintenance windows, and release blockers).
| Date (UTC) | Level | Notice | Action |
|---|---|---|---|
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository are impersonating our official website/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only this repository and our verified social accounts. |
| 2026-02-19 | _Important_ | We have **not** launched an official website yet, and we are seeing impersonation attempts. Do **not** join any investment or fundraising activity claiming the ZeroClaw name. | Use this repository as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Telegram CN (@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn), [Telegram RU (@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated the Authentication and Credential Use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is intended exclusively for Claude Code and Claude.ai; using OAuth tokens from Claude Free/Pro/Max in any other product, tool, or service (including Agent SDK) is not permitted and may violate the Consumer Terms of Service. | Please temporarily avoid Claude Code OAuth integrations to prevent potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
- 🏎️ **Lean Runtime by Default:** Common CLI and status workflows run in a few-megabyte memory envelope on release builds.
@@ -88,7 +72,7 @@ Local machine quick benchmark (macOS arm64, Feb 2026) normalized for 0.8GHz edge
| **Binary Size** | ~28MB (dist) | N/A (Scripts) | ~8MB | **3.4 MB** |
| **Cost** | Mac Mini $599 | Linux SBC ~$50 | Linux Board $10 | **Any hardware $10** |
> Notes: ZeroClaw results are measured on release builds using `/usr/bin/time -l`. OpenClaw requires Node.js runtime (typically ~390MB additional memory overhead), while NanoBot requires Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.
> Notes: ZeroClaw results are measured on release builds using `/usr/bin/time -l`. OpenClaw requires Node.js runtime (typically ~390MB additional memory overhead), while NanoBot requires Python runtime. PicoClaw and ZeroClaw are static binaries.
<p align="center">
<img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw Comparison" width="800" />
@@ -173,44 +157,17 @@ Or skip the steps above and install everything (system deps, Rust, ZeroClaw) in
curl -LsSf https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts/install.sh | bash
```
#### Compilation resource requirements
Building from source needs more resources than running the resulting binary:
| Resource | Minimum | Recommended |
|---|---|---|
| **RAM + swap** | 2 GB | 4 GB+ |
| **Free disk** | 6 GB | 10 GB+ |
If your host is below the minimum, use pre-built binaries:
```bash
./bootstrap.sh --prefer-prebuilt
```
To require binary-only install with no source fallback:
```bash
./bootstrap.sh --prebuilt-only
```
#### Optional
- **Docker** — required only if using the [Docker sandboxed runtime](#runtime-support-current) (`runtime.kind = "docker"`). Install via your package manager or [docker.com](https://docs.docker.com/engine/install/).
> **Note:** The default `cargo build --release` uses `codegen-units=1` to lower peak compile pressure. For faster builds on powerful machines, use `cargo build --profile release-fast`.
> **Note:** The default `cargo build --release` uses `codegen-units=1` for compatibility with low-memory devices (e.g., Raspberry Pi 3 with 1GB RAM). For faster builds on powerful machines, use `cargo build --profile release-fast`.
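A profile of this kind is typically declared in `Cargo.toml` along these lines (the settings shown are illustrative assumptions, not necessarily the repository's actual `release-fast` profile):

```toml
# Hypothetical sketch of a faster, higher-memory release profile
[profile.release-fast]
inherits = "release"
codegen-units = 16  # parallel codegen: faster builds at some binary-size cost
lto = "thin"
```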
</details>
## Quick Start
### Homebrew (macOS/Linuxbrew)
```bash
brew install zeroclaw
```
### One-click bootstrap
```bash
@@ -222,17 +179,8 @@ cd zeroclaw
# Optional: bootstrap dependencies + Rust on fresh machines
./bootstrap.sh --install-system-deps --install-rust
# Optional: pre-built binary first (recommended on low-RAM/low-disk hosts)
./bootstrap.sh --prefer-prebuilt
# Optional: binary-only install (no source build fallback)
./bootstrap.sh --prebuilt-only
# Optional: run onboarding in the same flow
./bootstrap.sh --onboard --api-key "sk-..." --provider openrouter [--model "openrouter/auto"]
# Optional: run bootstrap + onboarding fully in Docker
./bootstrap.sh --docker
./bootstrap.sh --onboard --api-key "sk-..." --provider openrouter
```
Remote one-liner (review first in security-sensitive environments):
@@ -243,25 +191,6 @@ curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts
Details: [`docs/one-click-bootstrap.md`](docs/one-click-bootstrap.md) (toolchain mode may request `sudo` for system packages).
### Pre-built binaries
Release assets are published for:
- Linux: `x86_64`, `aarch64`, `armv7`
- macOS: `x86_64`, `aarch64`
- Windows: `x86_64`
Download the latest assets from:
<https://github.com/zeroclaw-labs/zeroclaw/releases/latest>
Example (ARM64 Linux):
```bash
curl -fsSLO https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-aarch64-unknown-linux-gnu.tar.gz
tar xzf zeroclaw-aarch64-unknown-linux-gnu.tar.gz
install -m 0755 zeroclaw "$HOME/.cargo/bin/zeroclaw"
```
```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
@@ -271,8 +200,8 @@ cargo install --path . --force --locked
# Ensure ~/.cargo/bin is in your PATH
export PATH="$HOME/.cargo/bin:$PATH"
# Quick setup (no prompts, optional model specification)
zeroclaw onboard --api-key sk-... --provider openrouter [--model "openrouter/auto"]
# Quick setup (no prompts)
zeroclaw onboard --api-key sk-... --provider openrouter
# Or interactive wizard
zeroclaw onboard --interactive
@@ -315,7 +244,6 @@ zeroclaw integrations info Telegram
# Manage background service
zeroclaw service install
zeroclaw service status
zeroclaw service restart
# Migrate memory from OpenClaw (safe preview first)
zeroclaw migrate openclaw --dry-run
@@ -524,37 +452,7 @@ For non-text replies, ZeroClaw can send Telegram attachments when the assistant
Paths can be local files (for example `/tmp/screenshot.png`) or HTTPS URLs.
### WhatsApp Setup
ZeroClaw supports two WhatsApp backends:
- **WhatsApp Web mode** (QR / pair code, no Meta Business API required)
- **WhatsApp Business Cloud API mode** (official Meta webhook flow)
#### WhatsApp Web mode (recommended for personal/self-hosted use)
1. **Build with WhatsApp Web support:**
```bash
cargo build --features whatsapp-web
```
2. **Configure ZeroClaw:**
```toml
[channels_config.whatsapp]
session_path = "~/.zeroclaw/state/whatsapp-web/session.db"
pair_phone = "15551234567" # optional; omit to use QR flow
pair_code = "" # optional custom pair code
allowed_numbers = ["+1234567890"] # E.164 format, or ["*"] for all
```
3. **Start channels/daemon and link device:**
- Run `zeroclaw channel start` (or `zeroclaw daemon`).
- Follow terminal pairing output (QR or pair code).
- In WhatsApp on phone: **Settings → Linked Devices**.
4. **Test:** Send a message from an allowed number and verify the agent replies.
#### WhatsApp Business Cloud API mode
### WhatsApp Business Cloud API Setup
WhatsApp uses Meta's Cloud API with webhooks (push-based, not polling):
@ -595,10 +493,6 @@ WhatsApp uses Meta's Cloud API with webhooks (push-based, not polling):
Config: `~/.zeroclaw/config.toml` (created by `onboard`)
When `zeroclaw channel start` is already running, changes to `default_provider`,
`default_model`, `default_temperature`, `api_key`, `api_url`, and `reliability.*`
are hot-applied on the next inbound channel message.
```toml
api_key = "sk-..."
default_provider = "openrouter"
@ -697,8 +591,6 @@ window_allowlist = [] # optional window title/process allowlist hints
enabled = false # opt-in: 1000+ OAuth apps via composio.dev
# api_key = "cmp_..." # optional: stored encrypted when [secrets].encrypt = true
entity_id = "default" # default user_id for Composio tool calls
# Runtime tip: if execute asks for connected_account_id, run composio with
# action='list_accounts' and app='gmail' (or your toolkit) to retrieve account IDs.
[identity]
format = "openclaw" # "openclaw" (default, markdown files) or "aieos" (JSON)
@ -875,7 +767,7 @@ See [aieos.org](https://aieos.org) for the full schema and live examples.
| `service` | Manage user-level background service |
| `doctor` | Diagnose daemon/scheduler/channel freshness |
| `status` | Show full system status |
| `cron` | Manage scheduled tasks (`list/add/add-at/add-every/once/remove/update/pause/resume`) |
| `cron` | Manage scheduled tasks (`list/add/add-at/add-every/once/remove/pause/resume`) |
| `models` | Refresh provider model catalogs (`models refresh`) |
| `providers` | List supported providers and aliases |
| `channel` | List/start/doctor channels and bind Telegram identities |
@ -887,18 +779,6 @@ See [aieos.org](https://aieos.org) for the full schema and live examples.
For a task-oriented command guide, see [`docs/commands-reference.md`](docs/commands-reference.md).
### Open-Skills Opt-In
Community `open-skills` sync is disabled by default. Enable it explicitly in `config.toml`:
```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # optional
```
You can also override at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`.
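For a one-off run, the two variables can be exported inline. A minimal sketch (the checkout location is hypothetical, and the `zeroclaw` invocation is commented out because it needs a configured install):

```shell
# Enable community open-skills for this shell only, overriding [skills] in config.toml.
export ZEROCLAW_OPEN_SKILLS_ENABLED=true
export ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills"   # hypothetical checkout location

echo "open-skills enabled=$ZEROCLAW_OPEN_SKILLS_ENABLED dir=$ZEROCLAW_OPEN_SKILLS_DIR"
# zeroclaw agent -m "what skills are loaded?"   # run with the overrides active
```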
## Development
```bash
@ -989,42 +869,13 @@ A heartfelt thank you to the communities and institutions that inspire and fuel
We're building in the open because the best ideas come from everywhere. If you're reading this, you're part of it. Welcome. 🦀❤️
## ⚠️ Official Repository & Impersonation Warning
**This is the only official ZeroClaw repository:**
> https://github.com/zeroclaw-labs/zeroclaw
Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and not affiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).
If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
---
## License
ZeroClaw is dual-licensed for maximum openness and contributor protection:
| License | Use case |
|---|---|
| [MIT](LICENSE) | Open-source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |
You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
### Trademark
The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
### Contributor Protections
- You **retain copyright** of your contributions
- **Patent grant** (Apache 2.0) shields you from patent claims by other contributors
- Your contributions are **permanently attributed** in commit history and [NOTICE](NOTICE)
- No trademark rights are transferred by contributing
MIT — see [LICENSE](LICENSE) for license terms and attribution baseline
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
See [CONTRIBUTING.md](CONTRIBUTING.md). Implement a trait, submit a PR:
- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`


@ -8,15 +8,6 @@
<strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong>
</p>
<p align="center">
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
<a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
🌐 Языки: <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a>
</p>
@ -42,17 +33,7 @@
>
> Технические идентификаторы (команды, ключи конфигурации, API-пути, имена Trait) сохранены на английском.
>
> Последняя синхронизация: **2026-02-19**.
## 📢 Доска объявлений
Публикуйте здесь важные уведомления (breaking changes, security advisories, окна обслуживания и блокеры релиза).
| Дата (UTC) | Уровень | Объявление | Действие |
|---|---|---|---|
| 2026-02-19 | _Срочно_ | Мы **не аффилированы** с `openagen/zeroclaw` и `zeroclaw.org`. Домен `zeroclaw.org` сейчас указывает на fork `openagen/zeroclaw`, и этот домен/репозиторий выдают себя за наш официальный сайт и проект. | Не доверяйте информации, бинарникам, сборам средств и «официальным» объявлениям из этих источников. Используйте только этот репозиторий и наши верифицированные соцсети. |
| 2026-02-19 | _Важно_ | Официальный сайт пока **не запущен**, и мы уже видим попытки выдавать себя за ZeroClaw. Пожалуйста, не участвуйте в инвестициях, сборах средств или похожих активностях от имени ZeroClaw. | Ориентируйтесь только на этот репозиторий; также следите за [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Telegram CN (@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn), [Telegram RU (@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru) и [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) для официальных обновлений. |
| 2026-02-19 | _Важно_ | Anthropic обновил раздел Authentication and Credential Use 2026-02-19. В нем указано, что OAuth authentication (Free/Pro/Max) предназначена только для Claude Code и Claude.ai; использование OAuth-токенов, полученных через Claude Free/Pro/Max, в любых других продуктах, инструментах или сервисах (включая Agent SDK), не допускается и может считаться нарушением Consumer Terms of Service. | Чтобы избежать потерь, временно не используйте Claude Code OAuth-интеграции. Оригинал: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
> Последняя синхронизация: **2026-02-18**.
## О проекте
@ -119,12 +100,6 @@ cd zeroclaw
## Быстрый старт
### Homebrew (macOS/Linuxbrew)
```bash
brew install zeroclaw
```
```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
@ -142,106 +117,6 @@ zeroclaw gateway
zeroclaw daemon
```
## Subscription Auth (OpenAI Codex / Claude Code)
ZeroClaw поддерживает нативные профили авторизации на основе подписки (мультиаккаунт, шифрование при хранении).
- Файл хранения: `~/.zeroclaw/auth-profiles.json`
- Ключ шифрования: `~/.zeroclaw/.secret_key`
- Формат Profile ID: `<provider>:<profile_name>` (пример: `openai-codex:work`)
OpenAI Codex OAuth (подписка ChatGPT):
```bash
# Рекомендуется для серверов/headless-окружений
zeroclaw auth login --provider openai-codex --device-code
# Браузерный/callback-поток с paste-фолбэком
zeroclaw auth login --provider openai-codex --profile default
zeroclaw auth paste-redirect --provider openai-codex --profile default
# Проверка / обновление / переключение профиля
zeroclaw auth status
zeroclaw auth refresh --provider openai-codex --profile default
zeroclaw auth use --provider openai-codex --profile work
```
Claude Code / Anthropic setup-token:
```bash
# Вставка subscription/setup token (режим Authorization header)
zeroclaw auth paste-token --provider anthropic --profile default --auth-kind authorization
# Команда-алиас
zeroclaw auth setup-token --provider anthropic --profile default
```
Запуск agent с subscription auth:
```bash
zeroclaw agent --provider openai-codex -m "hello"
zeroclaw agent --provider openai-codex --auth-profile openai-codex:work -m "hello"
# Anthropic поддерживает и API key, и auth token через переменные окружения:
# ANTHROPIC_AUTH_TOKEN, ANTHROPIC_OAUTH_TOKEN, ANTHROPIC_API_KEY
zeroclaw agent --provider anthropic -m "hello"
```
## Архитектура
Каждая подсистема — это **Trait**: меняйте реализации через конфигурацию, без изменения кода.
<p align="center">
<img src="docs/architecture.svg" alt="Архитектура ZeroClaw" width="900" />
</p>
| Подсистема | Trait | Встроенные реализации | Расширение |
|-----------|-------|---------------------|------------|
| **AI-модели** | `Provider` | Каталог через `zeroclaw providers` (сейчас 28 встроенных + алиасы, плюс пользовательские endpoint) | `custom:https://your-api.com` (OpenAI-совместимый) или `anthropic-custom:https://your-api.com` |
| **Каналы** | `Channel` | CLI, Telegram, Discord, Slack, Mattermost, iMessage, Matrix, Signal, WhatsApp, Email, IRC, Lark, DingTalk, QQ, Webhook | Любой messaging API |
| **Память** | `Memory` | SQLite гибридный поиск, PostgreSQL-бэкенд, Lucid-мост, Markdown-файлы, явный `none`-бэкенд, snapshot/hydrate, опциональный кэш ответов | Любой persistence-бэкенд |
| **Инструменты** | `Tool` | shell/file/memory, cron/schedule, git, pushover, browser, http_request, screenshot/image_info, composio (opt-in), delegate, аппаратные инструменты | Любая функциональность |
| **Наблюдаемость** | `Observer` | Noop, Log, Multi | Prometheus, OTel |
| **Runtime** | `RuntimeAdapter` | Native, Docker (sandbox) | Через adapter; неподдерживаемые kind завершаются с ошибкой |
| **Безопасность** | `SecurityPolicy` | Gateway pairing, sandbox, allowlist, rate limits, scoping файловой системы, шифрование секретов | — |
| **Идентификация** | `IdentityConfig` | OpenClaw (markdown), AIEOS v1.1 (JSON) | Любой формат идентификации |
| **Туннели** | `Tunnel` | None, Cloudflare, Tailscale, ngrok, Custom | Любой tunnel-бинарник |
| **Heartbeat** | Engine | HEARTBEAT.md — периодические задачи | — |
| **Навыки** | Loader | TOML-манифесты + SKILL.md-инструкции | Пакеты навыков сообщества |
| **Интеграции** | Registry | 70+ интеграций в 9 категориях | Плагинная система |
### Поддержка runtime (текущая)
- ✅ Поддерживается сейчас: `runtime.kind = "native"` или `runtime.kind = "docker"`
- 🚧 Запланировано, но ещё не реализовано: WASM / edge-runtime
При указании неподдерживаемого `runtime.kind` ZeroClaw завершается с явной ошибкой, а не молча откатывается к native.
### Система памяти (полнофункциональный поисковый движок)
Полностью собственная реализация, ноль внешних зависимостей — без Pinecone, Elasticsearch, LangChain:
| Уровень | Реализация |
|---------|-----------|
| **Векторная БД** | Embeddings хранятся как BLOB в SQLite, поиск по косинусному сходству |
| **Поиск по ключевым словам** | Виртуальные таблицы FTS5 со скорингом BM25 |
| **Гибридное слияние** | Пользовательская взвешенная функция слияния (`vector.rs`) |
| **Embeddings** | Trait `EmbeddingProvider` — OpenAI, пользовательский URL или noop |
| **Чанкинг** | Построчный Markdown-чанкер с сохранением заголовков |
| **Кэширование** | Таблица `embedding_cache` в SQLite с LRU-вытеснением |
| **Безопасная переиндексация** | Атомарная перестройка FTS5 + повторное встраивание отсутствующих векторов |
Agent автоматически вспоминает, сохраняет и управляет памятью через инструменты.
```toml
[memory]
backend = "sqlite" # "sqlite", "lucid", "postgres", "markdown", "none"
auto_save = true
embedding_provider = "none" # "none", "openai", "custom:https://..."
vector_weight = 0.7
keyword_weight = 0.3
```
## Важные security-дефолты
- Gateway по умолчанию: `127.0.0.1:3000`

File diff suppressed because it is too large.

@ -8,15 +8,6 @@
<strong>零开销、零妥协;随处部署、万物可换。</strong>
</p>
<p align="center">
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
<a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
🌐 语言:<a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a>
</p>
@ -42,17 +33,7 @@
>
> 技术标识(命令、配置键、API 路径、Trait 名称)保持英文,避免语义漂移。
>
> 最后对齐时间:**2026-02-19**。
## 📢 公告板
用于发布重要通知(破坏性变更、安全通告、维护窗口、版本阻塞问题等)。
| 日期(UTC) | 级别 | 通知 | 处理建议 |
|---|---|---|---|
| 2026-02-19 | _紧急_ | 我们与 `openagen/zeroclaw` 和 `zeroclaw.org` **没有任何关系**;`zeroclaw.org` 当前会指向 `openagen/zeroclaw` 这个 fork,并且该域名/仓库正在冒充我们的官网与官方项目。 | 请不要相信上述来源发布的任何信息、二进制、募资活动或官方声明。请仅以本仓库和已验证官方社媒为准。 |
| 2026-02-19 | _重要_ | 我们目前**尚未发布官方正式网站**,且已发现有人尝试冒充我们。请勿参与任何打着 ZeroClaw 名义进行的投资、募资或类似活动。 | 一切信息请以本仓库为准;也可关注 [X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Telegram 中文频道(@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn)、[Telegram 俄语频道(@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru) 与 [小红书账号](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) 获取官方最新动态。 |
| 2026-02-19 | _重要_ | Anthropic 于 2026-02-19 更新了 Authentication and Credential Use 条款。条款明确:OAuth authentication(用于 Free、Pro、Max)仅适用于 Claude Code 与 Claude.ai;将 Claude Free/Pro/Max 账号获得的 OAuth token 用于其他任何产品、工具或服务(包括 Agent SDK)不被允许,并可能构成对 Consumer Terms of Service 的违规。 | 为避免损失,请暂时不要尝试 Claude Code OAuth 集成;原文见:[Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use)。 |
> 最后对齐时间:**2026-02-18**。
## 项目简介
@ -119,12 +100,6 @@ cd zeroclaw
## 快速开始
### Homebrew(macOS/Linuxbrew)
```bash
brew install zeroclaw
```
```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
@ -147,106 +122,6 @@ zeroclaw gateway
zeroclaw daemon
```
## Subscription Auth(OpenAI Codex / Claude Code)
ZeroClaw 现已支持基于订阅的原生鉴权配置(多账号、静态加密存储)。
- 配置文件:`~/.zeroclaw/auth-profiles.json`
- 加密密钥:`~/.zeroclaw/.secret_key`
- Profile ID 格式:`<provider>:<profile_name>`(例:`openai-codex:work`)
OpenAI Codex OAuth(ChatGPT 订阅):
```bash
# 推荐用于服务器/无显示器环境
zeroclaw auth login --provider openai-codex --device-code
# 浏览器/回调流程,支持粘贴回退
zeroclaw auth login --provider openai-codex --profile default
zeroclaw auth paste-redirect --provider openai-codex --profile default
# 检查 / 刷新 / 切换 profile
zeroclaw auth status
zeroclaw auth refresh --provider openai-codex --profile default
zeroclaw auth use --provider openai-codex --profile work
```
Claude Code / Anthropic setup-token
```bash
# 粘贴订阅/setup token(Authorization header 模式)
zeroclaw auth paste-token --provider anthropic --profile default --auth-kind authorization
# 别名命令
zeroclaw auth setup-token --provider anthropic --profile default
```
使用 subscription auth 运行 agent:
```bash
zeroclaw agent --provider openai-codex -m "hello"
zeroclaw agent --provider openai-codex --auth-profile openai-codex:work -m "hello"
# Anthropic 同时支持 API key 和 auth token 环境变量:
# ANTHROPIC_AUTH_TOKEN, ANTHROPIC_OAUTH_TOKEN, ANTHROPIC_API_KEY
zeroclaw agent --provider anthropic -m "hello"
```
## 架构
每个子系统都是一个 **Trait** — 通过配置切换即可更换实现,无需修改代码。
<p align="center">
<img src="docs/architecture.svg" alt="ZeroClaw 架构图" width="900" />
</p>
| 子系统 | Trait | 内置实现 | 扩展方式 |
|--------|-------|----------|----------|
| **AI 模型** | `Provider` | 通过 `zeroclaw providers` 查看(当前 28 个内置 + 别名,以及自定义端点) | `custom:https://your-api.com`OpenAI 兼容)或 `anthropic-custom:https://your-api.com` |
| **通道** | `Channel` | CLI, Telegram, Discord, Slack, Mattermost, iMessage, Matrix, Signal, WhatsApp, Email, IRC, Lark, DingTalk, QQ, Webhook | 任意消息 API |
| **记忆** | `Memory` | SQLite 混合搜索, PostgreSQL 后端, Lucid 桥接, Markdown 文件, 显式 `none` 后端, 快照/恢复, 可选响应缓存 | 任意持久化后端 |
| **工具** | `Tool` | shell/file/memory, cron/schedule, git, pushover, browser, http_request, screenshot/image_info, composio (opt-in), delegate, 硬件工具 | 任意能力 |
| **可观测性** | `Observer` | Noop, Log, Multi | Prometheus, OTel |
| **运行时** | `RuntimeAdapter` | Native, Docker(沙箱) | 通过 adapter 添加;不支持的类型会快速失败 |
| **安全** | `SecurityPolicy` | Gateway 配对, 沙箱, allowlist, 速率限制, 文件系统作用域, 加密密钥 | — |
| **身份** | `IdentityConfig` | OpenClaw (markdown), AIEOS v1.1 (JSON) | 任意身份格式 |
| **隧道** | `Tunnel` | None, Cloudflare, Tailscale, ngrok, Custom | 任意隧道工具 |
| **心跳** | Engine | HEARTBEAT.md 定期任务 | — |
| **技能** | Loader | TOML 清单 + SKILL.md 指令 | 社区技能包 |
| **集成** | Registry | 9 个分类下 70+ 集成 | 插件系统 |
### 运行时支持(当前)
- ✅ 当前支持:`runtime.kind = "native"` 或 `runtime.kind = "docker"`
- 🚧 计划中(尚未实现):WASM / 边缘运行时
配置了不支持的 `runtime.kind` 时,ZeroClaw 会以明确的错误退出,而非静默回退到 native。
### 记忆系统(全栈搜索引擎)
全部自研,零外部依赖 — 无需 Pinecone、Elasticsearch、LangChain:
| 层级 | 实现 |
|------|------|
| **向量数据库** | Embeddings 以 BLOB 存储于 SQLite余弦相似度搜索 |
| **关键词搜索** | FTS5 虚拟表,BM25 评分 |
| **混合合并** | 自定义加权合并函数(`vector.rs`) |
| **Embeddings** | `EmbeddingProvider` trait — OpenAI、自定义 URL 或 noop |
| **分块** | 基于行的 Markdown 分块器,保留标题结构 |
| **缓存** | SQLite `embedding_cache` 表,LRU 淘汰策略 |
| **安全重索引** | 原子化重建 FTS5 + 重新嵌入缺失向量 |
Agent 通过工具自动进行记忆的回忆、保存和管理。
```toml
[memory]
backend = "sqlite" # "sqlite", "lucid", "postgres", "markdown", "none"
auto_save = true
embedding_provider = "none" # "none", "openai", "custom:https://..."
vector_weight = 0.7
keyword_weight = 0.3
```
## 安全默认行为(关键)
- Gateway 默认绑定:`127.0.0.1:3000`


@ -1,129 +0,0 @@
# ZeroClaw Trademark Policy
**Effective date:** February 2026
**Maintained by:** ZeroClaw Labs
---
## Our Trademarks
The following are trademarks of ZeroClaw Labs:
- **ZeroClaw** (word mark)
- **zeroclaw-labs** (organization name)
- The ZeroClaw logo and associated visual identity
These marks identify the official ZeroClaw project and distinguish it from
unauthorized forks, derivatives, or impersonators.
---
## Official Repository
The **only** official ZeroClaw repository is:
> https://github.com/zeroclaw-labs/zeroclaw
Any other repository, organization, domain, or product claiming to be
"ZeroClaw" or implying affiliation with ZeroClaw Labs is unauthorized and
may constitute trademark infringement.
**Known unauthorized forks:**
- `openagen/zeroclaw` — not affiliated with ZeroClaw Labs
If you encounter an unauthorized use, please report it by opening an issue
at https://github.com/zeroclaw-labs/zeroclaw/issues.
---
## Permitted Uses
You **may** use the ZeroClaw name and marks in the following ways without
prior written permission:
1. **Attribution** — stating that your software is based on or derived from
ZeroClaw, provided it is clear your project is not the official ZeroClaw.
2. **Descriptive reference** — referring to ZeroClaw in documentation,
articles, blog posts, or presentations to accurately describe the software.
3. **Community discussion** — using the name in forums, issues, or social
media to discuss the project.
4. **Fork identification** — identifying your fork as "a fork of ZeroClaw"
with a clear link to the official repository.
---
## Prohibited Uses
You **may not** use the ZeroClaw name or marks in ways that:
1. **Imply official endorsement** — suggest your project, product, or
organization is officially affiliated with or endorsed by ZeroClaw Labs.
2. **Cause brand confusion** — use "ZeroClaw" as the primary name of a
competing or derivative product in a way that could confuse users about
the source.
3. **Impersonate the project** — create repositories, domains, packages,
or accounts that could be mistaken for the official ZeroClaw project.
4. **Misrepresent origin** — remove or obscure attribution to ZeroClaw Labs
while distributing the software or derivatives.
5. **Commercial trademark use** — use the marks in commercial products,
services, or marketing without prior written permission from ZeroClaw Labs.
---
## Fork Guidelines
Forks are welcome under the terms of the MIT and Apache 2.0 licenses. If
you fork ZeroClaw, you must:
- Clearly state your project is a fork of ZeroClaw
- Link back to the official repository
- Not use "ZeroClaw" as the primary name of your fork
- Not imply your fork is the official or original project
- Retain all copyright, license, and attribution notices
---
## Contributor Protections
Contributors to the official ZeroClaw repository are protected under the
dual MIT + Apache 2.0 license model:
- **Patent grant** (Apache 2.0) — your contributions are protected from
patent claims by other contributors.
- **Attribution** — your contributions are permanently recorded in the
repository history and NOTICE file.
- **No trademark transfer** — contributing code does not transfer any
trademark rights to third parties.
---
## Reporting Infringement
If you believe someone is infringing ZeroClaw trademarks:
1. Open an issue at https://github.com/zeroclaw-labs/zeroclaw/issues
2. Include the URL of the infringing content
3. Describe how it violates this policy
For serious or commercial infringement, contact the maintainers directly
through the repository.
---
## Changes to This Policy
ZeroClaw Labs reserves the right to update this policy at any time. Changes
will be committed to the official repository with a clear commit message.
---
*This trademark policy is separate from and in addition to the MIT and
Apache 2.0 software licenses. The licenses govern use of the source code;
this policy governs use of the ZeroClaw name and brand.*


@ -1,5 +1,5 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" >/dev/null 2>&1 && pwd || pwd)"
exec "$ROOT_DIR/zeroclaw_install.sh" "$@"
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
exec "$ROOT_DIR/scripts/bootstrap.sh" "$@"


@ -30,7 +30,7 @@ tokio = { version = "1.42", features = ["rt-multi-thread", "macros", "time", "sy
# Serialization
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
toml = "1.0"
toml = "0.8"
# HTTP client (for Ollama vision)
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
@ -52,7 +52,7 @@ tracing = "0.1"
chrono = { version = "0.4", features = ["clock", "std"] }
# User directories
directories = "6.0"
directories = "5.0"
[target.'cfg(target_os = "linux")'.dependencies]


@ -14,11 +14,6 @@ else
fi
COMPOSE_FILE="$BASE_DIR/docker-compose.yml"
if [ "$BASE_DIR" = "dev" ]; then
ENV_FILE=".env"
else
ENV_FILE="../.env"
fi
# Colors
GREEN='\033[0;32m'
@ -26,15 +21,6 @@ YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
function load_env {
if [ -f "$ENV_FILE" ]; then
# Auto-export variables from .env for docker compose passthrough.
set -a
source "$ENV_FILE"
set +a
fi
}
function ensure_config {
CONFIG_DIR="$HOST_TARGET_DIR/.zeroclaw"
CONFIG_FILE="$CONFIG_DIR/config.toml"
@ -69,8 +55,6 @@ if [ -z "$1" ]; then
exit 1
fi
load_env
case "$1" in
up)
ensure_config


@ -20,20 +20,11 @@ services:
container_name: zeroclaw-dev
restart: unless-stopped
environment:
- API_KEY
- PROVIDER
- ZEROCLAW_MODEL
- ZEROCLAW_GATEWAY_PORT=3000
- SANDBOX_HOST=zeroclaw-sandbox
secrets:
- source: zeroclaw_env
target: zeroclaw_env
entrypoint: ["/bin/bash", "-lc"]
command:
- |
if [ -f /run/secrets/zeroclaw_env ]; then
set -a
. /run/secrets/zeroclaw_env
set +a
fi
exec zeroclaw gateway --port "${ZEROCLAW_GATEWAY_PORT:-3000}" --host "[::]"
volumes:
# Mount single config file (avoids shadowing other files in .zeroclaw)
- ../target/.zeroclaw/config.toml:/zeroclaw-data/.zeroclaw/config.toml
@ -66,7 +57,3 @@ services:
networks:
dev-net:
driver: bridge
secrets:
zeroclaw_env:
file: ../.env


@ -51,43 +51,8 @@ Notes:
- Model cache previews come from `zeroclaw models refresh --provider <ID>`.
- These are runtime chat commands, not CLI subcommands.
## Inbound Image Marker Protocol
ZeroClaw supports multimodal input through inline message markers:
- Syntax: ``[IMAGE:<source>]``
- `<source>` can be:
- Local file path
- Data URI (`data:image/...;base64,...`)
- Remote URL only when `[multimodal].allow_remote_fetch = true`
Operational notes:
- Marker parsing applies to user-role messages before provider calls.
- Provider capability is enforced at runtime: if the selected provider does not support vision, the request fails with a structured capability error (`capability=vision`).
- Linq webhook `media` parts with `image/*` MIME type are automatically converted to this marker format.
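A minimal sketch of building such a marker from a local file as a data URI (the image content and prompt are placeholders; the `zeroclaw agent` call is commented out because it needs a configured vision-capable provider):

```shell
# Encode a local image as a data URI and embed it via the [IMAGE:<source>] marker.
IMG="$(mktemp)"
printf 'fake-png-bytes' > "$IMG"          # placeholder; use a real PNG in practice
B64="$(base64 < "$IMG" | tr -d '\n')"

MSG="Describe this image. [IMAGE:data:image/png;base64,${B64}]"
echo "$MSG"
# zeroclaw agent -m "$MSG"   # would fail with capability=vision on a non-vision provider
```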
## Channel Matrix
### Build Feature Toggle (`channel-matrix`)
Matrix support is controlled at compile time by the `channel-matrix` Cargo feature.
- Default builds include Matrix support (`default = ["hardware", "channel-matrix"]`).
- For faster local iteration when Matrix is not needed:
```bash
cargo check --no-default-features --features hardware
```
- To explicitly enable Matrix support in custom feature sets:
```bash
cargo check --no-default-features --features hardware,channel-matrix
```
If `[channels_config.matrix]` is present but the binary was built without `channel-matrix`, `zeroclaw channel list`, `zeroclaw channel doctor`, and `zeroclaw channel start` will log that Matrix is intentionally skipped for this build.
---
## 2. Delivery Modes at a Glance
@ -101,7 +66,7 @@ If `[channels_config.matrix]` is present but the binary was built without `chann
| Mattermost | polling | No |
| Matrix | sync API (supports E2EE) | No |
| Signal | signal-cli HTTP bridge | No (local bridge endpoint) |
| WhatsApp | webhook (Cloud API) or websocket (Web mode) | Cloud API: Yes (public HTTPS callback), Web mode: No |
| WhatsApp | webhook | Yes (public HTTPS callback) |
| Webhook | gateway endpoint (`/webhook`) | Usually yes |
| Email | IMAP polling + SMTP send | No |
| IRC | IRC socket | No |
@ -138,17 +103,8 @@ Field names differ by channel:
[channels_config.telegram]
bot_token = "123456:telegram-token"
allowed_users = ["*"]
stream_mode = "off" # optional: off | partial
draft_update_interval_ms = 1000 # optional: edit throttle for partial streaming
mention_only = false # optional: require @mention in groups
interrupt_on_new_message = false # optional: cancel in-flight same-sender same-chat request
```
Telegram notes:
- `interrupt_on_new_message = true` preserves interrupted user turns in conversation history, then restarts generation on the newest message.
- Interruption scope is strict: same sender in the same chat. Messages from different chats are processed independently.
### 4.2 Discord
```toml
@ -208,13 +164,6 @@ ignore_stories = true
### 4.7 WhatsApp
ZeroClaw supports two WhatsApp backends:
- **Cloud API mode** (`phone_number_id` + `access_token` + `verify_token`)
- **WhatsApp Web mode** (`session_path`, requires build flag `--features whatsapp-web`)
Cloud API mode:
```toml
[channels_config.whatsapp]
access_token = "EAAB..."
@ -224,22 +173,6 @@ app_secret = "your-app-secret" # optional but recommended
allowed_numbers = ["*"]
```
WhatsApp Web mode:
```toml
[channels_config.whatsapp]
session_path = "~/.zeroclaw/state/whatsapp-web/session.db"
pair_phone = "15551234567" # optional; omit to use QR flow
pair_code = "" # optional custom pair code
allowed_numbers = ["*"]
```
Notes:
- Build with `cargo build --features whatsapp-web` (or equivalent run command).
- Keep `session_path` on persistent storage to avoid relinking after restart.
- Reply routing uses the originating chat JID, so direct and group replies work correctly.
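Putting the notes above together, a minimal bring-up sketch (the path matches the config example; the build and start commands are commented out since they need the repository checkout and a phone to link):

```shell
# Keep the WhatsApp Web session on persistent storage so restarts don't force relinking.
SESSION_DIR="$HOME/.zeroclaw/state/whatsapp-web"
mkdir -p "$SESSION_DIR"
echo "session_path -> $SESSION_DIR/session.db"

# cargo build --features whatsapp-web   # compile-time opt-in for the Web backend
# zeroclaw channel start                # then scan the QR code or enter the pair code
```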
### 4.8 Webhook Channel Config (Gateway)
`channels_config.webhook` enables webhook-specific gateway behavior.
@ -398,7 +331,7 @@ rg -n "Matrix|Telegram|Discord|Slack|Mattermost|Signal|WhatsApp|Email|IRC|Lark|D
| Mattermost | `Mattermost channel listening on` | `Mattermost: ignoring message from unauthorized user:` | `Mattermost poll error:` / `Mattermost parse error:` |
| Matrix | `Matrix channel listening on room` / `Matrix room ... is encrypted; E2EE decryption is enabled via matrix-sdk.` | `Matrix whoami failed; falling back to configured session hints for E2EE session restore:` / `Matrix whoami failed while resolving listener user_id; using configured user_id hint:` | `Matrix sync error: ... retrying...` |
| Signal | `Signal channel listening via SSE on` | (allowlist checks are enforced by `allowed_from`) | `Signal SSE returned ...` / `Signal SSE connect error:` |
| WhatsApp (channel) | `WhatsApp channel active (webhook mode).` / `WhatsApp Web connected successfully` | `WhatsApp: ignoring message from unauthorized number:` / `WhatsApp Web: message from ... not in allowed list` | `WhatsApp send failed:` / `WhatsApp Web stream error:` |
| WhatsApp (channel) | `WhatsApp channel active (webhook mode).` | `WhatsApp: ignoring message from unauthorized number:` | `WhatsApp send failed:` |
| Webhook / WhatsApp (gateway) | `WhatsApp webhook verified successfully` | `Webhook: rejected — not paired / invalid bearer token` / `Webhook: rejected request — invalid or missing X-Webhook-Secret` / `WhatsApp webhook verification failed — token mismatch` | `Webhook JSON parse error:` |
| Email | `Email polling every ...` / `Email sent to ...` | `Blocked email from ...` | `Email poll failed:` / `Email poll task panicked:` |
| IRC | `IRC channel connecting to ...` / `IRC registered as ...` | (allowlist checks are enforced by `allowed_users`) | `IRC SASL authentication failed (...)` / `IRC server does not support SASL...` / `IRC nickname ... is in use, trying ...` |
@ -416,3 +349,4 @@ If a specific channel task crashes or exits, the channel supervisor in `channels
- `Channel message worker crashed:`
These messages indicate automatic restart behavior is active, and you should inspect preceding logs for root cause.


@ -2,7 +2,7 @@
This reference is derived from the current CLI surface (`zeroclaw --help`).
Last verified: **February 19, 2026**.
Last verified: **February 18, 2026**.
## Top-Level Commands
@ -22,7 +22,6 @@ Last verified: **February 19, 2026**.
| `integrations` | Inspect integration details |
| `skills` | List/install/remove skills |
| `migrate` | Import from external runtimes (currently OpenClaw) |
| `config` | Export machine-readable config schema |
| `hardware` | Discover and introspect USB hardware |
| `peripheral` | Configure and flash peripherals |
@ -34,7 +33,6 @@ Last verified: **February 19, 2026**.
- `zeroclaw onboard --interactive`
- `zeroclaw onboard --channels-only`
- `zeroclaw onboard --api-key <KEY> --provider <ID> --memory <sqlite|lucid|markdown|none>`
- `zeroclaw onboard --api-key <KEY> --provider <ID> --model <MODEL_ID> --memory <sqlite|lucid|markdown|none>`
### `agent`
@ -53,7 +51,6 @@ Last verified: **February 19, 2026**.
- `zeroclaw service install`
- `zeroclaw service start`
- `zeroclaw service stop`
- `zeroclaw service restart`
- `zeroclaw service status`
- `zeroclaw service uninstall`
@ -92,13 +89,6 @@ Runtime in-chat commands (Telegram/Discord while channel server is running):
- `/model`
- `/model <model-id>`
Channel runtime also watches `config.toml` and hot-applies updates to:
- `default_provider`
- `default_model`
- `default_temperature`
- `api_key` / `api_url` (for the default provider)
- `reliability.*` provider retry settings
`add/remove` currently route you back to managed setup/manual config paths (not full declarative mutators yet).
### `integrations`
@ -111,20 +101,10 @@ Channel runtime also watches `config.toml` and hot-applies updates to:
- `zeroclaw skills install <source>`
- `zeroclaw skills remove <name>`
`<source>` accepts git remotes (`https://...`, `http://...`, `ssh://...`, and `git@host:owner/repo.git`) or a local filesystem path.
Skill manifests (`SKILL.toml`) support `prompts` and `[[tools]]`; both are injected into the agent system prompt at runtime, so the model can follow skill instructions without manually reading skill files.
### `migrate`
- `zeroclaw migrate openclaw [--source <path>] [--dry-run]`
### `config`
- `zeroclaw config schema`
`config schema` prints a JSON Schema (draft 2020-12) for the full `config.toml` contract to stdout.
### `hardware`
- `zeroclaw hardware discover`


@ -2,21 +2,11 @@
This is a high-signal reference for common config sections and defaults.
Last verified: **February 19, 2026**.
Last verified: **February 18, 2026**.
Config path resolution at startup:
Config file path:
1. `ZEROCLAW_WORKSPACE` override (if set)
2. persisted `~/.zeroclaw/active_workspace.toml` marker (if present)
3. default `~/.zeroclaw/config.toml`
ZeroClaw logs the resolved config on startup at `INFO` level:
- `Config loaded` with fields: `path`, `workspace`, `source`, `initialized`
Schema export command:
- `zeroclaw config schema` (prints JSON Schema draft 2020-12 to stdout)
- `~/.zeroclaw/config.toml`
## Core Keys
@ -26,216 +16,17 @@ Schema export command:
| `default_model` | `anthropic/claude-sonnet-4-6` | model routed through selected provider |
| `default_temperature` | `0.7` | model temperature |
## `[observability]`
| Key | Default | Purpose |
|---|---|---|
| `backend` | `none` | Observability backend: `none`, `noop`, `log`, `prometheus`, `otel`, `opentelemetry`, or `otlp` |
| `otel_endpoint` | `http://localhost:4318` | OTLP HTTP endpoint used when backend is `otel` |
| `otel_service_name` | `zeroclaw` | Service name emitted to OTLP collector |
Notes:
- `backend = "otel"` uses OTLP HTTP export with a blocking exporter client so spans and metrics can be emitted safely from non-Tokio contexts.
- Alias values `opentelemetry` and `otlp` map to the same OTel backend.
Example:
```toml
[observability]
backend = "otel"
otel_endpoint = "http://localhost:4318"
otel_service_name = "zeroclaw"
```
## Environment Provider Overrides
Provider selection can also be controlled by environment variables. Precedence is:
1. `ZEROCLAW_PROVIDER` (explicit override, always wins when non-empty)
2. `PROVIDER` (legacy fallback, only applied when config provider is unset or still `openrouter`)
3. `default_provider` in `config.toml`
Operational note for container users:
- If your `config.toml` sets an explicit custom provider like `custom:https://.../v1`, a default `PROVIDER=openrouter` from Docker/container env will no longer replace it.
- Use `ZEROCLAW_PROVIDER` when you intentionally want runtime env to override a non-default configured provider.
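That precedence can be modeled as a small shell function. This is an illustrative sketch only: the real resolution happens inside ZeroClaw, and `resolve_provider` is a hypothetical helper, not part of the CLI.

```bash
# Hypothetical model of the documented precedence; not ZeroClaw code.
resolve_provider() {
  local config_provider="$1"
  if [ -n "${ZEROCLAW_PROVIDER:-}" ]; then
    # Explicit override: always wins when non-empty
    echo "$ZEROCLAW_PROVIDER"
  elif [ -n "${PROVIDER:-}" ] && { [ -z "$config_provider" ] || [ "$config_provider" = "openrouter" ]; }; then
    # Legacy fallback: only when the config provider is unset or still "openrouter"
    echo "$PROVIDER"
  else
    # Otherwise keep default_provider from config.toml
    echo "$config_provider"
  fi
}
```

Under this model, a container default of `PROVIDER=openrouter` leaves an explicit `custom:https://.../v1` provider untouched, while a non-empty `ZEROCLAW_PROVIDER` replaces it.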
## `[agent]`
| Key | Default | Purpose |
|---|---|---|
| `compact_context` | `false` | When true: bootstrap_max_chars=6000, rag_chunk_limit=2. Use for 13B or smaller models |
| `max_tool_iterations` | `10` | Maximum tool-call loop turns per user message across CLI, gateway, and channels |
| `max_history_messages` | `50` | Maximum conversation history messages retained per session |
| `parallel_tools` | `false` | Enable parallel tool execution within a single iteration |
| `tool_dispatcher` | `auto` | Tool dispatch strategy |
Notes:
- Setting `max_tool_iterations = 0` falls back to safe default `10`.
- If handling a channel message exceeds the iteration limit, the runtime returns: `Agent exceeded maximum tool iterations (<value>)`.
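Putting the keys above together, a minimal `[agent]` block might look like this (values are illustrative defaults, not recommendations):

```toml
[agent]
compact_context = false   # leave bootstrap/RAG sizes at full defaults
max_tool_iterations = 10  # 0 would fall back to this same default
max_history_messages = 50
parallel_tools = false
tool_dispatcher = "auto"
```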
## `[agents.<name>]`
Delegate sub-agent configurations. Each key under `[agents]` defines a named sub-agent that the primary agent can delegate to.
| Key | Default | Purpose |
|---|---|---|
| `provider` | _required_ | Provider name (e.g. `"ollama"`, `"openrouter"`, `"anthropic"`) |
| `model` | _required_ | Model name for the sub-agent |
| `system_prompt` | unset | Optional system prompt override for the sub-agent |
| `api_key` | unset | Optional API key override (stored encrypted when `secrets.encrypt = true`) |
| `temperature` | unset | Temperature override for the sub-agent |
| `max_depth` | `3` | Max recursion depth for nested delegation |
```toml
[agents.researcher]
provider = "openrouter"
model = "anthropic/claude-sonnet-4-6"
system_prompt = "You are a research assistant."
max_depth = 2
[agents.coder]
provider = "ollama"
model = "qwen2.5-coder:32b"
temperature = 0.2
```
## `[runtime]`
| Key | Default | Purpose |
|---|---|---|
| `reasoning_enabled` | unset (`None`) | Global reasoning/thinking override for providers that support explicit controls |
Notes:
- `reasoning_enabled = false` explicitly disables provider-side reasoning for supported providers (currently `ollama`, via request field `think: false`).
- `reasoning_enabled = true` explicitly requests reasoning for supported providers (`think: true` on `ollama`).
- Unset keeps provider defaults.
## `[skills]`
| Key | Default | Purpose |
|---|---|---|
| `open_skills_enabled` | `false` | Opt-in loading/sync of community `open-skills` repository |
| `open_skills_dir` | unset | Optional local path for `open-skills` (defaults to `$HOME/open-skills` when enabled) |
Notes:
- Security-first default: ZeroClaw does **not** clone or sync `open-skills` unless `open_skills_enabled = true`.
- Environment overrides:
- `ZEROCLAW_OPEN_SKILLS_ENABLED` accepts `1/0`, `true/false`, `yes/no`, `on/off`.
- `ZEROCLAW_OPEN_SKILLS_DIR` overrides the repository path when non-empty.
- Precedence for enable flag: `ZEROCLAW_OPEN_SKILLS_ENABLED` → `skills.open_skills_enabled` in `config.toml` → default `false`.
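An opt-in `[skills]` sketch with the default path behavior spelled out (the directory value is illustrative):

```toml
[skills]
open_skills_enabled = true                  # default is false: nothing is cloned or synced
open_skills_dir = "/home/user/open-skills"  # optional; defaults to $HOME/open-skills when enabled
```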
## `[composio]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable Composio managed OAuth tools |
| `api_key` | unset | Composio API key used by the `composio` tool |
| `entity_id` | `default` | Default `user_id` sent on connect/execute calls |
Notes:
- Backward compatibility: legacy `enable = true` is accepted as an alias for `enabled = true`.
- If `enabled = false` or `api_key` is missing, the `composio` tool is not registered.
- ZeroClaw requests Composio v3 tools with `toolkit_versions=latest` and executes tools with `version="latest"` to avoid stale default tool revisions.
- Typical flow: call `connect`, complete browser OAuth, then run `execute` for the desired tool action.
- If Composio returns a missing connected-account reference error, call `list_accounts` (optionally with `app`) and pass the returned `connected_account_id` to `execute`.
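A minimal `[composio]` sketch based on the keys above (the API key is a placeholder):

```toml
[composio]
enabled = true          # legacy `enable = true` is accepted as an alias
api_key = "ck-..."      # required, or the composio tool is not registered
entity_id = "default"   # user_id sent on connect/execute calls
```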
## `[cost]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable cost tracking |
| `daily_limit_usd` | `10.00` | Daily spending limit in USD |
| `monthly_limit_usd` | `100.00` | Monthly spending limit in USD |
| `warn_at_percent` | `80` | Warn when spending reaches this percentage of limit |
| `allow_override` | `false` | Allow requests to exceed budget with `--override` flag |
Notes:
- When `enabled = true`, the runtime tracks per-request cost estimates and enforces daily/monthly limits.
- At `warn_at_percent` threshold, a warning is emitted but requests continue.
- When a limit is reached, requests are rejected unless `allow_override = true` and the `--override` flag is passed.
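The table above translates to a config sketch like this (limits are illustrative):

```toml
[cost]
enabled = true
daily_limit_usd = 10.00
monthly_limit_usd = 100.00
warn_at_percent = 80     # warn-only threshold; requests continue
allow_override = false   # true permits exceeding limits with --override
```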
## `[identity]`
| Key | Default | Purpose |
|---|---|---|
| `format` | `openclaw` | Identity format: `"openclaw"` (default) or `"aieos"` |
| `aieos_path` | unset | Path to AIEOS JSON file (relative to workspace) |
| `aieos_inline` | unset | Inline AIEOS JSON (alternative to file path) |
Notes:
- Use `format = "aieos"` with either `aieos_path` or `aieos_inline` to load an AIEOS / OpenClaw identity document.
- Only one of `aieos_path` or `aieos_inline` should be set; `aieos_path` takes precedence.
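A sketch of the AIEOS file-path variant (the path is hypothetical and resolved relative to the workspace):

```toml
[identity]
format = "aieos"
aieos_path = "identity/aieos.json"  # takes precedence over aieos_inline if both are set
```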
## `[multimodal]`
| Key | Default | Purpose |
|---|---|---|
| `max_images` | `4` | Maximum image markers accepted per request |
| `max_image_size_mb` | `5` | Per-image size limit before base64 encoding |
| `allow_remote_fetch` | `false` | Allow fetching `http(s)` image URLs from markers |
Notes:
- Runtime accepts image markers in user messages with syntax: ``[IMAGE:<source>]``.
- Supported sources:
- Local file path (for example ``[IMAGE:/tmp/screenshot.png]``)
- Data URI (for example ``[IMAGE:data:image/png;base64,...]``)
- Remote URL only when `allow_remote_fetch = true`
- Allowed MIME types: `image/png`, `image/jpeg`, `image/webp`, `image/gif`, `image/bmp`.
- When the active provider does not support vision, requests fail with a structured capability error (`capability=vision`) instead of silently dropping images.
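A `[multimodal]` sketch using the defaults above:

```toml
[multimodal]
max_images = 4
max_image_size_mb = 5
allow_remote_fetch = false  # keep remote [IMAGE:https://...] markers rejected
```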
## `[browser]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable `browser_open` tool (opens URLs without scraping) |
| `allowed_domains` | `[]` | Allowed domains for `browser_open` (exact or subdomain match) |
| `session_name` | unset | Browser session name (for agent-browser automation) |
| `backend` | `agent_browser` | Browser automation backend: `"agent_browser"`, `"rust_native"`, `"computer_use"`, or `"auto"` |
| `native_headless` | `true` | Headless mode for rust-native backend |
| `native_webdriver_url` | `http://127.0.0.1:9515` | WebDriver endpoint URL for rust-native backend |
| `native_chrome_path` | unset | Optional Chrome/Chromium executable path for rust-native backend |
### `[browser.computer_use]`
| Key | Default | Purpose |
|---|---|---|
| `endpoint` | `http://127.0.0.1:8787/v1/actions` | Sidecar endpoint for computer-use actions (OS-level mouse/keyboard/screenshot) |
| `api_key` | unset | Optional bearer token for computer-use sidecar (stored encrypted) |
| `timeout_ms` | `15000` | Per-action request timeout in milliseconds |
| `allow_remote_endpoint` | `false` | Allow remote/public endpoint for computer-use sidecar |
| `window_allowlist` | `[]` | Optional window title/process allowlist forwarded to sidecar policy |
| `max_coordinate_x` | unset | Optional X-axis boundary for coordinate-based actions |
| `max_coordinate_y` | unset | Optional Y-axis boundary for coordinate-based actions |
Notes:
- When `backend = "computer_use"`, the agent delegates browser actions to the sidecar at `computer_use.endpoint`.
- `allow_remote_endpoint = false` (default) rejects any non-loopback endpoint to prevent accidental public exposure.
- Use `window_allowlist` to restrict which OS windows the sidecar can interact with.
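A combined sketch for the computer-use backend, assuming the defaults above (the domain and window title are illustrative):

```toml
[browser]
enabled = true
allowed_domains = ["example.com"]   # exact or subdomain match
backend = "computer_use"

[browser.computer_use]
endpoint = "http://127.0.0.1:8787/v1/actions"  # non-loopback is rejected by default
timeout_ms = 15000
allow_remote_endpoint = false
window_allowlist = ["Firefox"]      # restrict which OS windows the sidecar may touch
```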
## `[http_request]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable `http_request` tool for API interactions |
| `allowed_domains` | `[]` | Allowed domains for HTTP requests (exact or subdomain match) |
| `max_response_size` | `1000000` | Maximum response size in bytes (default: 1 MB) |
| `timeout_secs` | `30` | Request timeout in seconds |
Notes:
- Deny-by-default: if `allowed_domains` is empty, all HTTP requests are rejected.
- Use exact domain or subdomain matching (e.g. `"api.example.com"`, `"example.com"`).
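The deny-by-default posture means a working setup must name its domains explicitly, as in this sketch:

```toml
[http_request]
enabled = true
allowed_domains = ["api.example.com"]  # an empty list denies all requests
max_response_size = 1000000            # bytes (1 MB)
timeout_secs = 30
```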
## `[gateway]`
| Key | Default | Purpose |
@ -245,133 +36,20 @@ Notes:
| `require_pairing` | `true` | require pairing before bearer auth |
| `allow_public_bind` | `false` | block accidental public exposure |
## `[autonomy]`
| Key | Default | Purpose |
|---|---|---|
| `level` | `supervised` | `read_only`, `supervised`, or `full` |
| `workspace_only` | `true` | restrict writes/command paths to workspace scope |
| `allowed_commands` | _required for shell execution_ | allowlist of executable names |
| `forbidden_paths` | `[]` | explicit path denylist |
| `max_actions_per_hour` | `100` | per-policy action budget |
| `max_cost_per_day_cents` | `1000` | per-policy spend guardrail |
| `require_approval_for_medium_risk` | `true` | approval gate for medium-risk commands |
| `block_high_risk_commands` | `true` | hard block for high-risk commands |
| `auto_approve` | `[]` | tool operations always auto-approved |
| `always_ask` | `[]` | tool operations that always require approval |
Notes:
- `level = "full"` skips medium-risk approval gating for shell execution, while still enforcing configured guardrails.
- Shell separator/operator parsing is quote-aware. Characters like `;` inside quoted arguments are treated as literals, not command separators.
- Unquoted shell chaining/operators are still enforced by policy checks (`;`, `|`, `&&`, `||`, background chaining, and redirects).
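A conservative `[autonomy]` sketch combining the keys above (the allowlist and denylist entries are illustrative):

```toml
[autonomy]
level = "supervised"
workspace_only = true
allowed_commands = ["git", "cargo"]   # required before any shell execution
forbidden_paths = ["/etc"]
max_actions_per_hour = 100
require_approval_for_medium_risk = true
block_high_risk_commands = true
```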
## `[memory]`
| Key | Default | Purpose |
|---|---|---|
| `backend` | `sqlite` | `sqlite`, `lucid`, `markdown`, `none` |
| `auto_save` | `true` | persist user-stated inputs only (assistant outputs are excluded) |
| `auto_save` | `true` | automatic persistence |
| `embedding_provider` | `none` | `none`, `openai`, or custom endpoint |
| `embedding_model` | `text-embedding-3-small` | embedding model ID, or `hint:<name>` route |
| `embedding_dimensions` | `1536` | expected vector size for selected embedding model |
| `vector_weight` | `0.7` | hybrid ranking vector weight |
| `keyword_weight` | `0.3` | hybrid ranking keyword weight |
Notes:
- Memory context injection ignores legacy `assistant_resp*` auto-save keys to prevent old model-authored summaries from being treated as facts.
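A `[memory]` sketch with hybrid ranking left at its default weights:

```toml
[memory]
backend = "sqlite"
auto_save = true                 # persists user-stated inputs only
embedding_provider = "openai"
embedding_model = "text-embedding-3-small"
embedding_dimensions = 1536
vector_weight = 0.7              # hybrid ranking: vector vs keyword
keyword_weight = 0.3
```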
## `[[model_routes]]` and `[[embedding_routes]]`
Use route hints so integrations can keep stable names while model IDs evolve.
### `[[model_routes]]`
| Key | Default | Purpose |
|---|---|---|
| `hint` | _required_ | Task hint name (e.g. `"reasoning"`, `"fast"`, `"code"`, `"summarize"`) |
| `provider` | _required_ | Provider to route to (must match a known provider name) |
| `model` | _required_ | Model to use with that provider |
| `api_key` | unset | Optional API key override for this route's provider |
### `[[embedding_routes]]`
| Key | Default | Purpose |
|---|---|---|
| `hint` | _required_ | Route hint name (e.g. `"semantic"`, `"archive"`, `"faq"`) |
| `provider` | _required_ | Embedding provider (`"none"`, `"openai"`, or `"custom:<url>"`) |
| `model` | _required_ | Embedding model to use with that provider |
| `dimensions` | unset | Optional embedding dimension override for this route |
| `api_key` | unset | Optional API key override for this route's provider |
```toml
[memory]
embedding_model = "hint:semantic"
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
model = "provider/model-id"
[[embedding_routes]]
hint = "semantic"
provider = "openai"
model = "text-embedding-3-small"
dimensions = 1536
```
Upgrade strategy:
1. Keep hints stable (`hint:reasoning`, `hint:semantic`).
2. Update only `model = "...new-version..."` in the route entries.
3. Validate with `zeroclaw doctor` before restart/rollout.
## `[query_classification]`
Automatic model hint routing — maps user messages to `[[model_routes]]` hints based on content patterns.
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable automatic query classification |
| `rules` | `[]` | Classification rules (evaluated in priority order) |
Each rule in `rules`:
| Key | Default | Purpose |
|---|---|---|
| `hint` | _required_ | Must match a `[[model_routes]]` hint value |
| `keywords` | `[]` | Case-insensitive substring matches |
| `patterns` | `[]` | Case-sensitive literal matches (for code fences, keywords like `"fn "`) |
| `min_length` | unset | Only match if message length ≥ N chars |
| `max_length` | unset | Only match if message length ≤ N chars |
| `priority` | `0` | Higher priority rules are checked first |
```toml
[query_classification]
enabled = true
[[query_classification.rules]]
hint = "reasoning"
keywords = ["explain", "analyze", "why"]
min_length = 200
priority = 10
[[query_classification.rules]]
hint = "fast"
keywords = ["hi", "hello", "thanks"]
max_length = 50
priority = 5
```
## `[channels_config]`
Top-level channel options are configured under `channels_config`.
| Key | Default | Purpose |
|---|---|---|
| `message_timeout_secs` | `300` | Base timeout in seconds for channel message processing; runtime scales this with tool-loop depth (up to 4x) |
Examples:
- `[channels_config.telegram]`
@ -379,107 +57,8 @@ Examples:
- `[channels_config.whatsapp]`
- `[channels_config.email]`
Notes:
- Default `300s` is optimized for on-device LLMs (Ollama) which are slower than cloud APIs.
- Runtime timeout budget is `message_timeout_secs * scale`, where `scale = min(max_tool_iterations, 4)`, clamped to a minimum of `1`.
- This scaling avoids false timeouts when the first LLM turn is slow/retried but later tool-loop turns still need to complete.
- If using cloud APIs (OpenAI, Anthropic, etc.), you can reduce this to `60` or lower.
- Values below `30` are clamped to `30` to avoid immediate timeout churn.
- When a timeout occurs, users receive: `⚠️ Request timed out while waiting for the model. Please try again.`
- Telegram-only interruption behavior is controlled with `channels_config.telegram.interrupt_on_new_message` (default `false`).
When enabled, a newer message from the same sender in the same chat cancels the in-flight request and preserves interrupted user context.
- While `zeroclaw channel start` is running, updates to `default_provider`, `default_model`, `default_temperature`, `api_key`, `api_url`, and `reliability.*` are hot-applied from `config.toml` on the next inbound message.
See detailed channel matrix and allowlist behavior in [channels-reference.md](channels-reference.md).
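The notes above combine into a sketch like this for cloud-API deployments (values are illustrative):

```toml
[channels_config]
message_timeout_secs = 60   # values below 30 are clamped to 30

[channels_config.telegram]
interrupt_on_new_message = true  # newer message from same sender cancels the in-flight request
```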
### `[channels_config.whatsapp]`
WhatsApp supports two backends under one config table.
Cloud API mode (Meta webhook):
| Key | Required | Purpose |
|---|---|---|
| `access_token` | Yes | Meta Cloud API bearer token |
| `phone_number_id` | Yes | Meta phone number ID |
| `verify_token` | Yes | Webhook verification token |
| `app_secret` | Optional | Enables webhook signature verification (`X-Hub-Signature-256`) |
| `allowed_numbers` | Recommended | Allowed inbound numbers (`[]` = deny all, `"*"` = allow all) |
WhatsApp Web mode (native client):
| Key | Required | Purpose |
|---|---|---|
| `session_path` | Yes | Persistent SQLite session path |
| `pair_phone` | Optional | Pair-code flow phone number (digits only) |
| `pair_code` | Optional | Custom pair code (otherwise auto-generated) |
| `allowed_numbers` | Recommended | Allowed inbound numbers (`[]` = deny all, `"*"` = allow all) |
Notes:
- WhatsApp Web requires build flag `whatsapp-web`.
- If both Cloud and Web fields are present, Cloud mode wins for backward compatibility.
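A Web-mode sketch, assuming the `whatsapp-web` build flag and omitting all Cloud keys (since Cloud mode wins when both sets are present; the path and number are placeholders):

```toml
[channels_config.whatsapp]
session_path = "whatsapp/session.db"   # persistent SQLite session
allowed_numbers = ["15551234567"]      # [] denies all, "*" allows all
```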
## `[hardware]`
Hardware wizard configuration for physical-world access (STM32, probe, serial).
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Whether hardware access is enabled |
| `transport` | `none` | Transport mode: `"none"`, `"native"`, `"serial"`, or `"probe"` |
| `serial_port` | unset | Serial port path (e.g. `"/dev/ttyACM0"`) |
| `baud_rate` | `115200` | Serial baud rate |
| `probe_target` | unset | Probe target chip (e.g. `"STM32F401RE"`) |
| `workspace_datasheets` | `false` | Enable workspace datasheet RAG (index PDF schematics for AI pin lookups) |
Notes:
- Use `transport = "serial"` with `serial_port` for USB-serial connections.
- Use `transport = "probe"` with `probe_target` for debug-probe flashing (e.g. ST-Link).
- See [hardware-peripherals-design.md](hardware-peripherals-design.md) for protocol details.
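A serial-transport sketch based on the table above (the port path is the usual Linux USB-serial device, but yours may differ):

```toml
[hardware]
enabled = true
transport = "serial"
serial_port = "/dev/ttyACM0"
baud_rate = 115200
workspace_datasheets = false
```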
## `[peripherals]`
Higher-level peripheral board configuration. Boards become agent tools when enabled.
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable peripheral support (boards become agent tools) |
| `boards` | `[]` | Board configurations |
| `datasheet_dir` | unset | Path to datasheet docs (relative to workspace) for RAG retrieval |
Each entry in `boards`:
| Key | Default | Purpose |
|---|---|---|
| `board` | _required_ | Board type: `"nucleo-f401re"`, `"rpi-gpio"`, `"esp32"`, etc. |
| `transport` | `serial` | Transport: `"serial"`, `"native"`, `"websocket"` |
| `path` | unset | Path for serial: `"/dev/ttyACM0"`, `"/dev/ttyUSB0"` |
| `baud` | `115200` | Baud rate for serial |
```toml
[peripherals]
enabled = true
datasheet_dir = "docs/datasheets"
[[peripherals.boards]]
board = "nucleo-f401re"
transport = "serial"
path = "/dev/ttyACM0"
baud = 115200
[[peripherals.boards]]
board = "rpi-gpio"
transport = "native"
```
Notes:
- Place `.md`/`.txt` datasheet files named by board (e.g. `nucleo-f401re.md`, `rpi-gpio.md`) in `datasheet_dir` for RAG retrieval.
- See [hardware-peripherals-design.md](hardware-peripherals-design.md) for board protocol and firmware notes.
## Security-Relevant Defaults
- deny-by-default channel allowlists (`[]` means deny all)
@ -494,7 +73,6 @@ After editing config:
zeroclaw status
zeroclaw doctor
zeroclaw channel doctor
zeroclaw service restart
```
## Related Docs


@ -26,7 +26,7 @@ pub fn run_wizard() -> Result<Config> {
security: SecurityConfig::autodetect(), // Silent!
};
config.save().await?;
config.save()?;
Ok(config)
}
```


@ -8,15 +8,6 @@ For first-time setup and quick orientation.
2. One-click setup and dual bootstrap mode: [../one-click-bootstrap.md](../one-click-bootstrap.md)
3. Find commands by tasks: [../commands-reference.md](../commands-reference.md)
## Choose Your Path
| Scenario | Command |
|----------|---------|
| I have an API key, want fastest setup | `zeroclaw onboard --api-key sk-... --provider openrouter` |
| I want guided prompts | `zeroclaw onboard --interactive` |
| Config exists, just fix channels | `zeroclaw onboard --channels-only` |
| Using subscription auth | See [Subscription Auth](../../README.md#subscription-auth-openai-codex--claude-code) |
## Onboarding and Validation
- Quick onboarding: `zeroclaw onboard --api-key "sk-..." --provider openrouter`


@ -2,8 +2,6 @@
For board integration, firmware flow, and peripheral architecture.
ZeroClaw's hardware subsystem enables direct control of microcontrollers and peripherals via the `Peripheral` trait. Each board exposes tools for GPIO, ADC, and sensor operations, allowing agent-driven hardware interaction on boards like STM32 Nucleo, Raspberry Pi, and ESP32. See [hardware-peripherals-design.md](../hardware-peripherals-design.md) for the full architecture.
## Entry Points
- Architecture and peripheral model: [../hardware-peripherals-design.md](../hardware-peripherals-design.md)


@ -2,13 +2,7 @@
This page defines the fastest supported path to install and initialize ZeroClaw.
Last verified: **February 20, 2026**.
## Option 0: Homebrew (macOS/Linuxbrew)
```bash
brew install zeroclaw
```
Last verified: **February 18, 2026**.
## Option A (Recommended): Clone + local script
@ -23,31 +17,6 @@ What it does by default:
1. `cargo build --release --locked`
2. `cargo install --path . --force --locked`
### Resource preflight and pre-built flow
Source builds typically require at least:
- **2 GB RAM + swap**
- **6 GB free disk**
When resources are constrained, bootstrap now attempts a pre-built binary first.
```bash
./bootstrap.sh --prefer-prebuilt
```
To require binary-only installation and fail if no compatible release asset exists:
```bash
./bootstrap.sh --prebuilt-only
```
To bypass pre-built flow and force source compilation:
```bash
./bootstrap.sh --force-source-build
```
## Dual-mode bootstrap
Default behavior is **app-only** (build/install ZeroClaw) and expects existing Rust toolchain.
@ -62,9 +31,6 @@ Notes:
- `--install-system-deps` installs compiler/build prerequisites (may require `sudo`).
- `--install-rust` installs Rust via `rustup` when missing.
- `--prefer-prebuilt` tries release binary download first, then falls back to source build.
- `--prebuilt-only` disables source fallback.
- `--force-source-build` disables pre-built flow entirely.
## Option B: Remote one-liner
@ -86,15 +52,6 @@ If you run Option B outside a repository checkout, the bootstrap script automati
## Optional onboarding modes
### Containerized onboarding (Docker)
```bash
./bootstrap.sh --docker
```
This builds a local ZeroClaw image and launches onboarding inside a container while
persisting config/workspace to `./.zeroclaw-docker`.
### Quick onboarding (non-interactive)
```bash


@ -8,10 +8,6 @@ Time-bound project status snapshots for planning documentation and operations wo
## Scope
Project snapshots are time-bound assessments of open PRs, issues, and documentation health. Use these to:
Use snapshots to understand changing PR/issue pressure and prioritize doc maintenance.
- Identify documentation gaps driven by feature work
- Prioritize docs maintenance alongside code changes
- Track evolving PR/issue pressure over time
For stable documentation classification (not time-bound), use [docs-inventory.md](../docs-inventory.md).
For stable classification of docs intent, use [../docs-inventory.md](../docs-inventory.md).


@ -2,7 +2,7 @@
This document maps provider IDs, aliases, and credential environment variables.
Last verified: **February 19, 2026**.
Last verified: **February 18, 2026**.
## How to List Providers
@ -18,10 +18,6 @@ Runtime resolution order is:
2. Provider-specific env var(s)
3. Generic fallback env vars: `ZEROCLAW_API_KEY` then `API_KEY`
For resilient fallback chains (`reliability.fallback_providers`), each fallback
provider resolves credentials independently. The primary provider's explicit
credential is not reused for fallback providers.
## Provider Catalog
| Canonical ID | Aliases | Local | Provider-specific env var(s) |
@ -41,9 +37,9 @@ credential is not reused for fallback providers.
| `zai` | `z.ai` | No | `ZAI_API_KEY` |
| `glm` | `zhipu` | No | `GLM_API_KEY` |
| `minimax` | `minimax-intl`, `minimax-io`, `minimax-global`, `minimax-cn`, `minimaxi`, `minimax-oauth`, `minimax-oauth-cn`, `minimax-portal`, `minimax-portal-cn` | No | `MINIMAX_OAUTH_TOKEN`, `MINIMAX_API_KEY` |
| `bedrock` | `aws-bedrock` | No | `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` (optional: `AWS_REGION`) |
| `bedrock` | `aws-bedrock` | No | (use config/`API_KEY` fallback) |
| `qianfan` | `baidu` | No | `QIANFAN_API_KEY` |
| `qwen` | `dashscope`, `qwen-intl`, `dashscope-intl`, `qwen-us`, `dashscope-us`, `qwen-code`, `qwen-oauth`, `qwen_oauth` | No | `QWEN_OAUTH_TOKEN`, `DASHSCOPE_API_KEY` |
| `qwen` | `dashscope`, `qwen-intl`, `dashscope-intl`, `qwen-us`, `dashscope-us` | No | `DASHSCOPE_API_KEY` |
| `groq` | — | No | `GROQ_API_KEY` |
| `mistral` | — | No | `MISTRAL_API_KEY` |
| `xai` | `grok` | No | `XAI_API_KEY` |
@ -56,46 +52,6 @@ credential is not reused for fallback providers.
| `lmstudio` | `lm-studio` | Yes | (optional; local by default) |
| `nvidia` | `nvidia-nim`, `build.nvidia.com` | No | `NVIDIA_API_KEY` |
### Gemini Notes
- Provider ID: `gemini` (aliases: `google`, `google-gemini`)
- Auth can come from `GEMINI_API_KEY`, `GOOGLE_API_KEY`, or Gemini CLI OAuth cache (`~/.gemini/oauth_creds.json`)
- API key requests use `generativelanguage.googleapis.com/v1beta`
- Gemini CLI OAuth requests use `cloudcode-pa.googleapis.com/v1internal` with Code Assist request envelope semantics
### Ollama Vision Notes
- Provider ID: `ollama`
- Vision input is supported through user message image markers: ``[IMAGE:<source>]``.
- After multimodal normalization, ZeroClaw sends image payloads through Ollama's native `messages[].images` field.
- If a non-vision provider is selected, ZeroClaw returns a structured capability error instead of silently ignoring images.
### Bedrock Notes
- Provider ID: `bedrock` (alias: `aws-bedrock`)
- API: [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html)
- Authentication: AWS AKSK (not a single API key). Set `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` environment variables.
- Optional: `AWS_SESSION_TOKEN` for temporary/STS credentials, `AWS_REGION` or `AWS_DEFAULT_REGION` (default: `us-east-1`).
- Default onboarding model: `anthropic.claude-sonnet-4-5-20250929-v1:0`
- Supports native tool calling and prompt caching (`cachePoint`).
- Cross-region inference profiles supported (e.g., `us.anthropic.claude-*`).
- Model IDs use Bedrock format: `anthropic.claude-sonnet-4-6`, `anthropic.claude-opus-4-6-v1`, etc.
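A hedged config sketch for these notes, using the onboarding default model; credentials are supplied via environment, not `config.toml`:

```toml
default_provider = "bedrock"
default_model = "anthropic.claude-sonnet-4-5-20250929-v1:0"
# Credentials come from AWS env vars:
#   AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
#   AWS_SESSION_TOKEN (optional, for STS), AWS_REGION (default us-east-1)
```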
### Ollama Reasoning Toggle
You can control Ollama reasoning/thinking behavior from `config.toml`:
```toml
[runtime]
reasoning_enabled = false
```
Behavior:
- `false`: sends `think: false` to Ollama `/api/chat` requests.
- `true`: sends `think: true`.
- Unset: omits `think` and keeps Ollama/model defaults.
### Kimi Code Notes
- Provider ID: `kimi-code`
@ -151,33 +107,6 @@ Optional:
- `MINIMAX_OAUTH_REGION=global` or `cn` (defaults by provider alias)
- `MINIMAX_OAUTH_CLIENT_ID` to override the default OAuth client id
Channel compatibility note:
- For MiniMax-backed channel conversations, runtime history is normalized to keep valid `user`/`assistant` turn order.
- Channel-specific delivery guidance (for example Telegram attachment markers) is merged into the leading system prompt instead of being appended as a trailing `system` turn.
## Qwen Code OAuth Setup (config.toml)
Set Qwen Code OAuth mode in config:
```toml
default_provider = "qwen-code"
api_key = "qwen-oauth"
```
Credential resolution for `qwen-code`:
1. Explicit `api_key` value (if not the placeholder `qwen-oauth`)
2. `QWEN_OAUTH_TOKEN`
3. `~/.qwen/oauth_creds.json` (reuses Qwen Code cached OAuth credentials)
4. Optional refresh via `QWEN_OAUTH_REFRESH_TOKEN` (or cached refresh token)
5. If no OAuth placeholder is used, `DASHSCOPE_API_KEY` can still be used as fallback
Optional endpoint override:
- `QWEN_OAUTH_RESOURCE_URL` (normalized to `https://.../v1` if needed)
- If unset, `resource_url` from cached OAuth credentials is used when available
## Model Routing (`hint:<name>`)
You can route model calls by hint using `[[model_routes]]`:
@ -199,56 +128,3 @@ Then call with a hint model name (for example from tool or integration paths):
```text
hint:reasoning
```
## Embedding Routing (`hint:<name>`)
You can route embedding calls with the same hint pattern using `[[embedding_routes]]`.
Set `[memory].embedding_model` to a `hint:<name>` value to activate routing.
```toml
[memory]
embedding_model = "hint:semantic"
[[embedding_routes]]
hint = "semantic"
provider = "openai"
model = "text-embedding-3-small"
dimensions = 1536
[[embedding_routes]]
hint = "archive"
provider = "custom:https://embed.example.com/v1"
model = "your-embedding-model-id"
dimensions = 1024
```
Supported embedding providers:
- `none`
- `openai`
- `custom:<url>` (OpenAI-compatible embeddings endpoint)
Optional per-route key override:
```toml
[[embedding_routes]]
hint = "semantic"
provider = "openai"
model = "text-embedding-3-small"
api_key = "sk-route-specific"
```
## Upgrading Models Safely
Use stable hints and update only route targets when providers deprecate model IDs.
Recommended workflow:
1. Keep call sites stable (`hint:reasoning`, `hint:semantic`).
2. Change only the target model under `[[model_routes]]` or `[[embedding_routes]]`.
3. Run:
- `zeroclaw doctor`
- `zeroclaw status`
4. Smoke test one representative flow (chat + memory retrieval) before rollout.
This minimizes breakage because integrations and prompts do not need to change when model IDs are upgraded.
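Concretely, an upgrade under this workflow might only touch one line of the route target (the model IDs here are placeholders, not real deprecation pairs):

```toml
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
# Only this line changes when the provider deprecates the old model ID;
# call sites keep using hint:reasoning unchanged.
model = "new-reasoning-model-id"
```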

View file

@ -2,7 +2,7 @@
This guide focuses on common setup/runtime failures and fast resolution paths.
Last verified: **February 20, 2026**.
Last verified: **February 18, 2026**.
## Installation / Bootstrap
@ -32,93 +32,6 @@ Fix:
./bootstrap.sh --install-system-deps
```
### Build fails on low-RAM / low-disk hosts
Symptoms:
- `cargo build --release` is killed (`signal: 9`, OOM killer, or `cannot allocate memory`)
- Build crashes after adding swap because disk space runs out
Why this happens:
- Runtime memory (<5MB for common operations) is not the same as compile-time memory.
- Full source build can require **2 GB RAM + swap** and **6+ GB free disk**.
- Enabling swap on a tiny disk can avoid RAM OOM but still fail due to disk exhaustion.
Preferred path for constrained machines:
```bash
./bootstrap.sh --prefer-prebuilt
```
Binary-only mode (no source fallback):
```bash
./bootstrap.sh --prebuilt-only
```
If you must compile from source on constrained hosts:
1. Add swap only if you also have enough free disk for both swap + build output.
2. Limit cargo parallelism:
```bash
CARGO_BUILD_JOBS=1 cargo build --release --locked
```
3. Reduce heavy features when Matrix is not required:
```bash
cargo build --release --locked --no-default-features --features hardware
```
4. Cross-compile on a stronger machine and copy the binary to the target host.
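A quick preflight before attempting a source build can be sketched like this. The thresholds mirror the bootstrap defaults (2048 MB RAM, 6144 MB free disk) but the check itself is illustrative; RAM detection via `/proc/meminfo` is Linux-only.

```shell
# Rough Linux source-build preflight: compare against the recommended
# minimums of 2048 MB RAM and 6144 MB free disk.
ram_mb=$(awk '/MemTotal:/ {printf "%d", $2/1024}' /proc/meminfo 2>/dev/null)
disk_mb=$(df -Pk . 2>/dev/null | awk 'NR==2 {printf "%d", $4/1024}')
echo "RAM: ${ram_mb:-unknown} MB, free disk: ${disk_mb:-unknown} MB"
```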
### Build is very slow or appears stuck
Symptoms:
- `cargo check` / `cargo build` appears stuck at `Checking zeroclaw` for a long time
- repeated `Blocking waiting for file lock on package cache` or `build directory`
Why this happens in ZeroClaw:
- Matrix E2EE stack (`matrix-sdk`, `ruma`, `vodozemac`) is large and expensive to type-check.
- TLS + crypto native build scripts (`aws-lc-sys`, `ring`) add noticeable compile time.
- `rusqlite` with bundled SQLite compiles C code locally.
- Running multiple cargo jobs/worktrees in parallel causes lock contention.
Fast checks:
```bash
cargo check --timings
cargo tree -d
```
The timing report is written to `target/cargo-timings/cargo-timing.html`.
Faster local iteration (when Matrix channel is not needed):
```bash
cargo check --no-default-features --features hardware
```
This skips `channel-matrix` and can significantly reduce compile time.
To build with Matrix support explicitly enabled:
```bash
cargo check --no-default-features --features hardware,channel-matrix
```
Lock-contention mitigation:
```bash
pgrep -af "cargo (check|build|test)"
```
Stop unrelated cargo jobs before running your own build.
### `zeroclaw` command not found after install
Symptom:

flake.lock generated
View file

@ -1,99 +0,0 @@
{
"nodes": {
"fenix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1771398736,
"narHash": "sha256-pjV3C7VJHN0o2SvE3O6xiwraLt7bnlWIF3o7Q0BC1jk=",
"owner": "nix-community",
"repo": "fenix",
"rev": "0f608091816de13d92e1f4058b501028b782dddd",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1771369470,
"narHash": "sha256-0NBlEBKkN3lufyvFegY4TYv5mCNHbi5OmBDrzihbBMQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "0182a361324364ae3f436a63005877674cf45efb",
"type": "github"
},
"original": {
"id": "nixpkgs",
"ref": "nixos-unstable",
"type": "indirect"
}
},
"root": {
"inputs": {
"fenix": "fenix",
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1771353660,
"narHash": "sha256-yp1y55kXgaa08g/gR3CNiUdkg1JRjPYfkKtEIRNE6S8=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "09f2d468eda25a5f06ae70046357c70ae5cd77c7",
"type": "github"
},
"original": {
"owner": "rust-lang",
"ref": "nightly",
"repo": "rust-analyzer",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

View file

@ -1,61 +0,0 @@
{
inputs = {
flake-utils.url = "github:numtide/flake-utils";
fenix = {
url = "github:nix-community/fenix";
inputs.nixpkgs.follows = "nixpkgs";
};
nixpkgs.url = "nixpkgs/nixos-unstable";
};
outputs = { flake-utils, fenix, nixpkgs, ... }:
let
nixosModule = { pkgs, ... }: {
nixpkgs.overlays = [ fenix.overlays.default ];
environment.systemPackages = [
(pkgs.fenix.stable.withComponents [
"cargo"
"clippy"
"rust-src"
"rustc"
"rustfmt"
])
pkgs.rust-analyzer
];
};
in
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = import nixpkgs {
inherit system;
overlays = [ fenix.overlays.default ];
};
rustToolchain = pkgs.fenix.stable.withComponents [
"cargo"
"clippy"
"rust-src"
"rustc"
"rustfmt"
];
in {
packages.default = fenix.packages.${system}.stable.toolchain;
devShells.default = pkgs.mkShell {
packages = [
rustToolchain
pkgs.rust-analyzer
];
};
}) // {
nixosConfigurations = {
nixos = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [ nixosModule ];
};
nixos-aarch64 = nixpkgs.lib.nixosSystem {
system = "aarch64-linux";
modules = [ nixosModule ];
};
};
};
}

View file

@ -24,21 +24,3 @@ name = "fuzz_tool_params"
path = "fuzz_targets/fuzz_tool_params.rs"
test = false
doc = false
[[bin]]
name = "fuzz_webhook_payload"
path = "fuzz_targets/fuzz_webhook_payload.rs"
test = false
doc = false
[[bin]]
name = "fuzz_provider_response"
path = "fuzz_targets/fuzz_provider_response.rs"
test = false
doc = false
[[bin]]
name = "fuzz_command_validation"
path = "fuzz_targets/fuzz_command_validation.rs"
test = false
doc = false

View file

@ -1,10 +0,0 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
use zeroclaw::security::SecurityPolicy;
fuzz_target!(|data: &[u8]| {
if let Ok(s) = std::str::from_utf8(data) {
let policy = SecurityPolicy::default();
let _ = policy.validate_command_execution(s, false);
}
});

View file

@ -1,9 +0,0 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
fuzz_target!(|data: &[u8]| {
if let Ok(s) = std::str::from_utf8(data) {
// Fuzz provider API response deserialization
let _ = serde_json::from_str::<serde_json::Value>(s);
}
});

View file

@ -1,9 +0,0 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
fuzz_target!(|data: &[u8]| {
if let Ok(s) = std::str::from_utf8(data) {
// Fuzz webhook body deserialization
let _ = serde_json::from_str::<serde_json::Value>(s);
}
});

View file

@ -15,61 +15,38 @@ error() {
usage() {
cat <<'USAGE'
ZeroClaw installer bootstrap engine
ZeroClaw one-click bootstrap
Usage:
./zeroclaw_install.sh [options]
./bootstrap.sh [options] # compatibility entrypoint
./bootstrap.sh [options]
Modes:
Default mode installs/builds ZeroClaw only (requires existing Rust toolchain).
Guided mode asks setup questions and configures options interactively.
Optional bootstrap mode can also install system dependencies and Rust.
Options:
--guided Run interactive guided installer
--no-guided Disable guided installer
--docker Run bootstrap in Docker and launch onboarding inside the container
--install-system-deps Install build dependencies (Linux/macOS)
--install-rust Install Rust via rustup if missing
--prefer-prebuilt Try latest release binary first; fallback to source build on miss
--prebuilt-only Install only from latest release binary (no source build fallback)
--force-source-build Disable prebuilt flow and always build from source
--onboard Run onboarding after install
--interactive-onboard Run interactive onboarding (implies --onboard)
--api-key <key> API key for non-interactive onboarding
--provider <id> Provider for non-interactive onboarding (default: openrouter)
--model <id> Model for non-interactive onboarding (optional)
--build-first Alias for explicitly enabling separate `cargo build --release --locked`
--skip-build Skip `cargo build --release --locked`
--skip-install Skip `cargo install --path . --force --locked`
-h, --help Show help
Examples:
./zeroclaw_install.sh
./zeroclaw_install.sh --guided
./zeroclaw_install.sh --install-system-deps --install-rust
./zeroclaw_install.sh --prefer-prebuilt
./zeroclaw_install.sh --prebuilt-only
./zeroclaw_install.sh --onboard --api-key "sk-..." --provider openrouter [--model "openrouter/auto"]
./zeroclaw_install.sh --interactive-onboard
# Compatibility entrypoint:
./bootstrap.sh --docker
./bootstrap.sh
./bootstrap.sh --install-system-deps --install-rust
./bootstrap.sh --onboard --api-key "sk-..." --provider openrouter
./bootstrap.sh --interactive-onboard
# Remote one-liner
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts/bootstrap.sh | bash
Environment:
ZEROCLAW_DOCKER_DATA_DIR Host path for Docker config/workspace persistence
ZEROCLAW_DOCKER_IMAGE Docker image tag to build/run (default: zeroclaw-bootstrap:local)
ZEROCLAW_API_KEY Used when --api-key is not provided
ZEROCLAW_PROVIDER Used when --provider is not provided (default: openrouter)
ZEROCLAW_MODEL Used when --model is not provided
ZEROCLAW_BOOTSTRAP_MIN_RAM_MB Minimum RAM threshold for source build preflight (default: 2048)
ZEROCLAW_BOOTSTRAP_MIN_DISK_MB Minimum free disk threshold for source build preflight (default: 6144)
ZEROCLAW_DISABLE_ALPINE_AUTO_DEPS
Set to 1 to disable Alpine auto-install of missing prerequisites
USAGE
}
@ -77,155 +54,6 @@ have_cmd() {
command -v "$1" >/dev/null 2>&1
}
get_total_memory_mb() {
case "$(uname -s)" in
Linux)
if [[ -r /proc/meminfo ]]; then
awk '/MemTotal:/ {printf "%d\n", $2 / 1024}' /proc/meminfo
fi
;;
Darwin)
if have_cmd sysctl; then
local bytes
bytes="$(sysctl -n hw.memsize 2>/dev/null || true)"
if [[ "$bytes" =~ ^[0-9]+$ ]]; then
echo $((bytes / 1024 / 1024))
fi
fi
;;
esac
}
get_available_disk_mb() {
local path="${1:-.}"
local free_kb
free_kb="$(df -Pk "$path" 2>/dev/null | awk 'NR==2 {print $4}')"
if [[ "$free_kb" =~ ^[0-9]+$ ]]; then
echo $((free_kb / 1024))
fi
}
detect_release_target() {
local os arch
os="$(uname -s)"
arch="$(uname -m)"
case "$os:$arch" in
Linux:x86_64)
echo "x86_64-unknown-linux-gnu"
;;
Linux:aarch64|Linux:arm64)
echo "aarch64-unknown-linux-gnu"
;;
Linux:armv7l|Linux:armv6l)
echo "armv7-unknown-linux-gnueabihf"
;;
Darwin:x86_64)
echo "x86_64-apple-darwin"
;;
Darwin:arm64|Darwin:aarch64)
echo "aarch64-apple-darwin"
;;
*)
return 1
;;
esac
}
should_attempt_prebuilt_for_resources() {
local workspace="${1:-.}"
local min_ram_mb min_disk_mb total_ram_mb free_disk_mb low_resource
min_ram_mb="${ZEROCLAW_BOOTSTRAP_MIN_RAM_MB:-2048}"
min_disk_mb="${ZEROCLAW_BOOTSTRAP_MIN_DISK_MB:-6144}"
total_ram_mb="$(get_total_memory_mb || true)"
free_disk_mb="$(get_available_disk_mb "$workspace" || true)"
low_resource=false
if [[ "$total_ram_mb" =~ ^[0-9]+$ && "$total_ram_mb" -lt "$min_ram_mb" ]]; then
low_resource=true
fi
if [[ "$free_disk_mb" =~ ^[0-9]+$ && "$free_disk_mb" -lt "$min_disk_mb" ]]; then
low_resource=true
fi
if [[ "$low_resource" == true ]]; then
warn "Source build preflight indicates constrained resources."
if [[ "$total_ram_mb" =~ ^[0-9]+$ ]]; then
warn "Detected RAM: ${total_ram_mb}MB (recommended >= ${min_ram_mb}MB for local source builds)."
else
warn "Unable to detect total RAM automatically."
fi
if [[ "$free_disk_mb" =~ ^[0-9]+$ ]]; then
warn "Detected free disk: ${free_disk_mb}MB (recommended >= ${min_disk_mb}MB)."
else
warn "Unable to detect free disk space automatically."
fi
return 0
fi
return 1
}
install_prebuilt_binary() {
local target archive_url temp_dir archive_path extracted_bin install_dir
if ! have_cmd curl; then
warn "curl is required for pre-built binary installation."
return 1
fi
if ! have_cmd tar; then
warn "tar is required for pre-built binary installation."
return 1
fi
target="$(detect_release_target || true)"
if [[ -z "$target" ]]; then
warn "No pre-built binary target mapping for $(uname -s)/$(uname -m)."
return 1
fi
archive_url="https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-${target}.tar.gz"
temp_dir="$(mktemp -d -t zeroclaw-prebuilt-XXXXXX)"
archive_path="$temp_dir/zeroclaw-${target}.tar.gz"
info "Attempting pre-built binary install for target: $target"
if ! curl -fsSL "$archive_url" -o "$archive_path"; then
warn "Could not download release asset: $archive_url"
rm -rf "$temp_dir"
return 1
fi
if ! tar -xzf "$archive_path" -C "$temp_dir"; then
warn "Failed to extract pre-built archive."
rm -rf "$temp_dir"
return 1
fi
extracted_bin="$temp_dir/zeroclaw"
if [[ ! -x "$extracted_bin" ]]; then
extracted_bin="$(find "$temp_dir" -maxdepth 2 -type f -name zeroclaw -perm -u+x | head -n 1 || true)"
fi
if [[ -z "$extracted_bin" || ! -x "$extracted_bin" ]]; then
warn "Archive did not contain an executable zeroclaw binary."
rm -rf "$temp_dir"
return 1
fi
install_dir="$HOME/.cargo/bin"
mkdir -p "$install_dir"
install -m 0755 "$extracted_bin" "$install_dir/zeroclaw"
rm -rf "$temp_dir"
info "Installed pre-built binary to $install_dir/zeroclaw"
if [[ ":$PATH:" != *":$install_dir:"* ]]; then
warn "$install_dir is not in PATH for this shell."
warn "Run: export PATH=\"$install_dir:\$PATH\""
fi
return 0
}
run_privileged() {
if [[ "$(id -u)" -eq 0 ]]; then
"$@"
@ -237,152 +65,19 @@ run_privileged() {
fi
}
is_container_runtime() {
if [[ -f /.dockerenv || -f /run/.containerenv ]]; then
return 0
fi
if [[ -r /proc/1/cgroup ]] && grep -Eq '(docker|containerd|kubepods|podman|lxc)' /proc/1/cgroup; then
return 0
fi
return 1
}
run_pacman() {
if ! have_cmd pacman; then
error "pacman is not available."
return 1
fi
if ! is_container_runtime; then
run_privileged pacman "$@"
return $?
fi
local pacman_cfg_tmp=""
local pacman_rc=0
pacman_cfg_tmp="$(mktemp /tmp/zeroclaw-pacman.XXXXXX.conf)"
cp /etc/pacman.conf "$pacman_cfg_tmp"
if ! grep -Eq '^[[:space:]]*DisableSandboxSyscalls([[:space:]]|$)' "$pacman_cfg_tmp"; then
printf '\nDisableSandboxSyscalls\n' >> "$pacman_cfg_tmp"
fi
if run_privileged pacman --config "$pacman_cfg_tmp" "$@"; then
pacman_rc=0
else
pacman_rc=$?
fi
rm -f "$pacman_cfg_tmp"
return "$pacman_rc"
}
ALPINE_PREREQ_PACKAGES=(
bash
build-base
pkgconf
git
curl
openssl-dev
perl
ca-certificates
)
ALPINE_MISSING_PKGS=()
find_missing_alpine_prereqs() {
ALPINE_MISSING_PKGS=()
if ! have_cmd apk; then
return 0
fi
local pkg=""
for pkg in "${ALPINE_PREREQ_PACKAGES[@]}"; do
if ! apk info -e "$pkg" >/dev/null 2>&1; then
ALPINE_MISSING_PKGS+=("$pkg")
fi
done
}
bool_to_word() {
if [[ "$1" == true ]]; then
echo "yes"
else
echo "no"
fi
}
prompt_yes_no() {
local question="$1"
local default_answer="$2"
local prompt=""
local answer=""
if [[ "$default_answer" == "yes" ]]; then
prompt="[Y/n]"
else
prompt="[y/N]"
fi
while true; do
if ! read -r -p "$question $prompt " answer; then
error "guided installer input was interrupted."
exit 1
fi
answer="${answer:-$default_answer}"
case "$(printf '%s' "$answer" | tr '[:upper:]' '[:lower:]')" in
y|yes)
return 0
;;
n|no)
return 1
;;
*)
echo "Please answer yes or no."
;;
esac
done
}
install_system_deps() {
info "Installing system dependencies"
case "$(uname -s)" in
Linux)
if have_cmd apk; then
find_missing_alpine_prereqs
if [[ ${#ALPINE_MISSING_PKGS[@]} -eq 0 ]]; then
info "Alpine prerequisites already installed"
else
info "Installing Alpine prerequisites: ${ALPINE_MISSING_PKGS[*]}"
run_privileged apk add --no-cache "${ALPINE_MISSING_PKGS[@]}"
fi
elif have_cmd apt-get; then
if have_cmd apt-get; then
run_privileged apt-get update -qq
run_privileged apt-get install -y build-essential pkg-config git curl
elif have_cmd dnf; then
run_privileged dnf install -y \
gcc \
gcc-c++ \
make \
pkgconf-pkg-config \
git \
curl \
openssl-devel \
perl
elif have_cmd pacman; then
run_pacman -Sy --noconfirm
run_pacman -S --noconfirm --needed \
gcc \
make \
pkgconf \
git \
curl \
openssl \
perl \
ca-certificates
run_privileged dnf group install -y development-tools
run_privileged dnf install -y pkg-config git curl
else
warn "Unsupported Linux distribution. Install compiler toolchain + pkg-config + git + curl + OpenSSL headers + perl manually."
warn "Unsupported Linux distribution. Install compiler toolchain + pkg-config + git + curl manually."
fi
;;
Darwin)
@ -431,236 +126,22 @@ install_rust_toolchain() {
fi
}
run_guided_installer() {
local os_name="$1"
local provider_input=""
local model_input=""
local api_key_input=""
echo
echo "ZeroClaw guided installer"
echo "Answer a few questions, then the installer will run automatically."
echo
if [[ "$os_name" == "Linux" ]]; then
if prompt_yes_no "Install Linux build dependencies (toolchain/pkg-config/git/curl)?" "yes"; then
INSTALL_SYSTEM_DEPS=true
fi
else
if prompt_yes_no "Install system dependencies for $os_name?" "no"; then
INSTALL_SYSTEM_DEPS=true
fi
fi
if have_cmd cargo && have_cmd rustc; then
info "Detected Rust toolchain: $(rustc --version)"
else
if prompt_yes_no "Rust toolchain not found. Install Rust via rustup now?" "yes"; then
INSTALL_RUST=true
fi
fi
if prompt_yes_no "Run a separate prebuild before install?" "yes"; then
SKIP_BUILD=false
else
SKIP_BUILD=true
fi
if prompt_yes_no "Install zeroclaw into cargo bin now?" "yes"; then
SKIP_INSTALL=false
else
SKIP_INSTALL=true
fi
if prompt_yes_no "Run onboarding after install?" "no"; then
RUN_ONBOARD=true
if prompt_yes_no "Use interactive onboarding?" "yes"; then
INTERACTIVE_ONBOARD=true
else
INTERACTIVE_ONBOARD=false
if ! read -r -p "Provider [$PROVIDER]: " provider_input; then
error "guided installer input was interrupted."
exit 1
fi
if [[ -n "$provider_input" ]]; then
PROVIDER="$provider_input"
fi
if ! read -r -p "Model [${MODEL:-leave empty}]: " model_input; then
error "guided installer input was interrupted."
exit 1
fi
if [[ -n "$model_input" ]]; then
MODEL="$model_input"
fi
if [[ -z "$API_KEY" ]]; then
if ! read -r -s -p "API key (hidden, leave empty to switch to interactive onboarding): " api_key_input; then
echo
error "guided installer input was interrupted."
exit 1
fi
echo
if [[ -n "$api_key_input" ]]; then
API_KEY="$api_key_input"
else
warn "No API key entered. Using interactive onboarding instead."
INTERACTIVE_ONBOARD=true
fi
fi
fi
fi
echo
info "Installer plan"
local install_binary=true
local build_first=false
if [[ "$SKIP_INSTALL" == true ]]; then
install_binary=false
fi
if [[ "$SKIP_BUILD" == false ]]; then
build_first=true
fi
echo " docker-mode: $(bool_to_word "$DOCKER_MODE")"
echo " install-system-deps: $(bool_to_word "$INSTALL_SYSTEM_DEPS")"
echo " install-rust: $(bool_to_word "$INSTALL_RUST")"
echo " build-first: $(bool_to_word "$build_first")"
echo " install-binary: $(bool_to_word "$install_binary")"
echo " onboard: $(bool_to_word "$RUN_ONBOARD")"
if [[ "$RUN_ONBOARD" == true ]]; then
echo " interactive-onboard: $(bool_to_word "$INTERACTIVE_ONBOARD")"
if [[ "$INTERACTIVE_ONBOARD" == false ]]; then
echo " provider: $PROVIDER"
if [[ -n "$MODEL" ]]; then
echo " model: $MODEL"
fi
fi
fi
echo
if ! prompt_yes_no "Proceed with this install plan?" "yes"; then
info "Installation canceled by user."
exit 0
fi
}
ensure_docker_ready() {
if ! have_cmd docker; then
error "docker is not installed."
cat <<'MSG' >&2
Install Docker first, then re-run with:
./zeroclaw_install.sh --docker
MSG
exit 1
fi
if ! docker info >/dev/null 2>&1; then
error "Docker daemon is not reachable."
error "Start Docker and re-run bootstrap."
exit 1
fi
}
run_docker_bootstrap() {
local docker_image docker_data_dir default_data_dir
docker_image="${ZEROCLAW_DOCKER_IMAGE:-zeroclaw-bootstrap:local}"
if [[ "$TEMP_CLONE" == true ]]; then
default_data_dir="$HOME/.zeroclaw-docker"
else
default_data_dir="$WORK_DIR/.zeroclaw-docker"
fi
docker_data_dir="${ZEROCLAW_DOCKER_DATA_DIR:-$default_data_dir}"
DOCKER_DATA_DIR="$docker_data_dir"
mkdir -p "$docker_data_dir/.zeroclaw" "$docker_data_dir/workspace"
if [[ "$SKIP_INSTALL" == true ]]; then
warn "--skip-install has no effect with --docker."
fi
if [[ "$SKIP_BUILD" == false ]]; then
info "Building Docker image ($docker_image)"
docker build --target release -t "$docker_image" "$WORK_DIR"
else
info "Skipping Docker image build"
fi
info "Docker data directory: $docker_data_dir"
local onboard_cmd=()
if [[ "$INTERACTIVE_ONBOARD" == true ]]; then
info "Launching interactive onboarding in container"
onboard_cmd=(onboard --interactive)
else
if [[ -z "$API_KEY" ]]; then
cat <<'MSG'
==> Onboarding requested, but API key not provided.
Use either:
--api-key "sk-..."
or:
ZEROCLAW_API_KEY="sk-..." ./zeroclaw_install.sh --docker
or run interactive:
./zeroclaw_install.sh --docker --interactive-onboard
MSG
exit 1
fi
if [[ -n "$MODEL" ]]; then
info "Launching quick onboarding in container (provider: $PROVIDER, model: $MODEL)"
else
info "Launching quick onboarding in container (provider: $PROVIDER)"
fi
onboard_cmd=(onboard --api-key "$API_KEY" --provider "$PROVIDER")
if [[ -n "$MODEL" ]]; then
onboard_cmd+=(--model "$MODEL")
fi
fi
docker run --rm -it \
--user "$(id -u):$(id -g)" \
-e HOME=/zeroclaw-data \
-e ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace \
-v "$docker_data_dir/.zeroclaw:/zeroclaw-data/.zeroclaw" \
-v "$docker_data_dir/workspace:/zeroclaw-data/workspace" \
"$docker_image" \
"${onboard_cmd[@]}"
}
SCRIPT_PATH="${BASH_SOURCE[0]:-$0}"
SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_PATH")" >/dev/null 2>&1 && pwd || pwd)"
ROOT_DIR="$(cd "$SCRIPT_DIR/.." >/dev/null 2>&1 && pwd || pwd)"
REPO_URL="https://github.com/zeroclaw-labs/zeroclaw.git"
ORIGINAL_ARG_COUNT=$#
GUIDED_MODE="auto"
DOCKER_MODE=false
INSTALL_SYSTEM_DEPS=false
INSTALL_RUST=false
PREFER_PREBUILT=false
PREBUILT_ONLY=false
FORCE_SOURCE_BUILD=false
RUN_ONBOARD=false
INTERACTIVE_ONBOARD=false
SKIP_BUILD=false
SKIP_INSTALL=false
PREBUILT_INSTALLED=false
API_KEY="${ZEROCLAW_API_KEY:-}"
PROVIDER="${ZEROCLAW_PROVIDER:-openrouter}"
MODEL="${ZEROCLAW_MODEL:-}"
while [[ $# -gt 0 ]]; do
case "$1" in
--guided)
GUIDED_MODE="on"
shift
;;
--no-guided)
GUIDED_MODE="off"
shift
;;
--docker)
DOCKER_MODE=true
shift
;;
--install-system-deps)
INSTALL_SYSTEM_DEPS=true
shift
@ -669,18 +150,6 @@ while [[ $# -gt 0 ]]; do
INSTALL_RUST=true
shift
;;
--prefer-prebuilt)
PREFER_PREBUILT=true
shift
;;
--prebuilt-only)
PREBUILT_ONLY=true
shift
;;
--force-source-build)
FORCE_SOURCE_BUILD=true
shift
;;
--onboard)
RUN_ONBOARD=true
shift
@ -706,18 +175,6 @@ while [[ $# -gt 0 ]]; do
}
shift 2
;;
--model)
MODEL="${2:-}"
[[ -n "$MODEL" ]] || {
error "--model requires a value"
exit 1
}
shift 2
;;
--build-first)
SKIP_BUILD=false
shift
;;
--skip-build)
SKIP_BUILD=true
shift
@ -739,41 +196,6 @@ while [[ $# -gt 0 ]]; do
esac
done
OS_NAME="$(uname -s)"
if [[ "$GUIDED_MODE" == "auto" ]]; then
if [[ "$OS_NAME" == "Linux" && "$ORIGINAL_ARG_COUNT" -eq 0 && -t 0 && -t 1 ]]; then
GUIDED_MODE="on"
else
GUIDED_MODE="off"
fi
fi
if [[ "$DOCKER_MODE" == true && "$GUIDED_MODE" == "on" ]]; then
warn "--guided is ignored with --docker."
GUIDED_MODE="off"
fi
if [[ "$GUIDED_MODE" == "on" ]]; then
run_guided_installer "$OS_NAME"
fi
if [[ "$DOCKER_MODE" == true ]]; then
if [[ "$INSTALL_SYSTEM_DEPS" == true ]]; then
warn "--install-system-deps is ignored with --docker."
fi
if [[ "$INSTALL_RUST" == true ]]; then
warn "--install-rust is ignored with --docker."
fi
else
if [[ "$OS_NAME" == "Linux" && -z "${ZEROCLAW_DISABLE_ALPINE_AUTO_DEPS:-}" ]] && have_cmd apk; then
find_missing_alpine_prereqs
if [[ ${#ALPINE_MISSING_PKGS[@]} -gt 0 && "$INSTALL_SYSTEM_DEPS" == false ]]; then
info "Detected Alpine with missing prerequisites: ${ALPINE_MISSING_PKGS[*]}"
info "Auto-enabling system dependency installation (set ZEROCLAW_DISABLE_ALPINE_AUTO_DEPS=1 to disable)."
INSTALL_SYSTEM_DEPS=true
fi
fi
if [[ "$INSTALL_SYSTEM_DEPS" == true ]]; then
install_system_deps
fi
@ -781,6 +203,15 @@ else
if [[ "$INSTALL_RUST" == true ]]; then
install_rust_toolchain
fi
if ! have_cmd cargo; then
error "cargo is not installed."
cat <<'MSG' >&2
Install Rust first: https://rustup.rs/
or re-run with:
./bootstrap.sh --install-rust
MSG
exit 1
fi
WORK_DIR="$ROOT_DIR"
@ -823,73 +254,6 @@ echo " workspace: $WORK_DIR"
cd "$WORK_DIR"
if [[ "$FORCE_SOURCE_BUILD" == true ]]; then
PREFER_PREBUILT=false
PREBUILT_ONLY=false
fi
if [[ "$PREBUILT_ONLY" == true ]]; then
PREFER_PREBUILT=true
fi
if [[ "$DOCKER_MODE" == true ]]; then
ensure_docker_ready
if [[ "$RUN_ONBOARD" == false ]]; then
RUN_ONBOARD=true
if [[ -z "$API_KEY" ]]; then
INTERACTIVE_ONBOARD=true
fi
fi
run_docker_bootstrap
cat <<'DONE'
✅ Docker bootstrap complete.
Your containerized ZeroClaw data is persisted under:
DONE
echo " $DOCKER_DATA_DIR"
cat <<'DONE'
Next steps:
./zeroclaw_install.sh --docker --interactive-onboard
./zeroclaw_install.sh --docker --api-key "sk-..." --provider openrouter
DONE
exit 0
fi
if [[ "$FORCE_SOURCE_BUILD" == false ]]; then
if [[ "$PREFER_PREBUILT" == false && "$PREBUILT_ONLY" == false ]]; then
if should_attempt_prebuilt_for_resources "$WORK_DIR"; then
info "Attempting pre-built binary first due to resource preflight."
PREFER_PREBUILT=true
fi
fi
if [[ "$PREFER_PREBUILT" == true ]]; then
if install_prebuilt_binary; then
PREBUILT_INSTALLED=true
SKIP_BUILD=true
SKIP_INSTALL=true
elif [[ "$PREBUILT_ONLY" == true ]]; then
error "Pre-built-only mode requested, but no compatible release asset is available."
error "Try again later, or run with --force-source-build on a machine with enough RAM/disk."
exit 1
else
warn "Pre-built install unavailable; falling back to source build."
fi
fi
fi
if [[ "$PREBUILT_INSTALLED" == false && ( "$SKIP_BUILD" == false || "$SKIP_INSTALL" == false ) ]] && ! have_cmd cargo; then
error "cargo is not installed."
cat <<'MSG' >&2
Install Rust first: https://rustup.rs/
or re-run with:
./zeroclaw_install.sh --install-rust
MSG
exit 1
fi
if [[ "$SKIP_BUILD" == false ]]; then
info "Building release binary"
cargo build --release --locked
@ -907,8 +271,6 @@ fi
ZEROCLAW_BIN=""
if have_cmd zeroclaw; then
ZEROCLAW_BIN="zeroclaw"
elif [[ -x "$HOME/.cargo/bin/zeroclaw" ]]; then
ZEROCLAW_BIN="$HOME/.cargo/bin/zeroclaw"
elif [[ -x "$WORK_DIR/target/release/zeroclaw" ]]; then
ZEROCLAW_BIN="$WORK_DIR/target/release/zeroclaw"
fi
@ -930,22 +292,14 @@ if [[ "$RUN_ONBOARD" == true ]]; then
Use either:
--api-key "sk-..."
or:
ZEROCLAW_API_KEY="sk-..." ./zeroclaw_install.sh --onboard
ZEROCLAW_API_KEY="sk-..." ./bootstrap.sh --onboard
or run interactive:
./zeroclaw_install.sh --interactive-onboard
./bootstrap.sh --interactive-onboard
MSG
exit 1
fi
if [[ -n "$MODEL" ]]; then
info "Running quick onboarding (provider: $PROVIDER, model: $MODEL)"
else
info "Running quick onboarding (provider: $PROVIDER)"
fi
ONBOARD_CMD=("$ZEROCLAW_BIN" onboard --api-key "$API_KEY" --provider "$PROVIDER")
if [[ -n "$MODEL" ]]; then
ONBOARD_CMD+=(--model "$MODEL")
fi
"${ONBOARD_CMD[@]}"
"$ZEROCLAW_BIN" onboard --api-key "$API_KEY" --provider "$PROVIDER"
fi
fi

View file

@ -1,209 +0,0 @@
#!/usr/bin/env python3
"""Fetch GitHub Actions workflow runs for a given date and summarize costs.
Usage:
python fetch_actions_data.py [OPTIONS]
Options:
--date YYYY-MM-DD Date to query (default: yesterday)
--mode brief|full Output mode (default: full)
brief: billable minutes/hours table only
full: detailed breakdown with per-run list
--repo OWNER/NAME Repository (default: zeroclaw-labs/zeroclaw)
-h, --help Show this help message
"""
import argparse
import json
import subprocess
from datetime import datetime, timedelta, timezone
def parse_args():
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(
description="Fetch GitHub Actions workflow runs and summarize costs.",
)
yesterday = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%d")
parser.add_argument(
"--date",
default=yesterday,
help="Date to query in YYYY-MM-DD format (default: yesterday)",
)
parser.add_argument(
"--mode",
choices=["brief", "full"],
default="full",
help="Output mode: 'brief' for billable hours only, 'full' for detailed breakdown (default: full)",
)
parser.add_argument(
"--repo",
default="zeroclaw-labs/zeroclaw",
help="Repository in OWNER/NAME format (default: zeroclaw-labs/zeroclaw)",
)
return parser.parse_args()
def fetch_runs(repo, date_str, page=1, per_page=100):
"""Fetch completed workflow runs for a given date."""
url = (
f"https://api.github.com/repos/{repo}/actions/runs"
f"?created={date_str}&per_page={per_page}&page={page}"
)
result = subprocess.run(
["curl", "-sS", "-H", "Accept: application/vnd.github+json", url],
capture_output=True, text=True
)
return json.loads(result.stdout)
def fetch_jobs(repo, run_id):
"""Fetch jobs for a specific run."""
url = f"https://api.github.com/repos/{repo}/actions/runs/{run_id}/jobs?per_page=100"
result = subprocess.run(
["curl", "-sS", "-H", "Accept: application/vnd.github+json", url],
capture_output=True, text=True
)
return json.loads(result.stdout)
def parse_duration(started, completed):
"""Return duration in seconds between two ISO timestamps."""
if not started or not completed:
return 0
try:
s = datetime.fromisoformat(started.replace("Z", "+00:00"))
c = datetime.fromisoformat(completed.replace("Z", "+00:00"))
return max(0, (c - s).total_seconds())
except Exception:
return 0
def main():
args = parse_args()
repo = args.repo
date_str = args.date
brief = args.mode == "brief"
print(f"Fetching workflow runs for {repo} on {date_str}...")
print("=" * 100)
all_runs = []
    for page in range(1, 5):  # up to 400 runs
        data = fetch_runs(repo, date_str, page=page)
        runs = data.get("workflow_runs", [])
        if not runs:
            break
        all_runs.extend(runs)
        if len(runs) < 100:
            break

    print(f"Total workflow runs found: {len(all_runs)}")
    print()

    # Group by workflow name
    workflow_stats = {}
    for run in all_runs:
        name = run.get("name", "Unknown")
        event = run.get("event", "unknown")
        conclusion = run.get("conclusion", "unknown")
        run_id = run.get("id")
        if name not in workflow_stats:
            workflow_stats[name] = {
                "count": 0,
                "events": {},
                "conclusions": {},
                "total_job_seconds": 0,
                "total_jobs": 0,
                "run_ids": [],
            }
        workflow_stats[name]["count"] += 1
        workflow_stats[name]["events"][event] = workflow_stats[name]["events"].get(event, 0) + 1
        workflow_stats[name]["conclusions"][conclusion] = workflow_stats[name]["conclusions"].get(conclusion, 0) + 1
        workflow_stats[name]["run_ids"].append(run_id)

    # For each workflow, sample up to 3 runs to get job-level timing
    print("Sampling job-level timing (up to 3 runs per workflow)...")
    print()
    for name, stats in workflow_stats.items():
        sample_ids = stats["run_ids"][:3]
        for run_id in sample_ids:
            jobs_data = fetch_jobs(repo, run_id)
            jobs = jobs_data.get("jobs", [])
            for job in jobs:
                started = job.get("started_at")
                completed = job.get("completed_at")
                duration = parse_duration(started, completed)
                stats["total_job_seconds"] += duration
                stats["total_jobs"] += 1
        # Extrapolate: if we sampled N runs but there are M total, scale up
        sampled = len(sample_ids)
        total = stats["count"]
        if sampled > 0 and sampled < total:
            scale = total / sampled
            stats["estimated_total_seconds"] = stats["total_job_seconds"] * scale
        else:
            stats["estimated_total_seconds"] = stats["total_job_seconds"]

    # Print summary sorted by estimated cost (descending)
    sorted_workflows = sorted(
        workflow_stats.items(),
        key=lambda x: x[1]["estimated_total_seconds"],
        reverse=True,
    )

    if brief:
        # Brief mode: compact billable hours table
        print(f"{'Workflow':<40} {'Runs':>5} {'Est.Mins':>9} {'Est.Hours':>10}")
        print("-" * 68)
        grand_total_minutes = 0
        for name, stats in sorted_workflows:
            est_mins = stats["estimated_total_seconds"] / 60
            grand_total_minutes += est_mins
            print(f"{name:<40} {stats['count']:>5} {est_mins:>9.1f} {est_mins/60:>10.2f}")
        print("-" * 68)
        print(f"{'TOTAL':<40} {len(all_runs):>5} {grand_total_minutes:>9.0f} {grand_total_minutes/60:>10.1f}")
        print(f"\nProjected monthly: ~{grand_total_minutes/60*30:.0f} hours")
    else:
        # Full mode: detailed breakdown with per-run list
        print("=" * 100)
        print(f"{'Workflow':<40} {'Runs':>5} {'SampledJobs':>12} {'SampledMins':>12} {'Est.TotalMins':>14} {'Events'}")
        print("-" * 100)
        grand_total_minutes = 0
        for name, stats in sorted_workflows:
            sampled_mins = stats["total_job_seconds"] / 60
            est_total_mins = stats["estimated_total_seconds"] / 60
            grand_total_minutes += est_total_mins
            events_str = ", ".join(f"{k}={v}" for k, v in stats["events"].items())
            conclusions_str = ", ".join(f"{k}={v}" for k, v in stats["conclusions"].items())
            print(
                f"{name:<40} {stats['count']:>5} {stats['total_jobs']:>12} "
                f"{sampled_mins:>12.1f} {est_total_mins:>14.1f} {events_str}"
            )
            print(f"{'':>40} {'':>5} {'':>12} {'':>12} {'':>14} outcomes: {conclusions_str}")
        print("-" * 100)
        print(f"{'GRAND TOTAL':>40} {len(all_runs):>5} {'':>12} {'':>12} {grand_total_minutes:>14.1f}")
        print(f"\nEstimated total billable minutes on {date_str}: {grand_total_minutes:.0f} min ({grand_total_minutes/60:.1f} hours)")
        print()

        # Also show raw run list
        print("\n" + "=" * 100)
        print("DETAILED RUN LIST")
        print("=" * 100)
        for run in all_runs:
            name = run.get("name", "Unknown")
            event = run.get("event", "unknown")
            conclusion = run.get("conclusion", "unknown")
            run_id = run.get("id")
            started = run.get("run_started_at", "?")
            print(f"  [{run_id}] {name:<40} conclusion={conclusion:<12} event={event:<20} started={started}")


if __name__ == "__main__":
    main()
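The sampling-and-extrapolation step above can be sketched in isolation. This is a minimal model with hypothetical numbers (not real API data): job seconds observed for up to three sampled runs are scaled by `total_runs / sampled_runs` to estimate the whole day's cost for a workflow.

```python
def estimate_total_seconds(sampled_job_seconds, sampled_runs, total_runs):
    """Scale job time observed in a sample up to all runs of a workflow."""
    if 0 < sampled_runs < total_runs:
        return sampled_job_seconds * (total_runs / sampled_runs)
    # Sampled everything (or nothing): report the observed total as-is.
    return sampled_job_seconds

# 3 sampled runs took 600s of job time; 12 runs total -> 2400s estimated.
print(estimate_total_seconds(600, 3, 12))  # 2400.0
# Workflow ran fewer times than the sample cap: no scaling applied.
print(estimate_total_seconds(600, 3, 3))   # 600
```

The estimate assumes sampled runs are representative; a workflow whose first three runs were unusually short or long will be under- or over-counted accordingly.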

View file

@@ -2,15 +2,10 @@
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" >/dev/null 2>&1 && pwd || pwd)"
INSTALLER_LOCAL="$(cd "$SCRIPT_DIR/.." >/dev/null 2>&1 && pwd || pwd)/zeroclaw_install.sh"
BOOTSTRAP_LOCAL="$SCRIPT_DIR/bootstrap.sh"
REPO_URL="https://github.com/zeroclaw-labs/zeroclaw.git"
echo "[deprecated] scripts/install.sh -> ./zeroclaw_install.sh" >&2
if [[ -x "$INSTALLER_LOCAL" ]]; then
exec "$INSTALLER_LOCAL" "$@"
fi
echo "[deprecated] scripts/install.sh -> bootstrap.sh" >&2
if [[ -f "$BOOTSTRAP_LOCAL" ]]; then
exec "$BOOTSTRAP_LOCAL" "$@"
@@ -29,15 +24,35 @@ trap cleanup EXIT
git clone --depth 1 "$REPO_URL" "$TEMP_DIR" >/dev/null 2>&1
if [[ -x "$TEMP_DIR/zeroclaw_install.sh" ]]; then
exec "$TEMP_DIR/zeroclaw_install.sh" "$@"
fi
if [[ -x "$TEMP_DIR/scripts/bootstrap.sh" ]]; then
exec "$TEMP_DIR/scripts/bootstrap.sh" "$@"
"$TEMP_DIR/scripts/bootstrap.sh" "$@"
exit 0
fi
echo "error: zeroclaw_install.sh/bootstrap.sh was not found in the fetched revision." >&2
echo "Run the local bootstrap directly when possible:" >&2
echo " ./zeroclaw_install.sh --help" >&2
echo "[deprecated] cloned revision has no bootstrap.sh; falling back to legacy source install flow" >&2
if [[ "${1:-}" == "--help" || "${1:-}" == "-h" ]]; then
cat <<'USAGE'
Legacy install.sh fallback mode
Behavior:
- Clone repository
- cargo build --release --locked
- cargo install --path <clone> --force --locked
For the new dual-mode installer, use:
./bootstrap.sh --help
USAGE
exit 0
fi
if ! command -v cargo >/dev/null 2>&1; then
echo "error: cargo is required for legacy install.sh fallback mode" >&2
echo "Install Rust first: https://rustup.rs/" >&2
exit 1
fi
cargo build --release --locked --manifest-path "$TEMP_DIR/Cargo.toml"
cargo install --path "$TEMP_DIR" --force --locked
echo "Legacy source install completed." >&2

View file

@@ -10,6 +10,7 @@ use crate::providers::{self, ChatMessage, ChatRequest, ConversationMessage, Prov
use crate::runtime;
use crate::security::SecurityPolicy;
use crate::tools::{self, Tool, ToolSpec};
use crate::util::truncate_with_ellipsis;
use anyhow::Result;
use std::io::Write as IoWrite;
use std::sync::Arc;
@@ -228,9 +229,8 @@ impl Agent {
&config.workspace_dir,
));
let memory: Arc<dyn Memory> = Arc::from(memory::create_memory_with_storage_and_routes(
let memory: Arc<dyn Memory> = Arc::from(memory::create_memory_with_storage(
&config.memory,
&config.embedding_routes,
Some(&config.storage.provider.config),
&config.workspace_dir,
config.api_key.as_deref(),
@@ -308,10 +308,7 @@ impl Agent {
.classification_config(config.query_classification.clone())
.available_hints(available_hints)
.identity_config(config.identity.clone())
.skills(crate::skills::load_skills_with_config(
&config.workspace_dir,
config,
))
.skills(crate::skills::load_skills(&config.workspace_dir))
.auto_save(config.memory.auto_save)
.build()
}
@@ -403,8 +400,11 @@ impl Agent {
return results;
}
let futs: Vec<_> = calls.iter().map(|call| self.execute_tool_call(call)).collect();
futures::future::join_all(futs).await
let mut results = Vec::with_capacity(calls.len());
for call in calls {
results.push(self.execute_tool_call(call).await);
}
results
}
fn classify_model(&self, user_message: &str) -> String {
@@ -486,6 +486,14 @@ impl Agent {
)));
self.trim_history();
if self.auto_save {
let summary = truncate_with_ellipsis(&final_text, 100);
let _ = self
.memory
.store("assistant_resp", &summary, MemoryCategory::Daily, None)
.await;
}
return Ok(final_text);
}
@@ -678,8 +686,7 @@ mod tests {
..crate::config::MemoryConfig::default()
};
let mem: Arc<dyn Memory> = Arc::from(
crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None)
.expect("memory creation should succeed with valid config"),
crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None).unwrap(),
);
let observer: Arc<dyn Observer> = Arc::from(crate::observability::NoopObserver {});
@@ -691,7 +698,7 @@
.tool_dispatcher(Box::new(XmlToolDispatcher))
.workspace_dir(std::path::PathBuf::from("/tmp"))
.build()
.expect("agent builder should succeed with valid config");
.unwrap();
let response = agent.turn("hi").await.unwrap();
assert_eq!(response, "hello");
@@ -721,8 +728,7 @@
..crate::config::MemoryConfig::default()
};
let mem: Arc<dyn Memory> = Arc::from(
crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None)
.expect("memory creation should succeed with valid config"),
crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None).unwrap(),
);
let observer: Arc<dyn Observer> = Arc::from(crate::observability::NoopObserver {});
@@ -734,7 +740,7 @@
.tool_dispatcher(Box::new(NativeToolDispatcher))
.workspace_dir(std::path::PathBuf::from("/tmp"))
.build()
.expect("agent builder should succeed with valid config");
.unwrap();
let response = agent.turn("hi").await.unwrap();
assert_eq!(response, "done");
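This diff stores a 100-character summary of each assistant response via `truncate_with_ellipsis(&final_text, 100)`. A quick sketch of that truncation in Python, under stated assumptions — whether the Rust helper counts the ellipsis inside the cap (rather than appending it on top) is an assumption here, mirroring the 60-char shell-detail behavior seen elsewhere in this compare:

```python
def truncate_with_ellipsis(text, max_chars):
    """Cap text at max_chars characters, appending '...' when truncated.
    Assumption: the ellipsis counts toward the cap."""
    if len(text) <= max_chars:
        return text
    return text[: max_chars - 3] + "..."

summary = truncate_with_ellipsis("x" * 250, 100)
print(len(summary))             # 100
print(summary.endswith("..."))  # True
```

Storing only a short summary keeps the auto-saved daily log compact while still leaving a recall hook for later `memory_recall` queries.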

View file

@@ -1,11 +1,8 @@
use crate::approval::{ApprovalManager, ApprovalRequest, ApprovalResponse};
use crate::config::Config;
use crate::memory::{self, Memory, MemoryCategory};
use crate::multimodal;
use crate::observability::{self, Observer, ObserverEvent};
use crate::providers::{
self, ChatMessage, ChatRequest, Provider, ProviderCapabilityError, ToolCall,
};
use crate::providers::{self, ChatMessage, ChatRequest, Provider, ToolCall};
use crate::runtime;
use crate::security::SecurityPolicy;
use crate::tools::{self, Tool};
@@ -16,9 +13,47 @@ use std::fmt::Write;
use std::io::Write as _;
use std::sync::{Arc, LazyLock};
use std::time::Instant;
use tokio_util::sync::CancellationToken;
use uuid::Uuid;
/// Events emitted during tool execution for real-time status display in channels.
#[derive(Debug, Clone)]
pub enum ToolStatusEvent {
/// LLM request started (thinking).
Thinking,
/// A tool is about to execute.
ToolStart {
name: String,
detail: Option<String>,
},
}
/// Extract a short display summary from tool arguments for status display.
pub fn extract_tool_detail(tool_name: &str, args: &serde_json::Value) -> Option<String> {
match tool_name {
"shell" => args.get("command").and_then(|v| v.as_str()).map(|s| {
if s.len() > 60 {
format!("{}...", &s[..57])
} else {
s.to_string()
}
}),
"file_read" | "file_write" => args.get("path").and_then(|v| v.as_str()).map(String::from),
"memory_recall" | "web_search_tool" => args
.get("query")
.and_then(|v| v.as_str())
.map(|s| format!("\"{s}\"")),
"http_request" | "browser_open" => {
args.get("url").and_then(|v| v.as_str()).map(String::from)
}
"git_operations" => args
.get("operation")
.and_then(|v| v.as_str())
.map(String::from),
"memory_store" => args.get("key").and_then(|v| v.as_str()).map(String::from),
_ => None,
}
}
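The shell branch of `extract_tool_detail` above keeps 57 characters and appends `...`, so the displayed detail never exceeds 60 characters (the new unit test below this diff asserts exactly that). The rule, restated in Python for clarity — this is an illustration of the truncation arithmetic, not the Rust implementation:

```python
def shell_detail(command):
    """Commands longer than 60 chars are cut to 57 plus '...' (total 60)."""
    if len(command) > 60:
        return command[:57] + "..."
    return command

print(shell_detail("ls -la"))       # ls -la
print(len(shell_detail("a" * 80)))  # 60
```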
/// Minimum characters per chunk when relaying LLM text to a streaming draft.
const STREAM_CHUNK_MIN_CHARS: usize = 80;
@@ -26,10 +61,6 @@ const STREAM_CHUNK_MIN_CHARS: usize = 80;
/// Used as a safe fallback when `max_tool_iterations` is unset or configured as zero.
const DEFAULT_MAX_TOOL_ITERATIONS: usize = 10;
/// Minimum user-message length (in chars) for auto-save to memory.
/// Matches the channel-side constant in `channels/mod.rs`.
const AUTOSAVE_MIN_MESSAGE_CHARS: usize = 20;
static SENSITIVE_KEY_PATTERNS: LazyLock<RegexSet> = LazyLock::new(|| {
RegexSet::new([
r"(?i)token",
@@ -231,16 +262,9 @@ async fn build_context(mem: &dyn Memory, user_msg: &str, min_relevance_score: f6
if !relevant.is_empty() {
context.push_str("[Memory context]\n");
for entry in &relevant {
if memory::is_assistant_autosave_key(&entry.key) {
continue;
}
let _ = writeln!(context, "- {}: {}", entry.key, entry.content);
}
if context != "[Memory context]\n" {
context.push('\n');
} else {
context.clear();
}
}
}
@@ -594,17 +618,6 @@ fn parse_glm_style_tool_calls(text: &str) -> Vec<(String, serde_json::Value, Opt
calls
}
// ── Tool-Call Parsing ─────────────────────────────────────────────────────
// LLM responses may contain tool calls in multiple formats depending on
// the provider. Parsing follows a priority chain:
// 1. OpenAI-style JSON with `tool_calls` array (native API)
// 2. XML tags: <tool_call>, <toolcall>, <tool-call>, <invoke>
// 3. Markdown code blocks with `tool_call` language
// 4. GLM-style line-based format (e.g. `shell/command>ls`)
// SECURITY: We never fall back to extracting arbitrary JSON from the
// response body, because that would enable prompt-injection attacks where
// malicious content in emails/files/web pages mimics a tool call.
/// Parse tool calls from an LLM response that uses XML-style function calling.
///
/// Expected format (common with system-prompt-guided tool use):
@@ -839,21 +852,6 @@ struct ParsedToolCall {
arguments: serde_json::Value,
}
#[derive(Debug)]
pub(crate) struct ToolLoopCancelled;
impl std::fmt::Display for ToolLoopCancelled {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str("tool loop cancelled")
}
}
impl std::error::Error for ToolLoopCancelled {}
pub(crate) fn is_tool_loop_cancelled(err: &anyhow::Error) -> bool {
err.chain().any(|source| source.is::<ToolLoopCancelled>())
}
/// Execute a single turn of the agent loop: send messages, parse tool calls,
/// execute tools, and loop until the LLM produces a final text response.
/// When `silent` is true, suppresses stdout (for channel use).
@@ -867,7 +865,6 @@ pub(crate) async fn agent_turn(
model: &str,
temperature: f64,
silent: bool,
multimodal_config: &crate::config::MultimodalConfig,
max_tool_iterations: usize,
) -> Result<String> {
run_tool_call_loop(
@@ -881,7 +878,6 @@
silent,
None,
"channel",
multimodal_config,
max_tool_iterations,
None,
None,
@@ -889,18 +885,6 @@
.await
}
// ── Agent Tool-Call Loop ──────────────────────────────────────────────────
// Core agentic iteration: send conversation to the LLM, parse any tool
// calls from the response, execute them, append results to history, and
// repeat until the LLM produces a final text-only answer.
//
// Loop invariant: at the start of each iteration, `history` contains the
// full conversation so far (system prompt + user messages + prior tool
// results). The loop exits when:
// • the LLM returns no tool calls (final answer), or
// • max_iterations is reached (runaway safety), or
// • the cancellation token fires (external abort).
/// Execute a single turn of the agent loop: send messages, parse tool calls,
/// execute tools, and loop until the LLM produces a final text response.
#[allow(clippy::too_many_arguments)]
@@ -915,10 +899,9 @@ pub(crate) async fn run_tool_call_loop(
silent: bool,
approval: Option<&ApprovalManager>,
channel_name: &str,
multimodal_config: &crate::config::MultimodalConfig,
max_tool_iterations: usize,
cancellation_token: Option<CancellationToken>,
on_delta: Option<tokio::sync::mpsc::Sender<String>>,
on_tool_status: Option<tokio::sync::mpsc::Sender<ToolStatusEvent>>,
) -> Result<String> {
let max_iterations = if max_tool_iterations == 0 {
DEFAULT_MAX_TOOL_ITERATIONS
@@ -931,28 +914,10 @@
let use_native_tools = provider.supports_native_tools() && !tool_specs.is_empty();
for _iteration in 0..max_iterations {
if cancellation_token
.as_ref()
.is_some_and(CancellationToken::is_cancelled)
{
return Err(ToolLoopCancelled.into());
if let Some(ref tx) = on_tool_status {
let _ = tx.send(ToolStatusEvent::Thinking).await;
}
let image_marker_count = multimodal::count_image_markers(history);
if image_marker_count > 0 && !provider.supports_vision() {
return Err(ProviderCapabilityError {
provider: provider_name.to_string(),
capability: "vision".to_string(),
message: format!(
"received {image_marker_count} image marker(s), but this provider does not support vision input"
),
}
.into());
}
let prepared_messages =
multimodal::prepare_messages_for_provider(history, multimodal_config).await?;
observer.record_event(&ObserverEvent::LlmRequest {
provider: provider_name.to_string(),
model: model.to_string(),
@@ -969,26 +934,18 @@
None
};
let chat_future = provider.chat(
let (response_text, parsed_text, tool_calls, assistant_history_content, native_tool_calls) =
match provider
.chat(
ChatRequest {
messages: &prepared_messages.messages,
messages: history,
tools: request_tools,
},
model,
temperature,
);
let chat_result = if let Some(token) = cancellation_token.as_ref() {
tokio::select! {
() = token.cancelled() => return Err(ToolLoopCancelled.into()),
result = chat_future => result,
}
} else {
chat_future.await
};
let (response_text, parsed_text, tool_calls, assistant_history_content, native_tool_calls) =
match chat_result {
)
.await
{
Ok(resp) => {
observer.record_event(&ObserverEvent::LlmResponse {
provider: provider_name.to_string(),
@@ -999,10 +956,6 @@
});
let response_text = resp.text_or_empty().to_string();
// First try native structured tool calls (OpenAI-format).
// Fall back to text-based parsing (XML tags, markdown blocks,
// GLM format) only if the provider returned no native calls —
// this ensures we support both native and prompt-guided models.
let mut calls = parse_structured_tool_calls(&resp.tool_calls);
let mut parsed_text = String::new();
@@ -1058,12 +1011,6 @@
// STREAM_CHUNK_MIN_CHARS characters for progressive draft updates.
let mut chunk = String::new();
for word in display_text.split_inclusive(char::is_whitespace) {
if cancellation_token
.as_ref()
.is_some_and(CancellationToken::is_cancelled)
{
return Err(ToolLoopCancelled.into());
}
chunk.push_str(word);
if chunk.len() >= STREAM_CHUNK_MIN_CHARS
&& tx.send(std::mem::take(&mut chunk)).await.is_err()
@@ -1099,13 +1046,11 @@
arguments: call.arguments.clone(),
};
// On CLI, prompt interactively. On other channels where
// interactive approval is not possible, deny the call to
// respect the supervised autonomy setting.
// Only prompt interactively on CLI; auto-approve on other channels.
let decision = if channel_name == "cli" {
mgr.prompt_cli(&request)
} else {
ApprovalResponse::No
ApprovalResponse::Yes
};
mgr.record_decision(&call.name, &call.arguments, decision, channel_name);
@@ -1126,19 +1071,18 @@
observer.record_event(&ObserverEvent::ToolCallStart {
tool: call.name.clone(),
});
if let Some(ref tx) = on_tool_status {
let detail = extract_tool_detail(&call.name, &call.arguments);
let _ = tx
.send(ToolStatusEvent::ToolStart {
name: call.name.clone(),
detail,
})
.await;
}
let start = Instant::now();
let result = if let Some(tool) = find_tool(tools_registry, &call.name) {
let tool_future = tool.execute(call.arguments.clone());
let tool_result = if let Some(token) = cancellation_token.as_ref() {
tokio::select! {
() = token.cancelled() => return Err(ToolLoopCancelled.into()),
result = tool_future => result,
}
} else {
tool_future.await
};
match tool_result {
match tool.execute(call.arguments.clone()).await {
Ok(r) => {
observer.record_event(&ObserverEvent::ToolCall {
tool: call.name.clone(),
@@ -1223,12 +1167,6 @@ pub(crate) fn build_tool_instructions(tools_registry: &[Box<dyn Tool>]) -> Strin
instructions
}
// ── CLI Entrypoint ───────────────────────────────────────────────────────
// Wires up all subsystems (observer, runtime, security, memory, tools,
// provider, hardware RAG, peripherals) and enters either single-shot or
// interactive REPL mode. The interactive loop manages history compaction
// and hard trimming to keep the context window bounded.
#[allow(clippy::too_many_lines)]
pub async fn run(
config: Config,
@@ -1307,21 +1245,13 @@
.or(config.default_model.as_deref())
.unwrap_or("anthropic/claude-sonnet-4");
let provider_runtime_options = providers::ProviderRuntimeOptions {
auth_profile_override: None,
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
};
let provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
let provider: Box<dyn Provider> = providers::create_routed_provider(
provider_name,
config.api_key.as_deref(),
config.api_url.as_deref(),
&config.reliability,
&config.model_routes,
model_name,
&provider_runtime_options,
)?;
observer.record_event(&ObserverEvent::AgentStart {
@@ -1350,7 +1280,7 @@
.collect();
// ── Build system prompt from workspace MD files (OpenClaw framework) ──
let skills = crate::skills::load_skills_with_config(&config.workspace_dir, &config);
let skills = crate::skills::load_skills(&config.workspace_dir);
let mut tool_descs: Vec<(&str, &str)> = vec![
(
"shell",
@@ -1460,21 +1390,17 @@
} else {
None
};
let native_tools = provider.supports_native_tools();
let mut system_prompt = crate::channels::build_system_prompt_with_mode(
let mut system_prompt = crate::channels::build_system_prompt(
&config.workspace_dir,
model_name,
&tool_descs,
&skills,
Some(&config.identity),
bootstrap_max_chars,
native_tools,
);
// Append structured tool-use instructions with schemas (only for non-native providers)
if !native_tools {
// Append structured tool-use instructions with schemas
system_prompt.push_str(&build_tool_instructions(&tools_registry));
}
// ── Approval manager (supervised mode) ───────────────────────
let approval_manager = ApprovalManager::from_config(&config.autonomy);
@@ -1485,8 +1411,8 @@
let mut final_output = String::new();
if let Some(msg) = message {
// Auto-save user message to memory (skip short/trivial messages)
if config.memory.auto_save && msg.chars().count() >= AUTOSAVE_MIN_MESSAGE_CHARS {
// Auto-save user message to memory
if config.memory.auto_save {
let user_key = autosave_memory_key("user_msg");
let _ = mem
.store(&user_key, &msg, MemoryCategory::Conversation, None)
@@ -1524,7 +1450,6 @@
false,
Some(&approval_manager),
"cli",
&config.multimodal,
config.agent.max_tool_iterations,
None,
None,
@@ -1533,6 +1458,15 @@
final_output = response.clone();
println!("{response}");
observer.record_event(&ObserverEvent::TurnComplete);
// Auto-save assistant response to daily log
if config.memory.auto_save {
let summary = truncate_with_ellipsis(&response, 100);
let response_key = autosave_memory_key("assistant_resp");
let _ = mem
.store(&response_key, &summary, MemoryCategory::Daily, None)
.await;
}
} else {
println!("🦀 ZeroClaw Interactive Mode");
println!("Type /help for commands.\n");
@@ -1607,10 +1541,8 @@
_ => {}
}
// Auto-save conversation turns (skip short/trivial messages)
if config.memory.auto_save
&& user_input.chars().count() >= AUTOSAVE_MIN_MESSAGE_CHARS
{
// Auto-save conversation turns
if config.memory.auto_save {
let user_key = autosave_memory_key("user_msg");
let _ = mem
.store(&user_key, &user_input, MemoryCategory::Conversation, None)
@@ -1645,7 +1577,6 @@
false,
Some(&approval_manager),
"cli",
&config.multimodal,
config.agent.max_tool_iterations,
None,
None,
@@ -1685,6 +1616,14 @@
// Hard cap as a safety net.
trim_history(&mut history, config.agent.max_history_messages);
if config.memory.auto_save {
let summary = truncate_with_ellipsis(&response, 100);
let response_key = autosave_memory_key("assistant_resp");
let _ = mem
.store(&response_key, &summary, MemoryCategory::Daily, None)
.await;
}
}
}
@@ -1749,20 +1688,13 @@ pub async fn process_message(config: Config, message: &str) -> Result<String> {
.default_model
.clone()
.unwrap_or_else(|| "anthropic/claude-sonnet-4-20250514".into());
let provider_runtime_options = providers::ProviderRuntimeOptions {
auth_profile_override: None,
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
};
let provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
let provider: Box<dyn Provider> = providers::create_routed_provider(
provider_name,
config.api_key.as_deref(),
config.api_url.as_deref(),
&config.reliability,
&config.model_routes,
&model_name,
&provider_runtime_options,
)?;
let hardware_rag: Option<crate::rag::HardwareRag> = config
@@ -1780,7 +1712,7 @@ pub async fn process_message(config: Config, message: &str) -> Result<String> {
.map(|b| b.board.clone())
.collect();
let skills = crate::skills::load_skills_with_config(&config.workspace_dir, &config);
let skills = crate::skills::load_skills(&config.workspace_dir);
let mut tool_descs: Vec<(&str, &str)> = vec![
("shell", "Execute terminal commands."),
("file_read", "Read file contents."),
@@ -1829,19 +1761,15 @@
} else {
None
};
let native_tools = provider.supports_native_tools();
let mut system_prompt = crate::channels::build_system_prompt_with_mode(
let mut system_prompt = crate::channels::build_system_prompt(
&config.workspace_dir,
&model_name,
&tool_descs,
&skills,
Some(&config.identity),
bootstrap_max_chars,
native_tools,
);
if !native_tools {
system_prompt.push_str(&build_tool_instructions(&tools_registry));
}
let mem_context = build_context(mem.as_ref(), message, config.memory.min_relevance_score).await;
let rag_limit = if config.agent.compact_context { 2 } else { 5 };
@@ -1870,7 +1798,6 @@
&model_name,
config.default_temperature,
true,
&config.multimodal,
config.agent.max_tool_iterations,
)
.await
@@ -1879,10 +1806,6 @@
#[cfg(test)]
mod tests {
use super::*;
use async_trait::async_trait;
use base64::{engine::general_purpose::STANDARD, Engine as _};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
#[test]
fn test_scrub_credentials() {
@@ -1903,194 +1826,8 @@ mod tests {
assert!(scrubbed.contains("public"));
}
use crate::memory::{Memory, MemoryCategory, SqliteMemory};
use crate::observability::NoopObserver;
use crate::providers::traits::ProviderCapabilities;
use crate::providers::ChatResponse;
use tempfile::TempDir;
struct NonVisionProvider {
calls: Arc<AtomicUsize>,
}
#[async_trait]
impl Provider for NonVisionProvider {
async fn chat_with_system(
&self,
_system_prompt: Option<&str>,
_message: &str,
_model: &str,
_temperature: f64,
) -> anyhow::Result<String> {
self.calls.fetch_add(1, Ordering::SeqCst);
Ok("ok".to_string())
}
}
struct VisionProvider {
calls: Arc<AtomicUsize>,
}
#[async_trait]
impl Provider for VisionProvider {
fn capabilities(&self) -> ProviderCapabilities {
ProviderCapabilities {
native_tool_calling: false,
vision: true,
}
}
async fn chat_with_system(
&self,
_system_prompt: Option<&str>,
_message: &str,
_model: &str,
_temperature: f64,
) -> anyhow::Result<String> {
self.calls.fetch_add(1, Ordering::SeqCst);
Ok("ok".to_string())
}
async fn chat(
&self,
request: ChatRequest<'_>,
_model: &str,
_temperature: f64,
) -> anyhow::Result<ChatResponse> {
self.calls.fetch_add(1, Ordering::SeqCst);
let marker_count = crate::multimodal::count_image_markers(request.messages);
if marker_count == 0 {
anyhow::bail!("expected image markers in request messages");
}
if request.tools.is_some() {
anyhow::bail!("no tools should be attached for this test");
}
Ok(ChatResponse {
text: Some("vision-ok".to_string()),
tool_calls: Vec::new(),
})
}
}
#[tokio::test]
async fn run_tool_call_loop_returns_structured_error_for_non_vision_provider() {
let calls = Arc::new(AtomicUsize::new(0));
let provider = NonVisionProvider {
calls: Arc::clone(&calls),
};
let mut history = vec![ChatMessage::user(
"please inspect [IMAGE:data:image/png;base64,iVBORw0KGgo=]".to_string(),
)];
let tools_registry: Vec<Box<dyn Tool>> = Vec::new();
let observer = NoopObserver;
let err = run_tool_call_loop(
&provider,
&mut history,
&tools_registry,
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"cli",
&crate::config::MultimodalConfig::default(),
3,
None,
None,
)
.await
.expect_err("provider without vision support should fail");
assert!(err.to_string().contains("provider_capability_error"));
assert!(err.to_string().contains("capability=vision"));
assert_eq!(calls.load(Ordering::SeqCst), 0);
}
#[tokio::test]
async fn run_tool_call_loop_rejects_oversized_image_payload() {
let calls = Arc::new(AtomicUsize::new(0));
let provider = VisionProvider {
calls: Arc::clone(&calls),
};
let oversized_payload = STANDARD.encode(vec![0_u8; (1024 * 1024) + 1]);
let mut history = vec![ChatMessage::user(format!(
"[IMAGE:data:image/png;base64,{oversized_payload}]"
))];
let tools_registry: Vec<Box<dyn Tool>> = Vec::new();
let observer = NoopObserver;
let multimodal = crate::config::MultimodalConfig {
max_images: 4,
max_image_size_mb: 1,
allow_remote_fetch: false,
};
let err = run_tool_call_loop(
&provider,
&mut history,
&tools_registry,
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"cli",
&multimodal,
3,
None,
None,
)
.await
.expect_err("oversized payload must fail");
assert!(err
.to_string()
.contains("multimodal image size limit exceeded"));
assert_eq!(calls.load(Ordering::SeqCst), 0);
}
#[tokio::test]
async fn run_tool_call_loop_accepts_valid_multimodal_request_flow() {
let calls = Arc::new(AtomicUsize::new(0));
let provider = VisionProvider {
calls: Arc::clone(&calls),
};
let mut history = vec![ChatMessage::user(
"Analyze this [IMAGE:data:image/png;base64,iVBORw0KGgo=]".to_string(),
)];
let tools_registry: Vec<Box<dyn Tool>> = Vec::new();
let observer = NoopObserver;
let result = run_tool_call_loop(
&provider,
&mut history,
&tools_registry,
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"cli",
&crate::config::MultimodalConfig::default(),
3,
None,
None,
)
.await
.expect("valid multimodal payload should pass");
assert_eq!(result, "vision-ok");
assert_eq!(calls.load(Ordering::SeqCst), 1);
}
#[test]
fn parse_tool_calls_extracts_single_call() {
let response = r#"Let me check that.
@@ -2534,33 +2271,6 @@ Done."#;
assert!(recalled.iter().any(|entry| entry.content.contains("45")));
}
#[tokio::test]
async fn build_context_ignores_legacy_assistant_autosave_entries() {
let tmp = TempDir::new().unwrap();
let mem = SqliteMemory::new(tmp.path()).unwrap();
mem.store(
"assistant_resp_poisoned",
"User suffered a fabricated event",
MemoryCategory::Daily,
None,
)
.await
.unwrap();
mem.store(
"user_msg_real",
"User asked for concise status updates",
MemoryCategory::Conversation,
None,
)
.await
.unwrap();
let context = build_context(&mem, "status updates", 0.0).await;
assert!(context.contains("user_msg_real"));
assert!(!context.contains("assistant_resp_poisoned"));
assert!(!context.contains("fabricated event"));
}
// ═══════════════════════════════════════════════════════════════════════
// Recovery Tests - Tool Call Parsing Edge Cases
// ═══════════════════════════════════════════════════════════════════════
@@ -2858,194 +2568,97 @@ browser_open/url>https://example.com"#;
assert_eq!(text, "Done");
}
// ─────────────────────────────────────────────────────────────────────
// TG4 (inline): parse_tool_calls robustness — malformed/edge-case inputs
// Prevents: Pattern 4 issues #746, #418, #777, #848
// ─────────────────────────────────────────────────────────────────────
// ═══════════════════════════════════════════════════════════════════════
// Tool Status Display - extract_tool_detail
// ═══════════════════════════════════════════════════════════════════════
#[test]
fn parse_tool_calls_empty_input_returns_empty() {
let (text, calls) = parse_tool_calls("");
assert!(calls.is_empty(), "empty input should produce no tool calls");
assert!(text.is_empty(), "empty input should produce no text");
fn extract_tool_detail_shell_short() {
let args = serde_json::json!({"command": "ls -la"});
assert_eq!(extract_tool_detail("shell", &args), Some("ls -la".into()));
}
#[test]
fn parse_tool_calls_whitespace_only_returns_empty_calls() {
let (text, calls) = parse_tool_calls(" \n\t ");
assert!(calls.is_empty());
assert!(text.is_empty() || text.trim().is_empty());
fn extract_tool_detail_shell_truncates_long_command() {
let long = "a".repeat(80);
let args = serde_json::json!({"command": long});
let detail = extract_tool_detail("shell", &args).unwrap();
assert_eq!(detail.len(), 60); // 57 chars + "..."
assert!(detail.ends_with("..."));
}
#[test]
fn parse_tool_calls_nested_xml_tags_handled() {
// Double-wrapped tool call should still parse the inner call
let response = r#"<tool_call><tool_call>{"name":"echo","arguments":{"msg":"hi"}}</tool_call></tool_call>"#;
let (_text, calls) = parse_tool_calls(response);
// Should find at least one tool call
assert!(
!calls.is_empty(),
"nested XML tags should still yield at least one tool call"
);
}
#[test]
fn parse_tool_calls_truncated_json_no_panic() {
// Incomplete JSON inside tool_call tags
let response = r#"<tool_call>{"name":"shell","arguments":{"command":"ls"</tool_call>"#;
let (_text, _calls) = parse_tool_calls(response);
// Should not panic — graceful handling of truncated JSON
}
#[test]
fn parse_tool_calls_empty_json_object_in_tag() {
let response = "<tool_call>{}</tool_call>";
let (_text, calls) = parse_tool_calls(response);
// Empty JSON object has no name field — should not produce valid tool call
assert!(
calls.is_empty(),
"empty JSON object should not produce a tool call"
);
}
#[test]
fn parse_tool_calls_closing_tag_only_returns_text() {
let response = "Some text </tool_call> more text";
let (text, calls) = parse_tool_calls(response);
assert!(
calls.is_empty(),
"closing tag only should not produce calls"
);
assert!(
!text.is_empty(),
"text around orphaned closing tag should be preserved"
);
}
#[test]
fn parse_tool_calls_very_large_arguments_no_panic() {
let large_arg = "x".repeat(100_000);
let response = format!(
r#"<tool_call>{{"name":"echo","arguments":{{"message":"{}"}}}}</tool_call>"#,
large_arg
);
let (_text, calls) = parse_tool_calls(&response);
assert_eq!(calls.len(), 1, "large arguments should still parse");
assert_eq!(calls[0].name, "echo");
}
#[test]
fn parse_tool_calls_special_characters_in_arguments() {
let response = r#"<tool_call>{"name":"echo","arguments":{"message":"hello \"world\" <>&'\n\t"}}</tool_call>"#;
let (_text, calls) = parse_tool_calls(response);
assert_eq!(calls.len(), 1);
assert_eq!(calls[0].name, "echo");
}
#[test]
fn parse_tool_calls_text_with_embedded_json_not_extracted() {
// Raw JSON without any tags should NOT be extracted as a tool call
let response = r#"Here is some data: {"name":"echo","arguments":{"message":"hi"}} end."#;
let (_text, calls) = parse_tool_calls(response);
assert!(
calls.is_empty(),
"raw JSON in text without tags should not be extracted"
);
}
#[test]
fn parse_tool_calls_multiple_formats_mixed() {
// Mix of text and properly tagged tool call
let response = r#"I'll help you with that.
<tool_call>
{"name":"shell","arguments":{"command":"echo hello"}}
</tool_call>
Let me check the result."#;
let (text, calls) = parse_tool_calls(response);
assert_eq!(
calls.len(),
1,
"should extract one tool call from mixed content"
);
assert_eq!(calls[0].name, "shell");
assert!(
text.contains("help you"),
"text before tool call should be preserved"
);
}
#[test]
fn extract_tool_detail_file_read() {
let args = serde_json::json!({"path": "src/main.rs"});
assert_eq!(
extract_tool_detail("file_read", &args),
Some("src/main.rs".into())
);
}
// ─────────────────────────────────────────────────────────────────────
// TG4 (inline): scrub_credentials edge cases
// ─────────────────────────────────────────────────────────────────────
#[test]
fn scrub_credentials_empty_input() {
let result = scrub_credentials("");
assert_eq!(result, "");
}
#[test]
fn scrub_credentials_no_sensitive_data() {
let input = "normal text without any secrets";
let result = scrub_credentials(input);
assert_eq!(
result, input,
"non-sensitive text should pass through unchanged"
);
}
#[test]
fn extract_tool_detail_file_write() {
let args = serde_json::json!({"path": "/tmp/out.txt", "content": "data"});
assert_eq!(
extract_tool_detail("file_write", &args),
Some("/tmp/out.txt".into())
);
}
#[test]
fn scrub_credentials_short_values_not_redacted() {
// Values shorter than 8 chars should not be redacted
let input = r#"api_key="short""#;
let result = scrub_credentials(input);
assert_eq!(result, input, "short values should not be redacted");
}
// ─────────────────────────────────────────────────────────────────────
// TG4 (inline): trim_history edge cases
// ─────────────────────────────────────────────────────────────────────
#[test]
fn trim_history_empty_history() {
let mut history: Vec<crate::providers::ChatMessage> = vec![];
trim_history(&mut history, 10);
assert!(history.is_empty());
}
#[test]
fn extract_tool_detail_memory_recall() {
let args = serde_json::json!({"query": "project goals"});
assert_eq!(
extract_tool_detail("memory_recall", &args),
Some("\"project goals\"".into())
);
}
#[test]
fn trim_history_system_only() {
let mut history = vec![crate::providers::ChatMessage::system("system prompt")];
trim_history(&mut history, 10);
assert_eq!(history.len(), 1);
assert_eq!(history[0].role, "system");
}
#[test]
fn extract_tool_detail_web_search() {
let args = serde_json::json!({"query": "rust async"});
assert_eq!(
extract_tool_detail("web_search_tool", &args),
Some("\"rust async\"".into())
);
}
#[test]
fn trim_history_exactly_at_limit() {
let mut history = vec![
crate::providers::ChatMessage::system("system"),
crate::providers::ChatMessage::user("msg 1"),
crate::providers::ChatMessage::assistant("reply 1"),
];
trim_history(&mut history, 2); // 2 non-system messages = exactly at limit
assert_eq!(history.len(), 3, "should not trim when exactly at limit");
}
#[test]
fn extract_tool_detail_http_request() {
let args = serde_json::json!({"url": "https://example.com/api", "method": "GET"});
assert_eq!(
extract_tool_detail("http_request", &args),
Some("https://example.com/api".into())
);
}
#[test]
fn trim_history_removes_oldest_non_system() {
let mut history = vec![
crate::providers::ChatMessage::system("system"),
crate::providers::ChatMessage::user("old msg"),
crate::providers::ChatMessage::assistant("old reply"),
crate::providers::ChatMessage::user("new msg"),
crate::providers::ChatMessage::assistant("new reply"),
];
trim_history(&mut history, 2);
assert_eq!(history.len(), 3); // system + 2 kept
assert_eq!(history[0].role, "system");
assert_eq!(history[1].content, "new msg");
}
#[test]
fn extract_tool_detail_git_operations() {
let args = serde_json::json!({"operation": "status"});
assert_eq!(
extract_tool_detail("git_operations", &args),
Some("status".into())
);
}
#[test]
fn extract_tool_detail_memory_store() {
let args = serde_json::json!({"key": "user_pref", "value": "dark mode"});
assert_eq!(
extract_tool_detail("memory_store", &args),
Some("user_pref".into())
);
}
#[test]
fn extract_tool_detail_unknown_tool_returns_none() {
let args = serde_json::json!({"foo": "bar"});
assert_eq!(extract_tool_detail("unknown_tool", &args), None);
}
#[test]
fn extract_tool_detail_missing_key_returns_none() {
let args = serde_json::json!({"other": "value"});
assert_eq!(extract_tool_detail("shell", &args), None);
}
}

View file

@ -1,4 +1,4 @@
use crate::memory::{self, Memory};
use crate::memory::Memory;
use async_trait::async_trait;
use std::fmt::Write;
@ -45,9 +45,6 @@ impl MemoryLoader for DefaultMemoryLoader {
let mut context = String::from("[Memory context]\n");
for entry in entries {
if memory::is_assistant_autosave_key(&entry.key) {
continue;
}
if let Some(score) = entry.score {
if score < self.min_relevance_score {
continue;
@ -70,12 +67,8 @@ impl MemoryLoader for DefaultMemoryLoader {
mod tests {
use super::*;
use crate::memory::{Memory, MemoryCategory, MemoryEntry};
use std::sync::Arc;
struct MockMemory;
struct MockMemoryWithEntries {
entries: Arc<Vec<MemoryEntry>>,
}
#[async_trait]
impl Memory for MockMemory {
@ -138,56 +131,6 @@ mod tests {
}
}
#[async_trait]
impl Memory for MockMemoryWithEntries {
async fn store(
&self,
_key: &str,
_content: &str,
_category: MemoryCategory,
_session_id: Option<&str>,
) -> anyhow::Result<()> {
Ok(())
}
async fn recall(
&self,
_query: &str,
_limit: usize,
_session_id: Option<&str>,
) -> anyhow::Result<Vec<MemoryEntry>> {
Ok(self.entries.as_ref().clone())
}
async fn get(&self, _key: &str) -> anyhow::Result<Option<MemoryEntry>> {
Ok(None)
}
async fn list(
&self,
_category: Option<&MemoryCategory>,
_session_id: Option<&str>,
) -> anyhow::Result<Vec<MemoryEntry>> {
Ok(vec![])
}
async fn forget(&self, _key: &str) -> anyhow::Result<bool> {
Ok(true)
}
async fn count(&self) -> anyhow::Result<usize> {
Ok(self.entries.len())
}
async fn health_check(&self) -> bool {
true
}
fn name(&self) -> &str {
"mock-with-entries"
}
}
#[tokio::test]
async fn default_loader_formats_context() {
let loader = DefaultMemoryLoader::default();
@ -195,36 +138,4 @@ mod tests {
assert!(context.contains("[Memory context]"));
assert!(context.contains("- k: v"));
}
#[tokio::test]
async fn default_loader_skips_legacy_assistant_autosave_entries() {
let loader = DefaultMemoryLoader::new(5, 0.0);
let memory = MockMemoryWithEntries {
entries: Arc::new(vec![
MemoryEntry {
id: "1".into(),
key: "assistant_resp_legacy".into(),
content: "fabricated detail".into(),
category: MemoryCategory::Daily,
timestamp: "now".into(),
session_id: None,
score: Some(0.95),
},
MemoryEntry {
id: "2".into(),
key: "user_fact".into(),
content: "User prefers concise answers".into(),
category: MemoryCategory::Conversation,
timestamp: "now".into(),
session_id: None,
score: Some(0.9),
},
]),
};
let context = loader.load_context(&memory, "answer style").await.unwrap();
assert!(context.contains("user_fact"));
assert!(!context.contains("assistant_resp_legacy"));
assert!(!context.contains("fabricated detail"));
}
}

View file

@ -77,25 +77,21 @@ impl PromptSection for IdentitySection {
fn build(&self, ctx: &PromptContext<'_>) -> Result<String> {
let mut prompt = String::from("## Project Context\n\n");
let mut has_aieos = false;
if let Some(config) = ctx.identity_config {
if identity::is_aieos_configured(config) {
if let Ok(Some(aieos)) = identity::load_aieos_identity(config, ctx.workspace_dir) {
let rendered = identity::aieos_to_system_prompt(&aieos);
if !rendered.is_empty() {
prompt.push_str(&rendered);
prompt.push_str("\n\n");
has_aieos = true;
return Ok(prompt);
}
}
}
}
if !has_aieos {
prompt.push_str(
"The following workspace files define your identity, behavior, and context.\n\n",
);
}
for file in [
"AGENTS.md",
"SOUL.md",
@ -153,10 +149,28 @@ impl PromptSection for SkillsSection {
}
fn build(&self, ctx: &PromptContext<'_>) -> Result<String> {
Ok(crate::skills::skills_to_prompt(
ctx.skills,
ctx.workspace_dir,
))
if ctx.skills.is_empty() {
return Ok(String::new());
}
let mut prompt = String::from("## Available Skills\n\n<available_skills>\n");
for skill in ctx.skills {
let location = skill.location.clone().unwrap_or_else(|| {
ctx.workspace_dir
.join("skills")
.join(&skill.name)
.join("SKILL.md")
});
let _ = writeln!(
prompt,
" <skill>\n <name>{}</name>\n <description>{}</description>\n <location>{}</location>\n </skill>",
skill.name,
skill.description,
location.display()
);
}
prompt.push_str("</available_skills>");
Ok(prompt)
}
}
@ -197,8 +211,7 @@ impl PromptSection for DateTimeSection {
fn build(&self, _ctx: &PromptContext<'_>) -> Result<String> {
let now = Local::now();
Ok(format!(
"## Current Date & Time\n\n{} ({})",
now.format("%Y-%m-%d %H:%M:%S"),
"## Current Date & Time\n\nTimezone: {}",
now.format("%Z")
))
}
@ -272,48 +285,6 @@ mod tests {
}
}
#[test]
fn identity_section_with_aieos_includes_workspace_files() {
let workspace =
std::env::temp_dir().join(format!("zeroclaw_prompt_test_{}", uuid::Uuid::new_v4()));
std::fs::create_dir_all(&workspace).unwrap();
std::fs::write(
workspace.join("AGENTS.md"),
"Always respond with: AGENTS_MD_LOADED",
)
.unwrap();
let identity_config = crate::config::IdentityConfig {
format: "aieos".into(),
aieos_path: None,
aieos_inline: Some(r#"{"identity":{"names":{"first":"Nova"}}}"#.into()),
};
let tools: Vec<Box<dyn Tool>> = vec![];
let ctx = PromptContext {
workspace_dir: &workspace,
model_name: "test-model",
tools: &tools,
skills: &[],
identity_config: Some(&identity_config),
dispatcher_instructions: "",
};
let section = IdentitySection;
let output = section.build(&ctx).unwrap();
assert!(
output.contains("Nova"),
"AIEOS identity should be present in prompt"
);
assert!(
output.contains("AGENTS_MD_LOADED"),
"AGENTS.md content should be present even when AIEOS is configured"
);
let _ = std::fs::remove_dir_all(workspace);
}
#[test]
fn prompt_builder_assembles_sections() {
let tools: Vec<Box<dyn Tool>> = vec![Box::new(TestTool)];
@ -330,105 +301,4 @@ mod tests {
assert!(prompt.contains("test_tool"));
assert!(prompt.contains("instr"));
}
#[test]
fn skills_section_includes_instructions_and_tools() {
let tools: Vec<Box<dyn Tool>> = vec![];
let skills = vec![crate::skills::Skill {
name: "deploy".into(),
description: "Release safely".into(),
version: "1.0.0".into(),
author: None,
tags: vec![],
tools: vec![crate::skills::SkillTool {
name: "release_checklist".into(),
description: "Validate release readiness".into(),
kind: "shell".into(),
command: "echo ok".into(),
args: std::collections::HashMap::new(),
}],
prompts: vec!["Run smoke tests before deploy.".into()],
location: None,
}];
let ctx = PromptContext {
workspace_dir: Path::new("/tmp"),
model_name: "test-model",
tools: &tools,
skills: &skills,
identity_config: None,
dispatcher_instructions: "",
};
let output = SkillsSection.build(&ctx).unwrap();
assert!(output.contains("<available_skills>"));
assert!(output.contains("<name>deploy</name>"));
assert!(output.contains("<instruction>Run smoke tests before deploy.</instruction>"));
assert!(output.contains("<name>release_checklist</name>"));
assert!(output.contains("<kind>shell</kind>"));
}
#[test]
fn datetime_section_includes_timestamp_and_timezone() {
let tools: Vec<Box<dyn Tool>> = vec![];
let ctx = PromptContext {
workspace_dir: Path::new("/tmp"),
model_name: "test-model",
tools: &tools,
skills: &[],
identity_config: None,
dispatcher_instructions: "instr",
};
let rendered = DateTimeSection.build(&ctx).unwrap();
assert!(rendered.starts_with("## Current Date & Time\n\n"));
let payload = rendered.trim_start_matches("## Current Date & Time\n\n");
assert!(payload.chars().any(|c| c.is_ascii_digit()));
assert!(payload.contains(" ("));
assert!(payload.ends_with(')'));
}
#[test]
fn prompt_builder_inlines_and_escapes_skills() {
let tools: Vec<Box<dyn Tool>> = vec![];
let skills = vec![crate::skills::Skill {
name: "code<review>&".into(),
description: "Review \"unsafe\" and 'risky' bits".into(),
version: "1.0.0".into(),
author: None,
tags: vec![],
tools: vec![crate::skills::SkillTool {
name: "run\"linter\"".into(),
description: "Run <lint> & report".into(),
kind: "shell&exec".into(),
command: "cargo clippy".into(),
args: std::collections::HashMap::new(),
}],
prompts: vec!["Use <tool_call> and & keep output \"safe\"".into()],
location: None,
}];
let ctx = PromptContext {
workspace_dir: Path::new("/tmp/workspace"),
model_name: "test-model",
tools: &tools,
skills: &skills,
identity_config: None,
dispatcher_instructions: "",
};
let prompt = SystemPromptBuilder::with_defaults().build(&ctx).unwrap();
assert!(prompt.contains("<available_skills>"));
assert!(prompt.contains("<name>code&lt;review&gt;&amp;</name>"));
assert!(prompt.contains(
"<description>Review &quot;unsafe&quot; and &apos;risky&apos; bits</description>"
));
assert!(prompt.contains("<name>run&quot;linter&quot;</name>"));
assert!(prompt.contains("<description>Run &lt;lint&gt; &amp; report</description>"));
assert!(prompt.contains("<kind>shell&amp;exec</kind>"));
assert!(prompt.contains(
"<instruction>Use &lt;tool_call&gt; and &amp; keep output &quot;safe&quot;</instruction>"
));
}
}

View file

@ -624,7 +624,7 @@ async fn history_trims_after_max_messages() {
// ═══════════════════════════════════════════════════════════════════════════
#[tokio::test]
async fn auto_save_stores_only_user_messages_in_memory() {
async fn auto_save_stores_messages_in_memory() {
let (mem, _tmp) = make_sqlite_memory();
let provider = Box::new(ScriptedProvider::new(vec![text_response(
"I remember everything",
@ -639,25 +639,11 @@ async fn auto_save_stores_only_user_messages_in_memory() {
let _ = agent.turn("Remember this fact").await.unwrap();
// Auto-save only persists user-stated input, never assistant-generated summaries.
// Both user message and assistant response should be saved
let count = mem.count().await.unwrap();
assert_eq!(
count, 1,
"Expected exactly 1 user memory entry, got {count}"
);
let stored = mem.get("user_msg").await.unwrap();
assert!(stored.is_some(), "Expected user_msg key to be present");
assert_eq!(
stored.unwrap().content,
"Remember this fact",
"Stored memory should match the original user message"
);
let assistant = mem.get("assistant_resp").await.unwrap();
assert!(
assistant.is_none(),
"assistant_resp should not be auto-saved anymore"
);
assert!(
count >= 2,
"Expected at least 2 memory entries, got {count}"
);
}

View file

@ -121,12 +121,12 @@ impl AuthService {
return Ok(None);
};
let credential = match profile.kind {
let token = match profile.kind {
AuthProfileKind::Token => profile.token,
AuthProfileKind::OAuth => profile.token_set.map(|t| t.access_token),
};
Ok(credential.filter(|t| !t.trim().is_empty()))
Ok(token.filter(|t| !t.trim().is_empty()))
}
pub async fn get_valid_openai_access_token(

View file

@ -626,8 +626,8 @@ mod tests {
assert!(!token_set.is_expiring_within(Duration::from_secs(1)));
}
#[tokio::test]
async fn store_roundtrip_with_encryption() {
#[test]
fn store_roundtrip_with_encryption() {
let tmp = TempDir::new().unwrap();
let store = AuthProfilesStore::new(tmp.path(), true);
@ -661,14 +661,14 @@ mod tests {
Some("refresh-123")
);
let raw = tokio::fs::read_to_string(store.path()).await.unwrap();
let raw = fs::read_to_string(store.path()).unwrap();
assert!(raw.contains("enc2:"));
assert!(!raw.contains("refresh-123"));
assert!(!raw.contains("access-123"));
}
#[tokio::test]
async fn atomic_write_replaces_file() {
#[test]
fn atomic_write_replaces_file() {
let tmp = TempDir::new().unwrap();
let store = AuthProfilesStore::new(tmp.path(), false);
@ -678,7 +678,7 @@ mod tests {
let path = store.path().to_path_buf();
assert!(path.exists());
let contents = tokio::fs::read_to_string(path).await.unwrap();
let contents = fs::read_to_string(path).unwrap();
assert!(contents.contains("\"schema_version\": 1"));
}
}

View file

@ -47,7 +47,6 @@ impl Channel for CliChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
if tx.send(msg).await.is_err() {
@ -75,7 +74,6 @@ mod tests {
content: "hello".into(),
recipient: "user".into(),
subject: None,
thread_ts: None,
})
.await;
assert!(result.is_ok());
@ -89,7 +87,6 @@ mod tests {
content: String::new(),
recipient: String::new(),
subject: None,
thread_ts: None,
})
.await;
assert!(result.is_ok());
@ -110,7 +107,6 @@ mod tests {
content: "hello".into(),
channel: "cli".into(),
timestamp: 1_234_567_890,
thread_ts: None,
};
assert_eq!(msg.id, "test-id");
assert_eq!(msg.sender, "user");
@ -129,7 +125,6 @@ mod tests {
content: "c".into(),
channel: "ch".into(),
timestamp: 0,
thread_ts: None,
};
let cloned = msg.clone();
assert_eq!(cloned.id, msg.id);

View file

@ -169,7 +169,7 @@ impl Channel for DingTalkChannel {
_ => continue,
};
let frame: serde_json::Value = match serde_json::from_str(msg.as_ref()) {
let frame: serde_json::Value = match serde_json::from_str(&msg) {
Ok(v) => v,
Err(_) => continue,
};
@ -195,7 +195,7 @@ impl Channel for DingTalkChannel {
"data": "",
});
if let Err(e) = write.send(Message::Text(pong.to_string().into())).await {
if let Err(e) = write.send(Message::Text(pong.to_string())).await {
tracing::warn!("DingTalk: failed to send pong: {e}");
break;
}
@ -262,7 +262,7 @@ impl Channel for DingTalkChannel {
"message": "OK",
"data": "",
});
let _ = write.send(Message::Text(ack.to_string().into())).await;
let _ = write.send(Message::Text(ack.to_string())).await;
let channel_msg = ChannelMessage {
id: Uuid::new_v4().to_string(),
@ -274,7 +274,6 @@ impl Channel for DingTalkChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
if tx.send(channel_msg).await.is_err() {

View file

@ -3,7 +3,6 @@ use async_trait::async_trait;
use futures_util::{SinkExt, StreamExt};
use parking_lot::Mutex;
use serde_json::json;
use std::collections::HashMap;
use tokio_tungstenite::tungstenite::Message;
use uuid::Uuid;
@ -14,7 +13,7 @@ pub struct DiscordChannel {
allowed_users: Vec<String>,
listen_to_bots: bool,
mention_only: bool,
typing_handles: Mutex<HashMap<String, tokio::task::JoinHandle<()>>>,
typing_handle: Mutex<Option<tokio::task::JoinHandle<()>>>,
}
impl DiscordChannel {
@ -31,7 +30,7 @@ impl DiscordChannel {
allowed_users,
listen_to_bots,
mention_only,
typing_handles: Mutex::new(HashMap::new()),
typing_handle: Mutex::new(None),
}
}
@ -273,9 +272,7 @@ impl Channel for DiscordChannel {
}
}
});
write
.send(Message::Text(identify.to_string().into()))
.await?;
write.send(Message::Text(identify.to_string())).await?;
tracing::info!("Discord: connected and identified");
@ -304,7 +301,7 @@ impl Channel for DiscordChannel {
_ = hb_rx.recv() => {
let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
let hb = json!({"op": 1, "d": d});
if write.send(Message::Text(hb.to_string().into())).await.is_err() {
if write.send(Message::Text(hb.to_string())).await.is_err() {
break;
}
}
@ -315,7 +312,7 @@ impl Channel for DiscordChannel {
_ => continue,
};
let event: serde_json::Value = match serde_json::from_str(msg.as_ref()) {
let event: serde_json::Value = match serde_json::from_str(&msg) {
Ok(e) => e,
Err(_) => continue,
};
@ -332,7 +329,7 @@ impl Channel for DiscordChannel {
1 => {
let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
let hb = json!({"op": 1, "d": d});
if write.send(Message::Text(hb.to_string().into())).await.is_err() {
if write.send(Message::Text(hb.to_string())).await.is_err() {
break;
}
continue;
@ -416,7 +413,6 @@ impl Channel for DiscordChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
if tx.send(channel_msg).await.is_err() {
@ -458,15 +454,15 @@ impl Channel for DiscordChannel {
}
});
let mut guard = self.typing_handles.lock();
guard.insert(recipient.to_string(), handle);
let mut guard = self.typing_handle.lock();
*guard = Some(handle);
Ok(())
}
async fn stop_typing(&self, recipient: &str) -> anyhow::Result<()> {
let mut guard = self.typing_handles.lock();
if let Some(handle) = guard.remove(recipient) {
async fn stop_typing(&self, _recipient: &str) -> anyhow::Result<()> {
let mut guard = self.typing_handle.lock();
if let Some(handle) = guard.take() {
handle.abort();
}
Ok(())
@ -755,18 +751,18 @@ mod tests {
}
#[test]
fn typing_handles_start_empty() {
fn typing_handle_starts_as_none() {
let ch = DiscordChannel::new("fake".into(), None, vec![], false, false);
let guard = ch.typing_handles.lock();
assert!(guard.is_empty());
let guard = ch.typing_handle.lock();
assert!(guard.is_none());
}
#[tokio::test]
async fn start_typing_sets_handle() {
let ch = DiscordChannel::new("fake".into(), None, vec![], false, false);
let _ = ch.start_typing("123456").await;
let guard = ch.typing_handles.lock();
assert!(guard.contains_key("123456"));
let guard = ch.typing_handle.lock();
assert!(guard.is_some());
}
#[tokio::test]
@ -774,8 +770,8 @@ mod tests {
let ch = DiscordChannel::new("fake".into(), None, vec![], false, false);
let _ = ch.start_typing("123456").await;
let _ = ch.stop_typing("123456").await;
let guard = ch.typing_handles.lock();
assert!(!guard.contains_key("123456"));
let guard = ch.typing_handle.lock();
assert!(guard.is_none());
}
#[tokio::test]
@ -786,21 +782,12 @@ mod tests {
}
#[tokio::test]
async fn concurrent_typing_handles_are_independent() {
async fn start_typing_replaces_existing_task() {
let ch = DiscordChannel::new("fake".into(), None, vec![], false, false);
let _ = ch.start_typing("111").await;
let _ = ch.start_typing("222").await;
{
let guard = ch.typing_handles.lock();
assert_eq!(guard.len(), 2);
assert!(guard.contains_key("111"));
assert!(guard.contains_key("222"));
}
// Stopping one does not affect the other
let _ = ch.stop_typing("111").await;
let guard = ch.typing_handles.lock();
assert_eq!(guard.len(), 1);
assert!(guard.contains_key("222"));
let guard = ch.typing_handle.lock();
assert!(guard.is_some());
}
// ── Message ID edge cases ─────────────────────────────────────
@ -853,113 +840,4 @@ mod tests {
// Should have UUID dashes
assert!(id.contains('-'));
}
// ─────────────────────────────────────────────────────────────────────
// TG6: Channel platform limit edge cases for Discord (2000 char limit)
// Prevents: Pattern 6 — issues #574, #499
// ─────────────────────────────────────────────────────────────────────
#[test]
fn split_message_code_block_at_boundary() {
// Code block that spans the split boundary
let mut msg = String::new();
msg.push_str("```rust\n");
msg.push_str(&"x".repeat(1990));
msg.push_str("\n```\nMore text after code block");
let parts = split_message_for_discord(&msg);
assert!(
parts.len() >= 2,
"code block spanning boundary should split"
);
for part in &parts {
assert!(
part.len() <= DISCORD_MAX_MESSAGE_LENGTH,
"each part must be <= {DISCORD_MAX_MESSAGE_LENGTH}, got {}",
part.len()
);
}
}
#[test]
fn split_message_single_long_word_exceeds_limit() {
// A single word longer than 2000 chars must be hard-split
let long_word = "a".repeat(2500);
let parts = split_message_for_discord(&long_word);
assert!(parts.len() >= 2, "word exceeding limit must be split");
for part in &parts {
assert!(
part.len() <= DISCORD_MAX_MESSAGE_LENGTH,
"hard-split part must be <= {DISCORD_MAX_MESSAGE_LENGTH}, got {}",
part.len()
);
}
// Reassembled content should match original
let reassembled: String = parts.join("");
assert_eq!(reassembled, long_word);
}
#[test]
fn split_message_exactly_at_limit_no_split() {
let msg = "a".repeat(DISCORD_MAX_MESSAGE_LENGTH);
let parts = split_message_for_discord(&msg);
assert_eq!(parts.len(), 1, "message exactly at limit should not split");
assert_eq!(parts[0].len(), DISCORD_MAX_MESSAGE_LENGTH);
}
#[test]
fn split_message_one_over_limit_splits() {
let msg = "a".repeat(DISCORD_MAX_MESSAGE_LENGTH + 1);
let parts = split_message_for_discord(&msg);
assert!(parts.len() >= 2, "message 1 char over limit must split");
}
#[test]
fn split_message_many_short_lines() {
// Many short lines should be batched into chunks under the limit
let msg: String = (0..500).map(|i| format!("line {i}\n")).collect();
let parts = split_message_for_discord(&msg);
for part in &parts {
assert!(
part.len() <= DISCORD_MAX_MESSAGE_LENGTH,
"short-line batch must be <= limit"
);
}
// All content should be preserved
let reassembled: String = parts.join("");
assert_eq!(reassembled.trim(), msg.trim());
}
#[test]
fn split_message_only_whitespace() {
let msg = " \n\n\t ";
let parts = split_message_for_discord(msg);
// Should handle gracefully without panic
assert!(parts.len() <= 1);
}
#[test]
fn split_message_emoji_at_boundary() {
// Emoji are multi-byte; ensure we don't split mid-emoji
let mut msg = "a".repeat(1998);
msg.push_str("🎉🎊"); // 2 emoji at the boundary (2000 chars total)
let parts = split_message_for_discord(&msg);
for part in &parts {
// The function splits on character count, not byte count
assert!(
part.chars().count() <= DISCORD_MAX_MESSAGE_LENGTH,
"emoji boundary split must respect limit"
);
}
}
#[test]
fn split_message_consecutive_newlines_at_boundary() {
let mut msg = "a".repeat(1995);
msg.push_str("\n\n\n\n\n");
msg.push_str(&"b".repeat(100));
let parts = split_message_for_discord(&msg);
for part in &parts {
assert!(part.len() <= DISCORD_MAX_MESSAGE_LENGTH);
}
}
}

View file

@ -20,7 +20,6 @@ use lettre::{Message, SmtpTransport, Transport};
use mail_parser::{MessageParser, MimeHeaders};
use rustls::{ClientConfig, RootCertStore};
use rustls_pki_types::DnsName;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::sync::Arc;
@ -36,7 +35,7 @@ use uuid::Uuid;
use super::traits::{Channel, ChannelMessage, SendMessage};
/// Email channel configuration
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EmailConfig {
/// IMAP server hostname
pub imap_host: String,
@ -154,14 +153,7 @@ impl EmailChannel {
_ => {}
}
}
let mut normalized = String::with_capacity(result.len());
for word in result.split_whitespace() {
if !normalized.is_empty() {
normalized.push(' ');
}
normalized.push_str(word);
}
normalized
result.split_whitespace().collect::<Vec<_>>().join(" ")
}
/// Extract the sender address from a parsed email
@ -450,7 +442,6 @@ impl EmailChannel {
content: email.content,
channel: "email".to_string(),
timestamp: email.timestamp,
thread_ts: None,
};
if tx.send(msg).await.is_err() {

View file

@ -231,7 +231,6 @@ end tell"#
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
if tx.send(msg).await.is_err() {

View file

@ -163,17 +163,12 @@ fn split_message(message: &str, max_bytes: usize) -> Vec<String> {
// Guard against max_bytes == 0 to prevent infinite loop
if max_bytes == 0 {
let mut full = String::new();
for l in message
.lines()
.map(|l| l.trim_end_matches('\r'))
.filter(|l| !l.is_empty())
{
if !full.is_empty() {
full.push(' ');
}
full.push_str(l);
}
let full: String = message
.lines()
.map(|l| l.trim_end_matches('\r'))
.filter(|l| !l.is_empty())
.collect::<Vec<_>>()
.join(" ");
if full.is_empty() {
chunks.push(String::new());
} else {
@ -460,7 +455,6 @@ impl Channel for IrcChannel {
"AUTHENTICATE" => {
// Server sends "AUTHENTICATE +" to request credentials
if sasl_pending && msg.params.first().is_some_and(|p| p == "+") {
// sasl_password is loaded from runtime config, not hard-coded
if let Some(password) = self.sasl_password.as_deref() {
let encoded = encode_sasl_plain(&current_nick, password);
let mut guard = self.writer.lock().await;
@ -579,7 +573,6 @@ impl Channel for IrcChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
if tx.send(channel_msg).await.is_err() {

View file

@ -127,12 +127,6 @@ struct LarkMessage {
/// If no binary frame (pong or event) is received within this window, reconnect.
const WS_HEARTBEAT_TIMEOUT: Duration = Duration::from_secs(300);
/// Returns true when the WebSocket frame indicates live traffic that should
/// refresh the heartbeat watchdog.
fn should_refresh_last_recv(msg: &WsMsg) -> bool {
matches!(msg, WsMsg::Binary(_) | WsMsg::Ping(_) | WsMsg::Pong(_))
}
/// Lark/Feishu channel.
///
/// Supports two receive modes (configured via `receive_mode` in config):
@ -288,7 +282,7 @@ impl LarkChannel {
payload: None,
};
if write
.send(WsMsg::Binary(initial_ping.encode_to_vec().into()))
.send(WsMsg::Binary(initial_ping.encode_to_vec()))
.await
.is_err()
{
@ -309,7 +303,7 @@ impl LarkChannel {
headers: vec![PbHeader { key: "type".into(), value: "ping".into() }],
payload: None,
};
if write.send(WsMsg::Binary(ping.encode_to_vec().into())).await.is_err() {
if write.send(WsMsg::Binary(ping.encode_to_vec())).await.is_err() {
tracing::warn!("Lark: ping failed, reconnecting");
break;
}
@ -327,20 +321,11 @@ impl LarkChannel {
msg = read.next() => {
let raw = match msg {
Some(Ok(ws_msg)) => {
if should_refresh_last_recv(&ws_msg) {
last_recv = Instant::now();
}
match ws_msg {
WsMsg::Binary(b) => b,
WsMsg::Ping(d) => { let _ = write.send(WsMsg::Pong(d)).await; continue; }
WsMsg::Pong(_) => continue,
WsMsg::Close(_) => { tracing::info!("Lark: WS closed — reconnecting"); break; }
_ => continue,
}
}
None => { tracing::info!("Lark: WS closed — reconnecting"); break; }
Some(Ok(WsMsg::Binary(b))) => { last_recv = Instant::now(); b }
Some(Ok(WsMsg::Ping(d))) => { let _ = write.send(WsMsg::Pong(d)).await; continue; }
Some(Ok(WsMsg::Close(_))) | None => { tracing::info!("Lark: WS closed — reconnecting"); break; }
Some(Err(e)) => { tracing::error!("Lark: WS read error: {e}"); break; }
_ => continue,
};
let frame = match PbFrame::decode(&raw[..]) {
@ -378,7 +363,7 @@ impl LarkChannel {
let mut ack = frame.clone();
ack.payload = Some(br#"{"code":200,"headers":{},"data":[]}"#.to_vec());
ack.headers.push(PbHeader { key: "biz_rt".into(), value: "0".into() });
let _ = write.send(WsMsg::Binary(ack.encode_to_vec().into())).await;
let _ = write.send(WsMsg::Binary(ack.encode_to_vec())).await;
}
// Fragment reassembly
@ -474,7 +459,6 @@ impl LarkChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
tracing::debug!("Lark WS: message in {}", lark_msg.chat_id);
@ -636,7 +620,6 @@ impl LarkChannel {
content: text,
channel: "lark".to_string(),
timestamp,
thread_ts: None,
});
messages
@ -915,21 +898,6 @@ mod tests {
assert_eq!(ch.name(), "lark");
}
#[test]
fn lark_ws_activity_refreshes_heartbeat_watchdog() {
assert!(should_refresh_last_recv(&WsMsg::Binary(
vec![1, 2, 3].into()
)));
assert!(should_refresh_last_recv(&WsMsg::Ping(vec![9, 9].into())));
assert!(should_refresh_last_recv(&WsMsg::Pong(vec![8, 8].into())));
}
#[test]
fn lark_ws_non_activity_frames_do_not_refresh_heartbeat_watchdog() {
assert!(!should_refresh_last_recv(&WsMsg::Text("hello".into())));
assert!(!should_refresh_last_recv(&WsMsg::Close(None)));
}
#[test]
fn lark_user_allowed_exact() {
let ch = make_channel();

View file

@ -1,793 +0,0 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use async_trait::async_trait;
use uuid::Uuid;
/// Linq channel — uses the Linq Partner V3 API for iMessage, RCS, and SMS.
///
/// This channel operates in webhook mode (push-based) rather than polling.
/// Messages are received via the gateway's `/linq` webhook endpoint.
/// The `listen` method here is a keepalive placeholder; actual message handling
/// happens in the gateway when Linq sends webhook events.
pub struct LinqChannel {
api_token: String,
from_phone: String,
allowed_senders: Vec<String>,
client: reqwest::Client,
}
const LINQ_API_BASE: &str = "https://api.linqapp.com/api/partner/v3";
impl LinqChannel {
pub fn new(api_token: String, from_phone: String, allowed_senders: Vec<String>) -> Self {
Self {
api_token,
from_phone,
allowed_senders,
client: reqwest::Client::new(),
}
}
/// Check if a sender phone number is allowed (E.164 format: +1234567890)
fn is_sender_allowed(&self, phone: &str) -> bool {
self.allowed_senders.iter().any(|n| n == "*" || n == phone)
}
/// Get the bot's phone number
pub fn phone_number(&self) -> &str {
&self.from_phone
}
fn media_part_to_image_marker(part: &serde_json::Value) -> Option<String> {
let source = part
.get("url")
.or_else(|| part.get("value"))
.and_then(|value| value.as_str())
.map(str::trim)
.filter(|value| !value.is_empty())?;
let mime_type = part
.get("mime_type")
.and_then(|value| value.as_str())
.map(str::trim)
.unwrap_or_default()
.to_ascii_lowercase();
if !mime_type.starts_with("image/") {
return None;
}
Some(format!("[IMAGE:{source}]"))
}
/// Parse an incoming webhook payload from Linq and extract messages.
///
/// Linq webhook envelope:
/// ```json
/// {
/// "api_version": "v3",
/// "event_type": "message.received",
/// "event_id": "...",
/// "created_at": "...",
/// "trace_id": "...",
/// "data": {
/// "chat_id": "...",
/// "from": "+1...",
/// "recipient_phone": "+1...",
/// "is_from_me": false,
/// "service": "iMessage",
/// "message": {
/// "id": "...",
/// "parts": [{ "type": "text", "value": "..." }]
/// }
/// }
/// }
/// ```
pub fn parse_webhook_payload(&self, payload: &serde_json::Value) -> Vec<ChannelMessage> {
let mut messages = Vec::new();
// Only handle message.received events
let event_type = payload
.get("event_type")
.and_then(|e| e.as_str())
.unwrap_or("");
if event_type != "message.received" {
tracing::debug!("Linq: skipping non-message event: {event_type}");
return messages;
}
let Some(data) = payload.get("data") else {
return messages;
};
// Skip messages sent by the bot itself
if data
.get("is_from_me")
.and_then(|v| v.as_bool())
.unwrap_or(false)
{
tracing::debug!("Linq: skipping is_from_me message");
return messages;
}
// Get sender phone number
let Some(from) = data.get("from").and_then(|f| f.as_str()) else {
return messages;
};
// Normalize to E.164 format
let normalized_from = if from.starts_with('+') {
from.to_string()
} else {
format!("+{from}")
};
// Check allowlist
if !self.is_sender_allowed(&normalized_from) {
tracing::warn!(
"Linq: ignoring message from unauthorized sender: {normalized_from}. \
Add to channels.linq.allowed_senders in config.toml, \
or run `zeroclaw onboard --channels-only` to configure interactively."
);
return messages;
}
// Get chat_id for reply routing
let chat_id = data
.get("chat_id")
.and_then(|c| c.as_str())
.unwrap_or("")
.to_string();
// Extract text from message parts
let Some(message) = data.get("message") else {
return messages;
};
let Some(parts) = message.get("parts").and_then(|p| p.as_array()) else {
return messages;
};
let content_parts: Vec<String> = parts
.iter()
.filter_map(|part| {
let part_type = part.get("type").and_then(|t| t.as_str())?;
match part_type {
"text" => part
.get("value")
.and_then(|v| v.as_str())
.map(ToString::to_string),
"media" | "image" => {
if let Some(marker) = Self::media_part_to_image_marker(part) {
Some(marker)
} else {
tracing::debug!("Linq: skipping unsupported {part_type} part");
None
}
}
_ => {
tracing::debug!("Linq: skipping {part_type} part");
None
}
}
})
.collect();
if content_parts.is_empty() {
return messages;
}
let content = content_parts.join("\n").trim().to_string();
if content.is_empty() {
return messages;
}
// Get timestamp from created_at or use current time
let timestamp = payload
.get("created_at")
.and_then(|t| t.as_str())
.and_then(|t| {
chrono::DateTime::parse_from_rfc3339(t)
.ok()
.map(|dt| dt.timestamp().cast_unsigned())
})
.unwrap_or_else(|| {
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs()
});
// Use chat_id as reply_target so replies go to the right conversation
let reply_target = if chat_id.is_empty() {
normalized_from.clone()
} else {
chat_id
};
messages.push(ChannelMessage {
id: Uuid::new_v4().to_string(),
reply_target,
sender: normalized_from,
content,
channel: "linq".to_string(),
timestamp,
thread_ts: None,
});
messages
}
}
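The sender gating inside `parse_webhook_payload` (normalize a bare number to E.164 by prefixing `+`, then check the allowlist, where `"*"` admits any sender) can be sketched standalone with std only. The helper names below are illustrative, not part of the crate:

```rust
/// Prefix a bare number with '+', mirroring the normalization step above.
fn normalize_e164(from: &str) -> String {
    if from.starts_with('+') {
        from.to_string()
    } else {
        format!("+{from}")
    }
}

/// Allowlist semantics: "*" admits any sender, otherwise exact match.
fn sender_allowed(allowed: &[&str], phone: &str) -> bool {
    allowed.iter().any(|n| *n == "*" || *n == phone)
}

fn main() {
    assert_eq!(normalize_e164("1234567890"), "+1234567890");
    assert_eq!(normalize_e164("+1234567890"), "+1234567890");
    assert!(sender_allowed(&["+1234567890"], &normalize_e164("1234567890")));
    assert!(!sender_allowed(&["+1234567890"], "+9999999999"));
    assert!(sender_allowed(&["*"], "+9999999999"));
    println!("ok");
}
```

An empty allowlist admits nobody, which matches the `linq_parse_unauthorized_sender` and `linq_sender_allowed_empty` tests below.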
#[async_trait]
impl Channel for LinqChannel {
fn name(&self) -> &str {
"linq"
}
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
// If reply_target looks like a chat_id, send to existing chat.
// Otherwise create a new chat with the recipient phone number.
let recipient = &message.recipient;
let body = serde_json::json!({
"message": {
"parts": [{
"type": "text",
"value": message.content
}]
}
});
// Try sending to existing chat (recipient is chat_id)
let url = format!("{LINQ_API_BASE}/chats/{recipient}/messages");
let resp = self
.client
.post(&url)
.bearer_auth(&self.api_token)
.header("Content-Type", "application/json")
.json(&body)
.send()
.await?;
if resp.status().is_success() {
return Ok(());
}
// If the chat_id-based send failed with 404, try creating a new chat
if resp.status() == reqwest::StatusCode::NOT_FOUND {
let new_chat_body = serde_json::json!({
"from": self.from_phone,
"to": [recipient],
"message": {
"parts": [{
"type": "text",
"value": message.content
}]
}
});
let create_resp = self
.client
.post(format!("{LINQ_API_BASE}/chats"))
.bearer_auth(&self.api_token)
.header("Content-Type", "application/json")
.json(&new_chat_body)
.send()
.await?;
if !create_resp.status().is_success() {
let status = create_resp.status();
let error_body = create_resp.text().await.unwrap_or_default();
tracing::error!("Linq create chat failed: {status} — {error_body}");
anyhow::bail!("Linq API error: {status}");
}
return Ok(());
}
let status = resp.status();
let error_body = resp.text().await.unwrap_or_default();
tracing::error!("Linq send failed: {status} — {error_body}");
anyhow::bail!("Linq API error: {status}");
}
async fn listen(&self, _tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> anyhow::Result<()> {
// Linq uses webhooks (push-based), not polling.
// Messages are received via the gateway's /linq endpoint.
tracing::info!(
"Linq channel active (webhook mode). \
Configure Linq webhook to POST to your gateway's /linq endpoint."
);
// Keep the task alive — it will be cancelled when the channel shuts down
loop {
tokio::time::sleep(std::time::Duration::from_secs(3600)).await;
}
}
async fn health_check(&self) -> bool {
// Check if we can reach the Linq API
let url = format!("{LINQ_API_BASE}/phonenumbers");
self.client
.get(&url)
.bearer_auth(&self.api_token)
.send()
.await
.map(|r| r.status().is_success())
.unwrap_or(false)
}
async fn start_typing(&self, recipient: &str) -> anyhow::Result<()> {
let url = format!("{LINQ_API_BASE}/chats/{recipient}/typing");
let resp = self
.client
.post(&url)
.bearer_auth(&self.api_token)
.send()
.await?;
if !resp.status().is_success() {
tracing::debug!("Linq start_typing failed: {}", resp.status());
}
Ok(())
}
async fn stop_typing(&self, recipient: &str) -> anyhow::Result<()> {
let url = format!("{LINQ_API_BASE}/chats/{recipient}/typing");
let resp = self
.client
.delete(&url)
.bearer_auth(&self.api_token)
.send()
.await?;
if !resp.status().is_success() {
tracing::debug!("Linq stop_typing failed: {}", resp.status());
}
Ok(())
}
}
/// Verify a Linq webhook signature.
///
/// Linq signs webhooks with HMAC-SHA256 over `"{timestamp}.{body}"`.
/// The signature is sent in `X-Webhook-Signature` (hex-encoded) and the
/// timestamp in `X-Webhook-Timestamp`. Reject timestamps older than 300s.
pub fn verify_linq_signature(secret: &str, body: &str, timestamp: &str, signature: &str) -> bool {
use hmac::{Hmac, Mac};
use sha2::Sha256;
// Reject stale timestamps (>300s old)
if let Ok(ts) = timestamp.parse::<i64>() {
let now = chrono::Utc::now().timestamp();
if (now - ts).unsigned_abs() > 300 {
tracing::warn!("Linq: rejecting stale webhook timestamp ({ts}, now={now})");
return false;
}
} else {
tracing::warn!("Linq: invalid webhook timestamp: {timestamp}");
return false;
}
// Compute HMAC-SHA256 over "{timestamp}.{body}"
let message = format!("{timestamp}.{body}");
let Ok(mut mac) = Hmac::<Sha256>::new_from_slice(secret.as_bytes()) else {
return false;
};
mac.update(message.as_bytes());
let signature_hex = signature
.trim()
.strip_prefix("sha256=")
.unwrap_or(signature);
let Ok(provided) = hex::decode(signature_hex.trim()) else {
tracing::warn!("Linq: invalid webhook signature format");
return false;
};
// Constant-time comparison via HMAC verify.
mac.verify_slice(&provided).is_ok()
}
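The staleness rule in `verify_linq_signature` (reject timestamps more than 300 s from now, in either direction) can be isolated as a small std-only check; `std::time` stands in here for the `chrono` call used above, and the function name is illustrative:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Accept a webhook timestamp only within a +/- `max_skew_secs` window,
/// mirroring the staleness rule in `verify_linq_signature` above.
fn timestamp_fresh(ts: &str, max_skew_secs: u64) -> bool {
    let Ok(ts) = ts.parse::<i64>() else {
        return false;
    };
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map_or(0, |d| d.as_secs() as i64);
    (now - ts).unsigned_abs() <= max_skew_secs
}

fn main() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map_or(0, |d| d.as_secs() as i64);
    assert!(timestamp_fresh(&now.to_string(), 300));
    assert!(!timestamp_fresh(&(now - 600).to_string(), 300));
    assert!(!timestamp_fresh("not-a-number", 300));
    println!("ok");
}
```

Checking the timestamp before computing the HMAC, as the real function does, keeps the hot path cheap for replayed or malformed requests.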
#[cfg(test)]
mod tests {
use super::*;
fn make_channel() -> LinqChannel {
LinqChannel::new(
"test-token".into(),
"+15551234567".into(),
vec!["+1234567890".into()],
)
}
#[test]
fn linq_channel_name() {
let ch = make_channel();
assert_eq!(ch.name(), "linq");
}
#[test]
fn linq_sender_allowed_exact() {
let ch = make_channel();
assert!(ch.is_sender_allowed("+1234567890"));
assert!(!ch.is_sender_allowed("+9876543210"));
}
#[test]
fn linq_sender_allowed_wildcard() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
assert!(ch.is_sender_allowed("+1234567890"));
assert!(ch.is_sender_allowed("+9999999999"));
}
#[test]
fn linq_sender_allowed_empty() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec![]);
assert!(!ch.is_sender_allowed("+1234567890"));
}
#[test]
fn linq_parse_valid_text_message() {
let ch = make_channel();
let payload = serde_json::json!({
"api_version": "v3",
"event_type": "message.received",
"event_id": "evt-123",
"created_at": "2025-01-15T12:00:00Z",
"trace_id": "trace-456",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"recipient_phone": "+15551234567",
"is_from_me": false,
"service": "iMessage",
"message": {
"id": "msg-abc",
"parts": [{
"type": "text",
"value": "Hello ZeroClaw!"
}]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
assert_eq!(msgs[0].sender, "+1234567890");
assert_eq!(msgs[0].content, "Hello ZeroClaw!");
assert_eq!(msgs[0].channel, "linq");
assert_eq!(msgs[0].reply_target, "chat-789");
}
#[test]
fn linq_parse_skip_is_from_me() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": true,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "My own message" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "is_from_me messages should be skipped");
}
#[test]
fn linq_parse_skip_non_message_event() {
let ch = make_channel();
let payload = serde_json::json!({
"event_type": "message.delivered",
"data": {
"chat_id": "chat-789",
"message_id": "msg-abc"
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "Non-message events should be skipped");
}
#[test]
fn linq_parse_unauthorized_sender() {
let ch = make_channel();
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+9999999999",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "Spam" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "Unauthorized senders should be filtered");
}
#[test]
fn linq_parse_empty_payload() {
let ch = make_channel();
let payload = serde_json::json!({});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty());
}
#[test]
fn linq_parse_media_only_translated_to_image_marker() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{
"type": "media",
"url": "https://example.com/image.jpg",
"mime_type": "image/jpeg"
}]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
assert_eq!(msgs[0].content, "[IMAGE:https://example.com/image.jpg]");
}
#[test]
fn linq_parse_media_non_image_still_skipped() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{
"type": "media",
"url": "https://example.com/sound.mp3",
"mime_type": "audio/mpeg"
}]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "Non-image media should still be skipped");
}
#[test]
fn linq_parse_multiple_text_parts() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [
{ "type": "text", "value": "First part" },
{ "type": "text", "value": "Second part" }
]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
assert_eq!(msgs[0].content, "First part\nSecond part");
}
/// Fixture secret used exclusively in signature-verification unit tests (not a real credential).
const TEST_WEBHOOK_SECRET: &str = "test_webhook_secret";
#[test]
fn linq_signature_verification_valid() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
let now = chrono::Utc::now().timestamp().to_string();
// Compute expected signature
use hmac::{Hmac, Mac};
use sha2::Sha256;
let message = format!("{now}.{body}");
let mut mac = Hmac::<Sha256>::new_from_slice(secret.as_bytes()).unwrap();
mac.update(message.as_bytes());
let signature = hex::encode(mac.finalize().into_bytes());
assert!(verify_linq_signature(secret, body, &now, &signature));
}
#[test]
fn linq_signature_verification_invalid() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
let now = chrono::Utc::now().timestamp().to_string();
assert!(!verify_linq_signature(
secret,
body,
&now,
"deadbeefdeadbeefdeadbeef"
));
}
#[test]
fn linq_signature_verification_stale_timestamp() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
// 10 minutes ago — stale
let stale_ts = (chrono::Utc::now().timestamp() - 600).to_string();
// Even with correct signature, stale timestamp should fail
use hmac::{Hmac, Mac};
use sha2::Sha256;
let message = format!("{stale_ts}.{body}");
let mut mac = Hmac::<Sha256>::new_from_slice(secret.as_bytes()).unwrap();
mac.update(message.as_bytes());
let signature = hex::encode(mac.finalize().into_bytes());
assert!(
!verify_linq_signature(secret, body, &stale_ts, &signature),
"Stale timestamps (>300s) should be rejected"
);
}
#[test]
fn linq_signature_verification_accepts_sha256_prefix() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
let now = chrono::Utc::now().timestamp().to_string();
use hmac::{Hmac, Mac};
use sha2::Sha256;
let message = format!("{now}.{body}");
let mut mac = Hmac::<Sha256>::new_from_slice(secret.as_bytes()).unwrap();
mac.update(message.as_bytes());
let signature = format!("sha256={}", hex::encode(mac.finalize().into_bytes()));
assert!(verify_linq_signature(secret, body, &now, &signature));
}
#[test]
fn linq_signature_verification_accepts_uppercase_hex() {
let secret = TEST_WEBHOOK_SECRET;
let body = r#"{"event_type":"message.received"}"#;
let now = chrono::Utc::now().timestamp().to_string();
use hmac::{Hmac, Mac};
use sha2::Sha256;
let message = format!("{now}.{body}");
let mut mac = Hmac::<Sha256>::new_from_slice(secret.as_bytes()).unwrap();
mac.update(message.as_bytes());
let signature = hex::encode(mac.finalize().into_bytes()).to_ascii_uppercase();
assert!(verify_linq_signature(secret, body, &now, &signature));
}
#[test]
fn linq_parse_normalizes_phone_with_plus() {
let ch = LinqChannel::new(
"tok".into(),
"+15551234567".into(),
vec!["+1234567890".into()],
);
// API sends without +, normalize to +
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "Hi" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
assert_eq!(msgs[0].sender, "+1234567890");
}
#[test]
fn linq_parse_missing_data() {
let ch = make_channel();
let payload = serde_json::json!({
"event_type": "message.received"
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty());
}
#[test]
fn linq_parse_missing_message_parts() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc"
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty());
}
#[test]
fn linq_parse_empty_text_value() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"chat_id": "chat-789",
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert!(msgs.is_empty(), "Empty text should be skipped");
}
#[test]
fn linq_parse_fallback_reply_target_when_no_chat_id() {
let ch = LinqChannel::new("tok".into(), "+15551234567".into(), vec!["*".into()]);
let payload = serde_json::json!({
"event_type": "message.received",
"data": {
"from": "+1234567890",
"is_from_me": false,
"message": {
"id": "msg-abc",
"parts": [{ "type": "text", "value": "Hi" }]
}
}
});
let msgs = ch.parse_webhook_payload(&payload);
assert_eq!(msgs.len(), 1);
// Falls back to sender phone number when no chat_id
assert_eq!(msgs[0].reply_target, "+1234567890");
}
#[test]
fn linq_phone_number_accessor() {
let ch = make_channel();
assert_eq!(ch.phone_number(), "+15551234567");
}
}

View file

@ -24,7 +24,7 @@ pub struct MatrixChannel {
access_token: String,
room_id: String,
allowed_users: Vec<String>,
session_owner_hint: Option<String>,
session_user_id_hint: Option<String>,
session_device_id_hint: Option<String>,
resolved_room_id_cache: Arc<RwLock<Option<String>>>,
sdk_client: Arc<OnceCell<MatrixSdkClient>>,
@ -108,7 +108,7 @@ impl MatrixChannel {
access_token: String,
room_id: String,
allowed_users: Vec<String>,
owner_hint: Option<String>,
user_id_hint: Option<String>,
device_id_hint: Option<String>,
) -> Self {
let homeserver = homeserver.trim_end_matches('/').to_string();
@ -125,7 +125,7 @@ impl MatrixChannel {
access_token,
room_id,
allowed_users,
session_owner_hint: Self::normalize_optional_field(owner_hint),
session_user_id_hint: Self::normalize_optional_field(user_id_hint),
session_device_id_hint: Self::normalize_optional_field(device_id_hint),
resolved_room_id_cache: Arc::new(RwLock::new(None)),
sdk_client: Arc::new(OnceCell::new()),
@ -245,7 +245,7 @@ impl MatrixChannel {
let whoami = match identity {
Ok(whoami) => Some(whoami),
Err(error) => {
if self.session_owner_hint.is_some() && self.session_device_id_hint.is_some()
if self.session_user_id_hint.is_some() && self.session_device_id_hint.is_some()
{
tracing::warn!(
"Matrix whoami failed; falling back to configured session hints for E2EE session restore: {error}"
@ -258,18 +258,18 @@ impl MatrixChannel {
};
let resolved_user_id = if let Some(whoami) = whoami.as_ref() {
if let Some(hinted) = self.session_owner_hint.as_ref() {
if let Some(hinted) = self.session_user_id_hint.as_ref() {
if hinted != &whoami.user_id {
tracing::warn!(
"Matrix configured user_id '{}' does not match whoami '{}'; using whoami.",
crate::security::redact(hinted),
crate::security::redact(&whoami.user_id)
hinted,
whoami.user_id
);
}
}
whoami.user_id.clone()
} else {
self.session_owner_hint.clone().ok_or_else(|| {
self.session_user_id_hint.clone().ok_or_else(|| {
anyhow::anyhow!(
"Matrix session restore requires user_id when whoami is unavailable"
)
@ -282,8 +282,8 @@ impl MatrixChannel {
if whoami_device_id != hinted {
tracing::warn!(
"Matrix configured device_id '{}' does not match whoami '{}'; using whoami.",
crate::security::redact(hinted),
crate::security::redact(whoami_device_id)
hinted,
whoami_device_id
);
}
whoami_device_id.clone()
@ -513,7 +513,7 @@ impl Channel for MatrixChannel {
let my_user_id: OwnedUserId = match self.get_my_user_id().await {
Ok(user_id) => user_id.parse()?,
Err(error) => {
if let Some(hinted) = self.session_owner_hint.as_ref() {
if let Some(hinted) = self.session_user_id_hint.as_ref() {
tracing::warn!(
"Matrix whoami failed while resolving listener user_id; using configured user_id hint: {error}"
);
@ -596,7 +596,6 @@ impl Channel for MatrixChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
let _ = tx.send(msg).await;
@ -715,7 +714,7 @@ mod tests {
Some(" DEVICE123 ".to_string()),
);
assert_eq!(ch.session_owner_hint.as_deref(), Some("@bot:matrix.org"));
assert_eq!(ch.session_user_id_hint.as_deref(), Some("@bot:matrix.org"));
assert_eq!(ch.session_device_id_hint.as_deref(), Some("DEVICE123"));
}
@ -730,7 +729,7 @@ mod tests {
Some(String::new()),
);
assert!(ch.session_owner_hint.is_none());
assert!(ch.session_user_id_hint.is_none());
assert!(ch.session_device_id_hint.is_none());
}

View file

@ -321,7 +321,6 @@ impl MattermostChannel {
channel: "mattermost".to_string(),
#[allow(clippy::cast_sign_loss)]
timestamp: (create_at / 1000) as u64,
thread_ts: None,
})
}
}

File diff suppressed because it is too large

View file

@ -11,15 +11,6 @@ use uuid::Uuid;
const QQ_API_BASE: &str = "https://api.sgroup.qq.com";
const QQ_AUTH_URL: &str = "https://bots.qq.com/app/getAppAccessToken";
fn ensure_https(url: &str) -> anyhow::Result<()> {
if !url.starts_with("https://") {
anyhow::bail!(
"Refusing to transmit sensitive data over non-HTTPS URL: URL scheme must be https"
);
}
Ok(())
}
/// Deduplication set capacity — evict half of entries when full.
const DEDUP_CAPACITY: usize = 10_000;
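The `DEDUP_CAPACITY` comment describes a bounded dedup set that evicts half its entries when full. A minimal std-only sketch of that policy, under the assumption that eviction drops the oldest half in insertion order (the type and method names are illustrative, not the crate's):

```rust
use std::collections::{HashSet, VecDeque};

/// Bounded dedup set: remembers ids and evicts the oldest half when full.
struct DedupSet {
    cap: usize,
    order: VecDeque<String>,
    seen: HashSet<String>,
}

impl DedupSet {
    fn new(cap: usize) -> Self {
        Self { cap, order: VecDeque::new(), seen: HashSet::new() }
    }

    /// Returns true if `id` was already seen; records it otherwise.
    fn check_and_insert(&mut self, id: &str) -> bool {
        if self.seen.contains(id) {
            return true;
        }
        if self.seen.len() >= self.cap {
            // Evict the oldest half of entries in insertion order.
            for _ in 0..self.cap / 2 {
                if let Some(old) = self.order.pop_front() {
                    self.seen.remove(&old);
                }
            }
        }
        self.order.push_back(id.to_string());
        self.seen.insert(id.to_string());
        false
    }
}

fn main() {
    let mut d = DedupSet::new(4);
    assert!(!d.check_and_insert("a"));
    assert!(d.check_and_insert("a"));
    for id in ["b", "c", "d", "e"] {
        d.check_and_insert(id);
    }
    // After eviction of the oldest half, "a" has been forgotten.
    assert!(!d.check_and_insert("a"));
    println!("ok");
}
```

Half-eviction amortizes the cost: eviction runs at most once per `cap / 2` insertions instead of on every insert once the set is full.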
@ -205,8 +196,6 @@ impl Channel for QQChannel {
)
};
ensure_https(&url)?;
let resp = self
.http_client()
.post(&url)
@ -263,9 +252,7 @@ impl Channel for QQChannel {
}
}
});
write
.send(Message::Text(identify.to_string().into()))
.await?;
write.send(Message::Text(identify.to_string())).await?;
tracing::info!("QQ: connected and identified");
@ -289,11 +276,7 @@ impl Channel for QQChannel {
_ = hb_rx.recv() => {
let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
let hb = json!({"op": 1, "d": d});
if write
.send(Message::Text(hb.to_string().into()))
.await
.is_err()
{
if write.send(Message::Text(hb.to_string())).await.is_err() {
break;
}
}
@ -304,7 +287,7 @@ impl Channel for QQChannel {
_ => continue,
};
let event: serde_json::Value = match serde_json::from_str(msg.as_ref()) {
let event: serde_json::Value = match serde_json::from_str(&msg) {
Ok(e) => e,
Err(_) => continue,
};
@ -321,11 +304,7 @@ impl Channel for QQChannel {
1 => {
let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
let hb = json!({"op": 1, "d": d});
if write
.send(Message::Text(hb.to_string().into()))
.await
.is_err()
{
if write.send(Message::Text(hb.to_string())).await.is_err() {
break;
}
continue;
@ -387,7 +366,6 @@ impl Channel for QQChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
if tx.send(channel_msg).await.is_err() {
@ -426,7 +404,6 @@ impl Channel for QQChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
if tx.send(channel_msg).await.is_err() {

View file

@ -119,18 +119,12 @@ impl SignalChannel {
(2..=15).contains(&number.len()) && number.chars().all(|c| c.is_ascii_digit())
}
/// Check whether a string is a valid UUID (signal-cli uses these for
/// privacy-enabled users who have opted out of sharing their phone number).
fn is_uuid(s: &str) -> bool {
Uuid::parse_str(s).is_ok()
}
fn parse_recipient_target(recipient: &str) -> RecipientTarget {
if let Some(group_id) = recipient.strip_prefix(GROUP_TARGET_PREFIX) {
return RecipientTarget::Group(group_id.to_string());
}
if Self::is_e164(recipient) || Self::is_uuid(recipient) {
if Self::is_e164(recipient) {
RecipientTarget::Direct(recipient.to_string())
} else {
RecipientTarget::Group(recipient.to_string())
@ -265,7 +259,6 @@ impl SignalChannel {
content: text.to_string(),
channel: "signal".to_string(),
timestamp: timestamp / 1000, // millis → secs
thread_ts: None,
})
}
}
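With the UUID branch removed by this change, recipient routing reduces to: a `group:` prefix or any non-E.164 string maps to a group, and an E.164 number (2 to 15 digits after `+`) maps to a direct recipient. A std-only sketch of that rule, with illustrative names:

```rust
#[derive(Debug, PartialEq)]
enum Target {
    Direct(String),
    Group(String),
}

/// Post-change routing: "group:" prefix or non-E.164 input routes to a
/// group; a valid E.164 number routes to a direct recipient.
fn route(recipient: &str) -> Target {
    if let Some(group_id) = recipient.strip_prefix("group:") {
        return Target::Group(group_id.to_string());
    }
    let is_e164 = recipient.strip_prefix('+').is_some_and(|digits| {
        (2..=15).contains(&digits.len()) && digits.chars().all(|c| c.is_ascii_digit())
    });
    if is_e164 {
        Target::Direct(recipient.to_string())
    } else {
        Target::Group(recipient.to_string())
    }
}

fn main() {
    assert_eq!(route("+1234567890"), Target::Direct("+1234567890".into()));
    assert_eq!(route("group:abc123"), Target::Group("abc123".into()));
    // After this change, bare UUIDs no longer route as direct recipients.
    assert_eq!(
        route("a1b2c3d4-e5f6-7890-abcd-ef1234567890"),
        Target::Group("a1b2c3d4-e5f6-7890-abcd-ef1234567890".into())
    );
    println!("ok");
}
```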
@ -660,15 +653,6 @@ mod tests {
);
}
#[test]
fn parse_recipient_target_uuid_is_direct() {
let uuid = "a1b2c3d4-e5f6-7890-abcd-ef1234567890";
assert_eq!(
SignalChannel::parse_recipient_target(uuid),
RecipientTarget::Direct(uuid.to_string())
);
}
#[test]
fn parse_recipient_target_non_e164_plus_is_group() {
assert_eq!(
@ -677,24 +661,6 @@ mod tests {
);
}
#[test]
fn is_uuid_valid() {
assert!(SignalChannel::is_uuid(
"a1b2c3d4-e5f6-7890-abcd-ef1234567890"
));
assert!(SignalChannel::is_uuid(
"00000000-0000-0000-0000-000000000000"
));
}
#[test]
fn is_uuid_invalid() {
assert!(!SignalChannel::is_uuid("+1234567890"));
assert!(!SignalChannel::is_uuid("not-a-uuid"));
assert!(!SignalChannel::is_uuid("group:abc123"));
assert!(!SignalChannel::is_uuid(""));
}
#[test]
fn sender_prefers_source_number() {
let env = Envelope {
@ -719,73 +685,6 @@ mod tests {
assert_eq!(SignalChannel::sender(&env), Some("uuid-123".to_string()));
}
#[test]
fn process_envelope_uuid_sender_dm() {
let uuid = "a1b2c3d4-e5f6-7890-abcd-ef1234567890";
let ch = SignalChannel::new(
"http://127.0.0.1:8686".to_string(),
"+1234567890".to_string(),
None,
vec!["*".to_string()],
false,
false,
);
let env = Envelope {
source: Some(uuid.to_string()),
source_number: None,
data_message: Some(DataMessage {
message: Some("Hello from privacy user".to_string()),
timestamp: Some(1_700_000_000_000),
group_info: None,
attachments: None,
}),
story_message: None,
timestamp: Some(1_700_000_000_000),
};
let msg = ch.process_envelope(&env).unwrap();
assert_eq!(msg.sender, uuid);
assert_eq!(msg.reply_target, uuid);
assert_eq!(msg.content, "Hello from privacy user");
// Verify reply routing: UUID sender in DM should route as Direct
let target = SignalChannel::parse_recipient_target(&msg.reply_target);
assert_eq!(target, RecipientTarget::Direct(uuid.to_string()));
}
#[test]
fn process_envelope_uuid_sender_in_group() {
let uuid = "a1b2c3d4-e5f6-7890-abcd-ef1234567890";
let ch = SignalChannel::new(
"http://127.0.0.1:8686".to_string(),
"+1234567890".to_string(),
Some("testgroup".to_string()),
vec!["*".to_string()],
false,
false,
);
let env = Envelope {
source: Some(uuid.to_string()),
source_number: None,
data_message: Some(DataMessage {
message: Some("Group msg from privacy user".to_string()),
timestamp: Some(1_700_000_000_000),
group_info: Some(GroupInfo {
group_id: Some("testgroup".to_string()),
}),
attachments: None,
}),
story_message: None,
timestamp: Some(1_700_000_000_000),
};
let msg = ch.process_envelope(&env).unwrap();
assert_eq!(msg.sender, uuid);
assert_eq!(msg.reply_target, "group:testgroup");
// Verify reply routing: group message should still route as Group
let target = SignalChannel::parse_recipient_target(&msg.reply_target);
assert_eq!(target, RecipientTarget::Group("testgroup".to_string()));
}
#[test]
fn sender_none_when_both_missing() {
let env = Envelope {

View file

@ -45,15 +45,6 @@ impl SlackChannel {
.and_then(|u| u.as_str())
.map(String::from)
}
/// Resolve the thread identifier for inbound Slack messages.
/// Replies carry `thread_ts` (root thread id); top-level messages only have `ts`.
fn inbound_thread_ts(msg: &serde_json::Value, ts: &str) -> Option<String> {
msg.get("thread_ts")
.and_then(|t| t.as_str())
.or(if ts.is_empty() { None } else { Some(ts) })
.map(str::to_string)
}
}
#[async_trait]
@ -63,15 +54,11 @@ impl Channel for SlackChannel {
}
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
let mut body = serde_json::json!({
let body = serde_json::json!({
"channel": message.recipient,
"text": message.content
});
if let Some(ref ts) = message.thread_ts {
body["thread_ts"] = serde_json::json!(ts);
}
let resp = self
.http_client()
.post("https://slack.com/api/chat.postMessage")
@ -183,7 +170,6 @@ impl Channel for SlackChannel {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: Self::inbound_thread_ts(msg, ts),
};
if tx.send(channel_msg).await.is_err() {
@ -317,33 +303,4 @@ mod tests {
assert!(!id.contains('-')); // No UUID dashes
assert!(id.starts_with("slack_"));
}
#[test]
fn inbound_thread_ts_prefers_explicit_thread_ts() {
let msg = serde_json::json!({
"ts": "123.002",
"thread_ts": "123.001"
});
let thread_ts = SlackChannel::inbound_thread_ts(&msg, "123.002");
assert_eq!(thread_ts.as_deref(), Some("123.001"));
}
#[test]
fn inbound_thread_ts_falls_back_to_ts() {
let msg = serde_json::json!({
"ts": "123.001"
});
let thread_ts = SlackChannel::inbound_thread_ts(&msg, "123.001");
assert_eq!(thread_ts.as_deref(), Some("123.001"));
}
#[test]
fn inbound_thread_ts_none_when_ts_missing() {
let msg = serde_json::json!({});
let thread_ts = SlackChannel::inbound_thread_ts(&msg, "");
assert_eq!(thread_ts, None);
}
}

View file

@ -6,10 +6,10 @@ use async_trait::async_trait;
use directories::UserDirs;
use parking_lot::Mutex;
use reqwest::multipart::{Form, Part};
use std::fs;
use std::path::Path;
use std::sync::{Arc, RwLock};
use std::time::Duration;
use tokio::fs;
/// Telegram's maximum message length for text messages
const TELEGRAM_MAX_MESSAGE_LENGTH: usize = 4096;
@ -18,7 +18,7 @@ const TELEGRAM_BIND_COMMAND: &str = "/bind";
/// Split a message into chunks that respect Telegram's 4096 character limit.
/// Tries to split at word boundaries when possible, and handles continuation.
fn split_message_for_telegram(message: &str) -> Vec<String> {
if message.chars().count() <= TELEGRAM_MAX_MESSAGE_LENGTH {
if message.len() <= TELEGRAM_MAX_MESSAGE_LENGTH {
return vec![message.to_string()];
}
@ -26,32 +26,29 @@ fn split_message_for_telegram(message: &str) -> Vec<String> {
let mut remaining = message;
while !remaining.is_empty() {
// Find the byte offset for the Nth character boundary.
let hard_split = remaining
.char_indices()
.nth(TELEGRAM_MAX_MESSAGE_LENGTH)
.map_or(remaining.len(), |(idx, _)| idx);
let chunk_end = if hard_split == remaining.len() {
hard_split
let chunk_end = if remaining.len() <= TELEGRAM_MAX_MESSAGE_LENGTH {
remaining.len()
} else {
// Try to find a good break point (newline, then space)
let search_area = &remaining[..hard_split];
let search_area = &remaining[..TELEGRAM_MAX_MESSAGE_LENGTH];
// Prefer splitting at newline
if let Some(pos) = search_area.rfind('\n') {
// Don't split if the newline is too close to the start
if search_area[..pos].chars().count() >= TELEGRAM_MAX_MESSAGE_LENGTH / 2 {
if pos >= TELEGRAM_MAX_MESSAGE_LENGTH / 2 {
pos + 1
} else {
// Try space as fallback
search_area.rfind(' ').unwrap_or(hard_split) + 1
search_area
.rfind(' ')
.unwrap_or(TELEGRAM_MAX_MESSAGE_LENGTH)
+ 1
}
} else if let Some(pos) = search_area.rfind(' ') {
pos + 1
} else {
// Hard split at character boundary
hard_split
// Hard split at the limit
TELEGRAM_MAX_MESSAGE_LENGTH
}
};
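The splitting strategy above (prefer a newline break past the halfway point, then a space, else a hard split at the limit) can be sketched std-only with a shrunken limit for illustration. This sketch assumes ASCII input so byte offsets are char-safe, and trims the break character from each chunk for tidy output; the function name is illustrative:

```rust
/// Split `message` into chunks of at most `limit` bytes, preferring a
/// newline break, then a space, else a hard split at the limit.
fn split_at_limit(message: &str, limit: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut remaining = message;
    while !remaining.is_empty() {
        let end = if remaining.len() <= limit {
            remaining.len()
        } else {
            let window = &remaining[..limit];
            match window.rfind('\n') {
                // Use a newline only if it is past the halfway point.
                Some(pos) if pos >= limit / 2 => pos + 1,
                // Otherwise fall back to a space, then a hard split.
                _ => window.rfind(' ').map_or(limit, |pos| pos + 1),
            }
        };
        chunks.push(remaining[..end].trim_end().to_string());
        remaining = &remaining[end..];
    }
    chunks
}

fn main() {
    assert_eq!(split_at_limit("short", 100), vec!["short"]);
    assert_eq!(
        split_at_limit("hello world again", 11),
        vec!["hello", "world again"]
    );
    assert_eq!(
        split_at_limit("aaaaaa\nbbbb cc", 10),
        vec!["aaaaaa", "bbbb cc"]
    );
    println!("ok");
}
```

The halfway-point guard avoids degenerate tiny chunks when the only newline sits near the start of the window.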
@ -376,7 +373,7 @@ impl TelegramChannel {
.collect()
}
async fn load_config_without_env() -> anyhow::Result<Config> {
fn load_config_without_env() -> anyhow::Result<Config> {
let home = UserDirs::new()
.map(|u| u.home_dir().to_path_buf())
.context("Could not find home directory")?;
@ -384,23 +381,18 @@ impl TelegramChannel {
let config_path = zeroclaw_dir.join("config.toml");
let contents = fs::read_to_string(&config_path)
.await
.with_context(|| format!("Failed to read config file: {}", config_path.display()))?;
let mut config: Config = toml::from_str(&contents)
.context("Failed to parse config.toml — check [channels.telegram] section for syntax errors")?;
.context("Failed to parse config file for Telegram binding")?;
config.config_path = config_path;
config.workspace_dir = zeroclaw_dir.join("workspace");
Ok(config)
}
async fn persist_allowed_identity(&self, identity: &str) -> anyhow::Result<()> {
let mut config = Self::load_config_without_env().await?;
fn persist_allowed_identity_blocking(identity: &str) -> anyhow::Result<()> {
let mut config = Self::load_config_without_env()?;
let Some(telegram) = config.channels_config.telegram.as_mut() else {
anyhow::bail!(
"Missing [channels.telegram] section in config.toml. \
Add bot_token and allowed_users under [channels.telegram], \
or run `zeroclaw onboard --channels-only` to configure interactively"
);
anyhow::bail!("Telegram channel config is missing in config.toml");
};
let normalized = Self::normalize_identity(identity);
@ -412,13 +404,20 @@ impl TelegramChannel {
telegram.allowed_users.push(normalized);
config
.save()
.await
.context("Failed to persist Telegram allowlist to config.toml")?;
}
Ok(())
}
async fn persist_allowed_identity(&self, identity: &str) -> anyhow::Result<()> {
let identity = identity.to_string();
tokio::task::spawn_blocking(move || Self::persist_allowed_identity_blocking(&identity))
.await
.map_err(|e| anyhow::anyhow!("Failed to join Telegram bind save task: {e}"))??;
Ok(())
}
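The wrapper above offloads the blocking config I/O to Tokio's blocking pool and then unwraps two error layers, hence the double `??`: the join error (did the task panic or get cancelled?) and the task's own `Result`. A minimal synchronous analogue of that two-layer unwrap, using `std::thread` and hypothetical helper names:

```rust
// Hypothetical stand-in for the blocking config write.
fn persist(identity: &str) -> Result<(), String> {
    if identity.is_empty() {
        return Err("empty identity".into());
    }
    Ok(()) // pretend the allowlist was saved
}

// Two error layers, as in the spawn_blocking wrapper above: `join` reports
// whether the worker panicked; the inner Result carries the task's own error.
fn persist_on_worker(identity: String) -> Result<(), String> {
    std::thread::spawn(move || persist(&identity))
        .join()
        .map_err(|_| "worker panicked".to_string())?
}
```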
fn add_allowed_identity_runtime(&self, identity: &str) {
let normalized = Self::normalize_identity(identity);
if normalized.is_empty() {
@ -601,12 +600,12 @@ impl TelegramChannel {
let username = username_opt.unwrap_or("unknown");
let normalized_username = Self::normalize_identity(username);
let sender_id = message
let user_id = message
.get("from")
.and_then(|from| from.get("id"))
.and_then(serde_json::Value::as_i64);
let sender_id_str = sender_id.map(|id| id.to_string());
let normalized_sender_id = sender_id_str.as_deref().map(Self::normalize_identity);
let user_id_str = user_id.map(|id| id.to_string());
let normalized_user_id = user_id_str.as_deref().map(Self::normalize_identity);
let chat_id = message
.get("chat")
@ -620,7 +619,7 @@ impl TelegramChannel {
};
let mut identities = vec![normalized_username.as_str()];
if let Some(ref id) = normalized_sender_id {
if let Some(ref id) = normalized_user_id {
identities.push(id.as_str());
}
@ -630,9 +629,9 @@ impl TelegramChannel {
if let Some(code) = Self::extract_bind_code(text) {
if let Some(pairing) = self.pairing.as_ref() {
match pairing.try_pair(code, &chat_id).await {
match pairing.try_pair(code) {
Ok(Some(_token)) => {
let bind_identity = normalized_sender_id.clone().or_else(|| {
let bind_identity = normalized_user_id.clone().or_else(|| {
if normalized_username.is_empty() || normalized_username == "unknown" {
None
} else {
@ -695,7 +694,7 @@ impl TelegramChannel {
} else {
let _ = self
.send(&SendMessage::new(
" Telegram pairing is not active. Ask operator to add your user ID to channels.telegram.allowed_users in config.toml.",
" Telegram pairing is not active. Ask operator to update allowlist in config.toml.",
&chat_id,
))
.await;
@ -704,12 +703,12 @@ impl TelegramChannel {
}
tracing::warn!(
"Telegram: ignoring message from unauthorized user: username={username}, sender_id={}. \
"Telegram: ignoring message from unauthorized user: username={username}, user_id={}. \
Allowlist Telegram username (without '@') or numeric user ID.",
sender_id_str.as_deref().unwrap_or("unknown")
user_id_str.as_deref().unwrap_or("unknown")
);
let suggested_identity = normalized_sender_id
let suggested_identity = normalized_user_id
.clone()
.or_else(|| {
if normalized_username.is_empty() || normalized_username == "unknown" {
@ -751,20 +750,20 @@ Allowlist Telegram username (without '@') or numeric user ID.",
.unwrap_or("unknown")
.to_string();
let sender_id = message
let user_id = message
.get("from")
.and_then(|from| from.get("id"))
.and_then(serde_json::Value::as_i64)
.map(|id| id.to_string());
let sender_identity = if username == "unknown" {
sender_id.clone().unwrap_or_else(|| "unknown".to_string())
user_id.clone().unwrap_or_else(|| "unknown".to_string())
} else {
username.clone()
};
let mut identities = vec![username.as_str()];
if let Some(id) = sender_id.as_deref() {
if let Some(id) = user_id.as_deref() {
identities.push(id);
}
@ -826,7 +825,6 @@ Allowlist Telegram username (without '@') or numeric user ID.",
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
})
}
@ -1609,6 +1607,16 @@ impl Channel for TelegramChannel {
return Ok(());
}
// Check if edit failed because content is identical (Telegram returns 400
// with "message is not modified" when the draft already has the final text).
let status = resp.status();
let resp_body = resp.text().await.unwrap_or_default();
if status == reqwest::StatusCode::BAD_REQUEST
&& resp_body.contains("message is not modified")
{
return Ok(());
}
// Markdown failed — retry without parse_mode
let plain_body = serde_json::json!({
"chat_id": chat_id,
@ -1627,43 +1635,21 @@ impl Channel for TelegramChannel {
return Ok(());
}
// Also check plain-text edit for "not modified"
let status = resp.status();
let resp_body = resp.text().await.unwrap_or_default();
if status == reqwest::StatusCode::BAD_REQUEST
&& resp_body.contains("message is not modified")
{
return Ok(());
}
// Edit failed entirely — fall back to new message
tracing::warn!("Telegram finalize_draft edit failed; falling back to sendMessage");
self.send_text_chunks(text, &chat_id, thread_id.as_deref())
.await
}
async fn cancel_draft(&self, recipient: &str, message_id: &str) -> anyhow::Result<()> {
let (chat_id, _) = Self::parse_reply_target(recipient);
self.last_draft_edit.lock().remove(&chat_id);
let message_id = match message_id.parse::<i64>() {
Ok(id) => id,
Err(e) => {
tracing::debug!("Invalid Telegram draft message_id '{message_id}': {e}");
return Ok(());
}
};
let response = self
.client
.post(self.api_url("deleteMessage"))
.json(&serde_json::json!({
"chat_id": chat_id,
"message_id": message_id,
}))
.send()
.await?;
if !response.status().is_success() {
let status = response.status();
let body = response.text().await.unwrap_or_default();
tracing::debug!("Telegram deleteMessage failed ({status}): {body}");
}
Ok(())
}
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
// Strip tool_call tags before processing to prevent Markdown parsing failures
let content = strip_tool_call_tags(&message.content);
@ -1705,6 +1691,38 @@ impl Channel for TelegramChannel {
let _ = self.get_bot_username().await;
}
// Register bot slash-command menu with Telegram
let commands_body = serde_json::json!({
"commands": [
{"command": "help", "description": "Show available commands"},
{"command": "model", "description": "Show or switch model"},
{"command": "models", "description": "List or switch providers"},
{"command": "clear", "description": "Clear conversation history"},
{"command": "system", "description": "Show system prompt"},
{"command": "status", "description": "Show current configuration"}
]
});
match self
.http_client()
.post(self.api_url("setMyCommands"))
.json(&commands_body)
.send()
.await
{
Ok(resp) if resp.status().is_success() => {
tracing::debug!("Telegram setMyCommands registered successfully");
}
Ok(resp) => {
tracing::debug!(
"Telegram setMyCommands failed with status {}",
resp.status()
);
}
Err(e) => {
tracing::debug!("Telegram setMyCommands request failed: {e}");
}
}
tracing::info!("Telegram channel listening for messages...");
loop {
@ -2863,103 +2881,4 @@ mod tests {
let ch_disabled = TelegramChannel::new("token".into(), vec!["*".into()], false);
assert!(!ch_disabled.mention_only);
}
// ─────────────────────────────────────────────────────────────────────
// TG6: Channel platform limit edge cases for Telegram (4096 char limit)
// Prevents: Pattern 6 — issues #574, #499
// ─────────────────────────────────────────────────────────────────────
#[test]
fn telegram_split_code_block_at_boundary() {
let mut msg = String::new();
msg.push_str("```python\n");
msg.push_str(&"x".repeat(4085));
msg.push_str("\n```\nMore text after code block");
let parts = split_message_for_telegram(&msg);
assert!(
parts.len() >= 2,
"code block spanning boundary should split"
);
for part in &parts {
assert!(
part.len() <= TELEGRAM_MAX_MESSAGE_LENGTH,
"each part must be <= {TELEGRAM_MAX_MESSAGE_LENGTH}, got {}",
part.len()
);
}
}
#[test]
fn telegram_split_single_long_word() {
let long_word = "a".repeat(5000);
let parts = split_message_for_telegram(&long_word);
assert!(parts.len() >= 2, "word exceeding limit must be split");
for part in &parts {
assert!(
part.len() <= TELEGRAM_MAX_MESSAGE_LENGTH,
"hard-split part must be <= {TELEGRAM_MAX_MESSAGE_LENGTH}, got {}",
part.len()
);
}
let reassembled: String = parts.join("");
assert_eq!(reassembled, long_word);
}
#[test]
fn telegram_split_exactly_at_limit_no_split() {
let msg = "a".repeat(TELEGRAM_MAX_MESSAGE_LENGTH);
let parts = split_message_for_telegram(&msg);
assert_eq!(parts.len(), 1, "message exactly at limit should not split");
}
#[test]
fn telegram_split_one_over_limit() {
let msg = "a".repeat(TELEGRAM_MAX_MESSAGE_LENGTH + 1);
let parts = split_message_for_telegram(&msg);
assert!(parts.len() >= 2, "message 1 char over limit must split");
}
#[test]
fn telegram_split_many_short_lines() {
let msg: String = (0..1000).map(|i| format!("line {i}\n")).collect();
let parts = split_message_for_telegram(&msg);
for part in &parts {
assert!(
part.len() <= TELEGRAM_MAX_MESSAGE_LENGTH,
"short-line batch must be <= limit"
);
}
}
#[test]
fn telegram_split_only_whitespace() {
let msg = " \n\n\t ";
let parts = split_message_for_telegram(msg);
assert!(parts.len() <= 1);
}
#[test]
fn telegram_split_emoji_at_boundary() {
let mut msg = "a".repeat(4094);
msg.push_str("🎉🎊"); // 4096 chars total
let parts = split_message_for_telegram(&msg);
for part in &parts {
// The function splits on character count, not byte count
assert!(
part.chars().count() <= TELEGRAM_MAX_MESSAGE_LENGTH,
"emoji boundary split must respect limit"
);
}
}
#[test]
fn telegram_split_consecutive_newlines() {
let mut msg = "a".repeat(4090);
msg.push_str("\n\n\n\n\n\n");
msg.push_str(&"b".repeat(100));
let parts = split_message_for_telegram(&msg);
for part in &parts {
assert!(part.len() <= TELEGRAM_MAX_MESSAGE_LENGTH);
}
}
}
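The boundary behaviour exercised by these tests can be sketched as a standalone function. This is an illustrative reimplementation, not the crate's own `split_message_for_telegram`; `MAX_LEN` stands in for `TELEGRAM_MAX_MESSAGE_LENGTH`:

```rust
const MAX_LEN: usize = 4096; // Telegram's documented message-length limit

fn split_chunks(mut remaining: &str) -> Vec<String> {
    let mut parts = Vec::new();
    while !remaining.is_empty() {
        // Byte offset of the MAX_LEN-th character boundary (or end of string),
        // so multi-byte characters are never cut in half.
        let hard_split = remaining
            .char_indices()
            .nth(MAX_LEN)
            .map_or(remaining.len(), |(idx, _)| idx);
        let end = if hard_split == remaining.len() {
            hard_split
        } else {
            let search = &remaining[..hard_split];
            match search.rfind('\n') {
                // Only honour a newline past the midpoint, to avoid tiny fragments.
                Some(pos) if search[..pos].chars().count() >= MAX_LEN / 2 => pos + 1,
                // Fall back to the last space, then to a hard character split.
                _ => search.rfind(' ').map_or(hard_split, |pos| pos + 1),
            }
        };
        parts.push(remaining[..end].to_string());
        remaining = &remaining[end..];
    }
    parts
}
```

Splitting on character count rather than bytes is what keeps the emoji-at-boundary test above from slicing mid-codepoint.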



@ -9,9 +9,6 @@ pub struct ChannelMessage {
pub content: String,
pub channel: String,
pub timestamp: u64,
/// Platform thread identifier (e.g. Slack `ts`, Discord thread ID).
/// When set, replies should be posted as threaded responses.
pub thread_ts: Option<String>,
}
/// Message to send through a channel
@ -20,8 +17,6 @@ pub struct SendMessage {
pub content: String,
pub recipient: String,
pub subject: Option<String>,
/// Platform thread identifier for threaded replies (e.g. Slack `thread_ts`).
pub thread_ts: Option<String>,
}
impl SendMessage {
@ -31,7 +26,6 @@ impl SendMessage {
content: content.into(),
recipient: recipient.into(),
subject: None,
thread_ts: None,
}
}
@ -45,15 +39,8 @@ impl SendMessage {
content: content.into(),
recipient: recipient.into(),
subject: Some(subject.into()),
thread_ts: None,
}
}
/// Set the thread identifier for threaded replies.
pub fn in_thread(mut self, thread_ts: Option<String>) -> Self {
self.thread_ts = thread_ts;
self
}
}
/// Core channel trait — implement for any messaging platform
@ -113,11 +100,6 @@ pub trait Channel: Send + Sync {
) -> anyhow::Result<()> {
Ok(())
}
/// Cancel and remove a previously sent draft message if the channel supports it.
async fn cancel_draft(&self, _recipient: &str, _message_id: &str) -> anyhow::Result<()> {
Ok(())
}
}
#[cfg(test)]
@ -147,7 +129,6 @@ mod tests {
content: "hello".into(),
channel: "dummy".into(),
timestamp: 123,
thread_ts: None,
})
.await
.map_err(|e| anyhow::anyhow!(e.to_string()))
@ -163,7 +144,6 @@ mod tests {
content: "ping".into(),
channel: "dummy".into(),
timestamp: 999,
thread_ts: None,
};
let cloned = message.clone();
@ -203,7 +183,6 @@ mod tests {
.finalize_draft("bob", "msg_1", "final text")
.await
.is_ok());
assert!(channel.cancel_draft("bob", "msg_1").await.is_ok());
}
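The trait above models optional capabilities (drafts, typing) as default no-op methods, so a backend only overrides what its platform supports. A minimal sketch of that pattern with hypothetical names, written sync for brevity (the real `Channel` trait is async via `async_trait`):

```rust
trait Messenger {
    // Required capability: every backend must provide send.
    fn send(&self, content: &str) -> Result<(), String>;

    // Optional capability: backends without draft support inherit the no-op,
    // mirroring the default draft/typing methods on the Channel trait above.
    fn finalize_draft(&self, _message_id: &str, _text: &str) -> Result<(), String> {
        Ok(())
    }
}

struct DummyChannel;

impl Messenger for DummyChannel {
    fn send(&self, content: &str) -> Result<(), String> {
        println!("sent: {content}");
        Ok(())
    }
    // finalize_draft is intentionally not implemented; the default applies.
}
```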
#[tokio::test]


@ -8,20 +8,6 @@ use uuid::Uuid;
/// Messages are received via the gateway's `/whatsapp` webhook endpoint.
/// The `listen` method here is a no-op placeholder; actual message handling
/// happens in the gateway when Meta sends webhook events.
fn ensure_https(url: &str) -> anyhow::Result<()> {
if !url.starts_with("https://") {
anyhow::bail!(
"Refusing to transmit sensitive data over non-HTTPS URL: URL scheme must be https"
);
}
Ok(())
}
///
/// # Runtime Negotiation
///
/// This Cloud API channel is automatically selected when `phone_number_id` is set in the config.
/// Use `WhatsAppWebChannel` (with `session_path`) for native Web mode.
pub struct WhatsAppChannel {
access_token: String,
endpoint_id: String,
@ -99,8 +85,7 @@ impl WhatsAppChannel {
if !self.is_number_allowed(&normalized_from) {
tracing::warn!(
"WhatsApp: ignoring message from unauthorized number: {normalized_from}. \
Add to channels.whatsapp.allowed_numbers in config.toml, \
or run `zeroclaw onboard --channels-only` to configure interactively."
Add to allowed_numbers in config.toml, then run `zeroclaw onboard --channels-only`."
);
continue;
}
@ -141,7 +126,6 @@ impl WhatsAppChannel {
content,
channel: "whatsapp".to_string(),
timestamp,
thread_ts: None,
});
}
}
@ -181,8 +165,6 @@ impl Channel for WhatsAppChannel {
}
});
ensure_https(&url)?;
let resp = self
.http_client()
.post(&url)
@ -221,10 +203,6 @@ impl Channel for WhatsAppChannel {
// Check if we can reach the WhatsApp API
let url = format!("https://graph.facebook.com/v18.0/{}", self.endpoint_id);
if ensure_https(&url).is_err() {
return false;
}
self.http_client()
.get(&url)
.bearer_auth(&self.access_token)

File diff suppressed because it is too large.


@ -1,564 +0,0 @@
//! WhatsApp Web channel using wa-rs (native Rust implementation)
//!
//! This channel provides direct WhatsApp Web integration with:
//! - QR code and pair code linking
//! - End-to-end encryption via Signal Protocol
//! - Full Baileys parity (groups, media, presence, reactions, editing/deletion)
//!
//! # Feature Flag
//!
//! This channel requires the `whatsapp-web` feature flag:
//! ```sh
//! cargo build --features whatsapp-web
//! ```
//!
//! # Configuration
//!
//! ```toml
//! [channels_config.whatsapp]
//! session_path = "~/.zeroclaw/whatsapp-session.db" # Required for Web mode
//! pair_phone = "15551234567" # Optional: for pair code linking
//! allowed_numbers = ["+1234567890", "*"] # Same as Cloud API
//! ```
//!
//! # Runtime Negotiation
//!
//! This channel is automatically selected when `session_path` is set in the config.
//! The Cloud API channel is used when `phone_number_id` is set.
use super::traits::{Channel, ChannelMessage, SendMessage};
use super::whatsapp_storage::RusqliteStore;
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use parking_lot::Mutex;
use std::sync::Arc;
use tokio::select;
/// WhatsApp Web channel using wa-rs with custom rusqlite storage
///
/// # Status: Functional Implementation
///
/// This implementation uses the wa-rs Bot with our custom RusqliteStore backend.
///
/// # Configuration
///
/// ```toml
/// [channels_config.whatsapp]
/// session_path = "~/.zeroclaw/whatsapp-session.db"
/// pair_phone = "15551234567" # Optional
/// allowed_numbers = ["+1234567890", "*"]
/// ```
#[cfg(feature = "whatsapp-web")]
pub struct WhatsAppWebChannel {
/// Session database path
session_path: String,
/// Phone number for pair code linking (optional)
pair_phone: Option<String>,
/// Custom pair code (optional)
pair_code: Option<String>,
/// Allowed phone numbers (E.164 format) or "*" for all
allowed_numbers: Vec<String>,
/// Bot handle for shutdown
bot_handle: Arc<Mutex<Option<tokio::task::JoinHandle<()>>>>,
/// Client handle for sending messages and typing indicators
client: Arc<Mutex<Option<Arc<wa_rs::Client>>>>,
/// Message sender channel
tx: Arc<Mutex<Option<tokio::sync::mpsc::Sender<ChannelMessage>>>>,
}
impl WhatsAppWebChannel {
/// Create a new WhatsApp Web channel
///
/// # Arguments
///
/// * `session_path` - Path to the SQLite session database
/// * `pair_phone` - Optional phone number for pair code linking (format: "15551234567")
/// * `pair_code` - Optional custom pair code (leave empty for auto-generated)
/// * `allowed_numbers` - Phone numbers allowed to interact (E.164 format) or "*" for all
#[cfg(feature = "whatsapp-web")]
pub fn new(
session_path: String,
pair_phone: Option<String>,
pair_code: Option<String>,
allowed_numbers: Vec<String>,
) -> Self {
Self {
session_path,
pair_phone,
pair_code,
allowed_numbers,
bot_handle: Arc::new(Mutex::new(None)),
client: Arc::new(Mutex::new(None)),
tx: Arc::new(Mutex::new(None)),
}
}
/// Check if a phone number is allowed (E.164 format: +1234567890)
#[cfg(feature = "whatsapp-web")]
fn is_number_allowed(&self, phone: &str) -> bool {
self.allowed_numbers.iter().any(|n| n == "*" || n == phone)
}
/// Normalize phone number to E.164 format
#[cfg(feature = "whatsapp-web")]
fn normalize_phone(&self, phone: &str) -> String {
let trimmed = phone.trim();
let user_part = trimmed
.split_once('@')
.map(|(user, _)| user)
.unwrap_or(trimmed);
let normalized_user = user_part.trim_start_matches('+');
if user_part.starts_with('+') {
format!("+{normalized_user}")
} else {
format!("+{normalized_user}")
}
}
/// Whether the recipient string is a WhatsApp JID (contains a domain suffix).
#[cfg(feature = "whatsapp-web")]
fn is_jid(recipient: &str) -> bool {
recipient.trim().contains('@')
}
/// Convert a recipient to a wa-rs JID.
///
/// Supports:
/// - Full JIDs (e.g. "12345@s.whatsapp.net")
/// - E.164-like numbers (e.g. "+1234567890")
#[cfg(feature = "whatsapp-web")]
fn recipient_to_jid(&self, recipient: &str) -> Result<wa_rs_binary::jid::Jid> {
let trimmed = recipient.trim();
if trimmed.is_empty() {
anyhow::bail!("Recipient cannot be empty");
}
if trimmed.contains('@') {
return trimmed
.parse::<wa_rs_binary::jid::Jid>()
.map_err(|e| anyhow!("Invalid WhatsApp JID `{trimmed}`: {e}"));
}
let digits: String = trimmed.chars().filter(|c| c.is_ascii_digit()).collect();
if digits.is_empty() {
anyhow::bail!("Recipient `{trimmed}` does not contain a valid phone number");
}
Ok(wa_rs_binary::jid::Jid::pn(digits))
}
}
#[cfg(feature = "whatsapp-web")]
#[async_trait]
impl Channel for WhatsAppWebChannel {
fn name(&self) -> &str {
"whatsapp"
}
async fn send(&self, message: &SendMessage) -> Result<()> {
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
// Validate recipient allowlist only for direct phone-number targets.
if !Self::is_jid(&message.recipient) {
let normalized = self.normalize_phone(&message.recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: recipient {} not in allowed list",
message.recipient
);
return Ok(());
}
}
let to = self.recipient_to_jid(&message.recipient)?;
let outgoing = wa_rs_proto::whatsapp::Message {
conversation: Some(message.content.clone()),
..Default::default()
};
let message_id = client.send_message(to, outgoing).await?;
tracing::debug!(
"WhatsApp Web: sent message to {} (id: {})",
message.recipient,
message_id
);
Ok(())
}
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
// Store the sender channel for incoming messages
*self.tx.lock() = Some(tx.clone());
use wa_rs::bot::Bot;
use wa_rs::pair_code::PairCodeOptions;
use wa_rs::store::{Device, DeviceStore};
use wa_rs_binary::jid::JidExt as _;
use wa_rs_core::proto_helpers::MessageExt;
use wa_rs_core::types::events::Event;
use wa_rs_tokio_transport::TokioWebSocketTransportFactory;
use wa_rs_ureq_http::UreqHttpClient;
tracing::info!(
"WhatsApp Web channel starting (session: {})",
self.session_path
);
// Initialize storage backend
let storage = RusqliteStore::new(&self.session_path)?;
let backend = Arc::new(storage);
// Check if we have a saved device to load
let mut device = Device::new(backend.clone());
if backend.exists().await? {
tracing::info!("WhatsApp Web: found existing session, loading device");
if let Some(core_device) = backend.load().await? {
device.load_from_serializable(core_device);
} else {
anyhow::bail!("Device exists but failed to load");
}
} else {
tracing::info!(
"WhatsApp Web: no existing session, new device will be created during pairing"
);
};
// Create transport factory
let mut transport_factory = TokioWebSocketTransportFactory::new();
if let Ok(ws_url) = std::env::var("WHATSAPP_WS_URL") {
transport_factory = transport_factory.with_url(ws_url);
}
// Create HTTP client for media operations
let http_client = UreqHttpClient::new();
// Build the bot
let tx_clone = tx.clone();
let allowed_numbers = self.allowed_numbers.clone();
let mut builder = Bot::builder()
.with_backend(backend)
.with_transport_factory(transport_factory)
.with_http_client(http_client)
.on_event(move |event, _client| {
let tx_inner = tx_clone.clone();
let allowed_numbers = allowed_numbers.clone();
async move {
match event {
Event::Message(msg, info) => {
// Extract message content
let text = msg.text_content().unwrap_or("");
let sender = info.source.sender.user().to_string();
let chat = info.source.chat.to_string();
tracing::info!(
"WhatsApp Web message from {} in {}: {}",
sender,
chat,
text
);
// Check if sender is allowed
let normalized = if sender.starts_with('+') {
sender.clone()
} else {
format!("+{sender}")
};
if allowed_numbers.iter().any(|n| n == "*" || n == &normalized) {
let trimmed = text.trim();
if trimmed.is_empty() {
tracing::debug!(
"WhatsApp Web: ignoring empty or non-text message from {}",
normalized
);
return;
}
if let Err(e) = tx_inner
.send(ChannelMessage {
id: uuid::Uuid::new_v4().to_string(),
channel: "whatsapp".to_string(),
sender: normalized.clone(),
// Reply to the originating chat JID (DM or group).
reply_target: chat,
content: trimmed.to_string(),
timestamp: chrono::Utc::now().timestamp() as u64,
thread_ts: None,
})
.await
{
tracing::error!("Failed to send message to channel: {}", e);
}
} else {
tracing::warn!("WhatsApp Web: message from {} not in allowed list", normalized);
}
}
Event::Connected(_) => {
tracing::info!("WhatsApp Web connected successfully");
}
Event::LoggedOut(_) => {
tracing::warn!("WhatsApp Web was logged out");
}
Event::StreamError(stream_error) => {
tracing::error!("WhatsApp Web stream error: {:?}", stream_error);
}
Event::PairingCode { code, .. } => {
tracing::info!("WhatsApp Web pair code received: {}", code);
tracing::info!(
"Link your phone by entering this code in WhatsApp > Linked Devices"
);
}
Event::PairingQrCode { code, .. } => {
tracing::info!(
"WhatsApp Web QR code received (scan with WhatsApp > Linked Devices)"
);
tracing::debug!("QR code: {}", code);
}
_ => {}
}
}
})
;
// Configure pair-code flow when a phone number is provided.
if let Some(ref phone) = self.pair_phone {
tracing::info!("WhatsApp Web: pair-code flow enabled for configured phone number");
builder = builder.with_pair_code(PairCodeOptions {
phone_number: phone.clone(),
custom_code: self.pair_code.clone(),
..Default::default()
});
} else if self.pair_code.is_some() {
tracing::warn!(
"WhatsApp Web: pair_code is set but pair_phone is missing; pair code config is ignored"
);
}
let mut bot = builder.build().await?;
*self.client.lock() = Some(bot.client());
// Run the bot
let bot_handle = bot.run().await?;
// Store the bot handle for later shutdown
*self.bot_handle.lock() = Some(bot_handle);
// Wait for shutdown signal
let (_shutdown_tx, mut shutdown_rx) = tokio::sync::broadcast::channel::<()>(1);
select! {
_ = shutdown_rx.recv() => {
tracing::info!("WhatsApp Web channel shutting down");
}
_ = tokio::signal::ctrl_c() => {
tracing::info!("WhatsApp Web channel received Ctrl+C");
}
}
*self.client.lock() = None;
if let Some(handle) = self.bot_handle.lock().take() {
handle.abort();
}
Ok(())
}
async fn health_check(&self) -> bool {
let bot_handle_guard = self.bot_handle.lock();
bot_handle_guard.is_some()
}
async fn start_typing(&self, recipient: &str) -> Result<()> {
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
if !Self::is_jid(recipient) {
let normalized = self.normalize_phone(recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: typing target {} not in allowed list",
recipient
);
return Ok(());
}
}
let to = self.recipient_to_jid(recipient)?;
client
.chatstate()
.send_composing(&to)
.await
.map_err(|e| anyhow!("Failed to send typing state (composing): {e}"))?;
tracing::debug!("WhatsApp Web: start typing for {}", recipient);
Ok(())
}
async fn stop_typing(&self, recipient: &str) -> Result<()> {
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
if !Self::is_jid(recipient) {
let normalized = self.normalize_phone(recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: typing target {} not in allowed list",
recipient
);
return Ok(());
}
}
let to = self.recipient_to_jid(recipient)?;
client
.chatstate()
.send_paused(&to)
.await
.map_err(|e| anyhow!("Failed to send typing state (paused): {e}"))?;
tracing::debug!("WhatsApp Web: stop typing for {}", recipient);
Ok(())
}
}
// Stub implementation when feature is not enabled
#[cfg(not(feature = "whatsapp-web"))]
pub struct WhatsAppWebChannel {
_private: (),
}
#[cfg(not(feature = "whatsapp-web"))]
impl WhatsAppWebChannel {
pub fn new(
_session_path: String,
_pair_phone: Option<String>,
_pair_code: Option<String>,
_allowed_numbers: Vec<String>,
) -> Self {
Self { _private: () }
}
}
#[cfg(not(feature = "whatsapp-web"))]
#[async_trait]
impl Channel for WhatsAppWebChannel {
fn name(&self) -> &str {
"whatsapp"
}
async fn send(&self, _message: &SendMessage) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn listen(&self, _tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn health_check(&self) -> bool {
false
}
async fn start_typing(&self, _recipient: &str) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn stop_typing(&self, _recipient: &str) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[cfg(feature = "whatsapp-web")]
fn make_channel() -> WhatsAppWebChannel {
WhatsAppWebChannel::new(
"/tmp/test-whatsapp.db".into(),
None,
None,
vec!["+1234567890".into()],
)
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_channel_name() {
let ch = make_channel();
assert_eq!(ch.name(), "whatsapp");
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_number_allowed_exact() {
let ch = make_channel();
assert!(ch.is_number_allowed("+1234567890"));
assert!(!ch.is_number_allowed("+9876543210"));
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_number_allowed_wildcard() {
let ch = WhatsAppWebChannel::new("/tmp/test.db".into(), None, None, vec!["*".into()]);
assert!(ch.is_number_allowed("+1234567890"));
assert!(ch.is_number_allowed("+9999999999"));
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_number_denied_empty() {
let ch = WhatsAppWebChannel::new("/tmp/test.db".into(), None, None, vec![]);
// Empty allowlist means "deny all" (matches channel-wide allowlist policy).
assert!(!ch.is_number_allowed("+1234567890"));
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_normalize_phone_adds_plus() {
let ch = make_channel();
assert_eq!(ch.normalize_phone("1234567890"), "+1234567890");
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_normalize_phone_preserves_plus() {
let ch = make_channel();
assert_eq!(ch.normalize_phone("+1234567890"), "+1234567890");
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn whatsapp_web_normalize_phone_from_jid() {
let ch = make_channel();
assert_eq!(
ch.normalize_phone("1234567890@s.whatsapp.net"),
"+1234567890"
);
}
#[tokio::test]
#[cfg(feature = "whatsapp-web")]
async fn whatsapp_web_health_check_disconnected() {
let ch = make_channel();
assert!(!ch.health_check().await);
}
}
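The normalization tested above (JID user-part extraction plus a canonical leading `+`) can be written as one expression; note that in the deleted code both branches of the final `if` produced the same string, so it collapses to (illustrative sketch):

```rust
// Strip any JID domain ("1234@s.whatsapp.net" -> "1234"), then ensure a
// single leading '+'. Both branches of the original if/else produced this.
fn normalize_phone(phone: &str) -> String {
    let trimmed = phone.trim();
    let user_part = trimmed.split_once('@').map_or(trimmed, |(user, _)| user);
    format!("+{}", user_part.trim_start_matches('+'))
}
```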


@ -6,14 +6,14 @@ pub use schema::{
build_runtime_proxy_client_with_timeouts, runtime_proxy_config, set_runtime_proxy_config,
AgentConfig, AuditConfig, AutonomyConfig, BrowserComputerUseConfig, BrowserConfig,
ChannelsConfig, ClassificationRule, ComposioConfig, Config, CostConfig, CronConfig,
DelegateAgentConfig, DiscordConfig, DockerRuntimeConfig, EmbeddingRouteConfig, GatewayConfig,
HardwareConfig, HardwareTransport, HeartbeatConfig, HttpRequestConfig, IMessageConfig,
IdentityConfig, LarkConfig, MatrixConfig, MemoryConfig, ModelRouteConfig, MultimodalConfig,
ObservabilityConfig, PeripheralBoardConfig, PeripheralsConfig, ProxyConfig, ProxyScope,
QueryClassificationConfig, ReliabilityConfig, ResourceLimitsConfig, RuntimeConfig,
SandboxBackend, SandboxConfig, SchedulerConfig, SecretsConfig, SecurityConfig, SkillsConfig,
SlackConfig, StorageConfig, StorageProviderConfig, StorageProviderSection, StreamMode,
TelegramConfig, TunnelConfig, WebSearchConfig, WebhookConfig,
DelegateAgentConfig, DiscordConfig, DockerRuntimeConfig, GatewayConfig, HardwareConfig,
HardwareTransport, HeartbeatConfig, HttpRequestConfig, IMessageConfig, IdentityConfig,
LarkConfig, MatrixConfig, MemoryConfig, ModelRouteConfig, ObservabilityConfig,
PeripheralBoardConfig, PeripheralsConfig, ProxyConfig, ProxyScope, QueryClassificationConfig,
ReliabilityConfig, ResourceLimitsConfig, RuntimeConfig, SandboxBackend, SandboxConfig,
SchedulerConfig, SecretsConfig, SecurityConfig, SlackConfig, StorageConfig,
StorageProviderConfig, StorageProviderSection, StreamMode, TelegramConfig, TunnelConfig,
WebSearchConfig, WebhookConfig,
};
#[cfg(test)]
@ -36,7 +36,6 @@ mod tests {
allowed_users: vec!["alice".into()],
stream_mode: StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
};

File diff suppressed because it is too large.


@ -1,6 +1,5 @@
use crate::config::Config;
use crate::security::SecurityPolicy;
use anyhow::{bail, Result};
use anyhow::Result;
mod schedule;
mod store;
@ -97,58 +96,6 @@ pub fn handle_command(command: crate::CronCommands, config: &Config) -> Result<(
println!(" Cmd : {}", job.command);
Ok(())
}
crate::CronCommands::Update {
id,
expression,
tz,
command,
name,
} => {
if expression.is_none() && tz.is_none() && command.is_none() && name.is_none() {
bail!("At least one of --expression, --tz, --command, or --name must be provided");
}
// Merge expression/tz with the existing schedule so that
// --tz alone updates the timezone and --expression alone
// preserves the existing timezone.
let schedule = if expression.is_some() || tz.is_some() {
let existing = get_job(config, &id)?;
let (existing_expr, existing_tz) = match existing.schedule {
Schedule::Cron {
expr,
tz: existing_tz,
} => (expr, existing_tz),
_ => bail!("Cannot update expression/tz on a non-cron schedule"),
};
Some(Schedule::Cron {
expr: expression.unwrap_or(existing_expr),
tz: tz.or(existing_tz),
})
} else {
None
};
if let Some(ref cmd) = command {
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
if !security.is_command_allowed(cmd) {
bail!("Command blocked by security policy: {cmd}");
}
}
let patch = CronJobPatch {
schedule,
command,
name,
..CronJobPatch::default()
};
let job = update_job(config, &id, patch)?;
println!("\u{2705} Updated cron job {}", job.id);
println!(" Expr: {}", job.expression);
println!(" Next: {}", job.next_run.to_rfc3339());
println!(" Cmd : {}", job.command);
Ok(())
}
crate::CronCommands::Remove { id } => remove_job(config, &id),
crate::CronCommands::Pause { id } => {
pause_job(config, &id)?;
@ -220,197 +167,3 @@ fn parse_delay(input: &str) -> Result<chrono::Duration> {
};
Ok(duration)
}
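The removed `Update` arm merged a partial schedule patch with the stored job so that `--tz` alone updates the timezone while `--expression` alone preserves the existing timezone. The merge itself reduces to two `Option` combinators (sketch with hypothetical types):

```rust
#[derive(Clone, Debug, PartialEq)]
struct CronSpec {
    expr: String,
    tz: Option<String>,
}

// Carry over whichever field the caller did not supply.
fn merge(existing: CronSpec, expression: Option<String>, tz: Option<String>) -> CronSpec {
    CronSpec {
        expr: expression.unwrap_or(existing.expr),
        tz: tz.or(existing.tz),
    }
}
```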
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
fn test_config(tmp: &TempDir) -> Config {
let config = Config {
workspace_dir: tmp.path().join("workspace"),
config_path: tmp.path().join("config.toml"),
..Config::default()
};
std::fs::create_dir_all(&config.workspace_dir).unwrap();
config
}
fn make_job(config: &Config, expr: &str, tz: Option<&str>, cmd: &str) -> CronJob {
add_shell_job(
config,
None,
Schedule::Cron {
expr: expr.into(),
tz: tz.map(Into::into),
},
cmd,
)
.unwrap()
}
fn run_update(
config: &Config,
id: &str,
expression: Option<&str>,
tz: Option<&str>,
command: Option<&str>,
name: Option<&str>,
) -> Result<()> {
handle_command(
crate::CronCommands::Update {
id: id.into(),
expression: expression.map(Into::into),
tz: tz.map(Into::into),
command: command.map(Into::into),
name: name.map(Into::into),
},
config,
)
}
#[test]
fn update_changes_command_via_handler() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo original");
run_update(&config, &job.id, None, None, Some("echo updated"), None).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(updated.command, "echo updated");
assert_eq!(updated.id, job.id);
}
#[test]
fn update_changes_expression_via_handler() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo test");
run_update(&config, &job.id, Some("0 9 * * *"), None, None, None).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(updated.expression, "0 9 * * *");
}
#[test]
fn update_changes_name_via_handler() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo test");
run_update(&config, &job.id, None, None, None, Some("new-name")).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(updated.name.as_deref(), Some("new-name"));
}
#[test]
fn update_tz_alone_sets_timezone() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo test");
run_update(
&config,
&job.id,
None,
Some("America/Los_Angeles"),
None,
None,
)
.unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(
updated.schedule,
Schedule::Cron {
expr: "*/5 * * * *".into(),
tz: Some("America/Los_Angeles".into()),
}
);
}
#[test]
fn update_expression_preserves_existing_tz() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(
&config,
"*/5 * * * *",
Some("America/Los_Angeles"),
"echo test",
);
run_update(&config, &job.id, Some("0 9 * * *"), None, None, None).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(
updated.schedule,
Schedule::Cron {
expr: "0 9 * * *".into(),
tz: Some("America/Los_Angeles".into()),
}
);
}
#[test]
fn update_preserves_unchanged_fields() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = add_shell_job(
&config,
Some("original-name".into()),
Schedule::Cron {
expr: "*/5 * * * *".into(),
tz: None,
},
"echo original",
)
.unwrap();
run_update(&config, &job.id, None, None, Some("echo changed"), None).unwrap();
let updated = get_job(&config, &job.id).unwrap();
assert_eq!(updated.command, "echo changed");
assert_eq!(updated.name.as_deref(), Some("original-name"));
assert_eq!(updated.expression, "*/5 * * * *");
}
#[test]
fn update_no_flags_fails() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = make_job(&config, "*/5 * * * *", None, "echo test");
let result = run_update(&config, &job.id, None, None, None, None);
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("At least one of"));
}
#[test]
fn update_nonexistent_job_fails() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let result = run_update(
&config,
"nonexistent-id",
None,
None,
Some("echo test"),
None,
);
assert!(result.is_err());
}
#[test]
fn update_security_allows_safe_command() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
assert!(security.is_command_allowed("echo safe"));
}
}
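
The `CronJobPatch` used by the `Update` handler above (optional fields plus `..CronJobPatch::default()`) follows a common partial-update pattern. A minimal standalone sketch with simplified stand-in types (`Job`/`JobPatch` here are illustrative, not the crate's real structs):

```rust
// Sketch of the partial-update ("patch") pattern: Some(..) overrides,
// None preserves the existing value. Types are simplified stand-ins.
#[derive(Default)]
struct JobPatch {
    command: Option<String>,
    name: Option<String>,
}

#[derive(Debug, Clone, PartialEq)]
struct Job {
    command: String,
    name: Option<String>,
}

fn apply_patch(job: &Job, patch: JobPatch) -> Job {
    Job {
        command: patch.command.unwrap_or_else(|| job.command.clone()),
        name: patch.name.or_else(|| job.name.clone()),
    }
}

fn main() {
    let job = Job {
        command: "echo original".into(),
        name: Some("original-name".into()),
    };
    // Only --command was given; the name field stays untouched.
    let patch = JobPatch {
        command: Some("echo changed".into()),
        ..JobPatch::default()
    };
    let updated = apply_patch(&job, patch);
    assert_eq!(updated.command, "echo changed");
    assert_eq!(updated.name.as_deref(), Some("original-name"));
}
```

The `..Default::default()` spread keeps call sites stable when new optional fields are added to the patch struct later.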


@@ -61,7 +61,7 @@ async fn execute_job_with_retry(
for attempt in 0..=retries {
let (success, output) = match job.job_type {
JobType::Shell => run_job_command(config, security, job).await,
JobType::Agent => run_agent_job(config, security, job).await,
JobType::Agent => run_agent_job(config, job).await,
};
last_output = output;
@@ -116,31 +116,7 @@ async fn execute_and_persist_job(
(job.id.clone(), success)
}
async fn run_agent_job(
config: &Config,
security: &SecurityPolicy,
job: &CronJob,
) -> (bool, String) {
if !security.can_act() {
return (
false,
"blocked by security policy: autonomy is read-only".to_string(),
);
}
if security.is_rate_limited() {
return (
false,
"blocked by security policy: rate limit exceeded".to_string(),
);
}
if !security.record_action() {
return (
false,
"blocked by security policy: action budget exhausted".to_string(),
);
}
async fn run_agent_job(config: &Config, job: &CronJob) -> (bool, String) {
let name = job.name.clone().unwrap_or_else(|| "cron-job".to_string());
let prompt = job.prompt.clone().unwrap_or_default();
let prefixed_prompt = format!("[cron:{} {name}] {prompt}", job.id);
@@ -242,11 +218,14 @@ fn warn_if_high_frequency_agent_job(job: &CronJob) {
Schedule::Every { every_ms } => *every_ms < 5 * 60 * 1000,
Schedule::Cron { .. } => {
let now = Utc::now();
match (
next_run_for_schedule(&job.schedule, now),
next_run_for_schedule(&job.schedule, now + chrono::Duration::seconds(1)),
) {
(Ok(a), Ok(b)) => (b - a).num_minutes() < 5,
match next_run_for_schedule(&job.schedule, now) {
Ok(first) => {
// Get the occurrence *after* the first one to measure the actual interval.
match next_run_for_schedule(&job.schedule, first) {
Ok(second) => (second - first).num_minutes() < 5,
_ => false,
}
}
_ => false,
}
}
@@ -499,15 +478,13 @@ mod tests {
use chrono::{Duration as ChronoDuration, Utc};
use tempfile::TempDir;
async fn test_config(tmp: &TempDir) -> Config {
fn test_config(tmp: &TempDir) -> Config {
let config = Config {
workspace_dir: tmp.path().join("workspace"),
config_path: tmp.path().join("config.toml"),
..Config::default()
};
tokio::fs::create_dir_all(&config.workspace_dir)
.await
.unwrap();
std::fs::create_dir_all(&config.workspace_dir).unwrap();
config
}
@@ -539,7 +516,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_success() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp).await;
let config = test_config(&tmp);
let job = test_job("echo scheduler-ok");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -552,7 +529,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_failure() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp).await;
let config = test_config(&tmp);
let job = test_job("ls definitely_missing_file_for_scheduler_test");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -565,7 +542,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_times_out() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["sleep".into()];
let job = test_job("sleep 1");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -579,7 +556,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_blocks_disallowed_command() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["echo".into()];
let job = test_job("curl https://evil.example");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -593,7 +570,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_blocks_forbidden_path_argument() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["cat".into()];
let job = test_job("cat /etc/passwd");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -608,7 +585,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_blocks_readonly_mode() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
let mut config = test_config(&tmp);
config.autonomy.level = crate::security::AutonomyLevel::ReadOnly;
let job = test_job("echo should-not-run");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -622,7 +599,7 @@ mod tests {
#[tokio::test]
async fn run_job_command_blocks_rate_limited() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
let mut config = test_config(&tmp);
config.autonomy.max_actions_per_hour = 0;
let job = test_job("echo should-not-run");
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -636,17 +613,16 @@ mod tests {
#[tokio::test]
async fn execute_job_with_retry_recovers_after_first_failure() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
let mut config = test_config(&tmp);
config.reliability.scheduler_retries = 1;
config.reliability.provider_backoff_ms = 1;
config.autonomy.allowed_commands = vec!["sh".into()];
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
tokio::fs::write(
std::fs::write(
config.workspace_dir.join("retry-once.sh"),
"#!/bin/sh\nif [ -f retry-ok.flag ]; then\n echo recovered\n exit 0\nfi\ntouch retry-ok.flag\nexit 1\n",
)
.await
.unwrap();
let job = test_job("sh ./retry-once.sh");
@@ -658,7 +634,7 @@ mod tests {
#[tokio::test]
async fn execute_job_with_retry_exhausts_attempts() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp).await;
let mut config = test_config(&tmp);
config.reliability.scheduler_retries = 1;
config.reliability.provider_backoff_ms = 1;
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
@@ -673,53 +649,23 @@ mod tests {
#[tokio::test]
async fn run_agent_job_returns_error_without_provider_key() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp).await;
let config = test_config(&tmp);
let mut job = test_job("");
job.job_type = JobType::Agent;
job.prompt = Some("Say hello".into());
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
let (success, output) = run_agent_job(&config, &security, &job).await;
assert!(!success);
assert!(output.contains("agent job failed:"));
}
#[tokio::test]
async fn run_agent_job_blocks_readonly_mode() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.level = crate::security::AutonomyLevel::ReadOnly;
let mut job = test_job("");
job.job_type = JobType::Agent;
job.prompt = Some("Say hello".into());
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
let (success, output) = run_agent_job(&config, &security, &job).await;
assert!(!success);
assert!(output.contains("blocked by security policy"));
assert!(output.contains("read-only"));
}
#[tokio::test]
async fn run_agent_job_blocks_rate_limited() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.max_actions_per_hour = 0;
let mut job = test_job("");
job.job_type = JobType::Agent;
job.prompt = Some("Say hello".into());
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
let (success, output) = run_agent_job(&config, &security, &job).await;
assert!(!success);
assert!(output.contains("blocked by security policy"));
assert!(output.contains("rate limit exceeded"));
let (success, output) = run_agent_job(&config, &job).await;
assert!(!success, "Agent job without provider key should fail");
assert!(
!output.is_empty(),
"Expected non-empty error output from failed agent job"
);
}
#[tokio::test]
async fn persist_job_result_records_run_and_reschedules_shell_job() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp).await;
let config = test_config(&tmp);
let job = cron::add_job(&config, "*/5 * * * *", "echo ok").unwrap();
let started = Utc::now();
let finished = started + ChronoDuration::milliseconds(10);
@@ -736,7 +682,7 @@ mod tests {
#[tokio::test]
async fn persist_job_result_success_deletes_one_shot() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp).await;
let config = test_config(&tmp);
let at = Utc::now() + ChronoDuration::minutes(10);
let job = cron::add_agent_job(
&config,
@@ -761,7 +707,7 @@ mod tests {
#[tokio::test]
async fn persist_job_result_failure_disables_one_shot() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp).await;
let config = test_config(&tmp);
let at = Utc::now() + ChronoDuration::minutes(10);
let job = cron::add_agent_job(
&config,
@@ -787,7 +733,7 @@ mod tests {
#[tokio::test]
async fn deliver_if_configured_handles_none_and_invalid_channel() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp).await;
let config = test_config(&tmp);
let mut job = test_job("echo ok");
assert!(deliver_if_configured(&config, &job, "x").await.is_ok());
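
The `warn_if_high_frequency_agent_job` change in this file chains the second schedule lookup off the first occurrence instead of sampling at `now + 1s`. A minimal sketch of why that measures the real interval, using a hypothetical fixed-period `next_run` in place of `next_run_for_schedule` on a cron expression:

```rust
use std::time::Duration;

// Hypothetical next_run: first multiple of `period` strictly after `t`
// (a stand-in for next_run_for_schedule on a cron schedule).
fn next_run(t: Duration, period: Duration) -> Duration {
    let p = period.as_secs();
    Duration::from_secs((t.as_secs() / p + 1) * p)
}

fn main() {
    let period = Duration::from_secs(300); // "*/5 * * * *": every 5 minutes
    let now = Duration::from_secs(1_000);
    let first = next_run(now, period);
    // Chaining from `first` yields the occurrence after it, so the gap is
    // the schedule's actual interval. Sampling from `now + 1s` instead can
    // return the same occurrence twice and report a near-zero interval.
    let second = next_run(first, period);
    assert_eq!(second - first, period);
}
```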


@@ -66,7 +66,7 @@ pub async fn run(config: Config, host: String, port: u16) -> Result<()> {
max_backoff,
move || {
let cfg = heartbeat_cfg.clone();
async move { run_heartbeat_worker(cfg).await }
async move { Box::pin(run_heartbeat_worker(cfg)).await }
},
));
}
@@ -209,40 +209,17 @@ async fn run_heartbeat_worker(config: Config) -> Result<()> {
}
fn has_supervised_channels(config: &Config) -> bool {
let crate::config::ChannelsConfig {
cli: _, // `cli` is used only when running the CLI manually
webhook: _, // Managed by the gateway
telegram,
discord,
slack,
mattermost,
imessage,
matrix,
signal,
whatsapp,
email,
irc,
lark,
dingtalk,
linq,
qq,
..
} = &config.channels_config;
telegram.is_some()
|| discord.is_some()
|| slack.is_some()
|| mattermost.is_some()
|| imessage.is_some()
|| matrix.is_some()
|| signal.is_some()
|| whatsapp.is_some()
|| email.is_some()
|| irc.is_some()
|| lark.is_some()
|| dingtalk.is_some()
|| linq.is_some()
|| qq.is_some()
config.channels_config.telegram.is_some()
|| config.channels_config.discord.is_some()
|| config.channels_config.slack.is_some()
|| config.channels_config.imessage.is_some()
|| config.channels_config.matrix.is_some()
|| config.channels_config.signal.is_some()
|| config.channels_config.whatsapp.is_some()
|| config.channels_config.email.is_some()
|| config.channels_config.irc.is_some()
|| config.channels_config.lark.is_some()
|| config.channels_config.dingtalk.is_some()
}
#[cfg(test)]
@@ -321,7 +298,6 @@ mod tests {
allowed_users: vec![],
stream_mode: crate::config::StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
});
assert!(has_supervised_channels(&config));
@@ -337,29 +313,4 @@ mod tests {
});
assert!(has_supervised_channels(&config));
}
#[test]
fn detects_mattermost_as_supervised_channel() {
let mut config = Config::default();
config.channels_config.mattermost = Some(crate::config::schema::MattermostConfig {
url: "https://mattermost.example.com".into(),
bot_token: "token".into(),
channel_id: Some("channel-id".into()),
allowed_users: vec!["*".into()],
thread_replies: Some(true),
mention_only: Some(false),
});
assert!(has_supervised_channels(&config));
}
#[test]
fn detects_qq_as_supervised_channel() {
let mut config = Config::default();
config.channels_config.qq = Some(crate::config::schema::QQConfig {
app_id: "app-id".into(),
app_secret: "app-secret".into(),
allowed_users: vec!["*".into()],
});
assert!(has_supervised_channels(&config));
}
}
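
The `Box::pin` change above addresses clippy's `large_futures` lint: an `async fn`'s future stores inline every local that is live across an await point, so boxing it shrinks what the caller holds to a heap pointer. A minimal sketch (the oversized buffer is purely illustrative):

```rust
// An async fn whose future is large because a big local is live
// across an await point and must be stored inside the future.
async fn big_worker() {
    let buf = [0u8; 16 * 1024]; // illustrative oversized local
    std::future::ready(()).await; // suspension point keeps buf in the future
    std::hint::black_box(&buf);
}

fn main() {
    let inline = big_worker();
    let boxed = Box::pin(big_worker());
    // The boxed handle is just a pointer-sized value, which is what
    // clippy::large_futures asks for at spawn/select sites.
    println!("inline future: {} bytes", std::mem::size_of_val(&inline));
    println!("boxed handle:  {} bytes", std::mem::size_of_val(&boxed));
    assert!(std::mem::size_of_val(&inline) > std::mem::size_of_val(&boxed));
}
```

Neither future is polled here; the sketch only compares the size of the values the caller would have to store.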


@@ -344,58 +344,6 @@ fn check_config_semantics(config: &Config, items: &mut Vec<DiagItem>) {
}
}
// Embedding routes validation
for route in &config.embedding_routes {
if route.hint.trim().is_empty() {
items.push(DiagItem::warn(cat, "embedding route with empty hint"));
}
if let Some(reason) = embedding_provider_validation_error(&route.provider) {
items.push(DiagItem::warn(
cat,
format!(
"embedding route \"{}\" uses invalid provider \"{}\": {}",
route.hint, route.provider, reason
),
));
}
if route.model.trim().is_empty() {
items.push(DiagItem::warn(
cat,
format!("embedding route \"{}\" has empty model", route.hint),
));
}
if route.dimensions.is_some_and(|value| value == 0) {
items.push(DiagItem::warn(
cat,
format!(
"embedding route \"{}\" has invalid dimensions=0",
route.hint
),
));
}
}
if let Some(hint) = config
.memory
.embedding_model
.strip_prefix("hint:")
.map(str::trim)
.filter(|value| !value.is_empty())
{
if !config
.embedding_routes
.iter()
.any(|route| route.hint.trim() == hint)
{
items.push(DiagItem::warn(
cat,
format!(
"memory.embedding_model uses hint \"{hint}\" but no matching [[embedding_routes]] entry exists"
),
));
}
}
// Channel: at least one configured
let cc = &config.channels_config;
let has_channel = cc.telegram.is_some()
@@ -448,31 +396,6 @@ fn provider_validation_error(name: &str) -> Option<String> {
}
}
fn embedding_provider_validation_error(name: &str) -> Option<String> {
let normalized = name.trim();
if normalized.eq_ignore_ascii_case("none") || normalized.eq_ignore_ascii_case("openai") {
return None;
}
let Some(url) = normalized.strip_prefix("custom:") else {
return Some("supported values: none, openai, custom:<url>".into());
};
let url = url.trim();
if url.is_empty() {
return Some("custom provider requires a non-empty URL after 'custom:'".into());
}
match reqwest::Url::parse(url) {
Ok(parsed) if matches!(parsed.scheme(), "http" | "https") => None,
Ok(parsed) => Some(format!(
"custom provider URL must use http/https, got '{}'",
parsed.scheme()
)),
Err(err) => Some(format!("invalid custom provider URL: {err}")),
}
}
// ── Workspace integrity ──────────────────────────────────────────
fn check_workspace(config: &Config, items: &mut Vec<DiagItem>) {
@@ -968,62 +891,6 @@ mod tests {
assert_eq!(route_item.unwrap().severity, Severity::Warn);
}
#[test]
fn config_validation_warns_empty_embedding_route_model() {
let mut config = Config::default();
config.embedding_routes = vec![crate::config::EmbeddingRouteConfig {
hint: "semantic".into(),
provider: "openai".into(),
model: String::new(),
dimensions: Some(1536),
api_key: None,
}];
let mut items = Vec::new();
check_config_semantics(&config, &mut items);
let route_item = items.iter().find(|item| {
item.message
.contains("embedding route \"semantic\" has empty model")
});
assert!(route_item.is_some());
assert_eq!(route_item.unwrap().severity, Severity::Warn);
}
#[test]
fn config_validation_warns_invalid_embedding_route_provider() {
let mut config = Config::default();
config.embedding_routes = vec![crate::config::EmbeddingRouteConfig {
hint: "semantic".into(),
provider: "groq".into(),
model: "text-embedding-3-small".into(),
dimensions: None,
api_key: None,
}];
let mut items = Vec::new();
check_config_semantics(&config, &mut items);
let route_item = items
.iter()
.find(|item| item.message.contains("uses invalid provider \"groq\""));
assert!(route_item.is_some());
assert_eq!(route_item.unwrap().severity, Severity::Warn);
}
#[test]
fn config_validation_warns_missing_embedding_hint_target() {
let mut config = Config::default();
config.memory.embedding_model = "hint:semantic".into();
let mut items = Vec::new();
check_config_semantics(&config, &mut items);
let route_item = items.iter().find(|item| {
item.message
.contains("no matching [[embedding_routes]] entry exists")
});
assert!(route_item.is_some());
assert_eq!(route_item.unwrap().severity, Severity::Warn);
}
#[test]
fn environment_check_finds_git() {
let mut items = Vec::new();
@@ -1043,8 +910,8 @@ mod tests {
#[test]
fn truncate_for_display_preserves_utf8_boundaries() {
let preview = truncate_for_display("🙂example-alpha-build", 3);
assert_eq!(preview, "🙂ex");
let preview = truncate_for_display("版本号-alpha-build", 3);
assert_eq!(preview, "版本号");
}
#[test]
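
The updated `truncate_for_display` test above counts characters rather than bytes. A char-boundary-safe truncation consistent with both the old and new assertions can be sketched as follows (a hypothetical reimplementation, not the crate's actual function):

```rust
// Sketch: keep at most `max_chars` characters, never splitting a
// multi-byte UTF-8 sequence (byte-indexed slicing could panic here).
fn truncate_for_display(s: &str, max_chars: usize) -> String {
    s.chars().take(max_chars).collect()
}

fn main() {
    // Matches the expectations in the diff above.
    assert_eq!(truncate_for_display("版本号-alpha-build", 3), "版本号");
    assert_eq!(truncate_for_display("🙂example-alpha-build", 3), "🙂ex");
}
```

Note `char` counts Unicode scalar values, not grapheme clusters; that is sufficient for the boundary-safety the tests check.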


@@ -7,10 +7,10 @@
//! - Request timeouts (30s) to prevent slow-loris attacks
//! - Header sanitization (handled by axum/hyper)
use crate::channels::{Channel, LinqChannel, SendMessage, WhatsAppChannel};
use crate::channels::{Channel, SendMessage, WhatsAppChannel};
use crate::config::Config;
use crate::memory::{self, Memory, MemoryCategory};
use crate::providers::{self, ChatMessage, Provider, ProviderCapabilityError};
use crate::providers::{self, Provider};
use crate::runtime;
use crate::security::pairing::{constant_time_eq, is_public_bind, PairingGuard};
use crate::security::SecurityPolicy;
@@ -53,10 +53,6 @@ fn whatsapp_memory_key(msg: &crate::channels::traits::ChannelMessage) -> String
format!("whatsapp_{}_{}", msg.sender, msg.id)
}
fn linq_memory_key(msg: &crate::channels::traits::ChannelMessage) -> String {
format!("linq_{}_{}", msg.sender, msg.id)
}
fn hash_webhook_secret(value: &str) -> String {
use sha2::{Digest, Sha256};
@@ -278,9 +274,6 @@ pub struct AppState {
pub whatsapp: Option<Arc<WhatsAppChannel>>,
/// `WhatsApp` app secret for webhook signature verification (`X-Hub-Signature-256`)
pub whatsapp_app_secret: Option<Arc<str>>,
pub linq: Option<Arc<LinqChannel>>,
/// Linq webhook signing secret for signature verification
pub linq_signing_secret: Option<Arc<str>>,
/// Observability backend for metrics scraping
pub observer: Arc<dyn crate::observability::Observer>,
}
@@ -313,7 +306,6 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
auth_profile_override: None,
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
},
)?);
let model = config
@@ -368,16 +360,12 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
});
// WhatsApp channel (if configured)
let whatsapp_channel: Option<Arc<WhatsAppChannel>> = config
.channels_config
.whatsapp
.as_ref()
.filter(|wa| wa.is_cloud_config())
.map(|wa| {
let whatsapp_channel: Option<Arc<WhatsAppChannel>> =
config.channels_config.whatsapp.as_ref().map(|wa| {
Arc::new(WhatsAppChannel::new(
wa.access_token.clone().unwrap_or_default(),
wa.phone_number_id.clone().unwrap_or_default(),
wa.verify_token.clone().unwrap_or_default(),
wa.access_token.clone(),
wa.phone_number_id.clone(),
wa.verify_token.clone(),
wa.allowed_numbers.clone(),
))
});
@@ -401,34 +389,6 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
})
.map(Arc::from);
// Linq channel (if configured)
let linq_channel: Option<Arc<LinqChannel>> = config.channels_config.linq.as_ref().map(|lq| {
Arc::new(LinqChannel::new(
lq.api_token.clone(),
lq.from_phone.clone(),
lq.allowed_senders.clone(),
))
});
// Linq signing secret for webhook signature verification
// Priority: environment variable > config file
let linq_signing_secret: Option<Arc<str>> = std::env::var("ZEROCLAW_LINQ_SIGNING_SECRET")
.ok()
.and_then(|secret| {
let secret = secret.trim();
(!secret.is_empty()).then(|| secret.to_owned())
})
.or_else(|| {
config.channels_config.linq.as_ref().and_then(|lq| {
lq.signing_secret
.as_deref()
.map(str::trim)
.filter(|secret| !secret.is_empty())
.map(ToOwned::to_owned)
})
})
.map(Arc::from);
// ── Pairing guard ──────────────────────────────────────
let pairing = Arc::new(PairingGuard::new(
config.gateway.require_pairing,
@@ -480,9 +440,6 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
println!(" GET /whatsapp — Meta webhook verification");
println!(" POST /whatsapp — WhatsApp message webhook");
}
if linq_channel.is_some() {
println!(" POST /linq — Linq message webhook (iMessage/RCS/SMS)");
}
println!(" GET /health — health check");
println!(" GET /metrics — Prometheus metrics");
if let Some(code) = pairing.pairing_code() {
@@ -519,8 +476,6 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
idempotency_store,
whatsapp: whatsapp_channel,
whatsapp_app_secret,
linq: linq_channel,
linq_signing_secret,
observer,
};
@@ -532,7 +487,6 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
.route("/webhook", post(handle_webhook))
.route("/whatsapp", get(handle_whatsapp_verify))
.route("/whatsapp", post(handle_whatsapp_message))
.route("/linq", post(handle_linq_webhook))
.with_state(state)
.layer(RequestBodyLimitLayer::new(MAX_BODY_SIZE))
.layer(TimeoutLayer::with_status_code(
@@ -588,16 +542,15 @@ async fn handle_metrics(State(state): State<AppState>) -> impl IntoResponse {
}
/// POST /pair — exchange one-time code for bearer token
#[axum::debug_handler]
async fn handle_pair(
State(state): State<AppState>,
ConnectInfo(peer_addr): ConnectInfo<SocketAddr>,
headers: HeaderMap,
) -> impl IntoResponse {
let rate_key =
let client_key =
client_key_from_request(Some(peer_addr), &headers, state.trust_forwarded_headers);
if !state.rate_limiter.allow_pair(&rate_key) {
tracing::warn!("/pair rate limit exceeded");
if !state.rate_limiter.allow_pair(&client_key) {
tracing::warn!("/pair rate limit exceeded for key: {client_key}");
let err = serde_json::json!({
"error": "Too many pairing requests. Please retry later.",
"retry_after": RATE_LIMIT_WINDOW_SECS,
@@ -610,10 +563,10 @@
.and_then(|v| v.to_str().ok())
.unwrap_or("");
match state.pairing.try_pair(code, &rate_key).await {
match state.pairing.try_pair(code) {
Ok(Some(token)) => {
tracing::info!("🔐 New client paired successfully");
if let Err(err) = persist_pairing_tokens(state.config.clone(), &state.pairing).await {
if let Err(err) = persist_pairing_tokens(&state.config, &state.pairing) {
tracing::error!("🔐 Pairing succeeded but token persistence failed: {err:#}");
let body = serde_json::json!({
"paired": true,
@@ -650,66 +603,12 @@
}
}
async fn persist_pairing_tokens(config: Arc<Mutex<Config>>, pairing: &PairingGuard) -> Result<()> {
fn persist_pairing_tokens(config: &Arc<Mutex<Config>>, pairing: &PairingGuard) -> Result<()> {
let paired_tokens = pairing.tokens();
// This is needed because parking_lot's guard is not Send so we clone the inner
// this should be removed once async mutexes are used everywhere
let mut updated_cfg = { config.lock().clone() };
updated_cfg.gateway.paired_tokens = paired_tokens;
updated_cfg
.save()
.await
.context("Failed to persist paired tokens to config.toml")?;
// Keep shared runtime config in sync with persisted tokens.
*config.lock() = updated_cfg;
Ok(())
}
async fn run_gateway_chat_with_multimodal(
state: &AppState,
provider_label: &str,
message: &str,
) -> anyhow::Result<String> {
let user_messages = vec![ChatMessage::user(message)];
let image_marker_count = crate::multimodal::count_image_markers(&user_messages);
if image_marker_count > 0 && !state.provider.supports_vision() {
return Err(ProviderCapabilityError {
provider: provider_label.to_string(),
capability: "vision".to_string(),
message: format!(
"received {image_marker_count} image marker(s), but this provider does not support vision input"
),
}
.into());
}
// Keep webhook/gateway prompts aligned with channel behavior by injecting
// workspace-aware system context before model invocation.
let system_prompt = {
let config_guard = state.config.lock();
crate::channels::build_system_prompt(
&config_guard.workspace_dir,
&state.model,
&[], // tools - empty for simple chat
&[], // skills
Some(&config_guard.identity),
None, // bootstrap_max_chars - use default
)
};
let mut messages = Vec::with_capacity(1 + user_messages.len());
messages.push(ChatMessage::system(system_prompt));
messages.extend(user_messages);
let multimodal_config = state.config.lock().multimodal.clone();
let prepared =
crate::multimodal::prepare_messages_for_provider(&messages, &multimodal_config).await?;
state
.provider
.chat_with_history(&prepared.messages, &state.model, state.temperature)
.await
let mut cfg = config.lock();
cfg.gateway.paired_tokens = paired_tokens;
cfg.save()
.context("Failed to persist paired tokens to config.toml")
}
/// Webhook request body
@@ -725,10 +624,10 @@ async fn handle_webhook(
headers: HeaderMap,
body: Result<Json<WebhookBody>, axum::extract::rejection::JsonRejection>,
) -> impl IntoResponse {
let rate_key =
let client_key =
client_key_from_request(Some(peer_addr), &headers, state.trust_forwarded_headers);
if !state.rate_limiter.allow_webhook(&rate_key) {
tracing::warn!("/webhook rate limit exceeded");
if !state.rate_limiter.allow_webhook(&client_key) {
tracing::warn!("/webhook rate limit exceeded for key: {client_key}");
let err = serde_json::json!({
"error": "Too many webhook requests. Please retry later.",
"retry_after": RATE_LIMIT_WINDOW_SECS,
@@ -833,7 +732,11 @@
messages_count: 1,
});
match run_gateway_chat_with_multimodal(&state, &provider_label, message).await {
match state
.provider
.simple_chat(message, &state.model, state.temperature)
.await
{
Ok(response) => {
let duration = started_at.elapsed();
state
@@ -1017,12 +920,6 @@ async fn handle_whatsapp_message(
}
// Process each message
let provider_label = state
.config
.lock()
.default_provider
.clone()
.unwrap_or_else(|| "unknown".to_string());
for msg in &messages {
tracing::info!(
"WhatsApp message from {}: {}",
@@ -1039,7 +936,12 @@
.await;
}
match run_gateway_chat_with_multimodal(&state, &provider_label, &msg.content).await {
// Call the LLM
match state
.provider
.simple_chat(&msg.content, &state.model, state.temperature)
.await
{
Ok(response) => {
// Send reply via WhatsApp
if let Err(e) = wa
@@ -1065,120 +967,6 @@
(StatusCode::OK, Json(serde_json::json!({"status": "ok"})))
}
/// POST /linq — incoming message webhook (iMessage/RCS/SMS via Linq)
async fn handle_linq_webhook(
State(state): State<AppState>,
headers: HeaderMap,
body: Bytes,
) -> impl IntoResponse {
let Some(ref linq) = state.linq else {
return (
StatusCode::NOT_FOUND,
Json(serde_json::json!({"error": "Linq not configured"})),
);
};
let body_str = String::from_utf8_lossy(&body);
// ── Security: Verify X-Webhook-Signature if signing_secret is configured ──
if let Some(ref signing_secret) = state.linq_signing_secret {
let timestamp = headers
.get("X-Webhook-Timestamp")
.and_then(|v| v.to_str().ok())
.unwrap_or("");
let signature = headers
.get("X-Webhook-Signature")
.and_then(|v| v.to_str().ok())
.unwrap_or("");
if !crate::channels::linq::verify_linq_signature(
signing_secret,
&body_str,
timestamp,
signature,
) {
tracing::warn!(
"Linq webhook signature verification failed (signature: {})",
if signature.is_empty() {
"missing"
} else {
"invalid"
}
);
return (
StatusCode::UNAUTHORIZED,
Json(serde_json::json!({"error": "Invalid signature"})),
);
}
}
// Parse JSON body
let Ok(payload) = serde_json::from_slice::<serde_json::Value>(&body) else {
return (
StatusCode::BAD_REQUEST,
Json(serde_json::json!({"error": "Invalid JSON payload"})),
);
};
// Parse messages from the webhook payload
let messages = linq.parse_webhook_payload(&payload);
if messages.is_empty() {
// Acknowledge the webhook even if no messages (could be status/delivery events)
return (StatusCode::OK, Json(serde_json::json!({"status": "ok"})));
}
// Process each message
let provider_label = state
.config
.lock()
.default_provider
.clone()
.unwrap_or_else(|| "unknown".to_string());
for msg in &messages {
tracing::info!(
"Linq message from {}: {}",
msg.sender,
truncate_with_ellipsis(&msg.content, 50)
);
// Auto-save to memory
if state.auto_save {
let key = linq_memory_key(msg);
let _ = state
.mem
.store(&key, &msg.content, MemoryCategory::Conversation, None)
.await;
}
// Call the LLM
match run_gateway_chat_with_multimodal(&state, &provider_label, &msg.content).await {
Ok(response) => {
// Send reply via Linq
if let Err(e) = linq
.send(&SendMessage::new(response, &msg.reply_target))
.await
{
tracing::error!("Failed to send Linq reply: {e}");
}
}
Err(e) => {
tracing::error!("LLM error for Linq message: {e:#}");
let _ = linq
.send(&SendMessage::new(
"Sorry, I couldn't process your message right now.",
&msg.reply_target,
))
.await;
}
}
}
// Acknowledge the webhook
(StatusCode::OK, Json(serde_json::json!({"status": "ok"})))
}
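The handler above delegates the actual check to `crate::channels::linq::verify_linq_signature`, whose implementation is not shown in this diff. A common shape for such timestamp-plus-body webhook checks is sketched below — this is illustrative only, not the crate's actual code, and `compute_mac` is a toy placeholder for a real HMAC-SHA256 (e.g. via the `hmac`/`sha2` crates), since std has no keyed hash:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Placeholder keyed hash for this sketch only — NOT cryptographically
/// secure; a real implementation would use HMAC-SHA256.
fn compute_mac(secret: &str, msg: &str) -> String {
    let mut acc: u64 = 0xcbf2_9ce4_8422_2325;
    for b in secret.bytes().chain(msg.bytes()) {
        acc ^= u64::from(b);
        acc = acc.wrapping_mul(0x100_0000_01b3);
    }
    format!("{acc:016x}")
}

/// Compare without short-circuiting on the first mismatched byte.
fn constant_time_eq(a: &str, b: &str) -> bool {
    a.len() == b.len()
        && a.bytes().zip(b.bytes()).fold(0u8, |d, (x, y)| d | (x ^ y)) == 0
}

/// Verify a signature over "timestamp.body", rejecting stale timestamps
/// to limit replay (a 5-minute window is assumed here).
fn verify_signature(secret: &str, body: &str, timestamp: &str, signature: &str) -> bool {
    let Ok(ts) = timestamp.parse::<u64>() else { return false };
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    if now.abs_diff(ts) > 300 {
        return false;
    }
    let expected = compute_mac(secret, &format!("{timestamp}.{body}"));
    constant_time_eq(&expected, signature)
}
```

Note that the handler returns 401 on failure but distinguishes "missing" from "invalid" only in the log line, never in the response body — a deliberate choice that avoids leaking verification detail to callers.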
#[cfg(test)]
mod tests {
use super::*;
@ -1192,13 +980,6 @@ mod tests {
use parking_lot::Mutex;
use std::sync::atomic::{AtomicUsize, Ordering};
/// Generate a random hex secret at runtime to avoid hard-coded cryptographic values.
fn generate_test_secret() -> String {
use rand::Rng;
let bytes: [u8; 32] = rand::rng().random();
hex::encode(bytes)
}
#[test]
fn security_body_limit_is_64kb() {
assert_eq!(MAX_BODY_SIZE, 65_536);
@ -1253,8 +1034,6 @@ mod tests {
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
observer: Arc::new(crate::observability::NoopObserver),
};
@ -1296,8 +1075,6 @@ mod tests {
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
observer,
};
@ -1444,8 +1221,8 @@ mod tests {
assert_eq!(normalize_max_keys(1, 10_000), 1);
}
#[tokio::test]
async fn persist_pairing_tokens_writes_config_tokens() {
#[test]
fn persist_pairing_tokens_writes_config_tokens() {
let temp = tempfile::tempdir().unwrap();
let config_path = temp.path().join("config.toml");
let workspace_path = temp.path().join("workspace");
@ -1453,28 +1230,22 @@ mod tests {
let mut config = Config::default();
config.config_path = config_path.clone();
config.workspace_dir = workspace_path;
config.save().await.unwrap();
config.save().unwrap();
let guard = PairingGuard::new(true, &[]);
let code = guard.pairing_code().unwrap();
let token = guard.try_pair(&code, "test_client").await.unwrap().unwrap();
let token = guard.try_pair(&code).unwrap().unwrap();
assert!(guard.is_authenticated(&token));
let shared_config = Arc::new(Mutex::new(config));
persist_pairing_tokens(shared_config.clone(), &guard)
.await
.unwrap();
persist_pairing_tokens(&shared_config, &guard).unwrap();
let saved = tokio::fs::read_to_string(config_path).await.unwrap();
let saved = std::fs::read_to_string(config_path).unwrap();
let parsed: Config = toml::from_str(&saved).unwrap();
assert_eq!(parsed.gateway.paired_tokens.len(), 1);
let persisted = &parsed.gateway.paired_tokens[0];
assert_eq!(persisted.len(), 64);
assert!(persisted.chars().all(|c| c.is_ascii_hexdigit()));
let in_memory = shared_config.lock();
assert_eq!(in_memory.gateway.paired_tokens.len(), 1);
assert_eq!(&in_memory.gateway.paired_tokens[0], persisted);
}
#[test]
@ -1496,7 +1267,6 @@ mod tests {
content: "hello".into(),
channel: "whatsapp".into(),
timestamp: 1,
thread_ts: None,
};
let key = whatsapp_memory_key(&msg);
@ -1656,8 +1426,6 @@ mod tests {
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
observer: Arc::new(crate::observability::NoopObserver),
};
@ -1714,8 +1482,6 @@ mod tests {
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
observer: Arc::new(crate::observability::NoopObserver),
};
@ -1752,11 +1518,9 @@ mod tests {
#[test]
fn webhook_secret_hash_is_deterministic_and_nonempty() {
let secret_a = generate_test_secret();
let secret_b = generate_test_secret();
let one = hash_webhook_secret(&secret_a);
let two = hash_webhook_secret(&secret_a);
let other = hash_webhook_secret(&secret_b);
let one = hash_webhook_secret("secret-value");
let two = hash_webhook_secret("secret-value");
let other = hash_webhook_secret("other-value");
assert_eq!(one, two);
assert_ne!(one, other);
@ -1768,7 +1532,6 @@ mod tests {
let provider_impl = Arc::new(MockProvider::default());
let provider: Arc<dyn Provider> = provider_impl.clone();
let memory: Arc<dyn Memory> = Arc::new(MockMemory);
let secret = generate_test_secret();
let state = AppState {
config: Arc::new(Mutex::new(Config::default())),
@ -1777,15 +1540,13 @@ mod tests {
temperature: 0.0,
mem: memory,
auto_save: false,
webhook_secret_hash: Some(Arc::from(hash_webhook_secret(&secret))),
webhook_secret_hash: Some(Arc::from(hash_webhook_secret("super-secret"))),
pairing: Arc::new(PairingGuard::new(false, &[])),
trust_forwarded_headers: false,
rate_limiter: Arc::new(GatewayRateLimiter::new(100, 100, 100)),
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
observer: Arc::new(crate::observability::NoopObserver),
};
@ -1809,8 +1570,6 @@ mod tests {
let provider_impl = Arc::new(MockProvider::default());
let provider: Arc<dyn Provider> = provider_impl.clone();
let memory: Arc<dyn Memory> = Arc::new(MockMemory);
let valid_secret = generate_test_secret();
let wrong_secret = generate_test_secret();
let state = AppState {
config: Arc::new(Mutex::new(Config::default())),
@ -1819,23 +1578,18 @@ mod tests {
temperature: 0.0,
mem: memory,
auto_save: false,
webhook_secret_hash: Some(Arc::from(hash_webhook_secret(&valid_secret))),
webhook_secret_hash: Some(Arc::from(hash_webhook_secret("super-secret"))),
pairing: Arc::new(PairingGuard::new(false, &[])),
trust_forwarded_headers: false,
rate_limiter: Arc::new(GatewayRateLimiter::new(100, 100, 100)),
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
observer: Arc::new(crate::observability::NoopObserver),
};
let mut headers = HeaderMap::new();
headers.insert(
"X-Webhook-Secret",
HeaderValue::from_str(&wrong_secret).unwrap(),
);
headers.insert("X-Webhook-Secret", HeaderValue::from_static("wrong-secret"));
let response = handle_webhook(
State(state),
@ -1857,7 +1611,6 @@ mod tests {
let provider_impl = Arc::new(MockProvider::default());
let provider: Arc<dyn Provider> = provider_impl.clone();
let memory: Arc<dyn Memory> = Arc::new(MockMemory);
let secret = generate_test_secret();
let state = AppState {
config: Arc::new(Mutex::new(Config::default())),
@ -1866,20 +1619,18 @@ mod tests {
temperature: 0.0,
mem: memory,
auto_save: false,
webhook_secret_hash: Some(Arc::from(hash_webhook_secret(&secret))),
webhook_secret_hash: Some(Arc::from(hash_webhook_secret("super-secret"))),
pairing: Arc::new(PairingGuard::new(false, &[])),
trust_forwarded_headers: false,
rate_limiter: Arc::new(GatewayRateLimiter::new(100, 100, 100)),
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
observer: Arc::new(crate::observability::NoopObserver),
};
let mut headers = HeaderMap::new();
headers.insert("X-Webhook-Secret", HeaderValue::from_str(&secret).unwrap());
headers.insert("X-Webhook-Secret", HeaderValue::from_static("super-secret"));
let response = handle_webhook(
State(state),
@ -1915,13 +1666,14 @@ mod tests {
#[test]
fn whatsapp_signature_valid() {
let app_secret = generate_test_secret();
// Test with known values
let app_secret = "test_secret_key_12345";
let body = b"test body content";
let signature_header = compute_whatsapp_signature_header(&app_secret, body);
let signature_header = compute_whatsapp_signature_header(app_secret, body);
assert!(verify_whatsapp_signature(
&app_secret,
app_secret,
body,
&signature_header
));
@ -1929,14 +1681,14 @@ mod tests {
#[test]
fn whatsapp_signature_invalid_wrong_secret() {
let app_secret = generate_test_secret();
let wrong_secret = generate_test_secret();
let app_secret = "correct_secret_key_abc";
let wrong_secret = "wrong_secret_key_xyz";
let body = b"test body content";
let signature_header = compute_whatsapp_signature_header(&wrong_secret, body);
let signature_header = compute_whatsapp_signature_header(wrong_secret, body);
assert!(!verify_whatsapp_signature(
&app_secret,
app_secret,
body,
&signature_header
));
@ -1944,15 +1696,15 @@ mod tests {
#[test]
fn whatsapp_signature_invalid_wrong_body() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let original_body = b"original body";
let tampered_body = b"tampered body";
let signature_header = compute_whatsapp_signature_header(&app_secret, original_body);
let signature_header = compute_whatsapp_signature_header(app_secret, original_body);
// Verify with tampered body should fail
assert!(!verify_whatsapp_signature(
&app_secret,
app_secret,
tampered_body,
&signature_header
));
@ -1960,14 +1712,14 @@ mod tests {
#[test]
fn whatsapp_signature_missing_prefix() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let body = b"test body";
// Signature without "sha256=" prefix
let signature_header = "abc123def456";
assert!(!verify_whatsapp_signature(
&app_secret,
app_secret,
body,
signature_header
));
@ -1975,22 +1727,22 @@ mod tests {
#[test]
fn whatsapp_signature_empty_header() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let body = b"test body";
assert!(!verify_whatsapp_signature(&app_secret, body, ""));
assert!(!verify_whatsapp_signature(app_secret, body, ""));
}
#[test]
fn whatsapp_signature_invalid_hex() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let body = b"test body";
// Invalid hex characters
let signature_header = "sha256=not_valid_hex_zzz";
assert!(!verify_whatsapp_signature(
&app_secret,
app_secret,
body,
signature_header
));
@ -1998,13 +1750,13 @@ mod tests {
#[test]
fn whatsapp_signature_empty_body() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let body = b"";
let signature_header = compute_whatsapp_signature_header(&app_secret, body);
let signature_header = compute_whatsapp_signature_header(app_secret, body);
assert!(verify_whatsapp_signature(
&app_secret,
app_secret,
body,
&signature_header
));
@ -2012,13 +1764,13 @@ mod tests {
#[test]
fn whatsapp_signature_unicode_body() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let body = "Hello 🦀 World".as_bytes();
let signature_header = compute_whatsapp_signature_header(&app_secret, body);
let signature_header = compute_whatsapp_signature_header(app_secret, body);
assert!(verify_whatsapp_signature(
&app_secret,
app_secret,
body,
&signature_header
));
@ -2026,13 +1778,13 @@ mod tests {
#[test]
fn whatsapp_signature_json_payload() {
let app_secret = generate_test_secret();
let app_secret = "test_app_secret_key_xyz";
let body = br#"{"entry":[{"changes":[{"value":{"messages":[{"from":"1234567890","text":{"body":"Hello"}}]}}]}]}"#;
let signature_header = compute_whatsapp_signature_header(&app_secret, body);
let signature_header = compute_whatsapp_signature_header(app_secret, body);
assert!(verify_whatsapp_signature(
&app_secret,
app_secret,
body,
&signature_header
));
@ -2040,35 +1792,31 @@ mod tests {
#[test]
fn whatsapp_signature_case_sensitive_prefix() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let body = b"test body";
let hex_sig = compute_whatsapp_signature_hex(&app_secret, body);
let hex_sig = compute_whatsapp_signature_hex(app_secret, body);
// Wrong case prefix should fail
let wrong_prefix = format!("SHA256={hex_sig}");
assert!(!verify_whatsapp_signature(&app_secret, body, &wrong_prefix));
assert!(!verify_whatsapp_signature(app_secret, body, &wrong_prefix));
// Correct prefix should pass
let correct_prefix = format!("sha256={hex_sig}");
assert!(verify_whatsapp_signature(
&app_secret,
body,
&correct_prefix
));
assert!(verify_whatsapp_signature(app_secret, body, &correct_prefix));
}
#[test]
fn whatsapp_signature_truncated_hex() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let body = b"test body";
let hex_sig = compute_whatsapp_signature_hex(&app_secret, body);
let hex_sig = compute_whatsapp_signature_hex(app_secret, body);
let truncated = &hex_sig[..32]; // Only half the signature
let signature_header = format!("sha256={truncated}");
assert!(!verify_whatsapp_signature(
&app_secret,
app_secret,
body,
&signature_header
));
@ -2076,65 +1824,17 @@ mod tests {
#[test]
fn whatsapp_signature_extra_bytes() {
let app_secret = generate_test_secret();
let app_secret = "test_secret_key_12345";
let body = b"test body";
let hex_sig = compute_whatsapp_signature_hex(&app_secret, body);
let hex_sig = compute_whatsapp_signature_hex(app_secret, body);
let extended = format!("{hex_sig}deadbeef");
let signature_header = format!("sha256={extended}");
assert!(!verify_whatsapp_signature(
&app_secret,
app_secret,
body,
&signature_header
));
}
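The signature tests above pin down the header-parsing behavior of `verify_whatsapp_signature`: the `sha256=` prefix is case-sensitive, non-hex or truncated digests are rejected, and comparison is length-strict. A minimal sketch of just that parsing-and-comparison half (illustrative, not the crate's actual implementation; the HMAC computation itself is omitted):

```rust
/// Strip the case-sensitive "sha256=" prefix and hex-decode the digest.
/// Returns None for a missing prefix, odd length, or non-hex characters.
fn parse_sha256_header(header: &str) -> Option<Vec<u8>> {
    let hex = header.strip_prefix("sha256=")?;
    if !hex.is_ascii() || hex.len() % 2 != 0 {
        return None;
    }
    (0..hex.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).ok())
        .collect()
}

/// Constant-time byte comparison; a truncated or extended digest fails the
/// length check before any bytes are compared.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    a.len() == b.len()
        && a.iter().zip(b).fold(0u8, |d, (x, y)| d | (x ^ y)) == 0
}
```

Under this shape, the `SHA256=` wrong-case test fails at `strip_prefix`, the `not_valid_hex_zzz` test fails at decoding, and the truncated/extended-digest tests fail the length check in `ct_eq` — matching each assertion in the suite.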
// ══════════════════════════════════════════════════════════
// IdempotencyStore Edge-Case Tests
// ══════════════════════════════════════════════════════════
#[test]
fn idempotency_store_allows_different_keys() {
let store = IdempotencyStore::new(Duration::from_secs(60), 100);
assert!(store.record_if_new("key-a"));
assert!(store.record_if_new("key-b"));
assert!(store.record_if_new("key-c"));
assert!(store.record_if_new("key-d"));
}
#[test]
fn idempotency_store_max_keys_clamped_to_one() {
let store = IdempotencyStore::new(Duration::from_secs(60), 0);
assert!(store.record_if_new("only-key"));
assert!(!store.record_if_new("only-key"));
}
#[test]
fn idempotency_store_rapid_duplicate_rejected() {
let store = IdempotencyStore::new(Duration::from_secs(300), 100);
assert!(store.record_if_new("rapid"));
assert!(!store.record_if_new("rapid"));
}
#[test]
fn idempotency_store_accepts_after_ttl_expires() {
let store = IdempotencyStore::new(Duration::from_millis(1), 100);
assert!(store.record_if_new("ttl-key"));
std::thread::sleep(Duration::from_millis(10));
assert!(store.record_if_new("ttl-key"));
}
#[test]
fn idempotency_store_eviction_preserves_newest() {
let store = IdempotencyStore::new(Duration::from_secs(300), 1);
assert!(store.record_if_new("old-key"));
std::thread::sleep(Duration::from_millis(2));
assert!(store.record_if_new("new-key"));
let keys = store.keys.lock();
assert_eq!(keys.len(), 1);
assert!(!keys.contains_key("old-key"));
assert!(keys.contains_key("new-key"));
}
}
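The idempotency tests above exercise four properties: distinct keys pass, duplicates within the TTL are rejected, a zero capacity is clamped to one, and eviction at capacity drops the oldest key. A minimal std-only sketch with those semantics (field and method names mirror the tests but are illustrative, not the crate's actual definitions):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct IdempotencyStore {
    ttl: Duration,
    max_keys: usize,
    seq: u64, // monotonically increasing insertion order, for deterministic eviction
    keys: HashMap<String, (Instant, u64)>,
}

impl IdempotencyStore {
    fn new(ttl: Duration, max_keys: usize) -> Self {
        // Clamp capacity to at least one key, matching the `max_keys = 0` test.
        Self { ttl, max_keys: max_keys.max(1), seq: 0, keys: HashMap::new() }
    }

    /// Returns true (and records the key) only if it was not seen within the TTL.
    fn record_if_new(&mut self, key: &str) -> bool {
        let now = Instant::now();
        // Drop expired entries first so a key can be re-recorded after its TTL.
        self.keys.retain(|_, (seen, _)| now.duration_since(*seen) < self.ttl);
        if self.keys.contains_key(key) {
            return false;
        }
        if self.keys.len() >= self.max_keys {
            // Evict the entry inserted earliest so the newest keys survive.
            if let Some(oldest) = self
                .keys
                .iter()
                .min_by_key(|(_, (_, s))| *s)
                .map(|(k, _)| k.clone())
            {
                self.keys.remove(&oldest);
            }
        }
        self.seq += 1;
        self.keys.insert(key.to_string(), (now, self.seq));
        true
    }
}
```

The explicit sequence counter avoids relying on `Instant` resolution for eviction ordering, which the `eviction_preserves_newest` test would otherwise make flaky on coarse clocks.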



@ -1,10 +1,4 @@
//! USB device discovery — enumerate devices and enrich with board registry.
//!
//! USB enumeration via `nusb` is only supported on Linux, macOS, and Windows.
//! On Android (Termux) and other unsupported platforms this module is excluded
//! from compilation; callers in `hardware/mod.rs` fall back to an empty result.
#![cfg(any(target_os = "linux", target_os = "macos", target_os = "windows"))]
use super::registry;
use anyhow::Result;


@ -4,10 +4,10 @@
pub mod registry;
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
#[cfg(feature = "hardware")]
pub mod discover;
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
#[cfg(feature = "hardware")]
pub mod introspect;
use crate::config::Config;
@ -28,9 +28,8 @@ pub struct DiscoveredDevice {
/// Auto-discover connected hardware devices.
/// Returns an empty vec on platforms without hardware support.
pub fn discover_hardware() -> Vec<DiscoveredDevice> {
// USB/serial discovery is behind the "hardware" feature gate and only
// available on platforms where nusb supports device enumeration.
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
// USB/serial discovery is behind the "hardware" feature gate.
#[cfg(feature = "hardware")]
{
if let Ok(devices) = discover::list_usb_devices() {
return devices
@ -103,15 +102,7 @@ pub fn handle_command(cmd: crate::HardwareCommands, _config: &Config) -> Result<
return Ok(());
}
#[cfg(all(feature = "hardware", not(any(target_os = "linux", target_os = "macos", target_os = "windows"))))]
{
let _ = &cmd;
println!("Hardware USB discovery is not supported on this platform.");
println!("Supported platforms: Linux, macOS, Windows.");
return Ok(());
}
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
#[cfg(feature = "hardware")]
match cmd {
crate::HardwareCommands::Discover => run_discover(),
crate::HardwareCommands::Introspect { path } => run_introspect(&path),
@ -119,7 +110,7 @@ pub fn handle_command(cmd: crate::HardwareCommands, _config: &Config) -> Result<
}
}
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
#[cfg(feature = "hardware")]
fn run_discover() -> Result<()> {
let devices = discover::list_usb_devices()?;
@ -147,7 +138,7 @@ fn run_discover() -> Result<()> {
Ok(())
}
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
#[cfg(feature = "hardware")]
fn run_introspect(path: &str) -> Result<()> {
let result = introspect::introspect_device(path)?;
@ -169,7 +160,7 @@ fn run_introspect(path: &str) -> Result<()> {
Ok(())
}
#[cfg(all(feature = "hardware", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
#[cfg(feature = "hardware")]
fn run_info(chip: &str) -> Result<()> {
#[cfg(feature = "probe")]
{
@ -201,7 +192,7 @@ fn run_info(chip: &str) -> Result<()> {
}
}
#[cfg(all(feature = "hardware", feature = "probe", any(target_os = "linux", target_os = "macos", target_os = "windows")))]
#[cfg(all(feature = "hardware", feature = "probe"))]
fn info_via_probe(chip: &str) -> anyhow::Result<()> {
use probe_rs::config::MemoryRegion;
use probe_rs::{Session, SessionConfig};
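The change in this file collapses `cfg(all(feature = "hardware", any(target_os = ...)))` down to `cfg(feature = "hardware")`, keeping the gate-plus-fallback shape of `discover_hardware`. That shape, reduced to a self-contained sketch (illustrative; the real function returns `Vec<DiscoveredDevice>` and calls `discover::list_usb_devices()`):

```rust
/// When the feature is enabled the gated block returns early; when it is
/// disabled only the fallback compiles, so callers always get a Vec.
pub fn discover_hardware() -> Vec<String> {
    #[cfg(feature = "hardware")]
    {
        // Real code enumerates USB devices here; the value is a stand-in.
        return vec!["usb:0483:374b".to_string()];
    }
    #[cfg(not(feature = "hardware"))]
    {
        Vec::new()
    }
}
```

Because each branch is a complete `#[cfg]`-gated block, exactly one body exists in any given build and neither configuration produces unreachable-code warnings.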


@ -790,7 +790,6 @@ mod tests {
allowed_users: vec!["user".into()],
stream_mode: StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
});
let entries = all_integrations();


@ -39,49 +39,46 @@ use clap::Subcommand;
use serde::{Deserialize, Serialize};
pub mod agent;
pub(crate) mod approval;
pub(crate) mod auth;
pub mod approval;
pub mod auth;
pub mod channels;
pub mod config;
pub(crate) mod cost;
pub(crate) mod cron;
pub(crate) mod daemon;
pub(crate) mod doctor;
pub mod cost;
pub mod cron;
pub mod daemon;
pub mod doctor;
pub mod gateway;
pub(crate) mod hardware;
pub(crate) mod health;
pub(crate) mod heartbeat;
pub(crate) mod identity;
pub(crate) mod integrations;
pub mod hardware;
pub mod health;
pub mod heartbeat;
pub mod identity;
pub mod integrations;
pub mod memory;
pub(crate) mod migration;
pub(crate) mod multimodal;
pub mod migration;
pub mod observability;
pub(crate) mod onboard;
pub mod onboard;
pub mod peripherals;
pub mod providers;
pub mod rag;
pub mod runtime;
pub(crate) mod security;
pub(crate) mod service;
pub(crate) mod skills;
pub mod security;
pub mod service;
pub mod skills;
pub mod tools;
pub(crate) mod tunnel;
pub(crate) mod util;
pub mod tunnel;
pub mod util;
pub use config::Config;
/// Service management subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum ServiceCommands {
pub enum ServiceCommands {
/// Install daemon service unit for auto-start and restart
Install,
/// Start daemon service
Start,
/// Stop daemon service
Stop,
/// Restart daemon service to apply latest config
Restart,
/// Check daemon service status
Status,
/// Uninstall daemon service unit
@ -90,7 +87,7 @@ pub(crate) enum ServiceCommands {
/// Channel management subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum ChannelCommands {
pub enum ChannelCommands {
/// List all configured channels
List,
/// Start all configured channels (handled in main.rs for async)
@ -98,17 +95,6 @@ pub(crate) enum ChannelCommands {
/// Run health checks for configured channels (handled in main.rs for async)
Doctor,
/// Add a new channel configuration
#[command(long_about = "\
Add a new channel configuration.
Provide the channel type and a JSON object with the required \
configuration keys for that channel type.
Supported types: telegram, discord, slack, whatsapp, matrix, imessage, email.
Examples:
zeroclaw channel add telegram '{\"bot_token\":\"...\",\"name\":\"my-bot\"}'
zeroclaw channel add discord '{\"bot_token\":\"...\",\"name\":\"my-discord\"}'")]
Add {
/// Channel type (telegram, discord, slack, whatsapp, matrix, imessage, email)
channel_type: String,
@ -121,16 +107,6 @@ Examples:
name: String,
},
/// Bind a Telegram identity (username or numeric user ID) into allowlist
#[command(long_about = "\
Bind a Telegram identity into the allowlist.
Adds a Telegram username (without the '@' prefix) or numeric user \
ID to the channel allowlist so the agent will respond to messages \
from that identity.
Examples:
zeroclaw channel bind-telegram zeroclaw_user
zeroclaw channel bind-telegram 123456789")]
BindTelegram {
/// Telegram identity to allow (username without '@' or numeric user ID)
identity: String,
@ -139,12 +115,12 @@ Examples:
/// Skills management subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum SkillCommands {
pub enum SkillCommands {
/// List all installed skills
List,
/// Install a new skill from a git URL (HTTPS/SSH) or local path
/// Install a new skill from a URL or local path
Install {
/// Source git URL (HTTPS/SSH) or local path
/// Source URL or local path
source: String,
},
/// Remove an installed skill
@ -156,7 +132,7 @@ pub(crate) enum SkillCommands {
/// Migration subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum MigrateCommands {
pub enum MigrateCommands {
/// Import memory from an `OpenClaw` workspace into this `ZeroClaw` workspace
Openclaw {
/// Optional path to `OpenClaw` workspace (defaults to ~/.openclaw/workspace)
@ -171,20 +147,10 @@ pub(crate) enum MigrateCommands {
/// Cron subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum CronCommands {
pub enum CronCommands {
/// List all scheduled tasks
List,
/// Add a new scheduled task
#[command(long_about = "\
Add a new recurring scheduled task.
Uses standard 5-field cron syntax: 'min hour day month weekday'. \
Times are evaluated in UTC by default; use --tz with an IANA \
timezone name to override.
Examples:
zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York
zeroclaw cron add '*/30 * * * *' 'Check system health'")]
Add {
/// Cron expression
expression: String,
@ -195,14 +161,6 @@ Examples:
command: String,
},
/// Add a one-shot scheduled task at an RFC3339 timestamp
#[command(long_about = "\
Add a one-shot task that fires at a specific UTC timestamp.
The timestamp must be in RFC 3339 format (e.g. 2025-01-15T14:00:00Z).
Examples:
zeroclaw cron add-at 2025-01-15T14:00:00Z 'Send reminder'
zeroclaw cron add-at 2025-12-31T23:59:00Z 'Happy New Year!'")]
AddAt {
/// One-shot timestamp in RFC3339 format
at: String,
@ -210,14 +168,6 @@ Examples:
command: String,
},
/// Add a fixed-interval scheduled task
#[command(long_about = "\
Add a task that repeats at a fixed interval.
Interval is specified in milliseconds. For example, 60000 = 1 minute.
Examples:
zeroclaw cron add-every 60000 'Ping heartbeat' # every minute
zeroclaw cron add-every 3600000 'Hourly report' # every hour")]
AddEvery {
/// Interval in milliseconds
every_ms: u64,
@ -225,16 +175,6 @@ Examples:
command: String,
},
/// Add a one-shot delayed task (e.g. "30m", "2h", "1d")
#[command(long_about = "\
Add a one-shot task that fires after a delay from now.
Accepts human-readable durations: s (seconds), m (minutes), \
h (hours), d (days).
Examples:
zeroclaw cron once 30m 'Run backup in 30 minutes'
zeroclaw cron once 2h 'Follow up on deployment'
zeroclaw cron once 1d 'Daily check'")]
Once {
/// Delay duration
delay: String,
@ -246,32 +186,6 @@ Examples:
/// Task ID
id: String,
},
/// Update a scheduled task
#[command(long_about = "\
Update one or more fields of an existing scheduled task.
Only the fields you specify are changed; others remain unchanged.
Examples:
zeroclaw cron update <task-id> --expression '0 8 * * *'
zeroclaw cron update <task-id> --tz Europe/London --name 'Morning check'
zeroclaw cron update <task-id> --command 'Updated message'")]
Update {
/// Task ID
id: String,
/// New cron expression
#[arg(long)]
expression: Option<String>,
/// New IANA timezone
#[arg(long)]
tz: Option<String>,
/// New command to run
#[arg(long)]
command: Option<String>,
/// New job name
#[arg(long)]
name: Option<String>,
},
/// Pause a scheduled task
Pause {
/// Task ID
@ -286,7 +200,7 @@ Examples:
/// Integration subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum IntegrationCommands {
pub enum IntegrationCommands {
/// Show details about a specific integration
Info {
/// Integration name
@ -298,39 +212,13 @@ pub(crate) enum IntegrationCommands {
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub enum HardwareCommands {
/// Enumerate USB devices (VID/PID) and show known boards
#[command(long_about = "\
Enumerate USB devices and show known boards.
Scans connected USB devices by VID/PID and matches them against \
known development boards (STM32 Nucleo, Arduino, ESP32).
Examples:
zeroclaw hardware discover")]
Discover,
/// Introspect a device by path (e.g. /dev/ttyACM0)
#[command(long_about = "\
Introspect a device by its serial or device path.
Opens the specified device path and queries for board information, \
firmware version, and supported capabilities.
Examples:
zeroclaw hardware introspect /dev/ttyACM0
zeroclaw hardware introspect COM3")]
Introspect {
/// Serial or device path
path: String,
},
/// Get chip info via USB (probe-rs over ST-Link). No firmware needed on target.
#[command(long_about = "\
Get chip info via USB using probe-rs over ST-Link.
Queries the target MCU directly through the debug probe without \
requiring any firmware on the target board.
Examples:
zeroclaw hardware info
zeroclaw hardware info --chip STM32F401RETx")]
Info {
/// Chip name (e.g. STM32F401RETx). Default: STM32F401RETx for Nucleo-F401RE
#[arg(long, default_value = "STM32F401RETx")]
@ -344,19 +232,6 @@ pub enum PeripheralCommands {
/// List configured peripherals
List,
/// Add a peripheral (board path, e.g. nucleo-f401re /dev/ttyACM0)
#[command(long_about = "\
Add a peripheral by board type and transport path.
Registers a hardware board so the agent can use its tools (GPIO, \
sensors, actuators). Use 'native' as path for local GPIO on \
single-board computers like Raspberry Pi.
Supported boards: nucleo-f401re, rpi-gpio, esp32, arduino-uno.
Examples:
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0
zeroclaw peripheral add rpi-gpio native
zeroclaw peripheral add esp32 /dev/ttyUSB0")]
Add {
/// Board type (nucleo-f401re, rpi-gpio, esp32)
board: String,
@ -364,16 +239,6 @@ Examples:
path: String,
},
/// Flash ZeroClaw firmware to Arduino (creates .ino, installs arduino-cli if needed, uploads)
#[command(long_about = "\
Flash ZeroClaw firmware to an Arduino board.
Generates the .ino sketch, installs arduino-cli if it is not \
already available, compiles, and uploads the firmware.
Examples:
zeroclaw peripheral flash
zeroclaw peripheral flash --port /dev/cu.usbmodem12345
zeroclaw peripheral flash -p COM3")]
Flash {
/// Serial port (e.g. /dev/cu.usbmodem12345). If omitted, uses first arduino-uno from config.
#[arg(short, long)]
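The `Once` subcommand above accepts human-readable delays such as "30m", "2h", or "1d" (its `long_about` text documenting those forms is removed by this diff). A minimal parser for that format might look like the following sketch — illustrative only, as the crate's actual parser is not shown here and may accept more forms:

```rust
use std::time::Duration;

/// Parse delays like "30m", "2h", "1d" into a Duration.
/// Supported suffixes: s (seconds), m (minutes), h (hours), d (days).
fn parse_delay(s: &str) -> Option<Duration> {
    if !s.is_ascii() {
        return None;
    }
    // Split off the single-character unit suffix.
    let (num, unit) = s.split_at(s.len().checked_sub(1)?);
    let n: u64 = num.parse().ok()?;
    let secs = match unit {
        "s" => Some(n),
        "m" => n.checked_mul(60),
        "h" => n.checked_mul(3_600),
        "d" => n.checked_mul(86_400),
        _ => None,
    }?;
    Some(Duration::from_secs(secs))
}
```

Using `checked_mul` keeps absurd inputs like `u64::MAX` days from panicking in debug builds; they simply parse to `None`.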


@ -39,14 +39,6 @@ use serde::{Deserialize, Serialize};
use tracing::{info, warn};
use tracing_subscriber::{fmt, EnvFilter};
fn parse_temperature(s: &str) -> std::result::Result<f64, String> {
let t: f64 = s.parse().map_err(|e| format!("{e}"))?;
if !(0.0..=2.0).contains(&t) {
return Err("temperature must be between 0.0 and 2.0".to_string());
}
Ok(t)
}
mod agent;
mod approval;
mod auth;
@ -66,7 +58,6 @@ mod identity;
mod integrations;
mod memory;
mod migration;
mod multimodal;
mod observability;
mod onboard;
mod peripherals;
@ -104,8 +95,6 @@ enum ServiceCommands {
Start,
/// Stop daemon service
Stop,
/// Restart daemon service to apply latest config
Restart,
/// Check daemon service status
Status,
/// Uninstall daemon service unit
@ -131,26 +120,13 @@ enum Commands {
/// Provider name (used in quick mode, default: openrouter)
#[arg(long)]
provider: Option<String>,
/// Model ID override (used in quick mode)
#[arg(long)]
model: Option<String>,
/// Memory backend (sqlite, lucid, markdown, none) - used in quick mode, default: sqlite
#[arg(long)]
memory: Option<String>,
},
/// Start the AI agent loop
#[command(long_about = "\
Start the AI agent loop.
Launches an interactive chat session with the configured AI provider. \
Use --message for single-shot queries without entering interactive mode.
Examples:
zeroclaw agent # interactive session
zeroclaw agent -m \"Summarize today's logs\" # single message
zeroclaw agent -p anthropic --model claude-sonnet-4-20250514
zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0")]
Agent {
/// Single message mode (don't enter interactive mode)
#[arg(short, long)]
@ -165,7 +141,7 @@ Examples:
model: Option<String>,
/// Temperature (0.0 - 2.0)
#[arg(short, long, default_value = "0.7", value_parser = parse_temperature)]
#[arg(short, long, default_value = "0.7")]
temperature: f64,
/// Attach a peripheral (board:path, e.g. nucleo-f401re:/dev/ttyACM0)
@ -174,18 +150,6 @@ Examples:
},
/// Start the gateway server (webhooks, websockets)
#[command(long_about = "\
Start the gateway server (webhooks, websockets).
Runs the HTTP/WebSocket gateway that accepts incoming webhook events \
and WebSocket connections. Bind address defaults to the values in \
your config file (gateway.host / gateway.port).
Examples:
zeroclaw gateway # use config defaults
zeroclaw gateway -p 8080 # listen on port 8080
zeroclaw gateway --host 0.0.0.0 # bind to all interfaces
zeroclaw gateway -p 0 # random available port")]
Gateway {
/// Port to listen on (use 0 for random available port); defaults to config gateway.port
#[arg(short, long)]
@ -197,21 +161,6 @@ Examples:
},
/// Start long-running autonomous runtime (gateway + channels + heartbeat + scheduler)
#[command(long_about = "\
Start the long-running autonomous daemon.
Launches the full ZeroClaw runtime: gateway server, all configured \
channels (Telegram, Discord, Slack, etc.), heartbeat monitor, and \
the cron scheduler. This is the recommended way to run ZeroClaw in \
production or as an always-on assistant.
Use 'zeroclaw service install' to register the daemon as an OS \
service (systemd/launchd) for auto-start on boot.
Examples:
zeroclaw daemon # use config defaults
zeroclaw daemon -p 9090 # gateway on port 9090
zeroclaw daemon --host 127.0.0.1 # localhost only")]
Daemon {
/// Port to listen on (use 0 for random available port); defaults to config gateway.port
#[arg(short, long)]
@ -238,25 +187,6 @@ Examples:
Status,
/// Configure and manage scheduled tasks
#[command(long_about = "\
Configure and manage scheduled tasks.
Schedule recurring, one-shot, or interval-based tasks using cron \
expressions, RFC 3339 timestamps, durations, or fixed intervals.
Cron expressions use the standard 5-field format: \
'min hour day month weekday'. Timezones default to UTC; \
override with --tz and an IANA timezone name.
Examples:
zeroclaw cron list
zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York
zeroclaw cron add '*/30 * * * *' 'Check system health'
zeroclaw cron add-at 2025-01-15T14:00:00Z 'Send reminder'
zeroclaw cron add-every 60000 'Ping heartbeat'
zeroclaw cron once 30m 'Run backup in 30 minutes'
zeroclaw cron pause <task-id>
zeroclaw cron update <task-id> --expression '0 8 * * *' --tz Europe/London")]
Cron {
#[command(subcommand)]
cron_command: CronCommands,
@ -272,19 +202,6 @@ Examples:
Providers,
/// Manage channels (telegram, discord, slack)
#[command(long_about = "\
Manage communication channels.
Add, remove, list, and health-check channels that connect ZeroClaw \
to messaging platforms. Supported channel types: telegram, discord, \
slack, whatsapp, matrix, imessage, email.
Examples:
zeroclaw channel list
zeroclaw channel doctor
zeroclaw channel add telegram '{\"bot_token\":\"...\",\"name\":\"my-bot\"}'
zeroclaw channel remove my-bot
zeroclaw channel bind-telegram zeroclaw_user")]
Channel {
#[command(subcommand)]
channel_command: ChannelCommands,
@@ -315,62 +232,16 @@ Examples:
},
/// Discover and introspect USB hardware
#[command(long_about = "\
Discover and introspect USB hardware.
Enumerate connected USB devices, identify known development boards \
(STM32 Nucleo, Arduino, ESP32), and retrieve chip information via \
probe-rs / ST-Link.
Examples:
zeroclaw hardware discover
zeroclaw hardware introspect /dev/ttyACM0
zeroclaw hardware info --chip STM32F401RETx")]
Hardware {
#[command(subcommand)]
hardware_command: zeroclaw::HardwareCommands,
},
/// Manage hardware peripherals (STM32, RPi GPIO, etc.)
#[command(long_about = "\
Manage hardware peripherals.
Add, list, flash, and configure hardware boards that expose tools \
to the agent (GPIO, sensors, actuators). Supported boards: \
nucleo-f401re, rpi-gpio, esp32, arduino-uno.
Examples:
zeroclaw peripheral list
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0
zeroclaw peripheral add rpi-gpio native
zeroclaw peripheral flash --port /dev/cu.usbmodem12345
zeroclaw peripheral flash-nucleo")]
Peripheral {
#[command(subcommand)]
peripheral_command: zeroclaw::PeripheralCommands,
},
/// Manage configuration
#[command(long_about = "\
Manage ZeroClaw configuration.
Inspect and export configuration settings. Use 'schema' to dump \
the full JSON Schema for the config file, which documents every \
available key, type, and default value.
Examples:
zeroclaw config schema # print JSON Schema to stdout
zeroclaw config schema > schema.json")]
Config {
#[command(subcommand)]
config_command: ConfigCommands,
},
}
#[derive(Subcommand, Debug)]
enum ConfigCommands {
/// Dump the full configuration JSON Schema to stdout
Schema,
}
#[derive(Subcommand, Debug)]
@@ -510,23 +381,6 @@ enum CronCommands {
/// Task ID
id: String,
},
/// Update a scheduled task
Update {
/// Task ID
id: String,
/// New cron expression
#[arg(long)]
expression: Option<String>,
/// New IANA timezone
#[arg(long)]
tz: Option<String>,
/// New command to run
#[arg(long)]
command: Option<String>,
/// New job name
#[arg(long)]
name: Option<String>,
},
/// Pause a scheduled task
Pause {
/// Task ID
@@ -598,9 +452,9 @@ enum ChannelCommands {
enum SkillCommands {
/// List installed skills
List,
/// Install a skill from a git URL (HTTPS/SSH) or local path
/// Install a skill from a GitHub URL or local path
Install {
/// Git URL (HTTPS/SSH) or local path
/// GitHub URL or local path
source: String,
},
/// Remove an installed skill
@@ -649,7 +503,6 @@ async fn main() -> Result<()> {
channels_only,
api_key,
provider,
model,
memory,
} = &cli.command
{
@@ -657,30 +510,25 @@ async fn main() -> Result<()> {
let channels_only = *channels_only;
let api_key = api_key.clone();
let provider = provider.clone();
let model = model.clone();
let memory = memory.clone();
if interactive && channels_only {
bail!("Use either --interactive or --channels-only, not both");
}
if channels_only
&& (api_key.is_some() || provider.is_some() || model.is_some() || memory.is_some())
{
bail!("--channels-only does not accept --api-key, --provider, --model, or --memory");
if channels_only && (api_key.is_some() || provider.is_some() || memory.is_some()) {
bail!("--channels-only does not accept --api-key, --provider, or --memory");
}
let config = if channels_only {
onboard::run_channels_repair_wizard().await
let config = tokio::task::spawn_blocking(move || {
if channels_only {
onboard::run_channels_repair_wizard()
} else if interactive {
onboard::run_wizard().await
onboard::run_wizard()
} else {
onboard::run_quick_setup(
api_key.as_deref(),
provider.as_deref(),
model.as_deref(),
memory.as_deref(),
)
.await
}?;
onboard::run_quick_setup(api_key.as_deref(), provider.as_deref(), memory.as_deref())
}
})
.await??;
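The onboarding change above moves the now-synchronous wizard calls onto a blocking task and unwraps two layers of `Result` with `.await??`. A minimal stdlib-only sketch of that double-unwrap pattern, with a plain thread standing in for `tokio::task::spawn_blocking` and a hypothetical `run_wizard_blocking` in place of the real wizard (which returns `anyhow::Result<Config>`):

```rust
use std::thread;

// Hypothetical stand-in for the blocking onboarding wizard; the real code
// returns anyhow::Result<Config>.
fn run_wizard_blocking() -> Result<String, String> {
    Ok("config".to_string())
}

// Mirrors `tokio::task::spawn_blocking(move || { ... }).await??` from the
// diff: the first unwrap handles the join (panic) layer, the second the
// wizard's own Result. A plain thread stands in for spawn_blocking here.
fn run_wizard_off_executor() -> Result<String, String> {
    let joined = thread::spawn(run_wizard_blocking)
        .join()
        .map_err(|_| "wizard thread panicked".to_string())?; // layer 1: join error
    joined // layer 2: the wizard's own Result, propagated to the caller
}

fn main() {
    assert_eq!(run_wizard_off_executor().unwrap(), "config");
}
```

The point of the pattern is that interactive terminal I/O blocks its thread, so it must run off the async executor; the two `?` operators then surface a panicked task and a failed wizard as separate error layers.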
// Auto-start channels if user said yes during wizard
if std::env::var("ZEROCLAW_AUTOSTART_CHANNELS").as_deref() == Ok("1") {
channels::start_channels(config).await?;
@@ -689,7 +537,7 @@ async fn main() -> Result<()> {
}
// All other commands need config loaded first
let mut config = Config::load_or_init().await?;
let mut config = Config::load_or_init()?;
config.apply_env_overrides();
match cli.command {
@@ -877,14 +725,16 @@ async fn main() -> Result<()> {
Commands::Channel { channel_command } => match channel_command {
ChannelCommands::Start => channels::start_channels(config).await,
ChannelCommands::Doctor => channels::doctor_channels(config).await,
other => channels::handle_command(other, &config).await,
other => channels::handle_command(other, &config),
},
Commands::Integrations {
integration_command,
} => integrations::handle_command(integration_command, &config),
Commands::Skills { skill_command } => skills::handle_command(skill_command, &config),
Commands::Skills { skill_command } => {
skills::handle_command(skill_command, &config.workspace_dir)
}
Commands::Migrate { migrate_command } => {
migration::handle_command(migrate_command, &config).await
@@ -897,19 +747,8 @@ async fn main() -> Result<()> {
}
Commands::Peripheral { peripheral_command } => {
peripherals::handle_command(peripheral_command.clone(), &config).await
peripherals::handle_command(peripheral_command.clone(), &config)
}
Commands::Config { config_command } => match config_command {
ConfigCommands::Schema => {
let schema = schemars::schema_for!(config::Config);
println!(
"{}",
serde_json::to_string_pretty(&schema).expect("failed to serialize JSON Schema")
);
Ok(())
}
},
}
}
@@ -1095,11 +934,12 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
let account_id =
extract_openai_account_id_for_profile(&token_set.access_token);
auth_service.store_openai_tokens(&profile, token_set, account_id, true)?;
let saved = auth_service
.store_openai_tokens(&profile, token_set, account_id, true)?;
clear_pending_openai_login(config);
println!("Saved profile {profile}");
println!("Active profile for openai-codex: {profile}");
println!("Saved profile {}", saved.id);
println!("Active profile for openai-codex: {}", saved.id);
return Ok(());
}
Err(e) => {
@@ -1145,11 +985,11 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
auth::openai_oauth::exchange_code_for_tokens(&client, &code, &pkce).await?;
let account_id = extract_openai_account_id_for_profile(&token_set.access_token);
auth_service.store_openai_tokens(&profile, token_set, account_id, true)?;
let saved = auth_service.store_openai_tokens(&profile, token_set, account_id, true)?;
clear_pending_openai_login(config);
println!("Saved profile {profile}");
println!("Active profile for openai-codex: {profile}");
println!("Saved profile {}", saved.id);
println!("Active profile for openai-codex: {}", saved.id);
Ok(())
}
@@ -1198,11 +1038,11 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
auth::openai_oauth::exchange_code_for_tokens(&client, &code, &pkce).await?;
let account_id = extract_openai_account_id_for_profile(&token_set.access_token);
auth_service.store_openai_tokens(&profile, token_set, account_id, true)?;
let saved = auth_service.store_openai_tokens(&profile, token_set, account_id, true)?;
clear_pending_openai_login(config);
println!("Saved profile {profile}");
println!("Active profile for openai-codex: {profile}");
println!("Saved profile {}", saved.id);
println!("Active profile for openai-codex: {}", saved.id);
Ok(())
}
@@ -1228,9 +1068,10 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
kind.as_metadata_value().to_string(),
);
let saved =
auth_service.store_provider_token(&provider, &profile, &token, metadata, true)?;
println!("Saved profile {profile}");
println!("Active profile for {provider}: {profile}");
println!("Saved profile {}", saved.id);
println!("Active profile for {provider}: {}", saved.id);
Ok(())
}
@@ -1248,9 +1089,10 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
kind.as_metadata_value().to_string(),
);
let saved =
auth_service.store_provider_token(&provider, &profile, &token, metadata, true)?;
println!("Saved profile {profile}");
println!("Active profile for {provider}: {profile}");
println!("Saved profile {}", saved.id);
println!("Active profile for {provider}: {}", saved.id);
Ok(())
}
@@ -1289,8 +1131,8 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
AuthCommands::Use { provider, profile } => {
let provider = auth::normalize_provider(&provider)?;
auth_service.set_active_profile(&provider, &profile)?;
println!("Active profile for {provider}: {profile}");
let active = auth_service.set_active_profile(&provider, &profile)?;
println!("Active profile for {provider}: {active}");
Ok(())
}
@@ -1331,15 +1173,15 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
marker,
id,
profile.kind,
crate::security::redact(profile.account_id.as_deref().unwrap_or("unknown")),
profile.account_id.as_deref().unwrap_or("unknown"),
format_expiry(profile)
);
}
println!();
println!("Active profiles:");
for (provider, profile_id) in &data.active_profiles {
println!(" {provider}: {profile_id}");
for (provider, active) in &data.active_profiles {
println!(" {provider}: {active}");
}
Ok(())
@@ -1350,61 +1192,10 @@ async fn handle_auth_command(auth_command: AuthCommands, config: &Config) -> Res
#[cfg(test)]
mod tests {
use super::*;
use clap::{CommandFactory, Parser};
use clap::CommandFactory;
#[test]
fn cli_definition_has_no_flag_conflicts() {
Cli::command().debug_assert();
}
#[test]
fn onboard_help_includes_model_flag() {
let cmd = Cli::command();
let onboard = cmd
.get_subcommands()
.find(|subcommand| subcommand.get_name() == "onboard")
.expect("onboard subcommand must exist");
let has_model_flag = onboard
.get_arguments()
.any(|arg| arg.get_id().as_str() == "model" && arg.get_long() == Some("model"));
assert!(
has_model_flag,
"onboard help should include --model for quick setup overrides"
);
}
#[test]
fn onboard_cli_accepts_model_provider_and_api_key_in_quick_mode() {
let cli = Cli::try_parse_from([
"zeroclaw",
"onboard",
"--provider",
"openrouter",
"--model",
"custom-model-946",
"--api-key",
"sk-issue946",
])
.expect("quick onboard invocation should parse");
match cli.command {
Commands::Onboard {
interactive,
channels_only,
api_key,
provider,
model,
..
} => {
assert!(!interactive);
assert!(!channels_only);
assert_eq!(provider.as_deref(), Some("openrouter"));
assert_eq!(model.as_deref(), Some("custom-model-946"));
assert_eq!(api_key.as_deref(), Some("sk-issue946"));
}
other => panic!("expected onboard command, got {other:?}"),
}
}
}


@@ -3,14 +3,12 @@
// Splits on markdown headings and paragraph boundaries, respecting
// a max token limit per chunk. Preserves heading context.
use std::rc::Rc;
/// A single chunk of text with metadata.
#[derive(Debug, Clone)]
pub struct Chunk {
pub index: usize,
pub content: String,
pub heading: Option<Rc<str>>,
pub heading: Option<String>,
}
/// Split markdown text into chunks, each under `max_tokens` approximate tokens.
@@ -28,10 +26,9 @@ pub fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<Chunk> {
let max_chars = max_tokens * 4;
let sections = split_on_headings(text);
let mut chunks = Vec::with_capacity(sections.len());
let mut chunks = Vec::new();
for (heading, body) in sections {
let heading: Option<Rc<str>> = heading.map(Rc::from);
let full = if let Some(ref h) = heading {
format!("{h}\n{body}")
} else {
@@ -48,7 +45,7 @@ pub fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<Chunk> {
// Split on paragraphs (blank lines)
let paragraphs = split_on_blank_lines(&body);
let mut current = heading
.as_deref()
.as_ref()
.map_or_else(String::new, |h| format!("{h}\n"));
for para in paragraphs {
@@ -59,7 +56,7 @@ pub fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<Chunk> {
heading: heading.clone(),
});
current = heading
.as_deref()
.as_ref()
.map_or_else(String::new, |h| format!("{h}\n"));
}
@@ -72,7 +69,7 @@ pub fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<Chunk> {
heading: heading.clone(),
});
current = heading
.as_deref()
.as_ref()
.map_or_else(String::new, |h| format!("{h}\n"));
}
for line_chunk in split_on_lines(&para, max_chars) {
@@ -118,7 +115,8 @@ fn split_on_headings(text: &str) -> Vec<(Option<String>, String)> {
for line in text.lines() {
if line.starts_with("# ") || line.starts_with("## ") || line.starts_with("### ") {
if !current_body.trim().is_empty() || current_heading.is_some() {
sections.push((current_heading.take(), std::mem::take(&mut current_body)));
sections.push((current_heading.take(), current_body.clone()));
current_body.clear();
}
current_heading = Some(line.to_string());
} else {
@@ -142,7 +140,8 @@ fn split_on_blank_lines(text: &str) -> Vec<String> {
for line in text.lines() {
if line.trim().is_empty() {
if !current.trim().is_empty() {
paragraphs.push(std::mem::take(&mut current));
paragraphs.push(current.clone());
current.clear();
}
} else {
current.push_str(line);
@@ -159,12 +158,13 @@
/// Split text on line boundaries to fit within `max_chars`
fn split_on_lines(text: &str, max_chars: usize) -> Vec<String> {
let mut chunks = Vec::with_capacity(text.len() / max_chars.max(1) + 1);
let mut chunks = Vec::new();
let mut current = String::new();
for line in text.lines() {
if current.len() + line.len() + 1 > max_chars && !current.is_empty() {
chunks.push(std::mem::take(&mut current));
chunks.push(current.clone());
current.clear();
}
current.push_str(line);
current.push('\n');
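Both variants in the chunker hunks above flush a growing `String` buffer into the result vector when a boundary is hit. A self-contained sketch of the paragraph-splitting loop using the `std::mem::take` form, which moves the buffer out and leaves a fresh empty `String` in one step, instead of the `clone()`-then-`clear()` pair; `flush_paragraphs` is an illustrative name, not the crate's:

```rust
// Split text into paragraphs at blank lines, draining the scratch buffer
// with std::mem::take instead of clone() + clear().
fn flush_paragraphs(text: &str) -> Vec<String> {
    let mut paragraphs = Vec::new();
    let mut current = String::new();
    for line in text.lines() {
        if line.trim().is_empty() {
            if !current.is_empty() {
                // take() replaces `current` with String::new() and hands us
                // the old buffer by value: no extra allocation or copy.
                paragraphs.push(std::mem::take(&mut current));
            }
        } else {
            current.push_str(line);
            current.push('\n');
        }
    }
    if !current.is_empty() {
        paragraphs.push(current);
    }
    paragraphs
}

fn main() {
    let paras = flush_paragraphs("a\nb\n\nc\n");
    assert_eq!(paras, vec!["a\nb\n".to_string(), "c\n".to_string()]);
}
```

`String::new()` does not allocate, so the `take` form is both shorter and cheaper than cloning the buffer before clearing it.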


@@ -172,15 +172,6 @@ pub fn create_embedding_provider(
dims,
))
}
"openrouter" => {
let key = api_key.unwrap_or("");
Box::new(OpenAiEmbedding::new(
"https://openrouter.ai/api/v1",
key,
model,
dims,
))
}
name if name.starts_with("custom:") => {
let base_url = name.strip_prefix("custom:").unwrap_or("");
let key = api_key.unwrap_or("");
@@ -221,18 +212,6 @@ mod tests {
assert_eq!(p.dimensions(), 1536);
}
#[test]
fn factory_openrouter() {
let p = create_embedding_provider(
"openrouter",
Some("sk-or-test"),
"openai/text-embedding-3-small",
1536,
);
assert_eq!(p.name(), "openai"); // uses OpenAiEmbedding internally
assert_eq!(p.dimensions(), 1536);
}
#[test]
fn factory_custom_url() {
let p = create_embedding_provider("custom:http://localhost:1234", None, "model", 768);
@@ -302,20 +281,6 @@ mod tests {
assert_eq!(p.dimensions(), 384);
}
#[test]
fn embeddings_url_openrouter() {
let p = OpenAiEmbedding::new(
"https://openrouter.ai/api/v1",
"key",
"openai/text-embedding-3-small",
1536,
);
assert_eq!(
p.embeddings_url(),
"https://openrouter.ai/api/v1/embeddings"
);
}
#[test]
fn embeddings_url_standard_openai() {
let p = OpenAiEmbedding::new("https://api.openai.com", "key", "model", 1536);


@@ -608,7 +608,7 @@ exit 1
.iter()
.any(|e| e.content.contains("Rust should stay local-first")));
let context_calls = tokio::fs::read_to_string(&marker).await.unwrap_or_default();
let context_calls = fs::read_to_string(&marker).unwrap_or_default();
assert!(
context_calls.trim().is_empty(),
"Expected local-hit short-circuit; got calls: {context_calls}"
@@ -669,7 +669,7 @@ exit 1
assert!(first.is_empty());
assert!(second.is_empty());
let calls = tokio::fs::read_to_string(&marker).await.unwrap_or_default();
let calls = fs::read_to_string(&marker).unwrap_or_default();
assert_eq!(calls.lines().count(), 1);
}
}


@@ -229,6 +229,7 @@ impl Memory for MarkdownMemory {
#[cfg(test)]
mod tests {
use super::*;
use std::fs as sync_fs;
use tempfile::TempDir;
fn temp_workspace() -> (TempDir, MarkdownMemory) {
@@ -255,7 +256,7 @@ mod tests {
mem.store("pref", "User likes Rust", MemoryCategory::Core, None)
.await
.unwrap();
let content = fs::read_to_string(mem.core_path()).await.unwrap();
let content = sync_fs::read_to_string(mem.core_path()).unwrap();
assert!(content.contains("User likes Rust"));
}
@@ -266,7 +267,7 @@ mod tests {
.await
.unwrap();
let path = mem.daily_path();
let content = fs::read_to_string(path).await.unwrap();
let content = sync_fs::read_to_string(path).unwrap();
assert!(content.contains("Finished tests"));
}


@@ -27,7 +27,7 @@ pub use traits::Memory;
#[allow(unused_imports)]
pub use traits::{MemoryCategory, MemoryEntry};
use crate::config::{EmbeddingRouteConfig, MemoryConfig, StorageProviderConfig};
use crate::config::{MemoryConfig, StorageProviderConfig};
use anyhow::Context;
use std::path::Path;
use std::sync::Arc;
@@ -75,101 +75,13 @@ pub fn effective_memory_backend_name(
memory_backend.trim().to_ascii_lowercase()
}
/// Legacy auto-save key used for model-authored assistant summaries.
/// These entries are treated as untrusted context and should not be re-injected.
pub fn is_assistant_autosave_key(key: &str) -> bool {
let normalized = key.trim().to_ascii_lowercase();
normalized == "assistant_resp" || normalized.starts_with("assistant_resp_")
}
#[derive(Clone, PartialEq, Eq)]
struct ResolvedEmbeddingConfig {
provider: String,
model: String,
dimensions: usize,
api_key: Option<String>,
}
impl std::fmt::Debug for ResolvedEmbeddingConfig {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ResolvedEmbeddingConfig")
.field("provider", &self.provider)
.field("model", &self.model)
.field("dimensions", &self.dimensions)
.field("api_key", &self.api_key.as_ref().map(|_| "[REDACTED]"))
.finish()
}
}
fn resolve_embedding_config(
config: &MemoryConfig,
embedding_routes: &[EmbeddingRouteConfig],
api_key: Option<&str>,
) -> ResolvedEmbeddingConfig {
let fallback_api_key = api_key
.map(str::trim)
.filter(|value| !value.is_empty())
.map(str::to_string);
let fallback = ResolvedEmbeddingConfig {
provider: config.embedding_provider.trim().to_string(),
model: config.embedding_model.trim().to_string(),
dimensions: config.embedding_dimensions,
api_key: fallback_api_key.clone(),
};
let Some(hint) = config
.embedding_model
.strip_prefix("hint:")
.map(str::trim)
.filter(|value| !value.is_empty())
else {
return fallback;
};
let Some(route) = embedding_routes
.iter()
.find(|route| route.hint.trim() == hint)
else {
tracing::warn!(
hint,
"Unknown embedding route hint; falling back to [memory] embedding settings"
);
return fallback;
};
let provider = route.provider.trim();
let model = route.model.trim();
let dimensions = route.dimensions.unwrap_or(config.embedding_dimensions);
if provider.is_empty() || model.is_empty() || dimensions == 0 {
tracing::warn!(
hint,
"Invalid embedding route configuration; falling back to [memory] embedding settings"
);
return fallback;
}
let routed_api_key = route
.api_key
.as_deref()
.map(str::trim)
.filter(|value: &&str| !value.is_empty())
.map(|value| value.to_string());
ResolvedEmbeddingConfig {
provider: provider.to_string(),
model: model.to_string(),
dimensions,
api_key: routed_api_key.or(fallback_api_key),
}
}
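The `hint:` indirection in `resolve_embedding_config` above is, at its core, a lookup with fallback: a model string of the form `hint:<name>` is resolved against a route table, and anything that fails (no `hint:` prefix, empty hint, unknown hint) falls back to the configured value. A simplified string-only sketch of that rule, ignoring dimensions and API keys; `resolve_model` and the tuple-slice route table are illustrative, not the crate's types:

```rust
// Resolve "hint:<name>" model strings against a (hint, model) route table,
// falling back to the original string when routing does not apply.
fn resolve_model<'a>(model: &'a str, routes: &'a [(&'a str, &'a str)]) -> &'a str {
    match model.strip_prefix("hint:").map(str::trim) {
        Some(hint) if !hint.is_empty() => routes
            .iter()
            .find(|r| r.0 == hint)
            .map_or(model, |r| r.1), // unknown hint: keep the configured value
        _ => model, // not a hint at all: routing is bypassed
    }
}

fn main() {
    let routes = [("semantic", "custom-embed-v2")];
    assert_eq!(resolve_model("hint:semantic", &routes), "custom-embed-v2");
    assert_eq!(resolve_model("hint:missing", &routes), "hint:missing");
    assert_eq!(resolve_model("text-embedding-3-small", &routes), "text-embedding-3-small");
}
```

The real function applies the same fallback shape to the whole `(provider, model, dimensions, api_key)` tuple and logs a warning on each fallback path.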
/// Factory: create the right memory backend from config
pub fn create_memory(
config: &MemoryConfig,
workspace_dir: &Path,
api_key: Option<&str>,
) -> anyhow::Result<Box<dyn Memory>> {
create_memory_with_storage_and_routes(config, &[], None, workspace_dir, api_key)
create_memory_with_storage(config, None, workspace_dir, api_key)
}
/// Factory: create memory with optional storage-provider override.
@@ -178,21 +90,9 @@ pub fn create_memory_with_storage(
storage_provider: Option<&StorageProviderConfig>,
workspace_dir: &Path,
api_key: Option<&str>,
) -> anyhow::Result<Box<dyn Memory>> {
create_memory_with_storage_and_routes(config, &[], storage_provider, workspace_dir, api_key)
}
/// Factory: create memory with optional storage-provider override and embedding routes.
pub fn create_memory_with_storage_and_routes(
config: &MemoryConfig,
embedding_routes: &[EmbeddingRouteConfig],
storage_provider: Option<&StorageProviderConfig>,
workspace_dir: &Path,
api_key: Option<&str>,
) -> anyhow::Result<Box<dyn Memory>> {
let backend_name = effective_memory_backend_name(&config.backend, storage_provider);
let backend_kind = classify_memory_backend(&backend_name);
let resolved_embedding = resolve_embedding_config(config, embedding_routes, api_key);
// Best-effort memory hygiene/retention pass (throttled by state file).
if let Err(e) = hygiene::run_if_due(config, workspace_dir) {
@@ -237,14 +137,14 @@
fn build_sqlite_memory(
config: &MemoryConfig,
workspace_dir: &Path,
resolved_embedding: &ResolvedEmbeddingConfig,
api_key: Option<&str>,
) -> anyhow::Result<SqliteMemory> {
let embedder: Arc<dyn embeddings::EmbeddingProvider> =
Arc::from(embeddings::create_embedding_provider(
&resolved_embedding.provider,
resolved_embedding.api_key.as_deref(),
&resolved_embedding.model,
resolved_embedding.dimensions,
&config.embedding_provider,
api_key,
&config.embedding_model,
config.embedding_dimensions,
));
#[allow(clippy::cast_possible_truncation)]
@@ -284,7 +184,7 @@ pub fn create_memory_with_storage_and_routes(
create_memory_with_builders(
&backend_name,
workspace_dir,
|| build_sqlite_memory(config, workspace_dir, &resolved_embedding),
|| build_sqlite_memory(config, workspace_dir, api_key),
|| build_postgres_memory(storage_provider),
"",
)
@@ -347,7 +247,7 @@ pub fn create_response_cache(config: &MemoryConfig, workspace_dir: &Path) -> Opt
#[cfg(test)]
mod tests {
use super::*;
use crate::config::{EmbeddingRouteConfig, StorageProviderConfig};
use crate::config::StorageProviderConfig;
use tempfile::TempDir;
#[test]
@@ -361,15 +261,6 @@
assert_eq!(mem.name(), "sqlite");
}
#[test]
fn assistant_autosave_key_detection_matches_legacy_patterns() {
assert!(is_assistant_autosave_key("assistant_resp"));
assert!(is_assistant_autosave_key("assistant_resp_1234"));
assert!(is_assistant_autosave_key("ASSISTANT_RESP_abcd"));
assert!(!is_assistant_autosave_key("assistant_response"));
assert!(!is_assistant_autosave_key("user_msg_1234"));
}
#[test]
fn factory_markdown() {
let tmp = TempDir::new().unwrap();
@@ -462,102 +353,4 @@
.expect("postgres without db_url should be rejected");
assert!(error.to_string().contains("db_url"));
}
#[test]
fn resolve_embedding_config_uses_base_config_when_model_is_not_hint() {
let cfg = MemoryConfig {
embedding_provider: "openai".into(),
embedding_model: "text-embedding-3-small".into(),
embedding_dimensions: 1536,
..MemoryConfig::default()
};
let resolved = resolve_embedding_config(&cfg, &[], Some("base-key"));
assert_eq!(
resolved,
ResolvedEmbeddingConfig {
provider: "openai".into(),
model: "text-embedding-3-small".into(),
dimensions: 1536,
api_key: Some("base-key".into()),
}
);
}
#[test]
fn resolve_embedding_config_uses_matching_route_with_api_key_override() {
let cfg = MemoryConfig {
embedding_provider: "none".into(),
embedding_model: "hint:semantic".into(),
embedding_dimensions: 1536,
..MemoryConfig::default()
};
let routes = vec![EmbeddingRouteConfig {
hint: "semantic".into(),
provider: "custom:https://api.example.com/v1".into(),
model: "custom-embed-v2".into(),
dimensions: Some(1024),
api_key: Some("route-key".into()),
}];
let resolved = resolve_embedding_config(&cfg, &routes, Some("base-key"));
assert_eq!(
resolved,
ResolvedEmbeddingConfig {
provider: "custom:https://api.example.com/v1".into(),
model: "custom-embed-v2".into(),
dimensions: 1024,
api_key: Some("route-key".into()),
}
);
}
#[test]
fn resolve_embedding_config_falls_back_when_hint_is_missing() {
let cfg = MemoryConfig {
embedding_provider: "openai".into(),
embedding_model: "hint:semantic".into(),
embedding_dimensions: 1536,
..MemoryConfig::default()
};
let resolved = resolve_embedding_config(&cfg, &[], Some("base-key"));
assert_eq!(
resolved,
ResolvedEmbeddingConfig {
provider: "openai".into(),
model: "hint:semantic".into(),
dimensions: 1536,
api_key: Some("base-key".into()),
}
);
}
#[test]
fn resolve_embedding_config_falls_back_when_route_is_invalid() {
let cfg = MemoryConfig {
embedding_provider: "openai".into(),
embedding_model: "hint:semantic".into(),
embedding_dimensions: 1536,
..MemoryConfig::default()
};
let routes = vec![EmbeddingRouteConfig {
hint: "semantic".into(),
provider: String::new(),
model: "text-embedding-3-small".into(),
dimensions: Some(0),
api_key: None,
}];
let resolved = resolve_embedding_config(&cfg, &routes, Some("base-key"));
assert_eq!(
resolved,
ResolvedEmbeddingConfig {
provider: "openai".into(),
model: "hint:semantic".into(),
dimensions: 1536,
api_key: Some("base-key".into()),
}
);
}
}


@@ -30,32 +30,6 @@ impl PostgresMemory {
validate_identifier(schema, "storage schema")?;
validate_identifier(table, "storage table")?;
let schema_ident = quote_identifier(schema);
let table_ident = quote_identifier(table);
let qualified_table = format!("{schema_ident}.{table_ident}");
let client = Self::initialize_client(
db_url.to_string(),
connect_timeout_secs,
schema_ident.clone(),
qualified_table.clone(),
)?;
Ok(Self {
client: Arc::new(Mutex::new(client)),
qualified_table,
})
}
fn initialize_client(
db_url: String,
connect_timeout_secs: Option<u64>,
schema_ident: String,
qualified_table: String,
) -> Result<Client> {
let init_handle = std::thread::Builder::new()
.name("postgres-memory-init".to_string())
.spawn(move || -> Result<Client> {
let mut config: postgres::Config = db_url
.parse()
.context("invalid PostgreSQL connection URL")?;
@@ -69,16 +43,16 @@
.connect(NoTls)
.context("failed to connect to PostgreSQL memory backend")?;
let schema_ident = quote_identifier(schema);
let table_ident = quote_identifier(table);
let qualified_table = format!("{schema_ident}.{table_ident}");
Self::init_schema(&mut client, &schema_ident, &qualified_table)?;
Ok(client)
Ok(Self {
client: Arc::new(Mutex::new(client)),
qualified_table,
})
.context("failed to spawn PostgreSQL initializer thread")?;
let init_result = init_handle
.join()
.map_err(|_| anyhow::anyhow!("PostgreSQL initializer thread panicked"))?;
init_result
}
fn init_schema(client: &mut Client, schema_ident: &str, qualified_table: &str) -> Result<()> {
@@ -183,7 +157,7 @@ impl Memory for PostgresMemory {
let key = key.to_string();
let content = content.to_string();
let category = Self::category_to_str(&category);
let sid = session_id.map(str::to_string);
let session_id = session_id.map(str::to_string);
tokio::task::spawn_blocking(move || -> Result<()> {
let now = Utc::now();
@@ -203,7 +177,10 @@
);
let id = Uuid::new_v4().to_string();
client.execute(&stmt, &[&id, &key, &content, &category, &now, &now, &sid])?;
client.execute(
&stmt,
&[&id, &key, &content, &category, &now, &now, &session_id],
)?;
Ok(())
})
.await?
@@ -218,7 +195,7 @@
let client = self.client.clone();
let qualified_table = self.qualified_table.clone();
let query = query.trim().to_string();
let sid = session_id.map(str::to_string);
let session_id = session_id.map(str::to_string);
tokio::task::spawn_blocking(move || -> Result<Vec<MemoryEntry>> {
let mut client = client.lock();
@@ -240,7 +217,7 @@
#[allow(clippy::cast_possible_wrap)]
let limit_i64 = limit as i64;
let rows = client.query(&stmt, &[&query, &sid, &limit_i64])?;
let rows = client.query(&stmt, &[&query, &session_id, &limit_i64])?;
rows.iter()
.map(Self::row_to_entry)
.collect::<Result<Vec<MemoryEntry>>>()
@@ -278,7 +255,7 @@
let client = self.client.clone();
let qualified_table = self.qualified_table.clone();
let category = category.map(Self::category_to_str);
let sid = session_id.map(str::to_string);
let session_id = session_id.map(str::to_string);
tokio::task::spawn_blocking(move || -> Result<Vec<MemoryEntry>> {
let mut client = client.lock();
@@ -293,7 +270,7 @@
);
let category_ref = category.as_deref();
let session_ref = sid.as_deref();
let session_ref = session_id.as_deref();
let rows = client.query(&stmt, &[&category_ref, &session_ref])?;
rows.iter()
.map(Self::row_to_entry)
@@ -372,22 +349,4 @@
MemoryCategory::Custom("custom_notes".into())
);
}
#[tokio::test(flavor = "current_thread")]
async fn new_does_not_panic_inside_tokio_runtime() {
let outcome = std::panic::catch_unwind(|| {
PostgresMemory::new(
"postgres://zeroclaw:password@127.0.0.1:1/zeroclaw",
"public",
"memories",
Some(1),
)
});
assert!(outcome.is_ok(), "PostgresMemory::new should not panic");
assert!(
outcome.unwrap().is_err(),
"PostgresMemory::new should return a connect error for an unreachable endpoint"
);
}
}
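Several hunks in this file follow the same shape for calling the synchronous `postgres::Client` from async code: clone the `Arc<Mutex<Client>>`, convert borrowed `&str` arguments into owned `String`s (the `session_id = session_id.map(str::to_string)` lines above), and run the query inside `tokio::task::spawn_blocking`. A stdlib-only sketch of that ownership dance, with a plain thread standing in for `spawn_blocking` and a hypothetical `Client` stub in place of the real driver:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stub for the synchronous postgres::Client used in the diff.
struct Client {
    rows: Vec<String>,
}

// Mirrors the `store` path: clone the Arc, take owned copies of borrowed
// arguments before moving them into the task, and run the blocking call
// off the executor (tokio::task::spawn_blocking in the real code; a plain
// thread stands in for it here).
fn store_blocking(client: &Arc<Mutex<Client>>, key: &str) {
    let client = Arc::clone(client);
    let key = key.to_string(); // own the &str so the closure can be 'static
    thread::spawn(move || {
        let mut client = client.lock().expect("client mutex poisoned");
        client.rows.push(key);
    })
    .join()
    .expect("store task panicked");
}

fn main() {
    let client = Arc::new(Mutex::new(Client { rows: Vec::new() }));
    store_blocking(&client, "pref");
    assert_eq!(client.lock().unwrap().rows, vec!["pref".to_string()]);
}
```

The owned copies are required because the spawned closure must be `'static`: it may outlive the caller's borrows, so every `&str` crosses the boundary as a `String`.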


@@ -452,7 +452,7 @@ impl Memory for SqliteMemory {
let conn = self.conn.clone();
let key = key.to_string();
let content = content.to_string();
let sid = session_id.map(String::from);
let session_id = session_id.map(String::from);
tokio::task::spawn_blocking(move || -> anyhow::Result<()> {
let conn = conn.lock();
@@ -469,7 +469,7 @@
embedding = excluded.embedding,
updated_at = excluded.updated_at,
session_id = excluded.session_id",
params![id, key, content, cat, embedding_bytes, now, now, sid],
params![id, key, content, cat, embedding_bytes, now, now, session_id],
)?;
Ok(())
})
@@ -491,13 +491,13 @@
let conn = self.conn.clone();
let query = query.to_string();
let sid = session_id.map(String::from);
let session_id = session_id.map(String::from);
let vector_weight = self.vector_weight;
let keyword_weight = self.keyword_weight;
tokio::task::spawn_blocking(move || -> anyhow::Result<Vec<MemoryEntry>> {
let conn = conn.lock();
let session_ref = sid.as_deref();
let session_ref = session_id.as_deref();
// FTS5 BM25 keyword search
let keyword_results = Self::fts5_search(&conn, &query, limit * 2).unwrap_or_default();
@@ -691,11 +691,11 @@
let conn = self.conn.clone();
let category = category.cloned();
let sid = session_id.map(String::from);
let session_id = session_id.map(String::from);
tokio::task::spawn_blocking(move || -> anyhow::Result<Vec<MemoryEntry>> {
let conn = conn.lock();
let session_ref = sid.as_deref();
let session_ref = session_id.as_deref();
let mut results = Vec::new();
let row_mapper = |row: &rusqlite::Row| -> rusqlite::Result<MemoryEntry> {

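The SqliteMemory hunks above show a `vector_weight` / `keyword_weight` pair feeding a hybrid recall that runs an FTS5 BM25 keyword pass alongside vector search. The diff does not show how the two scores are combined, so the following is only a plausible sketch of a weighted linear fusion under that assumption, not necessarily the crate's exact formula:

```rust
// Assumed fusion: both scores normalized to [0, 1], combined as a
// weighted sum. The diff only reveals the two weight fields and the
// BM25-plus-vector dual pass, not the actual combination.
fn hybrid_score(vector_sim: f64, keyword_score: f64, vector_weight: f64, keyword_weight: f64) -> f64 {
    vector_weight * vector_sim + keyword_weight * keyword_score
}

fn main() {
    // With a 0.7 / 0.3 split, a strong vector match outranks a pure keyword hit.
    assert!(hybrid_score(0.9, 0.1, 0.7, 0.3) > hybrid_score(0.4, 1.0, 0.7, 0.3));
    // Weights summing to 1.0 keep a perfect match at score 1.0.
    assert!((hybrid_score(1.0, 1.0, 0.7, 0.3) - 1.0).abs() < 1e-9);
}
```

Whatever the real formula, the two-weight design lets deployments bias recall toward semantic similarity or exact keyword hits without touching the search code.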
Some files were not shown because too many files have changed in this diff.