Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.
The fastest, smallest, fully autonomous AI assistant — deploy anywhere, swap anything.

```
~3MB binary · <10ms startup · 502 tests · 22 providers · Pluggable everything
```

## Quick Start

```bash
git clone https://github.com/theonlyhennygod/zeroclaw.git
cd zeroclaw
cargo build --release

# Initialize config + workspace
cargo run --release -- onboard

# Set your API key
export OPENROUTER_API_KEY="sk-..."

# Chat
cargo run --release -- agent -m "Hello, ZeroClaw!"

# Interactive mode
cargo run --release -- agent

# Check status
cargo run --release -- status --verbose

# List tools (includes memory tools)
cargo run --release -- tools list

# Test a tool directly
cargo run --release -- tools test memory_store '{"key": "lang", "content": "User prefers Rust"}'
cargo run --release -- tools test memory_recall '{"query": "Rust"}'
```

> **Tip:** Run `cargo install --path .` to install `zeroclaw` globally, then use `zeroclaw` instead of `cargo run --release --`.

## Architecture

Every subsystem is a **trait** — swap implementations with a config change, zero code changes.

| Subsystem | Trait | Ships with | Extend |
|-----------|-------|------------|--------|
| **AI Models** | `Provider` | 22 providers (OpenRouter, Anthropic, OpenAI, Venice, Groq, Mistral, etc.) | Any OpenAI-compatible API |
| **Channels** | `Channel` | CLI, Telegram, Discord, Slack, iMessage, Matrix, Webhook | Any messaging API |
| **Memory** | `Memory` | SQLite (default), Markdown | Any persistence |
| **Tools** | `Tool` | shell, file_read, file_write, memory_store, memory_recall, memory_forget | Any capability |
| **Observability** | `Observer` | Noop, Log, Multi | Prometheus, OTel |
| **Runtime** | `RuntimeAdapter` | Native (Mac/Linux/Pi) | Docker, WASM |
| **Security** | `SecurityPolicy` | Sandbox + allowlists + rate limits | — |
| **Heartbeat** | Engine | HEARTBEAT.md periodic tasks | — |

### Memory System

ZeroClaw has a built-in brain. The agent automatically:

1. **Recalls** relevant memories before each prompt (context injection)
2. **Saves** conversation turns to memory (auto-save)
3. **Manages** its own memory via tools (store/recall/forget)

Two backends — **SQLite** (default; searchable, with upsert and delete) and **Markdown** (human-readable, append-only, git-friendly). Switch between them with one config line.

### Security

- **Workspace sandboxing** — can't escape the workspace directory
- **Command allowlisting** — only approved shell commands run
- **Path traversal blocking** — `..` and absolute paths are blocked
- **Rate limiting** — max actions/hour, max cost/day
- **Autonomy levels** — ReadOnly, Supervised, Full

## Configuration

Config lives at `~/.zeroclaw/config.toml` (created by `onboard`).

## Documentation Index

Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt

Use this file to discover all available pages before exploring further.

## Token Use & Costs

ZeroClaw tracks **tokens**, not characters. Tokens are model-specific, but most OpenAI-style models average ~4 characters per token for English text.

### How the system prompt is built

ZeroClaw assembles its own system prompt on every run. It includes:

* Tool list + short descriptions
* Skills list (metadata only; instructions are loaded on demand with `read`)
* Self-update instructions
* Workspace + bootstrap files (`AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md` when new, plus `MEMORY.md` and/or `memory.md` when present). Large files are truncated at `agents.defaults.bootstrapMaxChars` (default: 20000). `memory/*.md` files are loaded on demand via memory tools and are not auto-injected.
* Time (UTC + user timezone)
* Reply tags + heartbeat behavior
* Runtime metadata (host/OS/model/thinking)

### What counts in the context window

Everything the model receives counts toward the context limit:

* System prompt (all sections listed above)
* Conversation history (user + assistant messages)
* Tool calls and tool results
* Attachments/transcripts (images, audio, files)
* Compaction summaries and pruning artifacts
* Provider wrappers or safety headers (not visible, but still counted)

### How to see current token usage

Use these in chat:

* `/status` — **emoji-rich status card** with the session model, context usage, last response input/output tokens, and **estimated cost** (API key only).
* `/usage off|tokens|full` — appends a **per-response usage footer** to every reply.
  * Persists per session (stored as `responseUsage`).
  * OAuth auth **hides cost** (tokens only).
* `/usage cost` — shows a local cost summary from ZeroClaw session logs.

Other surfaces:

* **TUI/Web TUI:** `/status` + `/usage` are supported.
* **CLI:** `zeroclaw status --usage` and `zeroclaw channels list` show provider quota windows (not per-response costs).

### Cost estimation (when shown)

Costs are estimated from your model pricing config:

```
models.providers.
```
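The config excerpt above is truncated, so as a rough illustration only, here is a hypothetical sketch of what a per-model pricing entry under `models.providers` in `~/.zeroclaw/config.toml` could look like. The key names (`input_cost_per_mtok`, `output_cost_per_mtok`) and the model id are assumptions, not the actual schema; check your generated config for the real field names.

```toml
# Hypothetical sketch only — field names are assumed, not ZeroClaw's actual schema.
# Prices are commonly quoted in USD per million tokens.
[models.providers.openrouter.models."example/model-id"]
input_cost_per_mtok = 3.00    # cost per 1M input tokens
output_cost_per_mtok = 15.00  # cost per 1M output tokens
```

With numbers like these, a response that consumed 2,000 input tokens and 500 output tokens would be estimated at (2000 / 1,000,000) × $3.00 + (500 / 1,000,000) × $15.00 = $0.0135.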