fix(whatsapp): complete wa-rs channel behavior and storage correctness

This commit is contained in:
Chummy 2026-02-19 17:25:08 +08:00
parent c2a1eb1088
commit b1ebd4b579
3 changed files with 884 additions and 271 deletions

CLAUDE.md
@@ -1,193 +1,474 @@
# CLAUDE.md
# CLAUDE.md — ZeroClaw Agent Engineering Protocol
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This file defines the default working protocol for Claude agents in this repository.
Scope: entire repository.
## Project Overview
## 1) Project Snapshot (Read First)
ZeroClaw is a lightweight autonomous AI assistant infrastructure written entirely in Rust. It is optimized for high performance, efficiency, and security with a trait-based pluggable architecture.
ZeroClaw is a Rust-first autonomous agent runtime optimized for:
Key stats: ~3.4MB binary, <5MB RAM, <10ms startup, 1,017 tests.
- high performance
- high efficiency
- high stability
- high extensibility
- high sustainability
- high security
## Common Commands
Core architecture is trait-driven and modular. Most extension work should be done by implementing traits and registering in factory modules.
### Build
```bash
cargo build --release # Optimized release build (~3.4MB)
CARGO_BUILD_JOBS=1 cargo build --release # Low-memory fallback (Raspberry Pi 3, 1GB RAM)
cargo build --release --locked # Build with locked dependencies (fixes OpenSSL errors)
```
Key extension points:
### Test
```bash
cargo test # Run all 1,017 tests
cargo test telegram --lib # Test specific module
cargo test security # Test security module
```
- `src/providers/traits.rs` (`Provider`)
- `src/channels/traits.rs` (`Channel`)
- `src/tools/traits.rs` (`Tool`)
- `src/memory/traits.rs` (`Memory`)
- `src/observability/traits.rs` (`Observer`)
- `src/runtime/traits.rs` (`RuntimeAdapter`)
- `src/peripherals/traits.rs` (`Peripheral`) — hardware boards (STM32, RPi GPIO)
### Format & Lint
```bash
cargo fmt --all -- --check # Check formatting
cargo fmt # Apply formatting
cargo clippy --all-targets -- -D clippy::correctness # Baseline (required for CI)
cargo clippy --all-targets -- -D warnings # Strict (optional)
```
## 2) Deep Architecture Observations (Why This Protocol Exists)
### CI / Pre-push
```bash
./dev/ci.sh all # Full CI in Docker
git config core.hooksPath .githooks # Enable pre-push hook
git push --no-verify # Skip hook if needed
```
These codebase realities should drive every design decision:
## Architecture
1. **Trait + factory architecture is the stability backbone**
- Extension points are intentionally explicit and swappable.
- Most features should be added via trait implementation + factory registration, not cross-cutting rewrites.
2. **Security-critical surfaces are first-class and internet-adjacent**
- `src/gateway/`, `src/security/`, `src/tools/`, `src/runtime/` carry high blast radius.
- Defaults already lean secure-by-default (pairing, bind safety, limits, secret handling); keep it that way.
3. **Performance and binary size are product goals, not nice-to-have**
- `Cargo.toml` release profile and dependency choices optimize for size and determinism.
- Convenience dependencies and broad abstractions can silently regress these goals.
4. **Config and runtime contracts are user-facing API**
- `src/config/schema.rs` and CLI commands are effectively public interfaces.
- Backward compatibility and explicit migration matter.
5. **The project now runs in high-concurrency collaboration mode**
- CI + docs governance + label routing are part of the product delivery system.
- PR throughput is a design constraint, not just a maintainer inconvenience.
ZeroClaw uses a **trait-based pluggable architecture** where every subsystem is swappable via traits and factory functions.
## 3) Engineering Principles (Normative)
### Core Extension Points
These principles are mandatory by default. They are not slogans; they are implementation constraints.
| Trait | Purpose | Location |
|-------|---------|----------|
| `Provider` | LLM backends (22+ providers) | `src/providers/traits.rs` |
| `Channel` | Messaging platforms | `src/channels/traits.rs` |
| `Tool` | Agent capabilities | `src/tools/traits.rs` |
| `Memory` | Persistence/backends | `src/memory/traits.rs` |
| `Observer` | Metrics/logging | `src/observability/traits.rs` |
| `RuntimeAdapter` | Platform abstraction | `src/runtime/traits.rs` |
| `Peripheral` | Hardware boards | `src/peripherals/traits.rs` |
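The trait + factory pattern behind these extension points can be sketched as follows. This is an illustrative, simplified sketch only: the real trait signatures live in `src/providers/traits.rs` and the factory wiring in `src/providers/mod.rs`; the names `EchoProvider` and `create_provider` are hypothetical.

```rust
// Simplified sketch of the trait + factory pattern; not the real ZeroClaw API.
trait Provider {
    fn name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct EchoProvider;

impl Provider for EchoProvider {
    fn name(&self) -> &'static str {
        "echo"
    }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

// Factory keys are lowercase and stable, per the repo's naming contract.
// Unknown keys fail fast with an explicit error rather than a silent fallback.
fn create_provider(key: &str) -> Result<Box<dyn Provider>, String> {
    match key {
        "echo" => Ok(Box::new(EchoProvider)),
        other => Err(format!("unknown provider: {other}")),
    }
}

fn main() {
    let p = create_provider("echo").expect("factory should resolve a known key");
    assert_eq!(p.name(), "echo");
    assert_eq!(p.complete("hi").unwrap(), "echo: hi");
    assert!(create_provider("missing").is_err());
}
```

Adding a new provider means adding one `impl Provider` and one `match` arm; orchestration code never names concrete types.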
### 3.1 KISS (Keep It Simple, Stupid)
### Key Directory Structure
**Why here:** Runtime + security behavior must stay auditable under pressure.
```
src/
├── main.rs # CLI entrypoint, command routing
├── lib.rs # Module exports, command enums
├── agent/ # Orchestration loop
├── channels/ # Telegram, Discord, Slack, WhatsApp, etc.
├── providers/ # OpenRouter, Anthropic, OpenAI, Ollama, etc.
├── tools/ # shell, file_read, file_write, memory, browser
├── memory/ # SQLite, Markdown, Lucid, None backends
├── gateway/ # Webhook/gateway server (Axum HTTP)
├── security/ # Policy, pairing, secret store
├── runtime/ # Native, Docker runtime adapters
├── peripherals/ # STM32, RPi GPIO hardware support
├── observability/ # Noop, Log, Multi, OTel observers
├── tunnel/ # Cloudflare, Tailscale, ngrok, custom
├── config/ # Schema + config loading/merging
└── identity/ # AIEOS/OpenClaw identity formats
```
Required:
## Memory System
- Prefer straightforward control flow over clever meta-programming.
- Prefer explicit match branches and typed structs over hidden dynamic behavior.
- Keep error paths obvious and localized.
ZeroClaw includes a full-stack search engine with zero external dependencies (no Pinecone, Elasticsearch, or LangChain):
### 3.2 YAGNI (You Aren't Gonna Need It)
- **Vector DB**: Embeddings stored as BLOB in SQLite, cosine similarity search
- **Keyword Search**: FTS5 virtual tables with BM25 scoring
- **Hybrid Merge**: Custom weighted merge function
- **Embeddings**: `EmbeddingProvider` trait — OpenAI, custom URL, or noop
- **Chunking**: Line-based markdown chunker with heading preservation
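The vector-plus-keyword merge can be illustrated with a minimal sketch. The weighting scheme and function names below are assumptions for illustration, not the actual merge function in `src/memory/`.

```rust
// Hedged sketch of hybrid retrieval scoring; weights and names are illustrative.

// Cosine similarity between two embedding vectors, as used for the
// BLOB-in-SQLite vector search described above.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

// Weighted merge of a vector-similarity score and a BM25 keyword score.
fn hybrid_score(vector_score: f32, bm25_score: f32, vector_weight: f32) -> f32 {
    vector_weight * vector_score + (1.0 - vector_weight) * bm25_score
}

fn main() {
    let q = [1.0_f32, 0.0];
    let d = [1.0_f32, 0.0];
    assert!((cosine_similarity(&q, &d) - 1.0).abs() < 1e-6);
    // 0.7 * 0.9 + 0.3 * 0.5 = 0.78
    assert!((hybrid_score(0.9, 0.5, 0.7) - 0.78).abs() < 1e-6);
}
```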
**Why here:** Premature features increase attack surface and maintenance burden.
## Security Principles
Required:
ZeroClaw enforces security at every layer. Key patterns:
- Do not add new config keys, trait methods, feature flags, or workflow branches without a concrete accepted use case.
- Do not introduce speculative “future-proof” abstractions without at least one current caller.
- Keep unsupported paths explicit (error out) rather than adding partial fake support.
- **Gateway pairing**: 6-digit one-time code required for webhook access
- **Workspace-only execution**: Default sandbox scopes file operations
- **Path traversal blocking**: 14 system dirs + 4 sensitive dotfiles blocked
- **Command allowlisting**: No blocklists — only explicit allowlists
- **Secret encryption**: ChaCha20-Poly1305 AEAD for encrypted secrets
- **No logging of secrets**: Never log tokens, keys, or sensitive payloads
### 3.3 DRY + Rule of Three
Critical security paths: `src/security/`, `src/runtime/`, `src/gateway/`, `src/tools/`, `.github/workflows/`
**Why here:** Naive DRY can create brittle shared abstractions across providers/channels/tools.
## Code Naming Conventions
Required:
- **Rust standard**: modules/files `snake_case`, types/traits `PascalCase`, functions/variables `snake_case`, constants `SCREAMING_SNAKE_CASE`
- **Domain-first naming**: `DiscordChannel`, `SecurityPolicy`, `SqliteMemory`
- **Trait implementers**: `*Provider`, `*Channel`, `*Tool`, `*Memory`, `*Observer`, `*RuntimeAdapter`
- **Factory keys**: lowercase, stable (`"openai"`, `"discord"`, `"shell"`)
- **Tests**: behavior-oriented (`allowlist_denies_unknown_user`)
- **Identity-like labels**: Use ZeroClaw-native only (`ZeroClawAgent`, `zeroclaw_user`) — never real names/personal data
- Duplicate small, local logic when it preserves clarity.
- Extract shared utilities only after repeated, stable patterns (rule-of-three).
- When extracting, preserve module boundaries and avoid hidden coupling.
## Architecture Boundary Rules
### 3.4 SRP + ISP (Single Responsibility + Interface Segregation)
- Extend via trait implementations + factory registration first
- Keep dependency direction inward: concrete implementations depend on traits/config/util
- Avoid cross-subsystem coupling (e.g., provider importing channel internals)
- Keep modules single-purpose
- Treat config keys as public contract — document migrations
**Why here:** Trait-driven architecture already encodes subsystem boundaries.
## Engineering Principles
Required:
From `AGENTS.md` — these are mandatory implementation constraints:
- Keep each module focused on one concern.
- Extend behavior by implementing existing narrow traits whenever possible.
- Avoid fat interfaces and “god modules” that mix policy + transport + storage.
1. **KISS** — Prefer straightforward control flow over clever meta-programming
2. **YAGNI** — Don't add features/config/flags without a concrete use case
3. **DRY + Rule of Three** — Extract shared utilities only after repeated stable patterns
4. **SRP + ISP** — Keep modules focused; extend via narrow traits
5. **Fail Fast + Explicit Errors** — Prefer explicit `bail!`/errors; never silently broaden permissions
6. **Secure by Default + Least Privilege** — Deny-by-default for access boundaries
7. **Determinism + Reproducibility** — Prefer reproducible commands and locked dependencies
8. **Reversibility + Rollback-First** — Keep changes easy to revert
### 3.5 Fail Fast + Explicit Errors
## Adding New Components
**Why here:** Silent fallback in agent runtimes can create unsafe or costly behavior.
### New Provider
1. Create `src/providers/your_provider.rs`
2. Implement `Provider` trait
3. Register factory in `src/providers/mod.rs`
Required:
### New Channel
1. Create `src/channels/your_channel.rs`
2. Implement `Channel` trait
3. Register in `src/channels/mod.rs`
- Prefer explicit `bail!`/errors for unsupported or unsafe states.
- Never silently broaden permissions/capabilities.
- Document fallback behavior when fallback is intentional and safe.
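A std-only sketch of the fail-fast rule (the real code would use `bail!`; the `select_runtime` name and variants here are hypothetical):

```rust
// Unsupported states error out explicitly; there is no silent fallback
// to a broader capability.
#[derive(Debug, PartialEq)]
enum RuntimeKind {
    Native,
    Docker,
}

fn select_runtime(name: &str) -> Result<RuntimeKind, String> {
    match name {
        "native" => Ok(RuntimeKind::Native),
        "docker" => Ok(RuntimeKind::Docker),
        // In the real codebase this branch would be an explicit `bail!`.
        other => Err(format!("unsupported runtime: {other}")),
    }
}

fn main() {
    assert_eq!(select_runtime("native"), Ok(RuntimeKind::Native));
    assert!(select_runtime("vm").is_err());
}
```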
### New Tool
1. Create `src/tools/your_tool.rs`
2. Implement `Tool` trait with strict parameter schema
3. Register in `src/tools/mod.rs`
### 3.6 Secure by Default + Least Privilege
### New Peripheral
1. Create in `src/peripherals/`
2. Implement `Peripheral` trait (exposes `tools()` method)
3. See `docs/hardware-peripherals-design.md` for protocol
**Why here:** Gateway/tools/runtime can execute actions with real-world side effects.
## Validation Matrix
Required:
- Deny-by-default for access and exposure boundaries.
- Never log secrets, raw tokens, or sensitive payloads.
- Keep network/filesystem/shell scope as narrow as possible unless explicitly justified.
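The deny-by-default rule is easiest to see as an allowlist check with no blocklist branch. The struct and method names below are illustrative, not the real `SecurityPolicy` API in `src/security/`.

```rust
// Deny-by-default sketch: access is granted only on an explicit allowlist hit.
struct SecurityPolicy {
    allowed_commands: Vec<String>,
}

impl SecurityPolicy {
    fn is_allowed(&self, command: &str) -> bool {
        // No blocklist exists: anything not explicitly listed is denied.
        self.allowed_commands.iter().any(|c| c == command)
    }
}

fn main() {
    let policy = SecurityPolicy {
        allowed_commands: vec!["ls".into(), "cat".into()],
    };
    assert!(policy.is_allowed("ls"));
    assert!(!policy.is_allowed("rm"));
}
```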
### 3.7 Determinism + Reproducibility
**Why here:** Reliable CI and low-latency triage depend on deterministic behavior.
Required:
- Prefer reproducible commands and locked dependency behavior in CI-sensitive paths.
- Keep tests deterministic (no flaky timing/network dependence without guardrails).
- Ensure local validation commands map to CI expectations.
### 3.8 Reversibility + Rollback-First Thinking
**Why here:** Fast recovery is mandatory under high PR volume.
Required:
- Keep changes easy to revert (small scope, clear blast radius).
- For risky changes, define rollback path before merge.
- Avoid mixed mega-patches that block safe rollback.
## 4) Repository Map (High-Level)
- `src/main.rs` — CLI entrypoint and command routing
- `src/lib.rs` — module exports and shared command enums
- `src/config/` — schema + config loading/merging
- `src/agent/` — orchestration loop
- `src/gateway/` — webhook/gateway server
- `src/security/` — policy, pairing, secret store
- `src/memory/` — markdown/sqlite memory backends + embeddings/vector merge
- `src/providers/` — model providers and resilient wrapper
- `src/channels/` — Telegram/Discord/Slack/etc channels
- `src/tools/` — tool execution surface (shell, file, memory, browser)
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO); see `docs/hardware-peripherals-design.md`
- `src/runtime/` — runtime adapters (currently native)
- `docs/` — task-oriented documentation system (hubs, unified TOC, references, operations, security proposals, multilingual guides)
- `.github/` — CI, templates, automation workflows
## 4.1 Documentation System Contract (Required)
Treat documentation as a first-class product surface, not a post-merge artifact.
Canonical entry points:
- root READMEs: `README.md`, `README.zh-CN.md`, `README.ja.md`, `README.ru.md`
- docs hubs: `docs/README.md`, `docs/README.zh-CN.md`, `docs/README.ja.md`, `docs/README.ru.md`
- unified TOC: `docs/SUMMARY.md`
Collection indexes (category navigation):
- `docs/getting-started/README.md`
- `docs/reference/README.md`
- `docs/operations/README.md`
- `docs/security/README.md`
- `docs/hardware/README.md`
- `docs/contributing/README.md`
- `docs/project/README.md`
Runtime-contract references (must track behavior changes):
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`
Required docs governance rules:
- Keep README/hub top navigation and quick routes intuitive and non-duplicative.
- Keep EN/ZH/JA/RU entry-point parity when changing navigation architecture.
- Keep proposal/roadmap docs explicitly labeled; avoid mixing proposal text into runtime-contract docs.
- Keep project snapshots date-stamped and immutable once superseded by a newer date.
## 5) Risk Tiers by Path (Review Depth Contract)
Use these tiers when deciding validation depth and review rigor.
- **Low risk**: docs/chore/tests-only changes
- **Medium risk**: most `src/**` behavior changes without boundary/security impact
- **High risk**: `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**`, access-control boundaries
When uncertain, classify as higher risk.
## 6) Agent Workflow (Required)
1. **Read before write**
- Inspect existing module, factory wiring, and adjacent tests before editing.
2. **Define scope boundary**
- One concern per PR; avoid mixed feature+refactor+infra patches.
3. **Implement minimal patch**
- Apply KISS/YAGNI/DRY rule-of-three explicitly.
4. **Validate by risk tier**
- Docs-only: lightweight checks.
- Code/risky changes: full relevant checks and focused scenarios.
5. **Document impact**
- Update docs/PR notes for behavior, risk, side effects, and rollback.
- If CLI/config/provider/channel behavior changed, update corresponding runtime-contract references.
- If docs entry points changed, keep EN/ZH/JA/RU README + docs-hub navigation aligned.
6. **Respect queue hygiene**
- If stacked PR: declare `Depends on #...`.
- If replacing old PR: declare `Supersedes #...`.
### 6.1 Branch / Commit / PR Flow (Required)
All contributors (human or agent) must follow the same collaboration flow:
- Create and work from a non-`main` branch.
- Commit changes to that branch with clear, scoped commit messages.
- Open a PR to `main`; do not push directly to `main`.
- Wait for required checks and review outcomes before merging.
- Merge via PR controls (squash/rebase/merge as repository policy allows).
- Branch deletion after merge is optional; long-lived branches are allowed when intentionally maintained.
### 6.2 Worktree Workflow (Required for Multi-Track Agent Work)
Use Git worktrees to isolate concurrent agent/human tracks safely and predictably:
- Use one worktree per active branch/PR stream to avoid cross-task contamination.
- Keep each worktree on a single branch; do not mix unrelated edits in one worktree.
- Run validation commands inside the corresponding worktree before commit/PR.
- Name worktrees clearly by scope (for example: `wt/ci-hardening`, `wt/provider-fix`) and remove stale worktrees when no longer needed.
- PR checkpoint rules from section 6.1 still apply to worktree-based development.
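The worktree flow above maps to a few git commands. Paths and branch names here are examples only:

```bash
# One worktree per active PR stream, named by scope.
git worktree add ../wt/provider-fix -b feat/provider-fix
cd ../wt/provider-fix

# Run validation inside the worktree before commit/PR.
cargo fmt --all -- --check && cargo test

# Audit active worktrees; remove stale ones after merge.
git worktree list
git worktree remove ../wt/provider-fix
```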
### 6.3 Code Naming Contract (Required)
Apply these naming rules for all code changes unless a subsystem has a stronger existing pattern.
- Use Rust standard casing consistently: modules/files `snake_case`, types/traits/enums `PascalCase`, functions/variables `snake_case`, constants/statics `SCREAMING_SNAKE_CASE`.
- Name types and modules by domain role, not implementation detail (for example `DiscordChannel`, `SecurityPolicy`, `MemoryStore` over vague names like `Manager`/`Helper`).
- Keep trait implementer naming explicit and predictable: `<ProviderName>Provider`, `<ChannelName>Channel`, `<ToolName>Tool`, `<BackendName>Memory`.
- Keep factory registration keys stable, lowercase, and user-facing (for example `"openai"`, `"discord"`, `"shell"`), and avoid alias sprawl without migration need.
- Name tests by behavior/outcome (`<subject>_<expected_behavior>`) and keep fixture identifiers neutral/project-scoped.
- If identity-like naming is required in tests/examples, use ZeroClaw-native labels only (`ZeroClawAgent`, `zeroclaw_user`, `zeroclaw_node`).
### 6.4 Architecture Boundary Contract (Required)
Use these rules to keep the trait/factory architecture stable under growth.
- Extend capabilities by adding trait implementations + factory wiring first; avoid cross-module rewrites for isolated features.
- Keep dependency direction inward to contracts: concrete integrations depend on trait/config/util layers, not on other concrete integrations.
- Avoid creating cross-subsystem coupling (for example provider code importing channel internals, tool code mutating gateway policy directly).
- Keep module responsibilities single-purpose: orchestration in `agent/`, transport in `channels/`, model I/O in `providers/`, policy in `security/`, execution in `tools/`.
- Introduce new shared abstractions only after repeated use (rule-of-three), with at least one real caller in current scope.
- For config/schema changes, treat keys as public contract: document defaults, compatibility impact, and migration/rollback path.
## 7) Change Playbooks
### 7.1 Adding a Provider
- Implement `Provider` in `src/providers/`.
- Register in `src/providers/mod.rs` factory.
- Add focused tests for factory wiring and error paths.
- Avoid provider-specific behavior leaks into shared orchestration code.
### 7.2 Adding a Channel
- Implement `Channel` in `src/channels/`.
- Keep `send`, `listen`, `health_check`, typing semantics consistent.
- Cover auth/allowlist/health behavior with tests.
### 7.3 Adding a Tool
- Implement `Tool` in `src/tools/` with strict parameter schema.
- Validate and sanitize all inputs.
- Return structured `ToolResult`; avoid panics in runtime path.
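A minimal sketch of the tool shape under these rules. The real `Tool` trait and `ToolResult` live in `src/tools/traits.rs`; the simplified types and the `FileReadTool` here are hypothetical, shown only to illustrate input validation without panics.

```rust
// Illustrative only: strict validation up front, structured result out,
// no panics on the runtime path.
struct ToolResult {
    success: bool,
    output: String,
}

trait Tool {
    fn name(&self) -> &'static str;
    fn execute(&self, input: &str) -> ToolResult;
}

struct FileReadTool {
    workspace: String,
}

impl Tool for FileReadTool {
    fn name(&self) -> &'static str {
        "file_read"
    }
    fn execute(&self, input: &str) -> ToolResult {
        // Reject path traversal and absolute paths instead of panicking.
        if input.contains("..") || input.starts_with('/') {
            return ToolResult {
                success: false,
                output: "path outside workspace".into(),
            };
        }
        ToolResult {
            success: true,
            output: format!("{}/{}", self.workspace, input),
        }
    }
}

fn main() {
    let t = FileReadTool { workspace: "ws".into() };
    assert_eq!(t.name(), "file_read");
    assert!(!t.execute("../etc/passwd").success);
    assert!(t.execute("notes.md").success);
}
```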
### 7.4 Adding a Peripheral
- Implement `Peripheral` in `src/peripherals/`.
- Peripherals expose `tools()` — each tool delegates to the hardware (GPIO, sensors, etc.).
- Register board type in config schema if needed.
- See `docs/hardware-peripherals-design.md` for protocol and firmware notes.
### 7.5 Security / Runtime / Gateway Changes
- Include threat/risk notes and rollback strategy.
- Add/update tests or validation evidence for failure modes and boundaries.
- Keep observability useful but non-sensitive.
- For `.github/workflows/**` changes, include Actions allowlist impact in PR notes and update `docs/actions-source-policy.md` when sources change.
### 7.6 Docs System / README / IA Changes
- Treat docs navigation as product UX: preserve clear pathing from README -> docs hub -> SUMMARY -> category index.
- Keep top-level nav concise; avoid duplicative links across adjacent nav blocks.
- When runtime surfaces change, update related references (`commands/providers/channels/config/runbook/troubleshooting`).
- Keep multilingual entry-point parity for EN/ZH/JA/RU when nav or key wording changes.
- For docs snapshots, add new date-stamped files for new sprints rather than rewriting historical context.
## 8) Validation Matrix
Default local checks for code changes:
```bash
cargo fmt --all -- --check
cargo clippy --all-targets -- -D clippy::correctness
cargo clippy --all-targets -- -D warnings
cargo test
```
For Docker CI parity (recommended when available):
Preferred local pre-PR validation path (recommended, not required):
```bash
./dev/ci.sh all
```
## Risk Tiers by Path
Notes:
- **Low risk**: docs/chore/tests-only
- **Medium risk**: most `src/**` behavior changes without boundary/security impact
- **High risk**: `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**`, access-control boundaries
- Local Docker-based CI is strongly recommended when Docker is available.
- Contributors are not blocked from opening a PR if local Docker CI is unavailable; in that case run the most relevant native checks and document what was run.
## Important Documentation
Additional expectations by change type:
- `AGENTS.md` — Agent engineering protocol (primary guide for AI contributors)
- `CONTRIBUTING.md` — Contribution guide and architecture rules
- `docs/pr-workflow.md` — PR workflow and governance
- `docs/reviewer-playbook.md` — Reviewer operating checklist
- `docs/ci-map.md` — CI ownership and triage
- `docs/hardware-peripherals-design.md` — Hardware peripherals protocol
- **Docs/template-only**:
- run markdown lint and link-integrity checks
- if touching README/docs-hub/SUMMARY/collection indexes, verify EN/ZH/JA/RU navigation parity
- if touching bootstrap docs/scripts, run `bash -n bootstrap.sh scripts/bootstrap.sh scripts/install.sh`
- **Workflow changes**: validate YAML syntax; run workflow lint/sanity checks when available.
- **Security/runtime/gateway/tools**: include at least one boundary/failure-mode validation.
## Pre-push Hook
If full checks are impractical, run the most relevant subset and document what was skipped and why.
The repo includes a pre-push hook that runs `fmt`, `clippy`, and `tests` before every push. Enable once with:
## 9) Collaboration and PR Discipline
```bash
git config core.hooksPath .githooks
```
- Follow `.github/pull_request_template.md` fully (including side effects / blast radius).
- Keep PR descriptions concrete: problem, change, non-goals, risk, rollback.
- Use conventional commit titles.
- Prefer small PRs (`size: XS/S/M`) when possible.
- Agent-assisted PRs are welcome, **but contributors remain accountable for understanding what their code will do**.
### 9.1 Privacy/Sensitive Data and Neutral Wording (Required)
Treat privacy and neutrality as merge gates, not best-effort guidelines.
- Never commit personal or sensitive data in code, docs, tests, fixtures, snapshots, logs, examples, or commit messages.
- Prohibited data includes (non-exhaustive): real names, personal emails, phone numbers, addresses, access tokens, API keys, credentials, IDs, and private URLs.
- Use neutral project-scoped placeholders (for example: `user_a`, `test_user`, `project_bot`, `example.com`) instead of real identity data.
- Test names/messages/fixtures must be impersonal and system-focused; avoid first-person or identity-specific language.
- If identity-like context is unavoidable, use ZeroClaw-scoped roles/labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`) and avoid real-world personas.
- Recommended identity-safe naming palette (use when identity-like context is required):
- actor labels: `ZeroClawAgent`, `ZeroClawOperator`, `ZeroClawMaintainer`, `zeroclaw_user`
- service/runtime labels: `zeroclaw_bot`, `zeroclaw_service`, `zeroclaw_runtime`, `zeroclaw_node`
- environment labels: `zeroclaw_project`, `zeroclaw_workspace`, `zeroclaw_channel`
- If reproducing external incidents, redact and anonymize all payloads before committing.
- Before push, review `git diff --cached` specifically for accidental sensitive strings and identity leakage.
### 9.2 Superseded-PR Attribution (Required)
When a PR supersedes another contributor's PR and carries forward substantive code or design decisions, preserve authorship explicitly.
- In the integrating commit message, add one `Co-authored-by: Name <email>` trailer per superseded contributor whose work is materially incorporated.
- Use a GitHub-recognized email (`<login@users.noreply.github.com>` or the contributor's verified commit email) so attribution is rendered correctly.
- Keep trailers on their own lines after a blank line at commit-message end; never encode them as escaped `\n` text.
- In the PR body, list superseded PR links and briefly state what was incorporated from each.
- If no actual code/design was incorporated (only inspiration), do not use `Co-authored-by`; give credit in PR notes instead.
### 9.3 Superseded-PR PR Template (Recommended)
When superseding multiple PRs, use a consistent title/body structure to reduce reviewer ambiguity.
- Recommended title format: `feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]`
- If this is docs/chore/meta only, keep the same supersede suffix and use the appropriate conventional-commit type.
- In the PR body, include the following template (fill placeholders, remove non-applicable lines):
```md
## Supersedes
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>
## Integrated Scope
- From #<pr_a>: <what was materially incorporated>
- From #<pr_b>: <what was materially incorporated>
- From #<pr_n>: <what was materially incorporated>
## Attribution
- Co-authored-by trailers added for materially incorporated contributors: Yes/No
- If No, explain why (for example: no direct code/design carry-over)
## Non-goals
- <explicitly list what was not carried over>
## Risk and Rollback
- Risk: <summary>
- Rollback: <revert commit/PR strategy>
```
Skip with `git push --no-verify` during rapid iteration (CI will catch issues).
### 9.4 Superseded-PR Commit Template (Recommended)
When a commit unifies or supersedes prior PR work, use a deterministic commit message layout so attribution is machine-parsed and reviewer-friendly.
- Keep one blank line between message sections, and exactly one blank line before trailer lines.
- Keep each trailer on its own line; do not wrap, indent, or encode as escaped `\n` text.
- Add one `Co-authored-by` trailer per materially incorporated contributor, using GitHub-recognized email.
- If no direct code/design is carried over, omit `Co-authored-by` and explain attribution in the PR body instead.
```text
feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]
<one-paragraph summary of integrated outcome>
Supersedes:
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>
Integrated scope:
- <subsystem_or_feature_a>: from #<pr_x>
- <subsystem_or_feature_b>: from #<pr_y>
Co-authored-by: <Name A> <login_a@users.noreply.github.com>
Co-authored-by: <Name B> <login_b@users.noreply.github.com>
```
Reference docs:
- `CONTRIBUTING.md`
- `docs/README.md`
- `docs/SUMMARY.md`
- `docs/docs-inventory.md`
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`
- `docs/pr-workflow.md`
- `docs/reviewer-playbook.md`
- `docs/ci-map.md`
- `docs/actions-source-policy.md`
## 10) Anti-Patterns (Do Not)
- Do not add heavy dependencies for minor convenience.
- Do not silently weaken security policy or access constraints.
- Do not add speculative config/feature flags “just in case”.
- Do not mix massive formatting-only changes with functional changes.
- Do not modify unrelated modules “while here”.
- Do not bypass failing checks without explicit explanation.
- Do not hide behavior-changing side effects in refactor commits.
- Do not include personal identity or sensitive information in test data, examples, docs, or commits.
## 11) Handoff Template (Agent -> Agent / Maintainer)
When handing off work, include:
1. What changed
2. What did not change
3. Validation run and results
4. Remaining risks / unknowns
5. Next recommended action
## 12) Vibe Coding Guardrails
When working in fast iterative mode:
- Keep each iteration reversible (small commits, clear rollback).
- Validate assumptions with code search before implementing.
- Prefer deterministic behavior over clever shortcuts.
- Do not “ship and hope” on security-sensitive paths.
- If uncertain, leave a concrete TODO with verification context, not a hidden guess.


@@ -21,22 +21,22 @@ use std::path::Path;
#[cfg(feature = "whatsapp-web")]
use std::sync::Arc;
#[cfg(feature = "whatsapp-web")]
use prost::Message;
#[cfg(feature = "whatsapp-web")]
use wa_rs_binary::jid::Jid;
#[cfg(feature = "whatsapp-web")]
use wa_rs_core::appstate::hash::HashState;
#[cfg(feature = "whatsapp-web")]
use wa_rs_core::appstate::processor::AppStateMutationMAC;
#[cfg(feature = "whatsapp-web")]
use wa_rs_core::store::Device as CoreDevice;
#[cfg(feature = "whatsapp-web")]
use wa_rs_core::store::traits::*;
use wa_rs_core::store::traits::DeviceInfo;
#[cfg(feature = "whatsapp-web")]
use wa_rs_core::store::traits::DeviceStore as DeviceStoreTrait;
#[cfg(feature = "whatsapp-web")]
use wa_rs_core::store::traits::DeviceInfo;
use wa_rs_core::store::traits::*;
#[cfg(feature = "whatsapp-web")]
use wa_rs_binary::jid::Jid;
#[cfg(feature = "whatsapp-web")]
use prost::Message;
use wa_rs_core::store::Device as CoreDevice;
/// Custom wa-rs storage backend using rusqlite
///
@@ -59,15 +59,13 @@ pub struct RusqliteStore {
macro_rules! to_store_err {
// For expressions returning Result<usize, E>
(execute: $expr:expr) => {
$expr.map(|_| ()).map_err(|e| {
wa_rs_core::store::error::StoreError::Database(e.to_string())
})
$expr
.map(|_| ())
.map_err(|e| wa_rs_core::store::error::StoreError::Database(e.to_string()))
};
// For other expressions
($expr:expr) => {
$expr.map_err(|e| {
wa_rs_core::store::error::StoreError::Database(e.to_string())
})
$expr.map_err(|e| wa_rs_core::store::error::StoreError::Database(e.to_string()))
};
}
@@ -268,7 +266,11 @@ impl RusqliteStore {
impl SignalStore for RusqliteStore {
// --- Identity Operations ---
async fn put_identity(&self, address: &str, key: [u8; 32]) -> wa_rs_core::store::error::Result<()> {
async fn put_identity(
&self,
address: &str,
key: [u8; 32],
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
to_store_err!(execute: conn.execute(
"INSERT OR REPLACE INTO identities (address, key, device_id)
@@ -277,7 +279,10 @@ impl SignalStore for RusqliteStore {
))
}
async fn load_identity(&self, address: &str) -> wa_rs_core::store::error::Result<Option<Vec<u8>>> {
async fn load_identity(
&self,
address: &str,
) -> wa_rs_core::store::error::Result<Option<Vec<u8>>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT key FROM identities WHERE address = ?1 AND device_id = ?2",
@@ -288,7 +293,9 @@ impl SignalStore for RusqliteStore {
match result {
Ok(key) => Ok(Some(key)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(e.to_string())),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
@@ -302,7 +309,10 @@ impl SignalStore for RusqliteStore {
// --- Session Operations ---
async fn get_session(&self, address: &str) -> wa_rs_core::store::error::Result<Option<Vec<u8>>> {
async fn get_session(
&self,
address: &str,
) -> wa_rs_core::store::error::Result<Option<Vec<u8>>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT record FROM sessions WHERE address = ?1 AND device_id = ?2",
@@ -313,11 +323,17 @@ impl SignalStore for RusqliteStore {
match result {
Ok(record) => Ok(Some(record)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(e.to_string())),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
async fn put_session(&self, address: &str, session: &[u8]) -> wa_rs_core::store::error::Result<()> {
async fn put_session(
&self,
address: &str,
session: &[u8],
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
to_store_err!(execute: conn.execute(
"INSERT OR REPLACE INTO sessions (address, record, device_id)
@ -336,7 +352,12 @@ impl SignalStore for RusqliteStore {
// --- PreKey Operations ---
async fn store_prekey(
&self,
id: u32,
record: &[u8],
uploaded: bool,
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
to_store_err!(execute: conn.execute(
"INSERT OR REPLACE INTO prekeys (id, key, uploaded, device_id)
@ -356,7 +377,9 @@ impl SignalStore for RusqliteStore {
match result {
Ok(key) => Ok(Some(key)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
@ -370,7 +393,11 @@ impl SignalStore for RusqliteStore {
// --- Signed PreKey Operations ---
async fn store_signed_prekey(
&self,
id: u32,
record: &[u8],
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
to_store_err!(execute: conn.execute(
"INSERT OR REPLACE INTO signed_prekeys (id, record, device_id)
@ -379,7 +406,10 @@ impl SignalStore for RusqliteStore {
))
}
async fn load_signed_prekey(
&self,
id: u32,
) -> wa_rs_core::store::error::Result<Option<Vec<u8>>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT record FROM signed_prekeys WHERE id = ?1 AND device_id = ?2",
@ -390,15 +420,19 @@ impl SignalStore for RusqliteStore {
match result {
Ok(record) => Ok(Some(record)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
async fn load_all_signed_prekeys(
&self,
) -> wa_rs_core::store::error::Result<Vec<(u32, Vec<u8>)>> {
let conn = self.conn.lock();
let mut stmt = to_store_err!(
conn.prepare("SELECT id, record FROM signed_prekeys WHERE device_id = ?1")
)?;
let rows = to_store_err!(stmt.query_map(params![self.device_id], |row| {
Ok((row.get::<_, u32>(0)?, row.get::<_, Vec<u8>>(1)?))
@ -422,7 +456,11 @@ impl SignalStore for RusqliteStore {
// --- Sender Key Operations ---
async fn put_sender_key(
&self,
address: &str,
record: &[u8],
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
to_store_err!(execute: conn.execute(
"INSERT OR REPLACE INTO sender_keys (address, record, device_id)
@ -431,7 +469,10 @@ impl SignalStore for RusqliteStore {
))
}
async fn get_sender_key(
&self,
address: &str,
) -> wa_rs_core::store::error::Result<Option<Vec<u8>>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT record FROM sender_keys WHERE address = ?1 AND device_id = ?2",
@ -442,7 +483,9 @@ impl SignalStore for RusqliteStore {
match result {
Ok(record) => Ok(Some(record)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
@ -458,7 +501,10 @@ impl SignalStore for RusqliteStore {
#[cfg(feature = "whatsapp-web")]
#[async_trait]
impl AppSyncStore for RusqliteStore {
async fn get_sync_key(
&self,
key_id: &[u8],
) -> wa_rs_core::store::error::Result<Option<AppStateSyncKey>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT key_data FROM app_state_keys WHERE key_id = ?1 AND device_id = ?2",
@ -473,11 +519,17 @@ impl AppSyncStore for RusqliteStore {
match result {
Ok(key) => Ok(Some(key)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
async fn set_sync_key(
&self,
key_id: &[u8],
key: AppStateSyncKey,
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
let key_data = to_store_err!(serde_json::to_vec(&key))?;
@ -499,7 +551,11 @@ impl AppSyncStore for RusqliteStore {
to_store_err!(serde_json::from_slice(&state_data))
}
async fn set_version(
&self,
name: &str,
state: HashState,
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
let state_data = to_store_err!(serde_json::to_vec(&state))?;
@ -533,7 +589,11 @@ impl AppSyncStore for RusqliteStore {
Ok(())
}
async fn get_mutation_mac(
&self,
name: &str,
index_mac: &[u8],
) -> wa_rs_core::store::error::Result<Option<Vec<u8>>> {
let conn = self.conn.lock();
let index_mac_json = to_store_err!(serde_json::to_vec(index_mac))?;
@ -547,11 +607,17 @@ impl AppSyncStore for RusqliteStore {
match result {
Ok(mac) => Ok(Some(mac)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
async fn delete_mutation_macs(
&self,
name: &str,
index_macs: &[Vec<u8>],
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
for index_mac in index_macs {
@ -573,7 +639,10 @@ impl AppSyncStore for RusqliteStore {
impl ProtocolStore for RusqliteStore {
// --- SKDM Tracking ---
async fn get_skdm_recipients(
&self,
group_jid: &str,
) -> wa_rs_core::store::error::Result<Vec<Jid>> {
let conn = self.conn.lock();
let mut stmt = to_store_err!(conn.prepare(
"SELECT device_jid FROM skdm_recipients WHERE group_jid = ?1 AND device_id = ?2"
@ -594,7 +663,11 @@ impl ProtocolStore for RusqliteStore {
Ok(result)
}
async fn add_skdm_recipients(
&self,
group_jid: &str,
device_jids: &[Jid],
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
let now = chrono::Utc::now().timestamp();
@ -619,7 +692,10 @@ impl ProtocolStore for RusqliteStore {
// --- LID-PN Mapping ---
async fn get_lid_mapping(
&self,
lid: &str,
) -> wa_rs_core::store::error::Result<Option<LidPnMappingEntry>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT lid, phone_number, created_at, learning_source, updated_at
@ -630,8 +706,8 @@ impl ProtocolStore for RusqliteStore {
lid: row.get(0)?,
phone_number: row.get(1)?,
created_at: row.get(2)?,
learning_source: row.get(3)?,
updated_at: row.get(4)?,
})
},
);
@ -639,11 +715,16 @@ impl ProtocolStore for RusqliteStore {
match result {
Ok(entry) => Ok(Some(entry)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
async fn get_pn_mapping(
&self,
phone: &str,
) -> wa_rs_core::store::error::Result<Option<LidPnMappingEntry>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT lid, phone_number, created_at, learning_source, updated_at
@ -655,8 +736,8 @@ impl ProtocolStore for RusqliteStore {
lid: row.get(0)?,
phone_number: row.get(1)?,
created_at: row.get(2)?,
learning_source: row.get(3)?,
updated_at: row.get(4)?,
})
},
);
@ -664,11 +745,16 @@ impl ProtocolStore for RusqliteStore {
match result {
Ok(entry) => Ok(Some(entry)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
async fn put_lid_mapping(
&self,
entry: &LidPnMappingEntry,
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
to_store_err!(execute: conn.execute(
"INSERT OR REPLACE INTO lid_pn_mapping
@ -685,7 +771,9 @@ impl ProtocolStore for RusqliteStore {
))
}
async fn get_all_lid_mappings(
&self,
) -> wa_rs_core::store::error::Result<Vec<LidPnMappingEntry>> {
let conn = self.conn.lock();
let mut stmt = to_store_err!(conn.prepare(
"SELECT lid, phone_number, created_at, learning_source, updated_at
@ -697,8 +785,8 @@ impl ProtocolStore for RusqliteStore {
lid: row.get(0)?,
phone_number: row.get(1)?,
created_at: row.get(2)?,
learning_source: row.get(3)?,
updated_at: row.get(4)?,
})
}))?;
@ -712,7 +800,12 @@ impl ProtocolStore for RusqliteStore {
// --- Base Key Collision Detection ---
async fn save_base_key(
&self,
address: &str,
message_id: &str,
base_key: &[u8],
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
let now = chrono::Utc::now().timestamp();
@ -743,11 +836,17 @@ impl ProtocolStore for RusqliteStore {
match result {
Ok(same) => Ok(same),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(false),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
async fn delete_base_key(
&self,
address: &str,
message_id: &str,
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
to_store_err!(execute: conn.execute(
"DELETE FROM base_keys WHERE address = ?1 AND message_id = ?2 AND device_id = ?3",
@ -757,7 +856,10 @@ impl ProtocolStore for RusqliteStore {
// --- Device Registry ---
async fn update_device_list(
&self,
record: DeviceListRecord,
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
let devices_json = to_store_err!(serde_json::to_string(&record.devices))?;
let now = chrono::Utc::now().timestamp();
@ -777,7 +879,10 @@ impl ProtocolStore for RusqliteStore {
))
}
async fn get_devices(
&self,
user: &str,
) -> wa_rs_core::store::error::Result<Option<DeviceListRecord>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT user_id, devices_json, timestamp, phash
@ -785,13 +890,15 @@ impl ProtocolStore for RusqliteStore {
params![user, self.device_id],
|row| {
// Helper to convert errors to rusqlite::Error
fn to_rusqlite_err<E: std::error::Error + Send + Sync + 'static>(
e: E,
) -> rusqlite::Error {
rusqlite::Error::ToSqlConversionFailure(Box::new(e))
}
let devices_json: String = row.get(1)?;
let devices: Vec<DeviceInfo> =
serde_json::from_str(&devices_json).map_err(to_rusqlite_err)?;
Ok(DeviceListRecord {
user: row.get(0)?,
devices,
@ -804,13 +911,19 @@ impl ProtocolStore for RusqliteStore {
match result {
Ok(record) => Ok(Some(record)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
// --- Sender Key Status (Lazy Deletion) ---
async fn mark_forget_sender_key(
&self,
group_jid: &str,
participant: &str,
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
let now = chrono::Utc::now().timestamp();
@ -821,7 +934,10 @@ impl ProtocolStore for RusqliteStore {
))
}
async fn consume_forget_marks(
&self,
group_jid: &str,
) -> wa_rs_core::store::error::Result<Vec<String>> {
let conn = self.conn.lock();
let mut stmt = to_store_err!(conn.prepare(
"SELECT participant FROM sender_key_status
@ -848,7 +964,10 @@ impl ProtocolStore for RusqliteStore {
// --- TcToken Storage ---
async fn get_tc_token(
&self,
jid: &str,
) -> wa_rs_core::store::error::Result<Option<TcTokenEntry>> {
let conn = self.conn.lock();
let result = conn.query_row(
"SELECT token, token_timestamp, sender_timestamp FROM tc_tokens
@ -866,11 +985,17 @@ impl ProtocolStore for RusqliteStore {
match result {
Ok(entry) => Ok(Some(entry)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
async fn put_tc_token(
&self,
jid: &str,
entry: &TcTokenEntry,
) -> wa_rs_core::store::error::Result<()> {
let conn = self.conn.lock();
let now = chrono::Utc::now().timestamp();
@ -899,13 +1024,12 @@ impl ProtocolStore for RusqliteStore {
async fn get_all_tc_token_jids(&self) -> wa_rs_core::store::error::Result<Vec<String>> {
let conn = self.conn.lock();
let mut stmt =
to_store_err!(conn.prepare("SELECT jid FROM tc_tokens WHERE device_id = ?1"))?;
let rows = to_store_err!(
stmt.query_map(params![self.device_id], |row| { row.get::<_, String>(0) })
)?;
let mut result = Vec::new();
for row in rows {
@ -915,14 +1039,25 @@ impl ProtocolStore for RusqliteStore {
Ok(result)
}
async fn delete_expired_tc_tokens(
&self,
cutoff_timestamp: i64,
) -> wa_rs_core::store::error::Result<u32> {
let conn = self.conn.lock();
let deleted = conn
.execute(
"DELETE FROM tc_tokens WHERE token_timestamp < ?1 AND device_id = ?2",
params![cutoff_timestamp, self.device_id],
)
.map_err(|e| wa_rs_core::store::error::StoreError::Database(e.to_string()))?;
let deleted = u32::try_from(deleted).map_err(|_| {
wa_rs_core::store::error::StoreError::Database(format!(
"Affected row count overflowed u32: {deleted}"
))
})?;
Ok(deleted)
}
}
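The `delete_expired_tc_tokens` change above returns the real affected-row count instead of a hard-coded zero. rusqlite's `execute` reports rows as `usize`, while the store trait wants `u32`, so the conversion is explicit. A minimal standalone sketch, with a stand-in `StoreError` rather than the real wa-rs type:

```rust
// Stand-in for wa_rs_core::store::error::StoreError; illustrative only.
#[derive(Debug)]
enum StoreError {
    Database(String),
}

// Mirror of the conversion above: rusqlite's `execute` reports affected rows
// as `usize`; an overflow becomes an explicit error, never a silent truncation.
fn rows_deleted(affected: usize) -> Result<u32, StoreError> {
    u32::try_from(affected).map_err(|_| {
        StoreError::Database(format!("Affected row count overflowed u32: {affected}"))
    })
}

fn main() {
    assert_eq!(rows_deleted(1).unwrap(), 1);
    assert_eq!(rows_deleted(u32::MAX as usize).unwrap(), u32::MAX);
    println!("ok");
}
```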
@ -997,7 +1132,9 @@ impl DeviceStoreTrait for RusqliteStore {
params![self.device_id],
|row| {
// Helper to convert errors to rusqlite::Error
fn to_rusqlite_err<E: std::error::Error + Send + Sync + 'static>(
e: E,
) -> rusqlite::Error {
rusqlite::Error::ToSqlConversionFailure(Box::new(e))
}
@ -1006,24 +1143,25 @@ impl DeviceStoreTrait for RusqliteStore {
let identity_key_bytes: Vec<u8> = row.get("identity_key")?;
let signed_pre_key_bytes: Vec<u8> = row.get("signed_pre_key")?;
if noise_key_bytes.len() != 64
|| identity_key_bytes.len() != 64
|| signed_pre_key_bytes.len() != 64
{
return Err(rusqlite::Error::InvalidParameterName("key_pair".into()));
}
use wa_rs_core::libsignal::protocol::{KeyPair, PrivateKey, PublicKey};
let noise_key = KeyPair::new(
PublicKey::from_djb_public_key_bytes(&noise_key_bytes[32..64])
.map_err(to_rusqlite_err)?,
PrivateKey::deserialize(&noise_key_bytes[0..32]).map_err(to_rusqlite_err)?,
);
let identity_key = KeyPair::new(
PublicKey::from_djb_public_key_bytes(&identity_key_bytes[32..64])
.map_err(to_rusqlite_err)?,
PrivateKey::deserialize(&identity_key_bytes[0..32]).map_err(to_rusqlite_err)?,
);
let signed_pre_key = KeyPair::new(
@ -1045,8 +1183,10 @@ impl DeviceStoreTrait for RusqliteStore {
adv_secret.copy_from_slice(&adv_secret_bytes);
let account = if let Some(bytes) = account_bytes {
Some(
wa_rs_proto::whatsapp::AdvSignedDeviceIdentity::decode(&*bytes)
.map_err(to_rusqlite_err)?,
)
} else {
None
};
@ -1077,7 +1217,9 @@ impl DeviceStoreTrait for RusqliteStore {
match result {
Ok(device) => Ok(Some(device)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(wa_rs_core::store::error::StoreError::Database(
e.to_string(),
)),
}
}
@ -1097,7 +1239,11 @@ impl DeviceStoreTrait for RusqliteStore {
Ok(self.device_id)
}
async fn snapshot_db(
&self,
name: &str,
extra_content: Option<&[u8]>,
) -> wa_rs_core::store::error::Result<()> {
// Create a snapshot by copying the database file
let snapshot_path = format!("{}.snapshot.{}", self.db_path, name);
@ -1116,6 +1262,8 @@ impl DeviceStoreTrait for RusqliteStore {
#[cfg(test)]
mod tests {
use super::*;
#[cfg(feature = "whatsapp-web")]
use wa_rs_core::store::traits::{LidPnMappingEntry, ProtocolStore, TcTokenEntry};
#[cfg(feature = "whatsapp-web")]
#[test]
@ -1124,4 +1272,74 @@ mod tests {
let store = RusqliteStore::new(tmp.path()).unwrap();
assert_eq!(store.device_id, 1);
}
#[cfg(feature = "whatsapp-web")]
#[tokio::test]
async fn lid_mapping_round_trip_preserves_learning_source_and_updated_at() {
let tmp = tempfile::NamedTempFile::new().unwrap();
let store = RusqliteStore::new(tmp.path()).unwrap();
let entry = LidPnMappingEntry {
lid: "100000012345678".to_string(),
phone_number: "15551234567".to_string(),
created_at: 1_700_000_000,
updated_at: 1_700_000_100,
learning_source: "usync".to_string(),
};
ProtocolStore::put_lid_mapping(&store, &entry)
.await
.unwrap();
let loaded = ProtocolStore::get_lid_mapping(&store, &entry.lid)
.await
.unwrap()
.expect("expected lid mapping to be present");
assert_eq!(loaded.learning_source, entry.learning_source);
assert_eq!(loaded.updated_at, entry.updated_at);
let loaded_by_pn = ProtocolStore::get_pn_mapping(&store, &entry.phone_number)
.await
.unwrap()
.expect("expected pn mapping to be present");
assert_eq!(loaded_by_pn.learning_source, entry.learning_source);
assert_eq!(loaded_by_pn.updated_at, entry.updated_at);
}
#[cfg(feature = "whatsapp-web")]
#[tokio::test]
async fn delete_expired_tc_tokens_returns_deleted_row_count() {
let tmp = tempfile::NamedTempFile::new().unwrap();
let store = RusqliteStore::new(tmp.path()).unwrap();
let expired = TcTokenEntry {
token: vec![1, 2, 3],
token_timestamp: 10,
sender_timestamp: None,
};
let fresh = TcTokenEntry {
token: vec![4, 5, 6],
token_timestamp: 1000,
sender_timestamp: Some(1000),
};
ProtocolStore::put_tc_token(&store, "15550000001", &expired)
.await
.unwrap();
ProtocolStore::put_tc_token(&store, "15550000002", &fresh)
.await
.unwrap();
let deleted = ProtocolStore::delete_expired_tc_tokens(&store, 100)
.await
.unwrap();
assert_eq!(deleted, 1);
assert!(ProtocolStore::get_tc_token(&store, "15550000001")
.await
.unwrap()
.is_none());
assert!(ProtocolStore::get_tc_token(&store, "15550000002")
.await
.unwrap()
.is_some());
}
}
@ -28,7 +28,7 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use super::whatsapp_storage::RusqliteStore;
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use parking_lot::Mutex;
use std::sync::Arc;
@ -60,6 +60,8 @@ pub struct WhatsAppWebChannel {
allowed_numbers: Vec<String>,
/// Bot handle for shutdown
bot_handle: Arc<Mutex<Option<tokio::task::JoinHandle<()>>>>,
/// Client handle for sending messages and typing indicators
client: Arc<Mutex<Option<Arc<wa_rs::Client>>>>,
/// Message sender channel
tx: Arc<Mutex<Option<tokio::sync::mpsc::Sender<ChannelMessage>>>>,
}
@ -86,6 +88,7 @@ impl WhatsAppWebChannel {
pair_code,
allowed_numbers,
bot_handle: Arc::new(Mutex::new(None)),
client: Arc::new(Mutex::new(None)),
tx: Arc::new(Mutex::new(None)),
}
}
@ -100,12 +103,44 @@ impl WhatsAppWebChannel {
/// Normalize phone number to E.164 format
#[cfg(feature = "whatsapp-web")]
fn normalize_phone(&self, phone: &str) -> String {
let trimmed = phone.trim();
let user_part = trimmed
.split_once('@')
.map(|(user, _)| user)
.unwrap_or(trimmed);
let normalized_user = user_part.trim_start_matches('+');
if user_part.starts_with('+') {
format!("+{normalized_user}")
} else {
format!("+{normalized_user}")
}
}
/// Convert a recipient to a wa-rs JID.
///
/// Supports:
/// - Full JIDs (e.g. "12345@s.whatsapp.net")
/// - E.164-like numbers (e.g. "+1234567890")
#[cfg(feature = "whatsapp-web")]
fn recipient_to_jid(&self, recipient: &str) -> Result<wa_rs_binary::jid::Jid> {
let trimmed = recipient.trim();
if trimmed.is_empty() {
anyhow::bail!("Recipient cannot be empty");
}
if trimmed.contains('@') {
return trimmed
.parse::<wa_rs_binary::jid::Jid>()
.map_err(|e| anyhow!("Invalid WhatsApp JID `{trimmed}`: {e}"));
}
let digits: String = trimmed.chars().filter(|c| c.is_ascii_digit()).collect();
if digits.is_empty() {
anyhow::bail!("Recipient `{trimmed}` does not contain a valid phone number");
}
Ok(wa_rs_binary::jid::Jid::pn(digits))
}
}
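The two helpers above reduce a recipient to either a parsed JID or a bare digit string. The normalization rules (trim, drop any `@domain` suffix, keep a single leading `+`, extract digits) can be sketched standalone; `normalize` and `digits_of` are hypothetical names for this sketch, not part of the wa-rs API:

```rust
// Illustrative sketch of the recipient handling above (hypothetical helpers).

// Normalize a phone-like recipient to "+<user part>", dropping any "@server"
// suffix from a full JID and collapsing a leading '+'.
fn normalize(phone: &str) -> String {
    let trimmed = phone.trim();
    let user = trimmed.split_once('@').map(|(u, _)| u).unwrap_or(trimmed);
    format!("+{}", user.trim_start_matches('+'))
}

// Extract the digit string that would become the user part of a
// phone-number JID (e.g. "<digits>@s.whatsapp.net").
fn digits_of(recipient: &str) -> String {
    recipient.chars().filter(|c| c.is_ascii_digit()).collect()
}

fn main() {
    assert_eq!(normalize("15551234567@s.whatsapp.net"), "+15551234567");
    assert_eq!(normalize(" +15551234567 "), "+15551234567");
    assert_eq!(digits_of("+1 (555) 123-4567"), "15551234567");
    println!("ok");
}
```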
#[cfg(feature = "whatsapp-web")]
@ -116,23 +151,33 @@ impl Channel for WhatsAppWebChannel {
}
async fn send(&self, message: &SendMessage) -> Result<()> {
// Check if bot is running
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
// Validate recipient is allowed
let normalized = self.normalize_phone(&message.recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: recipient {} not in allowed list",
message.recipient
);
return Ok(());
}
let to = self.recipient_to_jid(&message.recipient)?;
let outgoing = wa_rs_proto::whatsapp::Message {
conversation: Some(message.content.clone()),
..Default::default()
};
let message_id = client.send_message(to, outgoing).await?;
tracing::debug!(
"WhatsApp Web: sent message to {} (id: {})",
message.recipient,
message_id
);
Ok(())
}
@ -141,11 +186,13 @@ impl Channel for WhatsAppWebChannel {
*self.tx.lock() = Some(tx.clone());
use wa_rs::bot::Bot;
use wa_rs::pair_code::PairCodeOptions;
use wa_rs::store::{Device, DeviceStore};
use wa_rs_binary::jid::JidExt as _;
use wa_rs_core::proto_helpers::MessageExt;
use wa_rs_core::types::events::Event;
use wa_rs_tokio_transport::TokioWebSocketTransportFactory;
use wa_rs_ureq_http::UreqHttpClient;
tracing::info!(
"WhatsApp Web channel starting (session: {})",
@ -166,7 +213,9 @@ impl Channel for WhatsAppWebChannel {
anyhow::bail!("Device exists but failed to load");
}
} else {
tracing::info!(
"WhatsApp Web: no existing session, new device will be created during pairing"
);
};
// Create transport factory
@ -182,7 +231,7 @@ impl Channel for WhatsAppWebChannel {
let tx_clone = tx.clone();
let allowed_numbers = self.allowed_numbers.clone();
let mut builder = Bot::builder()
.with_backend(backend)
.with_transport_factory(transport_factory)
.with_http_client(http_client)
@ -194,7 +243,7 @@ impl Channel for WhatsAppWebChannel {
Event::Message(msg, info) => {
// Extract message content
let text = msg.text_content().unwrap_or("");
let sender = info.source.sender.user().to_string();
let chat = info.source.chat.to_string();
tracing::info!("📨 WhatsApp message from {} in {}: {}", sender, chat, text);
@ -209,14 +258,17 @@ impl Channel for WhatsAppWebChannel {
if allowed_numbers.is_empty()
|| allowed_numbers.iter().any(|n| n == "*" || n == &normalized)
{
if let Err(e) = tx_inner
.send(ChannelMessage {
id: uuid::Uuid::new_v4().to_string(),
channel: "whatsapp".to_string(),
sender: normalized.clone(),
reply_target: normalized.clone(),
content: text.to_string(),
timestamp: chrono::Utc::now().timestamp_millis() as u64,
})
.await
{
tracing::error!("Failed to send message to channel: {}", e);
}
} else {
@ -244,17 +296,25 @@ impl Channel for WhatsAppWebChannel {
}
}
});
// Configure pair-code flow when a phone number is provided.
if let Some(ref phone) = self.pair_phone {
tracing::info!("WhatsApp Web: pair-code flow enabled for configured phone number");
builder = builder.with_pair_code(PairCodeOptions {
phone_number: phone.clone(),
custom_code: self.pair_code.clone(),
..Default::default()
});
} else if self.pair_code.is_some() {
tracing::warn!(
"WhatsApp Web: pair_code is set but pair_phone is missing; pair code config is ignored"
);
}
let mut bot = builder.build().await?;
*self.client.lock() = Some(bot.client());
// Run the bot
let bot_handle = bot.run().await?;
@ -273,6 +333,11 @@ impl Channel for WhatsAppWebChannel {
}
}
*self.client.lock() = None;
if let Some(handle) = self.bot_handle.lock().take() {
handle.abort();
}
Ok(())
}
@ -282,14 +347,54 @@ impl Channel for WhatsAppWebChannel {
}
async fn start_typing(&self, recipient: &str) -> Result<()> {
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
let normalized = self.normalize_phone(recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: typing target {} not in allowed list",
recipient
);
return Ok(());
}
let to = self.recipient_to_jid(recipient)?;
client
.chatstate()
.send_composing(&to)
.await
.map_err(|e| anyhow!("Failed to send typing state (composing): {e}"))?;
tracing::debug!("WhatsApp Web: start typing for {}", recipient);
Ok(())
}
async fn stop_typing(&self, recipient: &str) -> Result<()> {
let client = self.client.lock().clone();
let Some(client) = client else {
anyhow::bail!("WhatsApp Web client not connected. Initialize the bot first.");
};
let normalized = self.normalize_phone(recipient);
if !self.is_number_allowed(&normalized) {
tracing::warn!(
"WhatsApp Web: typing target {} not in allowed list",
recipient
);
return Ok(());
}
let to = self.recipient_to_jid(recipient)?;
client
.chatstate()
.send_paused(&to)
.await
.map_err(|e| anyhow!("Failed to send typing state (paused): {e}"))?;
tracing::debug!("WhatsApp Web: stop typing for {}", recipient);
Ok(())
}
}
@ -308,10 +413,7 @@ impl WhatsAppWebChannel {
_pair_code: Option<String>,
_allowed_numbers: Vec<String>,
) -> Self {
Self { _private: () }
}
}
@ -323,11 +425,17 @@ impl Channel for WhatsAppWebChannel {
}
async fn send(&self, _message: &SendMessage) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn listen(&self, _tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn health_check(&self) -> bool {
@ -335,11 +443,17 @@ impl Channel for WhatsAppWebChannel {
}
async fn start_typing(&self, _recipient: &str) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
async fn stop_typing(&self, _recipient: &str) -> Result<()> {
anyhow::bail!(
"WhatsApp Web channel requires the 'whatsapp-web' feature. \
Enable with: cargo build --features whatsapp-web"
);
}
}