feat(proxy): add scoped proxy configuration and docs runbooks
- add scope-aware proxy schema and runtime wiring for providers/channels/tools
- add agent-callable proxy_config tool for fast proxy setup
- standardize docs system with index, template, and playbooks
parent 13ee9e6398
commit ce104bed45
36 changed files with 2025 additions and 323 deletions
@ -188,15 +188,21 @@ To keep docs useful under high PR volume, we use these rules:
- **Risk-proportionate detail**: high-risk paths need deeper evidence; low-risk paths stay lightweight.
- **Side-effect visibility**: document blast radius, failure modes, and rollback before merge.
- **Automation assists, humans decide**: bots triage and label, but merge accountability stays human.
- **Index-first discoverability**: `docs/README.md` is the first entry point for operational documentation.
- **Template-first authoring**: start new operational docs from `docs/doc-template.md`.

### Documentation System Map

| Doc | Primary purpose | When to update |
|---|---|---|
| `docs/README.md` | canonical docs index and taxonomy | add/remove docs or change documentation ownership/navigation |
| `docs/doc-template.md` | standard skeleton for new operational documentation | when required sections or documentation quality bar changes |
| `CONTRIBUTING.md` | contributor contract and readiness baseline | contributor expectations or policy changes |
| `docs/pr-workflow.md` | governance logic and merge contract | workflow/risk/merge gate changes |
| `docs/reviewer-playbook.md` | reviewer operating checklist | review depth or triage behavior changes |
| `docs/ci-map.md` | CI ownership and triage entry points | workflow trigger/job ownership changes |
| `docs/network-deployment.md` | runtime deployment and network operating guide | gateway/channel/tunnel/network runtime behavior changes |
| `docs/proxy-agent-playbook.md` | agent-operable proxy runbook and rollback recipes | proxy scope/selector/tooling behavior changes |

## PR Definition of Ready (DoR)
@ -209,6 +215,8 @@ Before requesting review, ensure all of the following are true:
- No personal/sensitive data is introduced in code/docs/tests/fixtures/logs/examples/commit messages.
- Tests/fixtures/examples use neutral project-scoped wording (no identity-specific or first-person phrasing).
- If identity-like wording is required, use ZeroClaw-centric labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`).
- If docs were changed, update `docs/README.md` navigation and reciprocal links with related docs.
- If a new operational doc was added, start from `docs/doc-template.md` and keep risk/rollback/troubleshooting sections where applicable.
- Linked issue (or rationale for no issue) is included.

## PR Definition of Done (DoD)
@ -220,6 +228,7 @@ A PR is merge-ready when:
- Risk level matches changed paths (`risk: low/medium/high`).
- User-visible behavior, migration, and rollback notes are complete.
- Follow-up TODOs are explicit and tracked in issues.
- For documentation changes, links and ownership mapping in `CONTRIBUTING.md` and `docs/README.md` are consistent.

## High-Volume Collaboration Rules
@ -23,7 +23,7 @@ tokio = { version = "1.42", default-features = false, features = ["rt-multi-thre
tokio-util = { version = "0.7", default-features = false }

# HTTP client - minimal features
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls", "blocking", "multipart", "stream"] }
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls", "blocking", "multipart", "stream", "socks"] }

# Serialization
serde = { version = "1.0", default-features = false, features = ["derive"] }
@ -834,12 +834,20 @@ Start from the docs hub for a task-based map:
Core collaboration references:

- Documentation index: [docs/README.md](docs/README.md)
- Documentation template: [docs/doc-template.md](docs/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Contribution guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- CI ownership and triage map: [docs/ci-map.md](docs/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)

For deployment and runtime operations:

- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy agent playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)

## Support ZeroClaw

If ZeroClaw helps your work and you want to support ongoing development, you can donate here:
63 docs/doc-template.md Normal file
@ -0,0 +1,63 @@
# Documentation Template (Operational)

Use this template when adding a new operational or engineering document under `docs/`.

Keep sections that apply; remove non-applicable placeholders before merging.

---

## 1. Summary

- **Purpose:** <one sentence about why this document exists>
- **Audience:** <operators | reviewers | contributors | maintainers>
- **Scope:** <what this doc covers>
- **Non-goals:** <what this doc intentionally does not cover>

## 2. Prerequisites

- <required environment>
- <required permissions>
- <required tools/config>

## 3. Procedure

### 3.1 Baseline Check

1. <step>
2. <step>

### 3.2 Main Workflow

1. <step>
2. <step>
3. <step>

### 3.3 Verification

- <expected output or success signal>
- <validation command/log/checkpoint>

## 4. Safety, Risk, and Rollback

- **Risk surface:** <which components may be impacted>
- **Failure modes:** <what can go wrong>
- **Rollback plan:** <concrete rollback command/steps>

## 5. Troubleshooting

- **Symptom:** <error/signal>
- **Cause:** <likely cause>
- **Fix:** <action>

## 6. Related Docs

- [README.md](./README.md) — documentation taxonomy and navigation.
- <related-doc-1.md>
- <related-doc-2.md>

## 7. Maintenance Notes

- **Owner:** <team/persona/area>
- **Update trigger:** <what changes should force this doc update>
- **Last reviewed:** <YYYY-MM-DD>
@ -11,10 +11,66 @@ This document defines how ZeroClaw handles high PR volume while maintaining:
Related references:

- [`docs/ci-map.md`](ci-map.md) for per-workflow ownership, triggers, and triage flow.
- [`docs/reviewer-playbook.md`](reviewer-playbook.md) for day-to-day reviewer execution.
- [`docs/README.md`](./README.md) for documentation taxonomy and navigation.
- [`docs/ci-map.md`](./ci-map.md) for per-workflow ownership, triggers, and triage flow.
- [`docs/reviewer-playbook.md`](./reviewer-playbook.md) for day-to-day reviewer execution.

## 1) Governance Goals
## 0. Summary

- **Purpose:** provide a deterministic, risk-based PR operating model for high-throughput collaboration.
- **Audience:** contributors, maintainers, and agent-assisted reviewers.
- **Scope:** repository settings, PR lifecycle, readiness contracts, risk routing, queue discipline, and recovery protocol.
- **Non-goals:** replacing branch protection configuration or CI workflow source files as implementation authority.

---

## 1. Fast Path by PR Situation

Use this section to route quickly before full deep review.

### 1.1 Intake is incomplete

1. Request template completion and missing evidence in one checklist comment.
2. Stop deep review until intake blockers are resolved.

Go to:

- [Section 5.1](#51-definition-of-ready-dor-before-requesting-review)

### 1.2 `CI Required Gate` failing

1. Route failure through CI map and fix deterministic gates first.
2. Re-evaluate risk only after CI returns coherent signal.

Go to:

- [docs/ci-map.md](./ci-map.md)
- [Section 4.2](#42-step-b-validation)

### 1.3 High-risk path touched

1. Escalate to deep review lane.
2. Require explicit rollback, failure-mode evidence, and security boundary checks.

Go to:

- [Section 9](#9-security-and-stability-rules)
- [docs/reviewer-playbook.md](./reviewer-playbook.md)

### 1.4 PR is superseded or duplicate

1. Require explicit supersede linkage and queue cleanup.
2. Close superseded PR after maintainer confirmation.

Go to:

- [Section 8.2](#82-backlog-pressure-controls)

---

## 2. Governance Goals and Control Loop

### 2.1 Governance goals

1. Keep merge throughput predictable under heavy PR load.
2. Keep CI signal quality high (fast feedback, low false positives).
@ -22,18 +78,20 @@ Related references:
4. Keep changes easy to reason about and easy to revert.
5. Keep repository artifacts free of personal/sensitive data leakage.

### Governance Design Logic (Control Loop)
### 2.2 Governance design logic (control loop)

This workflow is intentionally layered to reduce reviewer load while keeping accountability clear:

1. **Intake classification**: path/size/risk/module labels route the PR to the right review depth.
2. **Deterministic validation**: merge gate depends on reproducible checks, not subjective comments.
3. **Risk-based review depth**: high-risk paths trigger deep review; low-risk paths stay fast.
4. **Rollback-first merge contract**: every merge path includes concrete recovery steps.
1. **Intake classification:** path/size/risk/module labels route the PR to the right review depth.
2. **Deterministic validation:** merge gate depends on reproducible checks, not subjective comments.
3. **Risk-based review depth:** high-risk paths trigger deep review; low-risk paths stay fast.
4. **Rollback-first merge contract:** every merge path includes concrete recovery steps.

Automation assists with triage and guardrails, but final merge accountability remains with human maintainers and PR authors.

## 2) Required Repository Settings
---

## 3. Required Repository Settings

Maintain these branch protection rules on `main`:
@ -45,9 +103,11 @@ Maintain these branch protection rules on `main`:
- Dismiss stale approvals when new commits are pushed.
- Restrict force-push on protected branches.

## 3) PR Lifecycle
---

### Step A: Intake
## 4. PR Lifecycle Runbook

### 4.1 Step A: Intake

- Contributor opens PR with full `.github/pull_request_template.md`.
- `PR Labeler` applies scope/path labels + size labels + risk labels + module labels (for example `channel:telegram`, `provider:kimi`, `tool:shell`) and contributor tiers by merged PR count (`trusted` >=5, `experienced` >=10, `principal` >=20, `distinguished` >=50), while de-duplicating less-specific scope labels when a more specific module label is present.
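The tier thresholds above can be sketched as a simple mapping. This is illustrative only: the function name and shape are assumptions, and the real logic lives in the `PR Labeler` workflow.

```rust
// Illustrative only: maps merged PR count to the contributor tier labels
// listed above (`trusted` >=5, `experienced` >=10, `principal` >=20,
// `distinguished` >=50). Not the actual workflow implementation.
fn contributor_tier(merged_prs: u32) -> Option<&'static str> {
    match merged_prs {
        n if n >= 50 => Some("distinguished"),
        n if n >= 20 => Some("principal"),
        n if n >= 10 => Some("experienced"),
        n if n >= 5 => Some("trusted"),
        _ => None,
    }
}

fn main() {
    assert_eq!(contributor_tier(4), None);
    assert_eq!(contributor_tier(5), Some("trusted"));
    assert_eq!(contributor_tier(19), Some("experienced"));
    assert_eq!(contributor_tier(75), Some("distinguished"));
    println!("tier mapping ok");
}
```

Note the thresholds are inclusive lower bounds, checked from highest to lowest so a count of 20 resolves to `principal`, not `trusted`.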
@ -58,27 +118,29 @@ Maintain these branch protection rules on `main`:
- Managed label colors are arranged by display order to create a smooth gradient across long label rows.
- `PR Auto Responder` posts first-time guidance, handles label-driven routing for low-signal items, and auto-applies issue contributor tiers using the same thresholds as `PR Labeler` (`trusted` >=5, `experienced` >=10, `principal` >=20, `distinguished` >=50).

### Step B: Validation
### 4.2 Step B: Validation

- `CI Required Gate` is the merge gate.
- Docs-only PRs use fast-path and skip heavy Rust jobs.
- Non-doc PRs must pass lint, tests, and release build smoke check.

### Step C: Review
### 4.3 Step C: Review

- Reviewers prioritize by risk and size labels.
- Security-sensitive paths (`src/security`, `src/runtime`, `src/gateway`, and CI workflows) require maintainer attention.
- Large PRs (`size: L`/`size: XL`) should be split unless strongly justified.

### Step D: Merge
### 4.4 Step D: Merge

- Prefer **squash merge** to keep history compact.
- PR title should follow Conventional Commit style.
- Merge only when rollback path is documented.

## 4) PR Readiness Contracts (DoR / DoD)
---

### Definition of Ready (before requesting review)
## 5. PR Readiness Contracts (DoR / DoD)

### 5.1 Definition of Ready (DoR) before requesting review

- PR template fully completed.
- Scope boundary is explicit (what changed / what did not).
@ -87,7 +149,7 @@ Maintain these branch protection rules on `main`:
- Privacy/data-hygiene checks are completed and test language is neutral/project-scoped.
- If identity-like wording appears in tests/examples, it is normalized to ZeroClaw/project-native labels.

### Definition of Done (merge-ready)
### 5.2 Definition of Done (DoD) merge-ready

- `CI Required Gate` is green.
- Required reviewers approved (including CODEOWNERS paths).
@ -95,7 +157,11 @@ Maintain these branch protection rules on `main`:
- Migration/compatibility impact is documented.
- Rollback path is concrete and fast.

## 5) PR Size Policy
---

## 6. PR Size and Batching Policy

### 6.1 Size tiers

- `size: XS` <= 80 changed lines
- `size: S` <= 250 changed lines
@ -103,93 +169,104 @@ Maintain these branch protection rules on `main`:
- `size: L` <= 1000 changed lines
- `size: XL` > 1000 changed lines

Policy:
### 6.2 Policy

- Target `XS/S/M` by default.
- `L/XL` PRs need explicit justification and tighter test evidence.
- If a large feature is unavoidable, split into stacked PRs.

Automation behavior:
### 6.3 Automation behavior

- `PR Labeler` applies `size:*` labels from effective changed lines.
- Docs-only/lockfile-heavy PRs are normalized to avoid size inflation.

## 6) AI/Agent Contribution Policy
---

## 7. AI/Agent Contribution Policy

AI-assisted PRs are welcome, and review can also be agent-assisted.

Required:
### 7.1 Required

1. Clear PR summary with scope boundary.
2. Explicit test/validation evidence.
3. Security impact and rollback notes for risky changes.

Recommended:
### 7.2 Recommended

1. Brief tool/workflow notes when automation materially influenced the change.
2. Optional prompt/plan snippets for reproducibility.

We do **not** require contributors to quantify AI-vs-human line ownership.

Review emphasis for AI-heavy PRs:
### 7.3 Review emphasis for AI-heavy PRs

- Contract compatibility
- Security boundaries
- Error handling and fallback behavior
- Performance and memory regressions
- Contract compatibility.
- Security boundaries.
- Error handling and fallback behavior.
- Performance and memory regressions.

## 7) Review SLA and Queue Discipline
---

## 8. Review SLA and Queue Discipline

- First maintainer triage target: within 48 hours.
- If PR is blocked, maintainer leaves one actionable checklist.
- `stale` automation is used to keep queue healthy; maintainers can apply `no-stale` when needed.
- `pr-hygiene` automation checks open PRs every 12 hours and posts a nudge when a PR has no new commits for 48+ hours and is either behind `main` or missing/failing `CI Required Gate` on the head commit.
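The `pr-hygiene` nudge condition above can be written as a small predicate. Parameter names here are assumptions; the actual workflow may evaluate additional signals.

```rust
// Hedged sketch of the nudge condition: no new commits for 48+ hours AND
// (behind `main` OR the head commit's `CI Required Gate` is missing/failing).
fn should_nudge(hours_since_last_commit: u64, behind_main: bool, gate_green_on_head: bool) -> bool {
    hours_since_last_commit >= 48 && (behind_main || !gate_green_on_head)
}

fn main() {
    assert!(should_nudge(72, true, true));   // stale and behind main
    assert!(should_nudge(48, false, false)); // stale and gate not green on head
    assert!(!should_nudge(12, true, false)); // recent commits: no nudge yet
    println!("nudge predicate ok");
}
```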
Backlog pressure controls:
### 8.1 Queue budget controls

- Use a review queue budget: limit concurrent deep-review PRs per maintainer and keep the rest in triage state.
- For stacked work, require explicit `Depends on #...` so review order is deterministic.

### 8.2 Backlog pressure controls

- If a new PR replaces an older open PR, require `Supersedes #...` and close the older one after maintainer confirmation.
- Mark dormant/redundant PRs with `stale-candidate` or `superseded` to reduce duplicate review effort.

Issue triage discipline:
### 8.3 Issue triage discipline

- `r:needs-repro` for incomplete bug reports (request deterministic repro before deep triage).
- `r:support` for usage/help items better handled outside bug backlog.
- `invalid` / `duplicate` labels trigger **issue-only** closing automation with guidance.

Automation side-effect guards:
### 8.4 Automation side-effect guards

- `PR Auto Responder` deduplicates label-based comments to avoid spam.
- Automated close routes are limited to issues, not PRs.
- Maintainers can freeze automated risk recalculation with `risk: manual` when context demands human override.

## 8) Security and Stability Rules
---

## 9. Security and Stability Rules

Changes in these areas require stricter review and stronger test evidence:

- `src/security/**`
- runtime process management
- gateway ingress/authentication behavior (`src/gateway/**`)
- filesystem access boundaries
- network/authentication behavior
- GitHub workflows and release pipeline
- tools with execution capability (`src/tools/**`)
- Runtime process management.
- Gateway ingress/authentication behavior (`src/gateway/**`).
- Filesystem access boundaries.
- Network/authentication behavior.
- GitHub workflows and release pipeline.
- Tools with execution capability (`src/tools/**`).

Minimum for risky PRs:
### 9.1 Minimum for risky PRs

- threat/risk statement
- mitigation notes
- rollback steps
- Threat/risk statement.
- Mitigation notes.
- Rollback steps.

Recommended for high-risk PRs:
### 9.2 Recommended for high-risk PRs

- include a focused test proving boundary behavior
- include one explicit failure-mode scenario and expected degradation
- Include a focused test proving boundary behavior.
- Include one explicit failure-mode scenario and expected degradation.

For agent-assisted contributions, reviewers should also verify the author demonstrates understanding of runtime behavior and blast radius.

## 9) Failure Recovery
---

## 10. Failure Recovery Protocol

If a merged PR causes regressions:
@ -199,7 +276,9 @@ If a merged PR causes regressions:
Prefer fast restore of service quality over delayed perfect fixes.

## 10) Maintainer Checklist (Merge-Ready)
---

## 11. Maintainer Merge Checklist

- Scope is focused and understandable.
- CI gate is green.
@ -210,11 +289,13 @@ Prefer fast restore of service quality over delayed perfect fixes.
- Rollback plan is explicit.
- Commit title follows Conventional Commits.

## 11) Agent Review Operating Model
---

To keep review quality stable under high PR volume, we use a two-lane review model:
## 12. Agent Review Operating Model

### Lane A: Fast triage (agent-friendly)
To keep review quality stable under high PR volume, use a two-lane review model.

### 12.1 Lane A: fast triage (agent-friendly)

- Confirm PR template completeness.
- Confirm CI gate signal (`CI Required Gate`).
@ -223,7 +304,7 @@ To keep review quality stable under high PR volume, we use a two-lane review mod
- Confirm privacy/data-hygiene section and neutral wording requirements are satisfied.
- Confirm any required identity-like wording uses ZeroClaw/project-native terminology.

### Lane B: Deep review (risk-based)
### 12.2 Lane B: deep review (risk-based)

Required for high-risk changes (security/runtime/gateway/CI):
@ -232,15 +313,17 @@ Required for high-risk changes (security/runtime/gateway/CI):
- Validate backward compatibility and migration impact.
- Validate observability/logging impact.

## 12) Queue Priority and Label Discipline
---

Triage order recommendation:
## 13. Queue Priority and Label Discipline

1. `size: XS`/`size: S` + bug/security fixes
2. `size: M` focused changes
3. `size: L`/`size: XL` split requests or staged review
### 13.1 Triage order recommendation

Label discipline:
1. `size: XS`/`size: S` + bug/security fixes.
2. `size: M` focused changes.
3. `size: L`/`size: XL` split requests or staged review.

### 13.2 Label discipline

- Path labels identify subsystem ownership quickly.
- Size labels drive batching strategy.
@ -249,7 +332,9 @@ Label discipline:
- `risk: manual` allows maintainers to preserve a human risk judgment when automation lacks context.
- `no-stale` is reserved for accepted-but-blocked work.

## 13) Agent Handoff Contract
---

## 14. Agent Handoff Contract

When one agent hands off to another (or to a maintainer), include:
@ -259,3 +344,20 @@ When one agent hands off to another (or to a maintainer), include:
4. Suggested next action.

This keeps context loss low and avoids repeated deep dives.

---

## 15. Related Docs

- [README.md](./README.md) — documentation taxonomy and navigation.
- [ci-map.md](./ci-map.md) — CI workflow ownership and triage map.
- [reviewer-playbook.md](./reviewer-playbook.md) — reviewer execution model.
- [actions-source-policy.md](./actions-source-policy.md) — action source allowlist policy.

---

## 16. Maintenance Notes

- **Owner:** maintainers responsible for collaboration governance and merge quality.
- **Update trigger:** branch protection changes, label/risk policy changes, queue governance updates, or agent review process changes.
- **Last reviewed:** 2026-02-18.
229 docs/proxy-agent-playbook.md Normal file
@ -0,0 +1,229 @@
# Proxy Agent Playbook

This playbook provides copy-paste tool calls for configuring proxy behavior via `proxy_config`.

Use this document when you want the agent to switch proxy scope quickly and safely.

## 0. Summary

- **Purpose:** provide copy-ready agent tool calls for proxy scope management and rollback.
- **Audience:** operators and maintainers running ZeroClaw in proxied networks.
- **Scope:** `proxy_config` actions, mode selection, verification flow, and troubleshooting.
- **Non-goals:** generic network debugging outside ZeroClaw runtime behavior.

---

## 1. Fast Path by Intent

Use this section for quick operational routing.

### 1.1 Proxy only ZeroClaw internal traffic

1. Use scope `zeroclaw`.
2. Set `http_proxy`/`https_proxy` or `all_proxy`.
3. Validate with `{"action":"get"}`.

Go to:

- [Section 4](#4-mode-a--proxy-only-for-zeroclaw-internals)

### 1.2 Proxy only selected services

1. Use scope `services`.
2. Set concrete keys or wildcard selectors in `services`.
3. Validate coverage using `{"action":"list_services"}`.

Go to:

- [Section 5](#5-mode-b--proxy-only-for-specific-services)

### 1.3 Export process-wide proxy environment variables

1. Use scope `environment`.
2. Apply with `{"action":"apply_env"}`.
3. Verify env snapshot via `{"action":"get"}`.

Go to:

- [Section 6](#6-mode-c--proxy-for-full-process-environment)

### 1.4 Emergency rollback

1. Disable the proxy.
2. If needed, clear env exports.
3. Re-check runtime and environment snapshots.

Go to:

- [Section 7](#7-disable--rollback-patterns)

---

## 2. Scope Decision Matrix

| Scope | Affects | Exports env vars | Typical use |
|---|---|---|---|
| `zeroclaw` | ZeroClaw internal HTTP clients | No | Normal runtime proxying without process-level side effects |
| `services` | Only selected service keys/selectors | No | Fine-grained routing for specific providers/tools/channels |
| `environment` | Runtime + process environment proxy variables | Yes | Integrations that require `HTTP_PROXY`/`HTTPS_PROXY`/`ALL_PROXY` |

---

## 3. Standard Safe Workflow

Use this sequence for every proxy change:

1. Inspect current state.
2. Discover valid service keys/selectors.
3. Apply target scope configuration.
4. Verify runtime and environment snapshots.
5. Roll back if behavior is not expected.

Tool calls:

```json
{"action":"get"}
{"action":"list_services"}
```

---

## 4. Mode A — Proxy Only for ZeroClaw Internals

Use when ZeroClaw provider/channel/tool HTTP traffic should use the proxy, without exporting process-level proxy env vars.

Tool calls:

```json
{"action":"set","enabled":true,"scope":"zeroclaw","http_proxy":"http://127.0.0.1:7890","https_proxy":"http://127.0.0.1:7890","no_proxy":["localhost","127.0.0.1"]}
{"action":"get"}
```

Expected behavior:

- Runtime proxy is active for ZeroClaw HTTP clients.
- `HTTP_PROXY` / `HTTPS_PROXY` process env exports are not required.

---

## 5. Mode B — Proxy Only for Specific Services

Use when only part of the system should use the proxy (for example specific providers/tools/channels).

### 5.1 Target specific services

```json
{"action":"set","enabled":true,"scope":"services","services":["provider.openai","tool.http_request","channel.telegram"],"all_proxy":"socks5h://127.0.0.1:1080","no_proxy":["localhost","127.0.0.1",".internal"]}
{"action":"get"}
```

### 5.2 Target by selectors

```json
{"action":"set","enabled":true,"scope":"services","services":["provider.*","tool.*"],"http_proxy":"http://127.0.0.1:7890"}
{"action":"get"}
```

Expected behavior:

- Only matched services use the proxy.
- Unmatched services bypass the proxy.
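The `provider.*`/`tool.*` selectors above read as prefix wildcards. A minimal sketch of that matching rule follows; this is an assumption about the matcher (a single trailing-star convention, which is all the examples use), so verify actual coverage against `{"action":"list_services"}` output.

```rust
// Assumed selector semantics: a trailing '*' matches any service key with
// that prefix; anything else must match the key exactly.
fn selector_matches(selector: &str, service_key: &str) -> bool {
    match selector.strip_suffix('*') {
        Some(prefix) => service_key.starts_with(prefix),
        None => selector == service_key,
    }
}

fn main() {
    assert!(selector_matches("provider.*", "provider.openai"));
    assert!(selector_matches("tool.http_request", "tool.http_request"));
    assert!(!selector_matches("provider.*", "channel.telegram"));
    println!("selector sketch ok");
}
```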
---

## 6. Mode C — Proxy for Full Process Environment

Use when you intentionally need exported process env vars (`HTTP_PROXY`, `HTTPS_PROXY`, `ALL_PROXY`, `NO_PROXY`) for runtime integrations.

### 6.1 Configure and apply environment scope

```json
{"action":"set","enabled":true,"scope":"environment","http_proxy":"http://127.0.0.1:7890","https_proxy":"http://127.0.0.1:7890","no_proxy":"localhost,127.0.0.1,.internal"}
{"action":"apply_env"}
{"action":"get"}
```

Expected behavior:

- Runtime proxy is active.
- Environment variables are exported for the process.
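Note that `no_proxy` appears both as a list (Modes A/B) and as a comma-separated string (Mode C). A sketch of normalizing the string form into entries; this is assumed behavior, not the actual parser.

```rust
// Splits a comma-separated NO_PROXY-style string into trimmed, non-empty
// entries, e.g. "localhost,127.0.0.1,.internal" -> three entries.
fn normalize_no_proxy(raw: &str) -> Vec<String> {
    raw.split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}

fn main() {
    let entries = normalize_no_proxy("localhost, 127.0.0.1,.internal,");
    assert_eq!(entries, vec!["localhost", "127.0.0.1", ".internal"]);
    println!("no_proxy normalization ok");
}
```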
---

## 7. Disable / Rollback Patterns

### 7.1 Disable proxy (default safe behavior)

```json
{"action":"disable"}
{"action":"get"}
```

### 7.2 Disable proxy and force-clear env vars

```json
{"action":"disable","clear_env":true}
{"action":"get"}
```

### 7.3 Keep proxy enabled but clear environment exports only

```json
{"action":"clear_env"}
{"action":"get"}
```

---

## 8. Common Operation Recipes

### 8.1 Switch from environment-wide proxy to service-only proxy

```json
{"action":"set","enabled":true,"scope":"services","services":["provider.openai","tool.http_request"],"all_proxy":"socks5://127.0.0.1:1080"}
{"action":"get"}
```

### 8.2 Add one more proxied service

```json
{"action":"set","scope":"services","services":["provider.openai","tool.http_request","channel.slack"]}
{"action":"get"}
```

### 8.3 Reset `services` list with selectors

```json
{"action":"set","scope":"services","services":["provider.*","channel.telegram"]}
{"action":"get"}
```

---

## 9. Troubleshooting

- Error: `proxy.scope='services' requires a non-empty proxy.services list`
  - Fix: set at least one concrete service key or selector.
- Error: invalid proxy URL scheme
  - Allowed schemes: `http`, `https`, `socks5`, `socks5h`.
- Proxy does not apply as expected
  - Run `{"action":"list_services"}` and verify service names/selectors.
  - Run `{"action":"get"}` and check `runtime_proxy` and `environment` snapshot values.
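The allowed-scheme rule above can be sketched as a prefix check. This is illustrative; the actual validator may parse the URL fully rather than inspect prefixes.

```rust
// Accepts only the schemes the troubleshooting entry lists:
// http, https, socks5, socks5h.
fn scheme_allowed(url: &str) -> bool {
    const ALLOWED: [&str; 4] = ["http://", "https://", "socks5://", "socks5h://"];
    ALLOWED.iter().any(|scheme| url.starts_with(scheme))
}

fn main() {
    assert!(scheme_allowed("socks5h://127.0.0.1:1080"));
    assert!(scheme_allowed("http://127.0.0.1:7890"));
    assert!(!scheme_allowed("ftp://127.0.0.1:2121"));
    println!("scheme sketch ok");
}
```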
---

## 10. Related Docs

- [README.md](./README.md) — documentation index and taxonomy.
- [network-deployment.md](./network-deployment.md) — end-to-end network deployment and tunnel topology guidance.
- [resource-limits.md](./resource-limits.md) — runtime safety limits for network/tool execution contexts.

---

## 11. Maintenance Notes

- **Owner:** runtime and tooling maintainers.
- **Update trigger:** new `proxy_config` actions, proxy scope semantics, or supported service selector changes.
- **Last reviewed:** 2026-02-18.
@@ -1,39 +1,93 @@
# Reviewer Playbook

This playbook is the operational companion to [`docs/pr-workflow.md`](pr-workflow.md).
Use it to reduce review latency without reducing quality.
This playbook is the operational companion to [`docs/pr-workflow.md`](./pr-workflow.md).
For broader documentation navigation, use [`docs/README.md`](./README.md).

## 1) Review Objectives
## 0. Summary

- Keep queue throughput predictable.
- Keep risk review proportionate to change risk.
- Keep merge decisions reproducible and auditable.
- **Purpose:** define a deterministic reviewer operating model that keeps review quality high under heavy PR volume.
- **Audience:** maintainers, reviewers, and agent-assisted reviewers.
- **Scope:** intake triage, risk-to-depth routing, deep-review checks, automation overrides, and handoff protocol.
- **Non-goals:** replacing PR policy authority in `CONTRIBUTING.md` or workflow authority in CI files.

## 2) 5-Minute Intake Triage
---

For every new PR, do a fast intake pass:
## 1. Fast Path by Review Situation

Use this section to route quickly before reading full detail.

### 1.1 Intake fails in first 5 minutes

1. Leave one actionable checklist comment.
2. Stop deep review until intake blockers are fixed.

Go to:

- [Section 3.1](#31-five-minute-intake-triage)

### 1.2 Risk is high or unclear

1. Treat as `risk: high` by default.
2. Require deep review and explicit rollback evidence.

Go to:

- [Section 2](#2-review-depth-decision-matrix)
- [Section 3.3](#33-deep-review-checklist-high-risk)

### 1.3 Automation output is wrong/noisy

1. Apply override protocol (`risk: manual`, dedupe comments/labels).
2. Continue review with explicit rationale.

Go to:

- [Section 5](#5-automation-override-protocol)

### 1.4 Need review handoff

1. Handoff with scope/risk/validation/blockers.
2. Assign concrete next action.

Go to:

- [Section 6](#6-handoff-protocol)

---

## 2. Review Depth Decision Matrix

| Risk label | Typical touched paths | Minimum review depth | Required evidence |
|---|---|---|---|
| `risk: low` | docs/tests/chore, isolated non-runtime changes | 1 reviewer + CI gate | coherent local validation + no behavior ambiguity |
| `risk: medium` | `src/providers/**`, `src/channels/**`, `src/memory/**`, `src/config/**` | 1 subsystem-aware reviewer + behavior verification | focused scenario proof + explicit side effects |
| `risk: high` | `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**` | fast triage + deep review + rollback readiness | security/failure-mode checks + rollback clarity |

When uncertain, treat as `risk: high`.

If automated risk labeling is contextually wrong, maintainers can apply `risk: manual` and set the final `risk:*` label explicitly.

---

## 3. Standard Review Workflow

### 3.1 Five-minute intake triage

For every new PR:

1. Confirm template completeness (`summary`, `validation`, `security`, `rollback`).
2. Confirm labels (`size:*`, `risk:*`, scope labels such as `provider`/`channel`/`security`, module-scoped labels such as `channel: *`/`provider: *`/`tool: *`, and contributor tier labels when applicable) are present and plausible.
2. Confirm labels are present and plausible:
   - `size:*`, `risk:*`
   - scope labels (for example `provider`, `channel`, `security`)
   - module-scoped labels (`channel:*`, `provider:*`, `tool:*`)
   - contributor tier labels when applicable
3. Confirm CI signal status (`CI Required Gate`).
4. Confirm scope is one concern (reject mixed mega-PRs unless justified).
5. Confirm privacy/data-hygiene and neutral test wording requirements are satisfied.

If any intake requirement fails, leave one actionable checklist comment instead of deep review.

## 3) Risk-to-Depth Matrix

| Risk label | Typical touched paths | Minimum review depth |
|---|---|---|
| `risk: low` | docs/tests/chore, isolated non-runtime changes | 1 reviewer + CI gate |
| `risk: medium` | `src/providers/**`, `src/channels/**`, `src/memory/**`, `src/config/**` | 1 subsystem-aware reviewer + behavior verification |
| `risk: high` | `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**` | fast triage + deep review, strong rollback and failure-mode checks |

When uncertain, treat as `risk: high`.

If automated risk labeling is contextually wrong, maintainers can apply `risk: manual` and set the final risk label explicitly.

## 4) Fast-Lane Checklist (All PRs)
### 3.2 Fast-lane checklist (all PRs)

- Scope boundary is explicit and believable.
- Validation commands are present and results are coherent.
@@ -45,17 +99,31 @@ If automated risk labeling is contextually wrong, maintainers can apply `risk: m
- If identity-like wording exists, it uses ZeroClaw/project-native roles (not personal or real-world identities).
- Naming and architecture boundaries follow project contracts (`AGENTS.md`, `CONTRIBUTING.md`).

## 5) Deep Review Checklist (High Risk)
### 3.3 Deep review checklist (high risk)

For high-risk PRs, verify at least one example in each category:
For high-risk PRs, verify at least one concrete example in each category:

- **Security boundaries**: deny-by-default behavior preserved, no accidental scope broadening.
- **Failure modes**: error handling is explicit and degrades safely.
- **Contract stability**: CLI/config/API compatibility preserved or migration documented.
- **Observability**: failures are diagnosable without leaking secrets.
- **Rollback safety**: revert path and blast radius are clear.
- **Security boundaries:** deny-by-default behavior preserved, no accidental scope broadening.
- **Failure modes:** error handling is explicit and degrades safely.
- **Contract stability:** CLI/config/API compatibility preserved or migration documented.
- **Observability:** failures are diagnosable without leaking secrets.
- **Rollback safety:** revert path and blast radius are clear.

## 6) Issue Triage Playbook
### 3.4 Review comment outcome style

Prefer checklist-style comments with one explicit outcome:

- **Ready to merge** (say why).
- **Needs author action** (ordered blocker list).
- **Needs deeper security/runtime review** (state exact risk and requested evidence).

Avoid vague comments that create avoidable back-and-forth latency.

---

## 4. Issue Triage and Backlog Governance

### 4.1 Issue triage label playbook

Use labels to keep backlog actionable:
@@ -63,28 +131,9 @@ Use labels to keep backlog actionable:
- `r:support` for usage/support questions better routed outside bug backlog.
- `duplicate` / `invalid` for non-actionable duplicates/noise.
- `no-stale` for accepted work waiting on external blockers.
- Request redaction if logs/payloads include personal identifiers or sensitive data.
- Request redaction when logs/payloads include personal identifiers or sensitive data.

## 7) Review Comment Style

Prefer checklist-style comments with one of these outcomes:

- **Ready to merge** (explicitly say why).
- **Needs author action** (ordered list of blockers).
- **Needs deeper security/runtime review** (state exact risk and requested evidence).

Avoid vague comments that create back-and-forth latency.

## 8) Automation Override Protocol

Use this when automation output creates review side effects:

1. **Incorrect risk label**: add `risk: manual`, then set the intended `risk:*` label.
2. **Incorrect auto-close on issue triage**: reopen issue, remove route label, and leave one clarifying comment.
3. **Label spam/noise**: keep one canonical maintainer comment and remove redundant route labels.
4. **Ambiguous PR scope**: request split before deep review.

### PR Backlog Pruning Protocol
### 4.2 PR backlog pruning protocol

When review demand exceeds capacity, apply this order:
@@ -93,18 +142,50 @@ When review demand exceeds capacity, apply this order:
3. Mark dormant PRs as `stale-candidate` before stale closure window starts.
4. Require rebase + fresh validation before reopening stale/superseded technical work.

## 9) Handoff Protocol
---

## 5. Automation Override Protocol

Use this when automation output creates review side effects:

1. **Incorrect risk label:** add `risk: manual`, then set intended `risk:*` label.
2. **Incorrect auto-close on issue triage:** reopen issue, remove route label, leave one clarifying comment.
3. **Label spam/noise:** keep one canonical maintainer comment and remove redundant route labels.
4. **Ambiguous PR scope:** request split before deep review.

---

## 6. Handoff Protocol

If handing off review to another maintainer/agent, include:

1. Scope summary
2. Current risk class and why
3. What has been validated already
4. Open blockers
5. Suggested next action
1. Scope summary.
2. Current risk class and rationale.
3. What has been validated already.
4. Open blockers.
5. Suggested next action.

## 10) Weekly Queue Hygiene
---

## 7. Weekly Queue Hygiene

- Review stale queue and apply `no-stale` only to accepted-but-blocked work.
- Prioritize `size: XS/S` bug/security PRs first.
- Convert recurring support issues into docs updates and auto-response guidance.

---

## 8. Related Docs

- [README.md](./README.md) — documentation taxonomy and navigation.
- [pr-workflow.md](./pr-workflow.md) — governance workflow and merge contract.
- [ci-map.md](./ci-map.md) — CI ownership and triage map.
- [actions-source-policy.md](./actions-source-policy.md) — action source allowlist policy.

---

## 9. Maintenance Notes

- **Owner:** maintainers responsible for review quality and queue throughput.
- **Update trigger:** PR policy changes, risk-routing model changes, or automation override behavior changes.
- **Last reviewed:** 2026-02-18.

@@ -15,7 +15,6 @@ pub struct DingTalkChannel {
    client_id: String,
    client_secret: String,
    allowed_users: Vec<String>,
    client: reqwest::Client,
    /// Per-chat session webhooks for sending replies (chatID -> webhook URL).
    /// DingTalk provides a unique webhook URL with each incoming message.
    session_webhooks: Arc<RwLock<HashMap<String, String>>>,
@@ -34,11 +33,14 @@ impl DingTalkChannel {
            client_id,
            client_secret,
            allowed_users,
            client: reqwest::Client::new(),
            session_webhooks: Arc::new(RwLock::new(HashMap::new())),
        }
    }

    fn http_client(&self) -> reqwest::Client {
        crate::config::build_runtime_proxy_client("channel.dingtalk")
    }

    fn is_user_allowed(&self, user_id: &str) -> bool {
        self.allowed_users.iter().any(|u| u == "*" || u == user_id)
    }
@@ -86,7 +88,7 @@ impl DingTalkChannel {
        });

        let resp = self
            .client
            .http_client()
            .post("https://api.dingtalk.com/v1.0/gateway/connections/open")
            .json(&body)
            .send()
@@ -128,7 +130,7 @@ impl Channel for DingTalkChannel {
            }
        });

        let resp = self.client.post(webhook_url).json(&body).send().await?;
        let resp = self
            .http_client()
            .post(webhook_url)
            .json(&body)
            .send()
            .await?;

        if !resp.status().is_success() {
            let status = resp.status();

@@ -13,7 +13,6 @@ pub struct DiscordChannel {
    allowed_users: Vec<String>,
    listen_to_bots: bool,
    mention_only: bool,
    client: reqwest::Client,
    typing_handle: Mutex<Option<tokio::task::JoinHandle<()>>>,
}

@@ -31,11 +30,14 @@ impl DiscordChannel {
            allowed_users,
            listen_to_bots,
            mention_only,
            client: reqwest::Client::new(),
            typing_handle: Mutex::new(None),
        }
    }

    fn http_client(&self) -> reqwest::Client {
        crate::config::build_runtime_proxy_client("channel.discord")
    }

    /// Check if a Discord user ID is in the allowlist.
    /// Empty list means deny everyone until explicitly configured.
    /// `"*"` means allow everyone.
@@ -198,7 +200,7 @@ impl Channel for DiscordChannel {
        let body = json!({ "content": chunk });

        let resp = self
            .client
            .http_client()
            .post(&url)
            .header("Authorization", format!("Bot {}", self.bot_token))
            .json(&body)
@@ -229,7 +231,7 @@ impl Channel for DiscordChannel {

        // Get Gateway URL
        let gw_resp: serde_json::Value = self
            .client
            .http_client()
            .get("https://discord.com/api/v10/gateway/bot")
            .header("Authorization", format!("Bot {}", self.bot_token))
            .send()
@@ -424,7 +426,7 @@ impl Channel for DiscordChannel {
    }

    async fn health_check(&self) -> bool {
        self.client
        self.http_client()
            .get("https://discord.com/api/v10/users/@me")
            .header("Authorization", format!("Bot {}", self.bot_token))
            .send()
@@ -436,7 +438,7 @@ impl Channel for DiscordChannel {
    async fn start_typing(&self, recipient: &str) -> anyhow::Result<()> {
        self.stop_typing(recipient).await?;

        let client = self.client.clone();
        let client = self.http_client();
        let token = self.bot_token.clone();
        let channel_id = recipient.to_string();

@@ -142,7 +142,6 @@ pub struct LarkChannel {
    use_feishu: bool,
    /// How to receive events: WebSocket long-connection or HTTP webhook.
    receive_mode: crate::config::schema::LarkReceiveMode,
    client: reqwest::Client,
    /// Cached tenant access token
    tenant_token: Arc<RwLock<Option<String>>>,
    /// Dedup set: WS message_ids seen in last ~30 min to prevent double-dispatch
@@ -165,7 +164,6 @@ impl LarkChannel {
            allowed_users,
            use_feishu: true,
            receive_mode: crate::config::schema::LarkReceiveMode::default(),
            client: reqwest::Client::new(),
            tenant_token: Arc::new(RwLock::new(None)),
            ws_seen_ids: Arc::new(RwLock::new(HashMap::new())),
        }
@@ -185,6 +183,10 @@ impl LarkChannel {
        ch
    }

    fn http_client(&self) -> reqwest::Client {
        crate::config::build_runtime_proxy_client("channel.lark")
    }

    fn api_base(&self) -> &'static str {
        if self.use_feishu {
            FEISHU_BASE_URL
@@ -212,7 +214,7 @@ impl LarkChannel {
    /// POST /callback/ws/endpoint → (wss_url, client_config)
    async fn get_ws_endpoint(&self) -> anyhow::Result<(String, WsClientConfig)> {
        let resp = self
            .client
            .http_client()
            .post(format!("{}/callback/ws/endpoint", self.ws_base()))
            .header("locale", if self.use_feishu { "zh" } else { "en" })
            .json(&serde_json::json!({
@@ -488,7 +490,7 @@ impl LarkChannel {
            "app_secret": self.app_secret,
        });

        let resp = self.client.post(&url).json(&body).send().await?;
        let resp = self.http_client().post(&url).json(&body).send().await?;
        let data: serde_json::Value = resp.json().await?;

        let code = data.get("code").and_then(|c| c.as_i64()).unwrap_or(-1);
@@ -642,7 +644,7 @@ impl Channel for LarkChannel {
        });

        let resp = self
            .client
            .http_client()
            .post(&url)
            .header("Authorization", format!("Bearer {token}"))
            .header("Content-Type", "application/json; charset=utf-8")
@@ -655,7 +657,7 @@ impl Channel for LarkChannel {
        self.invalidate_token().await;
        let new_token = self.get_tenant_access_token().await?;
        let retry_resp = self
            .client
            .http_client()
            .post(&url)
            .header("Authorization", format!("Bearer {new_token}"))
            .header("Content-Type", "application/json; charset=utf-8")

@@ -12,7 +12,6 @@ pub struct MatrixChannel {
    access_token: String,
    room_id: String,
    allowed_users: Vec<String>,
    client: Client,
}

#[derive(Debug, Deserialize)]
@@ -79,10 +78,13 @@ impl MatrixChannel {
            access_token,
            room_id,
            allowed_users,
            client: Client::new(),
        }
    }

    fn http_client(&self) -> Client {
        crate::config::build_runtime_proxy_client("channel.matrix")
    }

    fn is_user_allowed(&self, sender: &str) -> bool {
        if self.allowed_users.iter().any(|u| u == "*") {
            return true;
@@ -95,7 +97,7 @@ impl MatrixChannel {
    async fn get_my_user_id(&self) -> anyhow::Result<String> {
        let url = format!("{}/_matrix/client/v3/account/whoami", self.homeserver);
        let resp = self
            .client
            .http_client()
            .get(&url)
            .header("Authorization", format!("Bearer {}", self.access_token))
            .send()
@@ -130,7 +132,7 @@ impl Channel for MatrixChannel {
        });

        let resp = self
            .client
            .http_client()
            .put(&url)
            .header("Authorization", format!("Bearer {}", self.access_token))
            .json(&body)
@@ -157,7 +159,7 @@ impl Channel for MatrixChannel {
        );

        let resp = self
            .client
            .http_client()
            .get(&url)
            .header("Authorization", format!("Bearer {}", self.access_token))
            .send()
@@ -179,7 +181,7 @@ impl Channel for MatrixChannel {
        );

        let resp = self
            .client
            .http_client()
            .get(&url)
            .header("Authorization", format!("Bearer {}", self.access_token))
            .send()
@@ -250,7 +252,7 @@ impl Channel for MatrixChannel {
    async fn health_check(&self) -> bool {
        let url = format!("{}/_matrix/client/v3/account/whoami", self.homeserver);
        let Ok(resp) = self
            .client
            .http_client()
            .get(&url)
            .header("Authorization", format!("Bearer {}", self.access_token))
            .send()

@@ -15,7 +15,6 @@ pub struct MattermostChannel {
    thread_replies: bool,
    /// When true, only respond to messages that @-mention the bot.
    mention_only: bool,
    client: reqwest::Client,
    /// Handle for the background typing-indicator loop (aborted on stop_typing).
    typing_handle: Mutex<Option<tokio::task::JoinHandle<()>>>,
}

@@ -38,11 +37,14 @@ impl MattermostChannel {
            allowed_users,
            thread_replies,
            mention_only,
            client: reqwest::Client::new(),
            typing_handle: Mutex::new(None),
        }
    }

    fn http_client(&self) -> reqwest::Client {
        crate::config::build_runtime_proxy_client("channel.mattermost")
    }

    /// Check if a user ID is in the allowlist.
    /// Empty list means deny everyone. "*" means allow everyone.
    fn is_user_allowed(&self, user_id: &str) -> bool {
@@ -53,7 +55,7 @@ impl MattermostChannel {
    /// and detect @-mentions by username.
    async fn get_bot_identity(&self) -> (String, String) {
        let resp: Option<serde_json::Value> = async {
            self.client
            self.http_client()
                .get(format!("{}/api/v4/users/me", self.base_url))
                .bearer_auth(&self.bot_token)
                .send()
@@ -109,7 +111,7 @@ impl Channel for MattermostChannel {
        }

        let resp = self
            .client
            .http_client()
            .post(format!("{}/api/v4/posts", self.base_url))
            .bearer_auth(&self.bot_token)
            .json(&body_map)
@@ -147,7 +149,7 @@ impl Channel for MattermostChannel {
        tokio::time::sleep(std::time::Duration::from_secs(3)).await;

        let resp = match self
            .client
            .http_client()
            .get(format!(
                "{}/api/v4/channels/{}/posts",
                self.base_url, channel_id
@@ -202,7 +204,7 @@ impl Channel for MattermostChannel {
    }

    async fn health_check(&self) -> bool {
        self.client
        self.http_client()
            .get(format!("{}/api/v4/users/me", self.base_url))
            .bearer_auth(&self.bot_token)
            .send()
@@ -215,7 +217,7 @@ impl Channel for MattermostChannel {
        // Cancel any existing typing loop before starting a new one.
        self.stop_typing(recipient).await?;

        let client = self.client.clone();
        let client = self.http_client();
        let token = self.bot_token.clone();
        let base_url = self.base_url.clone();

@@ -20,7 +20,6 @@ pub struct QQChannel {
    app_id: String,
    app_secret: String,
    allowed_users: Vec<String>,
    client: reqwest::Client,
    /// Cached access token + expiry timestamp.
    token_cache: Arc<RwLock<Option<(String, u64)>>>,
    /// Message deduplication set.
@@ -33,12 +32,15 @@ impl QQChannel {
            app_id,
            app_secret,
            allowed_users,
            client: reqwest::Client::new(),
            token_cache: Arc::new(RwLock::new(None)),
            dedup: Arc::new(RwLock::new(HashSet::new())),
        }
    }

    fn http_client(&self) -> reqwest::Client {
        crate::config::build_runtime_proxy_client("channel.qq")
    }

    fn is_user_allowed(&self, user_id: &str) -> bool {
        self.allowed_users.iter().any(|u| u == "*" || u == user_id)
    }
@@ -50,7 +52,12 @@ impl QQChannel {
            "clientSecret": self.app_secret,
        });

        let resp = self.client.post(QQ_AUTH_URL).json(&body).send().await?;
        let resp = self
            .http_client()
            .post(QQ_AUTH_URL)
            .json(&body)
            .send()
            .await?;

        if !resp.status().is_success() {
            let status = resp.status();
@@ -109,7 +116,7 @@ impl QQChannel {
    /// Get the WebSocket gateway URL.
    async fn get_gateway_url(&self, token: &str) -> anyhow::Result<String> {
        let resp = self
            .client
            .http_client()
            .get(format!("{QQ_API_BASE}/gateway"))
            .header("Authorization", format!("QQBot {token}"))
            .send()
@@ -190,7 +197,7 @@ impl Channel for QQChannel {
        };

        let resp = self
            .client
            .http_client()
            .post(&url)
            .header("Authorization", format!("QQBot {token}"))
            .json(&body)

@@ -28,7 +28,6 @@ pub struct SignalChannel {
    allowed_from: Vec<String>,
    ignore_attachments: bool,
    ignore_stories: bool,
    client: Client,
}

// ── signal-cli SSE event JSON shapes ────────────────────────────
@@ -81,10 +80,6 @@ impl SignalChannel {
        ignore_stories: bool,
    ) -> Self {
        let http_url = http_url.trim_end_matches('/').to_string();
        let client = Client::builder()
            .connect_timeout(Duration::from_secs(10))
            .build()
            .expect("Signal HTTP client should build");
        Self {
            http_url,
            account,
@@ -92,10 +87,15 @@ impl SignalChannel {
            allowed_from,
            ignore_attachments,
            ignore_stories,
            client,
        }
    }

    fn http_client(&self) -> Client {
        let builder = Client::builder().connect_timeout(Duration::from_secs(10));
        let builder = crate::config::apply_runtime_proxy_to_builder(builder, "channel.signal");
        builder.build().expect("Signal HTTP client should build")
    }

    /// Effective sender: prefer `sourceNumber` (E.164), fall back to `source`.
    fn sender(envelope: &Envelope) -> Option<String> {
        envelope
@@ -178,7 +178,7 @@ impl SignalChannel {
        });

        let resp = self
            .client
            .http_client()
            .post(&url)
            .timeout(Duration::from_secs(30))
            .header("Content-Type", "application/json")
@@ -298,7 +298,7 @@ impl Channel for SignalChannel {

        loop {
            let resp = self
                .client
                .http_client()
                .get(url.clone())
                .header("Accept", "text/event-stream")
                .send()
@@ -408,7 +408,7 @@ impl Channel for SignalChannel {
    async fn health_check(&self) -> bool {
        let url = format!("{}/api/v1/check", self.http_url);
        let Ok(resp) = self
            .client
            .http_client()
            .get(&url)
            .timeout(Duration::from_secs(10))
            .send()

@@ -6,7 +6,6 @@ pub struct SlackChannel {
    bot_token: String,
    channel_id: Option<String>,
    allowed_users: Vec<String>,
    client: reqwest::Client,
}

impl SlackChannel {
@@ -15,10 +14,13 @@ impl SlackChannel {
            bot_token,
            channel_id,
            allowed_users,
            client: reqwest::Client::new(),
        }
    }

    fn http_client(&self) -> reqwest::Client {
        crate::config::build_runtime_proxy_client("channel.slack")
    }

    /// Check if a Slack user ID is in the allowlist.
    /// Empty list means deny everyone until explicitly configured.
    /// `"*"` means allow everyone.
@@ -29,7 +31,7 @@ impl SlackChannel {
    /// Get the bot's own user ID so we can ignore our own messages
    async fn get_bot_user_id(&self) -> Option<String> {
        let resp: serde_json::Value = self
            .client
            .http_client()
            .get("https://slack.com/api/auth.test")
            .bearer_auth(&self.bot_token)
            .send()
@@ -58,7 +60,7 @@ impl Channel for SlackChannel {
        });

        let resp = self
            .client
            .http_client()
            .post("https://slack.com/api/chat.postMessage")
            .bearer_auth(&self.bot_token)
            .json(&body)
@@ -108,7 +110,7 @@ impl Channel for SlackChannel {
        }

        let resp = match self
            .client
            .http_client()
            .get("https://slack.com/api/conversations.history")
            .bearer_auth(&self.bot_token)
            .query(&params)
@@ -179,7 +181,7 @@ impl Channel for SlackChannel {
    }

    async fn health_check(&self) -> bool {
        self.client
        self.http_client()
            .get("https://slack.com/api/auth.test")
            .bearer_auth(&self.bot_token)
            .send()

@@ -357,6 +357,10 @@ impl TelegramChannel {
        }
    }

    fn http_client(&self) -> reqwest::Client {
        crate::config::build_runtime_proxy_client("channel.telegram")
    }

    fn normalize_identity(value: &str) -> String {
        value.trim().trim_start_matches('@').to_string()
    }
@@ -448,7 +452,7 @@ impl TelegramChannel {
    }

    async fn fetch_bot_username(&self) -> anyhow::Result<String> {
        let resp = self.client.get(self.api_url("getMe")).send().await?;
        let resp = self.http_client().get(self.api_url("getMe")).send().await?;

        if !resp.status().is_success() {
            anyhow::bail!("Failed to fetch bot info: {}", resp.status());
@@ -857,7 +861,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let markdown_resp = self
            .client
            .http_client()
            .post(self.api_url("sendMessage"))
            .json(&markdown_body)
            .send()
@@ -887,7 +891,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
            plain_body["message_thread_id"] = serde_json::Value::String(tid.to_string());
        }
        let plain_resp = self
            .client
            .http_client()
            .post(self.api_url("sendMessage"))
            .json(&plain_body)
            .send()
@@ -936,7 +940,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url(method))
            .json(&body)
            .send()
@@ -1029,7 +1033,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendDocument"))
            .multipart(form)
            .send()
@@ -1068,7 +1072,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendDocument"))
            .multipart(form)
            .send()
@@ -1112,7 +1116,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendPhoto"))
            .multipart(form)
            .send()
@@ -1151,7 +1155,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendPhoto"))
            .multipart(form)
            .send()
@@ -1195,7 +1199,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendVideo"))
            .multipart(form)
            .send()
@@ -1239,7 +1243,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendAudio"))
            .multipart(form)
            .send()
@@ -1283,7 +1287,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendVoice"))
            .multipart(form)
            .send()
@@ -1320,7 +1324,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendDocument"))
            .json(&body)
            .send()
@@ -1357,7 +1361,7 @@ Allowlist Telegram username (without '@') or numeric user ID.",
        }

        let resp = self
            .client
            .http_client()
            .post(self.api_url("sendPhoto"))
            .json(&body)
            .send()
@@ -1685,7 +1689,7 @@ impl Channel for TelegramChannel {
            "allowed_updates": ["message"]
        });

        let resp = match self.client.post(&url).json(&body).send().await {
        let resp = match self.http_client().post(&url).json(&body).send().await {
            Ok(r) => r,
            Err(e) => {
                tracing::warn!("Telegram poll error: {e}");
@@ -1750,7 +1754,7 @@ Ensure only one `zeroclaw` process is using this bot token."
            "action": "typing"
        });
        let _ = self
            .client
            .http_client()
            .post(self.api_url("sendChatAction"))
            .json(&typing_body)
            .send()
@@ -1769,7 +1773,7 @@ Ensure only one `zeroclaw` process is using this bot token."

        match tokio::time::timeout(
            timeout_duration,
            self.client.get(self.api_url("getMe")).send(),
            self.http_client().get(self.api_url("getMe")).send(),
        )
        .await
        {
@@ -1788,7 +1792,7 @@ Ensure only one `zeroclaw` process is using this bot token."
    async fn start_typing(&self, recipient: &str) -> anyhow::Result<()> {
        self.stop_typing(recipient).await?;

        let client = self.client.clone();
        let client = self.http_client();
        let url = self.api_url("sendChatAction");
        let chat_id = recipient.to_string();

@@ -13,7 +13,6 @@ pub struct WhatsAppChannel {
    endpoint_id: String,
    verify_token: String,
    allowed_numbers: Vec<String>,
-    client: reqwest::Client,
}

impl WhatsAppChannel {

@@ -28,10 +27,13 @@ impl WhatsAppChannel {
            endpoint_id,
            verify_token,
            allowed_numbers,
-            client: reqwest::Client::new(),
        }
    }

+    fn http_client(&self) -> reqwest::Client {
+        crate::config::build_runtime_proxy_client("channel.whatsapp")
+    }

    /// Check if a phone number is allowed (E.164 format: +1234567890)
    fn is_number_allowed(&self, phone: &str) -> bool {
        self.allowed_numbers.iter().any(|n| n == "*" || n == phone)

@@ -164,7 +166,7 @@ impl Channel for WhatsAppChannel {
        });

        let resp = self
-            .client
+            .http_client()
            .post(&url)
            .bearer_auth(&self.access_token)
            .header("Content-Type", "application/json")

@@ -201,7 +203,7 @@ impl Channel for WhatsAppChannel {
        // Check if we can reach the WhatsApp API
        let url = format!("https://graph.facebook.com/v18.0/{}", self.endpoint_id);

-        self.client
+        self.http_client()
            .get(&url)
            .bearer_auth(&self.access_token)
            .send()

@@ -2,16 +2,18 @@ pub mod schema;

#[allow(unused_imports)]
pub use schema::{
+    apply_runtime_proxy_to_builder, build_runtime_proxy_client,
+    build_runtime_proxy_client_with_timeouts, runtime_proxy_config, set_runtime_proxy_config,
    AgentConfig, AuditConfig, AutonomyConfig, BrowserComputerUseConfig, BrowserConfig,
    ChannelsConfig, ClassificationRule, ComposioConfig, Config, CostConfig, CronConfig,
    DelegateAgentConfig, DiscordConfig, DockerRuntimeConfig, GatewayConfig, HardwareConfig,
    HardwareTransport, HeartbeatConfig, HttpRequestConfig, IMessageConfig, IdentityConfig,
    LarkConfig, MatrixConfig, MemoryConfig, ModelRouteConfig, ObservabilityConfig,
-    PeripheralBoardConfig, PeripheralsConfig, QueryClassificationConfig, ReliabilityConfig,
-    ResourceLimitsConfig, RuntimeConfig, SandboxBackend, SandboxConfig, SchedulerConfig,
-    SecretsConfig, SecurityConfig, SlackConfig, StorageConfig, StorageProviderConfig,
-    StorageProviderSection, StreamMode, TelegramConfig, TunnelConfig, WebSearchConfig,
-    WebhookConfig,
+    PeripheralBoardConfig, PeripheralsConfig, ProxyConfig, ProxyScope, QueryClassificationConfig,
+    ReliabilityConfig, ResourceLimitsConfig, RuntimeConfig, SandboxBackend, SandboxConfig,
+    SchedulerConfig, SecretsConfig, SecurityConfig, SlackConfig, StorageConfig,
+    StorageProviderConfig, StorageProviderSection, StreamMode, TelegramConfig, TunnelConfig,
+    WebSearchConfig, WebhookConfig,
};

#[cfg(test)]

@@ -7,6 +7,41 @@ use std::collections::HashMap;
use std::fs::{self, File, OpenOptions};
use std::io::Write;
use std::path::{Path, PathBuf};
use std::sync::{OnceLock, RwLock};

const SUPPORTED_PROXY_SERVICE_KEYS: &[&str] = &[
    "provider.anthropic",
    "provider.compatible",
    "provider.copilot",
    "provider.gemini",
    "provider.glm",
    "provider.ollama",
    "provider.openai",
    "provider.openrouter",
    "channel.dingtalk",
    "channel.discord",
    "channel.lark",
    "channel.matrix",
    "channel.mattermost",
    "channel.qq",
    "channel.signal",
    "channel.slack",
    "channel.telegram",
    "channel.whatsapp",
    "tool.browser",
    "tool.composio",
    "tool.http_request",
    "tool.pushover",
    "memory.embeddings",
    "tunnel.custom",
];

const SUPPORTED_PROXY_SERVICE_SELECTORS: &[&str] =
    &["provider.*", "channel.*", "tool.*", "memory.*", "tunnel.*"];

static RUNTIME_PROXY_CONFIG: OnceLock<RwLock<ProxyConfig>> = OnceLock::new();
static RUNTIME_PROXY_CLIENT_CACHE: OnceLock<RwLock<HashMap<String, reqwest::Client>>> =
    OnceLock::new();

// ── Top-level config ──────────────────────────────────────────────

@@ -87,6 +122,9 @@ pub struct Config {
    #[serde(default)]
    pub web_search: WebSearchConfig,

+    #[serde(default)]
+    pub proxy: ProxyConfig,
+
    #[serde(default)]
    pub identity: IdentityConfig,

@@ -772,6 +810,465 @@ impl Default for WebSearchConfig {
    }
}

// ── Proxy ───────────────────────────────────────────────────────

#[derive(Debug, Clone, Copy, Serialize, Deserialize, Default, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum ProxyScope {
    Environment,
    #[default]
    Zeroclaw,
    Services,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProxyConfig {
    /// Enable proxy support for selected scope.
    #[serde(default)]
    pub enabled: bool,
    /// Proxy URL for HTTP requests (supports http, https, socks5, socks5h).
    #[serde(default)]
    pub http_proxy: Option<String>,
    /// Proxy URL for HTTPS requests (supports http, https, socks5, socks5h).
    #[serde(default)]
    pub https_proxy: Option<String>,
    /// Fallback proxy URL for all schemes.
    #[serde(default)]
    pub all_proxy: Option<String>,
    /// No-proxy bypass list. Same format as NO_PROXY.
    #[serde(default)]
    pub no_proxy: Vec<String>,
    /// Proxy application scope.
    #[serde(default)]
    pub scope: ProxyScope,
    /// Service selectors used when scope = "services".
    #[serde(default)]
    pub services: Vec<String>,
}

impl Default for ProxyConfig {
    fn default() -> Self {
        Self {
            enabled: false,
            http_proxy: None,
            https_proxy: None,
            all_proxy: None,
            no_proxy: Vec::new(),
            scope: ProxyScope::Zeroclaw,
            services: Vec::new(),
        }
    }
}

impl ProxyConfig {
    pub fn supported_service_keys() -> &'static [&'static str] {
        SUPPORTED_PROXY_SERVICE_KEYS
    }

    pub fn supported_service_selectors() -> &'static [&'static str] {
        SUPPORTED_PROXY_SERVICE_SELECTORS
    }

    pub fn has_any_proxy_url(&self) -> bool {
        normalize_proxy_url_option(self.http_proxy.as_deref()).is_some()
            || normalize_proxy_url_option(self.https_proxy.as_deref()).is_some()
            || normalize_proxy_url_option(self.all_proxy.as_deref()).is_some()
    }

    pub fn normalized_services(&self) -> Vec<String> {
        normalize_service_list(self.services.clone())
    }

    pub fn normalized_no_proxy(&self) -> Vec<String> {
        normalize_no_proxy_list(self.no_proxy.clone())
    }

    pub fn validate(&self) -> Result<()> {
        for (field, value) in [
            ("http_proxy", self.http_proxy.as_deref()),
            ("https_proxy", self.https_proxy.as_deref()),
            ("all_proxy", self.all_proxy.as_deref()),
        ] {
            if let Some(url) = normalize_proxy_url_option(value) {
                validate_proxy_url(field, &url)?;
            }
        }

        for selector in self.normalized_services() {
            if !is_supported_proxy_service_selector(&selector) {
                anyhow::bail!(
                    "Unsupported proxy service selector '{selector}'. Use tool `proxy_config` action `list_services` for valid values"
                );
            }
        }

        if self.enabled && !self.has_any_proxy_url() {
            anyhow::bail!(
                "Proxy is enabled but no proxy URL is configured. Set at least one of http_proxy, https_proxy, or all_proxy"
            );
        }

        if self.enabled
            && self.scope == ProxyScope::Services
            && self.normalized_services().is_empty()
        {
            anyhow::bail!(
                "proxy.scope='services' requires a non-empty proxy.services list when proxy is enabled"
            );
        }

        Ok(())
    }

    pub fn should_apply_to_service(&self, service_key: &str) -> bool {
        if !self.enabled {
            return false;
        }

        match self.scope {
            ProxyScope::Environment => false,
            ProxyScope::Zeroclaw => true,
            ProxyScope::Services => {
                let service_key = service_key.trim().to_ascii_lowercase();
                if service_key.is_empty() {
                    return false;
                }

                self.normalized_services()
                    .iter()
                    .any(|selector| service_selector_matches(selector, &service_key))
            }
        }
    }

    pub fn apply_to_reqwest_builder(
        &self,
        mut builder: reqwest::ClientBuilder,
        service_key: &str,
    ) -> reqwest::ClientBuilder {
        if !self.should_apply_to_service(service_key) {
            return builder;
        }

        let no_proxy = self.no_proxy_value();

        if let Some(url) = normalize_proxy_url_option(self.all_proxy.as_deref()) {
            match reqwest::Proxy::all(&url) {
                Ok(proxy) => {
                    builder = builder.proxy(apply_no_proxy(proxy, no_proxy.clone()));
                }
                Err(error) => {
                    tracing::warn!(
                        proxy_url = %url,
                        service_key,
                        "Ignoring invalid all_proxy URL: {error}"
                    );
                }
            }
        }

        if let Some(url) = normalize_proxy_url_option(self.http_proxy.as_deref()) {
            match reqwest::Proxy::http(&url) {
                Ok(proxy) => {
                    builder = builder.proxy(apply_no_proxy(proxy, no_proxy.clone()));
                }
                Err(error) => {
                    tracing::warn!(
                        proxy_url = %url,
                        service_key,
                        "Ignoring invalid http_proxy URL: {error}"
                    );
                }
            }
        }

        if let Some(url) = normalize_proxy_url_option(self.https_proxy.as_deref()) {
            match reqwest::Proxy::https(&url) {
                Ok(proxy) => {
                    builder = builder.proxy(apply_no_proxy(proxy, no_proxy));
                }
                Err(error) => {
                    tracing::warn!(
                        proxy_url = %url,
                        service_key,
                        "Ignoring invalid https_proxy URL: {error}"
                    );
                }
            }
        }

        builder
    }

    pub fn apply_to_process_env(&self) {
        set_proxy_env_pair("HTTP_PROXY", self.http_proxy.as_deref());
        set_proxy_env_pair("HTTPS_PROXY", self.https_proxy.as_deref());
        set_proxy_env_pair("ALL_PROXY", self.all_proxy.as_deref());

        let no_proxy_joined = {
            let list = self.normalized_no_proxy();
            (!list.is_empty()).then(|| list.join(","))
        };
        set_proxy_env_pair("NO_PROXY", no_proxy_joined.as_deref());
    }

    pub fn clear_process_env() {
        clear_proxy_env_pair("HTTP_PROXY");
        clear_proxy_env_pair("HTTPS_PROXY");
        clear_proxy_env_pair("ALL_PROXY");
        clear_proxy_env_pair("NO_PROXY");
    }

    fn no_proxy_value(&self) -> Option<reqwest::NoProxy> {
        let joined = {
            let list = self.normalized_no_proxy();
            (!list.is_empty()).then(|| list.join(","))
        };
        joined.as_deref().and_then(reqwest::NoProxy::from_string)
    }
}

fn apply_no_proxy(proxy: reqwest::Proxy, no_proxy: Option<reqwest::NoProxy>) -> reqwest::Proxy {
    proxy.no_proxy(no_proxy)
}

fn normalize_proxy_url_option(raw: Option<&str>) -> Option<String> {
    let value = raw?.trim();
    (!value.is_empty()).then(|| value.to_string())
}

fn normalize_no_proxy_list(values: Vec<String>) -> Vec<String> {
    normalize_comma_values(values)
}

fn normalize_service_list(values: Vec<String>) -> Vec<String> {
    let mut normalized = normalize_comma_values(values)
        .into_iter()
        .map(|value| value.to_ascii_lowercase())
        .collect::<Vec<_>>();
    normalized.sort_unstable();
    normalized.dedup();
    normalized
}

fn normalize_comma_values(values: Vec<String>) -> Vec<String> {
    let mut output = Vec::new();
    for value in values {
        for part in value.split(',') {
            let normalized = part.trim();
            if normalized.is_empty() {
                continue;
            }
            output.push(normalized.to_string());
        }
    }
    output.sort_unstable();
    output.dedup();
    output
}
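The normalization above can be restated compactly with iterators (a sketch under the same rules: entries may themselves contain commas, parts are trimmed, empties dropped, result sorted and deduplicated):

```rust
// Iterator-style sketch of normalize_comma_values: split every entry on
// commas, trim, drop blanks, then sort and dedup for a stable result.
fn normalize(values: &[&str]) -> Vec<String> {
    let mut out: Vec<String> = values
        .iter()
        .flat_map(|v| v.split(','))
        .map(str::trim)
        .filter(|p| !p.is_empty())
        .map(str::to_string)
        .collect();
    out.sort_unstable();
    out.dedup();
    out
}
```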

fn is_supported_proxy_service_selector(selector: &str) -> bool {
    if SUPPORTED_PROXY_SERVICE_KEYS
        .iter()
        .any(|known| known.eq_ignore_ascii_case(selector))
    {
        return true;
    }

    SUPPORTED_PROXY_SERVICE_SELECTORS
        .iter()
        .any(|known| known.eq_ignore_ascii_case(selector))
}

fn service_selector_matches(selector: &str, service_key: &str) -> bool {
    if selector == service_key {
        return true;
    }

    if let Some(prefix) = selector.strip_suffix(".*") {
        return service_key.starts_with(prefix)
            && service_key
                .strip_prefix(prefix)
                .is_some_and(|suffix| suffix.starts_with('.'));
    }

    false
}
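The selector rule above can be restated as a standalone function (a sketch with a hypothetical `matches` name): a selector either names a service key exactly, or ends in `.*` and matches any key that has at least one extra dotted segment after the prefix — so `provider.*` matches `provider.openai` but not `providers.openai` or bare `provider`.

```rust
// Standalone restatement of service_selector_matches.
fn matches(selector: &str, key: &str) -> bool {
    if selector == key {
        return true;
    }
    if let Some(prefix) = selector.strip_suffix(".*") {
        // The remainder after the prefix must begin with '.', which rules
        // out both prefix-only keys and unrelated keys sharing a prefix.
        return key
            .strip_prefix(prefix)
            .is_some_and(|rest| rest.starts_with('.'));
    }
    false
}
```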

fn validate_proxy_url(field: &str, url: &str) -> Result<()> {
    let parsed = reqwest::Url::parse(url)
        .with_context(|| format!("Invalid {field} URL: '{url}' is not a valid URL"))?;

    match parsed.scheme() {
        "http" | "https" | "socks5" | "socks5h" => {}
        scheme => {
            anyhow::bail!(
                "Invalid {field} URL scheme '{scheme}'. Allowed: http, https, socks5, socks5h"
            );
        }
    }

    if parsed.host_str().is_none() {
        anyhow::bail!("Invalid {field} URL: host is required");
    }

    Ok(())
}
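The scheme allowlist enforced above can be reduced to a one-liner for illustration. This is a deliberately simplified sketch: the real validator parses the whole URL with `reqwest::Url` and also requires a host, while this only inspects the text before `://`:

```rust
// Reduced sketch of the scheme allowlist: http, https, socks5, socks5h.
fn scheme_allowed(url: &str) -> bool {
    matches!(
        url.split("://").next(),
        Some("http" | "https" | "socks5" | "socks5h")
    )
}
```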

fn set_proxy_env_pair(key: &str, value: Option<&str>) {
    let lowercase_key = key.to_ascii_lowercase();
    if let Some(value) = value.and_then(|candidate| normalize_proxy_url_option(Some(candidate))) {
        std::env::set_var(key, &value);
        std::env::set_var(lowercase_key, value);
    } else {
        std::env::remove_var(key);
        std::env::remove_var(lowercase_key);
    }
}

fn clear_proxy_env_pair(key: &str) {
    std::env::remove_var(key);
    std::env::remove_var(key.to_ascii_lowercase());
}

fn runtime_proxy_state() -> &'static RwLock<ProxyConfig> {
    RUNTIME_PROXY_CONFIG.get_or_init(|| RwLock::new(ProxyConfig::default()))
}

fn runtime_proxy_client_cache() -> &'static RwLock<HashMap<String, reqwest::Client>> {
    RUNTIME_PROXY_CLIENT_CACHE.get_or_init(|| RwLock::new(HashMap::new()))
}

fn clear_runtime_proxy_client_cache() {
    match runtime_proxy_client_cache().write() {
        Ok(mut guard) => {
            guard.clear();
        }
        Err(poisoned) => {
            poisoned.into_inner().clear();
        }
    }
}

fn runtime_proxy_cache_key(
    service_key: &str,
    timeout_secs: Option<u64>,
    connect_timeout_secs: Option<u64>,
) -> String {
    format!(
        "{}|timeout={}|connect_timeout={}",
        service_key.trim().to_ascii_lowercase(),
        timeout_secs
            .map(|value| value.to_string())
            .unwrap_or_else(|| "none".to_string()),
        connect_timeout_secs
            .map(|value| value.to_string())
            .unwrap_or_else(|| "none".to_string())
    )
}
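The cache key format above is worth seeing concretely: the service key is trimmed and lowercased, and both timeout knobs are encoded (with `none` for the default profile) so a per-timeout client never collides with the default client for the same service. A standalone sketch:

```rust
// Sketch of runtime_proxy_cache_key's key layout.
fn cache_key(service: &str, timeout: Option<u64>, connect: Option<u64>) -> String {
    let show = |v: Option<u64>| v.map(|s| s.to_string()).unwrap_or_else(|| "none".to_string());
    format!(
        "{}|timeout={}|connect_timeout={}",
        service.trim().to_ascii_lowercase(),
        show(timeout),
        show(connect)
    )
}
```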

fn runtime_proxy_cached_client(cache_key: &str) -> Option<reqwest::Client> {
    match runtime_proxy_client_cache().read() {
        Ok(guard) => guard.get(cache_key).cloned(),
        Err(poisoned) => poisoned.into_inner().get(cache_key).cloned(),
    }
}

fn set_runtime_proxy_cached_client(cache_key: String, client: reqwest::Client) {
    match runtime_proxy_client_cache().write() {
        Ok(mut guard) => {
            guard.insert(cache_key, client);
        }
        Err(poisoned) => {
            poisoned.into_inner().insert(cache_key, client);
        }
    }
}

pub fn set_runtime_proxy_config(config: ProxyConfig) {
    match runtime_proxy_state().write() {
        Ok(mut guard) => {
            *guard = config;
        }
        Err(poisoned) => {
            *poisoned.into_inner() = config;
        }
    }

    clear_runtime_proxy_client_cache();
}

pub fn runtime_proxy_config() -> ProxyConfig {
    match runtime_proxy_state().read() {
        Ok(guard) => guard.clone(),
        Err(poisoned) => poisoned.into_inner().clone(),
    }
}

pub fn apply_runtime_proxy_to_builder(
    builder: reqwest::ClientBuilder,
    service_key: &str,
) -> reqwest::ClientBuilder {
    runtime_proxy_config().apply_to_reqwest_builder(builder, service_key)
}

pub fn build_runtime_proxy_client(service_key: &str) -> reqwest::Client {
    let cache_key = runtime_proxy_cache_key(service_key, None, None);
    if let Some(client) = runtime_proxy_cached_client(&cache_key) {
        return client;
    }

    let builder = apply_runtime_proxy_to_builder(reqwest::Client::builder(), service_key);
    let client = builder.build().unwrap_or_else(|error| {
        tracing::warn!(service_key, "Failed to build proxied client: {error}");
        reqwest::Client::new()
    });
    set_runtime_proxy_cached_client(cache_key, client.clone());
    client
}

pub fn build_runtime_proxy_client_with_timeouts(
    service_key: &str,
    timeout_secs: u64,
    connect_timeout_secs: u64,
) -> reqwest::Client {
    let cache_key =
        runtime_proxy_cache_key(service_key, Some(timeout_secs), Some(connect_timeout_secs));
    if let Some(client) = runtime_proxy_cached_client(&cache_key) {
        return client;
    }

    let builder = reqwest::Client::builder()
        .timeout(std::time::Duration::from_secs(timeout_secs))
        .connect_timeout(std::time::Duration::from_secs(connect_timeout_secs));
    let builder = apply_runtime_proxy_to_builder(builder, service_key);
    let client = builder.build().unwrap_or_else(|error| {
        tracing::warn!(
            service_key,
            "Failed to build proxied timeout client: {error}"
        );
        reqwest::Client::new()
    });
    set_runtime_proxy_cached_client(cache_key, client.clone());
    client
}

fn parse_proxy_scope(raw: &str) -> Option<ProxyScope> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "environment" | "env" => Some(ProxyScope::Environment),
        "zeroclaw" | "internal" | "core" => Some(ProxyScope::Zeroclaw),
        "services" | "service" => Some(ProxyScope::Services),
        _ => None,
    }
}

fn parse_proxy_enabled(raw: &str) -> Option<bool> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "1" | "true" | "yes" | "on" => Some(true),
        "0" | "false" | "no" | "off" => Some(false),
        _ => None,
    }
}

// ── Memory ───────────────────────────────────────────────────

#[derive(Debug, Clone, Serialize, Deserialize, Default)]

@@ -1922,6 +2419,7 @@ impl Default for Config {
            browser: BrowserConfig::default(),
            http_request: HttpRequestConfig::default(),
            web_search: WebSearchConfig::default(),
+            proxy: ProxyConfig::default(),
            identity: IdentityConfig::default(),
            cost: CostConfig::default(),
            peripherals: PeripheralsConfig::default(),

@@ -2368,6 +2866,74 @@ impl Config {
            }
        }
        }

        // Proxy enabled flag: ZEROCLAW_PROXY_ENABLED
        let explicit_proxy_enabled = std::env::var("ZEROCLAW_PROXY_ENABLED")
            .ok()
            .as_deref()
            .and_then(parse_proxy_enabled);
        if let Some(enabled) = explicit_proxy_enabled {
            self.proxy.enabled = enabled;
        }

        // Proxy URLs: ZEROCLAW_* wins, then generic *_PROXY vars.
        let mut proxy_url_overridden = false;
        if let Ok(proxy_url) =
            std::env::var("ZEROCLAW_HTTP_PROXY").or_else(|_| std::env::var("HTTP_PROXY"))
        {
            self.proxy.http_proxy = normalize_proxy_url_option(Some(&proxy_url));
            proxy_url_overridden = true;
        }
        if let Ok(proxy_url) =
            std::env::var("ZEROCLAW_HTTPS_PROXY").or_else(|_| std::env::var("HTTPS_PROXY"))
        {
            self.proxy.https_proxy = normalize_proxy_url_option(Some(&proxy_url));
            proxy_url_overridden = true;
        }
        if let Ok(proxy_url) =
            std::env::var("ZEROCLAW_ALL_PROXY").or_else(|_| std::env::var("ALL_PROXY"))
        {
            self.proxy.all_proxy = normalize_proxy_url_option(Some(&proxy_url));
            proxy_url_overridden = true;
        }
        if let Ok(no_proxy) =
            std::env::var("ZEROCLAW_NO_PROXY").or_else(|_| std::env::var("NO_PROXY"))
        {
            self.proxy.no_proxy = normalize_no_proxy_list(vec![no_proxy]);
        }

        if explicit_proxy_enabled.is_none()
            && proxy_url_overridden
            && self.proxy.has_any_proxy_url()
        {
            self.proxy.enabled = true;
        }

        // Proxy scope and service selectors.
        if let Ok(scope_raw) = std::env::var("ZEROCLAW_PROXY_SCOPE") {
            if let Some(scope) = parse_proxy_scope(&scope_raw) {
                self.proxy.scope = scope;
            } else {
                tracing::warn!(
                    scope = %scope_raw,
                    "Ignoring invalid ZEROCLAW_PROXY_SCOPE (valid: environment|zeroclaw|services)"
                );
            }
        }

        if let Ok(services_raw) = std::env::var("ZEROCLAW_PROXY_SERVICES") {
            self.proxy.services = normalize_service_list(vec![services_raw]);
        }

        if let Err(error) = self.proxy.validate() {
            tracing::warn!("Invalid proxy configuration ignored: {error}");
            self.proxy.enabled = false;
        }

        if self.proxy.enabled && self.proxy.scope == ProxyScope::Environment {
            self.proxy.apply_to_process_env();
        }

        set_runtime_proxy_config(self.proxy.clone());
    }
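The override precedence used above (`ZEROCLAW_*` beats the generic proxy variable, and a set-but-blank value behaves like unset) can be sketched as a tiny helper; `pick` is a hypothetical name, not part of the change:

```rust
// Sketch of the env-override precedence: prefer the namespaced variable,
// fall back to the generic one, and treat whitespace-only values as unset.
fn pick(specific: Option<&str>, generic: Option<&str>) -> Option<String> {
    specific
        .or(generic)
        .map(str::trim)
        .filter(|v| !v.is_empty())
        .map(str::to_string)
}
```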

    pub fn save(&self) -> Result<()> {

@@ -2682,6 +3248,7 @@ default_temperature = 0.7
            browser: BrowserConfig::default(),
            http_request: HttpRequestConfig::default(),
            web_search: WebSearchConfig::default(),
+            proxy: ProxyConfig::default(),
            agent: AgentConfig::default(),
            identity: IdentityConfig::default(),
            cost: CostConfig::default(),

@@ -2821,6 +3388,7 @@ tool_dispatcher = "xml"
            browser: BrowserConfig::default(),
            http_request: HttpRequestConfig::default(),
            web_search: WebSearchConfig::default(),
+            proxy: ProxyConfig::default(),
            agent: AgentConfig::default(),
            identity: IdentityConfig::default(),
            cost: CostConfig::default(),

@@ -3619,6 +4187,28 @@ default_temperature = 0.7
            .expect("env override test lock poisoned")
    }

    fn clear_proxy_env_test_vars() {
        for key in [
            "ZEROCLAW_PROXY_ENABLED",
            "ZEROCLAW_HTTP_PROXY",
            "ZEROCLAW_HTTPS_PROXY",
            "ZEROCLAW_ALL_PROXY",
            "ZEROCLAW_NO_PROXY",
            "ZEROCLAW_PROXY_SCOPE",
            "ZEROCLAW_PROXY_SERVICES",
            "HTTP_PROXY",
            "HTTPS_PROXY",
            "ALL_PROXY",
            "NO_PROXY",
            "http_proxy",
            "https_proxy",
            "all_proxy",
            "no_proxy",
        ] {
            std::env::remove_var(key);
        }
    }

    #[test]
    fn env_override_api_key() {
        let _env_guard = env_override_test_guard();

@@ -4108,6 +4698,128 @@ default_model = "legacy-model"
        std::env::remove_var("ZEROCLAW_STORAGE_CONNECT_TIMEOUT_SECS");
    }

    #[test]
    fn proxy_config_scope_services_requires_entries_when_enabled() {
        let proxy = ProxyConfig {
            enabled: true,
            http_proxy: Some("http://127.0.0.1:7890".into()),
            https_proxy: None,
            all_proxy: None,
            no_proxy: Vec::new(),
            scope: ProxyScope::Services,
            services: Vec::new(),
        };

        let error = proxy.validate().unwrap_err().to_string();
        assert!(error.contains("proxy.scope='services'"));
    }

    #[test]
    fn env_override_proxy_scope_services() {
        let _env_guard = env_override_test_guard();
        clear_proxy_env_test_vars();

        let mut config = Config::default();
        std::env::set_var("ZEROCLAW_PROXY_ENABLED", "true");
        std::env::set_var("ZEROCLAW_HTTP_PROXY", "http://127.0.0.1:7890");
        std::env::set_var(
            "ZEROCLAW_PROXY_SERVICES",
            "provider.openai, tool.http_request",
        );
        std::env::set_var("ZEROCLAW_PROXY_SCOPE", "services");

        config.apply_env_overrides();

        assert!(config.proxy.enabled);
        assert_eq!(config.proxy.scope, ProxyScope::Services);
        assert_eq!(
            config.proxy.http_proxy.as_deref(),
            Some("http://127.0.0.1:7890")
        );
        assert!(config.proxy.should_apply_to_service("provider.openai"));
        assert!(config.proxy.should_apply_to_service("tool.http_request"));
        assert!(!config.proxy.should_apply_to_service("provider.anthropic"));

        clear_proxy_env_test_vars();
    }

    #[test]
    fn env_override_proxy_scope_environment_applies_process_env() {
        let _env_guard = env_override_test_guard();
        clear_proxy_env_test_vars();

        let mut config = Config::default();
        std::env::set_var("ZEROCLAW_PROXY_ENABLED", "true");
        std::env::set_var("ZEROCLAW_PROXY_SCOPE", "environment");
        std::env::set_var("ZEROCLAW_HTTP_PROXY", "http://127.0.0.1:7890");
        std::env::set_var("ZEROCLAW_HTTPS_PROXY", "http://127.0.0.1:7891");
        std::env::set_var("ZEROCLAW_NO_PROXY", "localhost,127.0.0.1");

        config.apply_env_overrides();

        assert_eq!(config.proxy.scope, ProxyScope::Environment);
        assert_eq!(
            std::env::var("HTTP_PROXY").ok().as_deref(),
            Some("http://127.0.0.1:7890")
        );
        assert_eq!(
            std::env::var("HTTPS_PROXY").ok().as_deref(),
            Some("http://127.0.0.1:7891")
        );
        assert!(std::env::var("NO_PROXY")
            .ok()
            .is_some_and(|value| value.contains("localhost")));

        clear_proxy_env_test_vars();
    }

    fn runtime_proxy_cache_contains(cache_key: &str) -> bool {
        match runtime_proxy_client_cache().read() {
            Ok(guard) => guard.contains_key(cache_key),
            Err(poisoned) => poisoned.into_inner().contains_key(cache_key),
        }
    }

    #[test]
    fn runtime_proxy_client_cache_reuses_default_profile_key() {
        let service_key = format!(
            "provider.cache_test.{}",
            std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .expect("system clock should be after unix epoch")
                .as_nanos()
        );
        let cache_key = runtime_proxy_cache_key(&service_key, None, None);

        clear_runtime_proxy_client_cache();
        assert!(!runtime_proxy_cache_contains(&cache_key));

        let _ = build_runtime_proxy_client(&service_key);
        assert!(runtime_proxy_cache_contains(&cache_key));

        let _ = build_runtime_proxy_client(&service_key);
        assert!(runtime_proxy_cache_contains(&cache_key));
    }

    #[test]
    fn set_runtime_proxy_config_clears_runtime_proxy_client_cache() {
        let service_key = format!(
            "provider.cache_timeout_test.{}",
            std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .expect("system clock should be after unix epoch")
                .as_nanos()
        );
        let cache_key = runtime_proxy_cache_key(&service_key, Some(30), Some(5));

        clear_runtime_proxy_client_cache();
        let _ = build_runtime_proxy_client_with_timeouts(&service_key, 30, 5);
        assert!(runtime_proxy_cache_contains(&cache_key));

        set_runtime_proxy_config(ProxyConfig::default());
        assert!(!runtime_proxy_cache_contains(&cache_key));
    }

    #[test]
    fn gateway_config_default_values() {
        let g = GatewayConfig::default();

@@ -43,7 +43,6 @@ impl EmbeddingProvider for NoopEmbedding {
// ── OpenAI-compatible embedding provider ─────────────────────

pub struct OpenAiEmbedding {
-    client: reqwest::Client,
    base_url: String,
    api_key: String,
    model: String,

@@ -53,7 +52,6 @@ pub struct OpenAiEmbedding {
impl OpenAiEmbedding {
    pub fn new(base_url: &str, api_key: &str, model: &str, dims: usize) -> Self {
        Self {
-            client: reqwest::Client::new(),
            base_url: base_url.trim_end_matches('/').to_string(),
            api_key: api_key.to_string(),
            model: model.to_string(),

@@ -61,6 +59,10 @@ impl OpenAiEmbedding {
        }
    }

+    fn http_client(&self) -> reqwest::Client {
+        crate::config::build_runtime_proxy_client("memory.embeddings")
+    }

    fn has_explicit_api_path(&self) -> bool {
        let Ok(url) = reqwest::Url::parse(&self.base_url) else {
            return false;

@@ -112,7 +114,7 @@ impl EmbeddingProvider for OpenAiEmbedding {
        });

        let resp = self
-            .client
+            .http_client()
            .post(self.embeddings_url())
            .header("Authorization", format!("Bearer {}", self.api_key))
            .header("Content-Type", "application/json")

@@ -133,6 +133,7 @@ pub fn run_wizard() -> Result<Config> {
        browser: BrowserConfig::default(),
        http_request: crate::config::HttpRequestConfig::default(),
        web_search: crate::config::WebSearchConfig::default(),
+        proxy: crate::config::ProxyConfig::default(),
        identity: crate::config::IdentityConfig::default(),
        cost: crate::config::CostConfig::default(),
        peripherals: crate::config::PeripheralsConfig::default(),

@@ -356,6 +357,7 @@ pub fn run_quick_setup(
        browser: BrowserConfig::default(),
        http_request: crate::config::HttpRequestConfig::default(),
        web_search: crate::config::WebSearchConfig::default(),
+        proxy: crate::config::ProxyConfig::default(),
        identity: crate::config::IdentityConfig::default(),
        cost: crate::config::CostConfig::default(),
        peripherals: crate::config::PeripheralsConfig::default(),

@@ -10,7 +10,6 @@ use serde::{Deserialize, Serialize};
pub struct AnthropicProvider {
    credential: Option<String>,
    base_url: String,
-    client: Client,
}

#[derive(Debug, Serialize)]

@@ -161,11 +160,6 @@ impl AnthropicProvider {
                .filter(|k| !k.is_empty())
                .map(ToString::to_string),
            base_url,
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(120))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
        }
    }

@@ -404,6 +398,10 @@ impl AnthropicProvider {
            tool_calls,
        }
    }

+    fn http_client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("provider.anthropic", 120, 10)
+    }
}

#[async_trait]

@@ -433,7 +431,7 @@ impl Provider for AnthropicProvider {
        };

        let mut request = self
-            .client
+            .http_client()
            .post(format!("{}/v1/messages", self.base_url))
            .header("anthropic-version", "2023-06-01")
            .header("content-type", "application/json")

@@ -480,7 +478,7 @@ impl Provider for AnthropicProvider {
        };

        let req = self
-            .client
+            .http_client()
            .post(format!("{}/v1/messages", self.base_url))
            .header("anthropic-version", "2023-06-01")
            .header("content-type", "application/json")

@@ -502,7 +500,7 @@ impl Provider for AnthropicProvider {
    async fn warmup(&self) -> anyhow::Result<()> {
        if let Some(credential) = self.credential.as_ref() {
            let mut request = self
-                .client
+                .http_client()
                .post(format!("{}/v1/messages", self.base_url))
                .header("anthropic-version", "2023-06-01");
            request = self.apply_auth(request, credential);

@@ -594,7 +592,9 @@ mod tests {
        let provider = AnthropicProvider::new(None);
        let request = provider
            .apply_auth(
-                provider.client.get("https://api.anthropic.com/v1/models"),
+                provider
+                    .http_client()
+                    .get("https://api.anthropic.com/v1/models"),
                "sk-ant-oat01-test-token",
            )
            .build()

@@ -622,7 +622,9 @@ mod tests {
        let provider = AnthropicProvider::new(None);
        let request = provider
            .apply_auth(
-                provider.client.get("https://api.anthropic.com/v1/models"),
+                provider
+                    .http_client()
+                    .get("https://api.anthropic.com/v1/models"),
                "sk-ant-api-key",
            )
            .build()

@@ -22,7 +22,6 @@ pub struct OpenAiCompatibleProvider {
     /// When false, do not fall back to /v1/responses on chat completions 404.
     /// GLM/Zhipu does not support the responses API.
     supports_responses_fallback: bool,
-    client: Client,
 }

 /// How the provider expects the API key to be sent.
@@ -49,11 +48,6 @@ impl OpenAiCompatibleProvider {
             credential: credential.map(ToString::to_string),
             auth_header: auth_style,
             supports_responses_fallback: true,
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(120))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
         }
     }

@@ -71,14 +65,13 @@ impl OpenAiCompatibleProvider {
             credential: credential.map(ToString::to_string),
             auth_header: auth_style,
             supports_responses_fallback: false,
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(120))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
         }
     }

+    fn http_client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("provider.compatible", 120, 10)
+    }
+
     /// Build the full URL for chat completions, detecting if base_url already includes the path.
     /// This allows custom providers with non-standard endpoints (e.g., VolcEngine ARK uses
     /// `/api/coding/v3/chat/completions` instead of `/v1/chat/completions`).
@@ -513,7 +506,7 @@ impl OpenAiCompatibleProvider {
         let url = self.responses_url();

         let response = self
-            .apply_auth_header(self.client.post(&url).json(&request), credential)
+            .apply_auth_header(self.http_client().post(&url).json(&request), credential)
             .send()
             .await?;

@@ -578,7 +571,7 @@ impl Provider for OpenAiCompatibleProvider {
         let url = self.chat_completions_url();

         let response = self
-            .apply_auth_header(self.client.post(&url).json(&request), credential)
+            .apply_auth_header(self.http_client().post(&url).json(&request), credential)
             .send()
             .await?;

@@ -660,7 +653,7 @@ impl Provider for OpenAiCompatibleProvider {

         let url = self.chat_completions_url();
         let response = self
-            .apply_auth_header(self.client.post(&url).json(&request), credential)
+            .apply_auth_header(self.http_client().post(&url).json(&request), credential)
             .send()
             .await?;

@@ -760,7 +753,7 @@ impl Provider for OpenAiCompatibleProvider {

         let url = self.chat_completions_url();
         let response = self
-            .apply_auth_header(self.client.post(&url).json(&request), credential)
+            .apply_auth_header(self.http_client().post(&url).json(&request), credential)
             .send()
             .await?;

@@ -900,7 +893,7 @@ impl Provider for OpenAiCompatibleProvider {
         };

         let url = self.chat_completions_url();
-        let client = self.client.clone();
+        let client = self.http_client();
         let auth_header = self.auth_header.clone();

         // Use a channel to bridge the async HTTP response to the stream
@@ -967,7 +960,7 @@ impl Provider for OpenAiCompatibleProvider {
             // the goal is TLS handshake and HTTP/2 negotiation.
             let url = self.chat_completions_url();
             let _ = self
-                .apply_auth_header(self.client.get(&url), credential)
+                .apply_auth_header(self.http_client().get(&url), credential)
                 .send()
                 .await?;
         }
@@ -161,7 +161,6 @@ pub struct CopilotProvider {
    /// Mutex ensures only one caller refreshes tokens at a time,
    /// preventing duplicate device flow prompts or redundant API calls.
    refresh_lock: Arc<Mutex<Option<CachedApiKey>>>,
-    http: Client,
    token_dir: PathBuf,
 }

@@ -204,15 +203,14 @@ impl CopilotProvider {
                .filter(|token| !token.is_empty())
                .map(String::from),
            refresh_lock: Arc::new(Mutex::new(None)),
-            http: Client::builder()
-                .timeout(Duration::from_secs(120))
-                .connect_timeout(Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
            token_dir,
        }
    }

+    fn http_client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("provider.copilot", 120, 10)
+    }
+
    /// Required headers for Copilot API requests (editor identification).
    const COPILOT_HEADERS: [(&str, &str); 4] = [
        ("Editor-Version", "vscode/1.85.1"),
@@ -326,7 +324,7 @@ impl CopilotProvider {
        };

        let mut req = self
-            .http
+            .http_client()
            .post(&url)
            .header("Authorization", format!("Bearer {token}"))
            .json(&request);
@@ -438,7 +436,7 @@ impl CopilotProvider {
    /// Run GitHub OAuth device code flow.
    async fn device_code_login(&self) -> anyhow::Result<String> {
        let response: DeviceCodeResponse = self
-            .http
+            .http_client()
            .post(GITHUB_DEVICE_CODE_URL)
            .header("Accept", "application/json")
            .json(&serde_json::json!({
@@ -467,7 +465,7 @@ impl CopilotProvider {
            tokio::time::sleep(poll_interval).await;

            let token_response: AccessTokenResponse = self
-                .http
+                .http_client()
                .post(GITHUB_ACCESS_TOKEN_URL)
                .header("Accept", "application/json")
                .json(&serde_json::json!({
@@ -502,7 +500,7 @@ impl CopilotProvider {

    /// Exchange a GitHub access token for a Copilot API key.
    async fn exchange_for_api_key(&self, access_token: &str) -> anyhow::Result<ApiKeyInfo> {
-        let mut request = self.http.get(GITHUB_API_KEY_URL);
+        let mut request = self.http_client().get(GITHUB_API_KEY_URL);
        for (header, value) in &Self::COPILOT_HEADERS {
            request = request.header(*header, *value);
        }
@@ -13,7 +13,6 @@ use std::path::PathBuf;
 /// Gemini provider supporting multiple authentication methods.
 pub struct GeminiProvider {
     auth: Option<GeminiAuth>,
-    client: Client,
 }

 /// Resolved credential — the variant determines both the HTTP auth method
@@ -161,11 +160,6 @@ impl GeminiProvider {

         Self {
             auth: resolved_auth,
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(120))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
         }
     }

@@ -279,6 +273,10 @@ impl GeminiProvider {
         }
     }

+    fn http_client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("provider.gemini", 120, 10)
+    }
+
     fn build_generate_content_request(
         &self,
         auth: &GeminiAuth,
@@ -286,6 +284,7 @@ impl GeminiProvider {
         request: &GenerateContentRequest,
         model: &str,
     ) -> reqwest::RequestBuilder {
+        let req = self.http_client().post(url).json(request);
         match auth {
             GeminiAuth::OAuthToken(token) => {
                 // Internal API expects the model in the request body envelope
@@ -317,12 +316,12 @@ impl GeminiProvider {
                         .collect(),
                     }),
                 };
-                self.client
+                self.http_client()
                     .post(url)
                     .json(&internal_request)
                     .bearer_auth(token)
             }
-            _ => self.client.post(url).json(request),
+            _ => req,
         }
     }
 }
@@ -408,7 +407,7 @@ impl Provider for GeminiProvider {
             "https://generativelanguage.googleapis.com/v1beta/models".to_string()
         };

-        let mut request = self.client.get(&url);
+        let mut request = self.http_client().get(&url);
         if let GeminiAuth::OAuthToken(token) = auth {
             request = request.bearer_auth(token);
         }
@@ -470,17 +469,13 @@ mod tests {
     fn auth_source_explicit_key() {
         let provider = GeminiProvider {
             auth: Some(GeminiAuth::ExplicitKey("key".into())),
-            client: Client::new(),
         };
         assert_eq!(provider.auth_source(), "config");
     }

     #[test]
     fn auth_source_none_without_credentials() {
-        let provider = GeminiProvider {
-            auth: None,
-            client: Client::new(),
-        };
+        let provider = GeminiProvider { auth: None };
         assert_eq!(provider.auth_source(), "none");
     }

@@ -488,7 +483,6 @@ mod tests {
     fn auth_source_oauth() {
         let provider = GeminiProvider {
             auth: Some(GeminiAuth::OAuthToken("ya29.mock".into())),
-            client: Client::new(),
         };
         assert_eq!(provider.auth_source(), "Gemini CLI OAuth");
     }

@@ -534,7 +528,6 @@ mod tests {
     fn oauth_request_uses_bearer_auth_header() {
         let provider = GeminiProvider {
             auth: Some(GeminiAuth::OAuthToken("ya29.mock-token".into())),
-            client: Client::new(),
         };
         let auth = GeminiAuth::OAuthToken("ya29.mock-token".into());
         let url = GeminiProvider::build_generate_content_url("gemini-2.0-flash", &auth);

@@ -570,7 +563,6 @@ mod tests {
     fn api_key_request_does_not_set_bearer_header() {
         let provider = GeminiProvider {
             auth: Some(GeminiAuth::ExplicitKey("api-key-123".into())),
-            client: Client::new(),
         };
         let auth = GeminiAuth::ExplicitKey("api-key-123".into());
         let url = GeminiProvider::build_generate_content_url("gemini-2.0-flash", &auth);

@@ -689,10 +681,7 @@ mod tests {

     #[tokio::test]
     async fn warmup_without_key_is_noop() {
-        let provider = GeminiProvider {
-            auth: None,
-            client: Client::new(),
-        };
+        let provider = GeminiProvider { auth: None };
         let result = provider.warmup().await;
         assert!(result.is_ok());
     }
@@ -14,7 +14,6 @@ pub struct GlmProvider {
     api_key_id: String,
     api_key_secret: String,
     base_url: String,
-    client: Client,
     /// Cached JWT token + expiry timestamp (ms)
     token_cache: Mutex<Option<(String, u64)>>,
 }
@@ -90,11 +89,6 @@ impl GlmProvider {
             api_key_id: id,
             api_key_secret: secret,
             base_url: "https://api.z.ai/api/paas/v4".to_string(),
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(120))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
             token_cache: Mutex::new(None),
         }
     }
@@ -149,6 +143,10 @@ impl GlmProvider {

         Ok(token)
     }
+
+    fn http_client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("provider.glm", 120, 10)
+    }
 }

 #[async_trait]
@@ -185,7 +183,7 @@ impl Provider for GlmProvider {
         let url = format!("{}/chat/completions", self.base_url);

         let response = self
-            .client
+            .http_client()
             .post(&url)
             .header("Authorization", format!("Bearer {token}"))
             .json(&request)
@@ -6,7 +6,6 @@ use serde::{Deserialize, Serialize};
 pub struct OllamaProvider {
     base_url: String,
     api_key: Option<String>,
-    client: Client,
 }

 // ─── Request Structures ───────────────────────────────────────────────────────
@@ -76,11 +75,6 @@ impl OllamaProvider {
                 .trim_end_matches('/')
                 .to_string(),
             api_key,
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(300))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
         }
     }

@@ -91,6 +85,10 @@ impl OllamaProvider {
             .is_some_and(|host| matches!(host.as_str(), "localhost" | "127.0.0.1" | "::1"))
     }

+    fn http_client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("provider.ollama", 300, 10)
+    }
+
     fn resolve_request_details(&self, model: &str) -> anyhow::Result<(String, bool)> {
         let requests_cloud = model.ends_with(":cloud");
         let normalized_model = model.strip_suffix(":cloud").unwrap_or(model).to_string();
@@ -139,7 +137,7 @@ impl OllamaProvider {
             temperature
         );

-        let mut request_builder = self.client.post(&url).json(&request);
+        let mut request_builder = self.http_client().post(&url).json(&request);

         if should_auth {
             if let Some(key) = self.api_key.as_ref() {
@@ -10,7 +10,6 @@ use serde::{Deserialize, Serialize};
 pub struct OpenAiProvider {
     base_url: String,
     credential: Option<String>,
-    client: Client,
 }

 #[derive(Debug, Serialize)]
@@ -148,11 +147,6 @@ impl OpenAiProvider {
                 .map(|u| u.trim_end_matches('/').to_string())
                 .unwrap_or_else(|| "https://api.openai.com/v1".to_string()),
             credential: credential.map(ToString::to_string),
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(120))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
         }
     }

@@ -254,6 +248,10 @@ impl OpenAiProvider {

         ProviderChatResponse { text, tool_calls }
     }
+
+    fn http_client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("provider.openai", 120, 10)
+    }
 }

 #[async_trait]
@@ -290,7 +288,7 @@ impl Provider for OpenAiProvider {
         };

         let response = self
-            .client
+            .http_client()
             .post(format!("{}/chat/completions", self.base_url))
             .header("Authorization", format!("Bearer {credential}"))
             .json(&request)
@@ -331,7 +329,7 @@ impl Provider for OpenAiProvider {
         };

         let response = self
-            .client
+            .http_client()
             .post(format!("{}/chat/completions", self.base_url))
             .header("Authorization", format!("Bearer {credential}"))
             .json(&native_request)
@@ -358,7 +356,7 @@ impl Provider for OpenAiProvider {

     async fn warmup(&self) -> anyhow::Result<()> {
         if let Some(credential) = self.credential.as_ref() {
-            self.client
+            self.http_client()
                 .get(format!("{}/models", self.base_url))
                 .header("Authorization", format!("Bearer {credential}"))
                 .send()
@@ -9,7 +9,6 @@ use serde::{Deserialize, Serialize};

 pub struct OpenRouterProvider {
     credential: Option<String>,
-    client: Client,
 }

 #[derive(Debug, Serialize)]
@@ -113,11 +112,6 @@ impl OpenRouterProvider {
     pub fn new(credential: Option<&str>) -> Self {
         Self {
             credential: credential.map(ToString::to_string),
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(120))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
         }
     }

@@ -225,6 +219,10 @@ impl OpenRouterProvider {
             tool_calls,
         }
     }
+
+    fn http_client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("provider.openrouter", 120, 10)
+    }
 }

 #[async_trait]
@@ -233,7 +231,7 @@ impl Provider for OpenRouterProvider {
         // Hit a lightweight endpoint to establish TLS + HTTP/2 connection pool.
         // This prevents the first real chat request from timing out on cold start.
         if let Some(credential) = self.credential.as_ref() {
-            self.client
+            self.http_client()
                 .get("https://openrouter.ai/api/v1/auth/key")
                 .header("Authorization", format!("Bearer {credential}"))
                 .send()
@@ -274,7 +272,7 @@ impl Provider for OpenRouterProvider {
         };

         let response = self
-            .client
+            .http_client()
             .post("https://openrouter.ai/api/v1/chat/completions")
             .header("Authorization", format!("Bearer {credential}"))
             .header(
@@ -324,7 +322,7 @@ impl Provider for OpenRouterProvider {
         };

         let response = self
-            .client
+            .http_client()
             .post("https://openrouter.ai/api/v1/chat/completions")
             .header("Authorization", format!("Bearer {credential}"))
             .header(
@@ -372,7 +370,7 @@ impl Provider for OpenRouterProvider {
         };

         let response = self
-            .client
+            .http_client()
             .post("https://openrouter.ai/api/v1/chat/completions")
             .header("Authorization", format!("Bearer {credential}"))
             .header(
@@ -460,7 +458,7 @@ impl Provider for OpenRouterProvider {
         };

         let response = self
-            .client
+            .http_client()
             .post("https://openrouter.ai/api/v1/chat/completions")
             .header("Authorization", format!("Bearer {credential}"))
             .header(
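Every provider hunk above follows the same refactor: drop the cached `client: Client` field and build a scope-tagged client per call, so proxy changes take effect on the next request without restarting. A minimal standalone sketch of that pattern follows; `Client` and the builder function body here are hypothetical stand-ins (the real ones are `reqwest::Client` and `crate::config::build_runtime_proxy_client_with_timeouts` from this commit):

```rust
// Hypothetical stand-in for reqwest::Client so this sketch is self-contained.
#[derive(Debug, Clone, PartialEq)]
struct Client {
    scope_tag: String,
    timeout_secs: u64,
    connect_timeout_secs: u64,
}

// Stand-in for crate::config::build_runtime_proxy_client_with_timeouts; the
// real function consults the runtime ProxyConfig for the given scope tag.
fn build_runtime_proxy_client_with_timeouts(tag: &str, timeout: u64, connect: u64) -> Client {
    Client {
        scope_tag: tag.to_string(),
        timeout_secs: timeout,
        connect_timeout_secs: connect,
    }
}

struct OpenAiProvider;

impl OpenAiProvider {
    // Constructing per call means a proxy change made via the proxy_config
    // tool is picked up by the very next request; no cached client to flush.
    fn http_client(&self) -> Client {
        build_runtime_proxy_client_with_timeouts("provider.openai", 120, 10)
    }
}

fn main() {
    let client = OpenAiProvider.http_client();
    assert_eq!(client.scope_tag, "provider.openai");
    assert_eq!(client.timeout_secs, 120);
}
```

The trade-off is a small per-request construction cost in exchange for always honoring the current runtime proxy configuration.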
@@ -736,7 +736,7 @@ impl BrowserTool {
             }
         });

-        let client = reqwest::Client::new();
+        let client = crate::config::build_runtime_proxy_client("tool.browser");
         let mut request = client
             .post(endpoint)
             .timeout(Duration::from_millis(self.computer_use.timeout_ms))
@@ -24,7 +24,6 @@ pub struct ComposioTool {
     api_key: String,
     default_entity_id: String,
     security: Arc<SecurityPolicy>,
-    client: Client,
 }

 impl ComposioTool {
@@ -37,14 +36,13 @@ impl ComposioTool {
             api_key: api_key.to_string(),
             default_entity_id: normalize_entity_id(default_entity_id.unwrap_or("default")),
             security,
-            client: Client::builder()
-                .timeout(std::time::Duration::from_secs(60))
-                .connect_timeout(std::time::Duration::from_secs(10))
-                .build()
-                .unwrap_or_else(|_| Client::new()),
         }
     }

+    fn client(&self) -> Client {
+        crate::config::build_runtime_proxy_client_with_timeouts("tool.composio", 60, 10)
+    }
+
     /// List available Composio apps/actions for the authenticated user.
     ///
     /// Uses v3 endpoint first and falls back to v2 for compatibility.
@@ -68,7 +66,7 @@ impl ComposioTool {

     async fn list_actions_v3(&self, app_name: Option<&str>) -> anyhow::Result<Vec<ComposioAction>> {
         let url = format!("{COMPOSIO_API_BASE_V3}/tools");
-        let mut req = self.client.get(&url).header("x-api-key", &self.api_key);
+        let mut req = self.client().get(&url).header("x-api-key", &self.api_key);

         req = req.query(&[("limit", "200")]);
         if let Some(app) = app_name.map(str::trim).filter(|app| !app.is_empty()) {
@@ -95,7 +93,7 @@ impl ComposioTool {
         }

         let resp = self
-            .client
+            .client()
             .get(&url)
             .header("x-api-key", &self.api_key)
             .send()
@@ -180,7 +178,7 @@ impl ComposioTool {
         );

         let resp = self
-            .client
+            .client()
             .post(&url)
             .header("x-api-key", &self.api_key)
             .json(&body)
@@ -216,7 +214,7 @@ impl ComposioTool {
         }

         let resp = self
-            .client
+            .client()
             .post(&url)
             .header("x-api-key", &self.api_key)
             .json(&body)
@@ -288,7 +286,7 @@ impl ComposioTool {
         });

         let resp = self
-            .client
+            .client()
             .post(&url)
             .header("x-api-key", &self.api_key)
             .json(&body)
@@ -321,7 +319,7 @@ impl ComposioTool {
         });

         let resp = self
-            .client
+            .client()
             .post(&url)
             .header("x-api-key", &self.api_key)
             .json(&body)
@@ -345,7 +343,7 @@ impl ComposioTool {
         let url = format!("{COMPOSIO_API_BASE_V3}/auth_configs");

         let resp = self
-            .client
+            .client()
             .get(&url)
             .header("x-api-key", &self.api_key)
             .query(&[
@@ -114,10 +114,12 @@ impl HttpRequestTool {
         headers: Vec<(String, String)>,
         body: Option<&str>,
     ) -> anyhow::Result<reqwest::Response> {
-        let client = reqwest::Client::builder()
+        let builder = reqwest::Client::builder()
             .timeout(Duration::from_secs(self.timeout_secs))
-            .redirect(reqwest::redirect::Policy::none())
-            .build()?;
+            .connect_timeout(Duration::from_secs(10))
+            .redirect(reqwest::redirect::Policy::none());
+        let builder = crate::config::apply_runtime_proxy_to_builder(builder, "tool.http_request");
+        let client = builder.build()?;

         let mut request = client.request(method, url);
@@ -19,6 +19,7 @@ pub mod image_info;
 pub mod memory_forget;
 pub mod memory_recall;
 pub mod memory_store;
+pub mod proxy_config;
 pub mod pushover;
 pub mod schedule;
 pub mod schema;
@@ -48,6 +49,7 @@ pub use image_info::ImageInfoTool;
 pub use memory_forget::MemoryForgetTool;
 pub use memory_recall::MemoryRecallTool;
 pub use memory_store::MemoryStoreTool;
+pub use proxy_config::ProxyConfigTool;
 pub use pushover::PushoverTool;
 pub use schedule::ScheduleTool;
 #[allow(unused_imports)]
@@ -144,6 +146,7 @@ pub fn all_tools_with_runtime(
         Box::new(MemoryRecallTool::new(memory.clone())),
         Box::new(MemoryForgetTool::new(memory, security.clone())),
         Box::new(ScheduleTool::new(security.clone(), root_config.clone())),
+        Box::new(ProxyConfigTool::new(config.clone(), security.clone())),
         Box::new(GitOperationsTool::new(
             security.clone(),
             workspace_dir.to_path_buf(),
@@ -292,6 +295,7 @@ mod tests {
         assert!(!names.contains(&"browser_open"));
         assert!(names.contains(&"schedule"));
         assert!(names.contains(&"pushover"));
+        assert!(names.contains(&"proxy_config"));
     }

     #[test]
@@ -330,6 +334,7 @@ mod tests {
         let names: Vec<&str> = tools.iter().map(|t| t.name()).collect();
         assert!(names.contains(&"browser_open"));
         assert!(names.contains(&"pushover"));
+        assert!(names.contains(&"proxy_config"));
     }

     #[test]
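With the tool registered, an agent can drive `src/tools/proxy_config.rs` through plain JSON arguments. A hypothetical `set` invocation (argument names mirror the fields `handle_set` reads in this commit; the surrounding tool-call envelope is an assumption):

```json
{
  "action": "set",
  "enabled": true,
  "scope": "services",
  "https_proxy": "http://127.0.0.1:8080",
  "no_proxy": "localhost,127.0.0.1",
  "services": ["provider.openai", "tool.http_request", "channel.telegram"]
}
```

Note that `no_proxy` and `services` accept either a comma-separated string or a string array, and setting any proxy URL without an explicit `enabled` flag turns the proxy on.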
492
src/tools/proxy_config.rs
Normal file
492
src/tools/proxy_config.rs
Normal file
|
|
@ -0,0 +1,492 @@
|
|||
use super::traits::{Tool, ToolResult};
|
||||
use crate::config::{
|
||||
runtime_proxy_config, set_runtime_proxy_config, Config, ProxyConfig, ProxyScope,
|
||||
};
|
||||
use crate::security::SecurityPolicy;
|
||||
use async_trait::async_trait;
|
||||
use serde_json::{json, Value};
|
||||
use std::fs;
|
||||
use std::sync::Arc;
|
||||
|
||||
pub struct ProxyConfigTool {
|
||||
config: Arc<Config>,
|
||||
security: Arc<SecurityPolicy>,
|
||||
}
|
||||
|
||||
impl ProxyConfigTool {
|
||||
pub fn new(config: Arc<Config>, security: Arc<SecurityPolicy>) -> Self {
|
||||
Self { config, security }
|
||||
}
|
||||
|
||||
fn load_config_without_env(&self) -> anyhow::Result<Config> {
|
||||
let contents = fs::read_to_string(&self.config.config_path).map_err(|error| {
|
||||
anyhow::anyhow!(
|
||||
"Failed to read config file {}: {error}",
|
||||
self.config.config_path.display()
|
||||
)
|
||||
})?;
|
||||
|
||||
let mut parsed: Config = toml::from_str(&contents).map_err(|error| {
|
||||
anyhow::anyhow!(
|
||||
"Failed to parse config file {}: {error}",
|
||||
self.config.config_path.display()
|
||||
)
|
||||
})?;
|
||||
parsed.config_path = self.config.config_path.clone();
|
||||
parsed.workspace_dir = self.config.workspace_dir.clone();
|
||||
Ok(parsed)
|
||||
}
|
||||
|
||||
fn require_write_access(&self) -> Option<ToolResult> {
|
||||
if !self.security.can_act() {
|
||||
return Some(ToolResult {
|
||||
success: false,
|
||||
output: String::new(),
|
||||
error: Some("Action blocked: autonomy is read-only".into()),
|
||||
});
|
||||
}
|
||||
|
||||
if !self.security.record_action() {
|
||||
return Some(ToolResult {
|
||||
success: false,
|
||||
output: String::new(),
|
||||
error: Some("Action blocked: rate limit exceeded".into()),
|
||||
});
|
||||
}
|
||||
|
||||
None
|
||||
}
|
||||
|
||||
fn parse_scope(raw: &str) -> Option<ProxyScope> {
|
||||
match raw.trim().to_ascii_lowercase().as_str() {
|
||||
"environment" | "env" => Some(ProxyScope::Environment),
|
||||
"zeroclaw" | "internal" | "core" => Some(ProxyScope::Zeroclaw),
|
||||
"services" | "service" => Some(ProxyScope::Services),
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
|
||||
fn parse_string_list(raw: &Value, field: &str) -> anyhow::Result<Vec<String>> {
|
||||
if let Some(raw_string) = raw.as_str() {
|
||||
return Ok(raw_string
|
||||
.split(',')
|
||||
.map(str::trim)
|
||||
.filter(|entry| !entry.is_empty())
|
||||
.map(ToOwned::to_owned)
|
||||
.collect());
|
||||
}
|
||||
|
||||
if let Some(array) = raw.as_array() {
|
||||
let mut out = Vec::new();
|
||||
for item in array {
|
||||
let value = item
|
||||
.as_str()
|
||||
.ok_or_else(|| anyhow::anyhow!("'{field}' array must only contain strings"))?;
|
||||
let trimmed = value.trim();
|
||||
if !trimmed.is_empty() {
|
||||
out.push(trimmed.to_string());
|
||||
}
|
||||
}
|
||||
return Ok(out);
|
||||
}
|
||||
|
||||
anyhow::bail!("'{field}' must be a string or string[]")
|
||||
}
|
||||
|
||||
fn parse_optional_string_update(
|
||||
args: &Value,
|
||||
field: &str,
|
||||
) -> anyhow::Result<Option<Option<String>>> {
|
||||
let Some(raw) = args.get(field) else {
|
||||
return Ok(None);
|
||||
};
|
||||
|
||||
if raw.is_null() {
|
||||
return Ok(Some(None));
|
||||
}
|
||||
|
||||
let value = raw
|
||||
.as_str()
|
||||
.ok_or_else(|| anyhow::anyhow!("'{field}' must be a string or null"))?
|
||||
.trim()
|
||||
.to_string();
|
||||
Ok(Some((!value.is_empty()).then_some(value)))
|
||||
}
|
||||
|
||||
fn env_snapshot() -> Value {
|
||||
json!({
|
||||
"HTTP_PROXY": std::env::var("HTTP_PROXY").ok(),
|
||||
"HTTPS_PROXY": std::env::var("HTTPS_PROXY").ok(),
|
||||
"ALL_PROXY": std::env::var("ALL_PROXY").ok(),
|
||||
"NO_PROXY": std::env::var("NO_PROXY").ok(),
|
||||
})
|
||||
}
|
||||
|
||||
fn proxy_json(proxy: &ProxyConfig) -> Value {
|
||||
json!({
|
||||
"enabled": proxy.enabled,
|
||||
"scope": proxy.scope,
|
||||
"http_proxy": proxy.http_proxy,
|
||||
"https_proxy": proxy.https_proxy,
|
||||
"all_proxy": proxy.all_proxy,
|
||||
"no_proxy": proxy.normalized_no_proxy(),
|
||||
"services": proxy.normalized_services(),
|
||||
})
|
||||
}
|
||||
|
||||
fn handle_get(&self) -> anyhow::Result<ToolResult> {
|
||||
let file_proxy = self.load_config_without_env()?.proxy;
|
||||
let runtime_proxy = runtime_proxy_config();
|
||||
Ok(ToolResult {
|
||||
success: true,
|
||||
output: serde_json::to_string_pretty(&json!({
|
||||
"proxy": Self::proxy_json(&file_proxy),
|
||||
"runtime_proxy": Self::proxy_json(&runtime_proxy),
|
||||
"environment": Self::env_snapshot(),
|
||||
}))?,
|
||||
error: None,
|
||||
})
|
||||
}
|
||||
|
||||
fn handle_list_services(&self) -> anyhow::Result<ToolResult> {
|
||||
Ok(ToolResult {
|
||||
success: true,
|
||||
output: serde_json::to_string_pretty(&json!({
|
||||
"supported_service_keys": ProxyConfig::supported_service_keys(),
|
||||
"supported_selectors": ProxyConfig::supported_service_selectors(),
|
||||
"usage_example": {
|
||||
"action": "set",
|
||||
"scope": "services",
|
||||
"services": ["provider.openai", "tool.http_request", "channel.telegram"]
|
||||
}
|
||||
}))?,
|
||||
error: None,
|
||||
})
|
||||
}
|
||||
|
||||
fn handle_set(&self, args: &Value) -> anyhow::Result<ToolResult> {
|
||||
let mut cfg = self.load_config_without_env()?;
|
||||
let previous_scope = cfg.proxy.scope;
|
||||
let mut proxy = cfg.proxy.clone();
|
||||
let mut touched_proxy_url = false;
|
||||
|
||||
if let Some(enabled) = args.get("enabled") {
|
||||
proxy.enabled = enabled
|
||||
.as_bool()
|
||||
                .ok_or_else(|| anyhow::anyhow!("'enabled' must be a boolean"))?;
        }

        if let Some(scope_raw) = args.get("scope") {
            let scope = scope_raw
                .as_str()
                .ok_or_else(|| anyhow::anyhow!("'scope' must be a string"))?;
            proxy.scope = Self::parse_scope(scope).ok_or_else(|| {
                anyhow::anyhow!("Invalid scope '{scope}'. Use environment|zeroclaw|services")
            })?;
        }

        if let Some(update) = Self::parse_optional_string_update(args, "http_proxy")? {
            proxy.http_proxy = update;
            touched_proxy_url = true;
        }

        if let Some(update) = Self::parse_optional_string_update(args, "https_proxy")? {
            proxy.https_proxy = update;
            touched_proxy_url = true;
        }

        if let Some(update) = Self::parse_optional_string_update(args, "all_proxy")? {
            proxy.all_proxy = update;
            touched_proxy_url = true;
        }

        if let Some(no_proxy_raw) = args.get("no_proxy") {
            proxy.no_proxy = Self::parse_string_list(no_proxy_raw, "no_proxy")?;
        }

        if let Some(services_raw) = args.get("services") {
            proxy.services = Self::parse_string_list(services_raw, "services")?;
        }

        if args.get("enabled").is_none() && touched_proxy_url {
            proxy.enabled = true;
        }

        proxy.no_proxy = proxy.normalized_no_proxy();
        proxy.services = proxy.normalized_services();
        proxy.validate()?;

        cfg.proxy = proxy.clone();
        cfg.save()?;
        set_runtime_proxy_config(proxy.clone());

        if proxy.enabled && proxy.scope == ProxyScope::Environment {
            proxy.apply_to_process_env();
        } else if previous_scope == ProxyScope::Environment {
            ProxyConfig::clear_process_env();
        }

        Ok(ToolResult {
            success: true,
            output: serde_json::to_string_pretty(&json!({
                "message": "Proxy configuration updated",
                "proxy": Self::proxy_json(&proxy),
                "environment": Self::env_snapshot(),
            }))?,
            error: None,
        })
    }
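The `no_proxy` and `services` inputs above are normalized before validation. A minimal, self-contained sketch of what that normalization could look like; the real `normalized_no_proxy()`/`normalized_services()` implementations are not shown in this diff, so trimming, case-folding, and de-duplication here are assumptions:

```rust
use std::collections::HashSet;

// Hypothetical normalization: split on commas, trim whitespace,
// drop empties, lowercase, and keep only the first occurrence.
fn normalize_list(raw: &str) -> Vec<String> {
    let mut seen = HashSet::new();
    raw.split(',')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(str::to_ascii_lowercase)
        .filter(|s| seen.insert(s.clone()))
        .collect()
}

fn main() {
    let entries = normalize_list("localhost, 127.0.0.1,,LOCALHOST ");
    // Duplicates (case-insensitive) and empty segments are dropped.
    assert_eq!(entries, ["localhost", "127.0.0.1"]);
}
```

Normalizing before `validate()` keeps the persisted config canonical, so a later `get` returns the same list regardless of how the agent spelled it.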

    fn handle_disable(&self, args: &Value) -> anyhow::Result<ToolResult> {
        let mut cfg = self.load_config_without_env()?;
        let clear_env_default = cfg.proxy.scope == ProxyScope::Environment;
        cfg.proxy.enabled = false;
        cfg.save()?;

        set_runtime_proxy_config(cfg.proxy.clone());

        let clear_env = args
            .get("clear_env")
            .and_then(Value::as_bool)
            .unwrap_or(clear_env_default);
        if clear_env {
            ProxyConfig::clear_process_env();
        }

        Ok(ToolResult {
            success: true,
            output: serde_json::to_string_pretty(&json!({
                "message": "Proxy disabled",
                "proxy": Self::proxy_json(&cfg.proxy),
                "environment": Self::env_snapshot(),
            }))?,
            error: None,
        })
    }

    fn handle_apply_env(&self) -> anyhow::Result<ToolResult> {
        let cfg = self.load_config_without_env()?;
        let proxy = cfg.proxy;
        proxy.validate()?;

        if !proxy.enabled {
            anyhow::bail!("Proxy is disabled. Use action 'set' with enabled=true first");
        }

        if proxy.scope != ProxyScope::Environment {
            anyhow::bail!(
                "apply_env only works when proxy.scope is 'environment' (current: {:?})",
                proxy.scope
            );
        }

        proxy.apply_to_process_env();
        set_runtime_proxy_config(proxy.clone());

        Ok(ToolResult {
            success: true,
            output: serde_json::to_string_pretty(&json!({
                "message": "Proxy environment variables applied",
                "proxy": Self::proxy_json(&proxy),
                "environment": Self::env_snapshot(),
            }))?,
            error: None,
        })
    }

    fn handle_clear_env(&self) -> anyhow::Result<ToolResult> {
        ProxyConfig::clear_process_env();
        Ok(ToolResult {
            success: true,
            output: serde_json::to_string_pretty(&json!({
                "message": "Proxy environment variables cleared",
                "environment": Self::env_snapshot(),
            }))?,
            error: None,
        })
    }
}
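`handle_set` above only touches process-wide environment variables when the resulting scope is `environment`, and clears them when the scope has just moved away from `environment`. A compact sketch of that decision rule, with `ProxyScope` variants taken from the code above (the returned string labels are illustrative, not real API):

```rust
// Sketch of the env-application rule applied after a `set`.
#[derive(Clone, Copy, PartialEq)]
enum ProxyScope {
    Environment,
    ZeroClaw,
    Services,
}

// Decide what to do with process env vars given the new and previous scope.
fn env_action(enabled: bool, new: ProxyScope, previous: ProxyScope) -> &'static str {
    if enabled && new == ProxyScope::Environment {
        "apply_to_process_env" // export HTTP_PROXY etc. into this process
    } else if previous == ProxyScope::Environment {
        "clear_process_env" // scope left `environment`: drop stale vars
    } else {
        "no_env_change" // zeroclaw/services scopes never touch process env
    }
}

fn main() {
    assert_eq!(
        env_action(true, ProxyScope::Environment, ProxyScope::Services),
        "apply_to_process_env"
    );
    assert_eq!(
        env_action(true, ProxyScope::Services, ProxyScope::Environment),
        "clear_process_env"
    );
    assert_eq!(
        env_action(false, ProxyScope::Services, ProxyScope::ZeroClaw),
        "no_env_change"
    );
}
```

This asymmetry is what keeps a scope change from `environment` to `services` from leaving stale `HTTP_PROXY` variables behind in the process.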

#[async_trait]
impl Tool for ProxyConfigTool {
    fn name(&self) -> &str {
        "proxy_config"
    }

    fn description(&self) -> &str {
        "Manage ZeroClaw proxy settings (scope: environment | zeroclaw | services), including runtime and process env application"
    }

    fn parameters_schema(&self) -> Value {
        json!({
            "type": "object",
            "properties": {
                "action": {
                    "type": "string",
                    "enum": ["get", "set", "disable", "list_services", "apply_env", "clear_env"],
                    "default": "get"
                },
                "enabled": {
                    "type": "boolean",
                    "description": "Enable or disable proxy"
                },
                "scope": {
                    "type": "string",
                    "description": "Proxy scope: environment | zeroclaw | services"
                },
                "http_proxy": {
                    "type": ["string", "null"],
                    "description": "HTTP proxy URL"
                },
                "https_proxy": {
                    "type": ["string", "null"],
                    "description": "HTTPS proxy URL"
                },
                "all_proxy": {
                    "type": ["string", "null"],
                    "description": "Fallback proxy URL for all protocols"
                },
                "no_proxy": {
                    "description": "Comma-separated string or array of NO_PROXY entries",
                    "oneOf": [
                        {"type": "string"},
                        {"type": "array", "items": {"type": "string"}}
                    ]
                },
                "services": {
                    "description": "Comma-separated string or array of service selectors used when scope=services",
                    "oneOf": [
                        {"type": "string"},
                        {"type": "array", "items": {"type": "string"}}
                    ]
                },
                "clear_env": {
                    "type": "boolean",
                    "description": "When action=disable, clear process proxy environment variables"
                }
            }
        })
    }
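A representative `set` invocation against this schema, using only argument names defined above (the proxy URL and service selectors are illustrative values borrowed from the tests in this diff):

```json
{
  "action": "set",
  "scope": "services",
  "http_proxy": "http://127.0.0.1:7890",
  "no_proxy": "localhost,127.0.0.1",
  "services": ["provider.openai", "tool.http_request"]
}
```

Note that `enabled` is omitted here: per `handle_set`, supplying a proxy URL without an explicit `enabled` flag implicitly enables the proxy.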

    async fn execute(&self, args: Value) -> anyhow::Result<ToolResult> {
        let action = args
            .get("action")
            .and_then(Value::as_str)
            .unwrap_or("get")
            .to_ascii_lowercase();

        let result = match action.as_str() {
            "get" => self.handle_get(),
            "list_services" => self.handle_list_services(),
            "set" | "disable" | "apply_env" | "clear_env" => {
                if let Some(blocked) = self.require_write_access() {
                    return Ok(blocked);
                }

                match action.as_str() {
                    "set" => self.handle_set(&args),
                    "disable" => self.handle_disable(&args),
                    "apply_env" => self.handle_apply_env(),
                    "clear_env" => self.handle_clear_env(),
                    _ => unreachable!("handled above"),
                }
            }
            _ => anyhow::bail!(
                "Unknown action '{action}'. Valid: get, set, disable, list_services, apply_env, clear_env"
            ),
        };

        match result {
            Ok(outcome) => Ok(outcome),
            Err(error) => Ok(ToolResult {
                success: false,
                output: String::new(),
                error: Some(error.to_string()),
            }),
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::security::{AutonomyLevel, SecurityPolicy};
    use tempfile::TempDir;

    fn test_security() -> Arc<SecurityPolicy> {
        Arc::new(SecurityPolicy {
            autonomy: AutonomyLevel::Supervised,
            workspace_dir: std::env::temp_dir(),
            ..SecurityPolicy::default()
        })
    }

    fn test_config(tmp: &TempDir) -> Arc<Config> {
        let config = Config {
            workspace_dir: tmp.path().join("workspace"),
            config_path: tmp.path().join("config.toml"),
            ..Config::default()
        };
        config.save().unwrap();
        Arc::new(config)
    }

    #[tokio::test]
    async fn list_services_action_returns_known_keys() {
        let tmp = TempDir::new().unwrap();
        let tool = ProxyConfigTool::new(test_config(&tmp), test_security());

        let result = tool
            .execute(json!({"action": "list_services"}))
            .await
            .unwrap();
        assert!(result.success);
        assert!(result.output.contains("provider.openai"));
        assert!(result.output.contains("tool.http_request"));
    }

    #[tokio::test]
    async fn set_scope_services_requires_services_entries() {
        let tmp = TempDir::new().unwrap();
        let tool = ProxyConfigTool::new(test_config(&tmp), test_security());

        let result = tool
            .execute(json!({
                "action": "set",
                "enabled": true,
                "scope": "services",
                "http_proxy": "http://127.0.0.1:7890",
                "services": []
            }))
            .await
            .unwrap();

        assert!(!result.success);
        assert!(result
            .error
            .unwrap_or_default()
            .contains("proxy.scope='services'"));
    }

    #[tokio::test]
    async fn set_and_get_round_trip_proxy_scope() {
        let tmp = TempDir::new().unwrap();
        let tool = ProxyConfigTool::new(test_config(&tmp), test_security());

        let set_result = tool
            .execute(json!({
                "action": "set",
                "scope": "services",
                "http_proxy": "http://127.0.0.1:7890",
                "services": ["provider.openai", "tool.http_request"]
            }))
            .await
            .unwrap();
        assert!(set_result.success, "{:?}", set_result.error);

        let get_result = tool.execute(json!({"action": "get"})).await.unwrap();
        assert!(get_result.success);
        assert!(get_result.output.contains("provider.openai"));
        assert!(get_result.output.contains("services"));
    }
}

@@ -1,30 +1,21 @@
use super::traits::{Tool, ToolResult};
use crate::security::SecurityPolicy;
use async_trait::async_trait;
use reqwest::Client;
use serde_json::json;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;

const PUSHOVER_API_URL: &str = "https://api.pushover.net/1/messages.json";
const PUSHOVER_REQUEST_TIMEOUT_SECS: u64 = 15;

pub struct PushoverTool {
    client: Client,
    security: Arc<SecurityPolicy>,
    workspace_dir: PathBuf,
}

impl PushoverTool {
    pub fn new(security: Arc<SecurityPolicy>, workspace_dir: PathBuf) -> Self {
        let client = Client::builder()
            .timeout(Duration::from_secs(PUSHOVER_REQUEST_TIMEOUT_SECS))
            .build()
            .unwrap_or_else(|_| Client::new());

        Self {
            client,
            security,
            workspace_dir,
        }
|
|
@ -182,12 +173,12 @@ impl Tool for PushoverTool {
|
|||
form = form.text("sound", sound);
|
||||
}
|
||||
|
||||
let response = self
|
||||
.client
|
||||
.post(PUSHOVER_API_URL)
|
||||
.multipart(form)
|
||||
.send()
|
||||
.await?;
|
||||
let client = crate::config::build_runtime_proxy_client_with_timeouts(
|
||||
"tool.pushover",
|
||||
PUSHOVER_REQUEST_TIMEOUT_SECS,
|
||||
10,
|
||||
);
|
||||
let response = client.post(PUSHOVER_API_URL).multipart(form).send().await?;
|
||||
|
||||
let status = response.status();
|
||||
let body = response.text().await.unwrap_or_default();
|
||||
|
|

@@ -123,7 +123,7 @@ impl Tunnel for CustomTunnel {
    async fn health_check(&self) -> bool {
        // If a health URL is configured, try to reach it
        if let Some(ref url) = self.health_url {
-            return reqwest::Client::new()
+            return crate::config::build_runtime_proxy_client("tunnel.custom")
                .get(url)
                .timeout(std::time::Duration::from_secs(5))
                .send()
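Both call sites above route their HTTP clients through `build_runtime_proxy_client*` with a service selector such as `"tool.pushover"` or `"tunnel.custom"`. A self-contained sketch of how a selector check under `scope=services` could behave; the real matcher is not shown in this diff, so exact-match, case-insensitive semantics are an assumption:

```rust
// Hypothetical selector check: the proxy applies to a call site only
// when the proxy is enabled and its selector is in the configured list.
fn use_proxy_for(selector: &str, enabled: bool, services: &[&str]) -> bool {
    enabled && services.iter().any(|s| s.eq_ignore_ascii_case(selector))
}

fn main() {
    // Selector values mirror the ones exercised by the tests above.
    let services = ["provider.openai", "tool.http_request"];
    assert!(use_proxy_for("tool.http_request", true, &services));
    assert!(!use_proxy_for("tool.pushover", true, &services));
    assert!(!use_proxy_for("provider.openai", false, &services));
}
```

Centralizing the decision in one builder keeps every outbound client consistent with the runtime proxy config, instead of each tool constructing `reqwest::Client::new()` ad hoc.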