feat: add AIEOS identity support and harden cron scheduler security

- Add IdentityConfig with format=openclaw|aieos, aieos_path, and aieos_inline
- Implement AIEOS v1.1 JSON parser and system prompt injection
- Add build_system_prompt_with_identity() supporting both OpenClaw markdown and AIEOS JSON
- Harden cron scheduler with SecurityPolicy checks (command allowlist, forbidden path arguments)
- Skip retries on deterministic security policy violations
- Add comprehensive tests for AIEOS config and cron security edge cases
- Update README with AIEOS documentation and schema overview
- Add .dockerignore tests for build context security validation
argenis de la rosa 2026-02-14 13:26:08 -05:00
parent 76074cb789
commit acea042bdb
7 changed files with 790 additions and 22 deletions


@@ -119,6 +119,7 @@ Every subsystem is a **trait** — swap implementations with a config change, ze
| **Observability** | `Observer` | Noop, Log, Multi | Prometheus, OTel |
| **Runtime** | `RuntimeAdapter` | Native (Mac/Linux/Pi) | Docker, WASM (planned; unsupported kinds fail fast) |
| **Security** | `SecurityPolicy` | Gateway pairing, sandbox, allowlists, rate limits, filesystem scoping, encrypted secrets | — |
| **Identity** | `IdentityConfig` | OpenClaw (markdown), AIEOS v1.1 (JSON) | Any identity format |
| **Tunnel** | `Tunnel` | None, Cloudflare, Tailscale, ngrok, Custom | Any tunnel binary |
| **Heartbeat** | Engine | HEARTBEAT.md periodic tasks | — |
| **Skills** | Loader | TOML manifests + SKILL.md instructions | Community skill packs |
@@ -284,8 +285,81 @@ allowed_domains = ["docs.rs"] # required when browser is enabled
[composio]
enabled = false # opt-in: 1000+ OAuth apps via composio.dev

[identity]
format = "openclaw" # "openclaw" (default, markdown files) or "aieos" (JSON)
# aieos_path = "identity.json" # path to AIEOS JSON file (relative to workspace or absolute)
# aieos_inline = '{"identity":{"names":{"first":"Nova"}}}' # inline AIEOS JSON
```
## Identity System (AIEOS Support)
ZeroClaw supports **identity-agnostic** AI personas through two formats:
### OpenClaw (Default)
Traditional markdown files in your workspace:
- `IDENTITY.md` — Who the agent is
- `SOUL.md` — Core personality and values
- `USER.md` — Who the agent is helping
- `AGENTS.md` — Behavior guidelines
### AIEOS (AI Entity Object Specification)
[AIEOS](https://aieos.org) is a standardization framework for portable AI identity. ZeroClaw supports AIEOS v1.1 JSON payloads, allowing you to:
- **Import identities** from the AIEOS ecosystem
- **Export identities** to other AIEOS-compatible systems
- **Maintain behavioral integrity** across different AI models
#### Enable AIEOS
```toml
[identity]
format = "aieos"
aieos_path = "identity.json" # relative to workspace or absolute path
```
Or inline JSON:
```toml
[identity]
format = "aieos"
aieos_inline = '''
{
"identity": {
"names": { "first": "Nova", "nickname": "N" }
},
"psychology": {
"neural_matrix": { "creativity": 0.9, "logic": 0.8 },
"traits": { "mbti": "ENTP" },
"moral_compass": { "alignment": "Chaotic Good" }
},
"linguistics": {
"text_style": { "formality_level": 0.2, "slang_usage": true }
},
"motivations": {
"core_drive": "Push boundaries and explore possibilities"
}
}
'''
```
#### AIEOS Schema Sections
| Section | Description |
|---------|-------------|
| `identity` | Names, bio, origin, residence |
| `psychology` | Neural matrix (cognitive weights), MBTI, OCEAN, moral compass |
| `linguistics` | Text style, formality, catchphrases, forbidden words |
| `motivations` | Core drive, short/long-term goals, fears |
| `capabilities` | Skills and tools the agent can access |
| `physicality` | Visual descriptors for image generation |
| `history` | Origin story, education, occupation |
| `interests` | Hobbies, favorites, lifestyle |
See [aieos.org](https://aieos.org) for the full schema and live examples.
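
The `aieos_path` resolution rule described above (relative paths resolve against the workspace, absolute paths are used as-is) can be sketched as a small standalone helper. Note `resolve_aieos_path` is a hypothetical name for illustration, not the function ZeroClaw actually uses:

```rust
use std::path::{Path, PathBuf};

// Hypothetical helper mirroring the documented rule: an `aieos_path` value
// is joined onto the workspace directory unless it is already absolute.
fn resolve_aieos_path(workspace: &Path, aieos_path: &str) -> PathBuf {
    let p = Path::new(aieos_path);
    if p.is_absolute() {
        p.to_path_buf()
    } else {
        workspace.join(p)
    }
}

fn main() {
    let ws = Path::new("/home/user/workspace");
    // A relative path lands inside the workspace…
    assert_eq!(
        resolve_aieos_path(ws, "identity.json"),
        PathBuf::from("/home/user/workspace/identity.json")
    );
    // …while an absolute path is used verbatim.
    assert_eq!(
        resolve_aieos_path(ws, "/etc/aieos/identity.json"),
        PathBuf::from("/etc/aieos/identity.json")
    );
    println!("path resolution ok");
}
```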
## Gateway API
| Endpoint | Method | Auth | Description |

scripts/test_dockerignore.sh Executable file

@@ -0,0 +1,169 @@
#!/usr/bin/env bash
# Test script to verify .dockerignore excludes sensitive paths
# Run: ./scripts/test_dockerignore.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
DOCKERIGNORE="$PROJECT_ROOT/.dockerignore"
RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
PASS=0
FAIL=0
log_pass() {
  echo -e "${GREEN}✓${NC} $1"
  PASS=$((PASS + 1))
}

log_fail() {
  echo -e "${RED}✗${NC} $1"
  FAIL=$((FAIL + 1))
}
# Test 1: .dockerignore exists
echo "=== Testing .dockerignore ==="
if [[ -f "$DOCKERIGNORE" ]]; then
  log_pass ".dockerignore file exists"
else
  log_fail ".dockerignore file does not exist"
  exit 1
fi
# Test 2: Required exclusions are present
MUST_EXCLUDE=(
  ".git"
  ".githooks"
  "target"
  "docs"
  "examples"
  "tests"
  "*.md"
  "*.png"
  "*.db"
  "*.db-journal"
  ".DS_Store"
  ".github"
  "deny.toml"
  "LICENSE"
  ".env"
  ".tmp_*"
)
for pattern in "${MUST_EXCLUDE[@]}"; do
  # Use grep -F for literal (fixed-string) matching
  if grep -Fq "$pattern" "$DOCKERIGNORE" 2>/dev/null; then
    log_pass "Excludes: $pattern"
  else
    log_fail "Missing exclusion: $pattern"
  fi
done
# Test 3: Build essentials are NOT excluded
MUST_NOT_EXCLUDE=(
  "Cargo.toml"
  "Cargo.lock"
  "src"
)
for path in "${MUST_NOT_EXCLUDE[@]}"; do
  if grep -qE "^${path}$" "$DOCKERIGNORE" 2>/dev/null; then
    log_fail "Build essential '$path' is incorrectly excluded"
  else
    log_pass "Build essential NOT excluded: $path"
  fi
done
# Test 4: No syntax errors (basic validation)
WS_FAIL=0
while IFS= read -r line; do
  # Skip empty lines and comments
  [[ -z "$line" || "$line" =~ ^# ]] && continue
  # Check for common issues
  if [[ "$line" =~ [[:space:]]$ ]]; then
    log_fail "Trailing whitespace in pattern: '$line'"
    WS_FAIL=1
  fi
done < "$DOCKERIGNORE"
if [[ $WS_FAIL -eq 0 ]]; then
  log_pass "No trailing whitespace in patterns"
fi
# Test 5: Verify Docker build context would be small
echo ""
echo "=== Simulating Docker build context ==="
# Approximate Docker's context with find filters mirroring the key
# .dockerignore patterns (Docker itself applies the real pattern logic)
cd "$PROJECT_ROOT"
# Count files that WOULD be sent (excluding .dockerignore patterns)
TOTAL_FILES=$(find . -type f | wc -l | tr -d ' ')
CONTEXT_FILES=$(find . -type f \
  ! -path './.git/*' \
  ! -path './target/*' \
  ! -path './docs/*' \
  ! -path './examples/*' \
  ! -path './tests/*' \
  ! -name '*.md' \
  ! -name '*.png' \
  ! -name '*.svg' \
  ! -name '*.db' \
  ! -name '*.db-journal' \
  ! -name '.DS_Store' \
  ! -path './.github/*' \
  ! -name 'deny.toml' \
  ! -name 'LICENSE' \
  ! -name '.env' \
  ! -name '.env.*' \
  2>/dev/null | wc -l | tr -d ' ')
echo "Total files in repo: $TOTAL_FILES"
echo "Files in Docker context: $CONTEXT_FILES"
if [[ $CONTEXT_FILES -lt $TOTAL_FILES ]]; then
  log_pass "Docker context is smaller than full repo ($CONTEXT_FILES < $TOTAL_FILES files)"
else
  log_fail "Docker context is not being reduced"
fi
# Test 6: Verify critical security files would be excluded
echo ""
echo "=== Security checks ==="
# Check if .git would be excluded
if [[ -d "$PROJECT_ROOT/.git" ]]; then
  if grep -q "^\.git$" "$DOCKERIGNORE"; then
    log_pass ".git directory will be excluded (security)"
  else
    log_fail ".git directory NOT excluded - SECURITY RISK"
  fi
fi

# Check if any .db files exist and would be excluded
DB_FILES=$(find "$PROJECT_ROOT" -name "*.db" -type f 2>/dev/null | head -5)
if [[ -n "$DB_FILES" ]]; then
  if grep -q "^\*\.db$" "$DOCKERIGNORE"; then
    log_pass "*.db files will be excluded (security)"
  else
    log_fail "*.db files NOT excluded - SECURITY RISK"
  fi
fi
# Summary
echo ""
echo "=== Summary ==="
echo -e "Passed: ${GREEN}$PASS${NC}"
echo -e "Failed: ${RED}$FAIL${NC}"
if [[ $FAIL -gt 0 ]]; then
  echo -e "${RED}FAILED${NC}: $FAIL tests failed"
  exit 1
else
  echo -e "${GREEN}PASSED${NC}: All tests passed"
  exit 0
fi


@@ -68,7 +68,7 @@ pub struct IdentityConfig {
    /// Only used when format = "aieos"
    #[serde(default)]
    pub aieos_path: Option<String>,
    /// Inline AIEOS JSON (alternative to `aieos_path`)
    /// Only used when format = "aieos"
    #[serde(default)]
    pub aieos_inline: Option<String>,


@@ -1,5 +1,6 @@
use crate::config::Config;
use crate::cron::{due_jobs, reschedule_after_run, CronJob};
use crate::security::SecurityPolicy;
use anyhow::Result;
use chrono::Utc;
use tokio::process::Command;
@@ -10,6 +11,7 @@ const MIN_POLL_SECONDS: u64 = 5;
pub async fn run(config: Config) -> Result<()> {
    let poll_secs = config.reliability.scheduler_poll_secs.max(MIN_POLL_SECONDS);
    let mut interval = time::interval(Duration::from_secs(poll_secs));
    let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

    crate::health::mark_component_ok("scheduler");
@@ -27,7 +29,7 @@ pub async fn run(config: Config) -> Result<()> {
        for job in jobs {
            crate::health::mark_component_ok("scheduler");
            let (success, output) = execute_job_with_retry(&config, &security, &job).await;

            if !success {
                crate::health::mark_component_error("scheduler", format!("job {} failed", job.id));
@@ -41,19 +43,28 @@ pub async fn run(config: Config) -> Result<()> {
    }
}

async fn execute_job_with_retry(
    config: &Config,
    security: &SecurityPolicy,
    job: &CronJob,
) -> (bool, String) {
    let mut last_output = String::new();
    let retries = config.reliability.scheduler_retries;
    let mut backoff_ms = config.reliability.provider_backoff_ms.max(200);

    for attempt in 0..=retries {
        let (success, output) = run_job_command(config, security, job).await;
        last_output = output;
        if success {
            return (true, last_output);
        }
        if last_output.starts_with("blocked by security policy:") {
            // Deterministic policy violations are not retryable.
            return (false, last_output);
        }
        if attempt < retries {
            let jitter_ms = (Utc::now().timestamp_subsec_millis() % 250) as u64;
            time::sleep(Duration::from_millis(backoff_ms + jitter_ms)).await;
@@ -64,7 +75,86 @@ async fn execute_job_with_retry(config: &Config, job: &CronJob) -> (bool, String
    (false, last_output)
}

fn is_env_assignment(word: &str) -> bool {
    word.contains('=')
        && word
            .chars()
            .next()
            .is_some_and(|c| c.is_ascii_alphabetic() || c == '_')
}

fn strip_wrapping_quotes(token: &str) -> &str {
    token.trim_matches(|c| c == '"' || c == '\'')
}

fn forbidden_path_argument(security: &SecurityPolicy, command: &str) -> Option<String> {
    let mut normalized = command.to_string();
    for sep in ["&&", "||"] {
        normalized = normalized.replace(sep, "\x00");
    }
    for sep in ['\n', ';', '|'] {
        normalized = normalized.replace(sep, "\x00");
    }
    for segment in normalized.split('\x00') {
        let tokens: Vec<&str> = segment.split_whitespace().collect();
        if tokens.is_empty() {
            continue;
        }
        // Skip leading env assignments and the executable token.
        let mut idx = 0;
        while idx < tokens.len() && is_env_assignment(tokens[idx]) {
            idx += 1;
        }
        if idx >= tokens.len() {
            continue;
        }
        idx += 1;
        for token in &tokens[idx..] {
            let candidate = strip_wrapping_quotes(token);
            if candidate.is_empty() || candidate.starts_with('-') || candidate.contains("://") {
                continue;
            }
            let looks_like_path = candidate.starts_with('/')
                || candidate.starts_with("./")
                || candidate.starts_with("../")
                || candidate.starts_with("~/")
                || candidate.contains('/');
            if looks_like_path && !security.is_path_allowed(candidate) {
                return Some(candidate.to_string());
            }
        }
    }
    None
}

async fn run_job_command(
    config: &Config,
    security: &SecurityPolicy,
    job: &CronJob,
) -> (bool, String) {
    if !security.is_command_allowed(&job.command) {
        return (
            false,
            format!(
                "blocked by security policy: command not allowed: {}",
                job.command
            ),
        );
    }
    if let Some(path) = forbidden_path_argument(security, &job.command) {
        return (
            false,
            format!("blocked by security policy: forbidden path argument: {path}"),
        );
    }
    let output = Command::new("sh")
        .arg("-lc")
        .arg(&job.command)
@@ -92,6 +182,7 @@ async fn run_job_command(config: &Config, job: &CronJob) -> (bool, String) {
mod tests {
    use super::*;
    use crate::config::Config;
    use crate::security::SecurityPolicy;
    use tempfile::TempDir;

    fn test_config(tmp: &TempDir) -> Config {
@@ -118,8 +209,9 @@ mod tests {
        let tmp = TempDir::new().unwrap();
        let config = test_config(&tmp);
        let job = test_job("echo scheduler-ok");
        let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

        let (success, output) = run_job_command(&config, &security, &job).await;
        assert!(success);
        assert!(output.contains("scheduler-ok"));
        assert!(output.contains("status=exit status: 0"));
@@ -129,12 +221,42 @@ mod tests {
    async fn run_job_command_failure() {
        let tmp = TempDir::new().unwrap();
        let config = test_config(&tmp);
        let job = test_job("ls definitely_missing_file_for_scheduler_test");
        let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

        let (success, output) = run_job_command(&config, &security, &job).await;
        assert!(!success);
        assert!(output.contains("definitely_missing_file_for_scheduler_test"));
        assert!(output.contains("status=exit status:"));
    }

    #[tokio::test]
    async fn run_job_command_blocks_disallowed_command() {
        let tmp = TempDir::new().unwrap();
        let mut config = test_config(&tmp);
        config.autonomy.allowed_commands = vec!["echo".into()];
        let job = test_job("curl https://evil.example");
        let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

        let (success, output) = run_job_command(&config, &security, &job).await;
        assert!(!success);
        assert!(output.contains("blocked by security policy"));
        assert!(output.contains("command not allowed"));
    }

    #[tokio::test]
    async fn run_job_command_blocks_forbidden_path_argument() {
        let tmp = TempDir::new().unwrap();
        let mut config = test_config(&tmp);
        config.autonomy.allowed_commands = vec!["cat".into()];
        let job = test_job("cat /etc/passwd");
        let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

        let (success, output) = run_job_command(&config, &security, &job).await;
        assert!(!success);
        assert!(output.contains("blocked by security policy"));
        assert!(output.contains("forbidden path argument"));
        assert!(output.contains("/etc/passwd"));
    }

    #[tokio::test]
@@ -143,12 +265,17 @@ mod tests {
        let mut config = test_config(&tmp);
        config.reliability.scheduler_retries = 1;
        config.reliability.provider_backoff_ms = 1;
        config.autonomy.allowed_commands = vec!["sh".into()];
        let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

        std::fs::write(
            config.workspace_dir.join("retry-once.sh"),
            "#!/bin/sh\nif [ -f retry-ok.flag ]; then\n echo recovered\n exit 0\nfi\ntouch retry-ok.flag\nexit 1\n",
        )
        .unwrap();
        let job = test_job("sh ./retry-once.sh");

        let (success, output) = execute_job_with_retry(&config, &security, &job).await;
        assert!(success);
        assert!(output.contains("recovered"));
    }
@@ -159,11 +286,12 @@ mod tests {
        let mut config = test_config(&tmp);
        config.reliability.scheduler_retries = 1;
        config.reliability.provider_backoff_ms = 1;
        let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);

        let job = test_job("ls always_missing_for_retry_test");

        let (success, output) = execute_job_with_retry(&config, &security, &job).await;
        assert!(!success);
        assert!(output.contains("always_missing_for_retry_test"));
    }
}


@@ -1,12 +1,12 @@
//! AIEOS (AI Entity Object Specification) v1.1 support
//!
//! AIEOS is a standardization framework for portable AI identity.
//! See: <https://aieos.org>
//!
//! This module provides:
//! - Full AIEOS v1.1 schema types
//! - JSON parsing and validation
//! - Conversion to `ZeroClaw` system prompt sections

use anyhow::{Context, Result};
use serde::{Deserialize, Serialize};
@@ -705,8 +705,55 @@ pub fn load_aieos_identity(path: &Path) -> Result<AieosEntity> {
}

/// Parse an AIEOS identity from a JSON string
///
/// Handles edge cases:
/// - Strips BOM if present
/// - Trims whitespace
/// - Provides detailed error context
pub fn parse_aieos_json(json: &str) -> Result<AieosEntity> {
    // Strip UTF-8 BOM if present
    let json = json.strip_prefix('\u{feff}').unwrap_or(json);
    // Trim whitespace
    let json = json.trim();
    if json.is_empty() {
        anyhow::bail!("AIEOS JSON is empty");
    }
    serde_json::from_str(json).with_context(|| {
        // Provide helpful error context; truncate on a char boundary so the
        // preview never panics on multi-byte UTF-8 input.
        let preview = if json.chars().count() > 100 {
            let head: String = json.chars().take(100).collect();
            format!("{head}...")
        } else {
            json.to_string()
        };
        format!("Failed to parse AIEOS JSON. Preview: {preview}")
    })
}

/// Validate AIEOS schema version compatibility
pub fn validate_aieos_version(entity: &AieosEntity) -> Result<()> {
    if let Some(ref standard) = entity.standard {
        if let Some(ref version) = standard.version {
            // We support v1.0.x and v1.1.x
            if version.starts_with("1.0") || version.starts_with("1.1") {
                return Ok(());
            }
            // Warn but don't fail for newer minor versions
            if version.starts_with("1.") {
                tracing::warn!(
                    "AIEOS version {version} is newer than supported (1.1.x); some fields may be ignored"
                );
                return Ok(());
            }
            // Fail for major version mismatch
            anyhow::bail!(
                "AIEOS version {version} is not compatible; supported versions: 1.0.x, 1.1.x"
            );
        }
    }
    // No version specified — assume compatible
    Ok(())
}
// ══════════════════════════════════════════════════════════════════════════════
@@ -791,6 +838,9 @@ impl AieosEntity {
        // History section (brief)
        self.write_history_section(&mut prompt);

        // Interests section
        self.write_interests_section(&mut prompt);

        prompt
    }
@@ -914,6 +964,28 @@ impl AieosEntity {
                let _ = writeln!(prompt, "- Temperament: {temperament}");
            }
        }

        // OCEAN (Big Five) traits
        if let Some(ref ocean) = traits.ocean {
            let mut ocean_parts = Vec::new();
            if let Some(o) = ocean.openness {
                ocean_parts.push(format!("O:{:.0}%", o * 100.0));
            }
            if let Some(c) = ocean.conscientiousness {
                ocean_parts.push(format!("C:{:.0}%", c * 100.0));
            }
            if let Some(e) = ocean.extraversion {
                ocean_parts.push(format!("E:{:.0}%", e * 100.0));
            }
            if let Some(a) = ocean.agreeableness {
                ocean_parts.push(format!("A:{:.0}%", a * 100.0));
            }
            if let Some(n) = ocean.neuroticism {
                ocean_parts.push(format!("N:{:.0}%", n * 100.0));
            }
            if !ocean_parts.is_empty() {
                let _ = writeln!(prompt, "- OCEAN: {}", ocean_parts.join(" "));
            }
        }

        prompt.push('\n');
    }
@@ -1145,6 +1217,88 @@
            }
        }
    }

    fn write_interests_section(&self, prompt: &mut String) {
        if let Some(ref interests) = self.interests {
            let mut has_content = false;

            // Hobbies
            if !interests.hobbies.is_empty() {
                if !has_content {
                    prompt.push_str("### Interests & Lifestyle\n\n");
                    has_content = true;
                }
                let _ = writeln!(prompt, "**Hobbies:** {}", interests.hobbies.join(", "));
            }

            // Favorites (compact)
            if let Some(ref favs) = interests.favorites {
                let mut fav_parts = Vec::new();
                if let Some(ref music) = favs.music_genre {
                    if !music.is_empty() {
                        fav_parts.push(format!("music: {music}"));
                    }
                }
                if let Some(ref book) = favs.book {
                    if !book.is_empty() {
                        fav_parts.push(format!("book: {book}"));
                    }
                }
                if let Some(ref movie) = favs.movie {
                    if !movie.is_empty() {
                        fav_parts.push(format!("movie: {movie}"));
                    }
                }
                if let Some(ref food) = favs.food {
                    if !food.is_empty() {
                        fav_parts.push(format!("food: {food}"));
                    }
                }
                if !fav_parts.is_empty() {
                    if !has_content {
                        prompt.push_str("### Interests & Lifestyle\n\n");
                        has_content = true;
                    }
                    let _ = writeln!(prompt, "**Favorites:** {}", fav_parts.join(", "));
                }
            }

            // Aversions
            if !interests.aversions.is_empty() {
                if !has_content {
                    prompt.push_str("### Interests & Lifestyle\n\n");
                    has_content = true;
                }
                let _ = writeln!(prompt, "**Dislikes:** {}", interests.aversions.join(", "));
            }

            // Lifestyle
            if let Some(ref lifestyle) = interests.lifestyle {
                let mut lifestyle_parts = Vec::new();
                if let Some(ref diet) = lifestyle.diet {
                    if !diet.is_empty() {
                        lifestyle_parts.push(format!("diet: {diet}"));
                    }
                }
                if let Some(ref sleep) = lifestyle.sleep_schedule {
                    if !sleep.is_empty() {
                        lifestyle_parts.push(format!("sleep: {sleep}"));
                    }
                }
                if !lifestyle_parts.is_empty() {
                    if !has_content {
                        prompt.push_str("### Interests & Lifestyle\n\n");
                        has_content = true;
                    }
                    let _ = writeln!(prompt, "**Lifestyle:** {}", lifestyle_parts.join(", "));
                }
            }

            if has_content {
                prompt.push('\n');
            }
        }
    }
}
// ══════════════════════════════════════════════════════════════════════════════
@@ -1450,4 +1604,242 @@ mod tests {
        // Should fall back to "Entity" when names are empty
        assert_eq!(entity.display_name(), "Entity");
    }
    // ══════════════════════════════════════════════════════════
    // Edge Case Tests
    // ══════════════════════════════════════════════════════════

    #[test]
    fn parse_empty_json_fails() {
        let result = parse_aieos_json("");
        assert!(result.is_err());
        assert!(result.unwrap_err().to_string().contains("empty"));
    }

    #[test]
    fn parse_whitespace_only_fails() {
        let result = parse_aieos_json("  \n\t  ");
        assert!(result.is_err());
        assert!(result.unwrap_err().to_string().contains("empty"));
    }

    #[test]
    fn parse_json_with_bom() {
        // UTF-8 BOM followed by valid JSON
        let json = "\u{feff}{\"identity\": {\"names\": {\"first\": \"BOM Test\"}}}";
        let entity = parse_aieos_json(json).unwrap();
        assert_eq!(entity.display_name(), "BOM Test");
    }

    #[test]
    fn parse_json_with_leading_whitespace() {
        let json = "  \n\t  {\"identity\": {\"names\": {\"first\": \"Whitespace\"}}}";
        let entity = parse_aieos_json(json).unwrap();
        assert_eq!(entity.display_name(), "Whitespace");
    }

    #[test]
    fn validate_version_1_0_ok() {
        let json = r#"{"standard": {"version": "1.0.0"}}"#;
        let entity = parse_aieos_json(json).unwrap();
        assert!(validate_aieos_version(&entity).is_ok());
    }

    #[test]
    fn validate_version_1_1_ok() {
        let json = r#"{"standard": {"version": "1.1.0"}}"#;
        let entity = parse_aieos_json(json).unwrap();
        assert!(validate_aieos_version(&entity).is_ok());
    }

    #[test]
    fn validate_version_1_2_warns_but_ok() {
        let json = r#"{"standard": {"version": "1.2.0"}}"#;
        let entity = parse_aieos_json(json).unwrap();
        // Should warn but not fail
        assert!(validate_aieos_version(&entity).is_ok());
    }

    #[test]
    fn validate_version_2_0_fails() {
        let json = r#"{"standard": {"version": "2.0.0"}}"#;
        let entity = parse_aieos_json(json).unwrap();
        let result = validate_aieos_version(&entity);
        assert!(result.is_err());
        assert!(result.unwrap_err().to_string().contains("not compatible"));
    }

    #[test]
    fn validate_no_version_ok() {
        let json = r#"{}"#;
        let entity = parse_aieos_json(json).unwrap();
        assert!(validate_aieos_version(&entity).is_ok());
    }

    #[test]
    fn parse_invalid_json_provides_preview() {
        let result = parse_aieos_json("{invalid json here}");
        assert!(result.is_err());
        let err_msg = result.unwrap_err().to_string();
        assert!(err_msg.contains("Preview"));
    }
    #[test]
    fn ocean_traits_in_prompt() {
        let json = r#"{
            "psychology": {
                "traits": {
                    "ocean": {
                        "openness": 0.8,
                        "conscientiousness": 0.6,
                        "extraversion": 0.4,
                        "agreeableness": 0.7,
                        "neuroticism": 0.3
                    }
                }
            }
        }"#;
        let entity = parse_aieos_json(json).unwrap();
        let prompt = entity.to_system_prompt();
        assert!(prompt.contains("OCEAN:"));
        assert!(prompt.contains("O:80%"));
        assert!(prompt.contains("C:60%"));
        assert!(prompt.contains("E:40%"));
        assert!(prompt.contains("A:70%"));
        assert!(prompt.contains("N:30%"));
    }

    #[test]
    fn interests_in_prompt() {
        let json = r#"{
            "interests": {
                "hobbies": ["coding", "gaming"],
                "favorites": {
                    "music_genre": "Jazz",
                    "book": "Dune"
                },
                "aversions": ["crowds"],
                "lifestyle": {
                    "diet": "omnivore",
                    "sleep_schedule": "early bird"
                }
            }
        }"#;
        let entity = parse_aieos_json(json).unwrap();
        let prompt = entity.to_system_prompt();
        assert!(prompt.contains("### Interests & Lifestyle"));
        assert!(prompt.contains("coding, gaming"));
        assert!(prompt.contains("music: Jazz"));
        assert!(prompt.contains("book: Dune"));
        assert!(prompt.contains("crowds"));
        assert!(prompt.contains("diet: omnivore"));
    }
    #[test]
    fn null_values_handled() {
        // JSON with explicit nulls
        let json = r#"{
            "identity": {
                "names": { "first": null, "last": "Smith" }
            }
        }"#;
        let entity = parse_aieos_json(json).unwrap();
        assert_eq!(entity.full_name(), Some("Smith".to_string()));
    }

    #[test]
    fn extra_fields_ignored() {
        // JSON with unknown fields should be ignored (forward compatibility)
        let json = r#"{
            "identity": {
                "names": { "first": "Test" },
                "unknown_field": "should be ignored",
                "another_unknown": { "nested": true }
            },
            "future_section": { "data": 123 }
        }"#;
        let entity = parse_aieos_json(json).unwrap();
        assert_eq!(entity.display_name(), "Test");
    }

    #[test]
    fn case_insensitive_format_matching() {
        // This tests the config format matching in channels/mod.rs
        // Here we just verify the entity parses correctly
        let json = r#"{"identity": {"names": {"first": "CaseTest"}}}"#;
        let entity = parse_aieos_json(json).unwrap();
        assert_eq!(entity.display_name(), "CaseTest");
    }

    #[test]
    fn emotional_triggers_parsed() {
        let json = r#"{
            "psychology": {
                "emotional_profile": {
                    "base_mood": "optimistic",
                    "volatility": 0.3,
                    "resilience": "high",
                    "triggers": {
                        "joy": ["helping others", "learning"],
                        "anger": ["injustice"],
                        "sadness": ["loss"]
                    }
                }
            }
        }"#;
        let entity = parse_aieos_json(json).unwrap();
        let psych = entity.psychology.unwrap();
        let emotional = psych.emotional_profile.unwrap();
        assert_eq!(emotional.base_mood, Some("optimistic".to_string()));
        assert_eq!(emotional.triggers.as_ref().unwrap().joy.len(), 2);
    }
    #[test]
    fn idiosyncrasies_parsed() {
        let json = r#"{
            "psychology": {
                "idiosyncrasies": {
                    "phobias": ["heights"],
                    "obsessions": ["organization"],
                    "tics": ["tapping fingers"]
                }
            }
        }"#;
        let entity = parse_aieos_json(json).unwrap();
        let psych = entity.psychology.unwrap();
        let idio = psych.idiosyncrasies.unwrap();
        assert_eq!(idio.phobias, vec!["heights"]);
        assert_eq!(idio.obsessions, vec!["organization"]);
    }

    #[test]
    fn tts_config_parsed() {
        let json = r#"{
            "linguistics": {
                "voice": {
                    "tts_config": {
                        "provider": "elevenlabs",
                        "voice_id": "abc123",
                        "stability": 0.7,
                        "similarity_boost": 0.8
                    },
                    "accent": {
                        "region": "British",
                        "strength": 0.5
                    }
                }
            }
        }"#;
        let entity = parse_aieos_json(json).unwrap();
        let ling = entity.linguistics.unwrap();
        let voice = ling.voice.unwrap();
        assert_eq!(
            voice.tts_config.as_ref().unwrap().provider,
            Some("elevenlabs".to_string())
        );
        assert_eq!(
            voice.accent.as_ref().unwrap().region,
            Some("British".to_string())
        );
    }
}


@@ -2,7 +2,7 @@
//!
//! Supports multiple identity formats:
//! - **AIEOS** (AI Entity Object Specification v1.1) — JSON-based portable identity
//! - **`OpenClaw`** (default) — Markdown files (IDENTITY.md, SOUL.md, etc.)

pub mod aieos;


@@ -12,6 +12,7 @@ use std::path::Path;
/// Paths that MUST be excluded from Docker build context (security/performance)
const MUST_EXCLUDE: &[&str] = &[
    ".git",
    ".githooks",
    "target",
    "docs",
    "examples",
@@ -22,10 +23,10 @@ const MUST_EXCLUDE: &[&str] = &[
    "*.db-journal",
    ".DS_Store",
    ".github",
    "deny.toml",
    "LICENSE",
    ".env",
    ".tmp_*",
];

/// Paths that MUST NOT be excluded (required for build)
@@ -299,20 +300,24 @@ fn dockerignore_pattern_matching_edge_cases() {
    // Test the pattern matching logic itself
    let patterns = vec![
        ".git".to_string(),
        ".githooks".to_string(),
        "target".to_string(),
        "*.md".to_string(),
        "*.db".to_string(),
        ".tmp_*".to_string(),
        ".env".to_string(),
    ];

    // Should match
    assert!(is_excluded(&patterns, ".git"));
    assert!(is_excluded(&patterns, ".git/config"));
    assert!(is_excluded(&patterns, ".githooks"));
    assert!(is_excluded(&patterns, "target"));
    assert!(is_excluded(&patterns, "target/debug/build"));
    assert!(is_excluded(&patterns, "README.md"));
    assert!(is_excluded(&patterns, "brain.db"));
    assert!(is_excluded(&patterns, ".tmp_todo_probe"));
    assert!(is_excluded(&patterns, ".env"));

    // Should NOT match
    assert!(!is_excluded(&patterns, "src"));