The bot no longer shells out to `opencode run`. Instead it POSTs directly to the OpenAI-compatible /chat/completions endpoint that llama-server exposes on halo.hoyer.tail:8000. This removes the per-request Bun/sqlite cold-start overhead, drops the pkgs.opencode runtime dependency, and eliminates the ExecStartPre dance that materialized config.json into the service's $HOME.

Conversation history is now stored as a proper OpenAI `messages` list with system/user/assistant roles, instead of the XML blob that was inlined into a single `opencode run` argument.

The interactive opencode setup (config/opencode/config.json) is unchanged; only the bot stops depending on it. The module gains a `modelBaseUrl` option, and `model` is now the bare model name (`halo-8000`) without the provider/ prefix that the opencode CLI required.
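A minimal sketch of the new request path, assuming the endpoint above speaks plain HTTP and accepts the standard chat-completions payload; the names here (`BASE_URL`, `MODEL`, `chat`) are illustrative, not the ones in bot.py:

```python
# Sketch only: names and the http:// scheme are assumptions, not bot.py's code.
import json
import urllib.request

BASE_URL = "http://halo.hoyer.tail:8000"  # what the module's modelBaseUrl points at
MODEL = "halo-8000"                       # bare model name, no provider/ prefix

# History is a plain OpenAI-style messages list, one dict per turn.
messages = [
    {"role": "system", "content": "You are a helpful bot."},
]

def chat(messages: list[dict], user_text: str) -> str:
    """Append the user turn, POST the full history, record and return the reply."""
    messages.append({"role": "user", "content": user_text})
    body = json.dumps({"model": MODEL, "messages": messages}).encode()
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    # Keeping the assistant turn in the list is what replaces the old XML blob.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Because each request resends the whole `messages` list, the server stays stateless and nothing has to be materialized into the service's $HOME between turns.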
Files:
- bot.py
- default.nix
- module.nix