refactor(mx): drive opencode bot via direct chat-completions API
The bot no longer shells out to `opencode run`. Instead it POSTs directly to the OpenAI-compatible /chat/completions endpoint exposed by llama-server on halo.hoyer.tail:8000. This removes the Bun/sqlite cold-start overhead per request, drops the pkgs.opencode runtime dependency, and eliminates the ExecStartPre dance that materialized config.json into the service's $HOME.

Conversation history is now stored as a proper OpenAI `messages` list with system/user/assistant roles, instead of the XML blob that was inlined into a single `opencode run` argument.

The interactive opencode setup (config/opencode/config.json) is unchanged; only the bot stops depending on it. The module gains a `modelBaseUrl` option, and `model` is now the bare model name (`halo-8000`) without the `provider/` prefix that the opencode CLI required.
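The request shape described above can be sketched as follows. This is a minimal, hypothetical illustration, not the bot's actual code: the system prompt, function name, and history handling are assumptions; only the base URL, model name, and role-tagged `messages` structure come from the commit.

```python
import json
from urllib import request

# Base URL from the module's `modelBaseUrl` option (llama-server's
# OpenAI-compatible API).
BASE_URL = "http://halo.hoyer.tail:8000/v1"

def build_chat_request(history, user_text,
                       system_prompt="You are Halo, a helpful bot."):
    """Assemble a POST to /chat/completions from role-tagged history.

    `history` is a list of {"role": ..., "content": ...} dicts — the
    structure that replaces the XML blob previously inlined into a
    single `opencode run` argument.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages += history
    messages.append({"role": "user", "content": user_text})
    payload = {"model": "halo-8000", "messages": messages}
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    [{"role": "assistant", "content": "Hi! How can I help?"}],
    "What changed in the last deploy?",
)
```

Because the history is plain JSON messages rather than one opaque CLI argument, each turn can be appended without re-serializing or escaping the whole conversation.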
parent aa3bc3c457
commit 42c52bd87f
3 changed files with 72 additions and 101 deletions
@@ -6,8 +6,8 @@
     enable = true;
     nextcloudUrl = "https://nc.hoyer.xyz";
     botSecretFile = config.sops.secrets."nextcloud-opencode-bot/secret".path;
-    opencodeConfig = ../../../../config/opencode/config.json;
-    model = "halo-8000/halo-8000";
+    modelBaseUrl = "http://halo.hoyer.tail:8000/v1";
+    model = "halo-8000";
     botName = "Halo";
     allowedUsers = [ ];
   };