nixcfg/systems/x86_64-linux/mx/nextcloud-opencode-bot/default.nix
Harald Hoyer 42c52bd87f refactor(mx): drive opencode bot via direct chat-completions API
The bot no longer shells out to `opencode run`. Instead it POSTs
directly to the OpenAI-compatible /chat/completions endpoint exposed by
llama-server on halo.hoyer.tail:8000. This removes the Bun/SQLite
cold-start overhead per request, drops the pkgs.opencode runtime
dependency, and eliminates the ExecStartPre dance that materialized
config.json into the service's $HOME.
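
For illustration, the request now has roughly this shape. This is a
hedged sketch, not the bot's actual code: the endpoint and model name
come from the options below, while the surrounding Python is invented
here.

    import json
    import urllib.request

    payload = {
        "model": "halo-8000",  # bare model name, no provider/ prefix
        "messages": [{"role": "user", "content": "hello"}],
    }
    req = urllib.request.Request(
        "http://halo.hoyer.tail:8000/v1/chat/completions",  # modelBaseUrl + path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]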

Conversation history is now stored as a proper OpenAI `messages` list
with system/user/assistant roles, instead of the XML blob that was
inlined into a single `opencode run` argument. The interactive
opencode setup (config/opencode/config.json) is unchanged; only the
bot stops depending on it.
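
Sketched with invented content, the stored history is just such a
role-tagged list, appended to on every round:

    # Hypothetical history; the roles follow the OpenAI schema,
    # the conversation content is made up.
    history = [
        {"role": "system", "content": "You are Halo."},
        {"role": "user", "content": "What is NixOS?"},
        {"role": "assistant", "content": "A Linux distribution built on Nix."},
    ]
    history.append({"role": "user", "content": "And llama-server?"})
    # ...POST the whole list as `messages`, then append the assistant
    # reply so the next request carries the full conversation.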

The module gains a `modelBaseUrl` option; `model` is now the bare model
name (`halo-8000`) without the `provider/` prefix that the opencode CLI
required.
2026-05-13 16:38:58 +02:00

35 lines
1.1 KiB
Nix

{ config, ... }:
{
  imports = [ ./module.nix ];

  services.nextcloud-opencode-bot = {
    enable = true;
    nextcloudUrl = "https://nc.hoyer.xyz";
    botSecretFile = config.sops.secrets."nextcloud-opencode-bot/secret".path;
    # OpenAI-compatible llama-server endpoint; model is the bare name
    modelBaseUrl = "http://halo.hoyer.tail:8000/v1";
    model = "halo-8000";
    botName = "Halo";
    allowedUsers = [ ];
  };

  sops.secrets."nextcloud-opencode-bot/secret" = {
    sopsFile = ../../../../.secrets/hetzner/nextcloud-opencode-bot.yaml;
    restartUnits = [ "nextcloud-opencode-bot.service" ];
    owner = "opencode-bot";
  };

  # Nginx location for Nextcloud to send webhooks to the bot
  services.nginx.virtualHosts."nc.hoyer.xyz".locations."/_opencode-bot/" = {
    proxyPass = "http://127.0.0.1:8086/";
    extraConfig = ''
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      # Only allow from localhost (Nextcloud on same server)
      allow 127.0.0.1;
      deny all;
    '';
  };
}