# Production Deployment

Full guide for deploying the knowledge graph system in production.

## Overview

Production deployment differs from quick start:

- Pre-built container images from GitHub Container Registry (GHCR)
- Headless (non-interactive) initialization
- HTTPS with real certificates
- Proper hostname configuration for OAuth
- GPU acceleration configured explicitly
## Prerequisites
- Linux server (Ubuntu 20.04+ or Debian 11+ recommended)
- 16GB+ RAM (8GB minimum)
- NVIDIA GPU recommended for faster extraction
- A domain name (for HTTPS)
- DNS control (for certificate issuance)
Docker will be installed automatically if not present.
## Recommended: Standalone Installer
The easiest way to deploy to a production server:
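As a sketch, running the install script with no flags should drop you into the interactive flow — the script URL is taken from the headless example below; the flagless interactive behavior is assumed from context:

```shell
curl -fsSL https://raw.githubusercontent.com/aaronsb/knowledge-graph-system/main/install.sh | bash
```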
This interactive installer:

- Installs Docker if needed
- Downloads pre-built images from GHCR
- Generates secure secrets
- Configures SSL/HTTPS
- Sets up admin user and AI provider
## Headless Installer
For fully automated deployments:
```bash
curl -fsSL https://raw.githubusercontent.com/aaronsb/knowledge-graph-system/main/install.sh | bash -s -- \
  --hostname kg.example.com \
  --ssl letsencrypt \
  --ssl-email admin@example.com \
  --ai-provider anthropic \
  --ai-key "$ANTHROPIC_API_KEY" \
  --gpu nvidia
```
SSL options:

- `--ssl offload` - HTTP only (behind a reverse proxy that handles SSL)
- `--ssl selfsigned` - Generate a self-signed certificate
- `--ssl letsencrypt` - Auto-generate via Let's Encrypt (requires `--ssl-email`)
- `--ssl manual` - Use existing certificates (requires `--ssl-cert` and `--ssl-key`)
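For example, `manual` mode might be invoked as follows — the certificate paths are illustrative placeholders, and the script URL is reused from the headless example above:

```shell
curl -fsSL https://raw.githubusercontent.com/aaronsb/knowledge-graph-system/main/install.sh | bash -s -- \
  --hostname kg.example.com \
  --ssl manual \
  --ssl-cert /etc/ssl/kg/fullchain.pem \
  --ssl-key /etc/ssl/kg/privkey.pem
```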
### After Installation
```bash
cd ~/knowledge-graph      # Default install location
./operator.sh status      # Verify everything is running
./operator.sh logs api    # Check API logs
./operator.sh shell       # Configuration shell
```
## Alternative: Git Clone + Operator
If you prefer to work from the git repository:
```bash
git clone https://github.com/aaronsb/knowledge-graph-system.git
cd knowledge-graph-system
./operator.sh init --headless ...
```
## Headless Initialization
```bash
./operator.sh init --headless \
  --container-prefix=kg \
  --image-source=ghcr \
  --gpu=nvidia \
  --web-hostname=kg.example.com \
  --ai-provider=anthropic \
  --ai-model=claude-sonnet-4 \
  --ai-key="$ANTHROPIC_API_KEY"
```
### Required Parameters

| Parameter | Description |
|---|---|
| `--headless` | Enable non-interactive mode |
### Infrastructure Parameters

| Parameter | Values | Default | Description |
|---|---|---|---|
| `--image-source` | `local`, `ghcr` | `local` | Where to get container images |
| `--gpu` | `auto`, `nvidia`, `amd`, `amd-host`, `mac`, `cpu` | `auto` | GPU acceleration mode |
| `--container-prefix` | `kg`, `knowledge-graph` | `knowledge-graph` | Container name prefix |
| `--compose-file` | path | `docker-compose.yml` | Base compose file |
### Web Configuration

| Parameter | Description |
|---|---|
| `--web-hostname` | Public hostname for web access (e.g., `kg.example.com`) |

The hostname is used for:

- OAuth redirect URIs
- API URL in frontend configuration
- SSL certificate common name
### AI Configuration

| Parameter | Description |
|---|---|
| `--ai-provider` | `openai`, `anthropic`, or `openrouter` |
| `--ai-model` | Model name (e.g., `gpt-4o`, `claude-sonnet-4`) |
| `--ai-key` | API key for the provider |
| `--skip-ai-config` | Skip AI configuration entirely |
### Other Options

| Parameter | Description |
|---|---|
| `--password-mode` | `random` (secure) or `simple` (dev defaults) |
| `--container-mode` | `regular` or `dev` (hot reload) |
| `--skip-cli` | Skip CLI installation |
## GPU Configuration

### NVIDIA GPU
Requires NVIDIA Container Toolkit installed on the host.
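On Ubuntu/Debian, the toolkit is typically installed and wired into Docker roughly as follows — a sketch based on NVIDIA's documented steps, assuming the NVIDIA apt repository is already configured on the host:

```shell
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker   # register the nvidia runtime with Docker
sudo systemctl restart docker

# Verify the GPU is visible from inside a container
docker run --rm --gpus all ubuntu nvidia-smi
```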
### AMD GPU (ROCm)
Uses ROCm PyTorch wheels inside the container.
### AMD GPU (Host ROCm)
Uses ROCm installed on the host system.
### CPU Only
No GPU acceleration. Slower but works everywhere.
## HTTPS Configuration

### Using Let's Encrypt with DNS Validation
1. Install acme.sh on the host.
2. Configure your DNS provider (example: Porkbun).
3. Issue the certificate.
4. Install to a location the container can access.
5. Configure nginx - edit `docker/nginx.prod.conf`.
6. Mount certificates in compose - `docker-compose.prod.yml` mounts them into the container.
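As a sketch of the nginx step, the TLS directives in `docker/nginx.prod.conf` would point at the mounted certificate paths — the paths and server block shown here are illustrative assumptions, not the shipped config:

```nginx
server {
    listen 443 ssl;
    server_name kg.example.com;

    # Paths as seen inside the container; adjust to match the compose mount
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
}
```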
### Certificate Renewal
acme.sh sets up automatic renewal via cron. After renewal, reload nginx:
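One common pattern is to let acme.sh handle the reload itself by registering a reload command when installing the certificate. In this sketch, the destination paths follow the certs location from the Data Locations table, and the container name `kg-nginx` is an assumption — adjust it to match your container prefix:

```shell
acme.sh --install-cert -d kg.example.com \
  --key-file       /srv/docker/data/knowledge-graph/certs/privkey.pem \
  --fullchain-file /srv/docker/data/knowledge-graph/certs/fullchain.pem \
  --reloadcmd      "docker exec kg-nginx nginx -s reload"
```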
## Example: Full Production Deployment
```bash
# On your production server
cd ~/knowledge-graph-system

# Set environment variables
export ANTHROPIC_API_KEY="sk-ant-..."

# Initialize with all production settings
./operator.sh init --headless \
  --container-prefix=kg \
  --image-source=ghcr \
  --gpu=nvidia \
  --web-hostname=kg.example.com \
  --ai-provider=anthropic \
  --ai-model=claude-sonnet-4 \
  --ai-key="$ANTHROPIC_API_KEY"

# Verify everything is running
./operator.sh status

# Check the web interface
curl -I https://kg.example.com
```
## Secrets and Security

### Generated Secrets

During initialization, these secrets are generated and stored in `.env`:
| Secret | Purpose |
|---|---|
| `ENCRYPTION_KEY` | Encrypts API keys at rest |
| `OAUTH_SIGNING_KEY` | Signs JWT tokens |
| `POSTGRES_PASSWORD` | Database password |
| `INTERNAL_KEY_SERVICE_SECRET` | Service-to-service auth |
| `GARAGE_RPC_SECRET` | Storage cluster coordination |
Never commit `.env` to version control.
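Since `.env` holds these secrets in plaintext, it's also worth restricting its file permissions as general hardening — this is not an installer step, just standard practice (path assumes the default install location):

```shell
chmod 600 ~/knowledge-graph/.env   # readable and writable only by the owning user
```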
### AI Provider Keys
AI provider API keys (OpenAI, Anthropic) are stored encrypted in the database, not in `.env`. They can be configured after installation, or via the `--ai-key` flag during headless init.
## Data Locations
Default data paths (can be customized in compose files):
| Data | Location |
|---|---|
| PostgreSQL database | `/srv/docker/data/knowledge-graph/postgres` |
| Garage object storage | `/srv/docker/data/knowledge-graph/garage` |
| Model cache | `/srv/docker/data/knowledge-graph/hf_cache` |
| SSL certificates | `/srv/docker/data/knowledge-graph/certs` |
## Upgrading
See Upgrading for version upgrade procedures.
## Monitoring

### Container Health
```bash
./operator.sh status          # Quick status
docker ps                     # Detailed container info
./operator.sh logs api        # API logs
./operator.sh logs --follow   # Tail all logs
```
### API Health Check
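The exact endpoint isn't documented here; a common convention is a `/health` route, which could be probed like this (the endpoint path is an assumption):

```shell
curl -fsS https://kg.example.com/health

# Or check only the HTTP status code
curl -s -o /dev/null -w "%{http_code}\n" https://kg.example.com/health
```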
## Next Steps
- Configuration Reference - All settings explained
- Backup & Restore - Protect your data
- Troubleshooting - When things go wrong