Kubernetes

A Helm chart is on the v1.1 roadmap. Until then, a hand-rolled manifest covers the essentials.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: z4j-brain
spec:
  replicas: 1
  selector:
    matchLabels: { app: z4j-brain }
  template:
    metadata:
      labels: { app: z4j-brain }
    spec:
      containers:
        - name: brain
          image: z4jdev/z4j:v1.0.0
          ports: [{ containerPort: 7700 }]
          env:
            - name: Z4J_DATABASE_URL
              valueFrom: { secretKeyRef: { name: z4j-secrets, key: database-url } }
            - name: Z4J_SECRET
              valueFrom: { secretKeyRef: { name: z4j-secrets, key: app-secret } }
            - name: Z4J_SESSION_SECRET
              valueFrom: { secretKeyRef: { name: z4j-secrets, key: session-secret } }
            - name: Z4J_AUDIT_SECRET
              valueFrom: { secretKeyRef: { name: z4j-secrets, key: audit-secret } }
            - name: Z4J_PUBLIC_URL
              value: https://z4j.example.com
          readinessProbe:
            httpGet: { path: /api/v1/health, port: 7700 }
            periodSeconds: 10
          livenessProbe:
            httpGet: { path: /api/v1/health, port: 7700 }
            periodSeconds: 30
          resources:
            requests: { cpu: "200m", memory: "256Mi" }
            limits: { cpu: "2", memory: "2Gi" }
```

Plus a Service + Ingress per your cluster’s conventions.
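A minimal sketch of that Service and Ingress, assuming the public hostname from `Z4J_PUBLIC_URL` above and the `nginx` ingress class (the object names and TLS secret name are placeholders, not z4j conventions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: z4j-brain
spec:
  selector: { app: z4j-brain }
  ports:
    - port: 80
      targetPort: 7700
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: z4j-brain
spec:
  ingressClassName: nginx
  rules:
    - host: z4j.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: z4j-brain
                port: { number: 80 }
  tls:
    - hosts: [z4j.example.com]
      secretName: z4j-tls   # placeholder; issue via cert-manager or your own CA
```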

Your Ingress must allow WebSocket upgrades. Example (nginx-ingress):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```

Running more than one brain replica requires sticky session routing for the /ws endpoint (each agent pins to one brain pod). Sticky sessions + periodic affinity re-balance are planned for v1.1.
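If you do scale out before v1.1, ingress-nginx can provide cookie-based stickiness today (this is a generic ingress-nginx affinity sketch, not a z4j feature; the cookie name is arbitrary):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "z4j-affinity"
```

Note that cookie affinity pins browser sessions, not raw WebSocket clients that ignore cookies, so test against your agents before relying on it.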

Use a managed Postgres (Cloud SQL, RDS, Crunchy) or an operator (Zalando, CNPG). Do not run Postgres in a StatefulSet with local storage unless you really know what you’re doing.
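For the operator route, a minimal CloudNativePG cluster looks like this (the cluster name and storage size are illustrative choices, not z4j requirements):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: z4j-pg
spec:
  instances: 3          # one primary, two replicas
  storage:
    size: 10Gi
```

CNPG creates the credentials Secret and read/write Services for you; point `Z4J_DATABASE_URL` at the `-rw` Service.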

Inject Z4J_*_SECRET via Secret objects or external managers (Vault, AWS Secrets Manager, GCP Secret Manager). Do not hard-code.
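A plain Secret matching the `secretKeyRef` names used in the Deployment above might look like the following; the values (including the Postgres URL shape) are placeholders to replace, and in production you'd typically have Vault / External Secrets render this object instead of committing it:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: z4j-secrets
type: Opaque
stringData:
  database-url: postgres://z4j:CHANGE_ME@z4j-pg-rw:5432/z4j  # placeholder DSN
  app-secret: CHANGE_ME
  session-secret: CHANGE_ME
  audit-secret: CHANGE_ME
```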

  • Scrape /metrics with Prometheus (requires Z4J_METRICS_TOKEN header).
  • Ship stdout JSON logs with Fluent Bit / Vector.
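A hedged Prometheus scrape sketch for the first bullet, assuming the metrics token is accepted as a standard `Authorization: Bearer` header — if z4j expects a custom header name instead, adjust accordingly:

```yaml
scrape_configs:
  - job_name: z4j-brain
    metrics_path: /metrics
    authorization:
      type: Bearer
      credentials: CHANGE_ME   # value of Z4J_METRICS_TOKEN
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods labeled app=z4j-brain
      - source_labels: [__meta_kubernetes_pod_label_app]
        action: keep
        regex: z4j-brain
```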