Infrastructure

Project structure

The monorepo is split into applications (deployable services) and shared packages (reusable libraries).
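Schematically (top-level folders as referenced throughout this page):

```txt
.
├─ apps/       # deployable services: api, docs, mcp
├─ packages/   # shared libraries: cli, shared, logger, ...
├─ docker/     # compose files and observability config
└─ helm/       # example Kubernetes deployment chart
```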

Applications

| Application | Description |
| --- | --- |
| api | Fastify REST API with BetterAuth authentication |
| docs | VitePress documentation site |
| mcp | MCP server — expose API tools to LLMs via stdio & HTTP transport |

Shared packages

| Package | Description |
| --- | --- |
| cli | tmts CLI — API client with cross-platform native build |
| eslint-config | Shared ESLint configuration |
| logger | Shared Pino-based structured logger for all apps and packages |
| ts-config | Shared TypeScript base configuration |
| test-utils | Testing utilities (mock factories, helpers) |
| shared | Zod schemas, API contracts, utility functions |
| playwright | End-to-end browser tests |

Architecture note: Organization management (CRUD, members, invitations) and access control (roles, permissions) are handled directly by BetterAuth's Organization plugin within the auth module. Domain-specific extensions (projects, quotas, custom resources) are meant to be added by the consuming application, not the template.

Docker services

The docker/ folder contains two compose files:

  • docker-compose.dev.yml — development stack with hot-reload (docker compose watch)
  • docker-compose.prod.yml — production-like stack with pre-built images

Docker images

Each application has its own Dockerfile. The migration runner uses a dedicated lightweight image separate from the API:

| Image | Base | Dockerfile | Purpose |
| --- | --- | --- | --- |
| api | bun:distroless | apps/api/Dockerfile | Production API runtime (multi-stage) |
| api-migrate | bun:alpine | apps/api/Dockerfile.migrate | Prisma migration runner (init container) |
| docs | nginx-unprivileged:alpine-slim | apps/docs/Dockerfile | Static documentation site |
| cli | distroless/cc-debian12 | packages/cli/Dockerfile | CLI native binary |
| mcp | bun:distroless | apps/mcp/Dockerfile | MCP server |

Migration runner

The api-migrate image is a standalone, lightweight container that runs prisma migrate deploy and exits. It is used as:

  • An init container in Kubernetes (runs before the API pod starts)
  • A dependency service in Docker Compose (API depends on migrate completing successfully)
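A minimal Compose sketch of that dependency gating (service names and the credential values are placeholder assumptions, not the template's actual config):

```yaml
services:
  migrate:
    image: template-monorepo-ts/api-migrate
    environment:
      DB__URL: postgres://user:pass@db:5432/app   # placeholder URL
  api:
    depends_on:
      migrate:
        condition: service_completed_successfully  # migrate must exit 0
```

The `service_completed_successfully` condition makes Compose wait for the one-shot migration container to finish with exit code 0 before starting the API.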

The image extracts the Prisma version from apps/api/package.json at build time to stay in sync without hard-coding. It uses a dedicated non-root migrate user and requires only the DB__URL environment variable.
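A rough sketch of that version-extraction step (the JSON shape and the /tmp path are illustrative assumptions; the real Dockerfile.migrate may differ):

```shell
# Simulate apps/api/package.json with a pinned Prisma version (sample data)
mkdir -p /tmp/demo-api
cat > /tmp/demo-api/package.json <<'EOF'
{
  "name": "api",
  "devDependencies": { "prisma": "6.1.0" }
}
EOF

# jq-free extraction, usable in a Dockerfile RUN step at build time
PRISMA_VERSION=$(sed -n 's/.*"prisma": *"\([^"]*\)".*/\1/p' /tmp/demo-api/package.json)
echo "$PRISMA_VERSION"  # → 6.1.0
```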

Services

| Service | Image | Port (host) | Description |
| --- | --- | --- | --- |
| api | template-monorepo-ts/api | 8081 | Fastify API (dev: watch mode, prod: bundled) |
| docs | template-monorepo-ts/docs | 8082 | VitePress documentation site |
| mcp | template-monorepo-ts/mcp | 3100 | MCP server (opt-in via mcp profile) |
| db | postgres:17.9 | 5432 | Main application PostgreSQL database |
| redis | redis:7.4-bookworm | 6379 | Redis session store |
| migrate | template-monorepo-ts/api-migrate | n/a | One-shot Prisma migration runner (Dockerfile.migrate) |
| keycloak-db | postgres:17.9 | n/a | Dedicated Keycloak PostgreSQL database |
| keycloak | keycloak/keycloak:26.5.4 | 8084 | Keycloak identity provider |
| keycloak-init | keycloak/keycloak:26.5.4 | n/a | One-shot init container: sets master realm sslRequired=none via kcadm.sh |
| otel-collector | otel/opentelemetry-collector-contrib | 4317, 4318 | OTel Collector (OTLP gRPC + HTTP) |
| tempo | grafana/tempo:2.10.1 | n/a | Distributed tracing backend |
| prometheus | prom/prometheus:3.10.0 | 9090 | Metrics storage and query |
| grafana | grafana/grafana:12.4.0 | 8083 | Observability dashboards |

Startup order (dev)

Keycloak setup

Keycloak runs in start-dev mode backed by a dedicated PostgreSQL instance. On first boot, the realm export in docker/keycloak/realm-export.json is imported automatically (--import-realm). A keycloak-init service runs once after Keycloak becomes healthy and patches the master realm to disable the SSL requirement (sslRequired=none), which is required when running without TLS in development.

Notes:

  • This init-container pattern is the standard approach for Keycloak 26+ — there is no env var or CLI flag to control sslRequired on the master realm.
  • In production, Keycloak runs in start --optimized mode and TLS is expected to be terminated at the reverse proxy level.
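The init step can be sketched as a Compose service (entrypoint and credential variable names are illustrative assumptions; see the docker/ folder for the actual definition):

```yaml
keycloak-init:
  image: keycloak/keycloak:26.5.4
  depends_on:
    keycloak:
      condition: service_healthy
  entrypoint: ["/bin/bash", "-c"]
  command:
    - >-
      /opt/keycloak/bin/kcadm.sh config credentials --server http://keycloak:8080 --realm master --user "$${KC_ADMIN}" --password "$${KC_ADMIN_PASSWORD}"
      && /opt/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE
```

The doubled `$$` escapes the variables from Compose interpolation so they are resolved inside the container.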

Observability

The template includes a full observability stack based on OpenTelemetry for both traces and metrics.

Architecture

  • The API uses manual NodeTracerProvider and MeterProvider (replacing NodeSDK for Bun compatibility — Bun does not support require hooks, so auto-instrumentation is unavailable).
  • @fastify/otel provides HTTP request trace spans.
  • @prisma/instrumentation hooks into Prisma Client internals, producing prisma:client:operation, prisma:client:db_query and prisma:client:serialize spans.
  • A custom httpRequestDuration histogram records request latency via a Fastify onResponse hook.
  • The OTel Collector receives traces and metrics, generates Prometheus metrics from trace spans using the spanmetrics connector, and forwards traces to Tempo.
  • Grafana provides 3 pre-configured dashboards: API Overview, Prisma / Database and Traces Explorer.

Docker Compose endpoints

| Component | Internal hostname | Port |
| --- | --- | --- |
| OTel Collector | otel-collector | 4317 (gRPC), 4318 (HTTP) |
| Tempo | tempo | 3200 |
| Prometheus | prometheus | 9090 |
| Grafana | grafana | 3000 → host 8083 |

Kubernetes endpoints

When deployed via the Helm chart, internal service DNS names are stabilised with fullnameOverride:

| Component | K8s service name | Port |
| --- | --- | --- |
| OTel Collector | opentelemetry-collector | 4317 (gRPC), 4318 (HTTP) |
| Tempo | tempo | 3200 |
| Prometheus | auto-provisioned by kube-prometheus-stack | 9090 |
| Grafana | auto-provisioned by kube-prometheus-stack | 80 |

Set the following environment variable in the API deployment (via api.env or api.envSecret in helm/values.yaml):

```txt
OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4318
```
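In helm/values.yaml this might look like the following (abridged sketch; the exact shape of the `env` block is an assumption):

```yaml
api:
  env:
    OTEL_EXPORTER_OTLP_ENDPOINT: http://opentelemetry-collector:4318
```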

Grafana datasources & dashboards

In Docker Compose, datasources and dashboards are mounted from docker/otel/grafana/.

In Kubernetes, the Helm chart:

  • Provisions the Prometheus datasource automatically via kube-prometheus-stack (uid: prometheus).
  • Provisions the Tempo datasource via kube-prometheus-stack.grafana.additionalDataSources (uid: tempo).
  • Creates dashboard ConfigMaps from helm/files/dashboards/ (mirroring docker/otel/grafana/dashboards/). The kube-prometheus-stack Grafana sidecar picks up any ConfigMap labelled grafana_dashboard: "1" automatically.
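For illustration, a dashboard ConfigMap the sidecar would pick up might look like this (name and dashboard body abridged/assumed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dashboard-api-overview
  labels:
    grafana_dashboard: "1"   # label the Grafana sidecar watches for
data:
  api-overview.json: |
    { "title": "API Overview", "panels": [] }
```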

Keep helm/files/dashboards/ and docker/otel/grafana/dashboards/ in sync when modifying dashboards.

Notes:

  • OTel configuration files are located in docker/otel/.
  • Grafana dashboards are provisioned from docker/otel/grafana/dashboards/ (Docker) and helm/files/dashboards/ (Kubernetes).

Tests

Unit tests are run using Vitest, which is API-compatible with Jest but faster, since it runs on top of Vite. Tests are co-located with source files (*.spec.ts).

End-to-end tests are powered by Playwright and live in the ./packages/playwright folder.

Notes: Test execution may require some packages to be built first. Pipeline dependencies are described in the turbo.json file.
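For example, a turbo.json along these lines makes tests depend on upstream builds (task names and schema version are assumptions; check the actual file):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": { "dependsOn": ["^build"], "outputs": ["dist/**"] },
    "test": { "dependsOn": ["^build"] }
  }
}
```

Here `^build` means "build every workspace dependency first"; Turborepo v1 uses a top-level `pipeline` key instead of `tasks`.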

Docs

Documentation is written in the ./apps/docs folder using VitePress, a static site generator built on Vite and Vue that parses .md files into a documentation website.

CI/CD

The CI/CD pipelines use reusable workflows from this-is-tobi/github-workflows@v0. The orchestrators (ci.yml, cd.yml) call these reusable workflows; the only local workflow is release-cli.yml which handles project-specific CLI binary compilation.

CI pipeline

The main CI workflow (ci.yml) runs on pull requests:

| Step | Workflow / Source |
| --- | --- |
| Lint commit messages | lint-commits.yml@v0 (reusable) |
| Lint JS/TS code | lint-js.yml@v0 (reusable) |
| Unit tests with coverage | test-vitest.yml@v0 (reusable) |
| SonarQube code quality scan [1] | scan-sonarqube.yml@v0 (reusable) |
| Build Docker images [2] | build-docker.yml@v0 (reusable) |
| Label PR on build | label-pr.yml@v0 (reusable) |
| End-to-end tests OR deployment tests [3] | test-playwright.yml@v0 / test-kube-deployment.yml@v0 (reusable) |
| Trivy vulnerability scan (images) [4] | scan-trivy.yml@v0 (reusable) |
| Trivy vulnerability scan (config) [4] | scan-trivy.yml@v0 (reusable) |

Notes:

  • [1] Runs code quality analysis using SonarQube scanner. Requires secrets SONAR_HOST_URL, SONAR_TOKEN, SONAR_PROJECT_KEY. The job uses continue-on-error and is skipped gracefully when secrets are not configured.
  • [2] Builds application images tagged pr-<pr_number> and pushes them to GHCR. Each image is built in its own matrix slot via the reusable build-docker.yml with built-in SLSA provenance and SBOM attestation enabled (PROVENANCE: true, SBOM: true). The attestation runs as an additional job inside the reusable workflow itself, which is necessary when using a matrix strategy — per-matrix outputs cannot be forwarded to a separate attest-docker.yml call. Requires id-token: write and attestations: write permissions.
  • [3] Runs e2e tests if changes occur in apps, packages or workflows; otherwise runs deployment tests. Uses reusable workflows: test-playwright.yml for Playwright browser tests and test-kube-deployment.yml for Kind-based Kubernetes deploy checks.
  • [4] Runs only if the base branch is main or develop. SARIF results are uploaded to the GitHub Security tab.

CD pipeline

The CD workflow (cd.yml) publishes releases using Release-please-action, which automatically parses Git history following Conventional Commits to build changelogs and version numbers (see Semantic Versioning):

| Step | Workflow / Source |
| --- | --- |
| Build CLI binaries [7] | release-cli.yml (local) |
| Create release (release-please) | release-app.yml@v0 (reusable) |
| Build Docker images + attest | build-docker.yml@v0 (reusable) |
| Publish CLI to NPM | release-npm.yml@v0 (reusable) |
| Bump Helm chart appVersion [6] | update-helm-chart.yml@v0 (reusable) |
| Publish Helm chart to OCI [5] | release-helm.yml@v0 (reusable) |

Notes:

  • Uncomment the on: push trigger in cd.yml to automatically create the new release PR on merge into the main branch.
  • Release-please automatically updates packages/cli/package.json version via extra-files to keep it in sync with the app version.
  • [5] release-helm runs after update-helm-chart completes. It uses chart-releaser to detect charts whose version tag doesn't exist yet and publishes them to GHCR as OCI artifacts.
  • [6] update-helm-chart runs only on app release. It bumps the chart's appVersion to the new app version, independently increments the chart version (patch bump), regenerates docs, and creates a PR. When that PR is merged, the next CD run picks it up via release-helm. The chart version is independent from the app version — the chart can also be bumped without an app release.
  • [7] build-cli runs unconditionally before the release step. It compiles cross-platform CLI binaries, generates SHA-256 checksums, and uploads them as a consolidated artifact (cli-release-assets). When a release is created, release-app.yml automatically attaches these assets to the GitHub release.
  • Docker images are built with built-in SLSA provenance and SBOM attestation enabled (PROVENANCE: true, SBOM: true). Attestation runs inside each matrix job, which is necessary because per-matrix outputs cannot be forwarded to a separate workflow.
  • Requires secrets: NPM_TOKEN for NPM publishing, GH_PAT for release auto-merge.
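The trigger mentioned in the first note is a standard workflow trigger; uncommented it would read roughly as follows (branch name assumed):

```yaml
on:
  push:
    branches: [main]
```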

Other workflows

| Workflow | Source | Description |
| --- | --- | --- |
| cache.yml | local | Cleans GitHub Actions cache and optionally GHCR images on PR close (uses clean-cache.yml@v0) |
| preview.yml | local | Posts preview environment links as PR comment (uses preview-comment.yml@v0) |

Build

Docker images are built using the reusable build-docker.yml workflow, once per image via a strategy.matrix:

  • api — production runtime (distroless, minimal)
  • api-migrate — Prisma migration runner (dedicated lightweight image built from apps/api/Dockerfile.migrate, used as init container in Kubernetes / dependency service in docker-compose)
  • docs — documentation static site
  • cli — CLI binary Docker image
  • mcp — MCP server Docker image

Each matrix slot enables built-in SLSA provenance and SBOM attestation (PROVENANCE: true, SBOM: true). The attestation job runs inside the reusable workflow after the image is built and merged, delegating to attest-docker.yml internally. This approach is required when using a matrix strategy, because GitHub Actions cannot expose per-matrix outputs to a separate attestation workflow call. The caller must grant id-token: write and attestations: write permissions.
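A hypothetical caller job illustrating that wiring (only PROVENANCE and SBOM are inputs documented here; the IMAGE input and job name are assumptions):

```yaml
jobs:
  build-docker:
    permissions:
      id-token: write      # OIDC token for SLSA provenance
      attestations: write  # attach attestations to the image
      packages: write      # push to GHCR
    strategy:
      matrix:
        image: [api, api-migrate, docs, cli, mcp]
    uses: this-is-tobi/github-workflows/.github/workflows/build-docker.yml@v0
    with:
      IMAGE: ${{ matrix.image }}
      PROVENANCE: true
      SBOM: true
```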

Cache

GitHub Actions cache is automatically deleted when the associated pull request is closed or merged. Optionally, GHCR images can be deleted by setting the repository variable CLEAN_IMAGES=true or using the manual dispatch input.

Security

Trivy scans are performed on each PR via the reusable scan-trivy.yml. Image scans and config scans run as separate jobs. SARIF reports are uploaded to the GitHub Security tab.

Preview

Application preview can be enabled using the ArgoCD PR generator. When a pull request is tagged with the preview label, a preview deployment is created using images tagged pr-<pr_number>.

To activate this feature:

  1. Create a GitHub App so ArgoCD can access the repository and receive webhooks.
  2. Deploy an ApplicationSet based on this template.
  3. Create GitHub Actions environment variable templates: API_TEMPLATE_URL (https://api.pr-<pr_number>.domain.com) and DOCS_TEMPLATE_URL (https://docs.pr-<pr_number>.domain.com).
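An abridged ApplicationSet sketch using the Pull Request generator (owner/repo are placeholders, and a real manifest also needs project, destination, and source.repoURL):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: preview
spec:
  generators:
    - pullRequest:
        github:
          owner: <owner>
          repo: <repo>
          labels: [preview]        # only PRs carrying the preview label
        requeueAfterSeconds: 120
  template:
    metadata:
      name: 'preview-pr-{{number}}'
    spec:
      source:
        helm:
          parameters:
            - name: api.image.tag
              value: 'pr-{{number}}'   # images tagged by the CI pipeline
```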

Using workflows locally

If you prefer to have all workflow definitions in your repository rather than referencing the external reusable workflows, you can copy them locally:

```sh
# Clone the reusable workflows at the pinned version
git clone --branch v0 --depth 1 https://github.com/this-is-tobi/github-workflows.git /tmp/github-workflows

# Copy the specific workflows used by this project
for wf in build-docker lint-js lint-commits test-vitest test-playwright test-kube-deployment scan-trivy scan-sonarqube clean-cache label-pr preview-comment release-app release-npm release-helm update-helm-chart; do
  cp "/tmp/github-workflows/.github/workflows/${wf}.yml" ./.github/workflows/
done

# Clean up
rm -rf /tmp/github-workflows
```

Then update the uses: references in ci.yml, cd.yml and cache.yml from:

```yaml
uses: this-is-tobi/github-workflows/.github/workflows/<workflow>.yml@v0
```

to:

```yaml
uses: ./.github/workflows/<workflow>.yml
```

Notes:

  • When using local copies, you are responsible for pulling upstream updates yourself.
  • The reusable workflows are versioned — pinning @v0 ensures stability. Check the releases page for updates.

Deployment

Helm chart

An example Helm chart is provided in the ./helm folder to facilitate Kubernetes deployment.

Adding a new service requires:

  1. Copy the API templates folder:

     ```sh
     cp -R ./helm/templates/api ./helm/templates/<service_name>
     ```

  2. Replace references in the new templates:

     ```sh
     find ./helm/templates/<service_name> -type f -exec perl -pi -e 's|\.Values\.api|\.Values\.<service_name>|g' {} \;
     find ./helm/templates/<service_name> -type f -exec perl -pi -e 's|\.template\.api|\.template\.<service_name>|g' {} \;
     ```
  3. Copy and rename the helper functions in ./helm/templates/_helpers.tpl.
  4. Copy and rename the values block in ./helm/values.yaml.

Notes: Consider moving the ./helm directory to a dedicated repository to use it as a versioned Helm registry.

GitHub templates

GitHub community templates are already set up and only need to be updated for your project: