
Infrastructure

Shared packages

The packages/ folder contains reusable libraries shared across applications:

| Package | Description |
| --- | --- |
| eslint-config | Shared ESLint configuration |
| ts-config | Shared TypeScript base configuration |
| test-utils | Testing utilities (mock factories, helpers) |
| shared | Zod schemas, API contracts, utility functions |
| playwright | End-to-end browser tests |

Architecture note: Organization management (CRUD, members, invitations) and access control (roles, permissions) are handled directly by BetterAuth's Organization plugin within the auth module. Domain-specific extensions (projects, quotas, custom resources) are meant to be added by the consuming application, not the template.

Docker services

The docker/ folder contains two compose files:

  • docker-compose.dev.yml — development stack with hot-reload (docker compose watch)
  • docker-compose.prod.yml — production-like stack with pre-built images

Services

| Service | Image | Port (host) | Description |
| --- | --- | --- | --- |
| api | template-monorepo-ts/api | 8081 | Fastify API (dev: watch mode, prod: bundled) |
| docs | template-monorepo-ts/docs | 8082 | VitePress documentation site |
| db | postgres:17.9 | 5432 | Main application PostgreSQL database |
| redis | redis:7.4-bookworm | 6379 | Redis session store |
| migrate | template-monorepo-ts/api (migrate stage) | - | One-shot Prisma migration runner |
| keycloak-db | postgres:17.9 | - | Dedicated Keycloak PostgreSQL database |
| keycloak | keycloak/keycloak:26.5.4 | 8084 | Keycloak identity provider |
| keycloak-init | keycloak/keycloak:26.5.4 | - | One-shot init container: sets master realm sslRequired=none via kcadm.sh |
| otel-collector | otel/opentelemetry-collector-contrib | 4317, 4318 | OTel Collector (OTLP gRPC + HTTP) |
| tempo | grafana/tempo:2.10.1 | - | Distributed tracing backend |
| prometheus | prom/prometheus:3.10.0 | 9090 | Metrics storage and query |
| grafana | grafana/grafana:12.4.0 | 8083 | Observability dashboards |

Startup order (dev)

```txt
keycloak-db ──► keycloak ──► keycloak-init (exits 0)
db ───────────► migrate ───► api
redis ─────────────────────► api
```
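
In compose terms, this ordering is typically expressed with `depends_on` conditions; a minimal sketch (service and healthcheck names assumed here, not copied from the actual compose file):

```yaml
# Sketch only: the startup graph above expressed as depends_on conditions
api:
  depends_on:
    migrate:
      condition: service_completed_successfully  # one-shot migration must exit 0
    redis:
      condition: service_healthy
migrate:
  depends_on:
    db:
      condition: service_healthy
keycloak-init:
  depends_on:
    keycloak:
      condition: service_healthy
```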

Keycloak setup

Keycloak runs in start-dev mode backed by a dedicated PostgreSQL instance. On first boot, the realm export in docker/keycloak/realm-export.json is imported automatically (--import-realm). A keycloak-init service runs once after Keycloak becomes healthy and patches the master realm to disable the SSL requirement (sslRequired=none), which is required when running without TLS in development.

Notes:

  • This init-container pattern is the standard approach for Keycloak 26+ — there is no env var or CLI flag to control sslRequired on the master realm.
  • In production, Keycloak runs in start --optimized mode and TLS is expected to be terminated at the reverse proxy level.
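
The init container's patch boils down to two kcadm.sh calls; a hedged sketch (server URL and credential variables are placeholders, not the template's actual values):

```sh
# Authenticate against the master realm (placeholder credentials)
/opt/keycloak/bin/kcadm.sh config credentials \
  --server http://keycloak:8080 --realm master \
  --user "$KC_ADMIN_USER" --password "$KC_ADMIN_PASSWORD"

# Disable the SSL requirement on the master realm
/opt/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE
```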

Observability

The template includes a full observability stack based on OpenTelemetry for both traces and metrics.

Architecture

```txt
API (OTel SDK) → OTel Collector → Prometheus (metrics) → Grafana (dashboards)
                                → Tempo (traces) ──────→ Grafana (dashboards)
```

  • The API uses manual NodeTracerProvider and MeterProvider (replacing NodeSDK for Bun compatibility — Bun does not support require hooks, so auto-instrumentation is unavailable).
  • @fastify/otel provides HTTP request trace spans.
  • @prisma/instrumentation hooks into Prisma Client internals, producing prisma:client:operation, prisma:client:db_query and prisma:client:serialize spans.
  • A custom httpRequestDuration histogram records request latency via a Fastify onResponse hook.
  • The OTel Collector receives traces and metrics, generates Prometheus metrics from trace spans using the spanmetrics connector, and forwards traces to Tempo.
  • Grafana provides 3 pre-configured dashboards: API Overview, Prisma / Database and Traces Explorer.
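
The Collector pipeline described above can be sketched as a config fragment (section and exporter names are assumptions; the real configuration lives in docker/otel/):

```yaml
# Sketch: OTLP in, spanmetrics-derived metrics and Tempo traces out
connectors:
  spanmetrics: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/tempo, spanmetrics]   # forward traces, derive metrics
    metrics:
      receivers: [otlp, spanmetrics]
      exporters: [prometheus]
```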

Docker Compose endpoints

| Component | Internal hostname | Port |
| --- | --- | --- |
| OTel Collector | otel-collector | 4317 (gRPC), 4318 (HTTP) |
| Tempo | tempo | 3200 |
| Prometheus | prometheus | 9090 |
| Grafana | grafana | 3000 → host 8083 |

Kubernetes endpoints

When deployed via the Helm chart, internal service DNS names are stabilised with fullnameOverride:

| Component | K8s service name | Port |
| --- | --- | --- |
| OTel Collector | opentelemetry-collector | 4317 (gRPC), 4318 (HTTP) |
| Tempo | tempo | 3200 |
| Prometheus | auto-provisioned by kube-prometheus-stack | 9090 |
| Grafana | auto-provisioned by kube-prometheus-stack | 80 |

Set the following environment variable in the API deployment (via api.env or api.envSecret in helm/values.yaml):

```txt
OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4318
```

Grafana datasources & dashboards

In Docker Compose, datasources and dashboards are mounted from docker/otel/grafana/.

In Kubernetes, the Helm chart:

  • Provisions the Prometheus datasource automatically via kube-prometheus-stack (uid: prometheus).
  • Provisions the Tempo datasource via kube-prometheus-stack.grafana.additionalDataSources (uid: tempo).
  • Creates dashboard ConfigMaps from helm/files/dashboards/ (mirroring docker/otel/grafana/dashboards/). The kube-prometheus-stack Grafana sidecar picks up any ConfigMap labelled grafana_dashboard: "1" automatically.

Keep helm/files/dashboards/ and docker/otel/grafana/dashboards/ in sync when modifying dashboards.
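
One way to do that sync and verify it, sketched here on temporary folders standing in for docker/otel/grafana/dashboards and helm/files/dashboards (the demo paths are placeholders):

```shell
# Stand-ins for docker/otel/grafana/dashboards (src) and helm/files/dashboards (dst)
src=$(mktemp -d); dst=$(mktemp -d)
echo '{"title":"API Overview"}' > "$src/api-overview.json"

# Sync step: copy the dashboard JSON files, then check both trees match
cp "$src"/*.json "$dst"/
diff -r "$src" "$dst" && echo "dashboards in sync"
```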

Notes:

  • OTel configuration files are located in docker/otel/.
  • Grafana dashboards are provisioned from docker/otel/grafana/dashboards/ (Docker) and helm/files/dashboards/ (Kubernetes).

Tests

Unit tests are run using Vitest, which is largely API-compatible with Jest but faster, as it runs on top of Vite. Tests are co-located with source files (*.spec.ts).

End-to-end tests are powered by Playwright and live in the ./packages/playwright folder.

Notes: Test execution may require some packages to be built first. Pipeline dependencies are described in the turbo.json file.

Docs

Documentation is written in the ./apps/docs folder using VitePress, a static site generator built on Vite and Vue that parses .md files into a documentation website.

CI/CD

Default GitHub Actions workflows are ready to use. The main CI workflow runs on pull requests:

| Description | File |
| --- | --- |
| Lint | lint.yml |
| Unit tests (with optional code quality scan) [1] | tests-unit.yml |
| Build application images [2] | build.yml |
| End-to-end tests OR deployment tests [3] | tests-e2e.yml / tests-deploy.yml |
| Vulnerability scan [4] | scan.yml |

Notes:

  • [1] Runs code quality analysis using the SonarQube scanner, only if the SONAR_HOST_URL, SONAR_TOKEN and SONAR_PROJECT_KEY secrets are configured.
  • [2] Builds application images tagged pr-<pr_number> and pushes them to a registry.
  • [3] Runs e2e tests if changes occur on apps, dependencies or helm; otherwise runs deployment tests.
  • [4] Runs only if changes occur in apps, packages or .github folders and the base branch is main or develop.

The CD workflow (cd.yml) publishes releases using release-please-action, which parses Git history following Conventional Commits to generate changelogs and version numbers (see Semantic Versioning):

| Description | File |
| --- | --- |
| Create release pull request / create Git tag and GitHub release | release.yml |
| Build application images and push them to a registry | build.yml |

Notes: Uncomment the on: push trigger in cd.yml to automatically create the new release PR on merge into the main branch.

Build

All Docker images are built in parallel using the matrix/docker.json file. Options are available for multi-arch builds with or without QEMU (see build.yml).

The CI builds three images from the matrix:

  • api — production runtime (distroless, minimal)
  • api-migrate — Prisma migration runner (used as init container in Kubernetes / dependency service in docker-compose)
  • docs — documentation static site
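
Both API images come from the same Dockerfile via multi-stage targets; a hedged sketch (build context and target name are assumptions, check build.yml and the Dockerfile for the real values):

```sh
# Production runtime image
docker build -t template-monorepo-ts/api .

# Migration runner, built from the Dockerfile's migrate stage
docker build --target migrate -t template-monorepo-ts/api-migrate .
```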

Cache

The template uses caching for Bun, Turbo and Docker to improve CI/CD speed. Cache is automatically deleted when the associated pull request is closed or merged (see cache.yml).

Security

Trivy scans are performed on each PR and reports are uploaded to the GitHub Code Scanning tool using SARIF exports, with additional templates available in the ./ci/trivy folder.

Preview

Application preview can be enabled using the ArgoCD PR generator. When a pull request is tagged with the preview label, a preview deployment is created using images tagged pr-<pr_number>.

To activate this feature:

  1. Create a GitHub App so ArgoCD can access the repository and receive webhooks.
  2. Deploy an ApplicationSet based on this template.
  3. Create GitHub Actions environment variable templates: API_TEMPLATE_URL (https://api.pr-<pr_number>.domain.com) and DOCS_TEMPLATE_URL (https://docs.pr-<pr_number>.domain.com).

Deployment

Helm chart

An example Helm chart is provided in the ./helm folder to facilitate Kubernetes deployment. Adding a new service requires:

  1. Copy the API templates folder:

     ```sh
     cp -R ./helm/templates/api ./helm/templates/<service_name>
     ```

  2. Replace references in the new templates:

     ```sh
     find ./helm/templates/<service_name> -type f -exec perl -pi -e 's|\.Values\.api|\.Values\.<service_name>|g' {} \;
     find ./helm/templates/<service_name> -type f -exec perl -pi -e 's|\.template\.api|\.template\.<service_name>|g' {} \;
     ```

  3. Copy and rename the helper functions in ./helm/templates/_helpers.tpl.
  4. Copy and rename the values block in ./helm/values.yaml.

Notes: Consider moving the ./helm directory to a dedicated repository so it can be used as a versioned Helm registry.

GitHub templates

GitHub community templates are already set up and only need to be updated for your project: