# Infrastructure
## Shared packages

The `packages/` folder contains reusable libraries shared across applications:
| Package | Description |
|---|---|
| `eslint-config` | Shared ESLint configuration |
| `ts-config` | Shared TypeScript base configuration |
| `test-utils` | Testing utilities (mock factories, helpers) |
| `shared` | Zod schemas, API contracts, utility functions |
| `playwright` | End-to-end browser tests |
Architecture note: Organization management (CRUD, members, invitations) and access control (roles, permissions) are handled directly by BetterAuth's Organization plugin within the auth module. Domain-specific extensions (projects, quotas, custom resources) are meant to be added by the consuming application, not the template.
## Docker services

The `docker/` folder contains two compose files:

- `docker-compose.dev.yml` — development stack with hot-reload (`docker compose watch`)
- `docker-compose.prod.yml` — production-like stack with pre-built images
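For orientation, a hot-reload service wired for `docker compose watch` uses a `develop.watch` section. The sketch below is illustrative only; the paths, targets, and service layout are assumptions, not the template's actual compose file:

```yaml
# Hypothetical sketch of a service driven by `docker compose watch`
services:
  api:
    build:
      context: .
      target: dev
    develop:
      watch:
        - action: sync            # copy changed source files into the running container
          path: ./apps/api/src
          target: /app/apps/api/src
        - action: rebuild         # rebuild the image when dependency manifests change
          path: ./apps/api/package.json
```

The dev stack would then be started with `docker compose -f docker/docker-compose.dev.yml watch`.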
### Services
| Service | Image | Port (host) | Description |
|---|---|---|---|
| `api` | `template-monorepo-ts/api` | 8081 | Fastify API (dev: watch mode, prod: bundled) |
| `docs` | `template-monorepo-ts/docs` | 8082 | VitePress documentation site |
| `db` | `postgres:17.9` | 5432 | Main application PostgreSQL database |
| `redis` | `redis:7.4-bookworm` | 6379 | Redis session store |
| `migrate` | `template-monorepo-ts/api` (migrate stage) | — | One-shot Prisma migration runner |
| `keycloak-db` | `postgres:17.9` | — | Dedicated Keycloak PostgreSQL database |
| `keycloak` | `keycloak/keycloak:26.5.4` | 8084 | Keycloak identity provider |
| `keycloak-init` | `keycloak/keycloak:26.5.4` | — | One-shot init container: sets master realm `sslRequired=none` via `kcadm.sh` |
| `otel-collector` | `otel/opentelemetry-collector-contrib` | 4317, 4318 | OTel Collector (OTLP gRPC + HTTP) |
| `tempo` | `grafana/tempo:2.10.1` | — | Distributed tracing backend |
| `prometheus` | `prom/prometheus:3.10.0` | 9090 | Metrics storage and query |
| `grafana` | `grafana/grafana:12.4.0` | 8083 | Observability dashboards |
### Startup order (dev)

```
keycloak-db ──► keycloak ──► keycloak-init (exits 0)
db ───────────► migrate ───► api
redis ─────────────────────► api
```

### Keycloak setup

Keycloak runs in `start-dev` mode backed by a dedicated PostgreSQL instance. On first boot, the realm export in `docker/keycloak/realm-export.json` is imported automatically (`--import-realm`). A `keycloak-init` service runs once after Keycloak becomes healthy and patches the master realm to disable the SSL requirement (`sslRequired=none`), which is required when running without TLS in development.
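This startup order is typically enforced with `depends_on` conditions. The fragment below is a sketch of how such wiring can look; the credential variables and exact commands are assumptions, not the template's actual compose file:

```yaml
# Hypothetical sketch: ordering via depends_on + a one-shot kcadm.sh init
services:
  keycloak-init:
    image: keycloak/keycloak:26.5.4
    depends_on:
      keycloak:
        condition: service_healthy              # wait until Keycloak passes health checks
    entrypoint:
      - /bin/sh
      - -c
      - |
        /opt/keycloak/bin/kcadm.sh config credentials \
          --server http://keycloak:8080 --realm master \
          --user "$KC_BOOTSTRAP_ADMIN_USERNAME" --password "$KC_BOOTSTRAP_ADMIN_PASSWORD"
        /opt/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE

  api:
    depends_on:
      migrate:
        condition: service_completed_successfully   # start only after migrations exit 0
```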
Notes:
- This init-container pattern is the standard approach for Keycloak 26+ — there is no env var or CLI flag to control `sslRequired` on the master realm.
- In production, Keycloak runs in `start --optimized` mode and TLS is expected to be terminated at the reverse proxy level.
## Observability
The template includes a full observability stack based on OpenTelemetry for both traces and metrics.
### Architecture

```
API (OTel SDK) → OTel Collector → Prometheus (metrics)
                                → Tempo (traces)
                                → Grafana (dashboards)
```

- The API uses manual `NodeTracerProvider` and `MeterProvider` setups (replacing `NodeSDK` for Bun compatibility — Bun does not support `require` hooks, so auto-instrumentation is unavailable).
- `@fastify/otel` provides HTTP request trace spans.
- `@prisma/instrumentation` hooks into Prisma Client internals, producing `prisma:client:operation`, `prisma:client:db_query` and `prisma:client:serialize` spans.
- A custom `httpRequestDuration` histogram records request latency via a Fastify `onResponse` hook.
- The OTel Collector receives traces and metrics, generates Prometheus metrics from trace spans using the `spanmetrics` connector, and forwards traces to Tempo.
- Grafana provides three pre-configured dashboards: API Overview, Prisma / Database and Traces Explorer.
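The `spanmetrics` wiring in the Collector looks roughly like this. The receiver and exporter names are assumptions; the template's actual configuration lives in `docker/otel/`:

```yaml
# Hypothetical sketch of the Collector pipelines around the spanmetrics connector
connectors:
  spanmetrics: {}                            # derives rate/duration metrics from spans

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/tempo, spanmetrics]   # spans go to Tempo and into the connector
    metrics:
      receivers: [otlp, spanmetrics]         # app metrics + span-derived metrics
      exporters: [prometheus]
```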
### Docker Compose endpoints
| Component | Internal hostname | Port |
|---|---|---|
| OTel Collector | otel-collector | 4317 (gRPC), 4318 (HTTP) |
| Tempo | tempo | 3200 |
| Prometheus | prometheus | 9090 |
| Grafana | grafana | 3000 → host 8083 |
### Kubernetes endpoints

When deployed via the Helm chart, internal service DNS names are stabilised with `fullnameOverride`:
| Component | K8s service name | Port |
|---|---|---|
| OTel Collector | opentelemetry-collector | 4317 (gRPC), 4318 (HTTP) |
| Tempo | tempo | 3200 |
| Prometheus | auto-provisioned by kube-prometheus-stack | 9090 |
| Grafana | auto-provisioned by kube-prometheus-stack | 80 |
Set the following environment variable in the API deployment (via `api.env` or `api.envSecret` in `helm/values.yaml`):
```
OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4318
```

### Grafana datasources & dashboards
In Docker Compose, datasources and dashboards are mounted from `docker/otel/grafana/`.

In Kubernetes, the Helm chart:

- Provisions the Prometheus datasource automatically via kube-prometheus-stack (uid: `prometheus`).
- Provisions the Tempo datasource via `kube-prometheus-stack.grafana.additionalDataSources` (uid: `tempo`).
- Creates dashboard ConfigMaps from `helm/files/dashboards/` (mirroring `docker/otel/grafana/dashboards/`). The kube-prometheus-stack Grafana sidecar picks up any ConfigMap labelled `grafana_dashboard: "1"` automatically.
Keep `helm/files/dashboards/` and `docker/otel/grafana/dashboards/` in sync when modifying dashboards.
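A dashboard ConfigMap that the sidecar will discover might look like this (the name and dashboard JSON are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dashboard-api-overview
  labels:
    grafana_dashboard: "1"        # the kube-prometheus-stack sidecar watches for this label
data:
  api-overview.json: |
    { "title": "API Overview", "panels": [] }
```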
Notes:
- OTel configuration files are located in `docker/otel/`.
- Grafana dashboards are provisioned from `docker/otel/grafana/dashboards/` (Docker) and `helm/files/dashboards/` (Kubernetes).
## Tests

Unit tests run with Vitest, which is API-compatible with Jest but faster, as it builds on Vite. Tests are co-located with source files (`*.spec.ts`).

End-to-end tests are powered by Playwright and managed in the `./packages/playwright` folder.
Notes: Test execution may require some packages to be built first. Pipeline dependencies are described in the `turbo.json` file.
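In `turbo.json`, such a dependency is expressed with `dependsOn`. A minimal sketch (Turborepo v2 syntax; the template's actual pipeline may differ):

```json
{
  "tasks": {
    "build": { "dependsOn": ["^build"] },
    "test": { "dependsOn": ["^build"] }
  }
}
```

Here `^build` means "build the dependencies of this package first", so running `test` in any package triggers the builds it needs.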
## Docs

Documentation is written in the `./apps/docs` folder using VitePress, a static site generator built on Vite and Vue that turns `.md` files into a documentation website.
## CI/CD
Default GitHub Actions workflows are ready to use. The main CI workflow runs on pull requests:
| Description | File |
|---|---|
| Lint | `lint.yml` |
| Unit tests (with optional code quality scan) [1] | `tests-unit.yml` |
| Build application images [2] | `build.yml` |
| End-to-end tests OR deployment tests [3] | `tests-e2e.yml` / `tests-deploy.yml` |
| Vulnerability scan [4] | `scan.yml` |
Notes:
- [1] Runs code quality analysis using the SonarQube scanner, only if the secrets `SONAR_HOST_URL`, `SONAR_TOKEN` and `SONAR_PROJECT_KEY` are configured.
- [2] Builds application images tagged `pr-<pr_number>` and pushes them to a registry.
- [3] Runs e2e tests if changes occur on apps, dependencies or helm; otherwise runs deployment tests.
- [4] Runs only if changes occur in the `apps`, `packages` or `.github` folders and the base branch is `main` or `develop`.
The CD workflow (`cd.yml`) publishes releases using the release-please action, which automatically parses Git history following Conventional Commits to build changelogs and version numbers (see Semantic Versioning):
| Description | File |
|---|---|
| Create new release pull request / Create new git tag and GitHub release | `release.yml` |
| Build application images and push them to a registry | `build.yml` |
Notes: Uncomment the `on: push` trigger in `cd.yml` to automatically create the new release PR on merge into the main branch.
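The commented trigger corresponds to something like the following (branch name assumed):

```yaml
on:
  push:
    branches:
      - main
```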
### Build

All Docker images are built in parallel using the `matrix/docker.json` file. Options are available for multi-arch builds with or without QEMU (see `build.yml`).
The CI builds three images from the matrix:

- `api` — production runtime (distroless, minimal)
- `api-migrate` — Prisma migration runner (used as an init container in Kubernetes / a dependency service in docker-compose)
- `docs` — documentation static site
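A matrix file like this typically holds one entry per image; the shape below is purely illustrative and the field names are assumptions, not the template's actual schema:

```json
[
  { "name": "api", "context": ".", "target": "prod" },
  { "name": "api-migrate", "context": ".", "target": "migrate" },
  { "name": "docs", "context": ".", "target": "prod" }
]
```

The workflow then fans out one build job per entry via a GitHub Actions matrix strategy.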
### Cache

The template uses caching for Bun, Turbo and Docker to improve CI/CD speed. Cache is automatically deleted when the associated pull request is closed or merged (see `cache.yml`).
### Security

Trivy scans are performed on each PR and reports are uploaded to the GitHub Code Scanning tool using SARIF exports, with additional templates available in the `./ci/trivy` folder.
### Preview

Application preview can be enabled using the ArgoCD PR generator. When a pull request is tagged with the `preview` label, a preview deployment is created using images tagged `pr-<pr_number>`.
To activate this feature:
- Create a GitHub App so ArgoCD can access the repository and receive webhooks.
- Deploy an `ApplicationSet` based on this template.
- Create GitHub Actions environment variable templates: `API_TEMPLATE_URL` (`https://api.pr-<pr_number>.domain.com`) and `DOCS_TEMPLATE_URL` (`https://docs.pr-<pr_number>.domain.com`).
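An `ApplicationSet` using the pull-request generator filtered on the `preview` label looks roughly like this; the owner/repo values and the Helm parameter wiring are placeholders, not the template's actual manifest:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: preview
spec:
  generators:
    - pullRequest:
        github:
          owner: my-org
          repo: my-repo
          labels:
            - preview               # only PRs carrying this label get a preview
  template:
    metadata:
      name: "preview-pr-{{number}}"
    spec:
      source:
        helm:
          parameters:
            - name: api.image.tag
              value: "pr-{{number}}"   # matches the pr-<pr_number> image tags built by CI
```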
## Deployment

### Helm chart

An example Helm chart is provided in the `./helm` folder to facilitate Kubernetes deployment. Adding a new service requires:
- Copy the API templates folder:

  ```sh
  cp -R ./helm/templates/api ./helm/templates/<service_name>
  ```

- Replace references in the new templates:

  ```sh
  find ./helm/templates/<service_name> -type f -exec perl -pi -e 's|\.Values\.api|\.Values\.<service_name>|g' {} \;
  find ./helm/templates/<service_name> -type f -exec perl -pi -e 's|\.template\.api|\.template\.<service_name>|g' {} \;
  ```

- Copy and rename the helper functions in `./helm/templates/_helpers.tpl`.
- Copy and rename the values block in `./helm/values.yaml`.
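The last step amounts to duplicating the `api:` block under the new service's key; the exact fields of the values block are assumptions here:

```yaml
# Hypothetical shape of the duplicated values block in helm/values.yaml
api:
  image:
    repository: template-monorepo-ts/api
  env: {}

<service_name>:                 # copied from the api block and renamed
  image:
    repository: template-monorepo-ts/<service_name>
  env: {}
```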
Notes: Consider moving the `./helm` directory to a dedicated repository to use it as a versioned Helm registry.
## GitHub templates
GitHub community templates are already set up and only need to be updated for your project: