Infrastructure
Project structure
The monorepo is split into applications (deployable services) and shared packages (reusable libraries).
Applications
| Application | Description |
|---|---|
| `api` | Fastify REST API with BetterAuth authentication |
| `docs` | VitePress documentation site |
| `mcp` | MCP server — exposes API tools to LLMs over stdio and HTTP transports |
Shared packages
| Package | Description |
|---|---|
| `cli` | `tmts` CLI — API client with cross-platform native build |
| `eslint-config` | Shared ESLint configuration |
| `logger` | Shared Pino-based structured logger for all apps and packages |
| `ts-config` | Shared TypeScript base configuration |
| `test-utils` | Testing utilities (mock factories, helpers) |
| `shared` | Zod schemas, API contracts, utility functions |
| `playwright` | End-to-end browser tests |
Architecture note: Organization management (CRUD, members, invitations) and access control (roles, permissions) are handled directly by BetterAuth's Organization plugin within the auth module. Domain-specific extensions (projects, quotas, custom resources) are meant to be added by the consuming application, not the template.
Docker services
The docker/ folder contains two compose files:
- `docker-compose.dev.yml` — development stack with hot-reload (`docker compose watch`)
- `docker-compose.prod.yml` — production-like stack with pre-built images
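Assuming the compose files live under `docker/` as described, a typical session looks like the following sketch (adjust the paths to your checkout):

```sh
# Start the dev stack with file-watching (hot reload) enabled
docker compose -f docker/docker-compose.dev.yml watch

# Or start the production-like stack with pre-built images
docker compose -f docker/docker-compose.prod.yml up -d
```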
Docker images
Each application has its own Dockerfile. The migration runner uses a dedicated lightweight image separate from the API:
| Image | Base | Dockerfile | Purpose |
|---|---|---|---|
| `api` | `bun:distroless` | `apps/api/Dockerfile` | Production API runtime (multi-stage) |
| `api-migrate` | `bun:alpine` | `apps/api/Dockerfile.migrate` | Prisma migration runner (init container) |
| `docs` | `nginx-unprivileged:alpine-slim` | `apps/docs/Dockerfile` | Static documentation site |
| `cli` | `distroless/cc-debian12` | `packages/cli/Dockerfile` | CLI native binary |
| `mcp` | `bun:distroless` | `apps/mcp/Dockerfile` | MCP server |
Migration runner
The `api-migrate` image is a standalone, lightweight container that runs `prisma migrate deploy` and exits. It is used as:
- An init container in Kubernetes (runs before the API pod starts)
- A dependency service in Docker Compose (the API depends on `migrate` completing successfully)

The image extracts the Prisma version from `apps/api/package.json` at build time to stay in sync without hard-coding it. It runs as a dedicated non-root `migrate` user and requires only the `DB__URL` environment variable.
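As a sketch, the same one-shot migration can be run by hand against a local database — the image name matches the services table below, while the connection string here is only a placeholder:

```sh
# Run the migration container once and remove it when it exits
docker run --rm \
  -e DB__URL="postgresql://user:password@localhost:5432/app" \
  template-monorepo-ts/api-migrate
```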
Services
| Service | Image | Port (host) | Description |
|---|---|---|---|
| `api` | `template-monorepo-ts/api` | 8081 | Fastify API (dev: watch mode, prod: bundled) |
| `docs` | `template-monorepo-ts/docs` | 8082 | VitePress documentation site |
| `mcp` | `template-monorepo-ts/mcp` | 3100 | MCP server (opt-in via the `mcp` profile) |
| `db` | `postgres:17.9` | 5432 | Main application PostgreSQL database |
| `redis` | `redis:7.4-bookworm` | 6379 | Redis session store |
| `migrate` | `template-monorepo-ts/api-migrate` | — | One-shot Prisma migration runner (`Dockerfile.migrate`) |
| `keycloak-db` | `postgres:17.9` | — | Dedicated Keycloak PostgreSQL database |
| `keycloak` | `keycloak/keycloak:26.5.4` | 8084 | Keycloak identity provider |
| `keycloak-init` | `keycloak/keycloak:26.5.4` | — | One-shot init container: sets master realm `sslRequired=none` via `kcadm.sh` |
| `otel-collector` | `otel/opentelemetry-collector-contrib` | 4317, 4318 | OTel Collector (OTLP gRPC + HTTP) |
| `tempo` | `grafana/tempo:2.10.1` | — | Distributed tracing backend |
| `prometheus` | `prom/prometheus:3.10.0` | 9090 | Metrics storage and query |
| `grafana` | `grafana/grafana:12.4.0` | 8083 | Observability dashboards |
Startup order (dev)
Keycloak setup
Keycloak runs in `start-dev` mode backed by a dedicated PostgreSQL instance. On first boot, the realm export in `docker/keycloak/realm-export.json` is imported automatically (`--import-realm`). A `keycloak-init` service runs once after Keycloak becomes healthy and patches the master realm to disable the SSL requirement (`sslRequired=none`), which is required when running without TLS in development.
Notes:
- This init-container pattern is the standard approach for Keycloak 26+ — there is no environment variable or CLI flag to control `sslRequired` on the master realm.
- In production, Keycloak runs in `start --optimized` mode and TLS is expected to be terminated at the reverse proxy level.
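The patch applied by `keycloak-init` boils down to two `kcadm.sh` calls, roughly as sketched below — the hostname and credential variables are illustrative, not the template's exact values:

```sh
# Authenticate the admin CLI against the master realm...
/opt/keycloak/bin/kcadm.sh config credentials \
  --server http://keycloak:8080 --realm master \
  --user "$KEYCLOAK_ADMIN" --password "$KEYCLOAK_ADMIN_PASSWORD"

# ...then disable the SSL requirement on the master realm
/opt/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE
```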
Observability
The template includes a full observability stack based on OpenTelemetry for both traces and metrics.
Architecture
- The API uses a manual `NodeTracerProvider` and `MeterProvider` (replacing `NodeSDK` for Bun compatibility — Bun does not support `require` hooks, so auto-instrumentation is unavailable).
- `@fastify/otel` provides HTTP request trace spans.
- `@prisma/instrumentation` hooks into Prisma Client internals, producing `prisma:client:operation`, `prisma:client:db_query` and `prisma:client:serialize` spans.
- A custom `httpRequestDuration` histogram records request latency via a Fastify `onResponse` hook.
- The OTel Collector receives traces and metrics, generates Prometheus metrics from trace spans using the `spanmetrics` connector, and forwards traces to Tempo.
- Grafana provides 3 pre-configured dashboards: API Overview, Prisma / Database and Traces Explorer.
Docker Compose endpoints
| Component | Internal hostname | Port |
|---|---|---|
| OTel Collector | otel-collector | 4317 (gRPC), 4318 (HTTP) |
| Tempo | tempo | 3200 |
| Prometheus | prometheus | 9090 |
| Grafana | grafana | 3000 → host 8083 |
Kubernetes endpoints
When deployed via the Helm chart, internal service DNS names are stabilised with `fullnameOverride`:
| Component | K8s service name | Port |
|---|---|---|
| OTel Collector | opentelemetry-collector | 4317 (gRPC), 4318 (HTTP) |
| Tempo | tempo | 3200 |
| Prometheus | auto-provisioned by kube-prometheus-stack | 9090 |
| Grafana | auto-provisioned by kube-prometheus-stack | 80 |
Set the following environment variable in the API deployment (via `api.env` or `api.envSecret` in `helm/values.yaml`):

```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4318
```

Grafana datasources & dashboards
In Docker Compose, datasources and dashboards are mounted from docker/otel/grafana/.
In Kubernetes, the Helm chart:
- Provisions the Prometheus datasource automatically via kube-prometheus-stack (uid: `prometheus`).
- Provisions the Tempo datasource via `kube-prometheus-stack.grafana.additionalDataSources` (uid: `tempo`).
- Creates dashboard ConfigMaps from `helm/files/dashboards/` (mirroring `docker/otel/grafana/dashboards/`). The kube-prometheus-stack Grafana sidecar picks up any ConfigMap labelled `grafana_dashboard: "1"` automatically.
Keep `helm/files/dashboards/` and `docker/otel/grafana/dashboards/` in sync when modifying dashboards.
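One way to keep the two locations identical is to copy in one direction and treat the Docker side as the source of truth — a convention assumption, not something the template enforces (the `.json` extension is also assumed):

```sh
# Mirror the Docker Grafana dashboards into the Helm chart
cp docker/otel/grafana/dashboards/*.json helm/files/dashboards/
```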
Notes:
- OTel configuration files are located in `docker/otel/`.
- Grafana dashboards are provisioned from `docker/otel/grafana/dashboards/` (Docker) and `helm/files/dashboards/` (Kubernetes).
Tests
Unit tests run on Vitest, which is API-compatible with Jest but faster, as it builds on Vite. Tests are co-located with source files (`*.spec.ts`).
End-to-end tests are powered by Playwright and managed in the `./packages/playwright` folder.
Notes: Test execution may require some packages to be built first. Pipeline dependencies are described in the `turbo.json` file.
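Because Turborepo resolves task dependencies from `turbo.json`, running tests through `turbo` builds prerequisites first. A minimal sketch — the `test` task name and `shared` filter follow the usual conventions and are assumptions, not verified against this repository:

```sh
# Build dependencies as needed, then run all unit tests
bunx turbo run test

# Run tests for a single package only
bunx turbo run test --filter=shared
```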
Docs
Documentation is written in the `./apps/docs` folder using VitePress, a static site generator built on Vite and Vue that turns `.md` files into a documentation website.
CI/CD
The CI/CD pipelines use reusable workflows from `this-is-tobi/github-workflows@v0`. The orchestrators (`ci.yml`, `cd.yml`) call these reusable workflows; the only local workflow is `release-cli.yml`, which handles project-specific CLI binary compilation.
CI pipeline
The main CI workflow (ci.yml) runs on pull requests:
| Step | Workflow / Source |
|---|---|
| Lint commit messages | lint-commits.yml@v0 (reusable) |
| Lint JS/TS code | lint-js.yml@v0 (reusable) |
| Unit tests with coverage | test-vitest.yml@v0 (reusable) |
| SonarQube code quality scan [1] | scan-sonarqube.yml@v0 (reusable) |
| Build Docker images [2] | build-docker.yml@v0 (reusable) |
| Label PR on build | label-pr.yml@v0 (reusable) |
| End to end tests OR Deployment tests [3] | test-playwright.yml@v0 / test-kube-deployment.yml@v0 (reusable) |
| Trivy vulnerability scan (images) [4] | scan-trivy.yml@v0 (reusable) |
| Trivy vulnerability scan (config) [4] | scan-trivy.yml@v0 (reusable) |
Notes:
- [1] Runs code quality analysis using the SonarQube scanner. Requires secrets `SONAR_HOST_URL`, `SONAR_TOKEN` and `SONAR_PROJECT_KEY`. The job uses `continue-on-error` and is skipped gracefully when secrets are not configured.
- [2] Builds application images tagged `pr-<pr_number>` and pushes them to GHCR. Each image is built in its own matrix slot via the reusable `build-docker.yml` with built-in SLSA provenance and SBOM attestation enabled (`PROVENANCE: true`, `SBOM: true`). The attestation runs as an additional job inside the reusable workflow itself, which is necessary when using a matrix strategy — per-matrix outputs cannot be forwarded to a separate `attest-docker.yml` call. Requires `id-token: write` and `attestations: write` permissions.
- [3] Runs e2e tests if changes occur in apps, packages or workflows; otherwise runs deployment tests. Uses reusable workflows: `test-playwright.yml` for Playwright browser tests and `test-kube-deployment.yml` for Kind-based Kubernetes deploy checks.
- [4] Runs only if the base branch is `main` or `develop`. SARIF results are uploaded to the GitHub Security tab.
CD pipeline
The CD workflow (`cd.yml`) publishes releases using the release-please action, which parses Git history following Conventional Commits to build changelogs and version numbers (see Semantic Versioning):
| Step | Workflow / Source |
|---|---|
| Build CLI binaries [7] | release-cli.yml (local) |
| Create release (release-please) | release-app.yml@v0 (reusable) |
| Build Docker images + attest | build-docker.yml@v0 (reusable) |
| Publish CLI to NPM | release-npm.yml@v0 (reusable) |
| Bump Helm chart appVersion [6] | update-helm-chart.yml@v0 (reusable) |
| Publish Helm chart to OCI [5] | release-helm.yml@v0 (reusable) |
Notes:
- Uncomment the `on: push` trigger in `cd.yml` to automatically create the new release PR on merge into the main branch.
- Release-please automatically updates the `packages/cli/package.json` version via `extra-files` to keep it in sync with the app version.
- [5] `release-helm` runs after `update-helm-chart` completes. It uses chart-releaser to detect charts whose version tag doesn't exist yet and publishes them to GHCR as OCI artifacts.
- [6] `update-helm-chart` runs only on app releases. It bumps the chart's `appVersion` to the new app version, independently increments the chart `version` (patch bump), regenerates docs, and creates a PR. When that PR is merged, the next CD run picks it up via `release-helm`. The chart version is independent from the app version — the chart can also be bumped without an app release.
- [7] `build-cli` runs unconditionally before the release step. It compiles cross-platform CLI binaries, generates SHA-256 checksums, and uploads them as a consolidated artifact (`cli-release-assets`). When a release is created, `release-app.yml` automatically attaches these assets to the GitHub release.
- Docker images are built with built-in SLSA provenance and SBOM attestation enabled (`PROVENANCE: true`, `SBOM: true`). Attestation runs inside each matrix job, which is necessary because per-matrix outputs cannot be forwarded to a separate workflow.
- Requires secrets: `NPM_TOKEN` for NPM publishing, `GH_PAT` for release auto-merge.
Other workflows
| Workflow | Source | Description |
|---|---|---|
| cache.yml | local | Cleans GitHub Actions cache and optionally GHCR images on PR close (uses clean-cache.yml@v0) |
| preview.yml | local | Posts preview environment links as PR comment (uses preview-comment.yml@v0) |
Build
Docker images are built using the reusable `build-docker.yml` workflow, once per image via a `strategy.matrix`:
- `api` — production runtime (distroless, minimal)
- `api-migrate` — Prisma migration runner (dedicated lightweight image built from `apps/api/Dockerfile.migrate`, used as an init container in Kubernetes / dependency service in docker-compose)
- `docs` — documentation static site
- `cli` — CLI binary Docker image
- `mcp` — MCP server Docker image
Each matrix slot enables built-in SLSA provenance and SBOM attestation (`PROVENANCE: true`, `SBOM: true`). The attestation job runs inside the reusable workflow after the image is built and merged, delegating to `attest-docker.yml` internally. This approach is required when using a matrix strategy, because GitHub Actions cannot expose per-matrix outputs to a separate attestation workflow call. The caller must grant `id-token: write` and `attestations: write` permissions.
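Once published, the attestations can be checked locally with the GitHub CLI — image name, tag and owner below are placeholders:

```sh
# Verify the SLSA provenance attached to a pushed image
gh attestation verify oci://ghcr.io/<owner>/api:pr-<pr_number> --owner <owner>
```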
Cache
GitHub Actions cache is automatically deleted when the associated pull request is closed or merged. Optionally, GHCR images can be deleted by setting the repository variable `CLEAN_IMAGES=true` or using the manual dispatch input.
Security
Trivy scans are performed on each PR via the reusable scan-trivy.yml. Image scans and config scans run as separate jobs. SARIF reports are uploaded to the GitHub Security tab.
Preview
Application preview can be enabled using the ArgoCD PR generator. When a pull request is tagged with the preview label, a preview deployment is created using images tagged pr-<pr_number>.
To activate this feature:
- Create a GitHub App so ArgoCD can access the repository and receive webhooks.
- Deploy an `ApplicationSet` based on this template.
- Create GitHub Actions environment variable templates: `API_TEMPLATE_URL` (`https://api.pr-<pr_number>.domain.com`) and `DOCS_TEMPLATE_URL` (`https://docs.pr-<pr_number>.domain.com`).
Using workflows locally
If you prefer to have all workflow definitions in your repository rather than referencing the external reusable workflows, you can copy them locally:
```sh
# Clone the reusable workflows at the pinned version
git clone --branch v0 --depth 1 https://github.com/this-is-tobi/github-workflows.git /tmp/github-workflows

# Copy the specific workflows used by this project
for wf in build-docker lint-js lint-commits test-vitest test-playwright test-kube-deployment scan-trivy scan-sonarqube clean-cache label-pr preview-comment release-app release-npm release-helm update-helm-chart; do
  cp "/tmp/github-workflows/.github/workflows/${wf}.yml" ./.github/workflows/
done

# Clean up
rm -rf /tmp/github-workflows
```

Then update the `uses:` references in `ci.yml`, `cd.yml` and `cache.yml` from:

```yaml
uses: this-is-tobi/github-workflows/.github/workflows/<workflow>.yml@v0
```

to:

```yaml
uses: ./.github/workflows/<workflow>.yml
```

Notes:
- When using local copies, you are responsible for pulling upstream updates yourself.
- The reusable workflows are versioned — pinning `@v0` ensures stability. Check the releases page for updates.
Deployment
Helm chart
An example Helm chart is provided in the ./helm folder to facilitate Kubernetes deployment.
Adding a new service requires:
- Copy the API templates folder:
  ```sh
  cp -R ./helm/templates/api ./helm/templates/<service_name>
  ```
- Replace references in the new templates:
  ```sh
  find ./helm/templates/<service_name> -type f -exec perl -pi -e 's|\.Values\.api|\.Values\.<service_name>|g' {} \;
  find ./helm/templates/<service_name> -type f -exec perl -pi -e 's|\.template\.api|\.template\.<service_name>|g' {} \;
  ```
- Copy and rename the helper functions in `./helm/templates/_helpers.tpl`.
- Copy and rename the values block in `./helm/values.yaml`.
Notes: Consider moving the `./helm` directory to a dedicated repository to use it as a versioned Helm registry.
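If you do split the chart out, publishing it as an OCI artifact takes a couple of Helm commands — the registry path below is a placeholder:

```sh
# Package the chart into a .tgz, then push it to an OCI registry
helm package ./helm -d /tmp/charts
helm push /tmp/charts/*.tgz oci://ghcr.io/<owner>/helm-charts
```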
GitHub templates
GitHub community templates are already set up and only need to be updated for your project: