Overview
Containerisation solves one of the most persistent problems in software deployment: the gap between the environment where software is developed and the environment where it runs in production. A container packages the application code alongside its complete runtime environment — the operating system libraries, the language runtime, the configuration, and every dependency the application needs to run — into a single portable unit that runs identically on a developer's laptop, on a CI/CD server, and in production. The "it works on my machine" class of deployment failure disappears because the machine is included in the package.
Docker is the dominant container runtime and build toolchain. A Dockerfile defines how to build the container image — the steps that assemble the runtime environment and install the application. A Docker Compose file defines how multiple containers work together — the application container, the database container, the cache container — as a composed system. A container registry stores the built images and makes them available for deployment. This toolchain, combined with the container orchestration that Kubernetes provides for production deployments at scale, is the deployment infrastructure that modern software production relies on.
Containerisation is part of how we build and deploy every application we develop. The development environment is containerised. The CI/CD pipeline builds and tests in containers. The production deployment runs containers. This consistency means that the application running in production is the same application that was tested in CI — not a version that has been reassembled from separate deployment steps that may have been executed differently.
We design and build containerisation infrastructure for the applications we develop and for existing applications that need to be containerised for consistency, portability, or deployment to container orchestration platforms.
What Containerisation Infrastructure Covers
Dockerfile design and optimisation. The Dockerfile is the specification for how a container image is built. Well-designed Dockerfiles produce images that are small, fast to build, and secure; poorly designed Dockerfiles produce images that are large, slow to build, and full of unnecessary dependencies that expand the attack surface.
Multi-stage builds: the Dockerfile pattern that uses separate build and runtime stages — the first stage installs build tools and compiles the application, the second stage copies only the compiled output into a minimal runtime image without the build tools. A Rust application's build stage includes the full Rust toolchain; the runtime stage contains only the compiled binary and the minimal shared libraries it needs. A Node.js application's build stage runs npm install and the bundler; the runtime stage contains only the static assets and the production Node.js dependencies. Multi-stage builds for our Rust applications typically produce images under 50MB. Multi-stage builds for C# .NET applications produce images based on the minimal mcr.microsoft.com/dotnet/aspnet runtime rather than the full SDK image.
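A minimal sketch of the multi-stage pattern for a Rust application. The base image versions, binary name, and paths are illustrative assumptions, not a prescription:

```dockerfile
# Build stage: the full Rust toolchain, used only to compile the binary
FROM rust:1.79 AS builder
WORKDIR /src
COPY . .
RUN cargo build --release

# Runtime stage: a minimal image containing only the compiled output
FROM debian:bookworm-slim
# "app" is a placeholder for the crate's binary name
COPY --from=builder /src/target/release/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Nothing from the builder stage reaches the final image except what is explicitly copied, which is why the resulting images can stay small.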
Layer caching optimisation: Docker builds reuse cached layers when the inputs to a step have not changed. Dockerfile instruction ordering that maximises cache reuse — copying and installing dependencies before copying application code, so that dependency installation is only re-run when dependency files change rather than whenever any application code changes. Cache-optimised Dockerfiles that reduce build times from minutes to seconds for the common case of incremental code changes.
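The ordering principle can be sketched for a Node.js application; the file names follow npm conventions, and the entry point is a placeholder:

```dockerfile
FROM node:20-slim
WORKDIR /app

# Dependency manifests are copied first, so the npm ci layer is only
# rebuilt when package.json or the lockfile actually changes
COPY package.json package-lock.json ./
RUN npm ci

# Application code changes invalidate only the layers from here down
COPY . .
CMD ["node", "server.js"]
```

For the common case of an incremental code change, the cached dependency layer is reused and the build skips straight to copying the source.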
Base image selection: choosing base images that are minimal (Alpine Linux or distroless where appropriate), regularly updated with security patches, and appropriate for the application's runtime requirements. The base image trade-offs between Alpine's small size and potential glibc compatibility issues, Debian slim's broader compatibility, and distroless images' minimal attack surface.
Non-root user configuration: running container processes as non-root users where the application permits — the security configuration that limits the damage a compromised container can do to the host system and to other containers.
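A sketch of the non-root pattern, assuming a pre-built binary named `server` (the user and group names are arbitrary):

```dockerfile
FROM debian:bookworm-slim

# Create an unprivileged system user and group for the application
RUN groupadd --system app && useradd --system --gid app app

COPY --chown=app:app ./server /usr/local/bin/server

# The process runs as this user, not root, from here on
USER app
ENTRYPOINT ["/usr/local/bin/server"]
```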
.dockerignore and build context management. The .dockerignore file that excludes unnecessary files from the Docker build context — the files that do not need to be present in the container image and that, if included, slow down build context transmission and potentially expose sensitive files. Properly configured .dockerignore files for each language and framework type that exclude test directories, local configuration files, development dependencies, and the version control metadata that should never be in a container image.
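An illustrative .dockerignore for a Node.js project; the exact entries depend on the framework, but the categories are the ones described above:

```
.git
node_modules
dist
coverage
*.log
.env
.env.*
!.env.example
```

The negated `!.env.example` line keeps the documented variable template in the build context while excluding every real .env file.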
Docker Compose for development environments. Docker Compose defines multi-container application environments — the application container, the database container, the message broker, the cache — as a composed system that starts and stops as a unit. Development environments defined in Docker Compose mean that any developer can get a fully functional development environment with a single command, without installing any application-specific software on their machine beyond Docker itself.
Service dependencies and health checks: Compose service dependency configuration that starts containers in the correct order and waits for dependent services to be healthy before starting services that depend on them. Health check definitions that tell Compose whether a service is actually ready to accept requests rather than just whether its process is running.
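A sketch of the dependency-plus-health-check pattern in Compose, assuming a PostgreSQL database (image tag and credentials handling are illustrative):

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10

  app:
    build: .
    depends_on:
      db:
        # Start the app only once pg_isready reports the database healthy,
        # not merely once the postgres process exists
        condition: service_healthy
```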
Volume mounts for development: Compose configurations with volume mounts that make the developer's local source code available inside the running container, enabling live reload development workflows where code changes are reflected in the running container without requiring a rebuild.
Environment variable management: Compose configurations that read environment variables from .env files for local development secrets and configuration, with .env.example files that document the required variables without committing actual secret values.
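The volume-mount and .env patterns together, as a sketch; the paths and port are placeholders for whatever the application uses:

```yaml
services:
  app:
    build: .
    # Local secrets and config; .env is git-ignored, while a committed
    # .env.example documents which variables are required
    env_file: .env
    volumes:
      # Live reload: edits to local source appear inside the container
      # without rebuilding the image
      - ./src:/app/src
    ports:
      - "3000:3000"
```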
Container registry management. Built container images stored in a container registry that the CI/CD pipeline pushes to and that deployment infrastructure pulls from. Registry selection and configuration for the project's requirements — GitHub Container Registry (GHCR) for projects hosted on GitHub, AWS ECR for AWS-deployed applications, Docker Hub for public images, or private registry infrastructure for projects with specific security requirements.
Image tagging conventions: tagging images with the git commit SHA for immutable image references, with the branch name for environment-specific deployments, and with semantic version tags for released versions. Tagging conventions that make it unambiguous which code version produced each image and which image is deployed to each environment.
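The three tag types can be sketched as a CI-style shell fragment; the registry path and version number are placeholders:

```shell
IMAGE=ghcr.io/example/app
SHA=$(git rev-parse --short HEAD)

docker build -t "$IMAGE:$SHA" .            # immutable commit reference
docker tag "$IMAGE:$SHA" "$IMAGE:main"     # branch tag for environment deploys
docker tag "$IMAGE:$SHA" "$IMAGE:v1.4.2"   # semantic version on release
docker push --all-tags "$IMAGE"
```

All three tags point at the same image digest, so the SHA tag answers "which code is this" while the others answer "which environment or release is this".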
Registry cleanup policies: automated cleanup of old images that accumulate in the registry over time — retaining a defined number of recent images and the images for current deployments while removing the accumulated history that consumes storage without providing value.
Container networking. Docker networking configuration that controls how containers communicate with each other and with the outside world. User-defined bridge networks that allow containers to communicate by service name rather than by IP address. Network segmentation that isolates containers that should not communicate directly — the web application container that can reach the database container but should not be able to reach infrastructure management services. Port mapping that exposes only the ports that need to be externally accessible.
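A sketch of network segmentation in Compose terms; the service and network names are illustrative:

```yaml
networks:
  frontend:
  backend:

services:
  web:
    build: .
    networks: [frontend, backend]
    ports:
      - "443:8443"   # only the web port is mapped to the host

  db:
    image: postgres:16
    networks: [backend]   # reachable as "db" from web, never from the host
```

The database publishes no ports and sits only on the backend network, so the web container reaches it by service name while nothing outside the composed system can.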
Resource limits and health checks. Container resource limit configuration that prevents individual containers from consuming excessive CPU or memory and affecting other containers on the same host. Health check definitions that tell the container runtime whether the application inside the container is actually healthy and able to serve requests — enabling the runtime to restart unhealthy containers automatically and the load balancer to route away from containers that are not healthy.
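Both concerns can be sketched in a Compose service definition; the limit values and the `/healthz` endpoint are assumptions about the application:

```yaml
services:
  app:
    build: .
    deploy:
      resources:
        limits:
          cpus: "1.0"    # throttled above one CPU's worth of time
          memory: 512M   # terminated if memory use exceeds this
    healthcheck:
      # The endpoint must report readiness, not just process liveness
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 10s
      retries: 3
```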
Kubernetes for Production Container Orchestration
For applications deployed at production scale — where multiple container replicas need to run for availability, where rolling deployments are required, where autoscaling is needed to handle variable load — Kubernetes provides the orchestration that Docker alone does not.
Kubernetes manifest development. The YAML manifests that define how applications run in Kubernetes — Deployment objects that define the container image, the number of replicas, the resource requests and limits, and the update strategy; Service objects that define how the application is exposed to other services or to the outside world; ConfigMap and Secret objects that provide configuration and sensitive values to running containers.
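A pared-down sketch of a Deployment and its Service; image path, ports, resource values, and the ConfigMap name are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels: { app: app }
  template:
    metadata:
      labels: { app: app }
    spec:
      containers:
        - name: app
          image: ghcr.io/example/app:abc1234   # immutable SHA tag
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 256Mi }
          envFrom:
            - configMapRef: { name: app-config }
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector: { app: app }
  ports:
    - port: 80
      targetPort: 8080
```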
Manifest organisation: Kustomize or Helm for managing Kubernetes manifests across multiple environments — the base manifest that defines the application, overlaid with environment-specific configuration that adjusts image tags, resource limits, and configuration values for development, staging, and production deployments.
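In Kustomize terms, the overlay pattern looks roughly like this; the directory layout, image path, and replica count are illustrative:

```yaml
# overlays/production/kustomization.yaml, layered over a shared base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: ghcr.io/example/app
    newTag: abc1234    # pin production to a specific built image
replicas:
  - name: app
    count: 5
```

The base defines the application once; each environment's overlay adjusts only what differs.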
Deployment strategies. Rolling update deployments that replace old pod replicas with new ones incrementally, maintaining application availability throughout the update. Blue-green deployments that run the new version alongside the old version and switch traffic atomically when the new version is verified. Canary deployments that route a small fraction of traffic to the new version before promoting it to handle full traffic. Deployment strategy selection based on the application's tolerance for brief degradation during deployment versus the operational complexity of running multiple versions simultaneously.
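The rolling update case is expressed directly in the Deployment spec; the values shown are one conservative choice, not the only one:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # bring up one extra new pod at a time
```

With these values Kubernetes adds a new pod, waits for it to become ready, removes an old one, and repeats, so capacity never dips during the rollout.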
Horizontal Pod Autoscaling. HPA configuration that automatically scales the number of running pod replicas based on CPU utilisation, memory utilisation, or custom metrics from the application. Autoscaling configuration that handles traffic spikes without requiring manual scaling intervention and scales back down during low-traffic periods to reduce infrastructure cost.
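A sketch of a CPU-based HPA; the replica bounds and utilisation target are example values to be tuned per application:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```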
Persistent storage. PersistentVolume and PersistentVolumeClaim configuration for applications that need durable storage — the database container that needs its data to survive pod restarts, the application that writes files that need to persist across deployments. Storage class selection for the appropriate combination of performance, durability, and cost for each storage requirement.
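A sketch of a claim for durable storage; the storage class name is an assumption about what the cluster offers:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: gp3    # assumes an SSD-backed class exists in the cluster
  resources:
    requests:
      storage: 20Gi
```

A pod that mounts this claim keeps its data across restarts and redeployments, because the volume's lifecycle is independent of the pod's.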
Ingress and TLS termination. Kubernetes Ingress configuration that routes external HTTP traffic to the appropriate services, with TLS termination at the ingress layer using certificates managed by cert-manager and Let's Encrypt. Ingress rules that route traffic by hostname and path prefix to different backend services within the cluster.
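A sketch of an Ingress with cert-manager-issued TLS; the hostname, issuer name, and backend service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    # Issuer name is an assumption; it must match a configured ClusterIssuer
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts: [app.example.com]
      secretName: app-tls   # cert-manager creates and renews this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port: { number: 80 }
```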
Our Own Infrastructure
Our own production services run containerised on VPS infrastructure with nginx as the reverse proxy — the same deployment pattern we implement for client applications. The application container handles the application logic; nginx handles TLS termination, request routing, and static asset serving. Systemd manages the container lifecycle. This is the deployment infrastructure for the software we run ourselves, not just a pattern we prescribe for clients.
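A sketch of what the systemd-managed container lifecycle can look like; the unit name, image, and port binding are illustrative, and real units typically add environment and logging configuration:

```ini
# /etc/systemd/system/app.service
[Unit]
Description=Application container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run; "-" ignores failure
ExecStartPre=-/usr/bin/docker rm -f app
# Bind only to localhost; nginx terminates TLS and proxies to this port
ExecStart=/usr/bin/docker run --name app -p 127.0.0.1:8080:8080 ghcr.io/example/app:latest
ExecStop=/usr/bin/docker stop app
Restart=always

[Install]
WantedBy=multi-user.target
```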
Technologies Used
- Docker — container runtime and image build toolchain
- Docker Compose — multi-container development and testing environments
- Dockerfile — container image build specification
- Kubernetes — container orchestration for production deployments at scale
- Helm — Kubernetes package manager for application deployment templates
- Kustomize — Kubernetes manifest customisation for multi-environment deployments
- GitHub Container Registry (GHCR) — container image registry for GitHub-hosted projects
- AWS ECR — container registry for AWS-deployed applications
- cert-manager — automated TLS certificate management in Kubernetes
- nginx — reverse proxy and ingress for containerised applications
- systemd — service management for container lifecycle on VPS deployments
- GitHub Actions / GitLab CI/CD — CI/CD pipeline integration for container build and push
Containerisation as a Foundation, Not a Feature
Containerisation is not a feature of the applications we build — it is the foundation of how we build and deploy them. Every application we develop is containerised from the start. The development environment, the CI/CD pipeline, and the production deployment all use the same container images built from the same Dockerfiles. This consistency eliminates the deployment environment differences that cause deployment failures and makes the deployment process reproducible regardless of who executes it or when.
The investment in containerisation infrastructure pays its cost from the first deployment — in the confidence that what was tested is what was deployed, in the developer experience of environment consistency, and in the operational simplicity of a deployment process that does not depend on correctly executing a sequence of manual steps.
The Same Container, From Development to Production
The goal of containerisation is a deployment artefact that runs identically in every environment it is placed in. Docker and Kubernetes, correctly configured, deliver this — the developer's local environment, the CI test run, the staging verification, and the production deployment all run the same container image, with environment-specific configuration injected at runtime rather than baked into the image.