Docker for Developers: Containers Explained from Scratch
The Problem Docker Solves
Every developer has uttered the phrase "but it works on my machine!" This frustrating situation arises because software depends on a precise combination of operating system version, installed libraries, environment variables, file paths, and runtime versions. When a developer's laptop has Node.js 18 but the production server runs Node.js 16, subtle bugs emerge. When a Python script requires library version X but the deployment machine has version Y, the application crashes. These environment inconsistencies are one of the leading causes of deployment failures and hours of debugging.
Docker solves this by packaging your application together with its entire environment — the operating system libraries, runtime, dependencies, and configuration — into a single, portable unit called a container. That container runs identically on your laptop, your colleague's laptop, your CI/CD pipeline, and your production server. No more environment mismatch. No more "works on my machine."
Containers vs Virtual Machines
Both containers and virtual machines (VMs) provide isolation, but they work very differently. A VM virtualizes an entire physical computer and boots its own full guest operating system, kernel included, which makes VMs heavy (gigabytes in size) and slow to start (minutes). A container shares the host machine's kernel and only isolates the user-space processes, making containers lightweight (megabytes), fast to start (seconds or milliseconds), and far more efficient — you can run dozens of containers on the same machine where you might only run 3-4 VMs.
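You can observe the shared kernel directly: a container reports the same kernel release as the host it runs on (on Linux, that is — Docker Desktop on macOS and Windows runs containers inside a lightweight Linux VM). The alpine image here is just a convenient small image to run the command in:

```shell
uname -r                          # kernel release on the host, e.g. 6.8.0-xx
docker run --rm alpine uname -r   # same value — the container shares the host kernel
```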
Installing Docker
Download and install Docker Desktop from docker.com for Windows or macOS. On Linux, install the Docker Engine via your package manager. Verify the installation:
docker --version
# Docker version 26.x.x
docker run hello-world
# This downloads a tiny test image and runs it — confirms Docker is working
Core Docker Concepts
- Image — A read-only blueprint/template for creating containers (like a class in OOP).
- Container — A running instance of an image (like an object in OOP).
- Dockerfile — A text file with instructions for building a custom image.
- Registry — A repository for storing and sharing images. Docker Hub is the most popular public registry.
- Volume — Persistent storage that survives container restarts.
- Network — Virtual network connecting containers together.
Essential Docker Commands
## Working with images
docker pull python:3.12-slim # download an image from Docker Hub
docker images # list all local images
docker rmi python:3.12-slim # remove an image
docker image prune # remove dangling (untagged) images; add -a to remove all unused images
## Running containers
docker run python:3.12-slim # runs the image's default command and exits at once — no terminal attached
docker run -it python:3.12-slim bash # interactive terminal inside container
docker run -d nginx # run in detached (background) mode
docker run -p 8080:80 nginx # map host port 8080 to container port 80
docker run -e DATABASE_URL="..." app # set environment variable
## Managing running containers
docker ps # list running containers
docker ps -a # list ALL containers (including stopped)
docker stop container_id # gracefully stop a container
docker rm container_id # delete a stopped container
docker logs container_id # view container output
docker exec -it container_id bash # open shell in running container
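The commands above combine into a typical lifecycle. A quick walkthrough you can try (the container name web is arbitrary):

```shell
docker run -d --name web -p 8080:80 nginx   # start nginx in the background
docker ps                                   # confirm it is running
curl http://localhost:8080                  # nginx welcome page via the mapped port
docker logs web                             # the request shows up in the access log
docker exec -it web bash                    # poke around inside, then type exit
docker stop web                             # gracefully stop the container
docker rm web                               # remove the stopped container
```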
Writing a Dockerfile
A Dockerfile is a recipe that tells Docker how to build your custom image. Here is a complete example for a Python Flask application (for real production traffic you would serve it through a WSGI server such as gunicorn rather than the Flask development server):
# Start from official Python slim image (smaller than full)
FROM python:3.12-slim
# Set working directory inside the container
WORKDIR /app
# Copy dependency files first (layer caching optimisation)
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code
COPY . .
# Expose the port the app listens on
EXPOSE 5000
# Create a non-root user for security
RUN useradd -m appuser
USER appuser
# Command to run when container starts
CMD ["python", "app.py"]
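This Dockerfile expects a requirements.txt and an app.py in the build context. A minimal sketch of the latter (names and contents are illustrative; requirements.txt would contain a single line, flask):

```python
# app.py — a minimal Flask application for the Dockerfile above
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a container!"

if __name__ == "__main__":
    # Bind to 0.0.0.0, not the default 127.0.0.1 — otherwise the server is
    # unreachable through the port mapping from outside the container.
    app.run(host="0.0.0.0", port=5000)
```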
# Build the image (. = use current directory as context)
docker build -t my-flask-app:1.0 .
# Run the container
docker run -p 5000:5000 my-flask-app:1.0
# Push to Docker Hub
docker tag my-flask-app:1.0 yourusername/my-flask-app:1.0
docker push yourusername/my-flask-app:1.0
Persisting Data with Volumes
Containers are ephemeral — any data written inside a container is lost when the container is removed (a stopped container still holds its data until you docker rm it). Volumes solve this by storing data on the host machine, outside the container lifecycle:
## Named volumes (recommended — Docker manages the storage location)
docker volume create db-data
docker run -v db-data:/var/lib/postgresql/data postgres:16
## Bind mounts — mount a host directory into the container
## Great for development: changes on host reflect instantly in container
docker run -v $(pwd):/app -p 5000:5000 my-flask-app:1.0
## List and remove volumes
docker volume ls
docker volume rm db-data
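Named volumes live in Docker-managed storage, so backing one up is usually done through a throwaway container that mounts both the volume and a host directory — a common pattern from the Docker documentation (paths here are illustrative):

```shell
# Archive the contents of the db-data volume into ./db-data.tgz on the host
docker run --rm -v db-data:/data -v "$(pwd)":/backup alpine \
    tar czf /backup/db-data.tgz -C /data .
```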
Docker Compose — Running Multiple Containers
Real applications usually need multiple services — a web server, a database, a cache. Docker Compose lets you define and run all of them with a single file:
# docker-compose.yml
# (the top-level "version" key is obsolete in Compose v2 and omitted here)
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
    depends_on:
      - db
    volumes:
      - .:/app
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - db-data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
volumes:
  db-data:
docker compose up -d # start all services in background
docker compose down # stop and remove all containers
docker compose logs -f web # follow logs for the web service
docker compose ps # status of all services
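Note that depends_on only waits for the db container to start, not for Postgres to be ready to accept connections. If the web service crashes on startup because the database isn't up yet, a healthcheck-gated variant helps — a sketch (interval values are arbitrary):

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just container start
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```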
Best Practices
- Use specific image tags — never use :latest in production; pin to an exact version like python:3.12.3-slim.
- Multi-stage builds — use a build stage with full tooling, then copy only the final artifacts to a minimal runtime image.
- Never run as root — always create a non-root user in your Dockerfile.
- .dockerignore file — exclude node_modules/, .git/, *.pyc, and .env files from your build context.
- Keep images small — use slim or alpine base images, and clean up package caches in the same RUN layer.
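The multi-stage idea, sketched for the Flask app from earlier — the --prefix trick installs dependencies into a directory that the clean runtime stage can copy wholesale:

```dockerfile
# Stage 1: install dependencies (may pull in compilers, headers, etc.)
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: minimal runtime image — only the installed packages and the code
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
RUN useradd -m appuser
USER appuser
EXPOSE 5000
CMD ["python", "app.py"]
```

Only the second stage ends up in the final image; everything installed solely in the builder stage is discarded.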