Docker for Beginners
2026-03-19
Docker changed how developers build, ship, and run software. If you've heard the term but haven't dug into it yet, this is your starting point. No fluff, just the concepts you need to understand what Docker does and why it matters.
What Is Docker?
Docker is a platform that packages your application and everything it needs to run — code, runtime, libraries, system tools, configuration — into a single unit called a container. That container runs the same way on your laptop, your colleague's machine, a test server, and production.
The classic "it works on my machine" problem? Docker kills it.
Containers vs Virtual Machines
This is the first thing people get confused about. Containers and virtual machines both provide isolation, but they do it differently.
A virtual machine runs a full operating system on top of a hypervisor. Each VM has its own kernel, its own OS, its own set of system processes. That's heavy. A single VM might consume gigabytes of disk and RAM before your application even starts.
A container shares the host operating system's kernel. It only packages the application layer — your code and its dependencies. This makes containers dramatically lighter. They start in seconds, use a fraction of the resources, and you can run many more of them on the same hardware.
Think of it this way: VMs virtualize the hardware, containers virtualize the operating system. Both have their place, but for most application workloads, containers are the better fit.
Images and Containers
Two terms you need to keep straight. An image is the blueprint — a read-only template that defines what's inside the container. A container is a running instance of that image.
You build an image once. You can run as many containers from it as you want. Each container is isolated from the others. If one crashes, the rest keep running.
Images are layered and cached, which makes builds fast. When you change one line of code, Docker only rebuilds the layers that changed — not the entire image.
The Dockerfile
A Dockerfile is a text file that tells Docker how to build your image. It's a set of instructions executed in order. Here's a simple example for a Node.js application:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Let's break that down:
- FROM sets the base image. You're starting from an existing Node.js image rather than building from scratch.
- WORKDIR sets the working directory inside the container.
- COPY copies files from your machine into the image.
- RUN executes a command during the build — in this case, installing dependencies.
- EXPOSE documents which port the app listens on. It doesn't actually publish the port — that happens at run time with the -p flag.
- CMD defines the default command to run when the container starts.
That's it. With these few lines, anyone with Docker installed can build and run your application. No need to install Node.js, no dependency conflicts, no environment mismatch.
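In practice, that build-and-run cycle is two commands. A minimal sketch, assuming the Dockerfile above sits in the current directory (the tag my-node-app is just an example name):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-node-app .

# Start a container from it, publishing container port 3000 on host port 3000
docker run --rm -p 3000:3000 my-node-app
```

The --rm flag removes the container when it exits, which keeps local experiments tidy.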
Why Developers Use Docker
Consistency Across Environments
Development, staging, and production all run the same container. Configuration differences between environments are handled through environment variables, not by hoping the right versions are installed on each server.
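For example, the same image can be pointed at different backends per environment purely through flags. A sketch with hypothetical variable names (NODE_ENV, API_URL) and the example image from above:

```shell
# Identical image in every environment; only the configuration differs.
# NODE_ENV and API_URL are illustrative variable names, not Docker built-ins.
docker run --rm \
  -e NODE_ENV=production \
  -e API_URL=https://api.example.com \
  my-node-app
```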
Fast Onboarding
New team member? Clone the repo, run docker compose up, and the entire application stack is running locally. Database, cache, API, frontend — all of it. No spending a full day setting up a development environment.
Isolation
Running a Python 2 project alongside a Python 3 project? No problem. Each container has its own dependencies. They don't interfere with each other or with your host system.
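To see this isolation concretely, you can run two interpreter versions side by side using the official Python images, with neither installed on the host:

```shell
# Each container brings its own interpreter and dependencies
docker run --rm python:2.7 python --version
docker run --rm python:3.12 python --version
```

Both containers exit cleanly and leave the host system untouched.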
Reproducible Builds
The same Dockerfile produces the same image every time (assuming you pin your dependency versions, which you should). This makes debugging easier because you can reproduce the exact environment where a bug occurred.
Docker in CI/CD Pipelines
This is where Docker really earns its keep in a team setting. Your CI/CD pipeline builds a Docker image, runs tests inside it, and then deploys that exact image to production. No more "the tests passed but production is different."
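As a rough sketch of that pattern, a GitHub Actions job might build the image once and run the test suite inside it. The job name, image tag, and test command here are placeholders, not a prescribed setup:

```yaml
name: ci
on: push
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build one image, tagged with the commit SHA
      - run: docker build -t myapp:${{ github.sha }} .
      # Run the tests inside that exact image
      - run: docker run --rm myapp:${{ github.sha }} npm test
```

The image that passed the tests is the image you deploy — no rebuild in between.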
If you're using GitHub Actions for deployments, Docker fits naturally into that workflow. My post on automating AWS deployments with GitHub Actions covers how these pieces connect.
The same principle applies to AWS-native build tools. AWS CodeBuild runs your builds inside Docker containers by default. Understanding Docker makes those tools far less mysterious.
Docker Compose
Most real applications aren't a single container. You've got a web server, a database, maybe a cache or message queue. Docker Compose lets you define multi-container applications in a single YAML file and bring them all up with one command.
This is incredibly useful for local development and testing. Instead of installing PostgreSQL, Redis, and your application separately, you define them all in a docker-compose.yml and run docker compose up.
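A minimal docker-compose.yml for a hypothetical app backed by PostgreSQL and Redis might look like this (service names, ports, and credentials are all illustrative):

```yaml
services:
  web:
    build: .                 # build from the Dockerfile in this directory
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on:
      - db
      - cache
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: app
  cache:
    image: redis:7-alpine
```

docker compose up starts all three services on a shared network, where each is reachable by its service name; docker compose down tears everything back down.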
Getting Started
Install Docker Desktop on your machine. Pull a simple image like nginx or hello-world and run it. Then write a Dockerfile for one of your own projects. That hands-on experience will teach you more than any tutorial.
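Those first steps look something like this once Docker Desktop is running (demo-nginx is just an example container name):

```shell
# Smoke-test the install with Docker's hello-world image
docker run hello-world

# Run nginx in the background, mapped to localhost:8080, then stop it
docker run -d --rm -p 8080:80 --name demo-nginx nginx
docker stop demo-nginx
```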
If you're newer to the tooling side of development, understanding Git first will help. Docker and Git are the two foundational tools that modern development workflows are built on. And once you're comfortable with both, you'll find that the secrets of cloud engineering start making a lot more sense.
Docker isn't complicated. It's just different from what most of us learned first. Start small, containerize one thing, and build from there.