If you have spent any time in software development or DevOps, you have almost certainly heard the phrase: "But it works on my machine!" This is the problem Docker was built to solve. Docker packages your application, its dependencies, and its environment into a single portable unit called a container — so it runs exactly the same way everywhere, whether that's your laptop, a colleague's machine, or a server in the cloud.

In this guide, we'll cover everything you need to know to get started with Docker: what containers are, how Docker works, how to write a Dockerfile, and how to run multi-service applications with Docker Compose.

What is a Container?

A container is a lightweight, isolated environment that runs an application and everything it needs to run — libraries, configuration files, and dependencies — bundled together as a single package. Containers share the host operating system's kernel but are isolated from each other at the process and filesystem level.

This is different from a virtual machine (VM). A VM includes its own full operating system, which makes it heavy (several gigabytes) and slow to start (minutes). A Docker container, by contrast, starts in seconds and adds only megabytes of overhead, because it shares the host OS kernel instead of bundling its own.

Container vs Virtual Machine

A virtual machine virtualizes hardware and runs a full OS. A container virtualizes only the application layer and shares the host kernel. This makes containers faster to start, smaller in size, and more efficient to run at scale.
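You can see the kernel sharing for yourself: a container reports the same kernel version as the host, because there is no guest OS in between. A quick check (assuming Docker is installed and can pull the alpine image):

```shell
# Kernel version on the host
uname -r

# Kernel version inside a throwaway Alpine container --
# it prints the same value, because the container shares the host kernel
docker run --rm alpine uname -r
```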

Installing Docker

Docker Desktop is the easiest way to install Docker on Windows and Mac. On Linux (Ubuntu), run:

# Update the package list and install Docker
sudo apt-get update
sudo apt-get install -y docker.io

# Start the Docker daemon and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker

# Add your user to the docker group (so you don't need sudo)
# Log out and back in for the group change to take effect
sudo usermod -aG docker $USER

# Verify the installation
docker --version

Key Docker Concepts

Docker Image

A Docker image is a read-only template that contains the instructions for creating a container. Think of it like a blueprint or a recipe. Images are built from a Dockerfile and can be stored in a registry like Docker Hub or Amazon ECR. When you run an image, Docker creates a container from it.

Docker Container

A container is a running instance of an image. You can start, stop, pause, and delete containers. Containers are isolated from each other and from the host system, but they can be connected to networks and storage volumes.
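As an illustrative sketch of connecting a container to a network and a volume (the names app-net and app-data here are made up for the example):

```shell
# Create a user-defined network and a named volume
docker network create app-net
docker volume create app-data

# Run a Postgres container attached to both; other containers on
# app-net can reach it by its container name "db", and its data
# survives container deletion because it lives in the named volume
docker run -d --name db --network app-net \
  -v app-data:/var/lib/postgresql/data \
  postgres:15-alpine
```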

Docker Hub

Docker Hub is Docker's public registry — a library of pre-built images you can use as starting points. Instead of building from scratch, you can base your image on an official Ubuntu, Node.js, Python, or Nginx image and just add your application on top.

Your First Docker Commands

# Pull an image from Docker Hub
docker pull nginx

# Run a container (nginx web server on port 8080)
docker run -d -p 8080:80 --name my-nginx nginx

# List running containers
docker ps

# View container logs
docker logs my-nginx

# Stop and remove the container
docker stop my-nginx
docker rm my-nginx

# List all images on your machine
docker images
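After the docker run command above publishes port 8080, you can confirm nginx is actually serving traffic (assuming curl is installed and the container is running):

```shell
# Request the nginx welcome page through the published port;
# the default page contains the phrase "Welcome to nginx!"
curl -s http://localhost:8080 | grep -i "welcome to nginx"
```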

Writing a Dockerfile

A Dockerfile is a plain text file with instructions that tell Docker how to build your image. Here is a Dockerfile for a simple Node.js application:

# Start from the official Node.js 20 image
FROM node:20-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package files and install dependencies
COPY package*.json ./
RUN npm install --production

# Copy the rest of your application code
COPY . .

# Expose port 3000
EXPOSE 3000

# Command to start the application
CMD ["node", "server.js"]

To build this into an image and run it:

# Build the image and tag it as "my-app:1.0"
docker build -t my-app:1.0 .

# Run a container from it
docker run -d -p 3000:3000 --name my-app my-app:1.0
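Before running docker build, it also helps to add a .dockerignore file next to the Dockerfile, so that COPY . . doesn't drag node_modules, git history, or local environment files into the image. A typical starting point (adjust to your project):

```
node_modules
npm-debug.log
.git
.env
Dockerfile
.dockerignore
```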

Docker Compose — Running Multiple Containers

Most real applications need multiple services — a web server, an API, a database. Docker Compose lets you define and run all of them together using a single YAML file. Here is an example for a Node.js API with a PostgreSQL database:

version: '3.9'

services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/mydb
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
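One caveat with this file: depends_on only controls start order, it does not wait for Postgres to be ready to accept connections. Compose can gate startup on a healthcheck instead. A sketch of that variant for the same two services (intervals and retry counts are illustrative):

```yaml
services:
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
```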

Start both services with a single command:

# Start all services defined in docker-compose.yml
docker-compose up -d

# Stop and remove all services
docker-compose down

Note: on current Docker installations, Compose ships as a plugin and is invoked as "docker compose" (with a space); "docker-compose" is the older standalone binary. Both accept the same commands.

Pushing Your Image to Docker Hub

Once you've built your image, you can push it to Docker Hub so it can be pulled and run anywhere — including on your AWS EC2 server or in a Kubernetes cluster.

# Log in to Docker Hub
docker login

# Tag your image with your Docker Hub username
docker tag my-app:1.0 yourusername/my-app:1.0

# Push to Docker Hub
docker push yourusername/my-app:1.0

Docker Best Practices

A few habits go a long way toward small, fast, and secure images:

- Pin your base image versions (node:20-alpine, not node:latest) so builds are reproducible.
- Prefer small base images such as the alpine variants to reduce image size and attack surface.
- Order Dockerfile instructions from least to most frequently changing. That is why we copied package*.json and ran npm install before copying the rest of the code: the dependency layer stays cached between builds.
- Add a .dockerignore file so the build context excludes node_modules, .git, and local configuration.
- Run one process per container, and avoid running it as root where possible.
- Never bake secrets (passwords, API keys) into an image; pass them in at runtime as environment variables or secrets.
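One practice worth showing in full is the multi-stage build: install or compile in one stage, then copy only the artifacts into a slim final image. A sketch for the Node.js app above (the stage name "build" is arbitrary):

```dockerfile
# Stage 1: install production dependencies
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install --production

# Stage 2: copy only what the app needs at runtime;
# build tools and npm cache from stage 1 are left behind
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```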

What to Learn Next

Once you're comfortable with Docker basics, the natural next step is Kubernetes — the system for managing containers at scale across multiple servers. Kubernetes (often called K8s) builds on the same container concepts you've learned here and adds scheduling, health monitoring, auto-scaling, and service discovery.

Learn Docker & DevOps with Real Projects

Our DevOps Engineering Program covers Docker, Kubernetes, AWS, CI/CD, and more — with live sessions and job placement included.

View DevOps Course →