Introduction to Docker

Okan Özşahin
Oct 19, 2023

In the dynamic world of modern software development and deployment, Docker has emerged as a game-changer, revolutionizing the way we package and run applications. Docker is a containerization platform that provides a standardized way to package and distribute applications, along with all their dependencies, in a lightweight, portable, and isolated environment.

At its core, Docker is an open-source platform that allows developers to create, deploy, and run applications as lightweight, self-sufficient containers. These containers encapsulate everything an application needs to run, such as code, runtime, system tools, libraries, and settings. Docker containers are designed to be consistent and predictable, ensuring that an application runs the same way across various environments, from a developer’s laptop to a production server. Docker containers are based on the principle of containerization, which means that they provide a consistent and isolated environment for applications, separate from the underlying infrastructure. This isolation allows multiple containers to run on the same host without interfering with each other.

Docker offers several compelling advantages for both developers and IT operations:

  1. Portability: Docker containers can run on any system that supports Docker, regardless of the underlying infrastructure. This means you can develop your application on your laptop, test it in a staging environment, and then deploy it to production without worrying about compatibility issues.
  2. Consistency: With Docker, you can ensure that an application behaves the same way in every environment, reducing the “it works on my machine” problem that often plagues software development.
  3. Resource Efficiency: Containers are lightweight and share the host system’s OS kernel, making them efficient in terms of system resource usage. You can run multiple containers on a single host without a significant performance overhead.
  4. Rapid Deployment: Docker containers can be started and stopped quickly, allowing for rapid application deployment and scaling. This is particularly beneficial in microservices architectures.
  5. Version Control: Docker enables you to version your application as a container image, making it easy to roll back to previous versions and maintain a history of changes.
  6. Ecosystem: Docker has a vibrant ecosystem with a vast library of pre-built container images available on Docker Hub, and it integrates well with other tools for orchestration and automation, such as Kubernetes and Docker Compose.

Key Docker Components

  1. Docker Engine: The Docker Engine is the core software that powers Docker. It is responsible for building, running, and managing Docker containers. The Docker Engine includes the Docker daemon (dockerd), which manages containers and images, and the Docker client (docker), which is used to interact with the daemon through a command-line interface or APIs.
  2. Docker Images: A Docker image is a lightweight, standalone, and executable package that includes an application and all its dependencies. Images are used as a blueprint for creating Docker containers. They are often based on a base image and can be customized by adding application code and configuration.
  3. Docker Containers: A Docker container is a runnable instance of a Docker image. It represents an isolated environment in which an application can run. Containers are defined by their image and can be started, stopped, and managed independently. They encapsulate an application and its runtime environment, providing consistency and isolation.
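
To see how these pieces fit together in practice, here is a minimal sketch using the standard Docker CLI; the nginx image and the container name web are just illustrative choices:

# Pull an image from a registry (the client asks the daemon to fetch it)
docker pull nginx
# Start a container from that image in the background
docker run -d --name web nginx
# List running containers and locally stored images
docker ps
docker images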

Understanding these fundamental components of Docker is crucial for getting started with containerization and leveraging the benefits of Docker in your software development and deployment processes. Docker’s ability to package applications into containers, ensuring consistency and portability, has made it an indispensable tool in the world of modern software development and DevOps.

Docker Images

Docker images are a fundamental concept in Docker that form the basis for containers. Images are lightweight, stand-alone, and executable packages that contain an application’s code, runtime, system tools, libraries, and settings. In this section, we’ll explore how to work with Docker images, including building custom images, pulling images from Docker Hub, and understanding image layers and caching.

Building custom Docker images allows you to package your applications and dependencies in a way that is repeatable and shareable. Here’s how to create your own Docker images:

1. Create a Dockerfile: A Dockerfile is a text file that contains instructions for building a Docker image. It specifies the base image, copies files, sets environment variables, and more. Here’s a simple example for a Node.js application:

# Use an official Node.js runtime as the base image
FROM node:14
# Set the working directory in the container
WORKDIR /app
# Copy package.json and package-lock.json to the container
COPY package*.json ./
# Install application dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose a port the application will run on
EXPOSE 3000
# Define the command to start the application
CMD ["npm", "start"]

2. Build the Docker Image: Navigate to the directory containing your Dockerfile and run the following command:

docker build -t my-node-app .

This command builds an image tagged my-node-app, using the current directory (where the Dockerfile is located) as the build context.

3. Run a Container from the Custom Image: You can create and run containers from your custom image using the following command:

docker run -p 4000:3000 my-node-app

This command maps port 4000 on your host to port 3000 in the container, allowing you to access the Node.js application running in the container.
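
To confirm the application is actually reachable, you can send a test request to the mapped host port; this assumes the Node.js app serves HTTP on port 3000 and that curl is available on the host:

# Request the app through the mapped host port
curl http://localhost:4000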

Pulling Images from Docker Hub

Docker Hub is a public repository of Docker images, containing a wide variety of pre-built images for popular software, programming languages, and services. You can easily pull these images to your local system.

Here’s how to pull an image from Docker Hub:

docker pull image_name:tag

For example, to pull the official Nginx web server image, you can run:

docker pull nginx:latest

This command fetches the Nginx image with the latest version.
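
Once pulled, the image is stored locally. You can verify this, and remove an image you no longer need, with the standard CLI commands:

# List images stored on the local machine
docker images
# Remove a local image
docker rmi nginx:latest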

Docker images are composed of multiple layers, each representing a set of file system changes. Understanding image layers and caching is essential for optimizing image build processes.

- Layered File System: Docker images are built using a layered file system. Each instruction in a Dockerfile creates a new layer. When you change an instruction in the Dockerfile and rebuild the image, only the layers affected by the changes are rebuilt, saving time and resources.

- Caching: Docker uses caching to speed up the image build process. It checks if a layer with the same content already exists in the cache. If it does, Docker reuses the cached layer rather than rebuilding it, significantly speeding up image builds.

However, caching can lead to unexpected behavior if not managed carefully, for example when a RUN instruction installs packages whose upstream versions have changed since the layer was cached. To force Docker to ignore the cache and rebuild every layer, pass the --no-cache option when building an image:

docker build --no-cache -t my-node-app .

By understanding image layers and caching, you can optimize your Dockerfile and image building process, making it more efficient and minimizing the time required to build and rebuild your Docker images.
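
One way to see the layers behind an image is the docker history command, which lists each layer along with the instruction that created it and its size (my-node-app here is the image built earlier):

# Show the layers of an image, newest first
docker history my-node-app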

Docker images are a core part of the Docker ecosystem, and the ability to create, share, and manage images is key to working effectively with containers. Whether you are using pre-built images from Docker Hub or crafting your custom images, Docker images are essential for containerizing applications and services.

Docker Containers

Docker containers are instances of Docker images that run as isolated environments on a host system. In this section, we’ll explore how to create and run containers, manage their lifecycle (start, stop, restart, and remove), and inspect container metadata.

Creating and Running Containers

1. Pull or Build an Image: You need an image to create a container. You can pull an image from Docker Hub, as shown in the previous section, or build a custom image with a Dockerfile.

2. Create a Container: Use the docker run command to create a container from an image. The basic syntax is:

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

For example, to run a simple Ubuntu container:

docker run -it ubuntu

- -it runs the container with an interactive terminal (-i keeps STDIN open and -t allocates a pseudo-TTY), giving you a shell inside the container.
- ubuntu is the image name.

3. Interact with the Container: Depending on the image and command you specified, you can now interact with the running container. In the case of the Ubuntu image, you have a shell inside the container.

4. Detach from the Container: If you want to detach from a running container without stopping it, press Ctrl + P, Ctrl + Q. This leaves the container running in the background.
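
To get back into a container you have detached from, you can reattach to its main process or open a fresh shell inside it; CONTAINER_ID below is a placeholder for the actual ID or name:

# Reattach to the container's main process
docker attach CONTAINER_ID
# Or start a new interactive shell inside the running container
docker exec -it CONTAINER_ID bash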

Managing Containers

To start a stopped container, use the docker start command:

docker start CONTAINER_ID

Replace CONTAINER_ID with the actual ID or name of the container.
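
To find a container's ID or name in the first place, list your containers; docker ps shows running containers, and the -a flag includes stopped ones as well:

# Running containers only
docker ps
# All containers, including stopped ones
docker ps -a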

To stop a running container, use the docker stop command:

docker stop CONTAINER_ID

You can restart a container using the docker restart command:

docker restart CONTAINER_ID

This is useful for recovering a misbehaving container or applying configuration changes that take effect on restart. Note that restarting a container does not update its image; to use a newer image, you need to create a new container from it.

To remove a container, use the docker rm command:

docker rm CONTAINER_ID

If the container is running, you need to stop it first. You can use docker stop in combination with docker rm like this:

docker stop CONTAINER_ID
docker rm CONTAINER_ID

Alternatively, you can use the -f option to forcefully remove a running container:

docker rm -f CONTAINER_ID

To inspect a container’s metadata and details, use the docker inspect command. This command provides a JSON-formatted output with extensive information about the container, including its configuration, network settings, environment variables, and more.

For example, to inspect a container with the name “my-container,” you would run:

docker inspect my-container

To extract specific information, you can use the -f option with a Go template to filter the output, or pipe the JSON through tools like jq. For example, to get the IP address of a container, you can use the following command:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' CONTAINER_ID

Replace CONTAINER_ID with the actual ID or name of the container.
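
If jq is installed on your system, the same kind of lookup can be done by piping the JSON output through it; this minimal sketch prints the IP address of each network the container is attached to:

# Print the container's IP address(es) using jq
docker inspect CONTAINER_ID | jq -r '.[0].NetworkSettings.Networks[].IPAddress'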

Docker containers are the building blocks of containerized applications. By creating, running, managing, and inspecting containers, you can efficiently deploy and maintain your applications and services, whether in development, testing, or production environments. Docker’s versatility in managing containers makes it a powerful tool for modern software development and deployment workflows.

Docker Networking

Docker provides a robust and flexible networking model for containers, allowing them to communicate with each other and the host system, as well as connect to external networks. In this section, we’ll explore the fundamentals of Docker networking, including understanding container networking, bridged networks, host networks, overlay networks, and port mapping for exposing services.

Docker containers can communicate with each other and the external world through different network modes. Each container has its own isolated network stack, which includes its IP address and network interfaces. Containers can be connected to multiple networks simultaneously, enabling various communication scenarios.

The bridge network is the default networking mode for Docker containers. In this mode:

- Containers run on a separate network bridge on the host.
- Each container receives its own IP address within the subnet of the bridge.
- Containers can communicate with each other over the bridge.
- By default, containers are not directly accessible from the host machine or external networks.

You can create custom bridge networks using the docker network create command. This is useful for isolating groups of containers or defining specific network settings.

docker network create my-network

You can then connect containers to the custom bridge network using the --network option when running a container:

docker run -d --network my-network my-app
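
You can also attach a container that is already running to the custom network, and inspect the network to see its settings and connected containers (my-network is the network created above):

# Connect a running container to the custom network
docker network connect my-network CONTAINER_ID
# List networks and inspect one in detail
docker network ls
docker network inspect my-network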

In host networking mode, a container shares the network namespace with the host system. This means the container uses the host’s network stack, including the same IP address. Host network mode can be useful when you want a container to have the same network configuration as the host, with no additional isolation.

To run a container in host network mode, use the --network host option:

docker run -d --network host my-app

Overlay networks are used in multi-host setups managed by an orchestrator, most notably Docker Swarm. These networks allow containers to communicate across multiple hosts, forming a virtual network that spans the entire cluster. Containers on different hosts can connect to each other using the same network name.

Overlay networks are typically used in swarm mode. To create an overlay network, you can use the following command:

docker network create --driver overlay my-overlay-network

Containers running in a Docker Swarm cluster can join this overlay network and communicate across different hosts seamlessly.

Port Mapping and Exposing Services

By default, containers can communicate with each other using their internal IP addresses and ports. However, to allow external access or connections from the host system, you often need to map container ports to host ports.

The -p or --publish option can be used when running a container to specify port mapping:

docker run -d -p 8080:80 my-web-app

In this example, the container’s port 80 is mapped to the host’s port 8080. This means you can access the web application within the container by using the host’s IP address and port 8080.

To publish all of a container's exposed ports without specifying host ports yourself, use the -P or --publish-all option:

docker run -d -P my-web-app

Docker will automatically choose an available port on the host and map it to the container’s exposed port.
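
To find out which host ports Docker picked, list the container's port mappings; CONTAINER_ID is a placeholder for the container started above:

# Show the port mappings of a container
docker port CONTAINER_ID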

Docker networking is a versatile and powerful feature that allows you to create complex network configurations for your containers, from isolated environments to highly interconnected clusters. Understanding these networking options is crucial for effectively deploying and managing containerized applications.

Storage in Docker

Docker provides several mechanisms for handling storage and data within containers. In this section, we’ll explore the concepts of container data persistence, Docker volumes, and bind mounts, and discuss best practices for data management in Docker.

Docker containers are ephemeral by design: data written to a container’s writable layer lives only as long as the container itself and is lost when the container is removed. To ensure data persists between container instances and survives the removal of a container, you can use the following storage mechanisms:

Docker volumes are a dedicated way to manage persistent data in Docker containers. They are managed by Docker and are isolated from the container’s file system. Volumes have several advantages:

- Data stored in volumes persists even when the container is removed.
- Volumes can be shared between multiple containers.
- Docker manages volume creation, storage, and cleanup.

You can create a volume using the docker volume create command:

docker volume create my-data-volume

You can then mount this volume when running a container:

docker run -d -v my-data-volume:/app/data my-app

The -v option specifies the volume to be mounted, and the :/app/data part indicates where the volume should be mounted inside the container.
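
Docker also provides commands for listing and inspecting volumes, which is handy for finding out where a volume's data actually lives on the host (my-data-volume is the volume created above):

# List all volumes known to Docker
docker volume ls
# Show details of a volume, including its mount point on the host
docker volume inspect my-data-volume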

Bind mounts allow you to map a directory from the host system into a container. Unlike volumes, bind mounts are not managed by Docker and provide more control over where data is stored. Use bind mounts when you need direct access to a host directory or when working with specific directories on the host.

To use a bind mount, specify the host directory path followed by a colon and the container path when running a container:

docker run -d -v /host/path:/container/path my-app

Data stored in the bind mount directory on the host will be available to the container. Bind mounts are suitable when you need to interact with data outside the container.
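
The same bind mount can also be written with the more explicit --mount syntax, which some prefer because it spells out the mount type; /host/path and /container/path are placeholders as above:

# Equivalent bind mount using the --mount flag
docker run -d --mount type=bind,source=/host/path,target=/container/path my-app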

Best Practices for Data Management

1. Use Volumes for Critical Data: For critical and important data, such as databases or configuration files, use Docker volumes to ensure data persistence and easy backup.

2. Prefer Named Volumes: When creating volumes, give them meaningful names that reflect their purpose. This makes it easier to manage and reference volumes in your Docker workflow.

3. Backup Your Data: Regularly back up the data stored in Docker volumes, especially if it’s critical to your applications. Docker volumes simplify data management but don’t replace the need for proper backups.

4. Minimize Data Writes: Reduce unnecessary data writes in containers to minimize volume usage. This can help improve performance and reduce the storage footprint.

5. Clear Unused Resources: Regularly clean up unused volumes and containers to free up storage space. Docker provides commands like docker volume prune and docker container prune for this purpose (see the example after this list).

6. Use Docker Compose: When working with multi-container applications, Docker Compose can simplify data management by defining volumes and bind mounts in a declarative manner.

7. Understand Permissions: Be aware of file permissions when sharing data between the host and containers. Ensure that the container process can access the data by setting appropriate permissions.
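
As mentioned in item 5, Docker ships commands for pruning unused resources; run them with care, since pruned data cannot be recovered:

# Remove all stopped containers
docker container prune
# Remove unused local volumes
docker volume prune
# Remove stopped containers, unused networks, dangling images, and build cache in one go
docker system prune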

By following these best practices, you can effectively manage data within Docker containers and ensure data persistence, accessibility, and reliability for your containerized applications. Docker’s storage mechanisms, such as volumes and bind mounts, provide the flexibility needed to address various data management requirements.


Okan Özşahin

Backend Developer at hop | Civil Engineer | MS Computer Engineering