Docker Fundamentals

Docker is a popular open source platform for building, running and deploying software applications in a containerized environment.

Docker was first introduced in 2013 as an open-source project, and it quickly gained popularity in the software development community. Docker containers are based on a technology called "Linux containers" (LXC), which allows multiple isolated Linux systems (containers) to run on a single host. Docker extends the capabilities of LXC by providing a user-friendly interface and toolset for managing containers.

A container can be seen as a standardized software/application unit: a bundle of the actual application source code together with all of the dependencies required to run that code. Everything ships as one unit and can be deployed in a standardized way to multiple environments - with the guarantee that it will behave the same as on the development environment, since it is shipped with all the necessary dependencies in the required versions.

The main difference between virtual machines and containers is the overhead that a virtual machine comes with. Even if you could set up a virtual machine for your application and all of its requirements, you would still need to set up a separate virtual guest OS for each virtual machine.

The main advantages of container solutions are:

  • They are very fast and use far fewer resources compared to a virtual machine
  • No impact on the host OS, since they are built on encapsulated apps and environments
  • Easy and fast rebuilding and distribution of containers and full solutions



Containers vs Images


While Docker containers and images are closely related, they serve different purposes and have different properties.

Docker images are read-only templates that are used to create Docker containers. Images contain all the code, libraries, and dependencies needed to run an application, along with instructions for how to run it. Docker images can be thought of as a snapshot of an application and its environment at a particular point in time.

Images are built using a Dockerfile, which is a script that describes the environment and dependencies needed to run an application. Dockerfiles can be version controlled, shared with other developers, and used to rebuild the image at any time. Docker images can be stored in a registry, such as Docker Hub or a private registry, and can be shared with other developers or deployed to production environments.
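As a minimal sketch, a Dockerfile for a Node.js application could look like the following (the base image, paths, and `server.js` entry point are illustrative assumptions):

```dockerfile
# Start from an official base image
FROM node:18-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency manifest and install dependencies
COPY package.json .
RUN npm install

# Copy the application source code
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```

Such an image would be built with `docker build -t myapp .` and could then be pushed to a registry with `docker push`.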

Docker containers, on the other hand, are running instances of Docker images. Containers can be thought of as lightweight, portable, and self-contained runtime environments. Each container runs in isolation from other containers and the host system, and has its own file system, network interfaces, and processes.

Containers are created by running a Docker image using the `docker run` command. Containers can be started, stopped, and managed using Docker commands or tools such as Docker Compose or Kubernetes. Multiple containers can be run from the same image, with each container having its own set of runtime parameters such as environment variables or ports.

A container is the running instance, executing your image with all the included code and applications. This way you can run your image on multiple and different instances without any effort.
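As a sketch (the `myapp` image, container names, and ports are illustrative assumptions), two containers can be started from the same image with different runtime parameters:

```shell
# Start two containers from the same hypothetical image "myapp",
# each with its own name, port mapping, and environment variable
docker run -d --name myapp-dev  -p 3000:3000 -e APP_ENV=development myapp
docker run -d --name myapp-prod -p 8080:3000 -e APP_ENV=production  myapp

# List running containers; stop and remove one of them
docker ps
docker stop myapp-dev
docker rm myapp-dev
```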

You can use already existing images from https://hub.docker.com, choosing official Docker images or images created by the community. Or you can create images from scratch. Customizing an image requires writing your own Dockerfile to describe the customization of your Docker image.

Images are layer based. Each Docker image references a cached list of read-only layers that represent the changes introduced by the instructions in the Dockerfile. The layers are stacked on top of each other to make up the custom image.


{% include note.html content="Important to know: if one layer changes (one line in the Dockerfile), all subsequent layers (instructions in the Dockerfile) will be executed again. It is therefore important to structure the layers in a smart way to avoid unnecessary re-execution of subsequent layers." %}
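To illustrate (paths and commands are assumptions for a typical Node.js project), copying the dependency manifest and installing dependencies before copying the frequently changing source code keeps the expensive install layer cached:

```dockerfile
FROM node:18-alpine
WORKDIR /app

# These layers are only rebuilt when package.json changes
COPY package.json .
RUN npm install

# Source code changes often, so this layer comes last;
# edits here do not invalidate the cached npm install layer above
COPY . .

CMD ["node", "server.js"]
```

If `COPY . .` came first, every code change would invalidate the cache and force `npm install` to run again on each build.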


Volumes


There might be different kinds of data required by applications or services built with Docker. The code and framework required for the application are added to the image itself. The data inside the image is static and cannot be changed, since images and their layers are read-only.

In some situations the application requires storage to write data - either temporary or permanent data that is manipulated during requests or at runtime. Docker itself creates a read-write layer when running the container, but this layer is removed once the container is deleted.


In case persistent data is required, Docker allows you to create named volumes. Volumes are folders on the host machine that are managed by Docker and mounted into the container.


Bind mounts should only be used if the data needs to be accessed or changed while the container is running. During the development phase this makes it possible to change the code without recreating the image.

Add anonymous volume

docker run -v /path/in/container <image>

Add named volume

docker run -v name:/path/in/container <image>

Add bind mount

docker run -v /path/on/host:/path/in/container <image>

List volumes

docker volume ls
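
Putting the flags together (the `myapp` image, volume name, and paths are illustrative assumptions), a named volume keeps data across container restarts, while a bind mount maps local source code into the container:

```shell
# Create and inspect a named volume managed by Docker
docker volume create app-data
docker volume inspect app-data

# Run a hypothetical image with a named volume for persistent data
# and a bind mount for the local source code
docker run -d --name myapp \
  -v app-data:/app/data \
  -v "$(pwd)/src:/app/src" \
  myapp

# Remove a volume that is no longer needed
docker volume rm app-data
```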
This post is licensed under CC BY 4.0 by the author.