Containerization in DevOps (repo included) — Decoding DevOps[03]

The GitHub repo is referenced below; the multi-container setup lives on the multi-vm branch.

In our previous discussions on DevOps fundamentals, we explored the concepts of virtualization and networking, among others. These foundational elements provide a strong base for understanding the powerful concept that is containerization. This article will delve into the world of containers and explain how they have revolutionized application development and deployment within the DevOps framework.

What is Containerization?

Containerization is a lightweight approach to packaging and running applications. Unlike virtual machines (VMs), which each require an entire operating system (OS) to run, containers share the host OS kernel, making them significantly more resource-efficient. Essentially, containers provide an isolated, self-contained environment for your application, ensuring consistent behavior across different platforms.

Enter Docker: A Game Changer

Docker is a leading platform for containerization, offering a runtime environment for developing, shipping, and running applications. It simplifies the process of creating and managing containers, facilitating seamless integration with DevOps workflows. Docker's popularity stems from features such as:

  • Image creation: Docker images are snapshots of an application’s runtime environment, including all dependencies and configurations. These images can be shared and reused across different environments, ensuring consistency.
  • Containerization: Docker containers encapsulate applications and their dependencies within isolated environments. This isolation ensures predictable behavior regardless of the underlying platform, improving consistency and minimizing compatibility issues.
  • Orchestration: Docker provides tools for managing multiple containers across different hosts, allowing for complex deployments and scaling.

Beyond the Basics: Docker Compose

For complex applications, Docker Compose comes into play. It allows you to define and manage multiple containers as a cohesive unit. Using a YAML file, you can specify services, dependencies, networking, and other configurations for each container.

Microservices and Containerization

Containerization is a perfect fit for the microservice architecture, which breaks down applications into smaller, independent services. Each service can be packaged and deployed in its own container, allowing for independent scaling and updates.

A Hands-on Approach

Containerization is best learned hands-on. Let’s see how to Dockerize a Rails application.

I have demonstrated how to build and run a containerized Rails application, both as a single unit and as a multi-container application using Docker Compose.

Overview

containerd_rails is a simple Rails application designed to run within a Docker container. This guide, with code on GitHub (bhavyansh001/containerd_rails), will walk you through the necessary steps to set up and manage the application, covering both single-container and multi-container setups.

Docker concepts we used in this project:

Docker Images:

  • The foundation of Docker is the concept of images. An image is a read-only template that contains all the necessary software and dependencies to run an application. In our example,
    `docker build -t app_image .`
    creates an image named app_image based on your project’s Dockerfile.

Docker Containers:

  • Containers are instances of Docker images. They are essentially running processes based on the image’s specifications. The command
    `docker run -it --user root --rm --entrypoint bash app_image`
    starts a container from the app_image and provides an interactive shell.

Dockerfile:

  • The Dockerfile is a text file that acts as a blueprint for building a Docker image. It defines the base image (e.g., a specific version of Ubuntu), installs necessary packages, sets environment variables, and specifies the application’s entry point. containerd_rails uses a Dockerfile to define the Rails application’s environment, dependencies, and how the application should run.

Dockerfile Instructions:

  • FROM
  • LABEL
  • RUN
  • ADD — copies files into the image and also unarchives local tar artifacts (the code to be deployed)
  • COPY — simply copies files without unarchiving
  • CMD — run binaries/commands
  • ENTRYPOINT — similar to CMD but with higher priority; when the two are used together, ENTRYPOINT holds the fixed command and CMD supplies overridable default arguments
  • VOLUME
  • EXPOSE
  • ENV
  • USER
  • WORKDIR
  • ARG
  • ONBUILD
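
To tie these instructions together, here is a minimal sketch of what a Rails Dockerfile might look like. The base image version, paths, and commands are illustrative assumptions, not taken from the repo:

```dockerfile
# Base image (illustrative version)
FROM ruby:3.2

# Metadata about the image
LABEL maintainer="you@example.com"

# Environment variables available at build time and runtime
ENV RAILS_ENV=production

# Working directory for all subsequent instructions
WORKDIR /rails

# Copy dependency manifests first to take advantage of layer caching
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the rest of the application code
COPY . .

# Document the port the app listens on
EXPOSE 3000

# Run as a non-root user for better security
USER 1000

# Fixed entrypoint; CMD supplies overridable default arguments
ENTRYPOINT ["bin/rails"]
CMD ["server", "-b", "0.0.0.0"]
```

Note the ordering: copying the Gemfile and running `bundle install` before copying the full source means dependency layers are rebuilt only when the Gemfile changes.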

Multi-stage Dockerfile

When we want our Dockerfile to do two things, namely 1. build the artifact, and 2. build a lean, tested image from that artifact, we set up a multi-stage Dockerfile.
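
A minimal multi-stage sketch (stage names, base images, and paths are assumptions for illustration): the first stage does the heavy build work, and the second copies only the finished artifact, producing a smaller final image.

```dockerfile
# Stage 1: build the artifact
FROM ruby:3.2 AS builder
WORKDIR /rails
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
RUN bin/rails assets:precompile

# Stage 2: final image containing only what is needed at runtime
FROM ruby:3.2-slim
WORKDIR /rails
# Pull the built application out of the builder stage
COPY --from=builder /rails /rails
EXPOSE 3000
CMD ["bin/rails", "server", "-b", "0.0.0.0"]
```

Build tools and intermediate files stay in the builder stage and never reach the image you ship.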

Docker Compose:

  • For managing multi-container applications, Docker Compose comes into play. It uses a docker-compose.yml file to define and orchestrate multiple containers. The
    `docker-compose up`
    command starts and links the defined containers, creating a complete application environment. In the example, Docker Compose is used to create containers for both the Rails application and the PostgreSQL database.

Docker Compose instructions:

  • version — the Compose file format version
  • services — one entry per container (e.g. db and the app), using keys such as build, context, restart, image, container_name, ports, volumes, and environment
  • volumes — named volumes, declared as name: {} to use Docker’s default storage location
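
Putting those keys together, a docker-compose.yml for a Rails app plus PostgreSQL might look like the sketch below. Service names, credentials, and image tags are illustrative, not copied from the repo:

```yaml
version: "3.8"

services:
  db:
    image: postgres:15
    container_name: rails_db
    restart: always
    environment:
      POSTGRES_PASSWORD: password   # illustrative credential
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume for persistence

  web:
    build:
      context: .        # build from the local Dockerfile
    container_name: rails_web
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:password@db:5432/app_production
    depends_on:
      - db

volumes:
  pgdata: {}   # stored at Docker's default location
```

With this file in place, `docker-compose up` brings up both containers on a shared network, where the web service can reach the database by the hostname db.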

Bind Mounts:

  • Bind mounts are a way to share directories between the host machine and a Docker container. This allows you to make changes to the application’s code on the host machine and see those changes reflected in the running container without having to rebuild the image. In the example, the command
    `docker run -p 3000:3000 -v $(pwd):/rails app_image`
    uses bind mounts to link the local project directory ($(pwd)) to the /rails directory inside the container.

Docker Volumes:

  • Docker volumes are persistent storage mechanisms that are separate from the container’s file system. They allow you to store data that needs to persist even if the container is stopped or deleted. While the example doesn’t directly use volumes, it is important to understand them as a robust way to manage persistent data. (For instance, you might use volumes to store databases, logs, or uploaded files).
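
As a sketch of how volumes could be used with a database container (the volume and container names are illustrative; these commands require a running Docker daemon):

```
# Create a named volume managed by Docker
docker volume create pgdata

# Mount it into a PostgreSQL container; data survives container removal
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:15

# Inspect where Docker stores the volume on the host
docker volume inspect pgdata
```

Unlike a bind mount, the volume's location on the host is managed by Docker, which makes it portable across setups and safer to back up or migrate.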

Docker Hub:

  • Docker Hub is a central repository for storing and sharing Docker images. It allows developers to easily publish and retrieve images, fostering collaboration and code reusability.
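
The typical publish workflow looks like this (the account and repository names below are hypothetical):

```
# Log in, tag the local image under your Docker Hub namespace, and push it
docker login
docker tag app_image youruser/containerd_rails:latest
docker push youruser/containerd_rails:latest

# Anyone can then pull and run the published image
docker pull youruser/containerd_rails:latest
```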

Benefits of Containerization in DevOps

Containerization offers several advantages for DevOps:

  • Agility: Containers accelerate the development and deployment process, enabling faster iteration cycles.
  • Flexibility: Containers can run on different platforms without modification, providing portability and flexibility.
  • Efficiency: Containers are lightweight and require fewer resources than VMs, improving resource utilization.
  • Scalability: Docker tools provide robust mechanisms for scaling containers up or down based on demand, ensuring optimal performance.

By understanding the fundamentals of containerization and Docker, we have gained valuable insights into a key aspect of modern application development and deployment. As we delve further into the world of DevOps, exploring tools like Kubernetes and others will equip us with the skills to build, deploy, and manage applications efficiently and effectively. We will be covering every major concept on this blog. Stay tuned!

Let’s connect on X @ bhavyansh001
