Docker Notes By ShariqSP

Introduction to Docker

Docker is an open-source platform that automates the deployment of applications within lightweight, portable containers. These containers encapsulate everything an application needs to run, including code, runtime, system tools, and libraries, ensuring consistency across different environments.

Why We Need Docker

Traditional methods of application deployment often face challenges related to dependencies, environment configurations, and deployment consistency. Docker addresses these issues by providing a standardized way to package, distribute, and run applications, regardless of the underlying infrastructure.

How Docker Solves Real-world Problems

Docker simplifies the process of software development and deployment by enabling developers to build, ship, and run applications efficiently. It allows for rapid deployment, scalability, and isolation, making it easier to manage complex microservices architectures and streamline continuous integration/continuous deployment (CI/CD) pipelines. Additionally, Docker facilitates collaboration among development teams and promotes a DevOps culture by bridging the gap between developers and operations.

Important Things to Remember About Docker

  • Containerization: Docker uses containerization technology to create lightweight, standalone containers that encapsulate applications and their dependencies.
  • Docker Images: Images serve as blueprints for containers, containing everything needed to run an application. They are created using Dockerfiles, which define the steps to build the image.
  • Container Lifecycle: Containers can be started, stopped, paused, and deleted. The underlying image is immutable; changes made inside a running container are lost when it is removed, unless they are explicitly committed to a new image.
  • Orchestration: Docker Swarm and Kubernetes are popular tools for orchestrating and managing containerized applications across clusters of machines.
  • Networking and Storage: Docker provides networking and storage solutions for connecting containers and persisting data.
  • Security: Docker implements various security features to ensure isolation between containers and host systems, including namespace isolation, control groups (cgroups), and Docker Content Trust.

Docker Architecture and Components

Docker's architecture comprises several components working together to enable containerization and streamline application deployment. Understanding these components is essential for effectively utilizing Docker.

Docker Daemon

The Docker daemon (dockerd) is a background process running on the host machine. It manages Docker objects such as images, containers, networks, and volumes. The daemon listens for Docker API requests and executes them.

Docker Client

The Docker client (docker) is a command-line tool that allows users to interact with the Docker daemon via the Docker API. Users can use the Docker client to build, manage, and run Docker containers, as well as perform other administrative tasks.

Docker Images

A Docker image is a read-only template containing the application code, runtime environment, system tools, libraries, and other dependencies needed to run a container. Images are built using Dockerfiles and stored in a registry, such as Docker Hub or a private registry.

Docker Containers

A Docker container is a runnable instance of a Docker image. Containers encapsulate an application along with its dependencies and isolate it from the host system and other containers. Multiple containers can run on the same host, each with its own filesystem, network, and process space.

Docker Registry

A Docker registry is a repository for Docker images. It stores Docker images, making them available for distribution and deployment across different environments. Docker Hub is the default public registry, while organizations often use private registries for hosting proprietary images.

Docker Networking

Docker provides networking capabilities to enable communication between containers running on the same host or across multiple hosts. Docker networking allows containers to connect to each other, expose ports, and interact with external networks.

Docker Volumes

Docker volumes are persistent storage mechanisms used for storing data generated by containers. Volumes enable data sharing and persistence across container restarts and updates. Docker volumes can be managed and shared between containers or mounted from external storage systems.
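The volume behavior described above can be sketched with Docker Compose; the service name db and the volume name db-data below are placeholders, not names from this document:

```yaml
# Hypothetical Compose file: a MySQL service backed by a named volume.
version: '3'
services:
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example-password   # placeholder credential
    volumes:
      - db-data:/var/lib/mysql   # database files survive container removal
volumes:
  db-data:   # named volume created and managed by Docker
```

After `docker-compose up`, the volume appears in `docker volume ls`, and recreating the db container reuses the same data.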

Docker Images and Docker Containers

Docker images and containers are fundamental concepts in Docker that play crucial roles in the containerization process. Understanding the differences and functionalities of images and containers is essential for efficiently utilizing Docker.

Docker Images

A Docker image is a lightweight, standalone, and executable software package that contains everything needed to run a specific application. This includes the application code, runtime environment, system libraries, dependencies, and configurations. Docker images are built using a declarative syntax defined in a Dockerfile, which specifies the steps required to create the image.

Key Characteristics of Docker Images:

  • Immutable: Docker images are immutable, meaning they are read-only and cannot be modified once created. Any changes made to an image result in the creation of a new image layer.
  • Layered: Docker images are composed of multiple layers, with each layer representing a discrete set of filesystem changes. Layered architecture enables efficient image distribution, storage, and sharing.
  • Reusable: Docker images are designed to be reusable across different environments and platforms, promoting consistency and reproducibility in application deployments.
  • Versioned: Docker images can be versioned and tagged to track changes and facilitate collaboration among development teams.
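These characteristics are easiest to see in a Dockerfile, where each instruction produces one cached, reusable layer; the file below is an illustrative sketch (app.py and requirements.txt are placeholders):

```dockerfile
FROM python:3.12-slim                 # base image layers, pulled once and shared
WORKDIR /app                          # small layer recording the working directory
COPY requirements.txt .               # layer containing only the dependency list
RUN pip install -r requirements.txt   # layer with the installed packages
COPY . .                              # layer with the application code
CMD ["python3", "app.py"]             # metadata only; adds no filesystem layer
```

Building with `docker build -t myapp:1.0 .` versions the result with a tag; editing only the application code invalidates just the final COPY layer, so rebuilds stay fast.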

Docker Containers

A Docker container is a runnable instance of a Docker image. Containers encapsulate an application along with its dependencies and runtime environment, providing process isolation from the host system and other containers. Containers are lightweight, portable, and isolated execution environments that enable efficient resource utilization and scalability.

Key Characteristics of Docker Containers:

  • Isolated: Docker containers run in isolated user space environments, with their own filesystem, network, and process namespace. This isolation ensures that containers do not interfere with each other or the host system.
  • Ephemeral: Docker containers are ephemeral by nature, meaning they are designed to be transient and disposable. Containers can be easily started, stopped, paused, and deleted, facilitating rapid application deployment and scaling.
  • Stateless: Docker containers are typically stateless, with application state stored externally in volumes or databases. Stateless containers promote horizontal scalability and fault tolerance in distributed systems.
  • Portable: Docker containers are highly portable and can be run consistently across different environments, including development, testing, staging, and production, without modification.

Capabilities of Docker Images and Containers

Docker images and containers offer a wide range of capabilities that empower developers, operations teams, and organizations to streamline the software development lifecycle, improve efficiency, and enhance scalability. Below are some of the key tasks and functionalities that can be accomplished using Docker images and containers:

Rapid Application Deployment

Docker enables developers to package applications and their dependencies into portable, self-contained images. These images can be easily distributed and deployed across different environments, facilitating rapid application deployment and reducing time-to-market.

Environment Consistency

By using Docker images, organizations can ensure consistency between development, testing, staging, and production environments. Docker containers run consistently across different platforms and infrastructure, eliminating the "it works on my machine" problem and promoting collaboration among development teams.

Scalability and Resource Efficiency

Docker containers are lightweight, portable, and isolated execution environments that enable efficient resource utilization and scalability. Containers can be easily scaled up or down based on demand, allowing organizations to optimize resource usage and improve application performance.

Microservices Architecture

Docker facilitates the adoption of microservices architecture by providing a lightweight and flexible platform for building, deploying, and managing microservices-based applications. Containers encapsulate individual microservices, enabling independent development, deployment, and scaling of components.

Continuous Integration/Continuous Deployment (CI/CD)

Docker integrates seamlessly with CI/CD pipelines, enabling organizations to automate the build, test, and deployment processes. Docker images serve as the building blocks for CI/CD workflows, allowing for consistent and reliable delivery of software updates and releases.
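As one hypothetical sketch of that integration, a GitHub Actions workflow can build and push an image on every commit; the image name myuser/myapp and the secret names below are assumptions, not values from this document:

```yaml
# .github/workflows/docker.yml — illustrative CI pipeline
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
      - name: Build the image
        run: docker build -t myuser/myapp:${{ github.sha }} .
      - name: Push the image
        run: docker push myuser/myapp:${{ github.sha }}
```

Tagging with the commit SHA gives every build a traceable, reproducible image.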

DevOps Collaboration

Docker promotes collaboration between development and operations teams by providing a common platform for building, deploying, and managing applications. Docker containers encapsulate application code and dependencies, enabling developers and operations teams to work together seamlessly and iterate rapidly.

Isolation and Security

Docker containers provide process isolation from the host system and other containers, enhancing security and minimizing the risk of application conflicts and vulnerabilities. Docker implements various security features, such as namespaces, control groups, and Docker Content Trust, to ensure the integrity and isolation of containers.

Docker Installation

Follow these detailed steps to install Docker on Ubuntu:

  1. Add Docker's official GPG key. This ensures the authenticity of the Docker packages:

    sudo apt-get update
    sudo apt-get install ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
  2. Add the Docker repository to the APT sources list:

    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update
  3. Install the Docker packages from the repository you just added (installing docker.io would pull Ubuntu's community package instead):

    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Once installed, Docker should be ready to use on your Ubuntu system.

Understanding sudo docker run hello-world

Let's break down the command sudo docker run hello-world step by step:

  1. sudo: This command is used to run the subsequent command with superuser privileges. Docker typically requires superuser privileges to perform certain actions.
  2. docker: This is the Docker command-line interface (CLI) tool used for interacting with Docker containers and images.
  3. run: This subcommand is used to create and run a container based on a Docker image.
  4. hello-world: This is the name of the Docker image that you want to run as a container. The hello-world image is a simple container that prints a message indicating that your Docker installation is working correctly.

When you execute sudo docker run hello-world, Docker performs the following steps:

  1. Checks whether the hello-world image exists locally. If not found locally, Docker pulls the image from the Docker Hub repository.
  2. Creates a new container from the hello-world image.
  3. Starts the container.
  4. The container executes its main command, which in the case of the hello-world image is to print a simple message confirming that your Docker installation is working correctly.
  5. After printing the message, the container stops and exits.

Executing this command is a simple way to verify that your Docker installation is functional and able to run containers.

Docker Commands

docker run

Starts a new Docker container from an image.

docker run -it ubuntu:latest

docker build

Builds a Docker image from a Dockerfile.

docker build -t myapp .

docker pull

Pulls a Docker image from a registry.

docker pull nginx:latest

docker push

Pushes a Docker image to a registry.

docker push myuser/myimage:latest

docker ps

Lists running containers.

docker ps

docker ps -a

Lists all containers (including stopped ones).

docker ps -a

docker exec

Runs a command in a running container.

docker exec -it mycontainer bash

docker stop

Stops a running container.

docker stop mycontainer

docker rm

Removes a container.

docker rm mycontainer

docker rmi

Removes an image.

docker rmi myimage

docker network

Manages Docker networks.

docker network ls

docker volume

Manages Docker volumes.

docker volume ls

docker-compose

Manages multi-container Docker applications.

docker-compose up

Additional Docker Commands

docker images

Lists all available Docker images.

docker images

docker inspect

Displays detailed information about a container or image.

docker inspect mycontainer

docker logs

Displays logs from a running container.

docker logs mycontainer

docker cp

Copies files/folders between a container and the host.

docker cp mycontainer:/path/to/container/file /host/path

docker restart

Restarts a container, stopping it first if it is currently running.

docker restart mycontainer

docker pause

Pauses a running container.

docker pause mycontainer

docker unpause

Unpauses a paused container.

docker unpause mycontainer

docker kill

Forcibly stops a running container.

docker kill mycontainer

docker commit

Creates a new image from a container's changes.

docker commit mycontainer mynewimage

docker tag

Adds a tag to an existing image.

docker tag myimage myrepository/myimage:mytag

Dockerfile Explained

A Dockerfile is a text file that contains instructions for building a Docker image. These instructions define the steps required to create a reproducible and portable image that can be used to run containerized applications.

Basic Structure

A Dockerfile typically consists of a series of instructions, each of which performs a specific action. These instructions are executed sequentially when building the Docker image.


# Comment describing the purpose of the Dockerfile

# Base image
FROM ubuntu:latest

# Set working directory
WORKDIR /app

# Copy application files
COPY . .

# Install dependencies
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Expose port
EXPOSE 8080

# Command to run the application
CMD ["python3", "app.py"]

Common Instructions

  • FROM: Specifies the base image for the Docker image.
  • WORKDIR: Sets the working directory inside the container.
  • COPY: Copies files and directories from the host into the container.
  • RUN: Executes commands during the image build process.
  • EXPOSE: Exposes ports for networking.
  • CMD: Defines the default command to run when the container starts.

Best Practices

  • Use a minimal base image to reduce image size.
  • Combine multiple RUN commands into a single layer to minimize image layers.
  • Clean up unnecessary files and dependencies to reduce image size.
  • Use .dockerignore file to exclude unnecessary files and directories from the image.
  • Make RUN commands non-interactive and explicit (e.g., RUN apt-get install -y <package>) so builds are reproducible and never wait for user input.
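For the .dockerignore point above, a minimal example might look like this (the entries are illustrative and depend on your project):

```
# .dockerignore — files excluded from the build context sent to the Docker daemon
.git
__pycache__/
*.pyc
.env
node_modules/
Dockerfile
.dockerignore
```

A smaller build context speeds up docker build and keeps secrets such as .env out of image layers.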

Building an Image

To build a Docker image from a Dockerfile, use the docker build command:

docker build -t myimage .

Using the Image

To run a container using the built image, use the docker run command:

docker run myimage

Creating a Tar File for a Static Webpage

To package a static webpage into a tar file, you can use the tar command in your terminal. Follow these steps:

Step 1: Organize Your Static Webpage Files

Make sure all your static webpage files (HTML, CSS, JavaScript, images, etc.) are organized within a single directory.

Step 2: Navigate to the Directory Containing Your Files

Open your terminal and navigate to the directory containing your static webpage files using the cd command.

cd /path/to/your/static/webpage

Step 3: Create the Tar File

Use the tar command to create a tar file containing your static webpage files. The basic syntax is:

tar --exclude=output.tar.gz -czf output.tar.gz .

This command will create a tar file named output.tar.gz containing all the files and directories within your static webpage directory. The . refers to the current directory (where you ran cd in Step 2), and the --exclude flag prevents the archive from including itself.

  • -c: Create a new archive.
  • -z: Compress the archive using gzip.
  • -f: Specify the filename of the archive.
  • --exclude: Skip the named file when building the archive.

Step 4: Verify the Tar File

After creating the tar file, you can verify its contents using the tar command with the -tvf options:

tar -tvf output.tar.gz

This command will list the contents of the tar file, allowing you to ensure that all your static webpage files are included.

Step 5: Distribute or Deploy the Tar File

Once the tar file is created and verified, you can distribute it to others or deploy it to a web server for hosting your static webpage.
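The five steps above can be condensed into a short script; the site-demo directory and file names are placeholders, and the -C flag lets tar archive the directory's contents without cd'ing into it first:

```shell
# Create a sample static site (placeholder content).
mkdir -p site-demo/css
echo '<h1>Hello from Docker notes</h1>' > site-demo/index.html
echo 'h1 { color: navy; }' > site-demo/css/style.css

# Package the directory contents; -C switches into site-demo first,
# so entries are stored without a leading path prefix, and the archive
# itself lives outside the directory being packaged.
tar -czf site-demo.tar.gz -C site-demo .

# Verify the archive contents.
tar -tzf site-demo.tar.gz
```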

Dockerfile Example for Deploying a Static Web App Using Ubuntu, Apache, and a Tar File

This Dockerfile example demonstrates how to create a Docker image for serving a static web application using Apache2 on Ubuntu and including the web application files packaged as a tarball.


FROM ubuntu:latest

LABEL "author"="ShariqSP"
LABEL "project"="your-project-name"

# Install Apache2 in a single layer and clean up the apt cache
RUN apt-get update && apt-get install -y apache2 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /var/www/html

# ADD automatically extracts local tar archives into the destination
ADD your-project-name.tar.gz /var/www/html/

VOLUME /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

Explanation:

  • FROM ubuntu:latest: Specifies Ubuntu as the base image.
  • RUN apt-get update && apt-get install -y apache2 ...: Installs Apache2 and removes the apt package lists to reduce the image size.
  • WORKDIR /var/www/html: Sets the working directory to Apache's document root, where the static files will be extracted.
  • ADD your-project-name.tar.gz /var/www/html/: Copies the tarball into the container and extracts it automatically (unlike COPY, ADD unpacks local tar archives).
  • VOLUME /var/log/apache2: Declares a volume so Apache's logs persist outside the container's writable layer.
  • EXPOSE 80: Documents port 80, the default port used by Apache2 for serving web applications.
  • CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]: Starts Apache2 in the foreground so the container keeps running.

Creating a Container and Running the Docker Image

Follow these steps to create a container and run the Docker image using the provided Dockerfile:

Step 1: Build the Docker Image

Save the provided Dockerfile in a directory along with your project files, then navigate to that directory in your terminal. Run the following command to build the Docker image:

docker build -t my-web-app .

This command will build the Docker image using the provided Dockerfile and tag it with the name my-web-app.

Step 2: Run a Docker Container

Once the Docker image is built successfully, you can run a container using the following command:

docker run -d -p 8080:80 --name my-container my-web-app

This command will start a Docker container named my-container using the my-web-app image. Port 8080 on your host machine will be mapped to port 80 in the container, allowing you to access the web application.

Step 3: Verify the Container

You can verify that the container is running using the docker ps command:

docker ps

Step 4: Access the Web Application

Open a web browser and navigate to http://localhost:8080 to access the web application running inside the Docker container.

Step 5: Stop and Remove the Container

When you're finished testing the web application, you can stop and remove the Docker container using the following commands:

docker stop my-container
docker rm my-container

These commands will stop and remove the my-container container, allowing you to clean up resources.

Deploying a Java Web Application (WAR) with MySQL Database Using Docker

In this guide, we'll deploy a Java web application packaged as a WAR file together with a MySQL database. The Java web application will be packaged into a Docker image; the MySQL database can be hosted on Amazon RDS or run locally as a container. We'll use Docker containers to run the Java application and connect it to the MySQL database.

Step 1: Create a Dockerfile for the Java Web Application

Create a Dockerfile for your Java web application. This Dockerfile should include instructions to build and run the Java application. Here's an example of a Dockerfile for a Java application packaged as a WAR file:


FROM tomcat:latest

COPY target/my-java-app.war /usr/local/tomcat/webapps/

Replace my-java-app.war with the name of your WAR file.

Step 2: Build the Docker Image

Navigate to the directory containing your Dockerfile and WAR file in your terminal. Run the following command to build the Docker image:

docker build -t my-java-app .

This command will build the Docker image for your Java web application.

Step 3: Create an Amazon RDS MySQL Database

Create a MySQL database instance on Amazon RDS. Make note of the endpoint, username, password, and database name, as you'll need these to connect your Java application to the database.

Step 4: Configure the Java Application to Connect to the MySQL Database

Update your Java application configuration to connect to the MySQL database hosted on Amazon RDS. Use the endpoint, username, password, and database name obtained from Step 3.
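What that configuration looks like depends on your framework; as one hypothetical example, a Spring Boot application would carry the RDS details in application.properties (every value below is a placeholder for the details noted in Step 3):

```properties
# src/main/resources/application.properties — illustrative only
spring.datasource.url=jdbc:mysql://your-rds-endpoint.rds.amazonaws.com:3306/your-database-name
spring.datasource.username=your-username
spring.datasource.password=your-password
```

Keeping these values in environment variables rather than inside the WAR file is safer for production deployments.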

Step 5: Create a Docker Compose File

If you prefer to run MySQL locally instead of on Amazon RDS (for example, during development), create a Docker Compose file to define and run your Java application and MySQL database containers. Here's an example of a Docker Compose file:


version: '3'
services:
  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: your-root-password
      MYSQL_DATABASE: your-database-name
      MYSQL_USER: your-username
      MYSQL_PASSWORD: your-password
    ports:
      - "3306:3306"
  java-app:
    image: my-java-app
    ports:
      - "8080:8080"   # Tomcat's default HTTP port
    depends_on:
      - mysql

Replace your-root-password, your-database-name, your-username, and your-password with your MySQL database credentials obtained from Step 3.

Step 6: Run Docker Compose

Navigate to the directory containing your Docker Compose file in your terminal. Run the following command to start the Java application and MySQL database containers:

docker-compose up

This command will start the Docker containers defined in your Docker Compose file.

Step 7: Verify Deployment

Verify that your Java web application is deployed and running correctly by accessing it in a web browser. Test the functionality that interacts with the MySQL database to ensure proper connectivity.

Docker Hub: A Container Image Repository

Docker Hub is a cloud-based repository provided by Docker that allows users to store and share Docker images. It serves as a centralized platform for distributing containerized applications and enables collaboration among developers and teams.

Using Docker Hub Commands

Below are some commonly used Docker Hub commands:

docker login

Authenticate with Docker Hub to enable pushing and pulling images:

docker login

docker logout

Log out from Docker Hub:

docker logout

docker push

Push a local image to Docker Hub:

docker push username/repository:tag

docker pull

Pull an image from Docker Hub to your local machine:

docker pull username/repository:tag

docker search

Search for Docker images on Docker Hub:

docker search search_term

Pushing an Image to Docker Hub

To push a Docker image to Docker Hub, follow these steps:

  1. Log in to Docker Hub using the docker login command.
  2. Tag your local image with your Docker Hub username, repository name, and optional tag:

    docker tag local_image username/repository:tag
  3. Push the tagged image to Docker Hub:

    docker push username/repository:tag

Pulling an Image from Docker Hub

To pull a Docker image from Docker Hub to your local machine, use the docker pull command:

docker pull username/repository:tag

Conclusion

Docker Hub provides a convenient platform for storing, sharing, and distributing Docker images. By leveraging Docker Hub commands, users can seamlessly push and pull images, facilitating collaboration and efficient deployment of containerized applications.