Kubernetes Notes By ShariqSP

Kubernetes Overview

Kubernetes: Revolutionizing Container Orchestration

Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates the deployment, scaling, and management of containerized applications, providing a robust infrastructure for modern cloud-native architectures.

Why We Need Kubernetes

In the era of microservices and distributed systems, managing containers manually can be a daunting task. Kubernetes addresses several challenges faced in containerized environments:

  • Scaling: Kubernetes automates scaling by adding or removing containers based on resource usage, ensuring optimal performance.
  • High Availability: It ensures that applications remain available by automatically restarting containers in case of failures.
  • Resource Management: Kubernetes efficiently allocates resources such as CPU and memory among containers, maximizing utilization.
  • Rolling Updates: It enables seamless deployment of new versions of applications without downtime, using strategies like rolling updates and blue-green deployments.
  • Service Discovery and Load Balancing: Kubernetes provides built-in mechanisms for service discovery and load balancing, simplifying communication between microservices.
  • Portability: Kubernetes abstracts underlying infrastructure, allowing applications to run consistently across different environments, from on-premises data centers to public clouds.

How Kubernetes is Used in Industry

Kubernetes has gained widespread adoption across various industries, ranging from startups to large enterprises. Some common use cases include:

  • Continuous Integration/Continuous Deployment (CI/CD): Kubernetes integrates seamlessly with CI/CD pipelines, enabling automated testing and deployment of applications.
  • Microservices Architecture: It facilitates the transition to microservices-based architectures, allowing organizations to decompose monolithic applications into smaller, manageable services.
  • Hybrid and Multi-cloud Deployments: Kubernetes enables organizations to deploy applications consistently across hybrid and multi-cloud environments, providing flexibility and avoiding vendor lock-in.
  • Big Data and Machine Learning: Kubernetes is used to orchestrate complex workflows in big data and machine learning applications, managing containers running Apache Spark, TensorFlow, and other distributed systems.
  • Internet of Things (IoT): It helps manage and scale IoT applications by deploying containerized workloads to edge devices.

Key Features of Kubernetes

Kubernetes offers a rich set of features, including but not limited to:

  • Pods: The basic unit of deployment in Kubernetes, consisting of one or more containers.
  • Deployments: Declarative updates to applications, ensuring consistent deployment and rollback mechanisms.
  • Services: An abstraction that defines a set of pods and provides a stable endpoint for accessing them.
  • Ingress: Manages external access to services within a cluster, offering features like SSL termination and virtual hosting.
  • Secrets and ConfigMaps: Securely manage sensitive information and configuration settings.
  • Horizontal Pod Autoscaler (HPA): Automatically adjusts the number of replica pods based on CPU or custom metrics.
  • Custom Resource Definitions (CRDs): Extend the Kubernetes API to support custom resources and controllers.
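To make the HPA feature concrete, a minimal HorizontalPodAutoscaler manifest might look like the following sketch. The target Deployment name my-app and the thresholds are placeholders, not values from these notes:

```yaml
# Hedged sketch: scales a hypothetical Deployment named "my-app"
# between 2 and 10 replicas, targeting 80% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

Applying this with kubectl apply -f requires the metrics server to be running in the cluster, since the HPA reads CPU usage from it.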

In conclusion, Kubernetes has become the de facto standard for container orchestration, empowering organizations to build, deploy, and manage applications at scale with unparalleled efficiency and agility.

Kubernetes Architecture

Kubernetes Architecture and Components

Understanding Kubernetes Components: Nodes, Pods, and Control Plane

In Kubernetes architecture, three primary components work together to manage containerized applications: Nodes, Pods, and the Control Plane.

1. Nodes:

Nodes are individual machines in a Kubernetes cluster where applications are deployed and run.

2. Pods:

Pods are the smallest deployable units in Kubernetes, representing one or more containers that share networking and storage resources.

3. Control Plane:

The Control Plane is the centralized management entity of a Kubernetes cluster, responsible for maintaining the desired state of the cluster.

These components work together to ensure the reliable deployment, scaling, and management of containerized applications within Kubernetes clusters.

The Kubernetes architecture comprises several components that work together to provide a robust and scalable platform for deploying, scaling, and managing containerized applications.

1. Control Plane Components:

API Server: The central management entity that exposes the Kubernetes API, allowing users and other components to communicate with the cluster. It validates and processes requests, enforces policies, and maintains the desired state of the cluster.

Scheduler: Responsible for assigning workloads (containers/pods) to nodes based on resource requirements, constraints, and availability. It ensures efficient resource utilization and workload distribution.

Controller Manager: Runs the controllers that regulate the state of the cluster, including the Node Controller (manages node lifecycle), the ReplicaSet Controller (ensures the specified number of pod replicas are running), and the Endpoints Controller (populates the Endpoints objects that link Services to pods).

etcd: A distributed key-value store that serves as Kubernetes' backing store for all cluster data. It stores configuration data, state information, and metadata about the cluster's objects, ensuring high availability and consistency.

2. Node Components:

Kubelet: An agent running on each node, responsible for maintaining communication between the node and the control plane. It manages pods, ensuring they are running and healthy by communicating with the API server and executing container operations.

Container Runtime: The software responsible for running containers, such as Docker, containerd, or CRI-O. Kubernetes supports various runtimes, offering flexibility to choose based on specific requirements.

Kube Proxy: Maintains network rules on nodes, enabling communication between pods across the cluster and providing a simple load balancing mechanism for services.

3. Add-Ons:

DNS: A Kubernetes cluster typically includes a cluster DNS server (CoreDNS by default) to facilitate service discovery. It allows services to be reached by DNS name rather than IP address, enhancing manageability and scalability.

Dashboard: An optional web-based UI providing visibility and management capabilities for Kubernetes clusters. It offers insights into resource usage, workload status, and cluster health.

Ingress Controller: Manages external access to services within the cluster, acting as a traffic manager for incoming requests. It enables features like SSL termination, load balancing, and routing based on HTTP/HTTPS.

These components work together to provide a scalable, resilient, and automated platform for deploying and managing containerized applications in production environments.

Understanding Kubernetes Commands

Kubernetes offers a rich set of command-line tools (kubectl being the primary one) to interact with clusters, manage resources, and monitor applications. Let's explore some essential commands:

1. kubectl create:

Creates a resource from a file or stdin. Example: kubectl create -f pod.yaml to create a pod defined in a YAML file.

2. kubectl apply:

Applies changes to resources defined in a YAML file. Example: kubectl apply -f deployment.yaml to update a deployment.

3. kubectl get:

Retrieves information about resources. Example: kubectl get pods to list all pods in the cluster.

4. kubectl describe:

Provides detailed information about a specific resource. Example: kubectl describe pod my-pod to get detailed information about a pod.

5. kubectl delete:

Deletes resources. Example: kubectl delete pod my-pod to delete a pod.

6. kubectl exec:

Executes a command in a running container. Example: kubectl exec -it my-pod -- /bin/bash to start an interactive shell in a pod.

7. kubectl logs:

Displays logs from a container in a pod. Example: kubectl logs my-pod to view logs from a pod.

These are just a few of the many commands available in kubectl. Mastering them is essential for efficiently managing Kubernetes clusters and applications.
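As a reference point, the pod.yaml file mentioned in the kubectl create example above could be a minimal manifest like this sketch (the pod name, container name, and image are placeholders):

```yaml
# Hedged sketch of a minimal single-container pod definition.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx:1.25   # placeholder image; substitute your own
    ports:
    - containerPort: 80
```

Running kubectl create -f pod.yaml against this file creates the pod, and kubectl get pods and kubectl describe pod my-pod then work as described above.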

Exploring Minikube

Minikube is a tool that enables developers to run Kubernetes locally on their computers. It is designed to simplify the process of setting up a Kubernetes cluster for development and testing purposes. Let's delve into what Minikube offers:

1. Local Kubernetes Cluster:

Minikube allows developers to run a single-node Kubernetes cluster locally on their machines. This enables them to develop, test, and experiment with Kubernetes concepts and applications without needing access to a cloud or on-premises Kubernetes deployment.

2. Easy Setup:

Setting up Minikube is straightforward and can be done with just a few commands. Developers can quickly install Minikube on their system and start a local Kubernetes cluster with minimal configuration.

3. Supported Features:

Minikube supports many features of Kubernetes, including pods, services, deployments, and volumes. Developers can use Minikube to deploy and manage containerized applications using familiar Kubernetes APIs and commands.

4. Integration with Docker:

Minikube seamlessly integrates with Docker, allowing developers to build container images locally and deploy them to the Minikube cluster. This tight integration streamlines the development and testing workflow for containerized applications.

5. Add-Ons and Extensions:

Minikube provides various add-ons and extensions that extend its capabilities. These include features such as a Kubernetes dashboard, metrics server, and storage provisioners, allowing developers to customize their Minikube setup based on their requirements.

6. Cross-Platform Support:

Minikube supports multiple operating systems, including Linux, macOS, and Windows, making it accessible to a wide range of developers. It provides consistent behavior and user experience across different platforms.

Overall, Minikube is an invaluable tool for developers looking to gain hands-on experience with Kubernetes or develop and test applications locally before deploying them to production Kubernetes environments.

Deploying Containers in Kubernetes Using Minikube

Deploying containers in a Kubernetes cluster using Minikube is a straightforward process. Here's a step-by-step procedure:

1. Install Minikube:

If you haven't already installed Minikube, you can do so by following the installation instructions provided on the official Minikube website.

2. Start Minikube:

Start Minikube by running the following command in your terminal:

minikube start

3. Configure Docker Environment:

Point your shell's Docker client at the Minikube Docker daemon, so that images you build are available inside the cluster rather than only on your host:

eval $(minikube docker-env)

4. Build Docker Image:

Build the Docker image for the application you want to deploy. Navigate to the directory containing your Dockerfile and run:

docker build -t my-container-image .

5. Deploy Container:

Deploy your container to the Minikube cluster by creating a Deployment:

kubectl create deployment my-container --image=my-container-image --port=80

6. Expose Service:

Expose the deployed container as a Kubernetes service to make it accessible within the cluster:

kubectl expose deployment my-container --type=NodePort --port=80

7. Access Application:

Access your application by retrieving the Minikube IP and the service's NodePort:

minikube ip
kubectl get service my-container

Combine the IP address with the NodePort shown in the PORT(S) column to open your application in a web browser. Alternatively, minikube service my-container --url prints the full URL directly.

By following these steps, you can easily deploy your containerized application to a Minikube Kubernetes cluster and start testing and using it locally.
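The imperative steps above can also be expressed declaratively. A rough equivalent, assuming the image name my-container-image from the build step, is a Deployment plus a NodePort Service applied with kubectl apply -f:

```yaml
# Hedged sketch: declarative equivalent of the kubectl create deployment
# and kubectl expose steps above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-container
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-container
  template:
    metadata:
      labels:
        app: my-container
    spec:
      containers:
      - name: my-container
        image: my-container-image
        imagePullPolicy: Never   # use the image built into Minikube's Docker daemon
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-container
spec:
  type: NodePort
  selector:
    app: my-container
  ports:
  - port: 80
    targetPort: 80
```

Keeping the manifest in version control and re-applying it is generally preferred over imperative commands for anything beyond quick experiments.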

Load Balancing in Kubernetes

Load balancing in Kubernetes is crucial for distributing incoming traffic across multiple instances of an application to ensure optimal resource utilization and high availability. Let's explore how load balancing works in Kubernetes using an example application:

Example Application:

Consider the containerized application we deployed in the previous steps, which serves a web application on port 80. We have multiple instances of this application running in our Kubernetes cluster to handle incoming traffic efficiently.

1. Service Definition:

To enable load balancing for our application, we define a Kubernetes Service resource. A Service selects a set of pods by label and acts as a stable endpoint that abstracts their dynamic IP addresses. Note that the command below generates the selector app=my-service, so the application pods must carry that label for traffic to reach them.

We create the service using the following command:

kubectl create service loadbalancer my-service --tcp=80:80

2. Load Balancer:

Kubernetes provisions a load balancer (such as an external cloud provider's load balancer) to distribute incoming traffic across the application pods.

3. Traffic Distribution:

When a client sends a request to access our application, the load balancer receives the request and forwards it to one of the healthy pods running the application. Kubernetes automatically monitors the health of the pods and removes any unhealthy pods from the pool of endpoints.

4. Scaling:

As the demand for our application increases, we can scale the number of application instances (pods) horizontally by adjusting the replica count in the Deployment or ReplicaSet. Kubernetes automatically updates the load balancer configuration to include the newly created pods in the pool of endpoints.

5. High Availability:

By distributing traffic across multiple instances of our application, Kubernetes provides high availability. If one of the application pods fails or becomes unhealthy, the load balancer automatically redirects traffic to other healthy pods, ensuring uninterrupted service.

Load balancing is a fundamental feature of Kubernetes that enables efficient traffic distribution, scalability, and high availability for containerized applications deployed in the cluster.
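The kubectl create service command above is roughly equivalent to applying a manifest like this sketch (the selector label is an assumption and must match the labels on your application pods):

```yaml
# Hedged sketch of a LoadBalancer-type Service for the example app.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-service   # assumed pod label; adjust to match your Deployment
  ports:
  - port: 80
    targetPort: 80
```

On a cloud provider this provisions an external load balancer; on Minikube, running minikube tunnel is one way to get an external IP assigned to LoadBalancer services.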

Understanding Ingress in Kubernetes

In Kubernetes, Ingress is an API object used to manage external access to services within a cluster. It provides HTTP and HTTPS routing to services based on hostnames and paths, allowing you to expose multiple services under a single external IP address. Let's explore how Ingress works in Kubernetes using the same example application:

Example Application:

Consider the containerized application we deployed earlier, which serves a web application on port 80. We want to expose this application to external users via a single endpoint using Ingress.

1. Ingress Resource:

To define the routing rules for our application, we create an Ingress resource. This resource specifies how incoming requests should be routed to different services based on hostnames and paths.

We define the Ingress resource using the following command:

kubectl create ingress my-ingress --rule="example.com/=my-service:80"

2. Ingress Controller:

Kubernetes itself does not implement the routing logic defined in an Ingress resource. Instead, an Ingress Controller (such as the Nginx Ingress Controller or Traefik) watches Ingress rules and configures its proxy, and any external load balancer, accordingly.

3. External Access:

Once the Ingress resource is created and the Ingress Controller is deployed, external users can access our application using the specified hostname (e.g., example.com) and path. The Ingress Controller forwards incoming requests to the appropriate service based on the defined rules.

4. SSL Termination:

Ingress also supports SSL/TLS termination: encrypted client connections are terminated at the Ingress Controller, which then forwards the traffic to backend services over plain HTTP inside the cluster.

By using Ingress, you can efficiently manage external access to services within your Kubernetes cluster, enabling flexible and centralized routing for multiple applications.
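For comparison, the same routing rule expressed as a manifest might look like this sketch. The host and service names follow the example above; the ingressClassName value is an assumption that depends on which controller you installed:

```yaml
# Hedged sketch: routes requests for example.com/ to my-service:80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx   # assumption: Nginx Ingress Controller is installed
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```

On Minikube, the bundled controller can be enabled with minikube addons enable ingress before applying a manifest like this.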

Secret Management in Kubernetes

Secrets are sensitive pieces of data, such as passwords, API keys, and certificates, that should be kept secure and encrypted. Kubernetes provides a built-in mechanism for managing secrets within a cluster, ensuring they are stored securely and made available to applications when needed. Let's explore how secret management works in Kubernetes:

1. Creating Secrets:

You can create secrets in Kubernetes using the kubectl create secret command. Secrets can be created manually or generated from files or literal values. For example, to create a secret from a file:

kubectl create secret generic my-secret --from-file=credentials.txt

2. Storing Data:

Kubernetes stores secrets as Base64-encoded data in etcd, the cluster's key-value store. It is important to note that Base64 is an encoding, not encryption: anyone who can read the stored value can trivially decode it. For stronger protection, enable encryption at rest for Secret data in etcd and restrict access with RBAC.
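To see why Base64 is not a security measure, note that it is trivially reversible. A quick sketch using standard shell tools (the secret value here is a made-up placeholder):

```shell
# Base64 is an encoding, not encryption: anyone with access to the
# stored value can decode it back to the original plaintext.
secret="s3cr3t-password"
encoded=$(printf '%s' "$secret" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "encoded: $encoded"
echo "decoded: $decoded"   # identical to the original value
```

The same round trip applies to values shown by kubectl get secret my-secret -o yaml, which is why viewing a secret object should be treated as equivalent to viewing its plaintext.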

3. Using Secrets in Pods:

Applications running in pods can access secrets mounted as files or environment variables. To mount a secret as a file in a pod:


    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
      - name: my-container
        image: my-image
        volumeMounts:
        - name: secret-volume
          mountPath: "/etc/secret"
      volumes:
      - name: secret-volume
        secret:
          secretName: my-secret
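Alternatively, individual secret keys can be exposed as environment variables. A sketch, assuming my-secret contains a key named credentials.txt as created with --from-file above:

```yaml
# Hedged sketch: injects one key of a Secret as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-env
spec:
  containers:
  - name: my-container
    image: my-image
    env:
    - name: CREDENTIALS          # variable name is an arbitrary choice
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: credentials.txt   # key created by --from-file above
```

Mounting as a file has the advantage that updates to the Secret are eventually reflected in the mounted volume, whereas environment variables are fixed at container start.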

4. Updating and Deleting Secrets:

You can update secrets using the kubectl apply command or by editing the secret object directly with kubectl edit secret my-secret. To delete a secret, use kubectl delete secret my-secret.

5. Secrets Management Best Practices:

  • Regularly rotate secrets to minimize the risk of unauthorized access.
  • Limit access to secrets using Kubernetes RBAC (Role-Based Access Control).
  • Consider using external secret management solutions for additional security and flexibility.

By following best practices and leveraging Kubernetes' built-in secret management capabilities, you can securely manage sensitive data within your cluster and ensure the integrity and confidentiality of your applications.