Musab Abbasi
Dec 13, 2022

Introduction to Kubernetes for Beginners

Kubernetes: (K8s)

Kubernetes is a container management and orchestration tool. It automates the deployment, scaling, and management of containerized applications across a group of servers.

The main responsibilities of Kubernetes are as follows:

1. Deployment

2. Scheduling

3. Scaling

4. Load Balancing

5. Batch Execution

6. Rollbacks

7. Monitoring

Organizations often use multiple containers to ensure availability, balance load, and scale up and down based on user demand.

What are Containerized Applications?

Containerized applications can be described as follows:

1. The application is packaged along with its dependencies and libraries into a unit called a container.

2. The container can then be shipped using a container platform like Docker and deployed on different systems.

What is Docker?

Docker is a tool designed to make it easier to deploy and run applications using containers. We will be learning about Docker in depth after Kubernetes.

— — — — — — — — — — — -xxx — — — — — — — — — — — -

Pods and Nodes

In Kubernetes we do not interact with containers directly.

- Containers are located in Pods.

- Pods are located on Nodes.

- Each Pod can have multiple containers.

- Each Node can have multiple Pods.

- When specifying a Pod, you can optionally specify how much CPU and memory (RAM) each container needs.

- This helps the scheduler decide which node to place the Pod on.

- A Pod has a main container, and may or may not have an init (initialization) container or a sidecar container.

- A sidecar container supports the main container, while an init container runs before the main container starts.
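As a sketch, a Pod with a main container, an init container, and a sidecar might look like the following manifest. All names and images here are illustrative, not from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical Pod name
spec:
  initContainers:
  - name: init-setup       # runs to completion before the main containers start
    image: busybox
    command: ["sh", "-c", "echo preparing"]
  containers:
  - name: main-app         # the main container
    image: nginx
  - name: log-sidecar      # sidecar supporting the main container
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
```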

Why do you need Kubernetes?

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides you with the following features:


1. Automatic Bin Packing

You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.

Kubernetes takes care of packing containers into bins (servers) in the most efficient way.

- Automatically places containers based on their resource requirements without sacrificing availability.

- Saves resources.
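The CPU and memory hints the scheduler uses for bin packing are declared per container. A minimal sketch (the name, image, and numbers are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:            # what the scheduler uses to pick a node
        cpu: "250m"        # a quarter of a CPU core
        memory: "64Mi"
      limits:              # hard cap enforced at runtime
        cpu: "500m"
        memory: "128Mi"
```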

2. Service Discovery and Load Balancing


Kubernetes gives each Pod its own IP address and can load-balance traffic across a set of Pods. A Pod contains:

I. An application container

II. Storage resources

III. A unique network IP
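Service discovery and load balancing are typically done through a Service object, which finds Pods by label and spreads traffic across them. A minimal sketch (the labels and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # illustrative name
spec:
  selector:
    app: web               # targets all Pods labeled app=web
  ports:
  - port: 80               # port the Service exposes
    targetPort: 8080       # port the container listens on
```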

3. Storage orchestration

Containers running inside a Pod may need to store data.

- Pods can have storage volumes.

- Usually a Pod has a single volume.

Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more:

- Local

- Cloud (AWS)

- Network (NFS)
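As a simple sketch of mounting a volume into a Pod, the manifest below uses `emptyDir`, a node-local scratch volume; cloud or NFS-backed storage is usually requested through a PersistentVolumeClaim instead. Names and paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/data   # where the volume appears in the container
  volumes:
  - name: data
    emptyDir: {}           # node-local scratch space; lives as long as the Pod
```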

4. Self Healing:

Kubernetes performs self-healing by continuously checking cluster state. It restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.

- If a container fails -> restart the container.

- If a node dies -> replace and reschedule its containers on another node.

- If a container does not respond -> kill the container and take care of availability.
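The user-defined health check mentioned above is usually expressed as liveness and readiness probes on a container. A hedged sketch, assuming the app serves health endpoints at `/healthz` and `/ready` (those paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo         # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    livenessProbe:         # failing this causes the container to be restarted
      httpGet:
        path: /healthz     # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:        # failing this removes the Pod from Service endpoints
      httpGet:
        path: /ready       # assumed readiness endpoint
        port: 80
```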

5. Automated roll-outs and rollbacks

- Roll-out: Deploy changes to the application or its configuration.

- Roll-back: Revert the changes and restore the previous state.

Kubernetes ensures there is no downtime.

- Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it does not kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the changes.

You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
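This controlled-rate behavior is typically configured on a Deployment with a rolling update strategy. A sketch (the names, image tag, and limits are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy         # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # never take down more than one instance at a time
      maxSurge: 1          # at most one extra Pod during the roll-out
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25  # changing this tag triggers a roll-out
```

A roll-back can then be performed with `kubectl rollout undo deployment/web-deploy`.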

6. Secrets and Configuration Management:

Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.

[ Secrets and Configurations are stored in ETCD ]

[ ETCD: etcd is an open source distributed key-value store used to hold and manage the critical information that distributed systems need to keep running. Most notably, it manages the configuration data, state data, and metadata for Kubernetes, the popular container orchestration platform. ]

[ Each Secret has a max size limit of 1 MB ]
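A minimal Secret sketch; the values in `data` are base64-encoded, and the name and keys here are illustrative assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials     # illustrative name
type: Opaque
data:
  username: YWRtaW4=       # base64 for "admin"
  password: cGFzc3dvcmQ=   # base64 for "password"
```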

7. Batch Execution

What are Batch Jobs:

A batch job is a scheduled background program that usually runs on a regular basis without any user intervention. Batch jobs are used to process high volumes of data that would be impractical to handle interactively, and to run programs that require little or no user interaction.

- Batch jobs require an executable/process to be run to completion.

- In Kubernetes, run-to-completion Jobs are primarily used for batch processing.

- Each Job creates one or more Pods.

- If any container or Pod fails during execution, the Job controller reschedules it, possibly on another node.

- A Job can run multiple Pods in parallel and can scale up if required.

- When the Job completes, its Pods move from the Running state to the Completed state.
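The points above can be sketched as a run-to-completion Job manifest; the name, image, command, and counts are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo         # illustrative name
spec:
  completions: 5           # run 5 Pods to completion in total
  parallelism: 2           # run up to 2 Pods at the same time
  backoffLimit: 4          # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing batch item"]
```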

8. Horizontal Scaling:

In Kubernetes, we can scale containers up and down.

Scaling up: create more replicas of a container when required.

Scaling down: kill containers when no longer required.

Scaling up and down can be done:

- Using commands

- From the dashboard (Kubernetes UI)

- Automatically based on CPU usage


Three Kubernetes objects are involved in scaling:

1. Replication Controller

2. Manifest file

3. Horizontal Pod Auto-scaler

Replication Controller (rc or rcs)

Enables the creation of multiple Pods, then makes sure that this number of Pods always exists.

In case a Pod crashes, the replication controller replaces it.

The replication controller learns how many Pods to run and keep available at any time from the manifest file.

Manifest File

A manifest file describes the desired state of your Kubernetes objects, including the number of Pods to run and keep available at any time; the replication controller reads this information from it.
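A replication controller manifest sketch, showing the replica count the controller maintains (name, labels, and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc             # illustrative name
spec:
  replicas: 3              # the controller keeps exactly 3 Pods alive
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx
```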

Horizontal Pod Autoscaler

Automatically scales the number of Pods in a replication controller based on observed CPU utilization or on custom metrics.

- The Horizontal Pod Autoscaler is implemented as a control loop inside the controller manager.

- The loop runs every 15 seconds by default.

- The controller manager monitors CPU utilization and, based on it, signals the replication controller to adjust the number of Pods.
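An autoscaler can be declared as a manifest of its own. A sketch using the `autoscaling/v2` API, assuming a Deployment named `web-deploy` already exists (the target name and thresholds are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deploy       # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale up above 70% average CPU
```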


When we deploy Kubernetes, we get a cluster. A cluster is a set of machines, called nodes.

A cluster has at least one worker node and one master node.

[ Worker nodes were formerly called minions ]

- A Kubernetes deployment can have multiple clusters.

- A cluster can have up to 5,000 nodes.

- No more than 110 Pods per node.

- No more than 150,000 total Pods per cluster.

- No more than 300,000 total containers per cluster.

Master Node

- A master node is responsible for managing the cluster.

- It monitors the nodes and Pods in the cluster.

- When a node fails, it moves the workload of the failed node to another worker node.


A master node runs the following components:

1. API Server:

Manages all communication (JSON over HTTP API)

2. Scheduler:

Schedules pods on node

3. Controller Manager:

Runs controllers

4. ETCD:

Open source, distributed key-value database from CoreOS

1. API Server

The entry point for all communication. It can be called via the Kubernetes frontend (dashboard) or the CLI; we most often use kubectl to call the API.

2. Scheduler

- Schedules Pods across multiple nodes.

- Checks which node best fits a Pod's requirements (both hardware and software).

- Selects nodes for newly created Pods.

- Gets its information from ETCD.

3. Controller Manager

- The Controller Manager runs different controllers.

A. Main Controllers:

a) Kube-controller-manager

Responsible for acting when:

- nodes become unavailable, to ensure Pod counts are as expected

- endpoints need to be created

- service accounts need to be created

- API access tokens need to be issued

b) Cloud-controller-manager

- Responsible for interacting with the underlying infrastructure of a cloud provider when a node becomes unavailable.

- Manages storage volumes when provided by a cloud service.

- Manages load balancing.

- Manages routing.

c) Kube-controller-manager (sub-controllers)

The kube-controller-manager contains further sub-controllers that are responsible for the overall health of the cluster:

- Ensures nodes are running all the time.

- Ensures the correct number of Pods is running per the specification file.

I. Node Controller: Responsible for noticing and responding when a node goes down.

II. Replication Controller: Responsible for maintaining the correct number of Pods for every replication controller object in the system.

III. Endpoint Controller: Populates the Endpoints object.

IV. Service account and token controller: Creates default accounts and API access tokens for new namespaces.

d) Cloud-controller-manager

i. Node-controller: Checks with the cloud provider to determine whether a node has been deleted in the cloud after it stops responding.

ii. Route-controller: For setting up routes in the underlying cloud infrastructure.

iii. Service-controller: For creating, updating, and deleting cloud provider load balancers.

iv. Volume-controller: For creating, attaching, and mounting volumes and interacting with the cloud provider to orchestrate volumes.



4. ETCD

etcd is an open source distributed key-value store used to hold and manage the critical information that distributed systems need to keep running. Most notably, it manages the configuration data, state data, and metadata for Kubernetes.

Worker Node

- A worker node can be a physical or virtual machine where containers are deployed.

- Every node in a Kubernetes cluster must run a container runtime such as Docker.


Each worker node runs the following components:

1. Kubelet

- The kubelet is an agent running on each node that communicates with the components on the master node.

- It makes sure that containers are running in a Pod.

- The kubelet takes a set of PodSpecs (the specification of each Pod) provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.

- If a Pod has an issue, the kubelet tries to restart it on the same node; rescheduling it onto a different node is handled by the control plane.

2. Kube Proxy

- A network agent that runs on every node, responsible for maintaining network configuration and rules.

- Exposes services to the outside world.

- A core networking component in Kubernetes.

3. Container runtime

- Software responsible for running containers.

- Kubernetes supports several container runtimes, such as:

- Docker

- containerd

- rktlet

- any implementation of the Kubernetes CRI (Container Runtime Interface)



Docker

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers.

- Docker is a tool for running applications in an isolated environment.

- Similar to a virtual machine.

- The app runs in the same environment everywhere.

- The standard for software deployment.


Benefits of containers:

- Run a container in seconds instead of minutes.

- Fewer resources: takes less disk space.

- Uses less memory.

- Does not need a full OS.

- Faster deployment and testing.

Images in Docker:

- An image is a template for creating an environment of your choice.

- It is a snapshot that has everything needed: the OS, software, and application code.

- Snapshot = version.

- You can revert to a previous snapshot/image if an error occurs.

Containers in Docker:

- A running instance of an image is called a container.

Container vs Virtual Machines


Container:

- Containers are an abstraction at the app layer that packages code and dependencies together.

- Multiple containers can run on the same machine and share the OS kernel with other containers, each running as an isolated process in user space.

Virtual Machine:

- VMs are an abstraction of physical hardware, turning one server into many servers.

- A hypervisor allows multiple VMs to run on a single machine.

- Each VM includes a full copy of an OS, the application, and the necessary binaries and libraries.

Musab Abbasi

Computer Science Graduate with MERN stack website development expertise.