Kubernetes (K8s): Internals and Some Hands-On

Laith
6 min read · Jul 3, 2018

As mentioned in previous articles, Kubernetes is the basic scheduling platform for cloud-native applications; it is, in effect, the cloud's native operating system. To make the system easy to extend, Kubernetes exposes the following open interfaces, each of which can be connected to a different backend that implements its own logic:

1. CRI (Container Runtime Interface):

The container runtime interface provides compute resources.

Each container runtime has its own strengths, and many users have asked for Kubernetes to support more runtimes.

Kubelet communicates with the container runtime (or a CRI shim for the runtime) over Unix sockets using the gRPC framework, where kubelet acts as a client and the CRI shim as the server.

Kubelet-to-container-runtime communication (source: Kubernetes documentation)

The protocol buffers API includes two gRPC services, ImageService and RuntimeService. The ImageService provides RPCs to pull, inspect, and remove images. The RuntimeService contains RPCs to manage the lifecycle of pods and containers, as well as calls to interact with containers (exec/attach/port-forward). A monolithic container runtime that manages both images and containers (e.g., Docker and rkt) can provide both services simultaneously over a single socket. The sockets can be set in the kubelet with the --container-runtime-endpoint and --image-service-endpoint flags.
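To make the kubelet-to-runtime handshake concrete, here is a minimal Go sketch of how a CRI client dials the runtime's Unix socket with gRPC. The socket path is an assumption, and the generated CRI stubs are only referenced in comments because their import path has moved between Kubernetes releases.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
)

func main() {
	// The kubelet talks to the runtime's CRI shim over a local Unix socket.
	// This path is an assumption; containerd, for example, listens on
	// /run/containerd/containerd.sock by default.
	const endpoint = "/var/run/cri.sock"

	// Dial the socket with gRPC, as the kubelet (the CRI client) does.
	// WithDialer overrides the default TCP dial with a Unix-domain one.
	conn, err := grpc.Dial(endpoint,
		grpc.WithInsecure(),
		grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
			return net.DialTimeout("unix", addr, timeout)
		}),
	)
	if err != nil {
		log.Fatalf("dial %s: %v", endpoint, err)
	}
	defer conn.Close()

	// On this connection a real client would create the two generated stubs:
	//   runtime := pb.NewRuntimeServiceClient(conn) // pod/container lifecycle, exec/attach
	//   images  := pb.NewImageServiceClient(conn)   // pull/inspect/remove images
	// (pb is the generated CRI protobuf package, omitted here because its
	// import path differs across Kubernetes releases.)
	fmt.Println("gRPC connection created for CRI endpoint:", conn.Target())
}
```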

Although CRI is still in its early stages, there are already several projects under development that integrate container runtimes using CRI; examples include cri-o, cri-containerd, frakti, and rktlet.

2. CNI (Container Network Interface):

The container network interface provides network resources to container runtime engines.

The CNI project is a Cloud Native Computing Foundation project consisting of a specification and libraries for writing plugins that configure network interfaces in Linux containers (and, lately, Windows ones too), along with a number of supported plugins.

CNI concerns itself only with the network connectivity of containers and with removing allocated resources when a container is deleted. Because of this narrow focus, CNI enjoys wide support, and the specification is simple to implement.

First proposed by CoreOS to define a common interface between network plugins and container execution, CNI has since been created and adopted by multiple companies and projects, including CoreOS, Red Hat OpenShift, Apache Mesos, Cloud Foundry, Kubernetes, Kurma, and rkt.
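To make the "minimal specification" point concrete, here is a small Go sketch that emits the kind of network-configuration JSON a runtime hands to a CNI plugin. The network name, bridge name, and subnet are made-up values, and the structs cover only a minimal subset of the spec's fields.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// These structs mirror the shape of a CNI network configuration file
// (the JSON a runtime feeds to a plugin on stdin): enough here for a
// bridge network with host-local IPAM.
type IPAM struct {
	Type   string `json:"type"`
	Subnet string `json:"subnet"`
}

type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"` // name of the plugin binary to invoke
	Bridge     string `json:"bridge,omitempty"`
	IPAM       IPAM   `json:"ipam"`
}

func main() {
	conf := NetConf{
		CNIVersion: "0.3.1",
		Name:       "mynet",
		Type:       "bridge",
		Bridge:     "cni0",
		IPAM:       IPAM{Type: "host-local", Subnet: "10.22.0.0/16"},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	// A runtime would typically read this from /etc/cni/net.d/ and pass it
	// to the plugin; here we just print it.
	fmt.Println(string(out))
}
```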

Who is using CNI?

Container runtimes: Kubernetes (via the kubelet), Mesos, Cloud Foundry, and rkt all invoke CNI plugins to wire up container networking.

3rd-party plugins: projects such as Calico, Weave Net, and Flannel implement the plugin side of the interface.
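Since the interface boils down to "an executable plus environment variables plus JSON on stdin/stdout", a toy plugin skeleton is short. The sketch below uses only the Go standard library; real plugins typically build on the CNI project's skel package, and the IP address returned here is a placeholder.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
)

// A toy CNI plugin skeleton. The runtime invokes the plugin binary with the
// operation in CNI_COMMAND, container details in the other CNI_* variables,
// and the network configuration on stdin; results go to stdout as JSON.
func main() {
	cmd := os.Getenv("CNI_COMMAND") // ADD, DEL, or VERSION

	stdin, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		os.Exit(1)
	}
	var conf map[string]interface{}
	_ = json.Unmarshal(stdin, &conf)

	switch cmd {
	case "ADD":
		// A real plugin would create a veth pair, move one end into the
		// namespace named by CNI_NETNS, assign an IP, and so on. Here we
		// only echo back a placeholder result.
		result := map[string]interface{}{
			"cniVersion": conf["cniVersion"],
			"ips": []map[string]string{
				{"version": "4", "address": "10.22.0.2/16"}, // placeholder IP
			},
		}
		json.NewEncoder(os.Stdout).Encode(result)
	case "DEL":
		// Release the IP and tear down the interface for CNI_CONTAINERID.
	case "VERSION":
		fmt.Println(`{"cniVersion":"0.3.1","supportedVersions":["0.3.0","0.3.1"]}`)
	}
}
```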

3. CSI (Container Storage Interface):

The container storage interface provides storage resources. One of the key differentiators for Kubernetes has been its powerful volume plugin system, which enables many different types of storage systems to:

  1. Automatically create storage when required.
  2. Make storage available to containers wherever they’re scheduled.
  3. Automatically delete the storage when no longer needed.

Adding support for new storage systems to Kubernetes, however, has been challenging.

Why Kubernetes CSI?

Kubernetes volume plugins are currently “in-tree”, meaning they’re linked, compiled, built, and shipped with the core Kubernetes binaries.

Adding support for a new storage system to Kubernetes (a volume plugin) requires checking code into the core Kubernetes repository, but aligning with the Kubernetes release process is painful for many plugin developers.

The existing Flex Volume plugin attempted to address this pain by exposing an exec-based API for external volume plugins. Although it enables third-party storage vendors to write drivers out-of-tree, deploying the third-party driver files requires access to the root filesystem of the node and master machines.

CSI addresses all of these issues by enabling storage plugins to be developed out-of-tree, containerized, deployed via standard Kubernetes primitives, and consumed through the Kubernetes storage primitives users know and love (PersistentVolumeClaims, PersistentVolumes, StorageClasses).
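As a hands-on illustration of that last point, here is a hedged client-go sketch that creates a PersistentVolumeClaim against a hypothetical "fast-ssd" StorageClass, assumed to be backed by a CSI driver. The kubeconfig path is an assumption, and the Create signature shown matches client-go releases that take a context (older and much newer releases differ slightly).

```go
package main

import (
	"context"
	"fmt"
	"log"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from a kubeconfig file (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// "fast-ssd" is a hypothetical StorageClass; the PVC looks the same to
	// the user no matter which driver provisions the volume underneath.
	storageClass := "fast-ssd"
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-claim"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &storageClass,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{
					v1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}

	created, err := clientset.CoreV1().PersistentVolumeClaims("default").
		Create(context.TODO(), pvc, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created PVC:", created.Name)
}
```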

These three interfaces cover the most basic resource types of a distributed operating system (compute, network, and storage), and Kubernetes is the glue that binds them together.

Kubernetes and three main interfaces

A simple 3-node + 1-master architecture:

Basic master + 3-node architecture

Kubernetes Master Components

Following are the key components of the master server, which are necessary to communicate with the Kubernetes nodes.

Etcd

It stores configuration information that can be used by each of the nodes in the cluster. It is a highly available, distributed key-value store that can be spread across multiple nodes. Since it may contain sensitive information, it is accessible only by the Kubernetes API server.
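For illustration only (in a real cluster only the API server should talk to etcd), here is a sketch using the etcd clientv3 library to put and get a key. The endpoint is an assumed local, unsecured etcd, and the clientv3 import path varies across etcd releases.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // assumed local etcd
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Kubernetes stores its objects under keys like /registry/pods/<ns>/<name>;
	// we use a harmless demo key here.
	if _, err := cli.Put(ctx, "/demo/greeting", "hello"); err != nil {
		log.Fatal(err)
	}
	resp, err := cli.Get(ctx, "/demo/greeting")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```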

API Server

The API server exposes the Kubernetes API and provides all operations on the cluster through it. Because the API server implements a well-defined RESTful interface, different tools and libraries can readily communicate with it. A kubeconfig file, together with the client-side tools, supplies the server address and credentials used for that communication.
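Because everything goes through that REST API, even a plain HTTP client can list resources. The sketch below assumes `kubectl proxy` is running locally on its default port 8001, which handles authentication and keeps the example to the Go standard library.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// `kubectl proxy` forwards this request to the API server with our
	// kubeconfig credentials attached.
	resp, err := http.Get("http://127.0.0.1:8001/api/v1/namespaces/default/pods")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// The response is a JSON PodList object.
	fmt.Println(string(body))
}
```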

Controller Manager

This component is responsible for most of the controllers that regulate the state of the cluster and perform routine tasks. In general, it can be thought of as a daemon that runs in a non-terminating loop, collecting information and sending it to the API server.

It works toward observing the shared state of the cluster and then making changes to drive the current state toward the desired state. The key controllers are the replication controller, endpoints controller, namespace controller, and service account controller.

The controller manager thus runs different kinds of controllers to handle nodes, endpoints, etc.
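A stripped-down sketch of that control-loop pattern, with two plain functions standing in for real API-server calls:

```go
package main

import (
	"fmt"
	"time"
)

// Observe the current state, compare it with the desired state, and act on
// the difference -- the essence of every controller.

func desiredReplicas() int { return 3 }

var current = 1 // pretend observed state

func observedReplicas() int { return current }

func createReplica() { current++; fmt.Println("created replica") }
func deleteReplica() { current--; fmt.Println("deleted replica") }

func main() {
	for i := 0; i < 5; i++ { // a real controller loops forever
		want, have := desiredReplicas(), observedReplicas()
		switch {
		case have < want:
			createReplica()
		case have > want:
			deleteReplica()
		default:
			fmt.Println("in sync:", have, "replicas")
		}
		time.Sleep(100 * time.Millisecond) // real controllers use watches, not polling
	}
}
```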

Scheduler

This is one of the key components of the Kubernetes master: the service responsible for distributing the workload. It tracks resource utilization on the cluster nodes and then places each pod on a node that has the resources available to accept it. In other words, this is the mechanism responsible for allocating pods to available nodes.
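A toy sketch of that filter-then-score idea follows; the node capacities are made up, and real scheduling involves many more predicates and priorities (affinity, taints, and so on).

```go
package main

import "fmt"

type Node struct {
	Name    string
	FreeCPU int // millicores
	FreeMem int // MiB
}

type Pod struct {
	Name   string
	CPU    int
	Memory int
}

// schedule filters out nodes that cannot fit the pod, then scores the
// remainder and picks the best-scoring node.
func schedule(pod Pod, nodes []Node) (string, bool) {
	best, bestScore := "", -1
	for _, n := range nodes {
		if n.FreeCPU < pod.CPU || n.FreeMem < pod.Memory {
			continue // predicate: node can't fit the pod
		}
		// Priority: prefer the node with the most CPU left over.
		score := n.FreeCPU - pod.CPU
		if score > bestScore {
			best, bestScore = n.Name, score
		}
	}
	return best, bestScore >= 0
}

func main() {
	nodes := []Node{
		{"node-1", 500, 1024},
		{"node-2", 2000, 4096},
		{"node-3", 100, 512},
	}
	pod := Pod{Name: "web", CPU: 250, Memory: 256}
	if name, ok := schedule(pod, nodes); ok {
		fmt.Println("pod", pod.Name, "->", name)
	} else {
		fmt.Println("pod", pod.Name, "is unschedulable")
	}
}
```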

Structure of the Kubernetes master and node

Kubernetes Node Components

Following are the key components of the node servers, which are necessary to communicate with the Kubernetes master.

Docker

The first requirement of each node is Docker, which helps run the encapsulated application containers in a relatively isolated but lightweight operating environment.

Kubelet Service

This is a small service on each node responsible for relaying information to and from the control plane. It receives its configuration details, most importantly the pods it should run, through the API server (which in turn persists them in etcd).

It communicates with the master components to receive commands and work. The kubelet process then assumes responsibility for maintaining the desired state of work on its node: it starts and stops containers, mounts volumes and secrets, runs container health checks, and reports node and pod status back to the master.

Kubernetes Proxy Service

This is a proxy service that runs on each node and helps make services reachable from outside the cluster. It forwards requests to the correct containers and is capable of performing primitive load balancing.

It makes sure that the networking environment is predictable and accessible while still being isolated. To do this, it manages the network rules and port forwarding on the node, so that a connection to a service address ends up at one of the service’s backing pods.
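A minimal sketch in the spirit of kube-proxy’s (older) userspace mode: a TCP listener on a service port that forwards each connection to a backend pod picked round-robin. The backend addresses and port are made up; modern kube-proxy instead programs iptables or IPVS rules.

```go
package main

import (
	"io"
	"log"
	"net"
	"sync/atomic"
)

// Made-up pod IPs standing in for a service's endpoints.
var backends = []string{"10.1.0.4:8080", "10.1.0.7:8080"}
var next uint64

func handle(client net.Conn) {
	defer client.Close()
	// Pick the next backend round-robin (the "primitive load balancing").
	backend := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
	server, err := net.Dial("tcp", backend)
	if err != nil {
		log.Println("backend dial failed:", err)
		return
	}
	defer server.Close()
	go io.Copy(server, client) // client -> pod
	io.Copy(client, server)    // pod -> client
}

func main() {
	ln, err := net.Listen("tcp", ":30080") // a NodePort-style service port
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn)
	}
}
```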
