As mentioned in previous articles, Kubernetes is the basic scheduling platform for cloud-native applications; it is, in effect, the cloud's native operating system. To make the system easy to extend, Kubernetes exposes the following open interfaces, which can be connected to different backends that implement their own business logic:
1. CRI (Container Runtime Interface): the container runtime interface, which provides compute resources.
Each container runtime has its own strengths, and many users have asked for Kubernetes to support more runtimes.
Kubelet communicates with the container runtime (or a CRI shim for the runtime) over Unix sockets using the gRPC framework, where kubelet acts as a client and the CRI shim as the server.
The protocol buffers API includes two gRPC services: ImageService and RuntimeService. The ImageService provides RPCs to pull an image from a repository, inspect an image, and remove an image. The RuntimeService contains RPCs to manage the lifecycle of pods and containers, as well as calls to interact with containers (exec/attach/port-forward). A monolithic container runtime that manages both images and containers (e.g., Docker and rkt) can provide both services simultaneously over a single socket. The sockets can be set in kubelet with the --container-runtime-endpoint and --image-service-endpoint flags.
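The split between the two services can be sketched as plain classes. This is a toy in-memory stand-in (real CRI speaks gRPC over a Unix socket, with kubelet as the client), and the method names only mirror the CRI RPC names:

```python
# Toy sketch of the two CRI services; real CRI uses gRPC over a Unix
# socket, with kubelet as the client. Method names mirror the CRI RPCs;
# the in-memory logic is illustrative only.
class ImageService:
    def __init__(self):
        self.images = {}

    def pull_image(self, ref):           # PullImage RPC
        self.images[ref] = {"ref": ref}
        return ref

    def image_status(self, ref):         # ImageStatus RPC (inspect)
        return self.images.get(ref)

    def remove_image(self, ref):         # RemoveImage RPC
        self.images.pop(ref, None)

class RuntimeService:
    def __init__(self):
        self.containers = {}

    def create_container(self, pod, name, image_ref):  # CreateContainer RPC
        cid = f"{pod}/{name}"
        self.containers[cid] = {"image": image_ref, "state": "CREATED"}
        return cid

    def start_container(self, cid):      # StartContainer RPC
        self.containers[cid]["state"] = "RUNNING"

images = ImageService()
runtime = RuntimeService()
ref = images.pull_image("nginx:1.25")
cid = runtime.create_container("default/web", "nginx", ref)
runtime.start_container(cid)
print(runtime.containers[cid]["state"])  # RUNNING
```

A runtime such as Docker, which handles both images and containers, would implement both services behind the same socket, as described above.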
Although CRI is still in its early stages, there are already several projects under development to integrate container runtimes using CRI. Below are a few examples:
- cri-o: OCI conformant runtimes.
- rktlet: the rkt container runtime.
- frakti: hypervisor-based container runtimes.
- docker CRI shim.
2. CNI (Container Network Interface): the container network interface, which provides network resources to container runtime engines.
The CNI project, a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers (and, more recently, Windows containers too), along with a number of supported plugins.
CNI concerns itself only with the network connectivity of containers and with removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.
It was created by multiple companies and projects, including CoreOS, Red Hat OpenShift, Apache Mesos, Cloud Foundry, Kubernetes, Kurma, and rkt.
First proposed by CoreOS to define a common interface between network plugins and container execution engines, CNI is deliberately a minimal specification.
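The plugin contract itself is small: a CNI plugin is an executable that receives the network configuration as JSON on stdin and the operation via the CNI_COMMAND environment variable, and prints a JSON result. The sketch below simulates that contract with a plain function and a made-up toy IPAM counter, purely for illustration:

```python
import json

# Minimal simulation of the CNI plugin contract. A real plugin is a
# separate executable invoked by the runtime; here it is a plain
# function, and the IPAM "allocator" is a toy counter.
ALLOCATED = {}
NEXT_IP = [2]

def cni_plugin(command, container_id, net_config_json):
    cfg = json.loads(net_config_json)
    if command == "ADD":
        ip = f"10.1.0.{NEXT_IP[0]}/24"
        NEXT_IP[0] += 1
        ALLOCATED[container_id] = ip
        return json.dumps({"cniVersion": cfg["cniVersion"],
                           "ips": [{"address": ip}]})
    if command == "DEL":
        # CNI requires DEL to release everything that ADD allocated
        ALLOCATED.pop(container_id, None)
        return ""

conf = json.dumps({"cniVersion": "0.4.0", "name": "mynet", "type": "toy"})
result = json.loads(cni_plugin("ADD", "ctr-1", conf))
print(result["ips"][0]["address"])  # 10.1.0.2/24
cni_plugin("DEL", "ctr-1", conf)
print("ctr-1" in ALLOCATED)  # False
```

The ADD/DEL pairing is the whole of CNI's lifecycle concern: connect the container when it starts, release what was allocated when it is deleted.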
Who is using CNI?
Container runtimes
- rkt — container engine
- Kubernetes — a system to simplify container operations
- OpenShift — Kubernetes with additional enterprise features
- Cloud Foundry — a platform for cloud applications
- Apache Mesos — a distributed systems kernel
- Amazon ECS — a highly scalable, high performance container management service
3rd party plugins
- Project Calico — a layer 3 virtual network
- Weave — a multi-host Docker network
- Contiv Networking — policy networking for various use cases
- SR-IOV
- Cilium — BPF & XDP for containers
- Infoblox — enterprise IP address management for containers
- Multus — a meta-plugin for attaching multiple network interfaces
- Romana — Layer 3 CNI plugin supporting network policy for Kubernetes
- CNI-Genie — generic CNI network plugin
- Nuage CNI — Nuage Networks SDN plugin with Kubernetes network policy support
- Silk — a CNI plugin designed for Cloud Foundry
- Linen — a CNI plugin designed for overlay networks with Open vSwitch that fits into SDN/OpenFlow network environments
- Vhostuser — a Dataplane network plugin — Supports OVS-DPDK & VPP
- Amazon ECS CNI Plugins — a collection of CNI Plugins to configure containers with Amazon EC2 elastic network interfaces (ENIs)
- Bonding CNI — a link-aggregating plugin for failover and high-availability networking
- ovn-kubernetes — a container network plugin built on Open vSwitch (OVS) and Open Virtual Networking (OVN) with support for both Linux and Windows
3. CSI (Container Storage Interface): the container storage interface, which provides storage resources.
One of the key differentiators for Kubernetes has been a powerful volume plugin system that enables many different types of storage systems to:
- Automatically create storage when required.
- Make storage available to containers wherever they’re scheduled.
- Automatically delete the storage when no longer needed.

Adding support for new storage systems to Kubernetes, however, has been challenging.
Why Kubernetes CSI?
Kubernetes volume plugins are currently "in-tree", meaning they're linked, compiled, built, and shipped with the core Kubernetes binaries.
Adding support for a new storage system to Kubernetes (a volume plugin) therefore requires checking code into the core Kubernetes repository, and aligning with the Kubernetes release process is painful for many plugin developers.
The existing Flex Volume plugin attempted to address this pain by exposing an exec-based API for external volume plugins. Although it enables third-party storage vendors to write drivers out-of-tree, deploying the third-party driver files requires access to the root filesystem of the node and master machines.
CSI addresses all of these issues by enabling storage plugins to be developed out-of-tree, containerized, deployed via standard Kubernetes primitives, and consumed through the Kubernetes storage primitives users know and love (PersistentVolumeClaims, PersistentVolumes, StorageClasses).
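To make the flow concrete, here is a toy sketch of dynamic provisioning through an out-of-tree driver: a PersistentVolumeClaim names a StorageClass, an external provisioner calls the driver's CreateVolume operation, and the resulting volume is bound to the claim. The class and field names are illustrative assumptions, not the real CSI API surface:

```python
# Toy sketch of dynamic provisioning via an out-of-tree CSI driver.
# Real CSI drivers expose gRPC services and run as containers deployed
# with standard Kubernetes primitives; this is illustrative only.
class ToyCSIDriver:
    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gib):   # modeled on CreateVolume
        vol_id = f"vol-{name}"
        self.volumes[vol_id] = size_gib
        return vol_id

def provision(driver, pvc):
    """What an external provisioner does for a pending claim."""
    vol_id = driver.create_volume(pvc["name"], pvc["size_gib"])
    # Bind the new volume to the claim that requested it
    return {"pv": vol_id, "claim": pvc["name"], "phase": "Bound"}

driver = ToyCSIDriver()
pvc = {"name": "data-db-0", "size_gib": 10, "storageClass": "fast-ssd"}
binding = provision(driver, pvc)
print(binding["phase"])  # Bound
```

The point of CSI is that ToyCSIDriver's real-world equivalent lives outside the Kubernetes tree, while users keep interacting only with PersistentVolumeClaims, PersistentVolumes, and StorageClasses.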
The above three resources are equivalent to the most basic resource types of a distributed operating system, and Kubernetes is the bond that binds them together.
A simple 3-node + 1-master architecture:
Kubernetes Master Components
Following are the key components of the master server that are necessary for communicating with the Kubernetes nodes.
Etcd
It stores the configuration information that can be used by each of the nodes in the cluster. It is a highly available key-value store that can be distributed among multiple nodes. Because it may hold sensitive information, it is accessible only by the Kubernetes API server.
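The role etcd plays, a watched configuration store, can be sketched with a toy in-process key-value store. Real etcd is distributed and replicated via Raft, and the key prefix and event shapes below are simplifications:

```python
# Toy key-value store with watch notifications, sketching etcd's role:
# components register interest in keys and are notified when the stored
# configuration changes. Real etcd is a distributed store; this
# in-process dict is illustrative only.
class ToyKV:
    def __init__(self):
        self.data, self.watchers = {}, []

    def watch(self, prefix, callback):
        # Register a callback for every write under a key prefix
        self.watchers.append((prefix, callback))

    def put(self, key, value):
        self.data[key] = value
        for prefix, cb in self.watchers:
            if key.startswith(prefix):
                cb(key, value)

events = []
kv = ToyKV()
kv.watch("/registry/pods/", lambda k, v: events.append((k, v)))
kv.put("/registry/pods/default/web", "Pending")
print(events)  # [('/registry/pods/default/web', 'Pending')]
```

This watch-on-a-prefix pattern is what lets the control plane react to state changes instead of polling for them.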
API Server
The API server exposes the Kubernetes API and provides all operations on the cluster through it. It implements an interface, which means different tools and libraries can readily communicate with it. Kubeconfig, along with the server-side tools, can be used for this communication.
Controller Manager
This component is responsible for most of the controllers that regulate the state of the cluster and perform tasks. In general, it can be thought of as a daemon that runs in a loop, collecting information and sending it to the API server.
It works toward observing the shared state of the cluster and then making changes to bring the current state of the servers to the desired state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller.
The controller manager runs these different kinds of controllers to handle nodes, endpoints, and so on.
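The loop described above can be sketched for a single controller, here a replication-controller-like reconciler. The list of running pods stands in for what a real controller would read from the API server:

```python
# Sketch of the reconciliation loop at the heart of every controller:
# observe the current state, compare it with the desired state, and
# return the actions needed to close the gap. Modeled loosely on a
# replication controller; a real one watches the API server.
def reconcile(desired_replicas, running_pods):
    """Return the actions needed to reach the desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: create the missing ones
        return [("create", i) for i in range(diff)]
    if diff < 0:
        # Too many pods: delete the surplus
        return [("delete", pod) for pod in running_pods[:-diff]]
    return []  # current state matches desired state

actions = reconcile(3, ["web-a"])
print(actions)  # [('create', 0), ('create', 1)]
```

Running this comparison in a loop, forever, is what "bringing the current state to the desired state" means in practice.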
Scheduler
This is one of the key components of the Kubernetes master. It is a service in the master responsible for distributing the workload: it tracks resource utilization on the cluster nodes and places workloads on nodes that have the resources available to accept them. In other words, this is the mechanism responsible for allocating pods to available nodes.
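A naive sketch of the two scheduling phases: filter out nodes that cannot fit the pod, then score the survivors (here, the node with the most free CPU wins). Real scheduling uses many more predicates and priorities, and the node data below is invented:

```python
# Naive two-phase scheduler sketch: filter infeasible nodes, then pick
# the best-scoring feasible one. Real kube-scheduler applies many
# predicates and priorities; "most free CPU" is a toy scoring rule.
def schedule(pod_cpu, nodes):
    # Phase 1: keep only nodes with enough free CPU for the pod
    feasible = {n: free for n, free in nodes.items() if free >= pod_cpu}
    if not feasible:
        return None  # pod stays pending
    # Phase 2: score, here simply the node with the most free CPU
    return max(feasible, key=feasible.get)

nodes = {"node-1": 0.5, "node-2": 2.0, "node-3": 1.0}  # free CPU cores
print(schedule(1.5, nodes))  # node-2
```

If no node passes the filter phase, the pod simply stays unscheduled until resources free up, which matches the pending-pod behavior described above.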
Kubernetes Node Components
Following are the key components of the node servers that are necessary for communicating with the Kubernetes master.
Docker
The first requirement of each node is Docker, which runs the encapsulated application containers in a relatively isolated but lightweight operating environment.
Kubelet Service
This is a small service on each node responsible for relaying information to and from the control plane. It communicates with the API server to read configuration details, report status, and receive commands and work.
The kubelet process then assumes responsibility for maintaining the state of the workloads and of the node server.
Kubernetes Proxy Service
This is a proxy service that runs on each node and helps make services available to external hosts. It forwards requests to the correct containers and is capable of performing primitive load balancing.
It makes sure that the networking environment is predictable and accessible while remaining isolated.
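The forwarding behavior can be sketched as a round-robin mapping from one stable service endpoint to the pod endpoints behind it. Real kube-proxy programs iptables or IPVS rules rather than proxying each request in userspace, and the addresses here are made up:

```python
import itertools

# Toy sketch of what kube-proxy does for a Service: present one stable
# virtual endpoint and spread requests across the pod endpoints behind
# it (round robin here). Real kube-proxy programs iptables/IPVS rules
# instead of forwarding requests itself.
class ServiceProxy:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self, request):
        backend = next(self._cycle)  # pick the next pod endpoint
        return backend, request

proxy = ServiceProxy(["10.1.0.2:8080", "10.1.0.3:8080"])
print(proxy.route("GET /")[0])  # 10.1.0.2:8080
print(proxy.route("GET /")[0])  # 10.1.0.3:8080
```

Because every node runs this service, a request arriving at any node can reach the right pod, which is what makes the networking environment predictable.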