The Ansible tool was developed by Michael DeHaan, the author of the provisioning server application Cobbler and co-author of the Fedora Unified Network Controller (Func) framework for remote administration.[7]
Ansible, Inc. (originally AnsibleWorks, Inc.) was the company founded in 2013 by DeHaan, Timothy Gerla, and Saïd Ziouani to commercially support and sponsor Ansible.[8][9][10] Red Hat acquired Ansible in October 2015.[11][12]
Ansible manages multiple machines by selecting portions of its inventory, which is stored in simple plain-text files. The inventory is configurable, and target machine inventory can be sourced dynamically or from cloud-based sources in different formats (YAML, INI).[14]
Sensitive data can be stored in encrypted files using Ansible Vault[15] since 2014.[16] In contrast with other popular configuration-management software — such as Chef, Puppet, Salt and CFEngine — Ansible uses an agentless architecture,[17] with Ansible software not normally running or even installed on the controlled node.[17] Instead, Ansible orchestrates a node by installing and running modules on the node temporarily via SSH. For the duration of an orchestration task, a process running the module communicates with the controlling machine with a JSON-based protocol via its standard input and output.[18] When Ansible is not managing a node, it does not consume resources on the node because no daemons are run or software installed.[17]
Dependencies
Ansible requires Python to be installed on all managing machines, including the pip package manager, along with the configuration-management software and its dependent packages. Managed network devices require no extra dependencies and are agentless.[19]
Control node
The control node (master host) is intended to manage (orchestrate) target machines (nodes termed the “inventory”, see below).[20] Control nodes can be run from Linux and Unix-like operating systems (including macOS); Windows is supported only through the Windows Subsystem for Linux.[21] Multiple control nodes are allowed.[20] Ansible does not require a single controlling machine for orchestration,[22] which keeps disaster recovery simple.[22] Nodes are managed by the controlling node over SSH.
Design goals
The design goals of Ansible include:
Minimal in nature. Management systems should not impose additional dependencies on the environment.[17]
Consistent. With Ansible, one should be able to create consistent environments.
Secure. Ansible does not deploy agents to nodes. Only OpenSSH and Python are required on the managed nodes.[17][22]
Reliable. When carefully written, an Ansible playbook can be idempotent, to prevent unexpected side effects on the managed systems.[23] It is possible to write playbooks that are not idempotent.
Minimal learning required. Playbooks use an easy and descriptive language based on YAML and Jinja templates.
Modules
Modules[24] are mostly standalone and can be written in a standard scripting language (such as Python, Perl, Ruby, or Bash).[citation needed] One of the guiding goals of modules is idempotency: even if an operation is repeated multiple times (e.g., upon recovery from an outage), it will always place the system into the same state.[18][non-primary source needed]
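For instance, a task using the builtin file module is idempotent: running it repeatedly leaves the node in the same state. The path and mode below are illustrative, not taken from the Ansible documentation.
- name: Ensure the application directory exists
  ansible.builtin.file:
    path: /opt/example_app    # illustrative path
    state: directory
    mode: "0755"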
Inventory configuration
Location of target nodes is specified through inventory configuration lists (INI or YAML formatted) located at /etc/ansible/hosts (on Linux).[14][25] The configuration file lists either the IP address or hostname of each node that is accessible by Ansible. In addition, nodes can be assigned to groups.[14]
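A minimal YAML-formatted inventory of the kind described below might look as follows; the address and hostnames are illustrative.
all:
  hosts:
    192.0.2.50:
  children:
    webservers:
      hosts:
        foo.example.com:
        bar.example.com: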
This configuration file specifies three nodes: the first node is specified by an IP address, and the latter two nodes are specified by hostnames. Additionally, the latter two nodes are grouped under the webservers group.
Ansible can also use a custom Dynamic Inventory script, which can dynamically pull data from a different system,[26] and supports groups of groups.[27]
Playbooks
Playbooks are YAML files that store lists of tasks for repeated[28][20] executions on managed nodes.[20][29] Each Playbook maps (associates) a group of hosts to a set of roles. Each role is represented by calls to Ansible tasks.[30]
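A minimal playbook sketch mapping the webservers group from the inventory above to a set of roles (the role names are illustrative):
- name: Configure web servers
  hosts: webservers
  become: true        # escalate privileges on the managed nodes
  roles:
    - common          # role names are illustrative
    - webserver
Running ansible-playbook with such a file applies both roles to every host in the webservers group.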
Ansible Automation Platform
The Ansible Automation Platform (AAP) is a REST API, web service, and web-based interface (application) designed to make Ansible more accessible to people with a wide range of IT skillsets. It is a platform composed of multiple components including developer tooling, an operations interface, as well as an Automation Mesh to enable automation tasks at scale across data centers. AAP is a commercial product supported by Red Hat, Inc. but derived from 17+ upstream open source projects including the AWX upstream project (formerly Ansible Tower), which has been open source since September 2017.[31][32][33][34]
Another open-source alternative to Tower is Semaphore, written in Go.[35][36]
Managed nodes
Managed nodes, if they are Unix-like, must have Python 2.4 or later. For managed nodes with Python 2.5 or earlier, the python-simplejson package is also required.[37] Since version 1.7, Ansible can also manage Windows[38] nodes.[37] In this case, native PowerShell remoting supported by the WS-Management protocol is used instead of SSH.
Docker
Docker is a set of products that uses operating-system-level virtualization to deliver software in packages called containers. Docker automates the deployment of applications within lightweight containers, enabling them to run consistently across different computing environments.
The core software that runs and manages these containers is called Docker Engine. Docker was first released in 2013 and continues to be developed by Docker, Inc. The platform includes both free and paid tiers.
dotCloud Inc. was founded by Kamel Founadi, Solomon Hykes, and Sebastien Pahl[6] during the Y Combinator Summer 2010 startup incubator group, launched in 2011, and was renamed Docker, Inc. in 2013.[7]
Docker debuted to the public in Santa Clara at PyCon in 2013.[8] It was released as open-source in March 2013.[9] At the time, it used LXC as its default execution environment. One year later, with the release of version 0.9, Docker replaced LXC with its own component, libcontainer, which was written in the Go programming language.[10][11]
In 2017, Docker created the Moby project for open research and development.[12] In March 2023, Communications of the ACM featured Docker as its cover article in a retrospective of the past decade.[13]
October 15, 2014: Microsoft announced the integration of the Docker engine into Windows Server, as well as native support for the Docker client role in Windows.[15][16]
November 10, 2014: Docker announced a partnership with Stratoscale.[18]
December 4, 2014: IBM announced a strategic partnership with Docker that enables Docker to integrate more closely with the IBM Cloud.[19]
June 22, 2015: Docker and several other companies announced that they were working on a new vendor and operating-system-independent standard for software containers.[20][21]
December 2015: Oracle Cloud added Docker container support after acquiring StackEngine, a Docker container startup.[22]
March 2016: Docker for Mac and Windows betas released.[23][24]
April 2016: Windocks, an independent software vendor, released a port of Docker’s open source project to Windows, supporting Windows Server 2012 R2 and Server 2016, with all editions of SQL Server 2008 onward.[25]
June 8, 2016: Microsoft announced that Docker could now be used natively on Windows 10.[27]
January 2017: An analysis of LinkedIn profile mentions showed Docker presence grew by 160% in 2016.[28]
May 6, 2019: Microsoft announced the second version of Windows Subsystem for Linux (WSL). Docker, Inc. announced that it had started working on a version of Docker for Windows to run on WSL 2.[29] In particular, this meant Docker could run on Windows 10 Home (previously it was limited to Windows Pro and Enterprise since it used Hyper-V).
August 2020: Microsoft announced a backport of WSL2 to Windows 10 versions 1903 and 1909 (previously WSL2 was available only on version 2004)[30] and Docker developers announced availability of Docker for these platforms.[31]
August 2021: Docker Desktop for Windows and macOS was no longer available free of charge for enterprise users. Docker ended free Docker Desktop use for larger business customers and replaced its Free Plan with a Personal Plan. Docker Engine on Linux distributions remained unaffected.[32]
December 2023: Docker acquired AtomicJar to expand its testing capabilities.[33]
Design
Docker can use different interfaces to access virtualization features of the Linux kernel.[34]
Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.[35] Because all of the containers share the services of a single operating system kernel, they use fewer resources than virtual machines.[36]
Because Docker containers are lightweight, a single server or virtual machine can run several containers simultaneously.[40] A 2018 analysis found that a typical Docker use case involves running eight containers per host, and that a quarter of analyzed organizations run 18 or more per host.[41] It can also be installed on a single board computer like the Raspberry Pi.[42]
The Linux kernel’s support for namespaces mostly[43] isolates an application’s view of the operating environment, including process trees, network, user IDs and mounted file systems, while the kernel’s cgroups provide resource limiting for memory and CPU.[44] Since version 0.9, Docker includes its own component (called libcontainer) to use virtualization facilities provided directly by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC and systemd-nspawn.[10][34][37][45]
Docker implements a high-level API to provide lightweight containers that run processes in isolation.[9]
Components
The Docker software as a service offering consists of three components:
Software: The Docker daemon, called dockerd, is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests that are sent via the Docker Engine API.[46][47] The Docker client program, called docker, provides a command-line interface (CLI) that allows users to interact with Docker daemons.[46][48]
Objects: Docker objects are various entities used to assemble an application in Docker. The main classes of Docker objects are images, containers, and services.[46]
A Docker container is a standardized, encapsulated environment that runs applications.[49] A container is managed using the Docker API or CLI.[46]
A Docker image is a read-only template used to build containers. Images are used to store and ship applications.[46]
A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a swarm, a set of cooperating daemons that communicate through the Docker API.[46]
Registries: A Docker registry is a repository for Docker images. Docker clients connect to registries to download (“pull”) images for use or upload (“push”) images that they have built. Registries can be public or private. The main public registry is Docker Hub. Docker Hub is the default registry where Docker looks for images.[46][50] Docker registries also allow the creation of notifications based on events.[51]
A Dockerfile is a text file that commonly specifies several aspects of a Docker container: the Linux distribution, installation commands for the programming language runtime environment and application source code.
# Build argument selecting the base image tag
ARG CODE_VERSION=latest
# Base image: the Linux distribution the container is built from
FROM ubuntu:${CODE_VERSION}
# Copy a file from the build context into the image
COPY ./examplefile.txt /examplefile.txt
# Set an environment variable inside the image
ENV MY_ENV_VARIABLE="example_value"
# Run a command at build time (here, refresh the package index)
RUN apt-get update
# Mount a directory from the Docker volume
# Note: This is usually specified in the 'docker run' command.
VOLUME ["/myvolume"]
# Expose a port (22 for SSH)
EXPOSE 22
Docker Compose is a tool for defining and running multi-container Docker applications.[53] It uses YAML files to configure the application’s services and performs the creation and start-up process of all the containers with a single command. The docker compose CLI utility allows users to run commands on multiple containers at once; for example, building images, scaling containers, running containers that were stopped, and more.[54] Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address one container.[55]
The docker-compose.yml file is used to define an application’s services and includes various configuration options. For example, the build option defines configuration options such as the Dockerfile path, the command option allows one to override default Docker commands, and more.[56]
The first public beta version of Docker Compose (version 0.0.1) was released on December 21, 2013.[57] The first production-ready version (1.0) was made available on October 16, 2014.[58]
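A minimal docker-compose.yml sketch is shown below; the service names, image tag, and port mapping are illustrative, not taken from the Docker documentation.
services:
  web:
    build: .             # build the image from the Dockerfile in the current directory
    ports:
      - "8000:8000"      # host:container port mapping (illustrative)
    depends_on:
      - redis            # start the redis service before web
  redis:
    image: redis:7       # image tag is illustrative
With such a file in place, a single docker compose up command creates and starts both containers.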
Docker Swarm provides native clustering functionality for Docker containers, which turns a group of Docker engines into a single virtual Docker engine.[59] In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine.[60] The docker swarm CLI[61] utility allows users to run Swarm containers, create discovery tokens, list nodes in the cluster, and more.[62] The docker node CLI utility allows users to run various commands to manage nodes in a swarm, for example, listing the nodes in a swarm, updating nodes, and removing nodes from the swarm.[63] Docker manages swarms using the Raft consensus algorithm. According to Raft, for an update to be performed, the majority of Swarm nodes need to agree on the update.[64][65]
In addition to the docker swarm CLI, docker stack is a tool designed to manage Swarm services with greater flexibility. It can use a configuration file very similar to a docker-compose.yml, with a few nuances. Using docker stack instead of docker compose offers several advantages, such as the ability to manage a Swarm cluster across multiple machines or the capability to work with docker secret combined with docker context, a feature that allows executing Docker commands on a remote host, enabling remote container management.
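A stack file is close in form to a docker-compose.yml; the following sketch (image, replica count, and port are illustrative) adds a deploy section that Swarm interprets:
version: "3.8"
services:
  web:
    image: nginx:alpine    # image is illustrative
    deploy:
      replicas: 3          # Swarm schedules three replicas across the cluster's nodes
    ports:
      - "80:80"
Such a file could be deployed to a swarm with docker stack deploy -c <file> <stack name>.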
Docker Volume facilitates the independent persistence of data, allowing data to remain even after the container is deleted or re-created.[66]
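As a sketch, a named volume declared in a Compose file (service, image, and volume names are illustrative) keeps database data independent of the container's lifecycle:
services:
  db:
    image: postgres:16                     # image tag is illustrative
    volumes:
      - db-data:/var/lib/postgresql/data   # mount the named volume into the container
volumes:
  db-data:                                 # named volume; persists after the container is removed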
Licensing model
The Docker Engine is licensed under the Apache License 2.0. Docker Desktop distributes some components that are licensed under the GNU General Public License. Docker Desktop is not free for large enterprises.[67]
Dockerfiles can themselves be licensed under an open-source license. The scope of such a license statement is only the Dockerfile and not the container image.[citation needed]
Kubernetes
The name Kubernetes comes from the Ancient Greek term κυβερνήτης, kubernḗtēs (helmsman, pilot), which is also the origin of the words cybernetics and (through Latin) governor. “Kubernetes” is often abbreviated with the numerical contraction “K8s”, meaning “the letter K, followed by 8 letters, followed by s”.[5]
Kubernetes assembles one or more computers, either virtual machines or bare metal, into a cluster which can run workloads in containers. It works with various container runtimes, such as containerd and CRI-O.[6] Its suitability for running and managing workloads of all sizes and styles has led to its widespread adoption in clouds and data centers. There are multiple distributions of this platform—from independent software vendors (ISVs) as well as hosted-on-cloud offerings from all the major public cloud vendors.[7]
The software consists of a control plane and nodes on which the actual applications run. It includes tools like kubeadm and kubectl which can be used to interact with its REST-based API.[8]
History
Google Kubernetes Engine talk at 2017 Google Cloud Summit
Kubernetes was announced by Google on June 6, 2014.[9] The project was conceived and created by Google employees Joe Beda, Brendan Burns, and Craig McLuckie. Others at Google soon joined to help build the project including Ville Aikas, Dawn Chen, Brian Grant, Tim Hockin, and Daniel Smith.[10][11] Other companies such as Red Hat and CoreOS joined the effort soon after, with notable contributors such as Clayton Coleman and Kelsey Hightower.[9]
The design and development of Kubernetes was inspired by Google’s Borg cluster manager and based on Promise Theory.[12][13] Many of its top contributors had previously worked on Borg;[14][15] they codenamed Kubernetes “Project 7” after the Star Trek ex-Borg character Seven of Nine[16] and gave its logo a seven-spoked ship’s wheel (designed by Tim Hockin). Unlike Borg, which was written in C++,[14] Kubernetes is written in the Go language.
Kubernetes was announced in June 2014, and version 1.0 was released on July 21, 2015.[17] Google worked with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF)[18] and offered Kubernetes as the seed technology.
Google was already offering a managed Kubernetes service, GKE, and Red Hat had been supporting Kubernetes as part of OpenShift since the inception of the Kubernetes project in 2014.[19] In 2017, the principal competitors rallied around Kubernetes and announced adding native support for it:
AWS announced support for Kubernetes via the Elastic Kubernetes Service (EKS)[24] in November.
On March 6, 2018, the Kubernetes project reached ninth place in the list of GitHub projects by the number of commits, and second place in authors and issues, after the Linux kernel.[26]
Until version 1.18, Kubernetes followed an N-2 support policy, meaning that the three most recent minor versions received security updates and bug fixes.[27] Starting with version 1.19, Kubernetes follows an N-3 support policy, so the four most recent minor versions are supported.[28]
Concepts
Kubernetes architecture diagram
Kubernetes defines a set of building blocks (“primitives”) that collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, memory or custom metrics.[29] Kubernetes is loosely coupled and extensible to meet the needs of different workloads. The internal components as well as extensions and containers that run on Kubernetes rely on the Kubernetes API.[30][31]
The platform exerts its control over compute and storage resources by defining resources as objects, which can then be managed as such.
Kubernetes follows the primary/replica architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.[30][32]
Control plane
The Kubernetes master node handles the Kubernetes control plane of the cluster, managing its workload and directing communication across the system. The Kubernetes control plane consists of various components, each its own process, that can run either on a single master node or on multiple masters supporting high-availability clusters.[32] Securing the control plane typically involves measures such as TLS encryption, RBAC, strong authentication methods, and network separation. The various components of the Kubernetes control plane are as follows.[33]
Etcd
Etcd[34] is a persistent, lightweight, distributed, key-value data store (originally developed as part of CoreOS). It reliably stores the configuration data of the cluster, representing the overall state of the cluster at any given point of time. Etcd favors consistency over availability in the event of a network partition (see CAP theorem). The consistency is crucial for correctly scheduling and operating services.
API server
The API server serves the Kubernetes API using JSON over HTTP, which provides both the internal and external interface to Kubernetes.[30][35] The API server processes and validates REST requests and updates the state of the API objects in etcd, thereby allowing clients to configure workloads and containers across worker nodes.[36] The API server uses etcd’s watch API to monitor the cluster, roll out critical configuration changes, or restore any divergences of the state of the cluster back to the desired state as declared in etcd.
As an example, a human operator may specify that three instances of a particular “pod” (see below) need to be running, and etcd stores this fact. If the Deployment controller finds that only two instances are running (conflicting with the etcd declaration),[37] it schedules the creation of an additional instance of that pod.[32]
Scheduler
The scheduler is an extensible component that selects the node that an unscheduled pod (the basic unit of workloads to be scheduled) runs on, based on resource availability and other constraints. The scheduler tracks resource allocation on each node to ensure that workload is not scheduled in excess of available resources. For this purpose, the scheduler must know the resource requirements, resource availability, and other user-provided constraints or policy directives such as quality-of-service, affinity/anti-affinity requirements, and data locality. The scheduler’s role is to match resource “supply” to workload “demand”.[38]
Kubernetes allows running multiple schedulers within a single cluster. As such, scheduler plug-ins may be developed as in-process extensions to the native vanilla scheduler and run alongside it as a separate scheduler, as long as they conform to the Kubernetes scheduling framework.[39] This allows cluster administrators to extend or modify the behavior of the default Kubernetes scheduler according to their needs.
Controllers
A controller is a reconciliation loop that drives the actual cluster state toward the desired state, communicating with the API server to create, update, and delete the resources it manages (e.g., pods or service endpoints).[40][35]
An example controller is a ReplicaSet controller, which handles replication and scaling by running a specified number of copies of a pod across the cluster. The controller also handles creating replacement pods if the underlying node fails.[40] Other controllers that are part of the core Kubernetes system include a DaemonSet controller for running exactly one pod on every machine (or some subset of machines), and a Job controller for running pods that run to completion (e.g. as part of a batch job).[41] Labels selectors often form part of the controller’s definition that specify the set of pods that a controller manages.[42]
The controller manager is a single process that manages several core Kubernetes controllers (including the examples described above), is distributed as part of the standard Kubernetes installation, and responds to events such as the loss of nodes.[33]
Custom controllers may also be installed in the cluster, further allowing the behavior and API of Kubernetes to be extended when used in conjunction with custom resources (see custom resources, controllers and operators below).
Nodes
A node, also known as a worker or a minion, is a machine where containers (workloads) are deployed. Every node in the cluster must run a container runtime, as well as the components mentioned below, in order to communicate with the control plane and handle the network configuration of these containers.
kubelet
kubelet is responsible for the running state of each node, ensuring that all containers on the node are healthy. It takes care of starting, stopping, and maintaining application containers organized into pods as directed by the control plane.[30][43] kubelet monitors the state of a pod, and if a pod is not in the desired state, it is re-deployed to the same node. Node status is relayed every few seconds via heartbeat messages to the API server. Once the control plane detects a node failure, a higher-level controller is expected to observe this state change and launch pods on another healthy node.[44]
Container runtime
A container runtime is responsible for the lifecycle of containers, including launching, reconciling and killing of containers. kubelet interacts with container runtimes via the Container Runtime Interface (CRI),[45][46] which decouples the maintenance of core Kubernetes from the actual CRI implementation.
Originally, kubelet interfaced exclusively with the Docker runtime[47] through a “dockershim”. However, from November 2020[48] up to April 2022, Kubernetes deprecated the shim in favor of interfacing directly with the container runtime through containerd, or replacing Docker with a runtime that is compliant with the Container Runtime Interface (CRI).[49][45][50] With the release of v1.24 in May 2022, the “dockershim” was removed entirely.[51]
Examples of popular container runtimes that are compatible with kubelet include containerd (initially supported via Docker) and CRI-O.
kube-proxy
kube-proxy is an implementation of a network proxy and a load balancer, and it supports the service abstraction along with the other networking operations.[30] It is responsible for routing traffic to the appropriate container based on IP and port number of the incoming request.
Namespaces
In Kubernetes, namespaces are utilized to segregate the resources it handles into distinct and non-intersecting collections.[52] They are intended for use in environments with many users spread across multiple teams, or projects, or even separating environments like development, test, and production.
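A namespace is itself an API object; a minimal manifest sketch (the name is illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: development    # namespace name is illustrative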
Pods
The basic scheduling unit in Kubernetes is a pod,[53] which consists of one or more containers that are guaranteed to be co-located on the same node.[30] Each pod in Kubernetes is assigned a unique IP address within the cluster, allowing applications to use ports without the risk of conflict.[54] Within the pod, all containers can reference each other.
A container resides inside a pod. The container is the lowest level of a micro-service, which holds the running application, libraries, and their dependencies.
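A minimal pod manifest sketch (name, label, and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example           # label used later by selectors
spec:
  containers:
    - name: web
      image: nginx:alpine  # container image is illustrative
      ports:
        - containerPort: 80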
Workloads
Kubernetes supports several abstractions of workloads that are at a higher level over simple pods. This allows users to declaratively define and manage these high-level abstractions, instead of having to manage individual pods by themselves. Several of these abstractions, supported by a standard installation of Kubernetes, are described below.
ReplicaSets, ReplicationControllers and Deployments
A ReplicaSet‘s purpose is to maintain a stable set of replica pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.[55] The ReplicaSet can also be said to be a grouping mechanism that lets Kubernetes maintain the number of instances that have been declared for a given pod. The definition of a ReplicaSet uses a selector, whose evaluation will result in identifying all pods that are associated with it.
A ReplicationController serves the same purpose as a ReplicaSet and behaves similarly, ensuring that there will always be a specified number of pod replicas as desired. The ReplicationController workload was the predecessor of the ReplicaSet, but it was eventually deprecated in favor of ReplicaSet to make use of set-based label selectors.[55]
Deployments are a higher-level management mechanism for ReplicaSets. While the ReplicaSet controller manages the scale of the ReplicaSet, the Deployment controller manages what happens to the ReplicaSet – whether an update has to be rolled out, or rolled back, etc. When Deployments are scaled up or down, this results in the declaration of the ReplicaSet changing, and this change in the declared state is managed by the ReplicaSet controller.[37]
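A Deployment sketch declaring three replicas of a labelled pod (names and image are illustrative); the Deployment controller creates and manages a ReplicaSet that keeps three matching pods running:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3                    # desired number of pod replicas
  selector:
    matchLabels:
      app: example               # pods managed by this Deployment
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: web
          image: nginx:alpine    # image is illustrative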
StatefulSets
StatefulSets are controllers that enforce the properties of uniqueness and ordering amongst instances of a pod, and can be used to run stateful applications.[56] While scaling stateless applications is only a matter of adding more running pods, doing so for stateful workloads is harder, because the state needs to be preserved if a pod is restarted. If the application is scaled up or down, the state may need to be redistributed.
Databases are an example of stateful workloads. When run in high-availability mode, many databases come with the notion of a primary instance and secondary instances. In this case, the notion of ordering of instances is important. Other applications like Apache Kafka distribute the data amongst their brokers; hence, one broker is not the same as another. In this case, the notion of instance uniqueness is important.
DaemonSets
DaemonSets are responsible for ensuring that a pod is created on every single node in the cluster.[57] Generally, most workloads scale in response to a desired replica count, depending on the availability and performance requirements as needed by the application. However, in other scenarios it may be necessary to deploy a pod to every single node in the cluster, scaling up the number of total pods as nodes are added and garbage collecting them as they are removed. This is particularly helpful for use cases where the workload has some dependency on the actual node or host machine, such as log collection, ingress controllers, and storage services.
Services
Simplified view showing how Services interact with Pod networking in a Kubernetes cluster
A Kubernetes service is a set of pods that work together, such as one tier of a multi-tier application. The set of pods that constitute a service are defined by a label selector.[30] Kubernetes provides two modes of service discovery, using environment variables or using Kubernetes DNS.[58] Service discovery assigns a stable IP address and DNS name to the service, and load balances traffic in a round-robin manner to network connections of that IP address among the pods matching the selector (even as failures cause the pods to move from machine to machine).[54] By default a service is exposed inside a cluster (e.g., back end pods might be grouped into a service, with requests from the front-end pods load-balanced among them), but a service can also be exposed outside a cluster (e.g., for clients to reach front-end pods).[59]
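A Service sketch (name, selector, and ports are illustrative) that selects pods labelled app: example and balances traffic to them on port 80; with no type given, it defaults to a cluster-internal service:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example       # traffic is routed to pods carrying this label
  ports:
    - port: 80         # port exposed by the service
      targetPort: 80   # port on the selected pods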
Volumes
Filesystems in the Kubernetes container provide ephemeral storage, by default. This means that a restart of the pod will wipe out any data on such containers, and therefore, this form of storage is quite limiting in anything but trivial applications. A Kubernetes volume[60] provides persistent storage that exists for the lifetime of the pod itself. This storage can also be used as shared disk space for containers within the pod. Volumes are mounted at specific mount points within the container, which are defined by the pod configuration, and cannot mount onto other volumes or link to other volumes. The same volume can be mounted at different points in the file system tree by different containers.
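A sketch of a pod whose two containers share one volume mounted at different paths (names, images, and commands are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                # volume that exists for the lifetime of the pod
  containers:
    - name: writer
      image: busybox              # images and commands are illustrative
      command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /srv/input   # same volume, different mount point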
ConfigMaps and Secrets
A common application challenge is deciding where to store and manage configuration information, some of which may contain sensitive data. Configuration data can be anything as fine-grained as individual properties, or coarse-grained information like entire configuration files such as JSON or XML documents. Kubernetes provides two closely related mechanisms to deal with this need, known as ConfigMaps and Secrets, both of which allow for configuration changes to be made without requiring an application rebuild.
The data from ConfigMaps and Secrets will be made available to every single instance of the application to which these objects have been bound via the Deployment. A Secret or ConfigMap is sent to a node only if a pod on that node requires it, and it is stored only in memory on that node. Once the pod that depends on the Secret or ConfigMap is deleted, the in-memory copy of all bound Secrets and ConfigMaps is deleted as well.
The data from a ConfigMap or Secret is accessible to the pod through one of the following ways:[61]
As environment variables, which will be consumed by kubelet from the ConfigMap when the container is launched;
Mounted within a volume accessible within the container’s filesystem, which supports automatic reloading without restarting the container.
The biggest difference between a Secret and a ConfigMap is that Secrets are specifically designed for containing secure and confidential data, although they are not encrypted at rest by default, and additional setup is required in order to fully secure the use of Secrets within the cluster.[62] Secrets are often used to store confidential or sensitive data like certificates, credentials to work with image registries, passwords, and SSH keys.
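A sketch of a ConfigMap and of a pod consuming one of its keys as an environment variable (names, keys, and values are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  LOG_LEVEL: "info"            # key and value are illustrative
---
apiVersion: v1
kind: Pod
metadata:
  name: example-consumer
spec:
  containers:
    - name: app
      image: busybox           # image is illustrative
      command: ["sh", "-c", "echo $LOG_LEVEL && sleep 3600"]
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: example-config   # the ConfigMap defined above
              key: LOG_LEVEL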
Labels and selectors
Kubernetes enables clients (users or internal components) to attach keys called labels to any API object in the system, such as pods and nodes. Correspondingly, label selectors are queries against labels that resolve to matching objects.[30] When a service is defined, one can define the label selectors that will be used by the service router/load balancer to select the pod instances that the traffic will be routed to. Thus, simply changing the labels of the pods or changing the label selectors on the service can be used to control which pods get traffic and which don’t, which can be used to support various deployment patterns like blue–green deployments or A/B testing. This capability to dynamically control how services utilize implementing resources provides a loose coupling within the infrastructure.
For example, if an application’s pods have labels for a system tier (with values such as frontend or backend) and a release_track (with values such as canary or production), then an operation on all of the backend and canary pods can use a label selector such as:[42]
tier=backend AND release_track=canary
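Written as a fragment of a workload manifest, the same selection can be expressed with matchLabels, whose entries are combined with a logical AND:
selector:
  matchLabels:
    tier: backend
    release_track: canary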
Just like labels, field selectors also let one select Kubernetes resources. Unlike labels, the selection is based on the attribute values inherent to the resource being selected, rather than user-defined categorization. metadata.name and metadata.namespace are field selectors that will be present on all Kubernetes objects. Other selectors that can be used depend on the object/resource type.
Add-ons
Add-ons are additional features of the Kubernetes cluster implemented as applications running within it. The pods may be managed by Deployments, ReplicationControllers, and so on. There are many add-ons. Some of the more important are:
DNS: Cluster DNS is a DNS server, in addition to the other DNS server(s) in the environment, which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches.
Web UI: A general purpose, web-based UI for Kubernetes clusters. It allows administrators to manage and troubleshoot applications running in the cluster, as well as the cluster itself.
Resource monitoring: Container Resource Monitoring records metrics about containers in a central database, and provides a UI for browsing that data.
Cost monitoring: Kubernetes cost monitoring applications allow breakdown of costs by pods, nodes, namespaces, and labels.
Cluster-level logging: To prevent the loss of event data in the event of node or pod failures, container logs can be saved to a central log store with a search/browsing interface. Kubernetes provides no native storage for log data, but one can integrate many existing logging solutions into the Kubernetes cluster.
Storage
Containers emerged as a way to make software portable. The container contains all the packages needed to run a service. The provided file system makes containers extremely portable and easy to use in development. A container can be moved from development to test or production with no or relatively few configuration changes.
Historically Kubernetes was suitable only for stateless services. However, many applications have a database, which requires persistence, leading to the creation of persistent storage for Kubernetes. Implementing persistent storage for containers is one of the top challenges of Kubernetes administrators, DevOps and cloud engineers. Containers may be ephemeral, but more and more of their data is not, so one needs to ensure the data’s survival in case of container termination or hardware failure. When deploying containers with Kubernetes or containerized applications, organizations often realize that they need persistent storage. They need to provide fast and reliable storage for databases, root images and other data used by the containers.
In addition to its landscape survey, the Cloud Native Computing Foundation (CNCF) has published other information about Kubernetes persistent storage, including a blog post helping to define the container attached storage pattern. This pattern can be thought of as one that uses Kubernetes itself as a component of the storage system or service.[63]
More information about the relative popularity of these and other approaches can be found on the CNCF’s landscape survey as well, which showed that OpenEBS – a stateful persistent storage platform from DataCore Software,[64] and Rook – a storage orchestration project – were the two projects most likely to be in evaluation as of the fall of 2019.[65]
Container Attached Storage is a type of data storage that emerged as Kubernetes gained prominence. The Container Attached Storage approach or pattern relies on Kubernetes itself for certain capabilities while delivering primarily block, file, and object interfaces to workloads running on Kubernetes.[66]
Common attributes of Container Attached Storage include the use of extensions to Kubernetes, such as custom resource definitions, and the use of Kubernetes itself for functions that otherwise would be separately developed and deployed for storage or data management. Examples of functionality delivered by custom resource definitions or by Kubernetes itself include retry logic, delivered by Kubernetes itself, and the creation and maintenance of an inventory of available storage media and volumes, typically delivered via a custom resource definition.[67][68]
Container Storage Interface (CSI)
In Kubernetes version 1.9, the initial Alpha release of Container Storage Interface (CSI) was introduced.[69] Previously, storage volume plug-ins were included in the Kubernetes distribution. By creating a standardized CSI, the code required to interface with external storage systems was separated from the core Kubernetes code base. Just one year later, the CSI feature was made Generally Available (GA) in Kubernetes.[70]
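A StorageClass sketch that delegates volume provisioning to a CSI driver; the provisioner value is a hypothetical driver name and depends on which driver is installed in the cluster:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-csi
provisioner: csi.example.com              # hypothetical CSI driver name
reclaimPolicy: Delete                     # delete backing volumes when claims are removed
volumeBindingMode: WaitForFirstConsumer   # provision only once a pod uses the claim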
API
A key component of the Kubernetes control plane is the API server, which exposes an HTTP API that can be invoked by other parts of the cluster as well as by end users and external components. This API is a REST API, is declarative in nature, and is the same API exposed to the control plane.[71] The API server is backed by etcd to store all records persistently.[72]
API objects
In Kubernetes, all objects serve as the “record of intent” of the cluster’s state, and are able to define the desired state that the writer of the object wishes for the cluster to be in.[73] As such, most Kubernetes objects have the same set of nested fields, as follows:
spec: Describes the desired state of the resource, which can be controlled by end users, or other higher-level controllers;
status: Describes the current state of the resource, which is actively updated by the controller of the resource.
All objects in Kubernetes are subject to the same API conventions. Some of these include:
Must have the following metadata under the nested object field metadata:[74]
namespace: a label that objects are subdivided into;
name: a string that uniquely identifies the object within the defined namespace;
uid: a unique string that is able to distinguish between objects with the same name across space and time (even across deletions and recreations with the same name).
May be managed by another controller, which is defined in the metadata.ownerReferences field:[75]
At most one other object shall be the managing controller of the controllee object, which is defined by the controller field.
May be garbage collected if the owner is deleted:[76]
When an object is deleted, all dependent objects may also be deleted in a cascading fashion.
Custom resources, controllers and operators
The Kubernetes API can be extended using Custom Resources, which represent objects that are not part of the standard Kubernetes installation. These custom resources are declared using Custom Resource Definitions (CRDs), which is a kind of resource that can be dynamically registered and unregistered without shutting down or restarting a cluster that is currently running.[77]
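A CustomResourceDefinition sketch registering a hypothetical Backup resource; the group, kind, and schema fields are illustrative:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string       # illustrative field of the custom resource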
Custom controllers are another extension mechanism that interact with the Kubernetes API, similar to the default controllers in the standard pre-installed Kubernetes controller manager. These controllers may interact with custom resources to allow for a declarative API: users may declare the desired state of the system via the custom resources, and it is the responsibility of the custom controller to observe the change and reconcile it.
The combination of custom resources and custom controllers is often referred to as a Kubernetes Operator.[78] The key use case for operators is to capture the aim of a human operator who is managing a service or set of services and to implement it using automation, with a declarative API supporting this automation. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems.
Examples of problems solved by operators include taking and restoring backups of that application’s state, and handling upgrades of the application code alongside related changes such as database schemas or extra configuration settings. Several notable projects under the Cloud Native Computing Foundation‘s incubation program follow the operator pattern to extend Kubernetes, including Argo, Open Policy Agent and Istio.[79]
API security
Kubernetes defines the following strategies for controlling access to its API.[80]
In older versions of Kubernetes, the API server supported listening on both HTTP and HTTPS ports (with the HTTP port having no transport security whatsoever). This was deprecated in v1.10, and support was eventually dropped in v1.20 of Kubernetes.[81]
Authentication
All requests made to the Kubernetes API server are expected to be authenticated, and the API server supports several authentication strategies, some of which are listed below:[82]
Service account tokens, intended for programmatic API access
Users are typically expected to indicate and define cluster URL details along with the necessary credentials in a kubeconfig file, which are natively supported by other Kubernetes tools like kubectl and the official Kubernetes client libraries.[83]
Authorization
The Kubernetes API supports the following authorization modes:[84]
Node authorization mode: Grants kubelets a fixed list of API operations that they are allowed to perform in order to function properly.
Attribute-based access control (ABAC) mode: Grants access rights to users through the use of defined access control policies which combine attributes together.
Role-based access control (RBAC) mode: Grants access rights to users based on roles that are granted to the user, where each role defines a list of actions that are allowed (see the sketch after this list).
Webhook mode: Queries a REST API service to determine if a user is authorized to perform a given action.[33]
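A sketch of the RBAC objects referenced above: a Role granting read access to pods in one namespace, and a RoleBinding granting that role to a user (namespace, names, and subject are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development          # namespace and names are illustrative
  name: pod-reader
rules:
  - apiGroups: [""]               # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: development
subjects:
  - kind: User
    name: jane                    # illustrative user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io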
API clients
Kubernetes supports several official API clients:
kubectl: Command-line interface for interacting with the Kubernetes control plane[85]
The same API design principles have been used to define an API for programmatically creating, configuring, and managing Kubernetes clusters. This is called the Cluster API.[87] A key concept embodied in the API is using Infrastructure as Software, or the notion that the Kubernetes cluster infrastructure is itself a resource/object that can be managed just like any other Kubernetes resource. Similarly, machines that make up the cluster are also treated as a Kubernetes resource. The API has two pieces – the core API, and a provider implementation. The provider implementation consists of cloud-provider specific functions that let Kubernetes provide the cluster API in a fashion that is well-integrated with the cloud-provider’s services and resources.[33]
Uses
Kubernetes is commonly used as a way to host a microservice-based implementation, because it and its associated ecosystem of tools provide all the capabilities needed to address key concerns of any microservice architecture.
Criticism
A common criticism of Kubernetes is that it is too complex. Google admitted this as well.[88]