How do you explain Kubernetes terminology and orchestration in plain terms that people can at least begin to understand? Heck, how do you even say Kubernetes? (Pronunciations may vary a bit, but the agreed-upon origin is from the Greek, meaning “helmsman” or “sailing master.”)
Here’s how Red Hat technology evangelist Gordon Haff explains Kubernetes in his book, “From Software and Vats to Programs and Apps,” co-authored with Red Hat cloud strategist William Henry:
“Kubernetes, or k8s (k, 8 characters, s… get it?), or ‘kube’ if you’re into brevity, is an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications,” Haff and Henry write. “In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.” Understanding basic Kubernetes terminology is the first step toward understanding the complete cluster.
A cluster is a group or bunch of nodes that run your containerized applications. You manage the cluster and everything it includes – in other words, you manage your application(s) – with Kubernetes.
Nodes are the physical or virtual machines that make up your cluster; these “worker” machines have everything necessary to run your application containers, including the container runtime and other critical services.
A pod is essentially the smallest deployable unit of the Kubernetes ecosystem; more accurately, it’s the smallest object you deploy. A pod specifically represents a group of one or more containers running together on your cluster.
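As a sketch, a minimal pod manifest might look like the following; the name `my-pod` and the `nginx` image are illustrative choices for this example, not anything Kubernetes prescribes:

```yaml
# A minimal Pod: the smallest deployable Kubernetes object.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod          # illustrative name
spec:
  containers:
  - name: web           # one of possibly several containers in the pod
    image: nginx:1.25   # illustrative image and tag
    ports:
    - containerPort: 80
```

Applying a file like this with `kubectl apply -f pod.yaml` asks the cluster to run these containers together on a node.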
Containers sound so simple. We know what the word means: It’s something that you use to hold stuff. Just do a Google image search: The top visual explainer is a shipping container.
This translates reasonably well to a software context: A container is still essentially something that we put stuff in; in this case, the “stuff” is an application’s code as well as everything that code needs to run properly.
Simple enough, right?
“Containers solve the packaging problem of how to quickly build and deploy applications. They’re akin to virtual machines, but with two notable differences: they’re lightweight and spun up in seconds; and they move reliably from one environment to another (what works on the developer’s computer will work the same in dev/test and production).”
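To make the packaging idea concrete, here is a minimal, illustrative Dockerfile; the base image, file names, and application are assumptions for the example. It bundles an application’s code together with everything that code needs to run:

```dockerfile
# Start from a known runtime environment.
FROM python:3.12-slim
WORKDIR /app
# Copy the dependency list and install it inside the image.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code itself.
COPY app.py .
# The resulting image runs the same on a laptop, in dev/test, and in production.
CMD ["python", "app.py"]
```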
The Kubernetes API, in Kubernetes terminology, is the lifeblood of the system. You may have heard Kubernetes described as a “declarative” tool – in other words, Kubernetes lets you say “this is how I want things to run,” and then it does what’s needed to make that happen in a highly automated way. The Kubernetes API helps make that a reality. The official Kubernetes site defines the Kubernetes API as “the application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster.”
Kubernetes Control Plane
This sits between a cluster and Kubernetes basically as a necessary intermediary; it makes sure everything behaves
properly – like a chaperone at a container dance party. When people extol automation as one of the key benefits of
Kubernetes and container orchestration, this is a key piece. Says the Kubernetes official site: “The Control Plane
maintains a record of all of the Kubernetes Objects in the system, and runs continuous control loops to manage
those objects’ state.” The control plane continuously checks and rechecks that everything matches your desired
state. In general, the job of a controller in Kubernetes – there are multiple types – is to take actions needed to
manage a specific type of resource.
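A Deployment manifest is one common way to express desired state; in this sketch (the name, labels, and image are illustrative), the Deployment controller’s control loop works to keep exactly three replicas running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment      # illustrative name
spec:
  replicas: 3               # desired state: three pods, at all times
  selector:
    matchLabels:
      app: web
  template:                 # pod template the controller stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # illustrative image
```

If one of the pods dies, the controller notices the mismatch between actual and desired state and starts a replacement.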
The Kubernetes master maintains the desired state of your cluster; you will commonly see it referred to as the master node. Every cluster has a master node, as well as one or more “worker” nodes. The master includes three critical processes for managing the state of your cluster: kube-apiserver, kube-controller-manager, and kube-scheduler. When you make changes, you’re almost always making them through the master, not on each individual node in a cluster.
Simply put, in Kubernetes terminology, kubectl is a command-line interface (CLI) for managing operations on your Kubernetes clusters. It does so by communicating with the Kubernetes API. You typically run kubectl from your own workstation; the agent that runs on every node and communicates with the master is the kubelet, not kubectl.
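A few common kubectl invocations, shown for illustration against a hypothetical cluster (the resource names are made up; every one of these is translated into calls against the Kubernetes API):

```shell
kubectl get nodes                    # list the nodes in the cluster
kubectl get pods --all-namespaces    # list pods across every namespace
kubectl apply -f deployment.yaml     # declare desired state from a manifest
kubectl describe pod my-pod          # inspect one pod's details and events
kubectl logs my-pod                  # fetch a container's logs
```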
A volume is simply a directory of data; it lives within a pod and can be accessed by any container running in that pod. A volume is the abstraction that lets Kubernetes deal with the ephemeral nature of containers; when a container is retired, the volume (and its data) continues to exist within the pod, still accessible to other containers. It exists as long as its pod exists; once the latter “dies,” so does the volume and its data.
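Here is a sketch of a volume shared between two containers in one pod; `emptyDir` is a real volume type, while the names and images are illustrative. The volume outlives any individual container but disappears with the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod     # illustrative name
spec:
  volumes:
  - name: shared            # the volume belongs to the pod, not a container
    emptyDir: {}            # created empty when the pod starts, deleted with it
  containers:
  - name: writer
    image: busybox:1.36     # illustrative image
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data      # both containers see the same directory
```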
Speaking of ephemerality and data: Persistent volumes deal with storage that needs to exist beyond the lifetime of any particular pod or container, whereas ordinary volumes are tied to the lifetime of the pod that contains them. This becomes particularly important when you’re discussing stateful applications like databases.
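A PersistentVolumeClaim is how a pod asks for storage that outlives it. This sketch (the claim name, size, and the availability of backing storage are assumptions) requests 1 GiB:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data             # illustrative name
spec:
  accessModes:
  - ReadWriteOnce           # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi          # ask the cluster for 1 GiB of persistent storage
```

A pod then mounts the claim as a volume; even if that pod is deleted and rescheduled elsewhere, the data behind the claim persists.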
Certified Kubernetes Administrator training is a good place to start.