Talk to any IT pro for more than a minute and the conversation will probably turn to container orchestration, one of the fastest-growing areas in open source. Kubernetes (pronounced koo-ber-NET-eez, and often abbreviated K8s, pronounced kates) is the go-to tool for managing containerized applications. And it's not just for cloud environments: companies are starting to use the platform to manage on-premises workloads as well.
A key part of Kubernetes is the concept of a cluster, a group of machines that act together to deploy and run containers. Each cluster has a control plane (historically called the master node) that oversees the entire system, and worker nodes that run a container runtime such as Docker or containerd to manage individual containerized applications.
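As a minimal sketch of what a worker node actually runs, a pod spec describes the containers the node's runtime should launch (the names and image here are illustrative, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # hypothetical pod name
  labels:
    app: demo           # label used later to group pods
spec:
  containers:
  - name: web
    image: nginx:1.25   # any OCI image the node's runtime can pull
    ports:
    - containerPort: 80
```

In practice teams rarely create bare pods; they let a higher-level controller such as a Deployment create and replace them.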
Kubernetes lets IT teams automate the deployment of applications and scale them up or down based on user-defined criteria, such as CPU usage. It offers horizontal pod autoscaling to add or remove pod replicas under changing load, and vertical pod autoscaling to adjust a pod's CPU and memory requests as necessary.
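A horizontal pod autoscaler is itself declared as a manifest. The sketch below assumes a Deployment named demo-deployment already exists; the names and thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-deployment   # assumes a Deployment by this name exists
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU tops 70%
```

The controller then grows or shrinks the replica count between the stated bounds as measured CPU utilization crosses the target.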
IT pros can use a graphical dashboard or the command line to control the cluster's operations, and they can set policies for which workers perform tasks or how much computing power each worker should have. They can also set security controls for a whole node, such as the Node authorization mode, which restricts the API requests a kubelet is allowed to make, or role-based access control (RBAC), which grants permissions through roles bound to users and service accounts.
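As a hedged sketch of how RBAC is expressed, a Role lists allowed verbs on resources and a RoleBinding attaches it to a subject (every name below is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader          # hypothetical role name
rules:
- apiGroups: [""]           # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods           # hypothetical binding name
subjects:
- kind: User
  name: jane                # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A user bound this way can read pods in the default namespace but cannot create or delete them.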
Because pods and the containers inside them are ephemeral, Kubernetes helps IT teams establish higher-level abstractions that manage the life cycle of groups of containers and simplify routing and public access. For example, the platform lets IT teams create a Service that routes traffic to a logical group of pods selected by labels.
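The label-based routing described above can be sketched as a Service manifest; it would steer traffic to any pod carrying the matching label, with the names and ports here being illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service     # hypothetical name
spec:
  selector:
    app: demo            # routes to any pod labeled app=demo
  ports:
  - protocol: TCP
    port: 80             # port the Service exposes inside the cluster
    targetPort: 8080     # port the pod's container listens on
```

Because the selector matches labels rather than individual pods, the Service keeps working as pods are replaced or rescheduled.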