Kubernetes architecture explained – an introductory guide to K8s

A simple and concise explanation of the container orchestration technology

DevOps | Updated on January 24, 2022

Author: John Adam, K&C Head of Marketing


A natural starting point for our Kubernetes consulting blog series would be an introduction to Kubernetes architecture itself. At least, you’d think that would be the natural starting point! In fact, this is actually our ‘several-th’ (I couldn’t be bothered counting…) post in the series.

But, better late than never. This fundamental introduction to Kubernetes is based on IBM’s marvellously simple and concise explanation of the container orchestration technology.

What is Kubernetes?

Kubernetes is an orchestration tool that allows us to run and manage container-based workloads. To explain it, we’ll take a high-level look at a reference architecture for managed Kubernetes services, and dive a little deeper into how we would deploy our microservices within a Kubernetes architecture.


A Kubernetes architecture consists of 2 sides – cloud and customer

A Kubernetes architecture can be broken down into two sides:

  1. Cloud-managed side of K8s
  2. Customer-managed side of K8s

Cloud-side – API server and master node

The most important component on the cloud side of a Kubernetes architecture is the K8s master, or master node. The Kubernetes master contains a lot of components, but the most important for a broad overview is the API server.

Kubernetes API server

The Kubernetes API server running on the master is integral to running all of the workloads. It exposes a set of capabilities that allow us to define exactly how we want those workloads to run.
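As a quick illustration (assuming kubectl is already configured against the cluster), we can ask the API server what it exposes:

    # Show the API server endpoint kubectl is talking to
    kubectl cluster-info

    # List the resource types (capabilities) the API server exposes
    kubectl api-resources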

Customer-side – worker nodes and Kubelets

Worker nodes, which also run Kubernetes, are found on the customer-managed side of the architecture. Each worker node contains a Kubelet.

The Kubelet is responsible for running the pods assigned to its worker node and making sure the apps inside them are healthy and running. As you can probably imagine, that means the master node and the Kubelets often work together.
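For instance, because each Kubelet reports its node’s status back to the master, listing the nodes shows the health of the customer-managed side (a sketch, assuming a configured kubectl):

    # The STATUS column reflects what each node's Kubelet is reporting
    kubectl get nodes -o wide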

Why use Kubernetes?

But let’s take a step back to ask the fundamental question: why use Kubernetes?

Microservices

If we have a cloud-native application made up of microservices, these microservices need to communicate with each other over the network. As a simplified example, let’s say we have a front-end and a back-end, and we want to scale those two components out and deploy them to the cluster.

Kubernetes uses YAML to define the resources which are sent to the API server and end up creating the actual application. So, what would a simple YAML for deploying a pod, a small logical unit that allows us to run a simple container in a worker node, look like?

We start with a pod and the image associated with it. Let’s say it’s a container image we’ve already pushed up to Docker Hub, referenced through a registry, with the application named ‘f’ at version 1.0.

And we have labels, which are very important and will be covered in more detail shortly. Labels allow us to define exactly what type of artefact we’ve got here. The label here would say something like ‘the app is f’, written as app: f.
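Pulling that together, a minimal sketch of the pod manifest might look like this, with the image name as a hypothetical placeholder for whatever we pushed up to Docker Hub:

    # pod.yaml – a minimal sketch; 'ourregistry/f:1.0' is a hypothetical image name
    apiVersion: v1
    kind: Pod
    metadata:
      name: f
      labels:
        app: f
    spec:
      containers:
        - name: f
          image: ourregistry/f:1.0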

Once the pod is created, we want to push it through our process into a worker node.

Kubectl

That’s achieved via Kubectl. Using Kubectl, we’ll deploy the simple manifest represented by our pod into one of the worker nodes. The pod manifest is pushed through Kubectl, hits the Kubernetes API running on the K8s master, and will then, in turn, talk to one of the Kubelets on the customer side of our architecture.
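Assuming the pod manifest above is saved as pod.yaml, that step is a single command:

    # Send the pod manifest to the API server on the K8s master
    kubectl apply -f pod.yaml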

The Kubelet will then start the pod up in its worker node. The pod will also be assigned an internal IP address. At this point, we could SSH into any of the worker nodes and use the IP to hit that application.
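We can see that internal IP address once the pod is running (a sketch; the port is an assumption for illustration):

    # The -o wide output includes the pod's internal IP and its worker node
    kubectl get pod f -o wide

    # From inside the cluster, e.g. after SSHing into a worker node,
    # the pod answers on that IP (port 8080 assumed here)
    curl http://<pod-ip>:8080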

That covers deploying a simple application within a Kubernetes architecture. So, let’s take it a step further.

Kubernetes Deployments and Desired State

Kubernetes has an abstraction called deployments that allows us to create something referred to as a ‘desired state’. We can define the number of replicas we want for a pod, and if something were to happen to that pod and it dies, a new one would be created for us.

Let’s go back to the pod we’ve deployed to a worker node and labelled as app: f. We want to create 3 replicas of that pod. Let’s also return to our original manifest on the cloud side. One thing we need to tell Kubernetes is that we no longer want a standalone pod, but rather a template for a pod.

On top of that, there are a few other things we want. We need to define the number of replicas we’d like, e.g. 3. We also have a selector, which we use to tell the deployment to manage any pods with the same label (app: f) as the one we already have running on our worker node.

And finally, we have to define what kind of artefact we’re dealing with – a deployment. Our new manifest will look something like:

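A minimal sketch, reusing the hypothetical image name from before:

    # deployment.yaml – a sketch of the deployment described above
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: f
    spec:
      replicas: 3              # the desired number of pod replicas
      selector:
        matchLabels:
          app: f               # manage any pods carrying this label
      template:                # a template for a pod, rather than a standalone pod
        metadata:
          labels:
            app: f
        spec:
          containers:
            - name: f
              image: ourregistry/f:1.0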

We push the new manifest through Kubectl, and it hits the API server. It’s important to note that a deployment is not an ephemeral kind of object: Kubernetes needs to manage its desired state. So, as long as we have the deployment and haven’t deleted it, Kubernetes will manage it in our master node.

Kubernetes master node

Our master node now creates a deployment and, since we asked for 3 replicas, it will make sure we’ve always got 3 running. Once that deployment is created, it will realise we’ve only got a single replica currently running on our worker nodes, and that we need 2 more.

The master node will schedule deploying the application in 3 replicas wherever our Kubernetes architecture has resources on the customer side. So, it places another 2 in empty worker nodes we still have available.
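We can watch that desired state being enforced (a sketch, assuming the deployment above):

    # Three replicas of app f should now be running across the worker nodes
    kubectl get pods -l app=f

    # Delete one of them and the deployment immediately schedules a replacement
    kubectl delete pod <one-of-the-f-pods>
    kubectl get pods -l app=f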

[Image: Kubernetes architecture sketch 1]

We’ve now created our Kubernetes deployment. If we need to do the same for our application’s back-end, we’ll create a further deployment. In our master node, we’ll add the back-end with the label app: b and, for example, scale it out to 2 replicas. We now have something like:

[Image: Kubernetes architecture sketch 2]
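As a sketch of that step, the back-end could be created with an equivalent manifest or imperatively through Kubectl, with the image name again a hypothetical placeholder:

    # Create the back-end deployment 'b' with 2 replicas
    kubectl create deployment b --image=ourregistry/b:1.0 --replicas=2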

How do services communicate in a Kubernetes architecture?

We now have to start thinking about communication between the services we’ve deployed on the customer side of our K8s architecture. As mentioned, every pod has an IP address. But some might die or need to be updated at some point.

When a pod goes away and comes back, it does so with a different IP address. But if we want to access one of those pods from the back-end, or even as external users, we need an IP address we can rely on. This is a long-standing problem, and service registry and service discovery capabilities were created specifically to deal with it.

Those capabilities come built into Kubernetes. So now we are going to create a service to provide a more stable IP address, letting us access our pods as a single app rather than as individual, separate units.

To do that, we take a step back again and create a service definition around our three pods, which means adding a new section to our YAML manifest. We need to add kind: Service, a selector to match the app label and, finally, a ‘type’ defining how we want to expose things. Here, that will be a cluster IP, so we can access things inside the Kubernetes cluster. That will result in something like:

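A sketch of that service definition (the port numbers are assumptions for illustration):

    # service.yaml – groups the app: f pods behind one stable address
    apiVersion: v1
    kind: Service
    metadata:
      name: f
    spec:
      type: ClusterIP          # internal-only access within the cluster
      selector:
        app: f                 # match the pods carrying this label
      ports:
        - port: 80             # the port the service exposes (assumed)
          targetPort: 8080     # the container port it forwards to (assumed)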

We deploy our YAML again through Kubectl; it hits our master and then transfers over to the customer side to create the abstraction that groups our pods together as a single application. We now have an internal cluster IP we can use to reliably enable communication between our services:

[Image: Kubernetes architecture sketch 3]
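Deploying and inspecting the service follows the same pattern as before:

    kubectl apply -f service.yaml

    # The CLUSTER-IP column shows the stable internal address
    kubectl get service f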

Additionally, the Kubernetes DNS service, usually running by default, will make it even easier for our services to access each other, using just their names like front-end and back-end, or f and b for short.
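For example, from inside any pod in the same namespace, the front-end service can be reached by name alone (assuming the service above is named f and lives in the default namespace):

    # The short name resolves within the same namespace
    curl http://f

    # The fully qualified form works from anywhere in the cluster
    curl http://f.default.svc.cluster.local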

How do we go about exposing our front-end to users?

To progress to the next stage and expose our application’s front-end to end-users, we need to change the type of service we define. What we want is a load balancer. There are other options to achieve this exposure, like node ports, but a load balancer creates an external IP for our cluster.

We then expose that external IP address directly to end-users, who can then access the front-end by directly using that service.
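A sketch of that externally exposed service (the name and ports are again assumptions):

    # Changing the type to LoadBalancer provisions an external IP for the cluster
    apiVersion: v1
    kind: Service
    metadata:
      name: f-external
    spec:
      type: LoadBalancer
      selector:
        app: f
      ports:
        - port: 80
          targetPort: 8080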

[Image: Kubernetes architecture sketch 4]

In conclusion

We’ve covered three major components of a Kubernetes architecture:

  1. Pods – small logical units that run our containers.
  2. Deployments – which create pods from a template and manage them against a desired state.
  3. Services – which provide reliable access to the pods those deployments create.

Those are the three main components that work together with the Kubernetes master and worker nodes to allow us to freely redefine the DevOps workflow for deploying applications into a managed Kubernetes service.

If you could benefit from Kubernetes consultancy or software development resources with Kubernetes experience, we’d love to help. Just get in touch!
