In this installment of our Kubernetes consulting series, we explore the key role of Kubernetes as part of a secure microservices architecture.
First up, what is a microservice? In the early ‘noughties’ (2000s), applications were typically built on a monolithic architecture, but fast forward to the present day and we can see engineers far more often implementing microservices or migrating their monolithic projects to microservices.
How do microservices differ from a monolithic architecture? And how does Kubernetes deal with all this? Let’s find out.
Microservices and monoliths are architectural styles, but not those we are likely to encounter in art history lessons. Here we’re talking about the architecture of cloud-based applications.
Microservices are an approach in which cloud-based applications are built as a set of small services. Each works autonomously but in communication with the others. This is similar to the traditional architectural approach taken to building a Baroque church: different craftsmen work on their own part of the framework, with the autonomously created sections finally coming together as a complete ‘composition’.
Let’s take Instagram as a typical example of a monolithic app. It can be compared to a stone fort: if one stone is wrongly placed, the stability of the whole building could be compromised. Another famous example of monolithic architecture is the Linux kernel.
Microservice architecture transforms everything. You can much more easily change details without risking the stability of the whole structure.
That is not to say that monolithic apps can’t be successful. However, as the number of cloud-based applications grows rapidly, more and more developers are adopting the microservices approach to architecture at the expense of the monolith. The reason is simple convenience: even the slightest change to a monolithic app requires a full rebuild and redeployment to the cloud.
The argument in favor of microservices is that the approach improves both the development experience and the subsequent maintenance of cloud-based applications. If one of the services breaks down, it is less ‘painful’ for the specialist to deal with. For instance, at Facebook, a good example of a huge cloud-based application built on the microservices architectural approach, if Messenger or another feature goes down, the rest of the platform keeps functioning.
Another strength of microservices is that they can be written using different languages and technologies, and by different teams. A microservices approach to cloud-based applications is widely considered to be the future. Keep in mind, however, that there are exceptions to the rule of microservices always being the more manageable approach to building cloud-based applications. Smaller apps with few ‘moving parts’ and interrelated functionalities can still be better suited to a monolithic architecture. The strengths of microservices come into their own within the context of a large project that numerous developers and developer teams are working on. The approach helps manage the working process and the distribution of tasks, giving each team of engineers one particular functionality, or a group of closely related functionalities, to work on.
However, it is important to appreciate that while the microservices approach to building the architecture of cloud-based applications has many advantages, it is not a silver bullet. It cannot resolve every problem, nor can it be applied everywhere. Additionally, while the primary strength of adopting a microservices architecture is the management process, which can be facilitated by Kubernetes (more on that next), the flip side of the coin is that it can be time-intensive.
So what do microservices have to do with Kubernetes? Kubernetes, also referred to by the abbreviation ‘K8s’, is an open-source framework backed by Google and created to orchestrate ‘containers’.
Microservices, Docker, and K8s are a natural fit. Microservices are small independent services. Docker containers isolate those services. K8s orchestrates them, allowing for the deployment of cloud-based applications in seconds while additionally providing automated health checks.
K8s defines how services interact with each other. However, it is not the only tool available for this purpose; other notable examples include Docker Swarm, Mesos, and Nomad. The significant experience we have built up through numerous projects as a Kubernetes consultant has led us to the conclusion that K8s is the strongest of the available tools. Docker Swarm, for example, has declared itself unsuited to major installations of more than 5,000 nodes, while Mesos is appropriate only for very specific purposes.
Kubernetes is container-oriented and, as an additional strength, is the most tested orchestrator currently available. Although supported by Google, as an open-source system K8s has many other champions in the market that contribute to the maintenance and evolution of the technology. This means it is not reliant on any sole actor or company for its survival; companies such as Cisco, IBM, Intel, and Microsoft all contribute to Kubernetes. K8s is additionally very efficient in terms of technical and human resources:
#1. Consistency of operation: there is no need to resort to quick installation solutions.
#2. Equipment efficiency: K8s decreases processor load thanks to its minimalistic approach to the technologies applied.
#3. Developer efficiency: a natural consequence of less equipment is the need for fewer people to service it. Day-to-day, just one or two people are needed to work on and adjust a Kubernetes set-up.
#4. Kubernetes is open source: as open-source software, Kubernetes offers flexibility and choice. K8s can be run on a laptop for personal projects, used for mining cryptocurrencies, or run in a cloud environment. In contrast, the main alternative, AWS ECS (Amazon’s container management service), can run only on Amazon’s cloud.
In addition to the range of benefits already detailed, the most significant value K8s offers is rock-solid security. In the contemporary reality of regular hacker attacks, security must be a default priority for any app, particularly one that must comply with GDPR.
In view of this, Kubernetes offers:
With K8s, it is possible to use separate containers to isolate application services from each other, with the orchestration engine facilitating communication between them. In this way, K8s helps implement the very essence of microservices: containers are fully separated from each other, so if one runs into problems, the others won’t be compromised. This isolation helps contain attacks such as DDoS, supporting data protection and privacy.
The K8s API is the most significant part of the whole security environment, because it has built-in authentication, authorization, and admission controls that filter and regulate every request. The Kubernetes API is the central interface for users, administrators, and applications that communicate with each other; users and services access the API to initiate operations.
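As an illustrative sketch of that authorization layer, the RBAC manifest below defines a Role that can only read Pods in one namespace and binds it to a user. The names `pod-reader`, `read-pods`, and `jane` are our own hypothetical examples, not from any real cluster:

```yaml
# A Role granting read-only access to Pods in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]            # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# A RoleBinding assigning that Role to a single user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                 # hypothetical user authenticated by the API server
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Any request from this user that falls outside the listed verbs and resources — deleting a Pod, reading a Secret — is rejected by the API server before it ever reaches the workload.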
If something does go wrong with your application, the best way to determine the root of the problem is to examine system logs, which K8s helps with. In addition to standard system logs, you can record Kubernetes-specific audit logs that provide insight into the operations a particular user has performed. And if there is any unauthorized access, you can quickly patch the vulnerability.
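For example, a minimal audit policy, passed to the API server via the `--audit-policy-file` flag, might log metadata for every request while capturing full request and response bodies only for changes to Secrets. The exact rule granularity below is our own illustrative choice:

```yaml
# Sketch of an API-server audit policy.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record full request/response bodies for operations on Secrets.
- level: RequestResponse
  resources:
  - group: ""                # core API group
    resources: ["secrets"]
# For everything else, record only who did what, and when.
- level: Metadata
```

Rules are evaluated top to bottom, so the broad `Metadata` catch-all must come last.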
Thanks to complex security features within the K8s environment, the ecosystem allows you to secure your application network to the extent that it can be considered a “cyber-fortress”. Communication between groups of pods and other network endpoints often requires a complex set of network policies.
K8s allows for complex cluster networking that can unite scaled infrastructures and facilitate communication between them. To achieve that, all pods and nodes must be able to communicate over the network without NAT, and each pod must see its own IP as the same IP that others use to reach it.
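To sketch what such a network policy can look like, the manifest below allows ingress to pods labeled `app: api` only from pods labeled `app: frontend`, and only on TCP port 8080. The labels, policy name, and port are illustrative assumptions, not taken from any real deployment:

```yaml
# Restrict who may talk to the "api" pods, and on which port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api               # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080
```

Once a pod is selected by any NetworkPolicy, traffic not explicitly allowed is dropped, which is exactly the default-deny posture a “cyber-fortress” implies. Note that enforcement requires a CNI network plugin that supports NetworkPolicy, such as Calico or Cilium.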
Heptio Ark (since renamed Velero) is a disaster recovery management utility that functions alongside Kubernetes. It allows for easy backups and can restore services via a set of checkpoints with the help of AWS, GCP, and Azure. There are a lot of questions around Heptio Ark backups; for a more detailed explanation, read our dedicated blog post on Heptio, part of our Kubernetes Consultants series, which includes a detailed visual step-by-step explanation.
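As a rough sketch, a backup in the tool’s current Velero incarnation is declared as a custom resource like the one below. The backup name, namespace selection, and retention period are illustrative, and the exact `apiVersion` differs between older Ark and newer Velero releases:

```yaml
# Illustrative Velero Backup resource: snapshot one namespace,
# keep the backup for 30 days.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: nightly-backup       # hypothetical name
  namespace: velero
spec:
  includedNamespaces:
  - production               # hypothetical namespace to back up
  ttl: 720h0m0s              # retention: 30 days
```

The controller uploads the resulting snapshot to the configured object store (S3, GCS, or Azure Blob), from which a restore can later be replayed.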
Consul provides dynamic networking, taking us away from classic host-based systems and moving us to a service-based approach. Beyond these networking changes, there are no static firewalls: Consul moves us to dynamic service segmentation, which provides an entirely new level of security. In addition, Consul is a service discovery tool that provides information on the load of every pod in your infrastructure.
Security is ensured via TLS certificates, service-to-service interaction, and identity-based authorization. Consul can segment the network into different parts, providing each with its own privileges and communication policies without IP-based rules. If you want to add an extra layer of security, then Vault, our next Kubernetes security technology, comes into play.
The interaction between applications and systems can be a point of vulnerability. Dynamically created ‘secrets’ are used to prevent unauthorized access: these secrets are created and exist only at the moments when apps or services actually need them. That makes them a particularly effective security feature, because no human ever needs to know the secrets and passwords, and apps and services expect secrets to expire after a set time. The interaction between apps and services becomes more reliable with Vault, which avoids giving random users root privileges to underlying systems. Vault can also revoke secrets and offers key rolling.
Kubernetes security has come a long way since the project came into being, but there are still pitfalls: piecemeal, standalone implementations of these tools can lead to project bottlenecks, and integrating them efficiently requires experienced specialists.