Back in 2014, Rancher Labs, a software company working on operating-system-level virtualization tools, more commonly known as Linux containers, launched two tools. The first was RancherOS, a Linux operating system stripped down to host containers. The second was Rancher, a platform for managing Docker containers.
All you really need to know about Docker is that it introduced a modern approach to hosting applications and user services in the cloud: portable application containers.
Portability means no dependency on a specific cloud infrastructure (no vendor lock-in), simple migration of applications between clouds, simple deployment, and reduced support and maintenance costs. With a containerized, portable application, you can focus on improving application performance, availability, and other essential features.
But all this would be hard to imagine without Rancher. Imagine deploying a Docker application in the cloud. The application itself is portable, but utility services such as failover load balancers are not. So if you need to migrate to another cloud where this functionality works differently, problems can arise.
The goal of the described technology is to create portable infrastructure services around Docker: elastic block storage (EBS), virtual networks, a failover load balancer, security groups, monitoring, database services, and much more. All of this can be moved between your own servers and the clouds of different providers, even using several regions of a cloud provider at the same time.
-Private networks. The ability to create private SDN networks for each environment, allowing secure communications between containers, hosts, and clouds.
-Load balancing. Built-in elastic load balancer to distribute traffic between containers or services. The load balancing service can work even between different cloud regions.
-Storage management. Support for snapshots and backups of Docker volumes, plus the ability to back up the state of containers and services.
-Service discovery. A distributed DNS discovery service with built-in health monitoring that allows containers to automatically register themselves as services and dynamically find others on the network.
-Resource management. Support for Docker Machine, a tool for provisioning hosts. The ability to monitor host resources and manage the deployment of containers.
-Sharing and user management. The ability to create multiple infrastructure users who collaborate across the service life cycle, and to create separate environments for development, testing, and production while sharing resources.
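To make the service-discovery point concrete, here is a minimal, hypothetical stack definition in the docker-compose format that Rancher 1.x catalogs used. The service and image names are illustrative; the point is that `app` reaches `db` by its service name, resolved through the built-in discovery DNS:

```shell
# Write a minimal two-service stack definition (hypothetical example).
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  app:
    image: nginx:alpine
    environment:
      # "db" is resolved by the built-in DNS discovery service,
      # so no hard-coded IP addresses are needed.
      DATABASE_HOST: db
  db:
    image: postgres:9.6
EOF
```

Because discovery is name-based, the `db` service can be rescheduled to another host without reconfiguring `app`.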
The new version 2.0 is already available, and you can try it out. On the official website, you'll find everything you may need while testing the latest version.
Here is a step-by-step guide for deployment: https://rancher.com/quick-start/.
And here you can download an eBook describing the main operational principles of the new version: http://info.rancher.com/rancher2-technical-architecture.
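For orientation, the quick-start flow on a single Docker host boils down to one command, sketched below. Check the linked guide for the current image tag and any flags required by your setup:

```shell
# Single-node install (sketch): the rancher/rancher image starts the
# management server; the UI becomes available on ports 80/443.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher
```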
From the get-go, you'll see how hard the development team worked to bring the slogan “Kubernetes Everywhere” to life. Rancher has now been completely redesigned to work on Kubernetes instead of Cattle, which, being a "high-level component written in Java," was called the "basic orchestration engine" and "the main loop of the entire system."
In fact, Cattle was not even a framework for container orchestration, but a special layer that managed metadata (resources, relationships, states, etc.) and delegated all the real work to external systems.
Cattle was not bad, but Kubernetes kept gaining popularity, driven by the new requirements Rancher users had and by the ease of interacting with it.
So today, when the rancher/server Docker image starts, a Kubernetes cluster starts as well, and each newly added host becomes part of it. In addition, you can create extra clusters or import existing ones, whether built with kops or hosted by external providers such as Google (GKE).
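Importing an existing cluster works by applying a registration manifest that Rancher generates for you. Roughly (the URL below is a placeholder shown in the import dialog; run the command against the existing cluster):

```shell
# Sketch: register an existing cluster with Rancher by applying the
# generated import manifest (placeholders in angle brackets).
kubectl apply -f https://<RANCHER_SERVER>/v3/import/<TOKEN>.yaml
```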
On top of this whole set of Kubernetes clusters, common layers are implemented for centralized management (authentication and RBAC, provisioning, updates, monitoring, backups) and for interacting with the clusters (web UI, API, CLI):
Here is a more detailed illustration of Rancher 2.0 architecture:
From now on, Rancher will not handle orchestration on its own but will orchestrate the orchestrators (sorry for the tautology). Rancher lost out to Kubernetes in many respects, so rather than fighting it, the Rancher team joined it: instead of reworking their own engine, they wrapped Kubernetes. In this way, they got the full functionality of Kubernetes along with the existing Rancher features.
Along the way, the developers also resolved some long-standing Kubernetes issues:
-Restricting access to particular cluster resources
The first version supported six types of user authentication: GitHub, LDAP, Local, Active Directory, Azure AD, and Shibboleth. The second version keeps only three: Local, GitHub, and Active Directory.
Users and Groups
Now there are two standard roles, Administrator and Standard User, as well as custom roles (RBAC) that you can add and edit.
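Custom Rancher roles are built from standard Kubernetes RBAC primitives. As an illustration (the role name and permissions below are hypothetical, not something Rancher ships), a read-only role could look like this:

```shell
# A hypothetical Kubernetes RBAC role of the kind a custom Rancher
# role maps to: read-only access to pods and services.
cat > readonly-role.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: project-readonly   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
EOF
```

Binding such a role to a user or group then limits them to exactly those verbs on those resources.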
In the new version, you can find the following main components:
-Control Plane Nodes
Rancher can still be deployed on any popular cloud provider, such as AWS, Google Cloud, Azure, and DigitalOcean.
In the new version, the developers have also reworked the application catalogs. Formerly, docker-compose files were used; today, Helm templates are provided instead.
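In practice, a catalog entry is now a Helm chart. A minimal, hypothetical chart skeleton (names and values are illustrative) looks like this:

```shell
# Skeleton of a minimal Helm chart, the format Rancher 2.0 catalogs use
# (chart name and values are illustrative).
mkdir -p mychart/templates
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v1
name: mychart
version: 0.1.0
description: Illustrative catalog entry
EOF
cat > mychart/values.yaml <<'EOF'
replicaCount: 2      # default; overridable when launching from the catalog
image: nginx:alpine
EOF
```

The templates under `templates/` render Kubernetes manifests from these values, which is what lets the catalog UI expose configurable options per app.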
Alerts and Notifications
It should be noted that you can now get notifications for any significant event (high CPU load, low free space or memory, or a service that hasn't responded for a long time).
Such alerts can be sent to Slack, email, PagerDuty, or a webhook.
The developers also added support for shipping cluster logging data, which can be sent to Elasticsearch, Kafka, syslog, or Splunk.
Integrating Rancher with GitHub to create a pipeline is possible too, and a great initiative. A pipeline can be set up to run a series of stages and steps that test your code and deploy it. Stages are executed sequentially, so the next stage will not start until all of the steps within the current stage have finished. Steps, in turn, are executed in parallel within a stage.
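The stage/step semantics can be sketched with a pipeline definition checked into the repository. The stage names, image, and commands below are illustrative, and the exact schema may vary between Rancher versions:

```shell
# Hypothetical pipeline definition: the two steps in "Test" run in
# parallel; the "Build" stage starts only after both have finished.
cat > .rancher-pipeline.yml <<'EOF'
stages:
  - name: Test
    steps:
      - runScriptConfig:
          image: golang:1.10
          shellScript: go vet ./...
      - runScriptConfig:
          image: golang:1.10
          shellScript: go test ./...
  - name: Build
    steps:
      - runScriptConfig:
          image: golang:1.10
          shellScript: go build ./...
EOF
```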
All in all, there are many other improvements in the new version, but the move to Kubernetes is surely the most important one.
According to Rancher Labs, “Kubernetes is going to be a universal standard for infrastructure”. Is it true? You never know. But here at K&C, we have been working with the described technologies for a long time, on projects such as VAIX and a German radiology-as-a-service provider, and we have never been disappointed with them. The result is an infrastructure that is scalable, secure, and independent of where it is deployed.
Reach out to our DevOps team to make the concept of containers work for your business.