How To Launch Kubernetes Federation on Google Cloud Platform

DevOps | Updated on June 1, 2021


In this edition of our Kubernetes consulting series, we’ll take you step by step through launching Kubernetes Federation on Google Cloud Platform.

What is Kubernetes Federation? It lets you combine several Kubernetes clusters and manage them through a single control plane. With Federation we can synchronize resources across all clusters, reduce response times for requests coming from different parts of the world, and achieve high availability by placing the clusters on different continents.


In our example, we will be using Google Kubernetes Engine.
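If you are starting from a fresh project, it can help to point gcloud at the right project and enable the APIs this walkthrough relies on. A minimal sketch, assuming a project ID of my-federation-project (replace it with your own):

# Select the project and enable the Kubernetes Engine and Cloud DNS APIs
$ gcloud config set project my-federation-project
$ gcloud services enable container.googleapis.com dns.googleapis.com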

DNS

Creating zone:

$ gcloud dns managed-zones create federation \
    --description "Kubernetes Federation Zone" \
    --dns-name federation.com

Checking:

$ gcloud dns managed-zones describe federation

Output:

creationTime: '2018-08-28T10:33:49.424Z'
description: Kubernetes Federation Zone
dnsName: federation.com.
id: '8875495119636580191'
kind: dns#managedZone
name: federation
nameServers:
- ns-cloud-e1.googledomains.com.
- ns-cloud-e2.googledomains.com.
- ns-cloud-e3.googledomains.com.
- ns-cloud-e4.googledomains.com.
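You can also list the record sets Cloud DNS created for the zone; the NS and SOA records should match the name servers above:

# List record sets in the new zone (NS and SOA are created automatically)
$ gcloud dns record-sets list --zone federation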

Clusters

Creating a cluster in Asia:

$ gcloud container clusters create asia \
    --zone asia-southeast1-a \
    --scopes "cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite"

Output:

Creating cluster asia...⠹
kubeconfig entry generated for asia.
NAME  LOCATION           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
asia  asia-southeast1-a  1.9.7-gke.6     35.197.139.197  n1-standard-1  1.9.7-gke.6   3          RUNNING

Getting the connection credentials:

$ gcloud container clusters get-credentials asia \
    --zone asia-southeast1-a

Output:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for asia.

Defining user policy:

$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user $(gcloud config get-value account)


Creating a cluster in Europe:

$ gcloud container clusters create europe \
    --zone europe-west2-a \
    --scopes "cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite"

Getting the connection credentials:

$ gcloud container clusters get-credentials europe \
    --zone europe-west2-a

Defining user policy:

$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user $(gcloud config get-value account)

Creating a cluster in USA:

$ gcloud container clusters create america \
    --zone us-central1-a \
    --scopes "cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite"

Getting the connection credentials:

$ gcloud container clusters get-credentials america \
    --zone us-central1-a

Defining user policy:

$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user $(gcloud config get-value account)

Let’s do the same thing again for two more clusters in Europe and Asia:

$ gcloud container clusters create asia-2 \
    --zone asia-east1-a \
    --scopes "cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite"

$ gcloud container clusters get-credentials asia-2 \
    --zone asia-east1-a

$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user $(gcloud config get-value account)

$ gcloud container clusters create europe-2 \
    --zone europe-north1-a \
    --scopes "cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite"

$ gcloud container clusters get-credentials europe-2 \
    --zone europe-north1-a

$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user $(gcloud config get-value account)

Checking after all the actions are completed:

$ gcloud container clusters list

Output:

NAME      LOCATION           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
asia-2    asia-east1-a       1.9.7-gke.6     35.221.221.46   n1-standard-1  1.9.7-gke.6   3          RUNNING
asia      asia-southeast1-a  1.9.7-gke.6     35.197.139.197  n1-standard-1  1.9.7-gke.6   3          RUNNING
europe-2  europe-north1-a    1.9.7-gke.6     35.228.203.204  n1-standard-1  1.9.7-gke.6   3          RUNNING
europe    europe-west2-a     1.9.7-gke.6     35.242.178.241  n1-standard-1  1.9.7-gke.6   3          RUNNING
america   us-central1-a      1.9.7-gke.6     35.188.203.7    n1-standard-1  1.9.7-gke.6   3          RUNNING

Or view the clusters in the GCP console (Kubernetes Engine > Clusters).

Federation

The Federation Control Plane manages the state of all your clusters. It can be hosted inside one of your Kubernetes clusters.

Even if the cluster hosting the Control Plane goes down, the other clusters are independent and keep serving traffic until the Control Plane comes back online, and you can still manage each cluster separately. This means the Control Plane is not a single point of failure for your workloads.

Now let’s see what contexts are used:

$ kubectl config get-contexts

Output:

CURRENT   NAME                                      CLUSTER                                   AUTHINFO                                  NAMESPACE
          gke_federation_asia-east1-a_asia-2        gke_federation_asia-east1-a_asia-2        gke_federation_asia-east1-a_asia-2
          gke_federation_asia-southeast1-a_asia     gke_federation_asia-southeast1-a_asia     gke_federation_asia-southeast1-a_asia
*         gke_federation_europe-north1-a_europe-2   gke_federation_europe-north1-a_europe-2   gke_federation_europe-north1-a_europe-2
          gke_federation_europe-west2-a_europe      gke_federation_europe-west2-a_europe      gke_federation_europe-west2-a_europe
          gke_federation_us-central1-a_america      gke_federation_us-central1-a_america      gke_federation_us-central1-a_america

Kubernetes Federation uses the context name when creating the Federation, and that name must conform to RFC 1123. This means the auto-generated context names need to be renamed, which you can do with the following commands:

$ kubectl config set-context asia \
    --cluster gke_federation_asia-southeast1-a_asia \
    --user gke_federation_asia-southeast1-a_asia

$ kubectl config delete-context gke_federation_asia-southeast1-a_asia

$ kubectl config set-context europe \
    --cluster gke_federation_europe-west2-a_europe \
    --user gke_federation_europe-west2-a_europe

$ kubectl config delete-context gke_federation_europe-west2-a_europe

$ kubectl config set-context america \
    --cluster gke_federation_us-central1-a_america \
    --user gke_federation_us-central1-a_america

$ kubectl config delete-context gke_federation_us-central1-a_america

$ kubectl config set-context asia-2 \
    --cluster gke_federation_asia-east1-a_asia-2 \
    --user gke_federation_asia-east1-a_asia-2

$ kubectl config delete-context gke_federation_asia-east1-a_asia-2

$ kubectl config set-context europe-2 \
    --cluster gke_federation_europe-north1-a_europe-2 \
    --user gke_federation_europe-north1-a_europe-2

$ kubectl config delete-context gke_federation_europe-north1-a_europe-2

Checking context:

$ kubectl config get-contexts

Output:

<span class="pln">CURRENT   NAME       CLUSTER                                           AUTHINFO                                          NAMESPACE
          america    gke_federation_us</span><span class="pun">-</span><span class="pln">central1</span><span class="pun">-</span><span class="pln">a_america      gke_federation_us</span><span class="pun">-</span><span class="pln">central1</span><span class="pun">-</span><span class="pln">a_america
          asia       gke_federation_asia</span><span class="pun">-</span><span class="pln">southeast1</span><span class="pun">-</span><span class="pln">a_asia     gke_federation_asia</span><span class="pun">-</span><span class="pln">southeast1</span><span class="pun">-</span><span class="pln">a_asia
          asia</span><span class="pun">-</span><span class="lit">2</span><span class="pln">     gke_federation_asia</span><span class="pun">-</span><span class="pln">east1</span><span class="pun">-</span><span class="pln">a_asia</span><span class="pun">-</span><span class="lit">2</span><span class="pln">        gke_federation_asia</span><span class="pun">-</span><span class="pln">east1</span><span class="pun">-</span><span class="pln">a_asia</span><span class="pun">-</span><span class="lit">2</span><span class="pln">
          europe     gke_federation_europe</span><span class="pun">-</span><span class="pln">west2</span><span class="pun">-</span><span class="pln">a_europe      gke_federation_europe</span><span class="pun">-</span><span class="pln">west2</span><span class="pun">-</span><span class="pln">a_europe
          europe</span><span class="pun">-</span><span class="lit">2</span><span class="pln">   gke_federation_europe</span><span class="pun">-</span><span class="pln">north1</span><span class="pun">-</span><span class="pln">a_europe</span><span class="pun">-</span><span class="lit">2</span><span class="pln">   gke_federation_europe</span><span class="pun">-</span><span class="pln">north1</span><span class="pun">-</span><span class="pln">a_europe</span><span class="pun">-</span><span class="lit">2</span>

To create the Federation we will use kubefed. The binary only runs on Linux, so it won’t start on a Mac right away, but we can work around that with Docker.

For authorization to Kubernetes on Google Cloud we use the Google Cloud SDK. Here is the relevant entry in ~/.kube/config:

- name: gke_krusche-federation_asia-southeast1-a_asia
  user:
    auth-provider:
      config:
        access-token: ya29.GlwHBt-fKASzD91Uxp-mtfbMMHz94w
        cmd-args: config config-helper --format=json
        cmd-path: /Users/roman/work/google-cloud-sdk/bin/gcloud
        expiry: 2018-08-28 15:03:09
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

In my case, the binary file is located in my home directory, and Kubefed uses the same configuration files. Therefore, we will mount the entire home directory in the Docker container:

Dockerfile:

FROM centos:7
 
COPY bin/kubefed /usr/local/bin
COPY repo/kubernetes.repo /etc/yum.repos.d/
RUN mkdir -p /Users/roman \
    && yum install -y kubectl
 
ENV HOME /Users/roman
WORKDIR /Users/roman
 
ENTRYPOINT ["kubefed"]

Building Docker image:

$ docker build --no-cache --rm -t k8s/kubefed .

Defining alias:

alias kubefed='docker run -v "$HOME":/Users/roman k8s/kubefed'
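A quick sanity check that the alias works: this simply runs kubefed inside the container against the mounted kubeconfig (assuming the kubefed version subcommand behaves like kubectl’s):

# Verify the containerised kubefed runs and can read the mounted home directory
$ kubefed version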

Initializing Federation:

$ kubefed init kfed \
    --host-cluster-context=america \
    --dns-zone-name="federation.com." \
    --dns-provider="google-clouddns"

Output:

Creating a namespace federation-system for federation system components... done
Creating federation control plane service............. done
Creating federation control plane objects (credentials, persistent volume claim)... done
Creating federation component deployments... done
Updating kubeconfig... done
Waiting for federation control plane to come up................. done
Federation API server is running at: 104.154.131.222
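You can also check that the control plane components came up in the host cluster; the federation API server and controller manager pods should be running in the federation-system namespace created above:

# Control plane pods live in the host cluster (america), in federation-system
$ kubectl --context=america get pods -n federation-system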

Connecting clusters to the Federation Control Plane:

$ kubefed --context=kfed join asia \
    --cluster-context=asia \
    --host-cluster-context=america

$ kubefed --context=kfed join europe \
    --cluster-context=europe \
    --host-cluster-context=america

$ kubefed --context=kfed join america \
    --cluster-context=america \
    --host-cluster-context=america

$ kubefed --context=kfed join asia-2 \
    --cluster-context=asia-2 \
    --host-cluster-context=america

$ kubefed --context=kfed join europe-2 \
    --cluster-context=europe-2 \
    --host-cluster-context=america

Output:

cluster "asia" created
cluster "europe" created
cluster "america" created
cluster "asia-2" created
cluster "europe-2" created

Checking:

$ kubectl --context kfed get cluster

Output:

NAME       AGE
america    58s
asia       2m
asia-2     46s
europe     1m
europe-2   36s

Creating default namespace:

$ kubectl --context=kfed create ns default
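The federated API server starts with no namespaces, which is why default has to be created explicitly. You can confirm it is now visible through the federation context:

# The default namespace should now be listed by the federation API server
$ kubectl --context=kfed get ns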

Setting up and running the application

Creating a global static IP address:

$ gcloud compute addresses create ingress --global
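It is handy to note the address that was reserved, since this is the IP the federated Ingress (or any manually created load balancer) will answer on:

# Show the reserved global IP address
$ gcloud compute addresses describe ingress --global --format="value(address)"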

Launching NGINX:

$ kubectl --context=kfed create deployment nginx --image=nginx:stable \
    && kubectl --context=kfed scale deployment nginx --replicas=12

Checking:
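A quick way to verify the rollout is to query the deployment through the federation context and then look at the pods in one of the member clusters, for example:

# Federated view: 12 replicas spread across the five clusters
$ kubectl --context=kfed get deployment nginx

# Per-cluster view (repeat for the other contexts)
$ kubectl --context=asia get pods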

Creating NGINX Service:

$ kubectl --context=kfed create service nodeport nginx \
    --tcp=80:80 --node-port=30036
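The service, like the deployment, is a federated object, so checking it through the federation context should show the NodePort that every cluster exposes:

# Confirm the federated service and its NodePort
$ kubectl --context=kfed get service nginx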

Creating file ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80

Let’s deploy ingress:

$ kubectl --context=kfed create -f ingress.yaml
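Once created, you can watch the ingress status through the federation context; it can take several minutes for the Google Cloud load balancer behind it to be provisioned and for an address to appear:

# Wait for an external address to be assigned to the federated ingress
$ kubectl --context=kfed get ingress nginx
$ kubectl --context=kfed describe ingress nginx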

Ingress may not be created due to bugs 242 and 245.

In that case, we’ll create the balancer manually:

If everything’s ok, our NGINX should open.

Now let’s write a simple web page with a world map to check which cluster responds. We could look up the container’s external IP, but all of Google’s IP ranges are registered in the US, so instead we’ll derive the region from the node name, which contains the cluster name.

Creating location.php:

<?php
// MY_NODE_NAME is injected via the Downward API (see deployment.yaml below)
$a = getenv('MY_NODE_NAME');
if (strpos($a, 'asia') !== false) {
  $continent = 'Asia';
  $image = 'asia.png';
} elseif (strpos($a, 'europe') !== false) {
  $continent = 'Europe';
  $image = 'europe.png';
} elseif (strpos($a, 'america') !== false) {
  $continent = 'North America';
  $image = 'america.png';
}
?>

Dockerfile:

FROM php:7.0-apache-stretch
ADD data /data
RUN cp -R /data/* /var/www/html \
    && chown www-data -R /var/www/html/

Building image:

$ docker build --no-cache --rm -t wacken/location:nodename .
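The deployment below pulls the image by name, so it needs to be available in a registry the clusters can reach; wacken is the author’s Docker Hub namespace here, so substitute your own:

# Log in and push the image so all five clusters can pull it
$ docker login
$ docker push wacken/location:nodename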

Creating deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 20
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        name: nginx
        image: "wacken/location:nodename"
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80

Applying:

$ kubectl --context=kfed apply -f deployment.yaml

Checking:
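As before, you can confirm through the federation context that the deployment was updated, and see how the pods are spread across the member clusters:

# Federated deployment status
$ kubectl --context=kfed get deployment nginx

# Pod placement per cluster
$ for c in asia asia-2 europe europe-2 america; do echo "== $c"; kubectl --context=$c get pods -o wide; done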

Now let’s open our balancer:

That’s it! Hurray!

So now we know how to set up Kubernetes Federation to reduce response times and ensure high availability of services. For more on Kubernetes setup, you can also check out our step-by-step guide How to Setup Kubernetes cluster on AWS.

If your organisation could benefit from Kubernetes consultancy or flexible DevOps teams please don’t hesitate to get in touch. K&C is one of Munich and Germany’s most trusted IT services providers with over 20 years of experience. We work with some of Europe’s best known brands, exciting start-ups and ambitious SMEs and would love to tell you more and hear about your next project!
