Prometheus Operator – Installing Prometheus Monitoring Within The Kubernetes Environment

Your complete step-by-step guide to setting up Kubernetes monitoring with Prometheus Operator

Updated on July 27, 2021


In this post, part of our Kubernetes consulting series, we will provide an overview of, and a step-by-step setup guide for, the open-source Prometheus Operator software. Prometheus Operator is used to integrate the Prometheus monitoring system within a Kubernetes environment.

Operators are a new class of software introduced in 2016 by CoreOS – recently acquired by Red Hat. CoreOS is the company behind Tectonic, the commercial Kubernetes distribution platform that brings the CoreOS stack together with Kubernetes to provide companies with a Google-esque infrastructure in any cloud or on-premise/bare-metal environment.


CoreOS created Operators as a class of software that operates other software, injecting human-sourced operational knowledge into code. The Prometheus Operator optimises the running of Prometheus on top of Kubernetes while retaining Kubernetes-native configuration options.

Operator software incorporates application domain knowledge to automate regular tasks, building upon the standard Kubernetes resource and controller concepts. At the heart of operator software is that it removes the burden of manual deployment and lifecycle management, leaving DevOps engineers to focus on optimising configuration.

Prometheus itself is closely related to Kubernetes, with both under the governance of the Cloud Native Computing Foundation (CNCF). Kubernetes evolved as an open-source progression from Google’s Borg cluster system and is also originally a Google release. Prometheus shares much of the same design concept blueprint of Borgmon – the monitoring system Google developed to work within Borg. That shared ancestry is apparent from a look under the Kubernetes hood – which reveals the latter exports its internal metrics in the same format that is native to Prometheus.


Prometheus Operator for Kubernetes

The mission of the Prometheus Operator is to make running Prometheus on top of Kubernetes as easy as possible while preserving configurability as well as making the configuration Kubernetes native. (source)

The Operator saves the user (administrator) from hand-editing configuration files: it automatically configures Prometheus based on custom resources defined in YAML.

How does it work?

Prometheus Operator Architecture (source)

The Prometheus Operator: Optimising Prometheus and Kubernetes Integration

Installing Prometheus Operator takes just a single command line. That simple command enables DevOps engineers to create, configure and manage Prometheus monitoring instances with stripped-back, Kubernetes-native declarative configuration.

Prometheus Operator then offers a range of key features:

Create/Destroy

Using the Operator, a Prometheus instance can easily be launched for a Kubernetes namespace, a specific team or a particular application.

Easy Configuration

Prometheus fundamentals such as versions, persistence, retention policies and replicas are configured from a native Kubernetes resource.
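As an illustrative sketch (the name, namespace and values here are assumptions, not chart defaults), such a Prometheus custom resource might look like:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example            # hypothetical name
  namespace: monitoring
spec:
  replicas: 2              # number of Prometheus replicas
  retention: 15d           # how long to keep metrics
  serviceMonitorSelector:
    matchLabels:
      release: monitoring  # pick up ServiceMonitors carrying this label
```

The Operator watches this resource and reconciles the running Prometheus instances to match it.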

Target Services via Labels

Monitoring target configurations are automatically generated based on well-known Kubernetes label queries, so developers do not need to learn a Prometheus-specific configuration language.
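For example, a fragment of a ServiceMonitor spec (with a hypothetical app label) shows how targets are selected with an ordinary Kubernetes label query:

```yaml
# Fragment of a ServiceMonitor spec: every Service carrying
# the label app=my-app (hypothetical) becomes a scrape target.
spec:
  selector:
    matchLabels:
      app: my-app
```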

Custom Resource Definitions (CRD) in Prometheus Operator

Prometheus Operator uses CRD (Custom Resource Definitions) to generate configuration files and identify Prometheus resources.

  • alertmanagers – defines installation for Alertmanager
  • podmonitors – determines which pods should be monitored
  • prometheuses – defines installation for Prometheus
  • prometheusrules – defines recording and alerting rules loaded by Prometheus
  • servicemonitors – determines which services should be monitored
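As a sketch, a prometheusrules resource defining a single alert might look like this (the name, group name and threshold are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules      # hypothetical name
  labels:
    release: monitoring    # label the operator's rule selector looks for
spec:
  groups:
  - name: example.rules
    rules:
    - alert: InstanceDown
      expr: up == 0        # target failed its last scrape
      for: 5m              # must be failing for 5 minutes before firing
```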

The operator watches Prometheus resources and generates StatefulSets (for Prometheus and Alertmanager) and configuration files (prometheus.yaml, alertmanager.yaml).

The operator also monitors resources from ServiceMonitors, PodMonitors and ConfigMaps, generating prometheus.yaml based on them.
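Roughly, the scrape configuration the operator generates for a ServiceMonitor resembles the following simplified fragment (a sketch, not the operator's exact output):

```yaml
scrape_configs:
- job_name: ingress/traefik/0
  kubernetes_sd_configs:
  - role: endpoints              # discover endpoints via the Kubernetes API
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    regex: traefik-metrics
    action: keep                 # keep only endpoints of matching services
```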

Prometheus Pod

Three containers are launched in the Prometheus pod:

  • Prometheus
  • prometheus-config-reloader – a sidecar that watches for changes in prometheus.yaml and reloads the Prometheus configuration via an HTTP request
  • rules-configmap-reloader – watches for changes in the alert rules files and likewise reloads Prometheus

Service Monitors Processing Principle:

  • Prometheus Operator subscribes to ServiceMonitor resource events, watching for their addition, removal or modification.
  • Based on the ServiceMonitors, the operator generates part of the configuration file and stores it as a Secret in Kubernetes.
  • From the Kubernetes Secret, the configuration gets into the pod.
  • prometheus-config-reloader detects the changes and reloads Prometheus via an HTTP request.
  • After the reload, Prometheus collects the new metrics according to the updated configuration.

Alertmanager Pod

As with Prometheus, two containers run in one pod:

  • alertmanager
  • config-reloader – a sidecar that watches for configuration changes and reloads Alertmanager via an HTTP request

Grafana Pod

  • Grafana
  • grafana-sc-dashboard – a sidecar that subscribes to ConfigMap resources and generates JSON dashboards for Grafana based on them
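For illustration, a ConfigMap that the sidecar would pick up might look like this (the name and dashboard content are hypothetical; grafana_dashboard: "1" is the chart's default sidecar label):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard         # hypothetical name
  labels:
    grafana_dashboard: "1"   # label the sidecar watches for
data:
  my-dashboard.json: |-
    { "title": "My dashboard", "panels": [] }
```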

How to Install and Configure Prometheus Operator: A Step-by-Step Guide

Prometheus is installed using helm.

We clone the helm/charts repository and update the chart dependencies:

git clone https://github.com/helm/charts.git
cd charts/stable/prometheus-operator
helm dependency update

Now install Prometheus:

helm install --name prometheus --namespace monitoring ./

You should now see:

kubectl get pod -n monitoring
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0   2/2     Running   0          1m
prometheus-grafana-656769c888-445wm                      2/2     Running   0          1m
prometheus-kube-state-metrics-57f9c94c59-sg5bn           1/1     Running   0          1m
prometheus-prometheus-oper-operator-6844ff8f64-rzwlf     2/2     Running   0          1m
prometheus-prometheus-prometheus-oper-prometheus-0       3/3     Running   1          1m

After all the pods have started, we can look at the Prometheus web UI:

kubectl port-forward -n monitoring prometheus-prometheus-prometheus-oper-prometheus-0 9090:9090

Open http://localhost:9090 in the browser. The services configured by default should be listed under ‘Service Discovery’:


To see which ServiceMonitors were created by default, run the command:

kubectl get servicemonitors -n monitoring

NAME                                                 AGE
prometheus-prometheus-oper-alertmanager              19d
prometheus-prometheus-oper-apiserver                 19d
prometheus-prometheus-oper-coredns                   19d
prometheus-prometheus-oper-grafana                   19d
prometheus-prometheus-oper-kube-controller-manager   19d
prometheus-prometheus-oper-kube-etcd                 19d
prometheus-prometheus-oper-kube-proxy                19d
prometheus-prometheus-oper-kube-scheduler            19d
prometheus-prometheus-oper-kube-state-metrics        19d
prometheus-prometheus-oper-kubelet                   19d
prometheus-prometheus-oper-operator                  19d
prometheus-prometheus-oper-prometheus                19d

We saw the same thing in the web UI.

Now let’s add our own metrics to Prometheus. As an example we will use Traefik. Create a traefik-deployment.yaml file and apply it to Kubernetes:

vi traefik-deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik
  namespace: ingress
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
      - image: traefik:v1.7.11-alpine
        name: traefik-ingress-lb
        args:
        - --api
        - --api.statistics
        - --kubernetes
        - --logLevel=INFO
        - --configfile=/config/traefik.toml
        ports:
        - containerPort: 8080
          name: metrics
kubectl apply -f traefik-deployment.yaml

Check if there are metrics:

kubectl port-forward -n ingress traefik-hjbjk 8080:8080

Open http://localhost:8080/metrics in the browser. You should see the Traefik metrics:

Now create the traefik-metrics-service.yaml service file for the metrics:

vi traefik-metrics-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-metrics
  namespace: ingress
  labels:
    app: traefik-metrics
    release: monitoring
spec:
  selector:
    app: traefik
  ports:
  - name: metrics
    port: 8080
    targetPort: 8080
  type: ClusterIP

Deploy it to our Kubernetes:

kubectl apply -f traefik-metrics-service.yaml

We check our service:

kubectl port-forward -n ingress svc/traefik-metrics 8080:8080

At http://localhost:8080/metrics you should see the same metrics as with the pod port-forward above.

Now deploy a ServiceMonitor. Prometheus discovers ServiceMonitors by label, so you need to know which label it is looking for. To find out:

kubectl get prometheus -n monitoring -oyaml

We are looking for the serviceMonitorSelector block:

serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
  matchLabels:
    release: monitoring

In our case, this is release: monitoring. Knowing the label, we create the traefik-servicemonitor.yaml file:

vi traefik-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: traefik
  labels:
    release: monitoring
    app: traefik-metrics
spec:
  endpoints:
  - port: metrics
    path: '/metrics'
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      app: traefik-metrics
      release: monitoring
kubectl apply -f traefik-servicemonitor.yaml

A new target should appear in our Prometheus, check:

kubectl port-forward -n monitoring prometheus-prometheus-prometheus-oper-prometheus-0 9090:9090

Open http://localhost:9090 in the browser:

Prometheus is now successfully scraping the metrics, so we can move on to creating a dashboard for Grafana.

Download the Traefik dashboard JSON and wrap it in a ConfigMap:

vi traefik-dashboard.yaml
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) "traefik" | trunc 63 | trimSuffix "-" }}
  labels:
    {{- if $.Values.grafana.sidecar.dashboards.label }}
    {{ $.Values.grafana.sidecar.dashboards.label }}: "1"
    {{- end }}
    app: {{ template "prometheus-operator.name" $ }}-grafana
{{ include "prometheus-operator.labels" $ | indent 4 }}
data:
  traefik.json: |-
    JSON Dashboard starts
    JSON Dashboard ends
{{- end }}

Paste the indented JSON of our dashboard between the JSON Dashboard starts … JSON Dashboard ends markers. In the JSON itself, it is important to escape Grafana template expressions like {{instance}}, otherwise Helm will try to render them as template directives.
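One way to do the escaping (a sketch using sed; the file names and sample line are hypothetical) is to replace every opening {{ with Helm's literal-string form {{"{{"}}, which Helm renders back to a literal {{:

```shell
# Sample line as it would appear in a Grafana dashboard JSON (hypothetical)
echo '"legendFormat": "{{instance}}"' > traefik.json

# Escape every opening Go-template delimiter so Helm renders it literally
sed 's/{{/{{"{{"}}/g' traefik.json > traefik.escaped.json

cat traefik.escaped.json
# prints "legendFormat": "{{"{{"}}instance}}"
```

When Helm processes the chart, {{"{{"}} is rendered as a literal {{, so Grafana receives the original {{instance}} expression.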

We now put our file into prometheus-operator/templates/grafana/dashboards and apply:

helm upgrade prometheus ./

The dashboard should now appear in our Grafana.

Prometheus Operator Is Under Heavy Development

These are the basic principles of the Prometheus Operator. Note that the Prometheus Operator is under heavy development, which could mean the step-by-step instructions provided here quickly go out of date. We will update this guide periodically but please also refer to the CoreOS user guide and GitHub project for the latest information.

DevOps Consulting From K&C

Prometheus Operator monitoring of infrastructure to forecast outages and errors and detect security threats is just one ‘spoke’ of K&C’s Kubernetes and DevOps Consulting & Development wheel. Whatever your needs, from expertise on a specific tool or platform to full dedicated teams of DevOps experts, we are here to advise and serve. Please do Get in Touch to explain your specific needs and situation and we’ll be delighted to help.

K&C – Creating Beautiful Technology Solutions For 20+ Years. Can We Be Your Competitive Edge?

Drop us a line to discuss your needs or next project