Configure Prometheus and Grafana
This document shows how you can monitor your Portworx Backup cluster with Prometheus and Grafana.
Prerequisites
- A Portworx Backup cluster
- kubectl access to your Portworx Backup cluster
This topic explains how to deploy the monitoring stack in the Portworx Backup namespace (px-backup). If Portworx Backup is deployed in a different namespace, modify the namespace in the following specs wherever required.
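Before you begin, you can optionally confirm that kubectl can reach the Portworx Backup namespace. The commands below assume the default px-backup namespace:

kubectl get namespace px-backup
kubectl -n px-backup get pods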
Install and configure Prometheus
1. (Optional) Enter the following combined spec and kubectl command to install the Prometheus Operator:

Note: Skip this step if you have not configured your own Prometheus stack in Portworx Backup version 2.7.0 and above.

kubectl apply -f - <<'_EOF'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-operator
  namespace: px-backup
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
- kind: ServiceAccount
  name: prometheus-operator
  namespace: px-backup
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-operator
  namespace: px-backup
rules:
- apiGroups:
  - extensions
  resources:
  - thirdpartyresources
  verbs: ["*"]
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs: ["*"]
- apiGroups:
  - monitoring.coreos.com
  resources:
  - alertmanagers
  - prometheuses
  - prometheuses/finalizers
  - servicemonitors
  - prometheusrules
  - podmonitors
  - thanosrulers
  - alertmanagerconfigs
  - probes
  verbs: ["*"]
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - pods
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources:
  - services
  - endpoints
  verbs: ["get", "create", "update"]
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - namespaces
  verbs: ["list", "watch", "get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-operator
  namespace: px-backup
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: prometheus-operator
  name: prometheus-operator
  namespace: px-backup
spec:
  selector:
    matchLabels:
      k8s-app: prometheus-operator
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: prometheus-operator
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --prometheus-config-reloader=docker.io/portworx/prometheus-config-reloader:v0.56.3
        - --namespaces=px-backup
        name: prometheus-operator
        image: docker.io/portworx/prometheus-operator:v0.56.3
        ports:
        - containerPort: 8080
          name: http
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          runAsNonRoot: true
          runAsUser: 65534
      serviceAccountName: prometheus-operator
_EOF
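If you installed the operator, you can optionally verify that its Deployment has rolled out before you continue. The names and labels below match the spec above:

kubectl -n px-backup rollout status deployment/prometheus-operator
kubectl -n px-backup get pods -l k8s-app=prometheus-operator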
2. To grant Prometheus access to the metrics API, create the ClusterRole, ClusterRoleBinding, Service, and ServiceAccount Kubernetes objects:

kubectl apply -f - <<'_EOF'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: px-backup-prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  - /federate
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: px-backup-prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: px-backup-prometheus
subjects:
- kind: ServiceAccount
  name: px-backup-prometheus
  namespace: px-backup
---
apiVersion: v1
kind: Service
metadata:
  name: px-backup-prometheus
  namespace: px-backup
spec:
  type: ClusterIP
  ports:
  - name: web
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    prometheus: px-backup-prometheus
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: px-backup-prometheus
  namespace: px-backup
_EOF
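You can optionally confirm that the RBAC and service objects were created. The object names below are the ones defined in the spec above:

kubectl get clusterrole px-backup-prometheus
kubectl get clusterrolebinding px-backup-prometheus
kubectl -n px-backup get serviceaccount px-backup-prometheus
kubectl -n px-backup get service px-backup-prometheus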
3. To specify the monitoring rules for Portworx Backup, create a ServiceMonitor object by entering the following combined spec and kubectl command:

kubectl apply -f - <<'_EOF'
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  namespace: px-backup
  name: px-backup-prometheus-sm
  labels:
    name: px-backup-prometheus-sm
spec:
  selector:
    matchLabels:
      app: px-backup
  namespaceSelector:
    any: true
  endpoints:
  - port: rest-api
    targetPort: 10001
_EOF
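This ServiceMonitor selects services that carry the app: px-backup label. As a quick check, you can list the ServiceMonitor and confirm that the Portworx Backup service carries the expected label (this assumes the Portworx Backup install labels its service with app: px-backup, as the selector above expects):

kubectl -n px-backup get servicemonitor px-backup-prometheus-sm
kubectl -n px-backup get svc -l app=px-backup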
4. Apply the following Prometheus spec for Portworx Backup metrics:

kubectl apply -f - <<'_EOF'
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: px-backup-prometheus
  namespace: px-backup
spec:
  replicas: 2
  logLevel: debug
  serviceAccountName: px-backup-prometheus
  serviceMonitorSelector:
    matchLabels:
      name: px-backup-prometheus-sm
_EOF
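Once the Prometheus Operator reconciles this object, it creates the Prometheus pods and a prometheus-operated service. You can optionally verify the rollout and inspect the scrape targets through a temporary port-forward; the pod label below matches the selector used by the px-backup-prometheus service created earlier:

kubectl -n px-backup get prometheus px-backup-prometheus
kubectl -n px-backup get pods -l prometheus=px-backup-prometheus
kubectl -n px-backup port-forward svc/prometheus-operated 9090

Then open http://localhost:9090/targets and confirm that the Portworx Backup endpoint is listed as UP.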
Install and configure Grafana
1. Create a storage class for Grafana, and create persistent volume claims named grafana-data, grafana-dashboard, grafana-source-config, and grafana-extensions:

kubectl apply -f - <<'_EOF'
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-grafana-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  priority_io: "high"
allowVolumeExpansion: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-data
  namespace: px-backup
  annotations:
    volume.beta.kubernetes.io/storage-class: px-grafana-sc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-dashboard
  namespace: px-backup
  annotations:
    volume.beta.kubernetes.io/storage-class: px-grafana-sc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-source-config
  namespace: px-backup
  annotations:
    volume.beta.kubernetes.io/storage-class: px-grafana-sc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-extensions
  namespace: px-backup
  annotations:
    volume.beta.kubernetes.io/storage-class: px-grafana-sc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
_EOF

Note: In this storage class:
- The provisioner parameter is set to kubernetes.io/portworx-volume. For details about the Portworx-specific parameters, refer to the Portworx Volume section of the Kubernetes website.
- Three replicas of each volume will be created.
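You can optionally confirm that the storage class exists and that all four PVCs are bound before deploying Grafana:

kubectl get storageclass px-grafana-sc
kubectl -n px-backup get pvc grafana-data grafana-dashboard grafana-source-config grafana-extensions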
2. Enter the following command to install Grafana:

Note: If your cluster is on a cloud provider, follow the instructions in Step 3 to expose the Grafana service through a NodePort or a load balancer.

kubectl apply -n px-backup -f - <<'_EOF'
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  type: ClusterIP
  ports:
  - port: 3000
  selector:
    app: grafana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 2000
      containers:
      - image: docker.io/portworx/grafana:7.5.16
        name: grafana
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
        volumeMounts:
        - name: grafana
          mountPath: /etc/grafana/provisioning/dashboard
          readOnly: false
        - name: grafana-dash
          mountPath: /var/lib/grafana/dashboards
          readOnly: false
        - name: grafana-source-cfg
          mountPath: /etc/grafana/provisioning/datasources
          readOnly: false
        - name: grafana-plugins
          mountPath: /var/lib/grafana/plugins
          readOnly: false
      volumes:
      - name: grafana
        persistentVolumeClaim:
          claimName: grafana-data
      - name: grafana-dash
        persistentVolumeClaim:
          claimName: grafana-dashboard
      - name: grafana-source-cfg
        persistentVolumeClaim:
          claimName: grafana-source-config
      - name: grafana-plugins
        persistentVolumeClaim:
          claimName: grafana-extensions
_EOF

Note: In this deployment, the volumes section references the PVCs that you created in the previous step.
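You can optionally verify that the Grafana deployment is running and that its service was created:

kubectl -n px-backup rollout status deployment/grafana
kubectl -n px-backup get pods -l app=grafana
kubectl -n px-backup get svc grafana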
3. Enter the following kubectl port-forward command to forward all connections made to localhost:3000 to svc/grafana:3000:

kubectl port-forward svc/grafana --namespace px-backup --address 0.0.0.0 3000

Alternatively, if your cluster is on a cloud provider, perform one of the following to expose Grafana through a NodePort or a load balancer:
- To expose the service through a NodePort, change the following parameter in Step 2 of Install and configure Grafana (a patch-based alternative is sketched after this list):

type: NodePort

Note: The cluster node must be accessible using an external IP.

- To expose the service through a load balancer, create an Ingress rule:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook2
  namespace: px-backup
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: px-backup-prometheus
          servicePort: 9090
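For the NodePort option above, patching the existing service is an alternative to editing and re-applying the Step 2 spec. A minimal sketch, using the service name defined in Step 2:

kubectl -n px-backup patch svc grafana -p '{"spec":{"type":"NodePort"}}'
kubectl -n px-backup get svc grafana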
4. Follow the instructions in the Prometheus documentation to create a Prometheus data source named px-backup. Ensure that you change the HTTP URL from http://localhost:9090/ to http://prometheus-operated:9090/.
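Because the Step 2 deployment mounts a volume at /etc/grafana/provisioning/datasources, you can also provision this data source from a file placed in that directory instead of creating it in the UI. A minimal sketch (the file name and its contents are illustrative, not part of the default install):

apiVersion: 1
datasources:
- name: px-backup
  type: prometheus
  access: proxy
  url: http://prometheus-operated:9090/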
5. Follow the instructions in the Grafana documentation to import the Portworx Backup dashboard JSON file.
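After importing the dashboard, you can confirm that Grafana is healthy and that the data source is registered through the port-forward from Step 3. Replace admin:admin with your own Grafana credentials if you changed the defaults:

curl -s http://localhost:3000/api/health
curl -s -u admin:admin http://localhost:3000/api/datasources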