Version: 2.7

Configure your own Prometheus

This topic describes how to configure your own Prometheus in OpenShift Container Platform (OCP), vanilla Kubernetes, and Rancher application cluster environments. With this configuration, you can monitor clusters, backups, restores, and other key information through the Portworx Backup dashboard.

OpenShift Container Platform

Prerequisites

OpenShift clusters ship with their own monitoring stack, so Portworx by PureStorage does not recommend installing the Prometheus provided by Portworx Backup. Instead, Portworx Backup lets you Bring Your Own Prometheus (BYOP): you mount the OCP Prometheus stack and pass the Prometheus and AlertManager endpoints of the OCP cluster to Portworx Backup during installation, and Portworx Backup fetches metrics from those endpoints.

How to deploy OCP user monitoring

  1. Enable user monitoring and Prometheus:

    The cluster monitoring stack, deployed in the openshift-monitoring namespace by default, monitors only the OpenShift components. OCP recommends that you enable the user workload monitoring stack by creating the following ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true

    Apply this ConfigMap from the OpenShift CLI, for example with `oc apply -f`.

    This triggers the OpenShift controllers to create a new monitoring stack (with Prometheus) in the openshift-user-workload-monitoring namespace, which takes care of monitoring the user applications running in all user namespaces.
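    Once the ConfigMap is applied, you can confirm that the user workload monitoring stack came up (a quick sanity check; pod names vary with the OCP version):

    ```shell
    oc -n openshift-user-workload-monitoring get pods
    # Expect prometheus-operator-* and prometheus-user-workload-* pods
    # to reach the Running state once reconciliation completes.
    ```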

  2. Enable a user-defined AlertManager in the openshift-user-workload-monitoring namespace by creating the following ConfigMap:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        alertmanager:
          enabled: true
          enableAlertmanagerConfig: true

    Apply this ConfigMap from the OpenShift CLI. This enables custom alerting for user-defined metrics: the AlertManager pod starts watching for AlertmanagerConfig, a custom resource (CR) created by Portworx Backup.

Portworx by PureStorage recommends the following procedure to ensure uninterrupted service and data retention from the OCP Prometheus stack.

Perform these steps in addition to the existing ConfigMap:

  1. Increase the Prometheus data retention to 90 days by updating the following ConfigMap in the openshift-user-workload-monitoring namespace:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          retention: 90d
          retentionSize: 9GB
  2. Enable a persistent volume so that data is retained if the Prometheus pod crashes. To do that, update the following ConfigMap in the openshift-user-workload-monitoring namespace:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        alertmanager:
          enabled: true
          enableAlertmanagerConfig: true
        prometheus:
          retention: 90d
          retentionSize: 9GB
          volumeClaimTemplate:
            spec:
              storageClassName: <storage-class-name>
              resources:
                requests:
                  storage: 10Gi

    Replace <storage-class-name> with the actual storage class.
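    After the updated ConfigMap is applied, you can check that the persistent volume claim was created and bound (a sketch; the claim name is derived from the Prometheus instance):

    ```shell
    oc -n openshift-user-workload-monitoring get pvc
    ```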

  3. Upload the Portworx Backup email template into the AlertManager pod. To do this, add the template path to the AlertManager secret. The secret is base64-encoded, so first decode it, add the template reference line to the YAML, and then encode the data again to replace the original value. Edit the secret alertmanager-user-workload in the openshift-user-workload-monitoring namespace to upload the custom template.

    oc get secrets -n openshift-user-workload-monitoring alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 -d > alertmanager.yaml

    Run this command from the OpenShift CLI.

    Sample output:

    route:
      receiver: "null"
    receivers:
    - name: "null"
  4. Add the following line to the result you obtained:

    route:
      receiver: "null"
    receivers:
    - name: "null"
    templates: [ "/etc/alertmanager/config/*.tmpl" ]

    This ensures that the custom Portworx Backup email template (which is about to get uploaded) is picked up by the AlertManager.

  5. Encode the result with base-64:

    cat alertmanager.yaml | base64
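    On most Linux systems `base64` wraps its output at 76 columns, which produces a multi-line value that cannot be pasted into the Secret as-is. The step above can be sketched with wrapping disabled (assumes GNU coreutils `base64`; the file name matches the one produced in step 3):

    ```shell
    # Append the template reference line from step 4 (creates the file
    # if it does not exist yet).
    printf 'templates: [ "/etc/alertmanager/config/*.tmpl" ]\n' >> alertmanager.yaml

    # -w 0 disables the default 76-column wrapping so the encoded value
    # stays on a single line, as required inside the Secret.
    ENCODED=$(base64 -w 0 < alertmanager.yaml)
    echo "$ENCODED"
    ```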
  6. Copy the output value and use it to replace the value of alertmanager.yaml in the secret alertmanager-user-workload in the openshift-user-workload-monitoring namespace.

    For example:

    oc edit secrets -n openshift-user-workload-monitoring alertmanager-user-workload

    apiVersion: v1
    data:
      alertmanager.yaml: cm91dGU6CiAgcmVjZWl2ZXI6ICJudWxsIgpyZWNlaXZlcnM6Ci0gbmFtZTogIm51bGwiCnRlbXBsYXRlczogWyAiL2V0Yy9hbGVydG1hbmFnZXIvY29uZmlnLyoudG1wbCIgXQo= ## replace with the value you have generated
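    If you prefer not to edit the secret interactively, the same replacement can be done in one step (a sketch, assuming alertmanager.yaml is the edited file from the previous steps and GNU `base64`):

    ```shell
    oc -n openshift-user-workload-monitoring patch secret alertmanager-user-workload \
        --type merge \
        -p "{\"data\":{\"alertmanager.yaml\":\"$(base64 -w 0 < alertmanager.yaml)\"}}"
    ```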
  7. Copy the following pxc_template.tmpl value into the same secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: alertmanager-user-workload
    data:
      alertmanager.yaml: cm91dGU6CiAgcmVjZWl2ZXI6ICJudWxsIgpyZWNlaXZlcnM6Ci0gbmFtZTogIm51bGwiCnRlbXBsYXRlczogWyAiL2V0Yy9hbGVydG1hbmFnZXIvY29uZmlnLyoudG1wbCIgXQo=
      pxc_template.tmpl: PCFET0NUWVBFIGh0bWw
    +CjxodG1sPgogIDxoZWFkPgogICAgICA8dGl0bGU+UHgtQmFja3VwIEVtYWlsPC90aXRsZT4KICAgICAgPHN0eWxlPgogICAgICAgIGJvZHkgewogICAgICAgIGZvbnQtZmFtaWx5OiBzYW5zLXNlcmlmOwogICAgICAgIG1hcmdpbjogMDsKICAgICAgICBwYWRkaW5nOiAwOwogICAgICAgIGRpc3BsYXk6IGZsZXg7CiAgICAgICAganVzdGlmeS1jb250ZW50OiBjZW50ZXI7CiAgICAgICAgYmFja2dyb3VuZC1jb2xvcjogI2Y0ZjRmNDsKICAgICAgICB9CiAgICAgICAgLmNvbnRhaW5lciB7CiAgICAgICAgbWFyZ2luOiAyMHB4OwogICAgICAgIHBhZGRpbmc6IDIwcHg7CiAgICAgICAgYmFja2dyb3VuZC1jb2xvcjogI2ZmZjsKICAgICAgICBib3JkZXItcmFkaXVzOiA1cHg7CiAgICAgICAgYm94LXNoYWRvdzogMCAycHggNXB4IHJnYmEoMCwgMCwgMCwgMC4xKTsKICAgICAgICB9CiAgICAgICAgaDEgewogICAgICAgIGNvbG9yOiAjMzMzOwogICAgICAgIH0KICAgICAgICBwIHsKICAgICAgICBjb2xvcjogIzY2NjsKICAgICAgICB9CiAgICAgICAgdGQgewogICAgICAgIGNvbG9yOiAjMjkyOTI5OwogICAgICAgIGZvbnQtc2l6ZTogMTRweDsKICAgICAgICBmb250LXN0eWxlOiBub3JtYWw7CiAgICAgICAgZm9udC13ZWlnaHQ6IDQwMDsKICAgICAgICB9CiAgICAgICAgLmJvbGQtdGV4dCB7CiAgICAgICAgcGFkZGluZy1sZWZ0OiAyMHB4OwogICAgICAgIGZvbnQtd2VpZ2h0OiBib2xkOwogICAgICAgIH0KICAgICAgICAucGItMTYsCiAgICAgICAgdGQgewogICAgICAgIHBhZGRpbmctYm90dG9tOiAxNnB4OwogICAgICAgIH0KICAgICAgPC9zdHlsZT4KICA8L2hlYWQ+CiAgPGJvZHk+CiAgICAgIDxkaXYgY2xhc3M9ImNvbnRhaW5lciI+CiAgICAgICAgPGltZyBoZWlnaHQ9IjUwcHgiIHNyYz0iaHR0cHM6Ly9wb3J0d29yeC5jb20vd3AtY29udGVudC90aGVtZXMvcG9ydHdvcngvYXNzZXRzL2ltYWdlcy9oZWFkZXIvcG9ydHdvcngtbG9nby5wbmciIGFsdD0iIiBzcmNzZXQ9IiI+CiAgICAgICAge3sgcmFuZ2UgLkFsZXJ0cyB9fQogICAgICAgIHt7LSBpZiBlcSAuTGFiZWxzLmFsZXJ0bmFtZSAiQ2x1c3RlckFsZXJ0In19CiAgICAgICAgPGRpdgogICAgICAgICAgICBjbGFzcz0icGItMTYiCiAgICAgICAgICAgIHN0eWxlPSJ3aWR0aDogNDAwcHg7IGNvbG9yOiAjYmMxYjA2OyBmb250LXNpemU6IDE4cHg7IGZvbnQtd2VpZ2h0OiA1MDAiPgogICAgICAgICAgICBDcml0aWNhbCBBbGVydDogQ2x1c3RlciBEaXNjb25uZWN0ZWQKICAgICAgICA8L2Rpdj4KICAgICAgICA8ZGl2CiAgICAgICAgICAgIGNsYXNzPSJwYi0xNiIKICAgICAgICAgICAgc3R5bGU9IgogICAgICAgICAgICBjb2xvcjogdmFyKC0tY29udGVudC1vbkJhc2Utc3Ryb25nLCAjMjkyOTI5KTsKICAgICAgICAgICAgZm9udC1zaXplOiAxNnB4OwogICAgICAgICAgICBmb250LXdlaWdodDogNzAwOwogICAgICAgICAgICAiCiAgICAgICAgICAgID4KICAgICAgICAgICA
gQWxlcnQgZGV0YWlscwogICAgICAgIDwvZGl2PgogICAgICAgIDx0YWJsZT4KICAgICAgICAgICAgPHRyPgogICAgICAgICAgICAgIDx0ZD5DbHVzdGVyIG5hbWU8L3RkPgogICAgICAgICAgICAgIDx0ZCBjbGFzcz0iYm9sZC10ZXh0Ij57eyAuTGFiZWxzLm5hbWUgfX08L3RkPgogICAgICAgICAgICA8L3RyPgogICAgICAgICAgICA8dHI+CiAgICAgICAgICAgICAgPHRkPkVycm9yPC90ZD4KICAgICAgICAgICAgICA8dGQgY2xhc3M9ImJvbGQtdGV4dCI+e3sgLkxhYmVscy5lcnJvcl9yZWFzb24gfX08L3RkPgogICAgICAgICAgICA8L3RyPgogICAgICAgICAgICA8dHI+CiAgICAgICAgICAgICAgPHRkPkNyZWF0aW9uIHRpbWU8L3RkPgogICAgICAgICAgICAgIDx0ZCBjbGFzcz0iYm9sZC10ZXh0Ij57eyAuU3RhcnRzQXQuRm9ybWF0ICIyMDA2LTAxLTAyIDE1OjA0OjA1IiB9fTwvdGQ+CiAgICAgICAgICAgIDwvdHI+CiAgICAgICAgPC90YWJsZT4KICAgICAgICB7ey0gZWxzZSBpZiBlcSAuTGFiZWxzLmFsZXJ0bmFtZSAiQmFja3VwQWxlcnQiIH19CiAgICAgICAgPGRpdgogICAgICAgICAgICBjbGFzcz0icGItMTYiCiAgICAgICAgICAgIHN0eWxlPSJ3aWR0aDogNDAwcHg7IGNvbG9yOiAjYmMxYjA2OyBmb250LXNpemU6IDE4cHg7IGZvbnQtd2VpZ2h0OiA1MDAiPgogICAgICAgICAgICBDcml0aWNhbCBBbGVydDogQmFja3VwIEZhaWxlZAogICAgICAgIDwvZGl2PgogICAgICAgIDxkaXYKICAgICAgICAgICAgY2xhc3M9InBiLTE2IgogICAgICAgICAgICBzdHlsZT0iCiAgICAgICAgICAgIGNvbG9yOiB2YXIoLS1jb250ZW50LW9uQmFzZS1zdHJvbmcsICMyOTI5MjkpOwogICAgICAgICAgICBmb250LXNpemU6IDE2cHg7CiAgICAgICAgICAgIGZvbnQtd2VpZ2h0OiA3MDA7CiAgICAgICAgICAgICIKICAgICAgICAgICAgPgogICAgICAgICAgICBBbGVydCBkZXRhaWxzCiAgICAgICAgPC9kaXY+CiAgICAgICAgPHRhYmxlPgogICAgICAgICAgICA8dHI+CiAgICAgICAgICAgICAgPHRkPkJhY2t1cCBuYW1lPC90ZD4KICAgICAgICAgICAgICA8dGQgY2xhc3M9ImJvbGQtdGV4dCI+e3sgLkxhYmVscy5uYW1lIH19PC90ZD4KICAgICAgICAgICAgPC90cj4KICAgICAgICAgICAgPHRyPgogICAgICAgICAgICAgIDx0ZD5DbHVzdGVyIG5hbWU8L3RkPgogICAgICAgICAgICAgIDx0ZCBjbGFzcz0iYm9sZC10ZXh0Ij57eyAuTGFiZWxzLmNsdXN0ZXIgfX08L3RkPgogICAgICAgICAgICA8L3RyPgogICAgICAgICAgICA8dHI+CiAgICAgICAgICAgICAgPHRkPkVycm9yPC90ZD4KICAgICAgICAgICAgICA8dGQgY2xhc3M9ImJvbGQtdGV4dCI+e3sgLkxhYmVscy5lcnJvcl9yZWFzb24gfX08L3RkPgogICAgICAgICAgICA8dHI+CiAgICAgICAgICAgICAgPHRkPkNyZWF0aW9uIHRpbWU8L3RkPgogICAgICAgICAgICAgIDx0ZCBjbGFzcz0iYm9sZC10ZXh0Ij57eyAuU3RhcnRzQXQuRm9ybWF0ICIyMDA2LTAxLTAyIDE1OjA0OjA1IiB9fTwvdGQ
+CiAgICAgICAgICAgIDwvdHI+CiAgICAgICAgPC90YWJsZT4KICAgICAgICB7ey0gZWxzZSBpZiBlcSAuTGFiZWxzLmFsZXJ0bmFtZSAiUmVzdG9yZUFsZXJ0IiB9fQogICAgICAgIDxkaXYKICAgICAgICAgICAgY2xhc3M9InBiLTE2IgogICAgICAgICAgICBzdHlsZT0id2lkdGg6IDQwMHB4OyBjb2xvcjogI2JjMWIwNjsgZm9udC1zaXplOiAxOHB4OyBmb250LXdlaWdodDogNTAwIj4KICAgICAgICAgICAgQ3JpdGljYWwgQWxlcnQ6IFJlc3RvcmUgRmFpbGVkCiAgICAgICAgPC9kaXY+CiAgICAgICAgPGRpdgogICAgICAgICAgICBjbGFzcz0icGItMTYiCiAgICAgICAgICAgIHN0eWxlPSIKICAgICAgICAgICAgY29sb3I6IHZhcigtLWNvbnRlbnQtb25CYXNlLXN0cm9uZywgIzI5MjkyOSk7CiAgICAgICAgICAgIGZvbnQtc2l6ZTogMTZweDsKICAgICAgICAgICAgZm9udC13ZWlnaHQ6IDcwMDsKICAgICAgICAgICAgIgogICAgICAgICAgICA+CiAgICAgICAgICAgIEFsZXJ0IGRldGFpbHMKICAgICAgICA8L2Rpdj4KICAgICAgICA8dGFibGU+CiAgICAgICAgICAgIDx0cj4KICAgICAgICAgICAgICA8dGQ+UmVzdG9yZSBuYW1lPC90ZD4KICAgICAgICAgICAgICA8dGQgY2xhc3M9ImJvbGQtdGV4dCI+e3sgLkxhYmVscy5uYW1lIH19PC90ZD4KICAgICAgICAgICAgPC90cj4KICAgICAgICAgICAgPHRyPgogICAgICAgICAgICAgIDx0ZD5DbHVzdGVyIG5hbWU8L3RkPgogICAgICAgICAgICAgIDx0ZCBjbGFzcz0iYm9sZC10ZXh0Ij57eyAuTGFiZWxzLmNsdXN0ZXIgfX08L3RkPgogICAgICAgICAgICA8L3RyPgogICAgICAgICAgICA8dHI+CiAgICAgICAgICAgICAgPHRkPkVycm9yPC90ZD4KICAgICAgICAgICAgICA8dGQgY2xhc3M9ImJvbGQtdGV4dCI+e3sgLkxhYmVscy5lcnJvcl9yZWFzb24gfX08L3RkPgogICAgICAgICAgICA8dHI+CiAgICAgICAgICAgICAgPHRkPkNyZWF0aW9uIHRpbWU8L3RkPgogICAgICAgICAgICAgIDx0ZCBjbGFzcz0iYm9sZC10ZXh0Ij57eyAuU3RhcnRzQXQuRm9ybWF0ICIyMDA2LTAxLTAyIDE1OjA0OjA1IiB9fTwvdGQ+CiAgICAgICAgICAgIDwvdHI+CiAgICAgICAgPC90YWJsZT4KICAgICAgICB7ey0gZWxzZSBpZiBlcSAuTGFiZWxzLmFsZXJ0bmFtZSAiQmFja3VwTG9jYXRpb25BbGVydCIgfX0KICAgICAgICA8ZGl2CiAgICAgICAgICAgIGNsYXNzPSJwYi0xNiIKICAgICAgICAgICAgc3R5bGU9IndpZHRoOiA0MDBweDsgY29sb3I6ICNiYzFiMDY7IGZvbnQtc2l6ZTogMThweDsgZm9udC13ZWlnaHQ6IDUwMCI+CiAgICAgICAgICAgIENyaXRpY2FsIEFsZXJ0OiBCYWNrdXAgTG9jYXRpb24gRGlzY29ubmVjdGVkCiAgICAgICAgPC9kaXY+CiAgICAgICAgPGRpdgogICAgICAgICAgICBjbGFzcz0icGItMTYiCiAgICAgICAgICAgIHN0eWxlPSIKICAgICAgICAgICAgY29sb3I6IHZhcigtLWNvbnRlbnQtb25CYXNlLXN0cm9uZywgIzI5MjkyOSk7CiAgICAgICAgICAgIGZvbnQtc2l6ZTogMTZweDs
KICAgICAgICAgICAgZm9udC13ZWlnaHQ6IDcwMDsKICAgICAgICAgICAgIgogICAgICAgICAgICA+CiAgICAgICAgICAgIEFsZXJ0IGRldGFpbHMKICAgICAgICA8L2Rpdj4KICAgICAgICA8dGFibGU+CiAgICAgICAgICAgIDx0cj4KICAgICAgICAgICAgICA8dGQ+QmFja3VwIExvY2F0aW9uPC90ZD4KICAgICAgICAgICAgICA8dGQgY2xhc3M9ImJvbGQtdGV4dCI+e3sgLkxhYmVscy5uYW1lIH19PC90ZD4KICAgICAgICAgICAgPC90cj4KICAgICAgICAgICAgPHRyPgogICAgICAgICAgICAgIDx0ZD5FcnJvcjwvdGQ+CiAgICAgICAgICAgICAgPHRkIGNsYXNzPSJib2xkLXRleHQiPnt7IC5MYWJlbHMuZXJyb3JfcmVhc29uIH19PC90ZD4KICAgICAgICAgICAgPHRyPgogICAgICAgICAgICAgIDx0ZD5DcmVhdGlvbiB0aW1lPC90ZD4KICAgICAgICAgICAgICA8dGQgY2xhc3M9ImJvbGQtdGV4dCI+e3sgLlN0YXJ0c0F0LkZvcm1hdCAiMjAwNi0wMS0wMiAxNTowNDowNSIgfX08L3RkPgogICAgICAgICAgICA8L3RyPgogICAgICAgIDwvdGFibGU+CiAgICAgICAge3stIGVuZCB9fQogICAgICAgIHt7IGVuZCB9fQogICAgICAgIDxkaXYgc3R5bGU9ImZvbnQtc2l6ZTogMTRweCIgY2xhc3M9InBiLTE2Ij4KICAgICAgICAgICAgUGxlYXNlIGxvZ2luIHRvIHlvdXIgUG9ydHdvcnggQmFja3VwIGRlcGxveW1lbnQgdG8gc2VlIG1vcmUgZGV0YWlscyBhbmQKICAgICAgICAgICAgdGFrZSBjb3JyZWN0aXZlIGFjdGlvbnMuCiAgICAgICAgPC9kaXY+CiAgICAgICAgPGRpdiBzdHlsZT0iZm9udC1zaXplOiAxNHB4IiBjbGFzcz0icGItMTYiPgogICAgICAgICAgICBCZXN0IFJlZ2FyZHMsPGJyIC8+UG9ydHdvcnggVGVhbQogICAgICAgIDwvZGl2PgogICAgICA8L2Rpdj4KICA8L2JvZHk+CjwvaHRtbD4K

    This is the base64-encoded HTML template for the Portworx Backup email alert. The AlertManager pods in the openshift-user-workload-monitoring namespace now have the Portworx Backup template loaded and ready for use.

    note

    Any error while applying the Portworx Backup email template results in users receiving empty emails.

Configure secrets

  1. Create the px-backup namespace if it does not exist already.

  2. Run the following commands to create the Prometheus and AlertManager credentials that the Portworx Backup dashboard requires in the px-backup namespace.

    note

    Ensure that you provide the px-backup namespace as a value for PXBACKUP_NAMESPACE.

    PXBACKUP_NAMESPACE=<px-backup-namespace>

    PROMETHEUS_CERT=$(echo -n $(oc get secret prometheus-user-workload-tls --template='{{ index .data "tls.crt" }}' -n openshift-user-workload-monitoring) | base64 -d)
    PROMETHEUS_TOKEN=$(oc create token prometheus-k8s -n openshift-monitoring --duration=87600h)

    ALERTMANAGER_CERT=$(echo -n $(oc get secret alertmanager-user-workload-tls --template='{{ index .data "tls.crt" }}' -n openshift-user-workload-monitoring) | base64 -d)
    ALERTMANAGER_TOKEN=$(oc create token thanos-ruler -n openshift-user-workload-monitoring --duration=87600h)

    oc create secret generic prometheus-cred \
        --from-literal=cert="$PROMETHEUS_CERT" \
        --from-literal=token=$PROMETHEUS_TOKEN \
        -n $PXBACKUP_NAMESPACE

    oc create secret generic alertmanager-cred \
        --from-literal=cert="$ALERTMANAGER_CERT" \
        --from-literal=token=$ALERTMANAGER_TOKEN \
        -n $PXBACKUP_NAMESPACE
  3. Provide the following details in the Spec Gen during Portworx Backup installation:

    1. Prometheus Endpoint: https://thanos-querier.openshift-monitoring.svc:9091
    2. Alertmanager Endpoint: https://alertmanager-user-workload.openshift-user-workload-monitoring.svc:9095
    3. Prometheus secret name: prometheus-cred
    4. Alertmanager secret name: alertmanager-cred
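Before generating the spec, you can confirm that both secrets exist in the target namespace:

```shell
oc get secret prometheus-cred alertmanager-cred -n $PXBACKUP_NAMESPACE
```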

Rancher

Prerequisites

  • Ensure that the Monitoring component is installed from Home > local > Cluster Tools in the Rancher web console.

    If Rancher monitoring from Cluster Tools is already installed in the cluster, you can use it with Portworx Backup. Ensure that the Prometheus configuration parameters match the prerequisites below.

To check the configuration parameters in the Rancher web console:

  1. Click on the local option (under Home icon) from the left navigation pane.

  2. Navigate to Cluster Tools at the bottom and select Monitoring.

  3. Choose Prometheus.

  4. Populate Version, Install into Project, Customize Helm options..., Container Registry... and then click Next.

  5. Choose Prometheus and:

    1. Select Admin API
    2. Enable Monitors can access resources based on namespaces that match the namespace selector field
    3. Set Scrape Interval and Evaluate Interval to 30s
    4. Set Retention to 90d
    5. Select Persistent storage for Prometheus
    6. Provide Storage Class Name, Size, and Access Mode values

    note

    If your Prometheus configuration parameters do not match the above prerequisites, Portworx by PureStorage recommends installing the Prometheus stack provided by Portworx Backup.

Configure secrets

  1. Create secrets with the following structure for Prometheus and AlertManager, in the namespace where px-backup is installed, if they are configured with TLS, basic auth, or a bearer token:

    data:
      username: <prometheus/alertmanager username>
      password: <prometheus/alertmanager password>
      token: <bearer-token>
      cert: <certificate-data>
  2. Provide the secret names of Prometheus and AlertManager in the spec gen config parameters.

Kubernetes Vanilla cluster

Prerequisites

  1. Retention time of metrics should be at least 90 days.
  2. Storage size should be at least 5 GB.
  3. The Prometheus instance should be attached to persistent volumes.
  4. The Prometheus scrape interval should be 30 seconds or less.
  5. Prometheus and AlertManager should be managed by the Prometheus operator.
  6. Make sure that your Prometheus stack can read the CR created by Portworx Backup in the namespace where Portworx Backup is installed. For example, in the Alertmanager spec:

    spec:
      alertmanagerConfigNamespaceSelector:
        matchExpressions:
        - key: name
          operator: In
          values:
          - px-backup # specify the name of the namespace where px-backup is installed
      alertmanagerConfigSelector: {}
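To verify the selector on an operator-managed Alertmanager, you can inspect its spec directly (a sketch; replace the namespace and resource name with your own):

```shell
kubectl -n <monitoring-namespace> get alertmanager <alertmanager-name> \
    -o jsonpath='{.spec.alertmanagerConfigNamespaceSelector}'
```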
note

If your Prometheus configuration parameters do not match the above prerequisites, Portworx by PureStorage recommends installing the Prometheus stack provided by Portworx Backup.

Configure secrets

You can configure Prometheus and AlertManager with or without TLS, with or without basic auth, and with or without a bearer token. Here is a sample configuration that generates secrets with TLS:

  1. Create secrets with the following structure for Prometheus and AlertManager, in the namespace where px-backup is installed, if they are configured with TLS, basic auth, or a bearer token:

    data:
      username: <prometheus/alertmanager username>
      password: <prometheus/alertmanager password>
      token: <bearer-token>
      cert: <certificate-data>

    Replace <prometheus/alertmanager username>, <prometheus/alertmanager password>, <bearer-token>, and <certificate-data> with the appropriate values. Verify that these values work by checking the logs or running a test scrape.

  2. Provide the secret names of Prometheus and AlertManager in the spec gen config parameters.
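As a concrete illustration of the structure above, a complete Secret manifest might look like the following (the name and all stringData values are placeholders; stringData lets Kubernetes base64-encode the values for you):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prometheus-cred   # referenced later in the spec gen parameters
  namespace: px-backup    # namespace where px-backup is installed
type: Opaque
stringData:
  username: <prometheus username>
  password: <prometheus password>
  token: <bearer-token>
  cert: <certificate-data>
```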
