Version: 3.1

StorageCluster

The Portworx cluster configuration is specified by a Kubernetes CRD (CustomResourceDefinition) called StorageCluster. The StorageCluster object acts as the definition of the Portworx Cluster.

The StorageCluster object provides a Kubernetes native experience. You can manage your Portworx cluster just like any other application running on Kubernetes. That is, if you create or edit the StorageCluster object, the operator will create or edit the Portworx cluster in the background.

To generate a StorageCluster spec customized for your environment, point your browser to Portworx Central, and click "Install and Run" to start the Portworx spec generator. Then, the wizard will walk you through all the necessary steps to create a StorageCluster spec customized for your environment.

Note that using the Portworx spec generator is the recommended way of generating a StorageCluster spec. However, if you want to generate the StorageCluster spec manually, you can refer to the StorageCluster Examples and StorageCluster Schema sections.

StorageCluster Examples

This section provides a few examples of common Portworx configurations you can use for manually configuring your Portworx cluster. Update the default values in these files to match your environment.

  • Portworx with internal KVDB, configured to use all unused devices on the system.

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:3.0
      kvdb:
        internal: true
      storage:
        useAll: true
  • Portworx with external ETCD and Stork as default scheduler.

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:3.0
      kvdb:
        endpoints:
        - etcd:http://etcd-1.net:2379
        - etcd:http://etcd-2.net:2379
        - etcd:http://etcd-3.net:2379
        authSecret: px-kvdb-auth
      stork:
        enabled: true
        args:
          health-monitor-interval: "100"
          webhook-controller: "true"
  • Portworx with Security enabled.

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:3.0
      security:
        enabled: true
  • Portworx with Security enabled, guest access disabled, a custom self-signed issuer and shared secret location, and a five-day token lifetime.

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:3.0
      security:
        enabled: true
        auth:
          guestAccess: 'Disabled'
          selfSigned:
            issuer: 'openstorage.io'
            sharedSecret: 'px-shared-secret'
            tokenLifetime: '5d'
  • Portworx with update and delete strategies, and placement rules.

    note

    From Kubernetes version 1.24 and newer, the label key node-role.kubernetes.io/master is replaced by node-role.kubernetes.io/control-plane.

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:3.0
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 20%
      deleteStrategy:
        type: UninstallAndWipe
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: px/enabled
                operator: NotIn
                values:
                - "false"
              - key: node-role.kubernetes.io/control-plane
                operator: DoesNotExist
              - key: node-role.kubernetes.io/worker
                operator: Exists
        tolerations:
        - key: infra/node
          operator: Equal
          value: "true"
          effect: NoExecute
  • Portworx with custom image registry, network interfaces, and miscellaneous options.

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:3.0
      imagePullPolicy: Always
      imagePullSecret: regsecret
      customImageRegistry: docker.private.io/repo
      network:
        dataInterface: eth1
        mgmtInterface: eth2
      secretsProvider: vault
      runtimeOptions:
        num_io_threads: "10"
      env:
      - name: VAULT_ADDRESS
        value: "http://10.0.0.1:8200"
  • Portworx with node-specific overrides, using different devices (or no devices) on different sets of nodes.

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:3.0
      storage:
        devices:
        - /dev/sda
        - /dev/sdb
      nodes:
      - selector:
          labelSelector:
            matchLabels:
              <custom-key>: "<custom-value>"
        storage:
          devices:
          - /dev/nvme1
          - /dev/nvme2
      - selector:
          labelSelector:
            matchLabels:
              <custom-key>: "<custom-value>"
        storage:
          devices: []

    Replace <custom-key>: "<custom-value>" with the node label that you have added to the node in your cluster. For example, if you have labeled your node as px/storage: "nvme" to specify that the node uses NVMe drives, you may use this key-value pair, where custom-key is px/storage and custom-value is nvme.

  • Portworx with a cluster domain defined.

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
      annotations:
        portworx.io/misc-args: "-cluster_domain example-cluster-domain-name"
  • Portworx configured to use an HTTP/HTTPS proxy to enable telemetry and share cluster diagnostics and callhome data with the Pure1 cloud:

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:3.0.0
      env:
      - name: PX_HTTP_PROXY
        value: "http://<IP:port>"
      - name: PX_HTTPS_PROXY
        value: "http://<IP:port>"
    note

    You can use http://user:password@<IP:port> if your proxy requires authentication.

StorageCluster Schema

This section explains the fields used to configure the StorageCluster object.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.image | Specifies the Portworx monitor image. | string | None |
| spec.imagePullPolicy | Specifies the image pull policy for all the images deployed by the operator. It can take one of the following values: Always or IfNotPresent. | string | Always |
| spec.imagePullSecret | If Portworx pulls images from a secure repository, you can use this field to pass it the name of the secret. Note that the secret should be in the same namespace as the StorageCluster object. | string | None |
| spec.customImageRegistry | The custom container registry server Portworx uses to fetch the Docker images. You may include the repository as well. | string | None |
| spec.secretsProvider | The name of the secrets provider Portworx uses to store your credentials. To use features like cloud snapshots or volume encryption, you must configure a secret store provider. Refer to the Secret store management page for more details. | string | None |
| spec.runtimeOptions | A collection of key-value pairs that overwrites the runtime options. | map[string]string | None |
| spec.security | An object for specifying PX-Security configurations. Refer to the Operator Security page for more details. | object | None |
| spec.featureGates | A collection of key-value pairs specifying which Portworx features should be enabled or disabled. (See footnote 1.) | map[string]string | None |
| spec.env[] | A list of Kubernetes-like environment variables. Similar to how environment variables are provided in Kubernetes, you can directly provide values to Portworx or import them from a source like a Secret, ConfigMap, etc. | []object | None |
| spec.metadata.annotations | A map of components and custom annotations. (See footnote 2.) | map[string]map[string]string | None |
| spec.metadata.labels | A map of components and custom labels. (See footnote 3.) | map[string]map[string]string | None |
| spec.resources.requests.cpu | Specifies the CPU that the Portworx container requests; for example: "4000m" | string | None |
| spec.resources.requests.memory | Specifies the memory that the Portworx container requests; for example: "4Gi" | string | None |
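
For illustration, here is a minimal sketch of the spec.env[] field that passes one value directly and imports another from a Kubernetes Secret using the standard valueFrom/secretKeyRef form; the VAULT_TOKEN variable, Secret name, and key are placeholders for your own environment, not values required by Portworx.

    spec:
      image: portworx/oci-monitor:3.0
      env:
      # Plain value passed directly to Portworx
      - name: VAULT_ADDRESS
        value: "http://10.0.0.1:8200"
      # Value imported from an existing Kubernetes Secret (placeholder names)
      - name: VAULT_TOKEN
        valueFrom:
          secretKeyRef:
            name: px-vault            # placeholder Secret name
            key: token                # placeholder key within the Secret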

KVDB configuration

This section explains the fields used to configure Portworx with a KVDB. Note that, if you don't specify the endpoints, the operator starts Portworx with the internal KVDB.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.kvdb.internal | Specifies if Portworx starts with the internal KVDB. | boolean | true |
| spec.kvdb.endpoints[] | A list of endpoints for your external key-value database like ETCD. This field takes precedence over the spec.kvdb.internal field. That is, if you specify the endpoints, Portworx ignores the spec.kvdb.internal field and it uses the external KVDB. | []string | None |
| spec.kvdb.authSecret | Indicates the name of the secret Portworx uses to authenticate against your KVDB. The secret must be placed in the same namespace as the StorageCluster object. The secret should provide the following information: username (optional), password (optional), kvdb-ca.crt (the CA certificate), kvdb.key (certificate key), kvdb.crt (etcd certificate), and acl-token (optional). For example, create a directory called etcd-secrets, copy the files into it, and create a secret with kubectl -n kube-system create secret generic px-kvdb-auth --from-file=etcd-secrets/ | string | None |

Storage configuration

This section provides details about the fields used to configure the storage for your Portworx cluster. If you don't specify a device, the operator sets the spec.storage.useAll field to true.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.storage.useAll | If set to true, Portworx uses all available, unformatted, and unpartitioned devices. (See footnote 4.) | boolean | true |
| spec.storage.useAllWithPartitions | If set to true, Portworx uses all the available and unformatted devices. (See footnote 4.) | boolean | false |
| spec.storage.forceUseDisks | If set to true, Portworx uses a device even if there's a file system on it. Note that Portworx may wipe the drive before using it. | boolean | false |
| spec.storage.devices[] | Specifies the list of devices Portworx should use. | []string | None |
| spec.storage.cacheDevices[] | Specifies the list of cache devices Portworx should use. | []string | None |
| spec.storage.journalDevice | Specifies the device Portworx uses for journaling. | string | None |
| spec.storage.systemMetadataDevice | Indicates the device Portworx uses to store metadata. For better performance, specify a system metadata device when using Portworx with the internal KVDB. | string | None |
| spec.storage.kvdbDevice | Specifies the device Portworx uses to store internal KVDB data. | string | None |
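
As an illustration, the following sketch pins Portworx to specific data devices and dedicates separate journal and internal-KVDB devices; all device paths are placeholders for your environment.

    spec:
      storage:
        # Use only these data devices (placeholder paths)
        devices:
        - /dev/sdb
        - /dev/sdc
        # Dedicated journal device (placeholder path)
        journalDevice: /dev/sdd
        # Dedicated device for internal KVDB data (placeholder path)
        kvdbDevice: /dev/sde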

Cloud storage configuration

This section explains the fields used to configure Portworx with cloud storage. Once the cloud storage is configured, Portworx manages the cloud disks automatically based on the provided specs. Note that the spec.storage fields take precedence over the fields presented in this section. Make sure the spec.storage fields are empty when configuring Portworx with cloud storage.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.cloudStorage.provider | Specifies the cloud provider name, such as: pure, azure, aws, gce, vsphere. | string | None |
| spec.cloudStorage.deviceSpecs[] | A list of the specs for your cloud storage devices. Portworx creates a cloud disk for every device. | []string | None |
| spec.cloudStorage.journalDeviceSpec | Specifies the cloud device Portworx uses for journaling. | string | None |
| spec.cloudStorage.systemMetadataDeviceSpec | Indicates the cloud device Portworx uses for metadata. For better performance, specify a system metadata device when using Portworx with the internal KVDB. | string | None |
| spec.cloudStorage.kvdbDeviceSpec | Specifies the cloud device Portworx uses for an internal KVDB. | string | None |
| spec.cloudStorage.maxStorageNodesPerZone | Indicates the maximum number of storage nodes per zone. If this number is reached, and a new node is added to the zone, Portworx doesn't provision drives for the new node. Instead, Portworx starts the node as a compute-only node. | uint32 | None |
| spec.cloudStorage.maxStorageNodes | Specifies the maximum number of storage nodes. If this number is reached, and a new node is added, Portworx doesn't provision drives for the new node. Instead, Portworx starts the node as a compute-only node. As a best practice, it is recommended to use the maxStorageNodesPerZone field. | uint32 | None |
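
For example, here is a minimal sketch of a spec.cloudStorage section for a cluster on AWS; the disk spec strings and the per-zone node limit are illustrative values, not requirements.

    spec:
      cloudStorage:
        provider: aws
        deviceSpecs:
        # One data disk per storage node (illustrative spec string)
        - type=gp3,size=150
        # Smaller dedicated disk for the internal KVDB (illustrative)
        kvdbDeviceSpec: type=gp3,size=32
        # Extra nodes beyond this per-zone limit start as compute-only nodes
        maxStorageNodesPerZone: 3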

Network configuration

This section describes the fields used to configure the network settings. If these fields are not specified, Portworx auto-detects the network interfaces.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.network.dataInterface | Specifies the network interface Portworx uses for data traffic. | string | None |
| spec.network.mgmtInterface | Indicates the network interface Portworx uses for control plane traffic. | string | None |

Volume configuration

This section describes the fields used to configure custom volume mounts for Portworx pods.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.volumes[].name | Unique name for the volume. | string | None |
| spec.volumes[].mountPath | Path within the Portworx container at which the volume should be mounted. Must not contain the ':' character. | string | None |
| spec.volumes[].mountPropagation | Determines how mounts are propagated from the host to the container and the other way around. | string | None |
| spec.volumes[].readOnly | Volume is mounted read-only if true, read-write otherwise. | boolean | false |
| spec.volumes[].[secret/configMap/hostPath] | Specifies the location and type of the mounted volume. This is similar to the VolumeSource schema of a Kubernetes pod volume. | object | None |
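
To illustrate, the following sketch mounts a ConfigMap and a read-only host directory into the Portworx container; the volume names, ConfigMap name, and paths are placeholders.

    spec:
      volumes:
      # Mount a ConfigMap into the Portworx container (placeholder names)
      - name: custom-config
        mountPath: /etc/custom-config
        configMap:
          name: px-custom-config
      # Mount a host directory read-only (placeholder paths)
      - name: host-certs
        mountPath: /etc/pwx/host-certs
        readOnly: true
        hostPath:
          path: /etc/certs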

Placement rules

You can use the placement rules to specify where Portworx should be deployed. By default, the operator deploys Portworx on all worker nodes.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.placement.nodeAffinity | Use this field to restrict Portworx to certain nodes. It works similarly to the Kubernetes node affinity feature. | object | None |
| spec.placement.tolerations[] | Specifies a list of tolerations that will be applied to Portworx pods so that they can run on nodes with matching taints. | []object | None |

For Operator 1.8 and higher, if the topology labels topology.kubernetes.io/region or topology.kubernetes.io/zone are specified on worker nodes, the Operator deploys the Stork, Stork scheduler, CSI, and PVC controller pods with topologySpreadConstraints to distribute pod replicas across Kubernetes failure domains.

Update strategy

This section provides details on how to specify an update strategy.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.updateStrategy.type | Indicates the update strategy. Currently, Portworx supports the following update strategies: RollingUpdate and OnDelete. | object | RollingUpdate |
| spec.updateStrategy.rollingUpdate.maxUnavailable | Similar to how Kubernetes rolling update strategies work, this field specifies how many nodes can be down at any given time. You can specify this as a number or a percentage. Note: Portworx by Pure Storage recommends keeping the maxUnavailable value at 1. Changing this value could potentially lead to volume and Portworx quorum loss during the upgrade process. | int or string | 1 |
| spec.updateStrategy.rollingUpdate.minReadySeconds | During rolling updates, this flag waits for all pods to be ready for at least minReadySeconds before updating the next batch of pods, where the size of the pod batch is specified through the spec.updateStrategy.rollingUpdate.maxUnavailable flag. | string | 1 |
| spec.autoUpdateComponents | Indicates the update strategy for the component images (such as Stork, Autopilot, Prometheus, and so on). Portworx supports the following auto-update strategies for the component images: None (updates the component images only when the Portworx image changes in StorageCluster.spec.image); Once (updates the component images once even if the Portworx image does not change, which is useful when the component images on the manifest server change due to bug fixes); Always (regularly checks for updates on the manifest server and updates the component images if required). | string | None |
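
For instance, here is a minimal sketch that combines a rolling update with the Once auto-update strategy for component images; the minReadySeconds value shown is illustrative.

    spec:
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          # Recommended value; larger values risk volume and quorum loss
          maxUnavailable: 1
          # Wait 60 seconds after pods become ready before updating the next batch
          minReadySeconds: 60
      # Refresh component images once even if spec.image has not changed
      autoUpdateComponents: Once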

Delete/Uninstall strategy

This section provides details on how to specify an uninstall strategy for your Portworx cluster.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.deleteStrategy.type | Indicates what happens when the Portworx StorageCluster object is deleted. By default, there is no delete strategy, which means only the Kubernetes components deployed by the operator are removed. The Portworx systemd service continues to run, and the Kubernetes applications using the Portworx volumes are not affected. Portworx supports the following delete strategies: Uninstall (removes all Portworx components from the system and leaves the devices and KVDB intact); UninstallAndWipe (removes all Portworx components from the system and wipes the devices and metadata from KVDB). | string | None |

Monitoring configuration

This section provides details on how to enable monitoring for Portworx.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.monitoring.prometheus.enabled | Enables or disables a Prometheus cluster. | boolean | false |
| spec.monitoring.prometheus.exportMetrics | Exposes the Portworx metrics to an external or operator-deployed Prometheus. | boolean | false |
| spec.monitoring.prometheus.alertManager.enabled | Enables the Prometheus Alertmanager. | boolean | None |
| spec.monitoring.prometheus.remoteWriteEndpoint | Specifies the remote write endpoint for Prometheus. | string | None |
| spec.monitoring.telemetry.enabled | Enables telemetry and the metrics collector. | boolean | false |
| spec.monitoring.prometheus.resources | Provides the ability to configure Prometheus resource usage such as memory and CPU usage. | object | Default limits: CPU 1, memory 800M, and ephemeral storage 5G |
| spec.monitoring.prometheus.replicas | Specifies the number of Prometheus replicas that will be deployed. | int | 1 |
| spec.monitoring.prometheus.retention | Specifies the time period for which Prometheus retains historical metrics. | string | 24h |
| spec.monitoring.prometheus.retentionSize | Specifies the maximum amount of disk space that Prometheus can use to store historical metrics. | string | None |
| spec.monitoring.prometheus.storage | Specifies the storage type that Prometheus will use for storing data. If you set the storage type to PVCs, do not set the runAsGroup or fsGroup option for the spec.monitoring.prometheus.securityContext flag. | object | None |
| spec.monitoring.prometheus.volumes | Specifies additional volumes for the output Prometheus StatefulSet. These volumes are appended to other volumes generated as a result of the spec.monitoring.prometheus.storage spec. | object | None |
| spec.monitoring.prometheus.volumeMounts | Specifies additional VolumeMounts on the output Prometheus StatefulSet definition. These VolumeMounts are appended to other VolumeMounts in the Prometheus container generated as a result of the spec.monitoring.prometheus.storage spec. | object | None |
| spec.monitoring.prometheus.securityContext.runAsNonRoot | Flag that indicates that the container must run as a non-root user. | boolean | None |
| spec.monitoring.grafana.enabled | Enables or disables a Grafana instance with Portworx dashboards. | boolean | false |
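
For example, here is a minimal sketch that enables telemetry and a Prometheus cluster with Alertmanager and metric export; the retention values are illustrative.

    spec:
      monitoring:
        telemetry:
          enabled: true
        prometheus:
          enabled: true
          exportMetrics: true
          alertManager:
            enabled: true
          # Illustrative retention settings
          retention: 48h
          retentionSize: 10GiB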

Stork configuration

This section describes the fields used to manage the Stork deployment through the Portworx operator.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.stork.enabled | Enables or disables Stork at any given time. | boolean | true |
| spec.stork.image | Specifies the Stork image. | string | None |
| spec.stork.lockImage | Enables locking Stork to the given image. When set to false, the Portworx Operator will overwrite the Stork image with the recommended image for the given Portworx version. | boolean | false |
| spec.stork.args | A collection of key-value pairs that overrides the default Stork arguments or adds new arguments. | map[string]string | None |
| spec.stork.args.admin-namespace | Sets up a cluster's admin namespace for migration. Refer to admin namespace for more information. | string | kube-system |
| spec.stork.args.verbose | Set to true to enable verbose logging. | boolean | false |
| spec.stork.args.webhook-controller | Set to true to make Stork the default scheduler for workloads using Portworx volumes. | boolean | false |
| spec.stork.env[] | A list of Kubernetes-like environment variables passed to Stork. | []object | None |
| spec.stork.volumes[] | A list of volumes passed to Stork pods. The schema is similar to the top-level volumes. | []object | None |
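
As an example, here is a minimal sketch that keeps Stork enabled, makes it the default scheduler for Portworx workloads, and overrides a few arguments; the admin namespace shown is a placeholder.

    spec:
      stork:
        enabled: true
        args:
          # Make Stork the default scheduler for Portworx workloads
          webhook-controller: "true"
          # Illustrative admin namespace for migrations
          admin-namespace: portworx-admin
          verbose: "true"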

CSI configuration

This section provides details on how to configure CSI for the StorageCluster. Note this is for Operator 1.8 and higher only.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.csi.enabled | Flag indicating whether CSI needs to be installed for the storage cluster. | boolean | true |
| spec.csi.installSnapshotController | Flag indicating whether the CSI Snapshot Controller needs to be installed for the storage cluster. | boolean | false |

Autopilot configuration

This section provides details on how to deploy and manage Autopilot.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.autopilot.enabled | Enables or disables Autopilot at any given time. | boolean | false |
| spec.autopilot.image | Specifies the Autopilot image. | string | None |
| spec.autopilot.lockImage | Enables locking Autopilot to the given image. When set to false, the Portworx Operator will overwrite the Autopilot image with the recommended image for the given Portworx version. | boolean | false |
| spec.autopilot.providers[] | List of data providers for Autopilot. | []object | None |
| spec.autopilot.providers[].name | Unique name of the data provider. | string | None |
| spec.autopilot.providers[].type | Type of the data provider. For instance, prometheus. | string | None |
| spec.autopilot.providers[].params | Map of key-value params for the provider. | map[string]string | None |
| spec.autopilot.args | A collection of key-value pairs that overrides the default Autopilot arguments or adds new arguments. | map[string]string | None |
| spec.autopilot.env[] | A list of Kubernetes-like environment variables passed to Autopilot. | []object | None |
| spec.autopilot.volumes[] | A list of volumes passed to Autopilot pods. The schema is similar to the top-level volumes. | []object | None |
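
For example, here is a minimal sketch that enables Autopilot with a Prometheus data provider; the provider name and endpoint URL are placeholders for your monitoring setup.

    spec:
      autopilot:
        enabled: true
        providers:
        - name: default                 # placeholder provider name
          type: prometheus
          params:
            # Placeholder Prometheus endpoint for the data provider
            url: http://prometheus:9090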

Node specific configuration

This section provides details on how to override certain cluster-level configuration for individual nodes or groups of nodes.

| Field | Description | Type | Default |
|---|---|---|---|
| spec.nodes[] | A list of node-specific configurations. | []object | None |
| spec.nodes[].selector | Selector for the node(s) to which the configuration in this section will be applied. | object | None |
| spec.nodes[].selector.nodeName | Name of the node to which this configuration will be applied. Node name takes precedence over selector.labelSelector. | string | None |
| spec.nodes[].selector.labelSelector | Kubernetes-style label selector for nodes to which this configuration will be applied. | object | None |
| spec.nodes[].network | Specifies network configuration for the selected nodes, similar to the one specified at the cluster level. If this network configuration is empty, the cluster-level values are used. | object | None |
| spec.nodes[].storage | Specifies storage configuration for the selected nodes, similar to the one specified at the cluster level. If some of the configuration is left empty, the cluster-level storage values are passed to the nodes. If you don't want a cluster-level value to apply, explicitly set the field to an empty value so no value is passed to the nodes. For instance, set spec.nodes[0].storage.kvdbDevice: "" to prevent using the KVDB device for the selected nodes. | object | None |
| spec.nodes[].env | Specifies extra environment variables for the selected nodes. Cluster-level environment variables are combined with these and sent to the selected nodes. If the same variable is present at the cluster level, the node-level variable takes precedence. | object | None |
| spec.nodes[].runtimeOptions | Specifies runtime options for the selected nodes. If specified, cluster-level options are ignored and only these runtime options are passed to the nodes. | object | None |
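
To illustrate, the following sketch overrides the cluster-level runtime options and adds an extra environment variable for one labeled group of nodes; the node label, option values, and variable name are placeholders.

    spec:
      runtimeOptions:
        num_io_threads: "10"
      nodes:
      - selector:
          labelSelector:
            matchLabels:
              px/node-class: high-io      # placeholder node label
        # Replaces the cluster-level runtime options for the selected nodes
        runtimeOptions:
          num_io_threads: "20"
        # Merged with cluster-level env; node-level values win on conflict
        env:
        - name: CUSTOM_NODE_FLAG          # placeholder variable name
          value: "true"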

Security Configuration

| Field | Description | Type | Default |
|---|---|---|---|
| spec.security.enabled | Enables or disables Security at any given time. | boolean | false |
| spec.security.auth.guestAccess | Determines how the guest role will be updated in your cluster. The options are Enabled, Disabled, or Managed. Managed causes the operator to ignore updating the system.guest role. Enabled and Disabled allow or disable guest role access, respectively. | string | Enabled |
| spec.security.auth.selfSigned.tokenLifetime | The length of time for which operator-generated tokens remain valid before being refreshed. | string | 24h |
| spec.security.auth.selfSigned.issuer | The issuer name to be used when configuring PX-Security. This field maps to the PORTWORX_AUTH_JWT_ISSUER environment variable in the Portworx DaemonSet. | string | operator.portworx.io |
| spec.security.auth.selfSigned.sharedSecret | The Kubernetes secret name for retrieving and storing your shared secret. This field can be used to add a pre-existing shared secret or to customize which secret name the operator uses for its auto-generated shared secret key. This field maps to the PORTWORX_AUTH_JWT_SHAREDSECRET environment variable in the Portworx DaemonSet. | string | px-shared-secret |

StorageCluster Annotations

| Annotation | Description |
|---|---|
| portworx.io/misc-args | Arguments that you specify in this annotation are passed to the portworx container verbatim. For example: portworx.io/misc-args: "-cluster_domain datacenter1 --tracefile-diskusage 5 -kvdb_cluster_size 5". Note that you cannot use = to specify the value of an argument. |
| portworx.io/service-type | Annotation to configure the type of the services created by the operator. For example, portworx.io/service-type: "LoadBalancer" specifies the LoadBalancer type for all services. For Operator 1.8.1 and higher, the value can be a list of service names and corresponding type configurations separated by ;. A service that is not specified uses its default type. For example: portworx.io/service-type: "portworx-service:LoadBalancer;portworx-api:ClusterIP;portworx-kvdb-service:LoadBalancer" |
| portworx.io/portworx-proxy | The Portworx proxy is needed to allow Kubernetes to communicate with Portworx when using the in-tree driver. The proxy is automatically enabled if you run Portworx in a namespace other than kube-system and are not using the default 9001 start port. You can set the annotation to false to disable the proxy. For example: portworx.io/portworx-proxy: "false" |
| portworx.io/disable-storage-class | When applied to a StorageCluster object and set to true, this annotation instructs the Portworx Operator to disable and remove the default storage classes created during Portworx setup. For example: portworx.io/disable-storage-class: "true" |

Footnotes

  1. As an example, here's how you can enable the CSI feature.

    For Operator 1.8 and higher:

    spec:
      csi:
        enabled: true
        installSnapshotController: false

    For Operator 1.7 and earlier:

    spec:
      featureGates:
        CSI: "true"

    Please note that you can also use CSI: "True" or CSI: "1".

  2. The following example configures custom annotations. Change <custom-domain/custom-key>: <custom-val> to whatever key: val pairs you wish to provide.

    spec:
      metadata:
        annotations:
          pod/storage:
            <custom-domain/custom-key>: <custom-val>
          service/portworx-api:
            <custom-domain/custom-key>: <custom-val>
          service/portworx-service:
            <custom-domain/custom-key>: <custom-val>
          service/portworx-kvdb-service:
            <custom-domain/custom-key>: <custom-val>

    Note that StorageCluster.spec.metadata.annotations is different from StorageCluster.metadata.annotations. Currently, custom annotations are supported on the following types of components:

    | Type | Components |
    |---|---|
    | Pod | storage pods |
    | Service | portworx-api, portworx-service, portworx-kvdb-service |
  3. The following example configures labels for the portworx-api service. Change <custom-label-key>: <custom-val> to whatever key: val pairs you wish to provide.

    spec:
      metadata:
        labels:
          service/portworx-api:
            <custom-label-key>: <custom-val>

    Note that StorageCluster.spec.metadata.labels is different from StorageCluster.metadata.labels. Currently, custom labels are only supported on the portworx-api service.

  4. Note that Portworx ignores this field if you specify the storage devices using the spec.storage.devices field.
