StorageCluster
The Portworx cluster configuration is specified by a Kubernetes CRD (CustomResourceDefinition) called StorageCluster. The StorageCluster object acts as the definition of the Portworx cluster and provides a Kubernetes-native experience: you can manage your Portworx cluster just like any other application running on Kubernetes. That is, if you create or edit the StorageCluster object, the operator creates or edits the Portworx cluster in the background.
To generate a StorageCluster spec customized for your environment, point your browser to Portworx Central and click "Install and Run" to start the Portworx spec generator. The wizard walks you through all the steps needed to create a StorageCluster spec customized for your environment.
Using the Portworx spec generator is the recommended way of generating a StorageCluster spec. However, if you want to write the StorageCluster spec manually, refer to the StorageCluster Examples and StorageCluster Schema sections below.
StorageCluster Examples
This section provides a few examples of common Portworx configurations you can use for manually configuring your Portworx cluster. Update the default values in these files to match your environment.
- Portworx with internal KVDB, configured to use all unused devices on the system.

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
  spec:
    image: portworx/oci-monitor:3.0
    kvdb:
      internal: true
    storage:
      useAll: true
  ```
- Portworx with external ETCD and Stork as default scheduler.

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
  spec:
    image: portworx/oci-monitor:3.0
    kvdb:
      endpoints:
      - etcd:http://etcd-1.net:2379
      - etcd:http://etcd-2.net:2379
      - etcd:http://etcd-3.net:2379
      authSecret: px-kvdb-auth
    stork:
      enabled: true
      args:
        health-monitor-interval: "100"
        webhook-controller: "true"
  ```
- Portworx with Security enabled.

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
  spec:
    image: portworx/oci-monitor:3.0
    security:
      enabled: true
  ```
- Portworx with Security enabled, guest access disabled, a custom self-signed issuer/secret location, and a five-day token lifetime.

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
  spec:
    image: portworx/oci-monitor:3.0
    security:
      enabled: true
      auth:
        guestAccess: 'Disabled'
        selfSigned:
          issuer: 'openstorage.io'
          sharedSecret: 'px-shared-secret'
          tokenLifetime: '5d'
  ```
- Portworx with update and delete strategies, and placement rules.

  Note: From Kubernetes version 1.24 and newer, the label key node-role.kubernetes.io/master is replaced by node-role.kubernetes.io/control-plane.

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
  spec:
    image: portworx/oci-monitor:3.0
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 20%
    deleteStrategy:
      type: UninstallAndWipe
    placement:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: px/enabled
              operator: NotIn
              values:
              - "false"
            - key: node-role.kubernetes.io/control-plane
              operator: DoesNotExist
            - key: node-role.kubernetes.io/worker
              operator: Exists
      tolerations:
      - key: infra/node
        operator: Equal
        value: "true"
        effect: NoExecute
  ```
- Portworx with custom image registry, network interfaces, and miscellaneous options.

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
  spec:
    image: portworx/oci-monitor:3.0
    imagePullPolicy: Always
    imagePullSecret: regsecret
    customImageRegistry: docker.private.io/repo
    network:
      dataInterface: eth1
      mgmtInterface: eth2
    secretsProvider: vault
    runtimeOptions:
      num_io_threads: "10"
    env:
    - name: VAULT_ADDRESS
      value: "http://10.0.0.1:8200"
  ```
- Portworx with node-specific overrides, using different devices (or no devices) on different sets of nodes.

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
  spec:
    image: portworx/oci-monitor:3.0
    storage:
      devices:
      - /dev/sda
      - /dev/sdb
    nodes:
    - selector:
        labelSelector:
          matchLabels:
            <custom-key>: "<custom-value>"
      storage:
        devices:
        - /dev/nvme1
        - /dev/nvme2
    - selector:
        labelSelector:
          matchLabels:
            <custom-key>: "<custom-value>"
      storage:
        devices: []
  ```

  Replace <custom-key>: "<custom-value>" with the node label that you have added to the node in your cluster. For example, if you have labeled your node as px/storage: "nvme" to specify that the node uses NVMe drives, you can use that key-value pair, where custom-key is px/storage and custom-value is nvme.
- Portworx with a cluster domain defined.

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
    annotations:
      portworx.io/misc-args: "-cluster_domain example-cluster-domain-name"
  ```
- Portworx with an HTTPS proxy configured so that telemetry can share cluster diagnostics and call-home data with the Pure1 cloud:

  ```yaml
  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: portworx
    namespace: <px-namespace>
  spec:
    image: portworx/oci-monitor:3.0.0
    env:
    - name: PX_HTTP_PROXY
      value: "http://<IP:port>"
    - name: PX_HTTPS_PROXY
      value: "http://<IP:port>"
  ```

  If your proxy requires authentication, you can use http://user:password@<IP:port>.
StorageCluster Schema
This section explains the fields used to configure the StorageCluster object.
Field | Description | Type | Default |
---|---|---|---|
spec. image | Specifies the Portworx monitor image. | string | None |
spec. imagePullPolicy | Specifies the image pull policy for all the images deployed by the operator. It can take one of the following values: Always or IfNotPresent | string | Always |
spec. imagePullSecret | If Portworx pulls images from a secure repository, you can use this field to pass it the name of the secret. Note that the secret should be in the same namespace as the StorageCluster object. | string | None |
spec. customImageRegistry | The custom container registry server Portworx uses to fetch the Docker images. You may include the repository as well. | string | None |
spec. secretsProvider | The name of the secrets provider Portworx uses to store your credentials. To use features like cloud snapshots or volume encryption, you must configure a secret store provider. Refer to the Secret store management page for more details. | string | None |
spec. runtimeOptions | A collection of key-value pairs that overwrites the runtime options. | map[string]string | None |
spec. security | An object for specifying PX-Security configurations. Refer to the Operator Security page for more details. | object | None |
spec. featureGates | A collection of key-value pairs specifying which Portworx features should be enabled or disabled. 1 | map[string]string | None |
spec. env[] | A list of Kubernetes like environment variables. Similar to how environment variables are provided in Kubernetes, you can directly provide values to Portworx or import them from a source like a Secret , ConfigMap , etc. | []object | None |
spec. metadata. annotations | A map of components and custom annotations. 2 | map[string]map[string]string | None |
spec. metadata. labels | A map of components and custom labels. 3 | map[string]map[string]string | None |
spec. resources. requests. cpu | Specifies the cpu that the Portworx container requests; for example: "4000m" | string | None |
spec. resources. requests. memory | Specifies the memory that the Portworx container requests; for example: "4Gi" | string | None |
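For example, here is a minimal sketch combining several of the fields above: an environment variable imported from a Kubernetes Secret and explicit resource requests for the Portworx container. The Secret name px-vault-token and the variable VAULT_TOKEN are illustrative placeholders, not values required by Portworx.

```yaml
spec:
  image: portworx/oci-monitor:3.0
  imagePullPolicy: IfNotPresent
  env:
  # Imported from a Secret, as with regular Kubernetes pod environment variables.
  - name: VAULT_TOKEN               # placeholder variable name
    valueFrom:
      secretKeyRef:
        name: px-vault-token        # placeholder Secret in the StorageCluster namespace
        key: token
  resources:
    requests:
      cpu: "4000m"
      memory: "4Gi"
```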
KVDB configuration
This section explains the fields used to configure Portworx with a KVDB. Note that, if you don't specify the endpoints, the operator starts Portworx with the internal KVDB.
Field | Description | Type | Default |
---|---|---|---|
spec. kvdb. internal | Specifies if Portworx starts with the internal KVDB. | boolean | true |
spec. kvdb.endpoints[] | A list of endpoints for your external key-value database like ETCD. This field takes precedence over the spec.kvdb.internal field. That is, if you specify the endpoints, Portworx ignores the spec.kvdb.internal field and it uses the external KVDB. | []string | None |
spec. kvdb. authSecret | Indicates the name of the secret Portworx uses to authenticate against your KVDB. The secret must be placed in the same namespace as the StorageCluster object. The secret should provide the following information: - username (optional) - password (optional) - kvdb-ca.crt (the CA certificate) - kvdb.key (certificate key) - kvdb.crt (etcd certificate) - acl-token (optional) For example, create a directory called etcd-secrets, copy the files into it and create a secret with kubectl -n kube-system create secret generic px-kvdb-auth --from-file=etcd-secrets/ | string | None |
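As a sketch of the secret referenced by spec.kvdb.authSecret, the manifest below lists the expected keys. All file contents, the username, and the password are placeholders; include only the entries your etcd deployment actually requires.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: px-kvdb-auth
  namespace: <px-namespace>      # same namespace as the StorageCluster object
type: Opaque
stringData:
  username: <etcd-username>      # optional
  password: <etcd-password>      # optional
  kvdb-ca.crt: |
    <contents of the CA certificate>
  kvdb.key: |
    <contents of the certificate key>
  kvdb.crt: |
    <contents of the etcd certificate>
  acl-token: <acl-token>         # optional
```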
Storage configuration
This section provides details about the fields used to configure the storage for your Portworx cluster. If you don't specify a device, the operator sets the spec.storage.useAll field to true.
Field | Description | Type | Default |
---|---|---|---|
spec. storage. useAll | If set to true , Portworx uses all available, unformatted, and unpartitioned devices. 4 | boolean | true |
spec. storage. useAllWithPartitions | If set to true , Portworx uses all the available and unformatted devices. 4 | boolean | false |
spec. storage. forceUseDisks | If set to true , Portworx uses a device even if there's a file system on it. Note that Portworx may wipe the drive before using it. | boolean | false |
spec. storage. devices[] | Specifies the list of devices Portworx should use. | []string | None |
spec. storage. cacheDevices[] | Specifies the list of cache devices Portworx should use. | []string | None |
spec. storage. journalDevice | Specifies the device Portworx uses for journaling. | string | None |
spec. storage. systemMetadataDevice | Indicates the device Portworx uses to store metadata. For better performance, specify a system metadata device when using Portworx with the internal KVDB. | string | None |
spec. storage. kvdbDevice | Specifies the device Portworx uses to store internal KVDB data. | string | None |
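For illustration, the sketch below pins Portworx to two data devices and dedicates separate devices for journaling and the internal KVDB. The device paths are hypothetical and must match the devices present on your nodes.

```yaml
spec:
  storage:
    devices:
    - /dev/sdb                # data devices (example paths)
    - /dev/sdc
    journalDevice: /dev/sdd   # dedicated journal device
    kvdbDevice: /dev/sde      # dedicated internal KVDB device
```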
Cloud storage configuration
This section explains the fields used to configure Portworx with cloud storage. Once cloud storage is configured, Portworx manages the cloud disks automatically based on the provided specs. Note that the spec.storage fields take precedence over the fields presented in this section; make sure the spec.storage fields are empty when configuring Portworx with cloud storage.
Field | Description | Type | Default |
---|---|---|---|
spec. cloudStorage. provider | Specifies the cloud provider name, such as: pure, azure, aws, gce, vsphere. | string | None |
spec. cloudStorage. deviceSpecs[] | A list of the specs for your cloud storage devices. Portworx creates a cloud disk for every device. | []string | None |
spec. cloudStorage. journalDeviceSpec | Specifies the cloud device Portworx uses for journaling. | string | None |
spec. cloudStorage. systemMetadataDeviceSpec | Indicates the cloud device Portworx uses for metadata. For performance, specify a system metadata device when using Portworx with the internal KVDB. | string | None |
spec. cloudStorage. kvdbDeviceSpec | Specifies the cloud device Portworx uses for an internal KVDB. | string | None |
spec. cloudStorage. maxStorageNodesPerZone | Indicates the maximum number of storage nodes per zone. If this number is reached, and a new node is added to the zone, Portworx doesn't provision drives for the new node. Instead, Portworx starts the node as a compute-only node. | uint32 | None |
spec. cloudStorage. maxStorageNodes | Specifies the maximum number of storage nodes. If this number is reached, and a new node is added, Portworx doesn't provision drives for the new node. Instead, Portworx starts the node as a compute-only node. As a best practice, it is recommended to use the maxStorageNodesPerZone field. | uint32 | None |
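As an example, the following sketch provisions cloud drives on AWS. The type=gp3,size=150 device spec format is shown as an assumption of a typical AWS-style spec; check the drive spec format for your provider before using it.

```yaml
spec:
  cloudStorage:
    provider: aws
    deviceSpecs:
    - type=gp3,size=150             # one cloud disk per storage node (AWS-style spec)
    kvdbDeviceSpec: type=gp3,size=32
    maxStorageNodesPerZone: 3       # additional nodes in a zone start as compute-only
```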
Network configuration
This section describes the fields used to configure the network settings. If these fields are not specified, Portworx auto-detects the network interfaces.
Field | Description | Type | Default |
---|---|---|---|
spec. network. dataInterface | Specifies the network interface Portworx uses for data traffic. | string | None |
spec. network. mgmtInterface | Indicates the network interface Portworx uses for control plane traffic. | string | None |
Volume configuration
This section describes the fields used to configure custom volume mounts for Portworx pods.
Field | Description | Type | Default |
---|---|---|---|
spec. volumes[]. name | Unique name for the volume. | string | None |
spec. volumes[]. mountPath | Path within the Portworx container at which the volume should be mounted. Must not contain the ':' character. | string | None |
spec. volumes[]. mountPropagation | Determines how mounts are propagated from the host to container and the other way around. | string | None |
spec. volumes[]. readOnly | Volume is mounted read-only if true, read-write otherwise. | boolean | false |
spec. volumes[]. [secret|configMap|hostPath] | Specifies the location and type of the mounted volume. This is similar to the VolumeSource schema of a Kubernetes pod volume. | object | None |
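For instance, here is a sketch that mounts a ConfigMap and a host directory into the Portworx pods. The volume names, ConfigMap name, and mount paths are illustrative placeholders.

```yaml
spec:
  volumes:
  - name: custom-ca                 # unique volume name (placeholder)
    mountPath: /etc/pwx/custom-ca   # path inside the Portworx container (placeholder)
    readOnly: true
    configMap:
      name: px-custom-ca            # placeholder ConfigMap
  - name: host-certs
    mountPath: /etc/ssl/host-certs  # placeholder path
    readOnly: true
    hostPath:
      path: /etc/ssl/certs
```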
Placement rules
You can use the placement rules to specify where Portworx should be deployed. By default, the operator deploys Portworx on all worker nodes.
Field | Description | Type | Default |
---|---|---|---|
spec. placement. nodeAffinity | Use this field to restrict Portworx to certain nodes. It works similarly to the Kubernetes node affinity feature. | object | None |
spec. placement. tolerations[] | Specifies a list of tolerations that will be applied to Portworx pods so that they can run on nodes with matching taints. | []object | None |
For Operator 1.8 and higher, if worker nodes have the topology labels topology.kubernetes.io/region or topology.kubernetes.io/zone, the Operator deploys the Stork, Stork scheduler, CSI, and PVC controller pods with topologySpreadConstraints to distribute pod replicas across Kubernetes failure domains.
Update strategy
This section provides details on how to specify an update strategy.
Field | Description | Type | Default |
---|---|---|---|
spec. updateStrategy. type | Indicates the update strategy. Currently, Portworx supports the following update strategies: RollingUpdate and OnDelete. | object | RollingUpdate |
spec. updateStrategy. rollingUpdate. maxUnavailable | Similarly to how Kubernetes rolling update strategies work, this field specifies how many nodes can be down at any given time. Note that you can specify this as a number or percentage. Note: Portworx by Pure Storage recommends keeping the maxUnavailable value as 1. Changing this value could potentially lead to volume and Portworx quorum loss during the upgrade process. | int or string | 1 |
spec. updateStrategy. rollingUpdate. minReadySeconds | During rolling updates, this flag will wait for all pods to be ready for at least minReadySeconds before updating the next batch of pods, where the size of the pod batch is specified through the spec.updateStrategy.rollingUpdate.maxUnavailable flag. | string | 1 |
spec. autoUpdateComponents | Indicates the update strategy for the component images (such as Stork, Autopilot, Prometheus, and so on). Portworx supports several auto update strategies for the component images. | string | None |
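Putting these fields together, here is a rolling update sketch that follows the recommended maxUnavailable value of 1 and waits between batches; the 300-second wait is an arbitrary illustrative value.

```yaml
spec:
  updateStrategy:
    type: RollingUpdate        # or OnDelete
    rollingUpdate:
      maxUnavailable: 1        # recommended value
      minReadySeconds: 300     # illustrative wait before updating the next batch
```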
Delete/Uninstall strategy
This section provides details on how to specify an uninstall strategy for your Portworx cluster.
Field | Description | Type | Default |
---|---|---|---|
spec. deleteStrategy. type | Indicates what happens when the Portworx StorageCluster object is deleted. By default, there is no delete strategy, which means only the Kubernetes components deployed by the operator are removed. The Portworx systemd service continues to run, and the Kubernetes applications using the Portworx volumes are not affected. Portworx supports the following delete strategies: - Uninstall - Removes all Portworx components from the system and leaves the devices and KVDB intact. - UninstallAndWipe - Removes all Portworx components from the system and wipes the devices and metadata from KVDB. | string | None |
Monitoring configuration
This section provides details on how to enable monitoring for Portworx.
Field | Description | Type | Default |
---|---|---|---|
spec. monitoring. prometheus. enabled | Enables or disables a Prometheus cluster. | boolean | false |
spec. monitoring. prometheus. exportMetrics | Exposes the Portworx metrics to an external or operator-deployed Prometheus. | boolean | false |
spec. monitoring. prometheus. alertManager. enabled | Enables Prometheus Alertmanager. | boolean | None |
spec. monitoring. prometheus. remoteWriteEndpoint | Specifies the remote write endpoint for Prometheus. | string | None |
spec. monitoring. telemetry. enabled | Enables telemetry and the metrics collector. | boolean | false |
spec. monitoring. prometheus. resources | Provides the ability to configure Prometheus resource usage such as memory and CPU usage. | object | Default limits: CPU 1, memory 800M, and ephemeral storage 5G |
spec. monitoring. prometheus. replicas | Specifies the number of Prometheus replicas that will be deployed. | int | 1 |
spec. monitoring. prometheus. retention | Specifies the time period for which Prometheus retains historical metrics. | string | 24h |
spec. monitoring. prometheus. retentionSize | Specifies the maximum amount of disk space that Prometheus can use to store historical metrics. | string | None |
spec. monitoring. prometheus. storage | Specifies the storage type that Prometheus will use for storing data. If you set the storage type to PVCs, do not set the runAsGroup or fsGroup option for the spec.monitoring.prometheus.securityContext flag. | object | None |
spec. monitoring. prometheus. volumes | Specifies additional volumes to the output Prometheus StatefulSet. These specified volumes will be appended to other volumes generated as a result of the spec.monitoring.prometheus.storage spec. | object | None |
spec. monitoring. prometheus. volumeMounts | Specifies additional VolumeMounts on the output Prometheus StatefulSet definition. These VolumeMounts will be appended to other VolumeMounts in the Prometheus container generated as a result of the spec.monitoring.prometheus.storage spec. | object | None |
spec. monitoring. prometheus. securityContext. runAsNonRoot | Flag that indicates that the container must run as a non-root user. | boolean | None |
spec. monitoring. grafana. enabled | Enables or disables a Grafana instance with Portworx dashboards. | boolean | false |
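For example, here is a sketch that enables the operator-deployed Prometheus with Alertmanager, exports metrics, turns on telemetry, and deploys Grafana dashboards; the field names and values come from the table above.

```yaml
spec:
  monitoring:
    telemetry:
      enabled: true
    prometheus:
      enabled: true
      exportMetrics: true
      retention: 24h
      alertManager:
        enabled: true
    grafana:
      enabled: true
```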
Stork configuration
This section describes the fields used to manage the Stork deployment through the Portworx operator.
Field | Description | Type | Default |
---|---|---|---|
spec. stork. enabled | Enables or disables Stork at any given time. | boolean | true |
spec. stork. image | Specifies the Stork image. | string | None |
spec. stork. lockImage | Enables locking Stork to the given image. When set to false, the Portworx Operator will overwrite the Stork image to a recommended image for given Portworx version. | boolean | false |
spec. stork. args | A collection of key-value pairs that overrides the default Stork arguments or adds new arguments. | map[string]string | None |
spec. stork. args.admin-namespace | Sets up a cluster's admin namespace for migration. Refer to admin namespace for more information. | string | kube-system |
spec. stork. args.verbose | Set to true to enable verbose logging. | boolean | false |
spec. stork. args.webhook-controller | Set to true to make Stork the default scheduler for workloads using Portworx volumes. | boolean | false |
spec. stork. env[] | A list of Kubernetes like environment variables passed to Stork. | []object | None |
spec. stork. volumes[] | A list of volumes passed to Stork pods. The schema is similar to the top-level volumes. | []object | None |
CSI configuration
This section provides details on how to configure CSI for the StorageCluster. Note this is for Operator 1.8 and higher only.
Field | Description | Type | Default |
---|---|---|---|
spec. csi. enabled | Flag indicating whether CSI needs to be installed for the storage cluster. | boolean | true |
spec. csi. installSnapshotController | Flag indicating whether CSI Snapshot Controller needs to be installed for the storage cluster. | boolean | false |
Autopilot configuration
This section provides details on how to deploy and manage Autopilot.
Field | Description | Type | Default |
---|---|---|---|
spec. autopilot. enabled | Enables or disables Autopilot at any given time. | boolean | false |
spec. autopilot. image | Specifies the Autopilot image. | string | None |
spec. autopilot. lockImage | Enables locking Autopilot to the given image. When set to false, the Portworx Operator will overwrite the Autopilot image to a recommended image for given Portworx version. | boolean | false |
spec. autopilot. providers[] | List of data providers for Autopilot. | []object | None |
spec. autopilot. providers[]. name | Unique name of the data provider. | string | None |
spec. autopilot. providers[]. type | Type of the data provider. For instance, prometheus . | string | None |
spec. autopilot. providers[]. params | Map of key-value params for the provider. | map[string]string | None |
spec. autopilot. args | A collection of key-value pairs that overrides the default Autopilot arguments or adds new arguments. | map[string]string | None |
spec. autopilot. env[] | A list of Kubernetes like environment variables passed to Autopilot. | []object | None |
spec. autopilot. volumes[] | A list of volumes passed to Autopilot pods. The schema is similar to the top-level volumes. | []object | None |
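As a sketch, the configuration below enables Autopilot with a single Prometheus data provider. The provider name default and the URL http://prometheus:9090 are assumptions; point the url parameter at the Prometheus endpoint in your cluster.

```yaml
spec:
  autopilot:
    enabled: true
    providers:
    - name: default                     # placeholder provider name
      type: prometheus
      params:
        url: http://prometheus:9090     # placeholder Prometheus endpoint
```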
Node specific configuration
This section provides details on how to override certain cluster level configuration for an individual node or a group of nodes.
Field | Description | Type | Default |
---|---|---|---|
spec. nodes[] | A list of node specific configurations. | []object | None |
spec. nodes[]. selector | Selector for the node(s) to which the configuration in this section will be applied. | object | None |
spec. nodes[]. selector. nodeName | Name of the node to which this configuration will be applied. Node name takes precedence over selector.labelSelector . | string | None |
spec. nodes[]. selector. labelSelector | Kubernetes style label selector for nodes to which this configuration will be applied. | object | None |
spec. nodes[]. network | Specify network configuration for the selected nodes, similar to the one specified at cluster level. If this network configuration is empty, then cluster level values are used. | object | None |
spec. nodes[]. storage | Specify storage configuration for the selected nodes, similar to the one specified at cluster level. If some of the config is left empty, the cluster level storage values are passed to the nodes. If you don't want to use a cluster level value and set the field to empty, then explicitly set an empty value for it so no value is passed to the nodes. For instance, set spec.nodes[0].storage.kvdbDevice: "" , to prevent using the KVDB device for the selected nodes. | object | None |
spec. nodes[]. env | Specify extra environment variables for the selected nodes. Cluster level environment variables are combined with these and sent to the selected nodes. If the same variable is present at the cluster level, the node level variable takes precedence. | object | None |
spec. nodes[]. runtimeOptions | Specify runtime options for the selected nodes. If specified, cluster level options are ignored and only these runtime options are passed to the nodes. | object | None |
Security configuration
Field | Description | Type | Default |
---|---|---|---|
spec. security. enabled | Enables or disables Security at any given time. | boolean | false |
spec. security. auth. guestAccess | Determines how the guest role will be updated in your cluster. The options are Enabled, Disabled, or Managed. Managed will cause the operator to ignore updating the system.guest role. Enabled and Disabled will allow or disable guest role access, respectively. | string | Enabled |
spec. security. auth. selfSigned. tokenLifetime | The length of time for which operator-generated tokens remain valid before being refreshed. | string | 24h |
spec. security. auth. selfSigned. issuer | The issuer name to be used when configuring PX-Security. This field maps to the PORTWORX_AUTH_JWT_ISSUER environment variable in the Portworx Daemonset. | string | operator.portworx.io |
spec. security. auth. selfSigned. sharedSecret | The Kubernetes secret name for retrieving and storing your shared secret. This field can be used to add a pre-existing shared secret or for customizing which secret name the operator will use for its auto-generated shared secret key. This field maps to the PORTWORX_AUTH_JWT_SHAREDSECRET environment variable in the Portworx Daemonset. | string | px-shared-secret |
StorageCluster Annotations
Annotation | Description |
---|---|
portworx.io/misc-args | Arguments that you specify in this annotation are passed to the portworx container verbatim. For example: portworx.io/misc-args: "-cluster_domain datacenter1 --tracefile-diskusage 5 -kvdb_cluster_size 5" Note that you cannot use = to specify the value of an argument. |
portworx.io/service-type | Annotation to configure the type of services created by the operator. For example, portworx.io/service-type: "LoadBalancer" specifies the LoadBalancer type for all services. For Operator 1.8.1 and higher, the value can be a list of service names with their corresponding type configurations, separated by ; . Any service not listed uses its default type. For example: portworx.io/service-type: "portworx-service:LoadBalancer;portworx-api:ClusterIP;portworx-kvdb-service:LoadBalancer" |
portworx.io/portworx-proxy | Portworx proxy is needed to allow Kubernetes to communicate with Portworx when using the in-tree driver. The proxy is automatically enabled if you run Portworx in a namespace other than kube-system and not using the default 9001 start port. You can set the annotation to false to disable the proxy. For example: portworx.io/portworx-proxy: "false" |
portworx.io/disable-storage-class | When applied to a StorageCluster object and set to true, this annotation instructs the Portworx Operator to disable and remove the default storage classes created during Portworx setup. For example: portworx.io/disable-storage-class: "true" |
portworx.io/health-check | An annotation created by the Operator to save the state of the health checks. If the health checks pass, the Operator writes a value of passed . If the health checks fail, the Operator writes a value of failed and reruns the checks periodically. You can control the health checks manually by setting the value to skip to bypass the health checks, or removing the annotation to instruct the Operator to rerun the checks immediately. |
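These annotations go on the StorageCluster object itself. For example, a minimal sketch using values taken from the table above:

```yaml
metadata:
  name: portworx
  namespace: <px-namespace>
  annotations:
    portworx.io/misc-args: "-cluster_domain datacenter1"
    portworx.io/service-type: "portworx-service:LoadBalancer;portworx-api:ClusterIP"
```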
Footnotes
- As an example, here's how you can enable the CSI feature.

  For Operator 1.8 and higher:

  ```yaml
  spec:
    csi:
      enabled: true
      installSnapshotController: false
  ```

  For Operator 1.7 and earlier:

  ```yaml
  spec:
    featureGates:
      CSI: "true"
  ```

  Please note that you can also use CSI: "True" or CSI: "1".
- The following example configures custom annotations. Change <custom-domain/custom-key>: <custom-val> to whatever key: val pairs you wish to provide.

  ```yaml
  spec:
    metadata:
      annotations:
        pod/storage:
          <custom-domain/custom-key>: <custom-val>
        service/portworx-api:
          <custom-domain/custom-key>: <custom-val>
        service/portworx-service:
          <custom-domain/custom-key>: <custom-val>
        service/portworx-kvdb-service:
          <custom-domain/custom-key>: <custom-val>
  ```

  Note that StorageCluster.spec.metadata.annotations is different from StorageCluster.metadata.annotations. Currently, custom annotations are supported on the following types of components:

  Type | Components |
  ---|---|
  Pod | storage pods |
  Service | portworx-api, portworx-service, portworx-kvdb-service |
- The following example configures labels for the portworx-api service. Change <custom-label-key>: <custom-val> to whatever key: val pairs you wish to provide.

  ```yaml
  spec:
    metadata:
      labels:
        service/portworx-api:
          <custom-label-key>: <custom-val>
  ```

  Note that StorageCluster.spec.metadata.labels is different from StorageCluster.metadata.labels. Currently, custom labels are only supported on the portworx-api service.
- Note that Portworx ignores this field if you specify the storage devices using the spec.storage.devices field.