StorageCluster

The Portworx CSI cluster configuration is defined by a Kubernetes CustomResourceDefinition (CRD) called StorageCluster. An instance of this object represents a single Portworx cluster.

The StorageCluster object offers a Kubernetes-native experience, allowing you to manage your Portworx cluster like any other Kubernetes application. When you create or modify the StorageCluster object, the Operator will automatically create or update the Portworx cluster in the background.

To generate a StorageCluster specification tailored to your environment, visit Portworx Central and click Install and Run. This will launch the Portworx specification generator, which will guide you through the steps to create a customized StorageCluster specification.

Using the Portworx specification generator is the recommended method for generating a StorageCluster specification. If you prefer to create the specification manually, refer to the StorageCluster Schema section.

StorageCluster schema

This section explains the fields used to configure the StorageCluster object.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.image` | Specifies the Portworx image. | string | None |
| `spec.imagePullPolicy` | Specifies the image pull policy for all the images deployed by the operator. It can take one of the following values: `Always` or `IfNotPresent`. | string | `Always` |
| `spec.imagePullSecret` | If Portworx pulls images from a secure repository, you can use this field to pass the name of the secret. Note that the secret must be in the same namespace as the StorageCluster object. | string | None |
| `spec.customImageRegistry` | The custom container registry server Portworx uses to fetch the Docker images. You may include the repository as well. | string | None |
| `spec.runtimeOptions` | A collection of key-value pairs that overwrites the runtime options. | map[string]string | None |
| `spec.env[]` | A list of Kubernetes-like environment variables. Similar to how environment variables are provided in Kubernetes, you can directly provide values to Portworx or import them from a source like a Secret or ConfigMap. | []object | None |
| `spec.metadata.annotations` | A map of components and custom annotations. | map[string]map[string]string | None |
| `spec.metadata.labels` | A map of components and custom labels. | map[string]map[string]string | None |
| `spec.resources.requests.cpu` | Specifies the CPU that the Portworx container requests; for example: `"4000m"`. | string | None |
| `spec.resources.requests.memory` | Specifies the memory that the Portworx container requests; for example: `"4Gi"`. | string | None |
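
For example, the fields above can be combined into a minimal StorageCluster manifest. This is a sketch: the image tag, namespace, environment variable, and resource values below are placeholders, not recommendations.

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system        # placeholder namespace
spec:
  image: portworx/oci-monitor:3.1.0   # placeholder image tag
  imagePullPolicy: Always
  env:
  - name: PX_LOGLEVEL           # hypothetical variable, for illustration only
    value: info
  resources:
    requests:
      cpu: "4000m"
      memory: "4Gi"
```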

KVDB configuration

This section describes the field used to configure Portworx with a KVDB. If you do not specify the endpoints, the operator will start Portworx with the internal KVDB.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.kvdb.internal` | Specifies whether Portworx starts with the internal KVDB. | boolean | true |
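
As a sketch, the default internal-KVDB configuration looks like this in the StorageCluster spec:

```yaml
spec:
  kvdb:
    internal: true
```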

Storage configuration for FlashBlade

This section describes the field used to configure storage for your Portworx cluster. If you don't specify a device, the operator sets the spec.storage.useAll field to true.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.storage.kvdbDevice` | Specifies the device Portworx uses to store internal KVDB data. | string | None |
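
For example, a dedicated KVDB device can be set as follows. The device path is a placeholder; use a device that actually exists on your nodes.

```yaml
spec:
  storage:
    kvdbDevice: /dev/sdb   # placeholder device path
```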

Cloud storage configuration

This section describes the field used to configure Portworx with cloud storage. Once configured, Portworx manages the cloud disks automatically based on the provided specifications. Note that the spec.storage fields take precedence over the fields in this section. Ensure the spec.storage fields are empty when configuring Portworx with cloud storage.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.cloudStorage.kvdbDeviceSpec` | Specifies the cloud device Portworx uses for an internal KVDB. | string | None |
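
As a sketch, a cloud KVDB device spec might look like the following. The exact spec string depends on your cloud provider; the value shown is a hypothetical AWS-style example, not a verified format.

```yaml
spec:
  cloudStorage:
    kvdbDeviceSpec: "type=gp3,size=150"   # hypothetical AWS-style spec, for illustration
```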

Network configuration

This section describes the fields used to configure the network settings. If these fields are not specified, Portworx auto-detects the network interfaces.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.network.dataInterface` | Specifies the network interface Portworx uses for data traffic. | string | None |
| `spec.network.mgmtInterface` | Indicates the network interface Portworx uses for control plane traffic. | string | None |
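
For example, to pin data and management traffic to specific interfaces (the interface names below are placeholders):

```yaml
spec:
  network:
    dataInterface: eth1   # placeholder interface names
    mgmtInterface: eth2
```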

Volume configuration

This section describes the fields used to configure custom volume mounts for Portworx pods.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.volumes[].name` | Unique name for the volume. | string | None |
| `spec.volumes[].mountPath` | Path within the Portworx container at which the volume should be mounted. Must not contain the `:` character. | string | None |
| `spec.volumes[].mountPropagation` | Determines how mounts are propagated from the host to the container and the other way around. | string | None |
| `spec.volumes[].readOnly` | Volume is mounted read-only if true, read-write otherwise. | boolean | false |
| `spec.volumes[].[secret\|configMap\|hostPath]` | Specifies the location and type of the mounted volume. This is similar to the VolumeSource schema of a Kubernetes pod volume. | object | None |
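
For example, a Secret can be mounted read-only into the Portworx container. The volume name, mount path, and secret name below are hypothetical, for illustration only.

```yaml
spec:
  volumes:
  - name: custom-certs            # hypothetical volume name
    mountPath: /etc/pwx/certs     # hypothetical mount path
    readOnly: true
    secret:
      secretName: px-certs        # hypothetical secret name
```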

Placement rules

You can use the placement rules to specify where Portworx should be deployed. By default, the operator deploys Portworx on all worker nodes.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.placement.nodeAffinity` | Use this field to restrict Portworx to certain nodes. It works similarly to the Kubernetes node affinity feature. | object | None |
| `spec.placement.tolerations[]` | Specifies a list of tolerations that will be applied to Portworx pods so that they can run on nodes with matching taints. | []object | None |
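
As a sketch, the placement rules use the standard Kubernetes node affinity and toleration schemas. The label key and taint below are hypothetical examples.

```yaml
spec:
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: px/enabled                    # hypothetical node label
            operator: NotIn
            values:
            - "false"
    tolerations:
    - key: node-role.kubernetes.io/infra       # hypothetical taint
      operator: Exists
      effect: NoSchedule
```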

Update strategy

This section provides details on how to specify an update strategy.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.updateStrategy.type` | Indicates the update strategy. Currently, Portworx supports the following update strategies: `RollingUpdate` and `OnDelete`. | object | `RollingUpdate` |
| `spec.updateStrategy.rollingUpdate.maxUnavailable` | Similar to how Kubernetes rolling update strategies work, this field specifies how many nodes can be down at any given time. You can specify this as a number or a percentage. Note: Portworx by Pure Storage recommends keeping the maxUnavailable value at 1. Changing this value could potentially lead to volume and Portworx quorum loss during the upgrade process. | int or string | 1 |
| `spec.updateStrategy.rollingUpdate.minReadySeconds` | During rolling updates, the operator waits for all pods to be ready for at least minReadySeconds before updating the next batch of pods, where the size of the pod batch is specified through the `spec.updateStrategy.rollingUpdate.maxUnavailable` flag. | string | 1 |
| `spec.autoUpdateComponents` | Indicates the update strategy for the component images (such as Stork, Autopilot, Prometheus, and so on). Portworx supports the following auto update strategies: `None` (updates the component images only when the Portworx image changes in `StorageCluster.spec.image`), `Once` (updates the component images once even if the Portworx image does not change, which is useful when the component images on the manifest server change due to bug fixes), and `Always` (regularly checks for updates on the manifest server and updates the component images if required). | string | None |
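
For example, the recommended rolling update settings combined with a one-time component update might look like this (the minReadySeconds value is a placeholder):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # recommended value
      minReadySeconds: 30    # placeholder wait time
  autoUpdateComponents: Once
```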

Delete strategy

This section provides details on how to specify a delete strategy for your Portworx cluster.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.deleteStrategy.type` | Indicates what happens when the Portworx StorageCluster object is deleted. By default, there is no delete strategy, which means only the Kubernetes components deployed by the operator are removed; the Portworx systemd service continues to run, and the Kubernetes applications using Portworx volumes are not affected. Portworx supports the following delete strategies: `Uninstall` (removes all Portworx components from the system and leaves the devices and KVDB intact) and `UninstallAndWipe` (removes all Portworx components from the system and wipes the devices and metadata from KVDB). | string | None |
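
For example, to fully remove Portworx, including device and KVDB data, when the StorageCluster object is deleted:

```yaml
spec:
  deleteStrategy:
    type: UninstallAndWipe   # wipes devices and KVDB metadata on deletion
```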

Monitoring configuration

This section provides details on how to enable monitoring for Portworx.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.monitoring.telemetry.enabled` | Enables telemetry and the metrics collector. | boolean | false |
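
For example, to turn on telemetry and metrics collection:

```yaml
spec:
  monitoring:
    telemetry:
      enabled: true
```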

CSI configuration

This section provides details on how to configure CSI for the StorageCluster. Note that these fields apply to Operator 1.8 and higher only.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.csi.enabled` | Flag indicating whether CSI needs to be installed for the storage cluster. | boolean | true |
| `spec.csi.installSnapshotController` | Flag indicating whether the CSI Snapshot Controller needs to be installed for the storage cluster. | boolean | false |
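
For example, to enable CSI along with the CSI Snapshot Controller:

```yaml
spec:
  csi:
    enabled: true
    installSnapshotController: true
```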

Node-specific configuration

This section provides details on how to override certain cluster-level configurations for individual nodes or groups of nodes.

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `spec.nodes[]` | A list of node-specific configurations. | []object | None |
| `spec.nodes[].selector` | Selector for the node(s) to which the configuration in this section will be applied. | object | None |
| `spec.nodes[].selector.nodeName` | Name of the node to which this configuration will be applied. Node name takes precedence over `selector.labelSelector`. | string | None |
| `spec.nodes[].selector.labelSelector` | Kubernetes-style label selector for nodes to which this configuration will be applied. | object | None |
| `spec.nodes[].network` | Specifies network configuration for the selected nodes, similar to the one specified at the cluster level. If this network configuration is empty, the cluster-level values are used. | object | None |
| `spec.nodes[].storage` | Specifies storage configuration for the selected nodes, similar to the one specified at the cluster level. If some of the configuration is left empty, the cluster-level storage values are passed to the nodes. If you don't want a cluster-level value to apply, explicitly set an empty value for it so no value is passed to the nodes. For instance, set `spec.nodes[0].storage.kvdbDevice: ""` to prevent using the KVDB device for the selected nodes. | object | None |
| `spec.nodes[].env` | Specifies extra environment variables for the selected nodes. Cluster-level environment variables are combined with these and sent to the selected nodes. If the same variable is present at the cluster level, the node-level variable takes precedence. | object | None |
| `spec.nodes[].runtimeOptions` | Specifies runtime options for the selected nodes. If specified, cluster-level options are ignored and only these runtime options are passed to the nodes. | object | None |
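
As a sketch, a node-level override selects nodes by label and then overrides the cluster-level settings. The label and interface name below are hypothetical; the empty `kvdbDevice` shows the explicit-empty-value pattern described above.

```yaml
spec:
  nodes:
  - selector:
      labelSelector:
        matchLabels:
          px/storage-tier: fast    # hypothetical node label
    network:
      dataInterface: eth1          # placeholder interface
    storage:
      kvdbDevice: ""               # explicitly disable the cluster-level KVDB device
```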

StorageCluster Annotations

| Annotation | Description |
|------------|-------------|
| `portworx.io/misc-args` | Arguments that you specify in this annotation are passed to the Portworx container verbatim. Note that you cannot use `=` to specify the value of an argument. Some of the arguments that you can specify are: `--oem px-csi` (specifies that Portworx will be installed with the CSI FA/FB license, determining the type of build and license for the installation), `-cluster_domain` (specifies the cluster domain), `-kvdb_cluster_size` (specifies the size of the KVDB cluster), `--memory` (specifies the memory in bytes that the Portworx service can consume), and `--cpus` (specifies the number of CPUs that the Portworx service can use). Example usage: `portworx.io/misc-args: "-cluster_domain datacenter1 --tracefile-diskusage 5 -kvdb_cluster_size 5 --memory 8589934592 --cpus 2"` |
| `portworx.io/service-type` | Configures the type of the services created by the operator. For example, `portworx.io/service-type: "LoadBalancer"` specifies the LoadBalancer type for all services. For Operator 1.8.1 and higher, the value can be a list of service names and corresponding type configurations separated by `;`; services not specified use their default type. For example: `portworx.io/service-type: "portworx-service:LoadBalancer;portworx-api:ClusterIP;portworx-kvdb-service:LoadBalancer"` |
| `portworx.io/disable-storage-class` | When applied to a StorageCluster object and set to `true`, this annotation instructs the Portworx Operator to disable and remove the default storage classes created during Portworx setup. For example: `portworx.io/disable-storage-class: "true"` |
| `portworx.io/health-check` | Annotation created by the Operator to save the state of the health checks. If the health checks pass, the Operator writes a value of `passed`. If the health checks fail, the Operator writes a value of `failed` and reruns the checks periodically. You can control the health checks manually by setting the value to `skip` to bypass them, or by removing the annotation to instruct the Operator to rerun the checks immediately. |
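
Taken together, these annotations are set on the StorageCluster metadata. The following sketch combines values from the table above; the cluster name and namespace are placeholders.

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                 # placeholder name
  namespace: kube-system           # placeholder namespace
  annotations:
    portworx.io/misc-args: "-kvdb_cluster_size 5 --memory 8589934592"
    portworx.io/service-type: "portworx-service:LoadBalancer;portworx-api:ClusterIP"
    portworx.io/disable-storage-class: "true"
```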