Version: 25.8

What is PX-CSI

Portworx CSI (PX-CSI) is a lightweight storage orchestration solution that uses the Kubernetes Container Storage Interface (CSI) framework to integrate the following Pure Storage solutions:

  • FlashArray, including Pure Cloud Block Store (CBS)
  • FlashBlade

PX-CSI orchestrates the provisioning, management, and accessibility of volumes through the following Kubernetes API objects:

  • PersistentVolumeClaim (PVC): Requests and consumes persistent storage
  • VolumeSnapshot: Captures point-in-time copies of volumes for backup or restore
  • CustomResource: Manages volume and snapshot metadata using the PureVolume and PureSnapshot custom resources (CRs), respectively.
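
As a quick illustration, the PVC object above is how applications request storage through the driver. The following sketch shows a StorageClass and a PVC that uses it; the provisioner name, StorageClass name, and parameters here are assumptions for illustration — consult your deployment for the actual values.

```yaml
# Hypothetical StorageClass backed by the PX-CSI driver.
# Provisioner name and parameters are illustrative, not authoritative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-csi-block
provisioner: pxd.portworx.com     # assumed PX-CSI driver name
parameters:
  backend: "pure_block"           # assumed parameter selecting FlashArray
---
# A PVC requesting a 10Gi volume from the class above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: px-csi-block
  resources:
    requests:
      storage: 10Gi
```

When this PVC is created, the driver provisions a matching volume on the backend and binds a PersistentVolume to the claim.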

PX-CSI runs on Kubernetes nodes and manages the operations required to ensure volumes are available and accessible to applications.

PX-CSI architecture

PX-CSI components run as pods inside the cluster and interact with the Kubernetes API server. PX-CSI is composed of two types of pods:

  • The Controller plugin pod, which runs as a Kubernetes Deployment with three replicas and manages cluster-wide storage operations such as provisioning and snapshots.
  • The Node plugin pod, which runs as a Kubernetes DaemonSet on every worker node and manages volume lifecycle operations locally.

PX-CSI stores volume metadata in Kubernetes by using custom resource (CR) objects. These CRs persist information such as backend storage references, serial IDs, and endpoints in etcd, so that the driver can manage volumes without relying on an external key-value database (KVDB).

The following diagram shows how PX-CSI pods are distributed across a Kubernetes cluster.

Portworx CSI Architecture

Controller plugin pod

The Controller plugin pod implements the CSI Controller APIs. It manages create, update, and delete operations on volumes by interacting directly with FlashArray and FlashBlade. For high availability, it is deployed as a Deployment with three replicas by default.

The Controller plugin pod includes:

  • External containers (community-maintained):

    • csi-provisioner: Watches for PVC requests and triggers volume creation or deletion.
    • csi-resizer: Watches for PVC resize requests and calls the PX-CSI controller plugin to expand backend volumes.
    • csi-snapshotter: Handles creation and deletion of snapshots.
    • snapshot-controller: Manages VolumeSnapshot objects cluster-wide and ensures consistency between snapshot CRs and backend snapshots.
    • liveness-probe: Invokes the Probe API of the controller plugin to check health and report status to Kubernetes.
    • csi-attacher: Manages attachment and detachment of volumes to nodes.

  • PX-CSI controller plugin:

    • Implements the CSI Controller Plugin API defined by the CSI spec.
    • Provisions new volumes on FlashArray and new file systems on FlashBlade, including snapshot management.
    • Creates and deletes backend hosts when Kubernetes nodes require access to volumes.
    • Returns connection details (such as LUNs, NQNs, or NFS endpoints) to PX-CSI node plugins.
    • Persists volume and backend metadata using Kubernetes CustomResources (PureVolume, StorageBackend), replacing KVDB for durable state management.
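
The snapshot sidecars listed above operate on the standard Kubernetes snapshot API. A minimal VolumeSnapshot request might look like the following sketch; the VolumeSnapshotClass and PVC names are illustrative assumptions.

```yaml
# Snapshot request handled by csi-snapshotter and snapshot-controller.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: px-csi-snapclass   # illustrative class name
  source:
    persistentVolumeClaimName: app-data       # illustrative PVC to snapshot
```

The snapshot-controller reconciles this object cluster-wide, while csi-snapshotter calls the PX-CSI controller plugin to create the corresponding snapshot on the backend.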

Node plugin pod

The Node plugin pod runs as a DaemonSet, which ensures one instance runs on every Kubernetes worker node. It is responsible for making volumes provisioned by the controller available to application pods on that node.

The Node plugin pod includes:

  • External containers (community-maintained):
    • node-driver-registrar: Registers the PX-CSI driver with kubelet so that the kubelet knows which volumes the node can handle
    • liveness-probe: Invokes the Probe API of the node plugin to check health and reports the status back to Kubernetes
  • PX-CSI node-plugin:
    • Implements the CSI Node Plugin API to support staging, publishing, expanding, and unpublishing volumes
    • Logs into iSCSI or NVMe targets, or connects to NFS endpoints, depending on the storage backend
    • Mounts block devices or file systems to the appropriate pod path
    • Expands file systems when PVCs are resized
    • Cleans up mounts and detaches volumes when pods are deleted
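
When a pod that references a PX-CSI-backed PVC is scheduled to a node, kubelet invokes the node plugin's staging and publishing calls to attach and mount the volume at the pod's mount path. A minimal consuming pod could look like this sketch (the PVC and image names are illustrative):

```yaml
# Pod consuming a PX-CSI-backed PVC; the node plugin on this pod's node
# performs the login, attach, and mount operations described above.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data   # illustrative PVC name
```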

Custom resources

PX-CSI uses Kubernetes-native custom resource (CR) objects to store the metadata of provisioned volumes and snapshots. This removes the dependency on external databases and ensures that all metadata is stored in etcd alongside other cluster state.

The key CustomResource objects are:

  • PureVolume CR: Stores metadata for each provisioned volume, including:

    • Storage backend with management endpoints
    • Serial ID of the storage device
    • Device realm and FlashArray pod name
    • iSCSI, NVMe, and FC portals
    • NFS endpoint
    • UUID of the volume

    This resource maps PVCs to physical volumes. PX-CSI creates a PureVolume CR during the CreateVolume API call and deletes it during DeleteVolume. Other operations that use this resource include ControllerPublishVolume, ControllerUnpublishVolume, CreateSnapshot, and DeleteSnapshot.
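
To make the fields above concrete, a PureVolume CR might resemble the following sketch. The API group, field names, and values here are hypothetical — they illustrate the metadata listed above, not the actual schema.

```yaml
# Hypothetical PureVolume CR sketch; all names and values are illustrative.
apiVersion: portworx.io/v1           # assumed API group
kind: PureVolume
metadata:
  name: pvc-1b2c3d4e                 # typically derived from the PV name
spec:
  backend:
    type: FlashArray
    managementEndpoint: 10.0.0.10    # backend management endpoint
  serialID: "B1C2D3E4F5A6978800011234"  # serial ID of the storage device
  realm: realm-1                     # device realm
  faPodName: fa-pod-1                # FlashArray pod name
  portals:
    iscsi: ["10.0.1.10:3260"]        # iSCSI portals
    nvme: []                         # NVMe portals
    fc: []                           # FC portals
  nfsEndpoint: ""                    # set for FlashBlade file volumes
  uuid: 6d5f0b7a-1c2e-4a3b-9f8d-0e1f2a3b4c5d
```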

  • PureSnapshot CR: Stores metadata for each provisioned snapshot, including:

    • Storage backend with management endpoints
    • Snapshot UUID
    • Snapshot name
    • FlashArray pod name

    This resource maps and tracks snapshots across storage backend systems.
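
A PureSnapshot CR carrying the metadata listed above might look like this sketch; as with the PureVolume example, the API group and field names are hypothetical.

```yaml
# Hypothetical PureSnapshot CR sketch; all names and values are illustrative.
apiVersion: portworx.io/v1           # assumed API group
kind: PureSnapshot
metadata:
  name: snap-9f8e7d6c
spec:
  backend:
    type: FlashArray
    managementEndpoint: 10.0.0.10    # backend management endpoint
  snapshotUUID: 9f8e7d6c-5b4a-4c3d-8e2f-1a0b9c8d7e6f
  snapshotName: app-data-snap
  faPodName: fa-pod-1                # FlashArray pod name
```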

  • StorageNodeInitiator CR: Stores host-specific metadata required for volume attachment, including:

    • The iSCSI IQNs
    • The NVMe NQNs
    • The FC WWNs

    This resource ensures that backend hosts are correctly configured for each Kubernetes node when the node driver registers with the Kubernetes API server.
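
A StorageNodeInitiator CR recording the initiator identities above might resemble the following sketch, one per Kubernetes node; the API group and field names are hypothetical, and the IQN, NQN, and WWN values are placeholders.

```yaml
# Hypothetical StorageNodeInitiator CR sketch; names and values are illustrative.
apiVersion: portworx.io/v1                      # assumed API group
kind: StorageNodeInitiator
metadata:
  name: worker-node-1                           # one CR per Kubernetes node
spec:
  iqns:
    - iqn.1994-05.com.redhat:worker-node-1      # iSCSI initiator IQN
  nqns:
    - nqn.2014-08.org.nvmexpress:uuid:6d5f0b7a-1c2e-4a3b-9f8d-0e1f2a3b4c5d  # NVMe NQN
  wwns:
    - "10:00:00:00:c9:aa:bb:cc"                 # FC WWN
```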