Version: 25.4.0

Expand or delete PVCs

Follow the instructions on this page to either expand or delete Persistent Volume Claims (PVCs). These actions are often necessary to manage storage efficiently based on the application's requirements. For instance, you might need to increase the storage capacity of a PVC to accommodate growing data, or delete unused PVCs to free up resources.

Expand a PVC

  1. Edit the PVC specification:
    Run the following command to open the PVC specification for editing. This step allows you to modify the PVC's configuration to reflect the required storage size.

    kubectl edit pvc <pvc-name> -n <pvc-namespace>
  2. Modify the storage field:
    Locate the resources.requests.storage field in the spec section and update it to the desired storage size. This change specifies the new storage capacity for the PVC. For example, update the storage to the desired capacity:

    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: sc-portworx-fa-direct-access
      volumeMode: Filesystem
      volumeName: pvc-7ba7c112-xxxx-xxxx
  3. Apply the changes:
    Save the modified specification. This action triggers Kubernetes to resize the PVC to match the updated storage request.
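The interactive edit in the steps above can also be scripted with `kubectl patch`; a minimal sketch, where the PVC name, namespace, and the 2Gi size are placeholder assumptions:

```shell
# Build a JSON merge patch that raises the storage request.
# PVC_NAME, PVC_NAMESPACE, and NEW_SIZE are placeholder assumptions.
PVC_NAME="my-pvc"
PVC_NAMESPACE="default"
NEW_SIZE="2Gi"

PATCH="{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"$NEW_SIZE\"}}}}"
echo "$PATCH"

# The following require cluster access, so they are shown commented out:
# kubectl patch pvc "$PVC_NAME" -n "$PVC_NAMESPACE" --type merge -p "$PATCH"
# Confirm the new capacity once the resize completes:
# kubectl get pvc "$PVC_NAME" -n "$PVC_NAMESPACE" -o jsonpath='{.status.capacity.storage}'
```

Note that expansion only succeeds if the PVC's StorageClass allows volume expansion, and that Kubernetes PVCs can only be grown, never shrunk.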

Delete a PVC

Run the following command to delete a PVC associated with a FlashArray Direct Access volume. Deleting unused PVCs helps optimize storage resource usage and avoid unnecessary costs.

kubectl delete pvc <pvc-name> -n <pvc-namespace>
persistentvolumeclaim "<pvc-name>" deleted
Important

Known limitation: For FA file systems, deleting a PVC doesn't delete the persistent volume (PV) in the FlashArray if the volume contains data and remains in the Released state. You must manually delete the PVs. For more information, see Delete persistent volumes in a released state in FA file systems.
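Before running the delete, it can help to confirm that no running pod still references the PVC. A sketch of that check, where a here-document stands in for live `kubectl` output and all names are hypothetical:

```shell
# List the PVC names mounted by pods in the namespace. With a live cluster this
# would come from something like:
#   kubectl get pods -n <ns> -o jsonpath='{range .items[*].spec.volumes[*]}{.persistentVolumeClaim.claimName}{"\n"}{end}'
# Here a here-doc stands in for that output (hypothetical claim names).
claims_in_use() {
cat <<'EOF'
fio-data-fio-0
fio-log-fio-0
EOF
}

PVC_NAME="fio-log-fio-1"   # hypothetical PVC you intend to delete
if claims_in_use | grep -qx "$PVC_NAME"; then
  echo "still in use; do not delete"
else
  echo "safe to delete"
fi
```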

Delete persistent volumes in a released state in FA file systems

To delete PVs in the Released state, follow these steps:

  1. List the PVs in the Released state:

    kubectl get pv | grep Released
    pvc-xxxxxxxx-xxxx-xxxx-xxxx-105a73740493   50Gi    RWO   Delete   Released   fio/fio-log-fio-0    fa-file-sc-v4-authsys   <unset>   41m
    pvc-xxxxxxxx-xxxx-xxxx-xxxx-3aefabb5a065   200Gi   RWO   Delete   Released   fio/fio-data-fio-1   fa-file-sc-v4-authsys   <unset>   40m
    pvc-xxxxxxxx-xxxx-xxxx-xxxx-2b76f4288bdc   50Gi    RWO   Delete   Released   fio/fio-log-fio-1    fa-file-sc-v4-authsys   <unset>   40m
    pvc-xxxxxxxx-xxxx-xxxx-xxxx-ce081914311c   200Gi   RWO   Delete   Released   fio/fio-data-fio-0   fa-file-sc-v4-authsys   <unset>   41m
  2. Describe the PV to verify whether the deletion failed due to the volume containing data:

    kubectl describe pv pvc-xxxxxxxx-xxxx-xxxx-xxxx-105a73740493
    Events:
      Type     Reason              Age                 From                                                                               Message
      ----     ------              ----                ----                                                                               -------
      Warning  VolumeFailedDelete  63s (x18 over 39m)  pxd.portworx.com_px-csi-ext-765d64898d-pjshw_xxxxxxxx-xxxx-xxxx-9a5a-e3e8b212c1c3  rpc error: code = Aborted desc = Unable to delete volume with id 1004978454102986342: rpc error: code = Internal desc = Failed to delete volume 1004978454102986342: deletion of non-empty directory not allowed
  3. Log in to a worker node and inspect the PV to retrieve the volume label and NFS endpoint:

    pxctl v i pvc-xxxxxxxx-xxxx-xxxx-xxxx-105a73740493
    Volume               :  1004978454102986342
        Name             :  pvc-xxxxxxxx-xxxx-xxxx-xxxx-105a73740493
        Size             :  50 GiB
        Format           :  none
        HA               :  1
        IO Priority      :  LOW
        Creation time    :  Mar 19 06:15:22 UTC 2025
        Shared           :  no
        FlashArray File  :
            Type         :  nfs
            Endpoint     :  xx.xx.xxx.95
            FileSystem   :  fa-files-ga
            NfsPolicy    :  fa-files-ga-nfs-policy
            Status       :  up
        State            :  Consumed from xx.xx.xxx.95
        Labels           :  app=fio,backend=pure_fa_file,namespace=fio,pure_fa_file_system=fa-files-ga,pure_nfs_endpoint=xx.xx.xxx.95,pure_nfs_policy=fa-files-ga-nfs-policy,pvc=fio-log-fio-0
  4. Mount the file system to a new directory so you can remove its contents:

    mkdir /var/lib/osd/mounts/1004978454102986342-clean-up
    mount -t nfs xx.xx.xxx.95:/px_20e4c7e1-pvc-xxxxxxxx-xxxx-xxxx-xxxx-105a73740493 /var/lib/osd/mounts/1004978454102986342-clean-up

    In this example:

    • xx.xx.xxx.95 is the NFS endpoint.
    • px_20e4c7e1 is px_ followed by the prefix (first section) of the cluster UUID.
    • To retrieve the prefix, run pxctl status and use the first section of the UUID.
      For example, if the UUID is 20e4c7e1-xxxx-xxxx-xxxx-c367b2933167, use 20e4c7e1.
  5. Delete the directory contents and unmount the volume:

    rm -rf /var/lib/osd/mounts/1004978454102986342-clean-up/{..?*,.[!.]*,*}
    umount /var/lib/osd/mounts/1004978454102986342-clean-up
    rm -rf /var/lib/osd/mounts/1004978454102986342-clean-up
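The export path mounted in step 4 follows a predictable pattern: px_, then the first section of the cluster UUID, a hyphen, then the PV name. A minimal shell sketch of that derivation, using the example values from the steps above (with a live cluster, the UUID would come from pxctl status):

```shell
# Derive the NFS export path from the cluster UUID and the PV name.
# Example values are taken from the steps above; on a real node the UUID
# would come from 'pxctl status'.
CLUSTER_UUID="20e4c7e1-xxxx-xxxx-xxxx-c367b2933167"
PV_NAME="pvc-xxxxxxxx-xxxx-xxxx-xxxx-105a73740493"

PREFIX="${CLUSTER_UUID%%-*}"          # first section of the UUID: 20e4c7e1
EXPORT_PATH="px_${PREFIX}-${PV_NAME}"
echo "$EXPORT_PATH"
# -> px_20e4c7e1-pvc-xxxxxxxx-xxxx-xxxx-xxxx-105a73740493
```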

After completing these steps, the PV is automatically deleted.
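If you script step 1 above, filtering on the STATUS column is more precise than a plain grep, which can also match other fields. A sketch, where a here-document with hypothetical PV names stands in for live kubectl get pv output:

```shell
# Print only the names of PVs whose STATUS column (field 5) is Released.
# A here-doc with hypothetical names stands in for 'kubectl get pv --no-headers'.
get_pv_sample() {
cat <<'EOF'
pvc-aaaa 50Gi RWO Delete Released fio/fio-log-fio-0 fa-file-sc-v4-authsys <unset> 41m
pvc-bbbb 200Gi RWO Delete Bound fio/fio-data-fio-1 fa-file-sc-v4-authsys <unset> 40m
EOF
}
get_pv_sample | awk '$5 == "Released" {print $1}'

# With a live cluster:
# kubectl get pv --no-headers | awk '$5 == "Released" {print $1}'
```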