Portworx Datastores
Portworx supports two types of datastores:
- PX-StoreV1
- PX-StoreV2
These storage backends provide different capabilities and are used in distinct deployment scenarios.
PX-StoreV1 is the legacy storage backend for Portworx Enterprise. As a full-featured filesystem backend, PX-StoreV1 manages volumes with enhanced metadata and supports RAID and device management features.
The PX-StoreV2 datastore is optimized for I/O-intensive workloads that run on high-performance NVMe-class devices. It efficiently balances workloads across nodes by dynamically assigning tasks to the most suitable nodes based on available resources, improving the overall performance and scalability of the cluster.
PX-StoreV2 focuses on volume management with optimized metadata handling and improved performance characteristics, making it well suited for high-performance environments.
Advantages of PX-StoreV2 over PX-StoreV1
- Simplicity: PX-StoreV2 focuses on volume management only, which simplifies the architecture by managing blocks with minimal metadata, unlike PX-StoreV1, which operates as a complete filesystem. This reduces the complexity in handling RAID or device management.
- Stability: PX-StoreV2 is designed with lower metadata and refcount overhead compared to PX-StoreV1. By avoiding recursive refcounts and extent backreferences, PX-StoreV2 offers improved stability and simplicity in environments where block management is key.
- Performance:
  - Low Performance Overhead: PX-StoreV2 ensures low write amplification and predictable latencies, which provides better performance consistency, especially in scenarios with high throughput demands.
  - Userspace Bypass (PX fastpath): PX-StoreV2 enables the fastpath, which further improves performance by bypassing certain userspace operations (see the sketch after this list).
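As an illustration of the fastpath point above, a volume can opt into the fastpath at provisioning time through its StorageClass. The following is a minimal sketch that assumes the `pxd.portworx.com` CSI provisioner name and the `fastpath` StorageClass parameter used by PX-Fast; verify both against the PX-Fast documentation for your Portworx version.

```yaml
# Minimal sketch of a StorageClass that requests fastpath volumes.
# Assumes the pxd.portworx.com CSI provisioner and the PX-Fast
# "fastpath" parameter; verify both for your Portworx version.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-fast
provisioner: pxd.portworx.com
parameters:
  repl: "1"          # fastpath volumes are typically single-replica
  fastpath: "true"   # opt volumes from this class into the fastpath
```

Volumes provisioned from such a class can bypass parts of the userspace I/O path when the consuming pod runs on the node that holds the replica.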
Minimum Requirements
For information on the minimum requirements for deploying a cluster on PX-StoreV1 or PX-StoreV2, see System Requirements.
Limitations
PX-StoreV2 has the following limitations:
- Does not support the add-disk pool expansion operation.
- Does not support online pool resize; see the pool expansion sketch after this list.
- Does not support upgrading from a previous Portworx version to a PX-StoreV2 deployment with cloud drives.
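Because add-disk is unavailable on PX-StoreV2, pool capacity changes go through the resize-disk operation instead, and since online resize is also unsupported, expect the pool to require maintenance during the operation. The following is a hedged sketch of the workflow with `pxctl`; confirm the exact flags with `pxctl service pool expand --help` on your version.

```shell
# List the node's pools and note the UUID of the pool to expand.
pxctl service pool show

# Grow the pool by resizing its backing drives; add-disk is not
# supported on PX-StoreV2, and the resize does not happen online.
pxctl service pool expand --uid <pool-uuid> --operation resize-disk --size <new-size-in-GiB>
```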
Supported Platforms and Distributions
PX-StoreV2 is supported on the following platforms and distributions:
| Platform | Kubernetes Distribution | Storage Backend |
|---|---|---|
| AWS | Elastic Kubernetes Service (EKS) | IO1 and GP3 |
| AWS | OpenShift 4+ | IO1 and GP3 |
| AWS | Red Hat OpenShift Service on AWS (ROSA) | IO1 and GP3 |
| Google Cloud | Google Kubernetes Engine (GKE) | PD_SSD and PD_Balanced |
| Google Cloud | OpenShift 4+ | PD_SSD and PD_Balanced |
| Google Cloud | Anthos | PD_SSD and PD_Balanced |
| Azure | Azure Kubernetes Service (AKS) | Standard SSD, Premium SSD, Premium SSD v2, Ultra Disk |
| Azure | Azure Red Hat OpenShift (ARO) | Standard SSD, Premium SSD, Premium SSD v2, Ultra Disk |
| Azure | OpenShift 4+ | Standard SSD, Premium SSD, Premium SSD v2, Ultra Disk |
| DAS/SAN | Anthos | Preprovisioned SSD or NVMe |
| DAS/SAN | OpenShift 4+ | Preprovisioned SSD or NVMe |
| DAS/SAN | Rancher Kubernetes Engine (RKE2) | Preprovisioned SSD or NVMe |
| vSphere | Anthos | Lazy-Zeroed Thick, Eager-Zeroed Thick, Thin |
| vSphere | OpenShift 4+ | Lazy-Zeroed Thick, Eager-Zeroed Thick, Thin |
| vSphere | Rancher Kubernetes Engine (RKE2) | Lazy-Zeroed Thick, Eager-Zeroed Thick, Thin |
| VMware vSphere Kubernetes Service (VKS) | VKS | Storage Class based configuration |
| FlashArray | Anthos | Thin (iSCSI, Fibre Channel, NVMe-oF RDMA, NVMe-oF TCP) |
| FlashArray | OpenShift 4+ | Thin (iSCSI, Fibre Channel, NVMe-oF RDMA, NVMe-oF TCP) |
| FlashArray | Rancher Kubernetes Engine (RKE2) | Thin (iSCSI, Fibre Channel, NVMe-oF RDMA, NVMe-oF TCP) |
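For example, on EKS the drive types in the first row above map directly onto the cloud drive specification in the StorageCluster resource. The following is a minimal sketch using the documented `cloudStorage.deviceSpecs` syntax; the image tag, namespace, and drive size are illustrative placeholders, and PX-StoreV2 installs may additionally require a dedicated metadata drive, so consult the installation docs for your version.

```yaml
# Minimal sketch of a StorageCluster using GP3 cloud drives on EKS,
# one of the backends listed for PX-StoreV2 above. The image tag,
# namespace, and size are illustrative placeholders.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: <portworx-namespace>
spec:
  image: portworx/oci-monitor:<version>
  cloudStorage:
    deviceSpecs:
      - type=gp3,size=150   # GP3 drive per the support matrix above
```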
PX-StoreV1 is supported on the following platforms and distributions:
| Platform | Kubernetes Distribution | Storage Backend |
|---|---|---|
| AWS | Elastic Kubernetes Service (EKS) | IO1, GP2, and GP3 |
| AWS | OpenShift 4+ | IO1, GP2, and GP3 |
| AWS | Red Hat OpenShift Service on AWS (ROSA) | IO1, GP2, and GP3 |
| Google Cloud | Google Kubernetes Engine (GKE) | PD_Standard, PD_SSD, and PD_Balanced |
| Google Cloud | OpenShift 4+ | PD_Standard, PD_SSD, and PD_Balanced |
| Google Cloud | Anthos | PD_Standard, PD_SSD, and PD_Balanced |
| Azure | Azure Kubernetes Service (AKS) | Standard HDD, Standard SSD, Premium SSD, Premium SSD v2, Ultra Disk |
| Azure | Azure Red Hat OpenShift (ARO) | Standard HDD, Standard SSD, Premium SSD, Premium SSD v2, Ultra Disk |
| Azure | OpenShift 4+ | Standard HDD, Standard SSD, Premium SSD, Premium SSD v2, Ultra Disk |
| Oracle | Oracle Kubernetes Engine (OKE) | Volume Processing Units (VPUs) based configuration |
| DAS/SAN | Anthos | Preprovisioned SSD or NVMe |
| DAS/SAN | OpenShift 4+ | Preprovisioned SSD or NVMe |
| DAS/SAN | Rancher Kubernetes Engine (RKE2) | Preprovisioned SSD or NVMe |
| DAS/SAN | Pivotal Container Service (PKS) | Preprovisioned SSD or NVMe |
| vSphere | Anthos | Lazy-Zeroed Thick, Eager-Zeroed Thick, Thin |
| vSphere | OpenShift 4+ | Lazy-Zeroed Thick, Eager-Zeroed Thick, Thin |
| vSphere | Rancher Kubernetes Engine (RKE2) | Lazy-Zeroed Thick, Eager-Zeroed Thick, Thin |
| vSphere | Pivotal Container Service (PKS) | Lazy-Zeroed Thick, Eager-Zeroed Thick, Thin |
| VMware vSphere Kubernetes Service (VKS) | VKS | Storage Class based configuration |
| FlashArray | OpenShift 4+ | Thin (iSCSI, Fibre Channel, NVMe-oF RDMA, NVMe-oF TCP) |
| FlashArray | Anthos | Thin (iSCSI, Fibre Channel, NVMe-oF RDMA, NVMe-oF TCP) |
| FlashArray | Pivotal Container Service (PKS) | Thin (iSCSI, Fibre Channel, NVMe-oF RDMA, NVMe-oF TCP) |
| FlashArray | Rancher Kubernetes Engine (RKE2) | Thin (iSCSI, Fibre Channel, NVMe-oF RDMA, NVMe-oF TCP) |
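For the FlashArray rows in both tables, Portworx discovers the array through credentials supplied in a `pure.json` file packaged as the `px-pure-secret` secret. A minimal sketch of that documented pattern follows; the endpoint, token, and namespace values are placeholders.

```shell
# pure.json lists the FlashArray management endpoints and API tokens
# Portworx should use; the values below are placeholders.
cat <<'EOF' > pure.json
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<flasharray-management-endpoint>",
      "APIToken": "<flasharray-api-token>"
    }
  ]
}
EOF

# Portworx reads the credentials from a secret named px-pure-secret
# created in the namespace where Portworx is deployed.
kubectl create secret generic px-pure-secret \
  --namespace <portworx-namespace> \
  --from-file=pure.json
```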