Manage Storage Nodes
In cloud environments, Portworx can dynamically create disks based on an input disk template whenever a new instance spins up and use those disks for the Portworx cluster.
Portworx fingerprints the disks and attaches them to an instance in the autoscaling cluster. In this way, an otherwise ephemeral instance gets its own identity.
Why would I need this?
- Users don't have to manage the lifecycle of disks. Instead, they just provide disk specs, and Portworx manages the disk lifecycle.
- When an instance terminates, the autoscaling group automatically adds a new instance to the cluster. Portworx gracefully handles this scenario by re-attaching the disks to the new instance, giving it the old instance's identity. This ensures that the instance's data is retained with zero storage downtime.
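As a sketch of how this is configured, a StorageCluster spec can declare a disk template under `cloudStorage`; Portworx then creates and manages one disk per storage node from that template. The values below are illustrative:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
spec:
  cloudStorage:
    # Disk template: Portworx dynamically provisions disks matching
    # this spec for each storage node and manages their lifecycle.
    deviceSpecs:
      - type=gp3,size=150
```

When an instance in the autoscaling group is replaced, Portworx re-attaches the disks created from this template to the replacement instance rather than provisioning fresh, empty disks.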
Limit the number of storage nodes
For the two deployment architectures, you can manage the number of storage nodes as follows:
| Architecture | Strategy |
|---|---|
| Converged | The `spec.cloudStorage.maxStorageNodesPerZone` setting controls the number of storage nodes in the cluster. |
| Disaggregated | Only the nodes labeled as `portworx.io/node-type=storage` are converted to storage nodes. **Note:** If `maxStorageNodesPerZone` is specified and its value is less than the number of nodes labeled as storage, then `maxStorageNodesPerZone` takes precedence. |
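For the converged case, capping storage nodes is a one-line addition to the `cloudStorage` section of the StorageCluster spec (values here are illustrative):

```yaml
spec:
  cloudStorage:
    # At most 3 nodes per zone become storage nodes; the rest
    # join the cluster as storageless (compute-only) nodes.
    maxStorageNodesPerZone: 3
    deviceSpecs:
      - type=gp3,size=150
```

For the disaggregated case, you instead label the nodes that should provide storage, for example with `kubectl label node <node-name> portworx.io/node-type=storage`.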
Portworx automatically manages the number of storage nodes in a cluster. For more information about why automatic management is necessary and how it is implemented, see Manage the number of storage nodes.
If a cluster has no zones, all nodes are assumed to be in a single zone. For example, most vSphere setups and all FlashArray cloud setups are considered to be in a single zone.
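The interaction between zones and `maxStorageNodesPerZone` reduces to simple arithmetic, sketched below as a hypothetical helper (not Portworx code): a cluster with no zones is treated as a single zone, so the per-zone cap effectively becomes a cluster-wide cap.

```python
def max_storage_nodes(max_per_zone: int, num_zones: int) -> int:
    """Upper bound on storage nodes across the whole cluster.

    A cluster with no zone labels (num_zones == 0) is treated as a
    single zone, so the per-zone cap applies cluster-wide.
    """
    effective_zones = max(num_zones, 1)
    return max_per_zone * effective_zones

# Three zones with a cap of 2 storage nodes each -> up to 6 storage nodes.
print(max_storage_nodes(2, 3))  # 6

# A single-zone setup (e.g. most vSphere clusters) -> up to 2 storage nodes.
print(max_storage_nodes(2, 0))  # 2
```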
📄️ AWS
Learn the EBS volume template that Portworx uses as a reference.
📄️ GCP
Learn the GCP disk template that Portworx uses as reference.
🗃️ VMware vSphere
📄️ Portworx storage nodes
Understand how Portworx manages the number of storage nodes in a cluster.