- Cloud Native Storage
- How it Works
- Minimum Requirements
- Join us on Slack!
Portworx is a software-defined persistent storage solution purpose-built for applications deployed as containers via container orchestrators such as Kubernetes, Marathon, Nomad, and Swarm. It is a clustered block storage solution that provides a cloud-native layer from which containerized stateful applications programmatically consume block, file, and object storage services directly through the scheduler. Portworx volumes are always hyper-converged. That is, they are exposed on the same host where the application container executes.
- Is delivered as a container and gets installed on your servers that run stateful applications. Portworx volumes are available on the same host where an application container consumes the volume.
- Provides virtual, container-granular data volumes to applications running in containers.
- Is scheduler aware - provides data persistence and HA across multiple nodes, cloud instances, regions, data centers or even clouds.
- Is application aware - applications like Cassandra are deployed as a set of containers, and Portworx is aware of the entire stack. Data placement and management are done at the application pod level.
- Is designed for enterprise production deployments, with features like BYOK inline encryption, snapshot-and-backup to S3 and support for stateful Blue-Green deployments.
- Manages physical storage that is directly attached to servers, from cloud volumes, or provided by hardware arrays. It monitors the health of the drives and manages the RAID groups directly, repairing failures when needed.
- Provides programmatic control on your storage resources - volumes and other stateful services can be created and consumed directly via the scheduler and orchestration software tool chain.
- Is radically simple - Portworx is deployed just like any other container - and managed by your scheduler of choice.
How it Works
Unlike traditional storage, which is designed to provide storage to a host machine or operating system via protocols like iSCSI, NBD, or NFS, Portworx provides block, file, and object storage directly to your applications on the same server where they run. Portworx itself is deployed as a container and runs on every host in your cluster. Application containers consume Portworx volumes directly through the container orchestrator. The following are supported:
- Docker volume plugins
- Kubernetes Persistent Volumes
- Mesosphere DC/OS DVDI External Storage Interface
Read more about how Portworx provides storage volumes to your application containers here.
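For example, volumes can be created and consumed through the Docker volume plugin. The driver name `pxd` and the `size`/`repl` options below follow common Portworx usage, but treat the exact option names as assumptions and verify them against your PX release:

```shell
# Create a Portworx volume with 3-way replication via the Docker
# volume plugin ("pxd" is the Portworx volume driver name).
docker volume create --driver pxd \
  --opt size=10 \
  --opt repl=3 \
  --name demo-vol

# Consume the volume from an application container; Portworx exposes
# it on whichever host the container is scheduled to.
docker run --rm -v demo-vol:/data busybox sh -c 'echo hello > /data/hello.txt'
```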
Minimum Requirements
- Linux kernel 3.10 or greater
- Docker 1.13.1 or greater
- Configure Docker to use shared mounts. The shared mounts configuration is required because PX-Developer exports mount points.
- Run sudo mount --make-shared / in your SSH window
- If you are using systemd, remove the MountFlags=slave line in your docker.service file.
- Minimum resources recommended per server:
- 4 CPU cores
- 4 GB RAM
- 128 GB Storage
- 10 Gb Ethernet NIC
- Maximum nodes per cluster:
- Unlimited for the Enterprise Edition
- 3 for the Developer Edition
- Open network ports:
- Ensure ports 9001-9016 are open between all nodes that will run Portworx.
- All nodes running the PX container must have their clocks synchronized; we recommend running NTP to keep time in sync across all nodes.
- Before going to production, ensure a 3-node clustered etcd is deployed for PX to use as configuration storage. Follow the instructions here to deploy a clustered etcd.
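Some of these requirements can be sanity-checked from a shell before installing. The helper below is a sketch: it compares version strings with GNU `sort -V`, and the Docker check is skipped when Docker is not present.

```shell
#!/usr/bin/env bash
# version_ge A B: succeeds when version A is >= version B.
# Relies on GNU "sort -V" for natural version ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

# Kernel must be 3.10 or newer.
kernel=$(uname -r | cut -d- -f1)
if version_ge "$kernel" 3.10; then
  echo "kernel $kernel: OK"
else
  echo "kernel $kernel: too old (need 3.10+)"
fi

# Docker must be 1.13.1 or newer (check skipped if docker is absent).
if command -v docker >/dev/null 2>&1; then
  dver=$(docker version --format '{{.Server.Version}}' 2>/dev/null)
  version_ge "$dver" 1.13.1 && echo "docker $dver: OK"
fi
```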
Portworx can run with heterogeneously configured servers, including servers with different CPU/memory configurations. Portworx can also run with servers that have different local storage profiles (number and size of disks/SSDs/NVMe devices, etc.). There is a definite benefit (but not an absolute requirement) for servers to contribute local storage, since local storage enables high-performance features such as scheduler hyper-convergence. Nodes that do not contribute local storage can still fully participate in a Portworx cluster; however, data access for any jobs scheduled on a “storage-less” or “head-only” node happens implicitly over the network.
From an operational standpoint, Portworx recommends installing on all nodes within a cluster. Having Portworx installed on all nodes eliminates the overhead of configuring scheduler “constraints” that would then be needed to prohibit a job from running on a node where Portworx is not installed.
An Ansible playbook can be used to deploy a 3-node etcd cluster.
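As a sketch of what such a deployment looks like, each of the three etcd members is started with the same static `--initial-cluster` map. The hostnames, data directory, and cluster token below are placeholders:

```shell
# Run on node1; repeat on node2/node3 with --name and the URLs adjusted.
# node1/node2/node3, /var/lib/etcd, and px-etcd are placeholder values.
etcd --name node1 \
  --data-dir /var/lib/etcd \
  --listen-peer-urls http://node1:2380 \
  --initial-advertise-peer-urls http://node1:2380 \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://node1:2379 \
  --initial-cluster-token px-etcd \
  --initial-cluster node1=http://node1:2380,node2=http://node2:2380,node3=http://node3:2380 \
  --initial-cluster-state new
```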
PX runs completely in a container. It can be installed to run directly via OCI runC, or deployed and managed via your container orchestrator. Follow the instructions for either method below.
Run the pre-install check on each node to be added to the Portworx cluster to ensure that the node is ready to deploy PX and join a PX cluster. Follow this link to run the pre-install check script.
Install with RunC
You can run Portworx directly via OCI runC. This runs Portworx as a standalone OCI container without any reliance on the Docker daemon. In this method, PX is managed by systemd. Install with RunC
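When PX is installed this way, day-to-day management goes through systemd. The unit name `portworx` below matches the runC install convention, but verify the actual unit name on your system:

```shell
# Inspect, follow, and control the Portworx service under systemd.
sudo systemctl status portworx    # current state of the PX service
sudo journalctl -u portworx -f    # tail PX logs
sudo systemctl restart portworx   # restart PX on this node
```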
Install with a Container Orchestrator
You can also deploy Portworx via your container orchestrator. Choose the appropriate installation instructions for your scheduler.
- Install with Kubernetes
- Install with Mesosphere DC/OS
- Install with Docker
- Install with GKE
- Install with AWS ECS
- Install with Rancher
Join us on Slack!
Contact us to share feedback, work with us, and to request features.