- Cloud Native Storage
- How it Works
- Minimum Requirements
- Install with RunC
- Install with a Container Orchestrator
- Join us on Slack!
Portworx is a software-defined persistent storage solution designed and purpose-built for applications deployed as containers, via container orchestrators such as Kubernetes, Marathon and Swarm. It is a clustered block storage solution that provides a cloud-native layer from which containerized stateful applications programmatically consume block, file and object storage services directly through the scheduler. Portworx volumes are always hyper-converged: they are exposed on the same host where the application container executes.
- Is delivered as a container and is installed on the servers that run your stateful applications. Portworx volumes are available on the same host where an application container consumes the volume.
- Provides virtual, container-granular data volumes to applications running in containers.
- Is scheduler aware - provides data persistence and HA across multiple nodes, cloud instances, regions, data centers or even clouds.
- Is application aware - applications like Cassandra are deployed as a set of containers, and Portworx is aware of the entire stack. Data placement and management is done at an application POD level.
- Is designed for enterprise production deployments, with features like BYOK inline encryption, snapshot-and-backup to S3 and support for stateful Blue-Green deployments.
- Manages physical storage that is directly attached to servers, from cloud volumes, or provided by hardware arrays. It monitors the health of the drives and manages the RAID groups directly, repairing failures when needed.
- Provides programmatic control over your storage resources - volumes and other stateful services can be created and consumed directly via the scheduler and orchestration software tool chain.
- Is radically simple - Portworx is deployed just like any other container - and managed by your scheduler of choice.
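As a sketch of that programmatic control, a volume can be created and consumed through the Docker volume driver. The `pxd` driver name and the `size`/`repl` options follow the Portworx Docker documentation, but verify the exact flags against your installed version; the volume name `px_demo_vol` is illustrative:

```shell
# Create a replicated volume through the Portworx Docker volume
# driver ("pxd"). size is in GiB and repl is the replication
# factor, per the Portworx docs -- confirm for your version.
docker volume create --driver pxd \
  --opt size=10 \
  --opt repl=3 \
  --name px_demo_vol

# Consume the volume from an application container; the volume is
# hyper-converged, i.e. served from the same host.
docker run --rm -it -v px_demo_vol:/data busybox sh
```

These commands require a running Portworx cluster, so they are shown here only to illustrate the workflow.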
Cloud Native Storage
How it Works
Unlike traditional storage which is designed to provide storage to a host machine or operating system via protocols like iSCSI, NBD or NFS, Portworx directly provides block, file and object storage to your applications on the same server where the application is running. Portworx itself is deployed as a container and runs on every host in your cluster. Application containers consume Portworx volumes directly through the Container Orchestrator. The following are supported:
- Docker volume plugins
- Kubernetes Persistent Volumes
- Mesosphere DC/OS DVDI External Storage Interface
Read more about how Portworx provides storage volumes to your application containers here.
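For example, on Kubernetes a Portworx-backed volume can be requested through a StorageClass and a PersistentVolumeClaim. This sketch uses the in-tree `kubernetes.io/portworx-volume` provisioner; the `repl` parameter and the object names are illustrative, so check the parameters supported by your Kubernetes and Portworx versions:

```shell
# Dynamic provisioning of a replicated Portworx volume on
# Kubernetes (requires a cluster with Portworx installed).
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-repl3
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-demo-pvc
spec:
  storageClassName: px-repl3
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
EOF
```

A pod then references `px-demo-pvc` like any other PersistentVolumeClaim.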
- Linux kernel 3.10 or greater
- Docker 1.10 or greater.
- Configure Docker to use shared mounts. The shared mounts configuration is required, as PX-Developer exports mount points.
- Run sudo mount --make-shared / in your SSH window
- If you are using systemd, remove the MountFlags=slave line in your docker.service file.
- Minimum resources per server:
- 4 CPU cores
- 4 GB RAM
- Additional resources recommended per server:
- 128 GB Storage
- 10 Gb Ethernet NIC
- Maximum nodes per cluster:
- Unlimited for the Enterprise License
- 3 for the Developer License
- Open network ports:
- Ports 9001, 9002, 9003, 9010, 9012, 9014 must be open for internal network traffic between nodes running PX
- All nodes running the PX container must have their clocks synchronized; we recommend setting up NTP to keep time in sync across all nodes
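The kernel requirement above can be checked with a short pre-flight script. This is a sketch assuming a POSIX shell; the version parsing ignores patch-level suffixes in `uname -r`:

```shell
# Pre-flight check: Portworx needs Linux kernel 3.10 or greater.
kver=$(uname -r)        # e.g. "4.4.0-96-generic"
major=${kver%%.*}       # leading major version, e.g. "4"
rest=${kver#*.}
minor=${rest%%.*}       # minor version, e.g. "4"

if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }; then
  echo "kernel $kver: OK"
else
  echo "kernel $kver: too old (need >= 3.10)"
fi
```

A similar check can be scripted for the open-ports and shared-mounts requirements before rolling PX out to a fleet.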
Before going to production, ensure a 3-node clustered etcd is deployed that PX can use for configuration storage. Follow the instructions here to deploy a clustered etcd.
You can also use this Ansible playbook to deploy a 3-node etcd cluster.
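If you prefer a manual static bootstrap over the playbook, the shape of a 3-node etcd cluster looks like this. The node names and the 10.0.0.1-3 addresses are placeholders; the flags follow the etcd clustering documentation:

```shell
# Run on each of the three nodes, substituting that node's own
# NAME and IP. Every node lists the full peer set in
# --initial-cluster for static bootstrapping.
NAME=etcd1
IP=10.0.0.1
etcd --name "$NAME" \
  --initial-advertise-peer-urls "http://$IP:2380" \
  --listen-peer-urls "http://$IP:2380" \
  --listen-client-urls "http://$IP:2379,http://127.0.0.1:2379" \
  --advertise-client-urls "http://$IP:2379" \
  --initial-cluster "etcd1=http://10.0.0.1:2380,etcd2=http://10.0.0.2:2380,etcd3=http://10.0.0.3:2380" \
  --initial-cluster-state new
```

PX is then pointed at the client endpoints (port 2379) of this cluster for its configuration storage.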
Install with RunC
You can run Portworx directly via OCI runC. This will run Portworx as a standalone OCI container without any reliance on the Docker daemon. Install with RunC
Install with a Container Orchestrator
Visit the Schedulers section of this documentation and choose the appropriate installation instructions for your scheduler.
- Install on Kubernetes
- Install on Mesosphere DC/OS
- Install on Docker
- Install on Rancher
- Install on AWS ECS
Join us on Slack!
Contact us to share feedback, work with us, and to request features.