Install on Docker Swarm


This document describes how to install a Portworx cluster using Docker in swarm mode. To install Portworx on Kubernetes, refer to the Portworx on Kubernetes page instead.

Identify storage

Portworx pools the storage devices on your server and creates a global capacity for containers.

Back up any data on storage devices that will be pooled. Storage devices will be reformatted!

To view the storage devices on your server, use the lsblk command.

For example:

lsblk
    NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    xvda                      202:0    0     8G  0 disk
    └─xvda1                   202:1    0     8G  0 part /
    xvdb                      202:16   0    64G  0 disk
    xvdc                      202:32   0    64G  0 disk

Note that devices without a partition are shown under the TYPE column as disk, while partitions are shown as part. This example has two non-root, unpartitioned storage devices (/dev/xvdb and /dev/xvdc) that are candidates for Portworx storage.

Identify the storage devices you will be allocating to PX. PX can run in a heterogeneous environment, so you can mix and match drives of different types. Different servers in the cluster can also have different drive configurations.
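
Before allocating a device, you can confirm it is unformatted and unmounted. A quick check with lsblk (the -f flag shows filesystem signatures; an empty FSTYPE column means no filesystem is present):

lsblk -f /dev/xvdb /dev/xvdc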

Install

PX runs as a container directly via OCI runC. This ensures that there are no cyclical dependencies between Docker and PX.

On each swarm node, perform the following steps to install PX.

Step 1: Install the PX OCI bundle

Portworx provides a Docker-based installation utility to help deploy the PX OCI bundle. Install the bundle by running the following Docker container on your host system:

# Uncomment appropriate `REL` below to select desired Portworx release
REL="/2.1"       # 2.1 portworx release
#REL="/2.0"     # 2.0 portworx release

latest_stable=$(curl -fsSL "https://install.portworx.com$REL/?type=dock&stork=false" | awk '/image: / {print $2}')

# Download OCI bits (reminder, you will still need to run `px-runc install ..` after this step)
sudo docker run --entrypoint /runc-entry-point.sh \
    --rm -i --privileged=true \
    -v /opt/pwx:/opt/pwx -v /etc/pwx:/etc/pwx \
    $latest_stable

Running the PX OCI bundle does not require Docker, but Docker is still required to install the PX OCI bundle. If you do not have Docker installed on your target hosts, you can download this Docker package, extract it to a root tarball, and install the OCI bundle manually.
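
As a quick sanity check, you can confirm the bundle landed on the host; the px-runc helper used in the next step should now be present:

ls -l /opt/pwx/bin/px-runc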

Step 2: Configure PX under runC

Specify -x swarm in the px-runc install command below to select Docker Swarm as your scheduler.

Now that you have downloaded and installed the PX OCI bundle, you can use the px-runc install command from the bundle to configure systemd to start PX runC.

The px-runc command is a helper tool that does the following:

  1. Prepares the OCI directory for runC
  2. Prepares the runC configuration for PX
  3. Uses systemd to start the PX OCI bundle

Installation example:

sudo /opt/pwx/bin/px-runc install -c MY_CLUSTER_ID \
    -k etcd://myetc.company.com:2379 \
    -s /dev/xvdb -s /dev/xvdc
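
To see exactly what the helper configured, you can inspect the generated systemd unit (written by default to the path listed under the -sysd option below):

systemctl cat portworx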

Command-line arguments

General options
-c <id>                   [REQUIRED] Specifies the cluster ID that this PX instance is to join
-k <kvdb://host:port>     [REQUIRED] Points to your key value database, such as an etcd cluster or a consul cluster
-b                        Use in-built kvdb. Provide the kvdb endpoints required for bootstrap with -k option.
-s <device path>          [REQUIRED unless -a/-A are used] Specify storage devices that PX should use for storing the data
-T <type>                 Specify backend storage type (<type> is dmthin, btrfs, mdraid or lvm)
-cache <device path>      Specify storage devices that PX should use for caching
-dedicated_cache          Constrain cache drive assignment from given -cache drives only
-j <device path>          Specify a journal device for PX, or "auto" (recommended) 
-metadata <device path>   Specify storage device that PX should use for storing the system meta data
-e key=value              Specify extra environment variables
-v <dir:dir[:shared,ro]>  Specify extra mounts
-d <ethX>                 Specify the data network interface
-m <ethX>                 Specify the management network interface
-z                        Instructs PX to run in zero storage mode
-f                        Instructs PX to use an unmounted drive even if it has a filesystem on it
-a                        Instructs PX to use any available, unused and unmounted drives
-A                        Instructs PX to use any available, unused and unmounted drives or partitions
-x <swarm|kubernetes>     Specify scheduler type (if PX running in scheduler environment)
-r <startport>            Start of the port range Portworx will use for communication (dfl: 9001)
# additional px-oci-specific options:
-oci <dir>                Specify OCI directory (dfl: /opt/pwx/oci)
-sysd <file>              Specify SystemD service file (dfl: /etc/systemd/system/portworx.service)
KVDB options
-userpwd <user:passwd>    Username and password for ETCD authentication
-ca <file>                Specify location of CA file for ETCD authentication
-cert <file>              Specify location of certificate for ETCD authentication
-key <file>               Specify location of certificate key for ETCD authentication
-acltoken <token>         Specify ACL token for Consul authentication
# internal-kvdb-options:
-kvdb_cluster_size <#>    Size of the internal kvdb cluster (dfl: 3)
-kvdb_recovery            Starts the nodes in kvdb recovery mode
Secrets options
-secret_type <type>        Specify the secrets type for cloudsnap and encryption features (<type> is aws-kms, dcos, docker, ibm-kp, k8s, kvdb, vault, gcloud-kms or azure-kv)
-cluster_secret_key <key>  Specify cluster-wide secret for AWS KMS or Vault and volume encryption
PX-API options
# px-api-ssl-options:
-apirootca <file>             Specify self-signed root CA certificate file
-apicert <file>               Specify node certificate file
-apikey <file>                Specify node certificate key file
-apidisclientauth             Disable api client authentication
# px-authentication-options:
-oidc_issuer   <URL>          Location of OIDC service (e.g. https://accounts.google.com)
-oidc_client_id <id>          Client id provided by the OIDC
-oidc_custom_claim_namespace  OIDC namespace for custom claims
-jwt_issuer <val>             JSON Web Token issuer (e.g. openstorage.io)
-jwt_rsa_pubkey_file <file>   JSON Web Token RSA Public file path
-jwt_ecds_pubkey_file <file>  JSON Web Token ECDS Public file path
-username_claim <claim>       Claim key from the token to use as the unique id of the user (<claim> is sub, email or name; dfl: sub)
Resource control options
--cpus <#.#>                  Specify maximum number of CPUs Portworx can use (e.g. --cpus=1.5)
--cpu-shares <#>              Specify CPU shares (relative weight)
--cpuset-cpus <val>           Specify CPUs in which to allow execution (<val> is range <#-#>, or sequence <#,#>)
--memory <bytes>              Specify maximum amount of memory Portworx can use
--memory-reservation <bytes>  Specify memory reservation soft limit (must be smaller than '--memory')
--memory-swap <bytes>         Specify maximum amount of RAM+SWAP memory Portworx can use
--memory-swappiness <0-100>   Specify percentage of container's anonymous pages host can swap out
Misc options
-raid <0|10>              Specify which RAID-level should PX use with local storage (dfl: 0)
-cluster_domain <name>    Cluster Domain Name for this cluster
NOTE: The -raid <0|10> option is different from the volume replication factor. For example, PX nodes using -raid 10 and hosting volumes with a replication factor of 3 will keep 6 copies of the data.
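
As an illustration of how several of these flags combine, here is a sketch of an install against a TLS-secured etcd; the endpoint and certificate paths below are placeholders, not defaults:

sudo /opt/pwx/bin/px-runc install -c MY_CLUSTER_ID \
    -k etcd://myetcd.company.com:2379 \
    -ca /etc/pwx/etcd-ca.crt -cert /etc/pwx/etcd.crt -key /etc/pwx/etcd.key \
    -s /dev/xvdb -s /dev/xvdc -j auto -x swarm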

Environment variables
PX_HTTP_PROXY          If running behind an HTTP proxy, set the PX_HTTP_PROXY variable to your HTTP proxy.
PX_HTTPS_PROXY         If running behind an HTTPS proxy, set the PX_HTTPS_PROXY variable to your HTTPS proxy.
PX_ENABLE_CACHE_FLUSH  To enable the cache flush daemon, set PX_ENABLE_CACHE_FLUSH=true.
Set environment variables using the -e option.

Below is an example install command with the extra PX_ENABLE_CACHE_FLUSH environment variable:

sudo /opt/pwx/bin/px-runc install -e PX_ENABLE_CACHE_FLUSH=true \
    -c MY_CLUSTER_ID -k etcd://myetc.company.com:2379 -s /dev/xvdb

Examples

Installing Portworx using etcd:

px-runc install -k etcd://my.company.com:2379 -c MY_CLUSTER_ID -s /dev/sdc -s /dev/sdb2 -x swarm
px-runc install -k etcd://70.0.1.65:2379 -c MY_CLUSTER_ID -s /dev/sdc -d enp0s8 -m enp0s8 -x swarm

Installing Portworx using consul:

px-runc install -k consul://my.company.com:8500 -c MY_CLUSTER_ID -s /dev/sdc -s /dev/sdb2 -x swarm
px-runc install -k consul://70.0.2.65:8500 -c MY_CLUSTER_ID -s /dev/sdc -d enp0s8 -m enp0s8 -x swarm

Modifying the PX configuration

After the initial installation, you can modify the PX configuration file at /etc/pwx/config.json and then restart Portworx using systemctl restart portworx.
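
For example, a minimal edit-and-restart cycle looks like this:

sudo vi /etc/pwx/config.json       # adjust configuration
sudo systemctl restart portworx    # restart PX to pick up the change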

Step 3: Starting PX runC

Once you install the PX OCI bundle and systemd configuration from the steps above, you can start and control PX runC directly via systemd.

The following commands reload the systemd configuration, then enable and start the Portworx service:

sudo systemctl daemon-reload
sudo systemctl enable portworx
sudo systemctl start portworx
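
To confirm that PX came up cleanly, you can check the service state and follow its logs through journald (standard systemd tooling):

sudo systemctl status portworx
sudo journalctl -fu portworx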

Adding Nodes

To add nodes for more capacity and high availability, repeat these steps on other servers (see the sketch below). As long as PX is started with the same cluster ID, the nodes will form a single cluster.
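
For instance, joining a second node is the same install-and-start sequence with that node's own drives; the device path below is a placeholder and will differ per host:

sudo /opt/pwx/bin/px-runc install -c MY_CLUSTER_ID \
    -k etcd://myetc.company.com:2379 \
    -s /dev/xvdb -x swarm
sudo systemctl daemon-reload
sudo systemctl enable portworx
sudo systemctl start portworx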

Access the pxctl CLI

After Portworx is running, you can create and delete storage volumes through the Docker volume commands or the pxctl command line tool.

With pxctl, you can also inspect volumes, the relationships between volumes and containers, and nodes. For more on using pxctl, see the CLI Reference.
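
As a quick smoke test of the Docker integration, you can create a Portworx-backed volume through Docker's pxd volume driver (the volume name is arbitrary; size is in GiB):

docker volume create -d pxd --name test_volume --opt size=10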

To view the global storage capacity, run:

pxctl status

The following sample output of pxctl status shows that the global capacity for Docker containers is 192 GiB.

pxctl status
Status: PX is operational
Node ID: 0a0f1f22-374c-4082-8040-5528686b42be
	IP: 172.31.50.10
	Local Storage Pool: 2 pools
	POOL	IO_PRIORITY	SIZE	USED	STATUS	ZONE	REGION
	0	LOW		64 GiB	1.1 GiB	Online	b	us-east-1
	1	LOW		128 GiB	1.1 GiB	Online	b	us-east-1
	Local Storage Devices: 2 devices
	Device	Path		Media Type		Size		Last-Scan
	0:1	/dev/xvdf	STORAGE_MEDIUM_SSD	64 GiB		10 Dec 16 20:07 UTC
	1:1	/dev/xvdi	STORAGE_MEDIUM_SSD	128 GiB		10 Dec 16 20:07 UTC
	total			-			192 GiB
Cluster Summary
	Cluster ID: 55f8a8c6-3883-4797-8c34-0cfe783d9890
	IP		ID					Used	Capacity	Status
	172.31.50.10	0a0f1f22-374c-4082-8040-5528686b42be	2.2 GiB	192 GiB		Online (This node)
Global Storage Pool
	Total Used    	:  2.2 GiB
	Total Capacity	:  192 GiB

Post-Install

Once you have Portworx up, take a look at the example of running stateful Jenkins with Portworx and Swarm!


