Version: 3.4

Installation on Air-Gapped Amazon Elastic Kubernetes Service (EKS) Cluster

This topic provides instructions for installing Portworx on an air-gapped Amazon Elastic Kubernetes Service (Amazon EKS) cluster. You can deploy Portworx and required packages by using a private container registry.

Complete all of the following tasks to install Portworx on your air-gapped EKS cluster.

Create an IAM policy

Grant permissions to all instances in the Auto Scaling group by creating an IAM policy. Perform the following steps in the AWS Management Console:

  1. In the AWS Management Console, open IAM, select Policies under Identity and Access Management (IAM), and then select Create policy.

  2. Choose the JSON tab, and then paste the following permissions into the editor. Provide your own value for Sid if applicable. You can either use the minimum permissions required or the permissions required for disk encryption.

    note

    These are the minimum permissions needed for storage operations for a Portworx cluster. For complete permissions required for all of Portworx storage operations, see the credentials reference.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Action": [
            "ec2:AttachVolume",
            "ec2:ModifyVolume",
            "ec2:DetachVolume",
            "ec2:CreateTags",
            "ec2:CreateVolume",
            "ec2:DeleteTags",
            "ec2:DeleteVolume",
            "ec2:DescribeTags",
            "ec2:DescribeVolumeAttribute",
            "ec2:DescribeVolumesModifications",
            "ec2:DescribeVolumeStatus",
            "ec2:DescribeVolumes",
            "ec2:DescribeInstances",
            "autoscaling:DescribeAutoScalingGroups"
          ],
          "Resource": ["*"]
        }
      ]
    }
  3. Name the policy and create it.
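If you prefer the AWS CLI to the console, the same policy can be created from a file. The following is a sketch: the file name px-policy.json and the policy name portworx-eks-policy are example values, not requirements.

```shell
#!/bin/sh
# Write the minimum-permission policy document from step 2 to a file.
cat > px-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:ModifyVolume",
        "ec2:DetachVolume",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeTags",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumesModifications",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes",
        "ec2:DescribeInstances",
        "autoscaling:DescribeAutoScalingGroups"
      ],
      "Resource": ["*"]
    }
  ]
}
EOF

# Sanity-check the document before uploading it.
python3 -m json.tool px-policy.json > /dev/null && echo "policy JSON is valid"

# Create the policy (uncomment to run; requires iam:CreatePolicy):
# aws iam create-policy --policy-name portworx-eks-policy \
#   --policy-document file://px-policy.json
```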

Attach the IAM policy

Attach the previously created policy to your node instance role or AWS user account.

Follow these instructions to attach the policy to your NodeInstanceRole:

  1. From the IAM page, click Roles in the left pane.

  2. On the Roles page, search for your cluster name and select your nodegroup's NodeInstanceRole.

    note

    If your cluster has more than one nodegroup NodeInstanceRole, attach the policy to each of them.

  3. Attach the previously created policy by selecting Attach policies from the Add permissions dropdown on the right side of the screen.

  4. Under Other permissions policies, search for your policy name, select it, and then click Attach policies to attach it.

    The attached policy appears in the Permissions policies section of your nodegroup NodeInstanceRole.
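The same attachment can be done from the AWS CLI. A minimal sketch, assuming the policy was named portworx-eks-policy; the account ID and role name are placeholders you must replace:

```shell
#!/bin/sh
# Build the ARN of the policy created earlier. In practice, look up your
# account ID with: aws sts get-caller-identity --query Account --output text
ACCOUNT_ID=123456789012   # placeholder account ID
POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/portworx-eks-policy"
echo "$POLICY_ARN"

# Attach the policy to each nodegroup NodeInstanceRole (uncomment to run):
# aws iam attach-role-policy --role-name <NodeInstanceRole> \
#   --policy-arn "$POLICY_ARN"
```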

Get Portworx container images

  1. Set an environment variable for the Kubernetes version you are using:

    KBVER=$(kubectl version --short | awk -Fv '/Server Version: / {print $3}')
  2. Set an environment variable to the Portworx version:

    PXVER=<portworx-version>
  3. On an internet-connected host with the same architecture and OS version as the Kubernetes cluster nodes intended for Portworx installation, download the air-gapped installation bootstrap script for the specified Kubernetes and Portworx versions:

    curl -o px-ag-install.sh -L "https://install.portworx.com/$PXVER/air-gapped?kbver=$KBVER"
  4. Pull the container images required for the specified versions:

    sh px-ag-install.sh pull

Set your container registry

To make the Portworx container images available in your air-gapped cluster, configure a container registry that the nodes can access. In AWS, you can use Amazon Elastic Container Registry (ECR). ECR repositories host a single image each, and Portworx consists of multiple images. Create a separate ECR repository for each image.

  1. Sign in to Docker:

    aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com
  2. Create an ECR repository for each image. The following command creates repositories in the us-west-2 Region, differentiated by the pxmirror prefix. Replace the Region with the one where you are deploying your Portworx cluster:

    for images in $(curl -fsSL install.portworx.com/$PXVER/air-gapped | awk -F / '/^IMAGES="$IMAGES /{print $NF}' | cut -d: -f1); do aws ecr create-repository --repository-name pxmirror/$images --image-scanning-configuration scanOnPush=true --region us-west-2; done

    If the output is paged, press q to exit after each repository is created.

  3. Create the Kubernetes pull secret in the same Region, replacing <namespace> with the namespace in which you will deploy Portworx:

    kubectl create secret docker-registry ecr-pxmirror --docker-server XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com --docker-username=AWS --docker-password=$(aws ecr get-login-password --region us-west-2) -n <namespace>

    The registry used in the preceding command is XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/pxmirror, where XXXXXXXXXXXX is your AWS account ID number.

    note

    Skip the above steps if you are not using the ECR registry.

  4. Push the container images to a private registry that is accessible to your air-gapped nodes. Do not include http:// in your private registry path:

    sh px-ag-install.sh push XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/pxmirror
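The repository-creation loop in step 2 derives each repository name from lines of the form `IMAGES="$IMAGES <registry>/<path>/<image>:<tag>"` in the air-gapped script. You can check the extraction locally against a sample line; the image reference below is illustrative, not a real entry from a specific script version:

```shell
#!/bin/sh
# A line shaped like those in the air-gapped install script (illustrative).
line='IMAGES="$IMAGES docker.io/portworx/oci-monitor:2.13.3"'

# awk -F / keeps the last /-separated field; cut drops the :tag suffix
# (and the trailing quote with it), leaving the bare image name.
image=$(printf '%s\n' "$line" | awk -F / '/^IMAGES="\$IMAGES /{print $NF}' | cut -d: -f1)
echo "$image"
```

The loop then prefixes each extracted name with pxmirror/ to form the ECR repository name.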

Create a version manifest ConfigMap for Portworx Operator

  1. Download the Portworx version manifest:

    curl -o versions.yaml "https://install.portworx.com/<portworx-version>/version?kbver=<kubernetes-version>&opver=<operator-version>"

    Replace:

    • <portworx-version> with the Portworx version you want to use.
    • <kubernetes-version> with the Kubernetes version you want to use.
    • <operator-version> with the Operator version you want to use.
  2. (Optional) If your installation images are spread across multiple custom registries, update your version manifest with the custom registry location details. You can use DNS hostname+domains or IP addresses (IPv4 or IPv6) to specify the container registry server in the following format:

    <dns-host.domain or IPv4 or IPv6>[:<port>]/repository/image:tag

    The following example demonstrates registries using a custom DNS hostname and domain, IPv4, and IPv6:

    version: 2.13.3
    components:
      stork: custom-registry.acme.org/portworx/backup/stork:23.2.1
      autopilot: 192.168.1.2:5433/tools/autopilot:1.3.7
      nodeWiper: "[2001:db8:3333:4444:5555:6666:7777:8888]:5443/portworx/px-node-wiper:2.13.2"
    note
    • Ensure that the Custom Container Registry location field is empty for any specs you generate in the spec generator.

    • kubeScheduler, kubeControllerManager, and pause may not appear in the version manifest, but you can include them in the px-version configmap:

      ...
      kubeScheduler: custom-registry.acme.org/k8s/kube-scheduler-amd64:v1.26.4
      kubeControllerManager: custom-registry.acme.org/k8s/kube-controller-manager-amd64:v1.26.4
      pause: custom-registry.acme.org/k8s/pause:3.1
  3. Create a configmap from the downloaded or updated version manifest, replacing <namespace> with the namespace in which you will deploy Portworx:

    kubectl -n <namespace> create configmap px-versions --from-file=versions.yaml
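Because PXVER and KBVER are already set from Get Portworx container images, the manifest URL in step 1 can also be assembled from variables. A sketch; the version values below are examples only:

```shell
#!/bin/sh
PXVER=2.13.3    # example; use the Portworx version you exported earlier
KBVER=1.26.4    # example; use the Kubernetes version detected from your cluster
OPVER=23.5.1    # example Operator version
URL="https://install.portworx.com/${PXVER}/version?kbver=${KBVER}&opver=${OPVER}"
echo "$URL"

# Download the manifest on an internet-connected host (uncomment to run):
# curl -o versions.yaml "$URL"
```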

Install NFS packages for SharedV4

To install the NFS package on your host systems so that Portworx can use the SharedV4 feature, follow these steps:

  1. Start the repository container as a standalone service in Docker:

    docker run -p 8080:8080 docker.io/portworx/px-repo:1.2.0
  2. Using a browser within your air-gapped environment, navigate to the host IP address where the container is running (for example, http://<ip-address>:8080). Follow the instructions that the container provides for your Linux distribution to configure your hosts to use the package repository service and to install the NFS packages.


Generate the Portworx specification

To install Portworx, first generate the Kubernetes manifests that you will deploy in your Amazon EKS cluster.

  1. Sign in to the Portworx Central console.
    The system displays the Welcome to Portworx Central! page.

  2. In the Portworx Enterprise section, select Generate Cluster Spec.
    The system displays the Generate Spec page.

  3. From the Portworx Version dropdown menu, select the Portworx version to install.

  4. From the Platform dropdown menu, select AWS.

  5. From the Distribution Name dropdown menu, select Elastic Kubernetes Service (EKS).

  6. Click Customize.

  7. On the Basic tab:

    1. Select the Use the Portworx Operator and Built-in ETCD checkboxes.
    2. From the Portworx version dropdown, select the same version that you set in the PXVER environment variable.
    3. Click Next.
  8. On the Storage tab, retain the recommended default values and click Next.

  9. On the Network tab:

    1. Enter the Data Network Interface to be used for data traffic.
    2. Enter the Management Network Interface to be used for management traffic.
    3. Enter the Starting port for Portworx services.
    4. Click Next.
  10. On the Customize tab:

    1. In the Customize section, under Are you running on either of these?, select None.
    2. In the Registry and Image Settings section:
      • If you use a single private registry, enter the internal registry path and the details for how to connect to your private registry in the Custom Container Registry Location field.
      • If you use multiple private registries, leave the Custom Container Registry Location field blank.
    3. In the Advanced Settings section, clear the Enable Telemetry checkbox.
  11. Select Finish to generate the specs.

Deploy Portworx Operator

Deploy the Operator by running the command that Portworx Central provided, which looks similar to the following:

kubectl apply -f "https://install.portworx.com/<portworx_version>?comp=pxoperator"
serviceaccount/portworx-operator created
podsecuritypolicy.policy/px-operator created
clusterrole.rbac.authorization.k8s.io/portworx-operator created
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
deployment.apps/portworx-operator created

Deploy StorageCluster

Deploy the StorageCluster by running the command that Portworx Central provided, which looks similar to the following:

kubectl apply -f "https://install.portworx.com/<portworx_version>?operator=true&mc=false&kbver=&b=true&c=px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b&stork=true&csi=true&mon=true&tel=false&st=k8s&reg=XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com&rsec=ecr-pxmirror&promop=true"
storagecluster.core.libopenstorage.org/px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b created

Verify Portworx pod status

List and filter the results for Portworx pods, and specify the namespace where you deployed Portworx:

kubectl get pods -n <px-namespace> -o wide | grep -e portworx -e px
NAME                                                    READY   STATUS    RESTARTS         AGE     IP                NODE                   NOMINATED NODE   READINESS GATES
portworx-api-774c2 1/1 Running 0 2m55s 192.168.121.196 username-k8s1-node0 <none> <none>
portworx-api-t4lf9 1/1 Running 0 2m55s 192.168.121.99 username-k8s1-node1 <none> <none>
portworx-api-dvw64 1/1 Running 0 2m55s 192.168.121.99 username-k8s1-node2 <none> <none>
portworx-kvdb-94bpk 1/1 Running 0 4s 192.168.121.196 username-k8s1-node0 <none> <none>
portworx-kvdb-8b67l 1/1 Running 0 10s 192.168.121.196 username-k8s1-node1 <none> <none>
portworx-kvdb-fj72p 1/1 Running 0 30s 192.168.121.196 username-k8s1-node2 <none> <none>
portworx-operator-58967ddd6d-kmz6c 1/1 Running 0 4m1s 10.244.1.99 username-k8s1-node0 <none> <none>
prometheus-px-prometheus-0 2/2 Running 0 2m41s 10.244.1.105 username-k8s1-node0 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-9gs79 2/2 Running 0 2m55s 192.168.121.196 username-k8s1-node0 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-vpptx 2/2 Running 0 2m55s 192.168.121.99 username-k8s1-node1 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-bxmpn 2/2 Running 0 2m55s 192.168.121.191 username-k8s1-node2 <none> <none>
px-csi-ext-868fcb9fc6-54bmc 4/4 Running 0 3m5s 10.244.1.103 username-k8s1-node0 <none> <none>
px-csi-ext-868fcb9fc6-8tk79 4/4 Running 0 3m5s 10.244.1.102 username-k8s1-node2 <none> <none>
px-csi-ext-868fcb9fc6-vbqzk 4/4 Running 0 3m5s 10.244.3.107 username-k8s1-node1 <none> <none>
px-prometheus-operator-59b98b5897-9nwfv 1/1 Running 0 3m3s 10.244.1.104 username-k8s1-node0 <none> <none>

Note the name of a px-cluster pod. You will run pxctl commands from these pods in Verify pxctl cluster provision status.

Verify Portworx cluster status

You can find the status of the Portworx cluster by running pxctl status from a pod. Enter the following kubectl exec command, specifying the pod name you retrieved in Verify Portworx pod status:

kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e
IP: 192.168.121.99
Local Storage Pool: 1 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
0 HIGH raid0 3.0 TiB 10 GiB Online default default
Local Storage Devices: 3 devices
Device Path Media Type Size Last-Scan
0:1 /dev/vdb STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
0:2 /dev/vdc STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
0:3 /dev/vdd STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
* Internal kvdb on this node is sharing this storage device /dev/vdc to store its data.
total - 3.0 TiB
Cache Devices:
* No cache devices
Cluster Summary
Cluster ID: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d
Cluster UUID: xxxxxxxx-xxxx-xxxx-xxxx-6f3fd5522eae
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
192.168.121.196 xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc username-k8s1-node0 Disabled Yes 10 GiB 3.0 TiB Online Up 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core)
192.168.121.99 xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e username-k8s1-node1 Disabled Yes 10 GiB 3.0 TiB Online Up (This node) 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core)
192.168.121.191 xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a username-k8s1-node2 Disabled Yes 10 GiB 3.0 TiB Online Up 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core)
Global Storage Pool
Total Used : 30 GiB
Total Capacity : 9.0 TiB

The status displays PX is operational when the cluster is running as expected. If the cluster is using the PX-StoreV2 datastore, the StorageNode entries for each node display Yes(PX-StoreV2).

Verify pxctl cluster provision status

  1. Access the Portworx CLI.

  2. Run the following command to find the storage cluster:

    kubectl -n <px-namespace> get storagecluster
    NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
    px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d xxxxxxxx-xxxx-xxxx-xxxx-6f3fd5522eae Online 2.11.0 10m

    The status must show that the cluster is Online.

  3. Run the following command to find the storage nodes:

    kubectl -n <px-namespace> get storagenodes
    NAME                  ID                                     STATUS   VERSION          AGE
    username-k8s1-node0 xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc Online 2.11.0-81faacc 11m
    username-k8s1-node1 xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e Online 2.11.0-81faacc 11m
    username-k8s1-node2 xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a Online 2.11.0-81faacc 11m
  4. Verify the Portworx cluster provision status by running the following command.
    Specify the pod name you retrieved in Verify Portworx pod status.

    kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
    NODE					        NODE STATUS	 POOL					      POOL STATUS  IO_PRIORITY	SIZE	AVAILABLE	USED   PROVISIONED ZONE REGION	RACK
    0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 8ec9e6aa-7726-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
    1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 06fcc73a-7e2f-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
    24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default

What to do next

Create a PVC. For more information, see Create your first PVC.