
Install

Prerequisites

  • You must have a Kubernetes cluster with a minimum of three worker nodes.
  • You must have Portworx installed on your Kubernetes cluster. For details about how you can install Portworx on Kubernetes, see the Portworx on Kubernetes page.
  • You must have Stork installed on your Kubernetes cluster. For details about how you can install Stork, see the Stork page.

Install Cassandra

  1. Enter the following kubectl apply command to create a headless service:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: cassandra
      name: cassandra
    spec:
      clusterIP: None
      ports:
      - port: 9042
      selector:
        app: cassandra
    EOF
    service/cassandra created

    Note the following about this service:

  • The spec.clusterIP field is set to None.
  • The spec.selector.app field is set to cassandra. The Kubernetes endpoints controller will configure the DNS to return addresses that point directly to your Cassandra Pods.
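
    To confirm that the headless service resolves to individual Pod addresses once the Cassandra Pods are running (they are created in step 3 below), you can run a DNS lookup from inside the cluster. This is a minimal sketch, assuming a throwaway Pod based on the busybox image, which includes nslookup:

    kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup cassandra
    # Because spec.clusterIP is None, the lookup should return one A record
    # per running Cassandra Pod instead of a single cluster IP.
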
  2. Use the following kubectl apply command to create a storage class:

    kubectl apply -f - <<EOF
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: portworx-sc
    provisioner: pxd.portworx.com
    parameters:
      repl: "1"
      priority_io: "high"
      group: "cassandra_vg"
      fg: "true"
    EOF
    storageclass.storage.k8s.io/portworx-sc created

    Note the following about this storage class:

  • The provisioner field is set to pxd.portworx.com. For details about the Portworx-specific parameters, refer to the Portworx Volume section of the Kubernetes documentation.
  • The name of the StorageClass object is portworx-sc.
  • Portworx will create one replica of each volume.
  • Portworx will use a high-priority storage pool.
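
    To double-check the parameters that were applied, you can inspect the object directly. For example:

    kubectl describe storageclass portworx-sc
    # The Parameters line should list fg=true, group=cassandra_vg,
    # priority_io=high, and repl=1.
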
  3. The following command creates a StatefulSet with three replicas and uses the Stork scheduler to place your Pods close to where their data is located:

    kubectl apply -f - <<'EOF'
    apiVersion: "apps/v1"
    kind: StatefulSet
    metadata:
      name: cassandra
    spec:
      selector:
        matchLabels:
          app: cassandra
      serviceName: cassandra
      replicas: 3
      template:
        metadata:
          labels:
            app: cassandra
        spec:
          schedulerName: stork
          containers:
          - name: cassandra
            image: gcr.io/google-samples/cassandra:v12
            imagePullPolicy: Always
            ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
            resources:
              limits:
                cpu: "500m"
                memory: 1Gi
              requests:
                cpu: "500m"
                memory: 1Gi
            securityContext:
              capabilities:
                add:
                - IPC_LOCK
            lifecycle:
              preStop:
                exec:
                  command: ["/bin/sh", "-c", "PID=$(pidof java) && kill $PID && while ps -p $PID > /dev/null; do sleep 1; done"]
            env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "K8Demo"
            - name: CASSANDRA_DC
              value: "DC1-K8Demo"
            - name: CASSANDRA_RACK
              value: "Rack1-K8Demo"
            - name: CASSANDRA_AUTO_BOOTSTRAP
              value: "false"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            readinessProbe:
              exec:
                command:
                - /bin/bash
                - -c
                - /ready-probe.sh
              initialDelaySeconds: 40
              timeoutSeconds: 40
            # These volume mounts are persistent. They are like inline claims,
            # but not exactly because the names need to match exactly one of
            # the stateful pod volumes.
            volumeMounts:
            - name: cassandra-data
              mountPath: /var/lib/cassandra
      # These are converted to volume claims by the controller
      # and mounted at the paths mentioned above.
      volumeClaimTemplates:
      - metadata:
          name: cassandra-data
          annotations:
            volume.beta.kubernetes.io/storage-class: portworx-sc
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 1Gi
    EOF
    statefulset.apps/cassandra created
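
    Because a StatefulSet starts its Pods sequentially, the rollout can take a few minutes. Before validating, you can wait for it to complete:

    kubectl rollout status statefulset/cassandra
    # Blocks until all three replicas are running and ready.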

Validate the cluster functionality

  1. Use the kubectl get pvc command to verify that the PVCs are bound to your persistent volumes:

    kubectl get pvc
    NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    cassandra-data-cassandra-0   Bound    pvc-a9fbca67-713c-4442-xxxx-cafaaf5e20ad   1Gi        RWO            portworx-sc    <unset>                 4m41s
    cassandra-data-cassandra-1   Bound    pvc-e6d39f88-0ea0-47a9-xxxx-c8f5a073d0de   1Gi        RWO            portworx-sc    <unset>                 3m22s
    cassandra-data-cassandra-2   Bound    pvc-2cf36362-12e8-4527-xxxx-1b6a2391c073   1Gi        RWO            portworx-sc    <unset>                 2m21s
  2. Verify that Kubernetes created the portworx-sc storage class:

    kubectl get storageclass
    NAME                 TYPE
    portworx-sc          pxd.portworx.com
  3. List your Pods:

    kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    cassandra-0   1/1     Running   0          8m47s
    cassandra-1   1/1     Running   0          7m29s
    cassandra-2   1/1     Running   0          6m28s
  4. Use the pxctl volume list command to display the volumes in your cluster. Run the following command on one of the nodes where Portworx is installed:

    pxctl volume list
    ID                    NAME                                       SIZE    HA   SHARED   ENCRYPTED   PROXY-VOLUME   IO_PRIORITY   STATUS                           SNAP-ENABLED
    120416509069581339    pvc-2cf36362-12e8-4527-xxxx-1b6a2391c073   1 GiB   1    no       no          no             HIGH          up - attached on 10.13.xxx.x6    no
    583995935441831735    pvc-a9fbca67-713c-4442-xxxx-cafaaf5e20ad   1 GiB   1    no       no          no             HIGH          up - attached on 10.13.xxx.xx8   no
    1014207801779375780   pvc-e6d39f88-0ea0-47a9-xxxx-c8f5a073d0de   1 GiB   1    no       no          no             HIGH          up - attached on 10.13.xxx.xx0   no

    Make a note of the ID of your volume. You'll need it in the next step.

  5. To verify that your Portworx volume has one replica, enter the pxctl volume inspect command, specifying the ID from the previous step. The following example command uses 120416509069581339:

    pxctl volume inspect 120416509069581339
    Volume              :  120416509069581339
    Name                :  pvc-2cf36362-12e8-4527-xxxx-1b6a2391c073
    Group               :  cassandra_vg
    Size                :  1.0 GiB
    Format              :  ext4
    HA                  :  1
    IO Priority         :  HIGH
    Creation time       :  Sep 23 07:29:14 UTC 2024
    Shared              :  no
    Status              :  up
    State               :  Attached: xxxxxxxx-xxxx-xxxx-xxxx-7ee8b9b8c659 (10.13.xxx.x6)
    Last Attached       :  Sep 23 07:29:16 UTC 2024
    Device Path         :  /dev/pxd/pxd120416509069581xx9
    Labels              :  app=cassandra,fg=true,group=cassandra_vg,namespace=default,priority_io=high,pvc=cassandra-data-cassandra-2,repl=1
    Mount Options       :  discard
    Reads               :  51
    Reads MS            :  8
    Bytes Read          :  413696
    Writes              :  31
    Writes MS           :  88
    Bytes Written       :  16867328
    IOs in progress     :  0
    Bytes used          :  624 KiB
    Replica sets on nodes:
      Set 0
        Node            :  10.13.xxx.x6
        Pool UUID       :  99c52975-711a-47ed-xxxx-8a2eeb68615a
    Replication Status  :  Up
    Volume consumers    :
      - Name            :  cassandra-2 (f6a48a84-3da1-48d7-xxxx-18493effc8bf) (Pod)
        Namespace       :  default
        Running on      :  ip-10-13-xxx-x6.pwx.purestorage.com
        Controlled by   :  cassandra (StatefulSet)

    Note that this volume is up and that its consumer is the cassandra-2 Pod. If you want more replicas of each volume, increase the value of the repl parameter when you create the portworx-sc StorageClass. Make sure you also add worker nodes accordingly, with Portworx installed on each of them, to avoid preemption issues.
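
    If you need to change the replication level of an existing volume instead, Portworx supports this through the pxctl volume ha-update command. A minimal sketch, reusing the volume ID from the previous step and assuming another node with available capacity:

    pxctl volume ha-update --repl 2 120416509069581339
    # Portworx synchronizes the new replica in the background; re-run
    # pxctl volume inspect to watch the replication status.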

  6. Show the list of your Pods and the hosts on which Kubernetes scheduled them:

    kubectl get pods -l app=cassandra -o json | jq '.items[] | {"name": .metadata.name,"hostname": .spec.nodeName, "hostIP": .status.hostIP, "PodIP": .status.podIP}'
    {
      "name": "cassandra-0",
      "hostname": "ip-10-13-xxx-xx8.pwx.purestorage.com",
      "hostIP": "10.xx.xxx.xx8",
      "PodIP": "10.xxx.xx.xx8"
    }
    {
      "name": "cassandra-1",
      "hostname": "ip-10-13-xxx-xx0.pwx.purestorage.com",
      "hostIP": "10.xx.xxx.xx0",
      "PodIP": "10.xxx.xx.xx0"
    }
    {
      "name": "cassandra-2",
      "hostname": "ip-10-13-xxx-x6.pwx.purestorage.com",
      "hostIP": "10.xx.xxx.xx6",
      "PodIP": "10.xx.xxx.x"
    }
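
    To confirm that Stork placed each Pod on the node where its volume lives, you can compare the host addresses above with the "attached on" addresses from the pxctl volume list output in step 4. A quicker cross-check:

    kubectl get pods -l app=cassandra -o wide
    # The NODE column should match the nodes on which the corresponding
    # Portworx volumes are attached.
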
  7. To open a shell session into one of your Pods, enter the following kubectl exec command, specifying your Pod name. This example opens a shell in the cassandra-0 Pod:

    kubectl exec -it cassandra-0 -- bash
  8. Use the nodetool status command to retrieve information about your Cassandra setup:

    nodetool status
    Datacenter: DC1-K8Demo
    ======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address        Load        Tokens  Owns (effective)  Host ID                               Rack
    UN  10.2xx.xx.xx0  65.66 KiB   32      64.7%             315bd721-aadd-40a0-xxxx-10ac4e5ff668  Rack1-K8Demo
    UN  10.2xx.xx.xx8  101.01 KiB  32      61.9%             2e686b0f-fd34-4456-xxxx-f664dbd36da6  Rack1-K8Demo
    UN  10.2xx.xx.x    65.66 KiB   32      73.4%             6eccbd09-8641-43b3-xxxx-bac12293b1f9  Rack1-K8Demo
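
    While you are still inside the Pod, you can optionally run a quick write and read smoke test against the cluster. This is a sketch that assumes the cqlsh client is available in the container image; the demo keyspace and kv table are hypothetical names used only for illustration:

    cqlsh -e "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
    cqlsh -e "CREATE TABLE IF NOT EXISTS demo.kv (k text PRIMARY KEY, v text);"
    cqlsh -e "INSERT INTO demo.kv (k, v) VALUES ('hello', 'world');"
    cqlsh -e "SELECT * FROM demo.kv;"
    # The SELECT should return the row you just inserted.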
  9. Terminate the shell session:

    exit