Docker interaction with Portworx
Portworx implements the Docker Volume Plugin Specification.
The plugin API allows creation, instantiation, and lifecycle management of Portworx volumes, enabling direct use by Docker and DC/OS via dvdi.
Discovery
Docker scans the plugin directory (/run/docker/plugins) on startup and whenever a user or a container requests a plugin by name.
When the Portworx container is run, a unix domain socket pxd.sock is exported under the /var/run/docker/plugins directory. Portworx volumes are shown as owned by the volume driver pxd.
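As a quick check, you can confirm that the plugin socket is present and list the volumes owned by the pxd driver. This is a minimal sketch; the socket path assumes a default Portworx installation:
ls /var/run/docker/plugins/pxd.sock
docker volume ls --filter driver=pxd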
Create
See https://docs.docker.com/engine/reference/commandline/volume_create/
Portworx volumes are created by specifying the volume driver as pxd.
Here is an example of how to create a 10GB volume with replication factor set to 3:
docker volume create --driver pxd \
--opt size=10G \
--opt repl=3 \
--name my_portworx_vol
Docker checks its local cache before sending the create request to Portworx. For this reason, Portworx by Pure Storage recommends not mixing create and delete operations between pxctl and docker. If a volume with the same name is created again, the operation is a no-op.
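For example, repeating a create with the same name simply returns the existing volume (a sketch of the expected behavior):
docker volume create --driver pxd --name my_portworx_vol
docker volume create --driver pxd --name my_portworx_vol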
Use of options in docker volume create
You can include any desired volume options with the volume create command:
--opt io_priority=high
The following table lists what options you can include:
Name | Description | Example |
---|---|---|
fs | Specifies the filesystem to be laid out: xfs|ext4. Note: ext4 is the preferred and default filesystem across all backends. The default backend also supports xfs; the PX-StoreV2 backend supports only ext4. | fs: "ext4" |
repl | Specifies the replication factor for the volume: 1|2|3 | repl: "3" |
sharedv4 | Creates a globally shared namespace volume which can be used by multiple pods over NFS with POSIX compliant semantics | sharedv4: "true" |
sharedv4_svc_type | Indicates the mechanism Kubernetes will use for locating your sharedv4 volume. If you use this flag and there's a failover of the nodes running your sharedv4 volume, you no longer need to restart your pods. Possible values are: ClusterIP or LoadBalancer. | sharedv4_svc_type: "ClusterIP" |
sharedv4_failover_strategy | Specifies how aggressively to fail over to a new server for a Sharedv4 or Sharedv4 Service volume (Valid Values: aggressive, normal). The default failover strategy for sharedv4 service volumes is aggressive, because these volumes are able to fail over without restarting all the application pods. For more information, see Sharedv4 failover and failover strategy. | sharedv4_failover_strategy: "normal" |
priority_io | Specifies IO Priority: low|medium|high. The default is low | priority_io: "high" |
io_profile | Overrides I/O algorithm that Portworx uses for a volume. For more information about IO profiles, see the IO profiles section of the documentation. | io_profile: "db" |
group | Specifies the group a volume should belong to. Portworx restricts replication sets of volumes of the same group on different nodes. If the force group option 'fg' is set to true, the volume group rule is strictly enforced. By default, it is not strictly enforced. | group: "volgroup1" |
fg | Enforces volume group policy. If a volume belonging to a group cannot find nodes for its replication sets which don't have other volumes of the same group, the volume creation will fail. | fg: "true" |
label | Arbitrary key: value labels that can be applied on a volume | label: "name: mypxvol" |
nodes | Specifies comma-separated Portworx Node IDs to use for replication sets of the volume | nodes: "minion1,minion2" |
ephemeral | Creates an ephemeral volume | ephemeral: false |
size | Specifies a volume size in GB (default 1) | size: "1073741824" |
block_size | Specifies a block size in Bytes (default 4096) | block_size: "4096" |
queue_depth | Specifies a block device queue depth. (Valid Range: [1 256]) (default 128) | queue_depth: 128 |
snap_interval | Specifies an interval in minutes at which periodic snapshots will be triggered. Set to 0 to disable snapshots | |
snap_schedule | Specifies the name of the snapshot schedule policy created using the pxctl sched-policy command | Refer to this page for examples |
zones | Specify comma-separated zone names in which the volume replicas should be distributed | |
racks | Specify comma-separated rack names in which the volume replicas should be distributed | |
async_io | Enables asynchronous IO processing on the backend drives and could be useful in situations where the workload profile is bursty in nature. Please reach out to Portworx Support before enabling. | async_io: false |
csi_mount_options | Specifies the mounting options for a volume through CSI | |
sharedv4_mount_options | Specifies a comma-separated list of Sharedv4 NFS client mount options provided as key=value pairs | |
proxy_endpoint | Specifies the endpoint address of the external NFS share Portworx is proxying | proxy_endpoint: "nfs://<nfs-share-endpoint>" |
proxy_nfs_subpath | Specifies the sub-path from the NFS share to which this proxy volume has access | |
proxy_nfs_exportpath | Specifies the export path for the NFS proxy volume | proxy_nfs_exportpath: "/<mount-path>" |
export_options | Defines the export options. Currently, only NFS export options are supported for Sharedv4 volumes | |
mount_options | Specifies the mounting options for a volume when it is attached and mounted | |
best_effort_location_provisioning | Requested nodes, zones, racks are optional | |
direct_io | Enables Direct IO on a volume | direct_io: "true" |
scan_policy_trigger | Specifies the trigger point on which filesystem check is triggered. Valid Values: none, on_mount, on_next_mount | |
scan_policy_action | Specifies a filesystem scan action to be taken when triggered. Valid Values: none, scan_only, scan_repair | |
force_unsupported_fs_type | Forces a filesystem type that is not supported. The driver may still refuse to use the type | force_unsupported_fs_type: false |
match_src_vol_provision | Provisions the restore volume on the same pools as the source volume (src volume must exist) | |
nodiscard | Mounts the volume with the nodiscard option. This is useful when the volume undergoes a large number of block discards and the application later rewrites these discarded blocks, making the discard work done by Portworx useless. This option must be used along with auto_fstrim. Refer to this page for limitations on xfs formatted volumes. | nodiscard: false |
auto_fstrim | Enables auto_fstrim on a volume and requires the nodiscard option to be set. Refer to this page for more details. | auto_fstrim: true |
storagepolicy | Creates a volume on the Portworx cluster that follows the specified set of specs/rules. Refer to this page for more details. | |
backend | Specifies which storage backend Portworx is going to provide direct access to. (Valid Values: pure_block, pure_file) | backend: "pure_block" |
pure_export_rules | Specifies the export rules for exporting a Pure FlashBlade volume | pure_export_rules: "*(rw)" |
io_throttle_rd_iops | Specifies maximum Read IOPs a volume will be throttled to. Refer to this page for more details. | io_throttle_rd_iops: "1024" |
io_throttle_wr_iops | Specifies maximum Write IOPs a volume will be throttled to. Refer to this page for more details. | io_throttle_wr_iops: "1024" |
io_throttle_rd_bw | Specifies maximum Read bandwidth a volume will be throttled to. Refer to this page for more details. | io_throttle_rd_bw: "10" |
io_throttle_wr_bw | Specifies maximum Write bandwidth a volume will be throttled to. Refer to this page for more details. | io_throttle_wr_bw: "10" |
aggregation_level | Specifies the number of replication sets the volume can be aggregated from | aggregation_level: "2" |
sticky | Creates sticky volumes that cannot be deleted until the flag is disabled | sticky: "true" |
journal | Indicates if you want to use journal device for the volume's data. This will use the journal device that you used when installing Portworx. This is useful to absorb frequent syncs from short bursty workloads. Default: false | journal: "true" |
secure | Creates an encrypted volume. For details about how you can create encrypted volumes, see the Create encrypted PVCs page. | secure: "true" |
placement_strategy | Specifies the name of the VolumePlacementStrategy to apply. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: postgres-storage-class provisioner: pxd.portworx.com parameters: placement_strategy: "postgres-volume-affinity" For details about how to create and use VolumePlacementStrategy, see [this page](/docs/portworx-enterprise/operations/operate-kubernetes/storage-operations/create-pvcs/volume-placement-strategies/create-use-volplacestrat.md). | placement_strategy: "postgres-volume-affinity" |
snapshotschedule.stork.libopenstorage.org | Creates scheduled snapshots with Stork. For example: snapshotschedule.stork.libopenstorage.org/default-schedule: schedulePolicyName: daily annotations: portworx/snapshot-type: local snapshotschedule.stork.libopenstorage.org/weekly-schedule: schedulePolicyName: weekly annotations: portworx/snapshot-type: cloud portworx/cloud-cred-id: <credential-uuid> NOTE: This example references two schedules: default-schedule takes a local snapshot daily, and weekly-schedule takes a cloud snapshot weekly. For details about how you can create scheduled snapshots with Stork, see the Scheduled snapshots page. | |
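To illustrate, several of these options can be combined in a single docker volume create call. The following is a sketch; the volume name and option values are arbitrary:
docker volume create --driver pxd \
--opt size=20G \
--opt repl=2 \
--opt io_priority=high \
--opt io_profile=db \
--name my_db_vol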
Replicaset
Specify replica nodes
Specifying multiple nodes through docker volume create is supported from version 1.3.0.1.
Use the nodes option to specify the nodes you wish the replicas to reside on.
Some valid examples of this are:
- nodes="xxxxxxxx-xxxx-xxxx-xxxx-3b95b3236efc;xxxxxxxx-xxxx-xxxx-xxxx-8f5e1035ec1e"
- nodes="xxxxxxxx-xxxx-xxxx-xxxx-3b95b3236efc"
- nodes='xxxxxxxx-xxxx-xxxx-xxxx-3b95b3236efc;xxxxxxxx-xxxx-xxxx-xxxx-8f5e1035ec1e'
- nodes='xxxxxxxx-xxxx-xxxx-xxxx-3b95b3236efc'
- nodes=xxxxxxxx-xxxx-xxxx-xxxx-3b95b3236efc
It is important to note that the number of nodes should equal the repl option; otherwise, Portworx will pick a node for the remaining requested replicas.
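For example, the following creates a volume with two replicas pinned to two specific nodes. This is a sketch reusing the placeholder node IDs above; substitute the Portworx node IDs from your cluster:
docker volume create --driver pxd \
--opt repl=2 \
--opt nodes="xxxxxxxx-xxxx-xxxx-xxxx-3b95b3236efc;xxxxxxxx-xxxx-xxxx-xxxx-8f5e1035ec1e" \
--name my_pinned_vol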
Snapshot
Scheduled snapshots
Scheduled snapshots are only available in Portworx 1.3 and higher.
Use the snap_schedule option to specify the snapshot schedule.
Following are the accepted formats:
periodic=mins,snaps-to-keep
daily=hh:mm,snaps-to-keep
weekly=weekday@hh:mm,snaps-to-keep
monthly=day@hh:mm,snaps-to-keep
snaps-to-keep is optional. By default, periodic, daily, weekly, and monthly schedules keep the last 5, 7, 5, and 12 snapshots, respectively.
Some examples of snapshot schedules are:
- snap_schedule="periodic=60,10"
- snap_schedule="daily=12:00,4"
- snap_schedule="weekly=sunday@12:00,2"
- snap_schedule="monthly=15@12:00"
Scheduled snapshots are not taken if the volume you are trying to snapshot is not attached to a container.
On-demand snapshots
There is no explicit Snapshot operation in the Docker plugin API. However, this can be achieved via the create operation: specifying a parent option creates a snapshot.
The following command creates the volume snap_of_my_portworx_vol by taking a snapshot of my_portworx_vol:
docker volume create --driver pxd \
--opt parent=my_portworx_vol \
--name snap_of_my_portworx_vol
The snapshot can then be used as a regular Portworx volume.
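For example, the snapshot can be mounted into a container just like its parent (a sketch; busybox is used only as a convenient test image):
docker run -it -v snap_of_my_portworx_vol:/data busybox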
Mount
The Mount operation mounts the Portworx volume at the propagated mount location. If the device is unattached, Mount implicitly performs an attach as well. Mounts are reference counted and idempotent: the same volume can be mounted at multiple locations on the same node, and the same device can be mounted at the same location multiple times.
Attach
The Docker plugin API does not have an Attach call; Attach is invoked internally by Mount on the first mount call for the volume.
Portworx exports virtual block devices in the host namespace. This is done via the Portworx container running on the system and does not rely on an external protocol such as iSCSI or NBD. Portworx virtual block devices only exist in host kernel memory. Two interesting consequences of this architecture are:
- Volumes can be unmounted from dead or disconnected nodes.
- I/Os to Portworx volumes can survive a Portworx restart.
A Portworx volume can be attached to any participating node in the cluster, although it can be attached to only one node at any given time. The node where the Portworx volume is attached is deemed the transaction coordinator, and all I/O access to the volume is arbitrated by that node.
Attach is idempotent: multiple attach calls for a volume on the same node will return success. Attach on a node will return a failure if the device is attached on a different node.
The following command will instantiate a virtual block device in the host namespace and mount it under the propagated mount location. The mounted volume is then bind mounted under /data in the busybox container.
docker run -it -v my_portworx_vol:/data busybox
Running it again will create a second instance of busybox and another bind mount, and the Portworx volume reference count will be 2. Both containers must exit for the Portworx volume to be unmounted (and detached).
Unmount
The Unmount operation unmounts the Portworx volume from the propagated mount location. If this is the last surviving mount on a volume, the volume is detached as well. Once successfully unmounted, the volume can be mounted on any other node in the system.
Detach
The Docker plugin API does not have a Detach call; Detach is invoked internally by Unmount on the last unmount call for the volume.
The Detach operation involves unexporting the virtual block device from the host namespace. As with attach, this is accomplished via the Portworx container and does not require any external protocol. Detach is idempotent: multiple detach calls on the same device will return success. Detach is not allowed if the device is mounted on the system.
Remove
Remove will delete the underlying Portworx volume and all associated data. The operation will fail if the volume is mounted.
The following command will remove the volume my_portworx_vol:
docker volume rm my_portworx_vol
Capabilities
The Portworx volume driver identifies itself as a global driver. Portworx operations can be executed on any node in the cluster, and Portworx volumes can be used and managed from any node in the cluster.
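For example, a volume created on one node can be listed and used from any other node in the cluster, assuming it is not currently attached elsewhere (a sketch):
docker volume ls --filter driver=pxd
docker run -it -v my_portworx_vol:/data busybox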