Release 24.07.01

July 24, 2024

In this release, PDS is integrated as a core component of the broader Portworx platform, transitioning from a standalone product to an integral part of the Portworx ecosystem. This integration brings enhanced capabilities, streamlined management, and an improved user experience across the platform.

As part of the Portworx platform, PDS version 24.07.01 offers unified access to data services and storage solutions, allowing for more cohesive and efficient management of your data infrastructure.

note

If you are using PDS version 123.3, refer to the documentation for that version.

Key changes from PDS version 123

User interface update

The user interface has undergone a significant update, transitioning from a dark theme to a light theme. This change aims to enhance readability and provide a more visually appealing experience for users.

Introduction of Projects

A new feature called Projects has been introduced to help you organize and manage your resources more effectively. Projects allow you to group related resources, such as data services, templates, and backup locations, into a single entity. This helps streamline resource management, improves accessibility, and enables more granular control over permissions and access within the platform. With Projects, you can easily monitor and manage the resources associated with specific initiatives or teams, ensuring better organization and efficiency.

Deprecation of ZooKeeper data service

The ZooKeeper data service has been deprecated in this release. Users who rely on ZooKeeper are encouraged to migrate their workloads to alternative services supported by the PDS platform. This change is part of our ongoing efforts to streamline the platform and focus on supporting a more robust and widely-used set of data services.

Refactored APIs

The APIs have been refactored to improve performance, security, and usability. This refactoring includes changes to the structure and behavior of existing APIs, as well as the introduction of new endpoints to support upcoming features. The updated APIs provide a more consistent and efficient interface for integrating with the PDS platform, making it easier for developers to build and maintain applications that leverage our data services. Detailed documentation on the new and updated APIs is available to help users transition smoothly.

New data services versions

The following new versions of data services are now supported:

Cassandra
  • 4.0.13
  • 4.1.5
Consul
  • 1.18.2
Couchbase
  • 7.2.4
Elasticsearch
  • 8.13.4
Kafka
  • 3.6.2
MongoDB Enterprise
  • 6.0.15
  • 7.0.11
MS SQL Server
  • 2022-CU13
MySQL
  • 8.0.37
PostgreSQL
  • 13.15
  • 14.12
  • 15.7
  • 16.3
RabbitMQ
  • 3.12.14
Redis
  • 7.0.15
  • 7.2.5

Known Issues

DS-9272
Issue: Updates to backup locations, backup schedules, and cloud credentials in target clusters consuming these resources are not supported in this release. However, you can edit these resources in PDS version 123.
User Impact: You will not be able to update backup locations, backup schedules, and cloud credentials across target clusters. This limitation requires you to manually handle changes by deleting and recreating resources with new data.
Workaround: To manage changes to backup locations, backup schedules, and cloud credentials, you can create new backup locations, backup schedules, or cloud credentials as needed, then update the deployments to point to these new resources. This ensures the continuity of backup operations without depending on updates.
DS-9298
Issue: Metrics for data services are unavailable in this release.
User Impact: The unavailability of metrics impacts the ability to monitor and analyze the performance and health of data services, hindering operational oversight and troubleshooting efforts.
Workaround: Metrics for data services will be reinstated in a future release. For immediate needs, consider using external monitoring tools if metrics are critical to your operations.
DS-9443
Issue: Cross-cluster and same-cluster restores fail with the error message:
failed to create a new service: namespaces not found
This issue occurs when the source namespace is not present on the destination cluster during the restore process.
User Impact: When you attempt to perform cross-cluster or same-cluster restores, you will encounter failures if the source namespace does not exist on the destination cluster. This prevents successful restoration of services and data.
Workaround: Ensure that the source namespace is created and present on the destination cluster before initiating the restore process. This can be done by manually creating the namespace or by ensuring the namespace is included in any pre-restore setup procedures.
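The namespace pre-creation step above can be sketched as a manifest applied to the destination cluster before initiating the restore (the namespace name `pds-restore-demo` below is only a placeholder; use the actual source namespace name):

```yaml
# Placeholder Namespace object; replace the name with the source
# namespace expected by the restore. Pre-creating it avoids the
# "namespaces not found" error during cross-cluster restores.
apiVersion: v1
kind: Namespace
metadata:
  name: pds-restore-demo
```

Apply it with `kubectl apply -f namespace.yaml` against the destination cluster, or create the namespace directly with `kubectl create namespace <name>`.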
DS-9653
Issue: When attempting to delete a template that is associated with an existing deployment, you will encounter the following error message:
unable to delete templates as they are currently in use by 1 application resources
This issue arises because PDS incorrectly treats templates as dependent on deployments, preventing their deletion while they are in use.
User Impact: You will not be able to delete templates that are currently associated with any deployments. This limitation affects the ability to manage and clean up unused templates, potentially leading to clutter and confusion in the template management interface.
Workaround: You can manually dissociate or delete the deployments that are using the template before attempting to delete the template itself.
DS-10068
Issue: Renaming the target cluster does not fully propagate the new name across all components. For instance, if a cluster was originally named abc and a database was deployed, renaming the cluster afterwards does not update all references to the new cluster name.
User Impact: When you rename your target cluster, you may encounter inconsistencies where some components or references still use the old cluster name. This can lead to confusion, difficulty in managing resources, and potential operational issues due to mismatched cluster names.
Workaround: After renaming a target cluster, you should manually verify and update all components and references to ensure they reflect the new cluster name.
DS-10229
Issue: Backup locations can be added even when you enter invalid cloud configuration details, such as an incorrect bucket name, region, or endpoint. Although these backup locations are added successfully, any backup operations that attempt to use them will fail.
User Impact: You may unintentionally add backup locations with incorrect cloud configuration details. This can lead to backup operations failing when these invalid locations are selected, resulting in unsuccessful backups and potential data protection issues.
Workaround: You should carefully verify the cloud configuration details (bucket name, region, and endpoint) before adding a backup location. If an invalid backup location has already been added, delete it and re-add the location with the correct configuration details.
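The verification step above can be partially automated on the client side. The sketch below checks an S3-style bucket name against Amazon S3's published naming rules (3 to 63 characters; lowercase letters, digits, hyphens, and dots; starting and ending with a letter or digit; no consecutive dots or IP-address-like names). It is a basic sanity check only; it does not confirm that the bucket actually exists or is reachable from your cloud credentials.

```python
import re

# Client-side sanity check for S3-style bucket names. This only catches
# obviously malformed names before they are saved as a backup location;
# it does not verify existence, region, or endpoint reachability.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_plausible_bucket_name(name: str) -> bool:
    # Reject names outside the allowed character set or length bounds.
    if not _BUCKET_RE.match(name):
        return False
    # Consecutive dots are not allowed.
    if ".." in name:
        return False
    # Names formatted like IPv4 addresses are not allowed.
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    return True
```

Running a check like this before submitting a backup location catches typos early; region and endpoint values still need to be confirmed against your cloud provider's console.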
DS-10259
Issue: Data service deployments become stuck in the Unavailable status if an incorrect cluster issuer is provided when TLS is enabled on the target cluster. In this scenario, the Custom Resource (CR) is observed in the deployed state, but the StatefulSet does not come up in the target cluster. Consequently, data services remain in an Unknown state.
User Impact: You will experience data service deployments getting stuck in the Unavailable status, preventing the deployment from progressing. This results in the inability to use the deployed data services, leading to potential disruptions and delays in operations.
Workaround: Ensure that the correct cluster issuer value is provided during the data service deployment process when TLS is enabled. Verify the validity of the cluster issuer before initiating the deployment. If a deployment is already stuck, correct the cluster issuer value and reattempt the deployment.
DS-10282
Issue: Backup configurations are observed to revert to the APPLIED state after being set to the DELETING state.
User Impact: When you attempt to delete a backup schedule, the configuration does not remain deleted and returns to the APPLIED state. This can lead to confusion and potential mismanagement of backup schedules, affecting your backup operations and resource allocation.
Workaround: This issue is under investigation and will be resolved in a future release. In the meantime, closely monitor backup configurations after attempting deletion to ensure they maintain the desired state.
DS-10285
Issue: Once scheduled, backup schedules cannot be deleted from the UI. However, you can delete backup schedules using the DeleteBackupConfig API or suspend the backup schedules from the UI.
User Impact: When you schedule backups through the UI, you will not be able to delete these schedules via the same interface. This limitation may lead to inconvenience as you must resort to API calls to manage your backup schedules.
Workaround: To delete backup schedules, you can use the DeleteBackupConfig API. This issue will be addressed in a future release to allow backup schedule deletions directly from the UI.
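As a hypothetical sketch of scripting the API workaround above, the helper below assembles a DeleteBackupConfig request. The endpoint path (`/backup-configs/<id>`) and bearer-token auth header are assumptions for illustration only; consult the PDS API reference for the actual route and authentication scheme.

```python
# Hypothetical request builder for the DeleteBackupConfig API call.
# The URL path and Authorization header shape below are assumptions,
# not the documented PDS API surface.
def build_delete_backup_config_request(base_url: str,
                                       backup_config_id: str,
                                       token: str):
    """Return (method, url, headers) for a DeleteBackupConfig call."""
    url = f"{base_url.rstrip('/')}/backup-configs/{backup_config_id}"
    headers = {"Authorization": f"Bearer {token}"}
    return ("DELETE", url, headers)
```

The resulting tuple can be passed to any HTTP client (for example, `requests.request(method, url, headers=headers)`) once the real endpoint is substituted in.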
DS-10342
Issue: Unable to delete a backup policy after updating a schedule policy. This happens because when a schedule is updated, the backend creates a new schedule and suspends the existing one. The suspended schedule retains its association with the original backup policy, preventing the deletion of the old backup policy.
User Impact: Users are unable to delete obsolete backup policies after updating schedule policies. This can lead to confusion and clutter in the backup policy management interface, as old policies that are no longer in use cannot be removed.
Workaround: Manually identify and delete suspended schedules that reference the old backup policies before attempting to delete the backup policies. You can do this by listing all suspended schedules, deleting those that are no longer in use and still reference the old backup policies, and then attempting to delete the backup policies again.
DS-10550
Issue: Unable to delete a backup location if it was used in a failed backup operation. This occurs because the backup configuration object retains a reference to the backup location, even if the backup operation fails. This persistent reference prevents the deletion of the backup location and applies to both ad-hoc and scheduled backups.
User Impact: You cannot delete backup locations that were used in failed backup operations. This can lead to clutter and confusion in the backup location management interface, as incorrect or unused backup locations cannot be removed.
Workaround: Manually remove any backup configurations that reference the backup location before attempting to delete the location. You can do this by identifying the backup configurations that reference the failed backup location, deleting these backup configurations, and then attempting to delete the backup location again.