Release 24.10.01
October 24, 2024
New feature
OIDC support
PDS now supports OpenID Connect (OIDC) authentication, enabling seamless integration with external identity providers such as Google, Okta, and Auth0. This allows users to authenticate securely using their organization's existing identity management system, enhancing security and simplifying user access through single sign-on (SSO). Organization administrators can configure OIDC in Portworx Central, providing streamlined and secure access for all users.
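To illustrate what an OIDC integration does under the hood, the sketch below builds the authorization-code request URL that a single sign-on login redirects the user to. This is a generic, hypothetical example, not PDS's implementation: the client ID and redirect URI are made up, and only Google's public authorization endpoint is real.

```python
from urllib.parse import urlencode

def build_oidc_auth_url(authorize_endpoint: str, client_id: str,
                        redirect_uri: str, state: str) -> str:
    """Build the authorization-code request an OIDC login redirects to."""
    params = {
        "response_type": "code",          # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",  # standard OIDC scopes
        "state": state,                   # CSRF protection token
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

url = build_oidc_auth_url(
    "https://accounts.google.com/o/oauth2/v2/auth",  # Google's real endpoint
    "my-client-id",                                  # hypothetical registration
    "https://central.example.com/callback",          # hypothetical redirect URI
    "abc123",
)
print(url)
```

After the identity provider authenticates the user, it redirects back to the registered callback with a one-time code that the platform exchanges for ID and access tokens.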
Enhancements
Consolidated and trusted image repository for PDS
PDS now supports a consolidated image repository managed and owned by Portworx, ensuring that all images, including those previously sourced from external repositories, are reviewed, secured, and hosted in a singular trusted location. This enhancement eliminates the need for PDS users to evaluate and allowlist multiple external repositories, reducing exposure to untrusted sources and preventing potential issues with incompatible or outdated images. By centralizing image management, PDS enhances security, simplifies deployment, and accelerates time to usage for customers using custom repositories.
Lock template for deployment
PDS now includes a template locking feature that allows administrators to lock service configuration templates during deployment. When enabled, this feature prevents users from adding new configuration key-value pairs during deployment, ensuring that all data services adhere strictly to the predefined configurations. This enhancement improves consistency and control over deployments, especially in environments where uniform configurations are critical for maintaining stability and performance.
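The locking rule can be sketched as a merge step that refuses keys absent from the template. This is an illustrative, hypothetical helper (not PDS code): when the template is locked, overriding an existing key is allowed, but introducing a new key is rejected.

```python
def validate_against_template(template: dict, requested: dict, locked: bool) -> dict:
    """Merge user-requested config into a template, rejecting new keys when locked."""
    extra = set(requested) - set(template)
    if locked and extra:
        # A locked template only permits values for keys it already defines.
        raise ValueError(f"template is locked; unknown keys: {sorted(extra)}")
    merged = dict(template)
    merged.update(requested)
    return merged

# Hypothetical PostgreSQL-style template keys for illustration.
template = {"max_connections": "100", "wal_level": "replica"}
validate_against_template(template, {"max_connections": "200"}, locked=True)  # allowed
```

Attempting `validate_against_template(template, {"shared_buffers": "1GB"}, locked=True)` would raise, since the key is not predefined in the template.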
New data service versions
The following new versions of data services are now supported:
Data Service | Versions |
---|---|
Cassandra | |
Couchbase | |
Elastic Search | |
Kafka | |
MongoDB Enterprise | |
MS SQL Server | |
Neo4j Community Edition | |
PostgreSQL | |
RabbitMQ | |
Fixes
Issue Number | Issue Description |
---|---|
DS-10832 | In Teleport version 16.0.4, when deployed on a Kubernetes cluster in AWS (EKS) behind an Application Load Balancer (ALB), there were connectivity problems when integrating with a MiTM (Man-in-the-Middle) proxy server. Specifically, when the Teleport agent routed all traffic through the MiTM proxy server to connect to the Teleport server, the connection would initially establish successfully but was abruptly dropped shortly afterward. User Impact: Users running Teleport on EKS behind an ALB and utilizing a MiTM proxy server experienced unexpected connection drops between the Teleport agent and server. This issue interrupted connectivity and caused instability in environments that relied on MiTM proxies for traffic routing. Resolution: This issue has been resolved in the current release. Teleport now maintains a stable connection when routing traffic through a MiTM proxy server in environments running on Kubernetes (EKS) behind an ALB. Users can now integrate Teleport with MiTM proxies without experiencing connection disruptions. |
Known issues (Errata)
Issue Number | Issue Description |
---|---|
DS-12443 | Issue: Backup location validation in PDS is only performed during the initial backup location creation. After the backup location is created with valid credentials and a valid location (for example, S3, S3-compatible, Azure), the validation process does not repeat during subsequent backups or ad-hoc backup operations. If the bucket or location is deleted after the initial backup location creation, backups will still proceed as if the location is valid, but they will ultimately fail, leaving a reference to the invalid backup location. User Impact: Backups may proceed even if the underlying bucket or location has been deleted, causing failures later in the process while still maintaining incorrect references to the deleted backup location. Additionally, if users attempt to re-create the backup location using the same credentials and bucket details, ad-hoc backups will fail due to re-validation, which correctly identifies the missing or invalid bucket. Workaround: To prevent failed backups after a bucket or location has been deleted, ensure that the backup location remains valid and intact following its initial creation. If the bucket is deleted, create a new backup location with updated bucket details and credentials to ensure backups proceed successfully. |
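The gap described in DS-12443 suggests a pre-flight check pattern: re-validate the backup location before every backup rather than only at creation time. The sketch below is a hypothetical illustration; `location_exists` and `start_backup` are stand-ins for a real cloud check (for example, an S3 HeadBucket call) and the actual backup trigger.

```python
from typing import Callable

def run_backup(location_id: str,
               location_exists: Callable[[str], bool],
               start_backup: Callable[[str], str]) -> str:
    """Re-validate a backup location on every backup, not just at creation."""
    if not location_exists(location_id):
        # Fail fast with an actionable message instead of leaving a
        # dangling reference to a deleted bucket.
        raise RuntimeError(f"backup location {location_id} is no longer valid; "
                           "re-create it before backing up")
    return start_backup(location_id)

# Usage with stubbed checks standing in for real cloud calls.
print(run_backup("loc-1", lambda _id: True, lambda _id: "backup-started"))
```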
Release 24.09.02
September 25, 2024
New data service
Neo4j Community Edition
The Neo4j Community Edition data service is now available on the PDS platform. This data service provides powerful graph database capabilities, making it ideal for applications that require complex relationship mappings, such as recommendation engines and fraud detection. The integration enhances data analytics and improves performance for relationship-intensive queries, driving innovation and offering a competitive edge.
See the supported version for the Neo4j Community Edition data service.
Enhancement
Microsoft SQL Server Always On availability groups
While SQL Server is already supported in PDS, the integration of SQL Server Always On availability groups (AG) enhances data availability. It offers automatic failover, ensuring minimal downtime for critical applications. This results in a robust, scalable, and resilient database infrastructure for PDS customers.
See the supported version and configurable parameters for SQL Server data service.
Fixes
Issue Number | Issue Description |
---|---|
DS-10584 | Project administrators lacked the capability to view, re-invite, or delete member invitations they had sent. This limitation required Project administrators to rely on Account administrators for managing their project member invitations. User Impact: Project administrators were unable to track the status of invitations, re-send invitations, or revoke them if needed, leading to delays and dependency on Account administrators. Resolution: In this release, the project Members page has been updated to the project Access page, which now includes an Invitations tab. This new feature allows Project administrators to view all sent member invitations, re-invite members, and delete invitations as necessary. This enhancement provides Project administrators with greater control and efficiency in managing project memberships. |
DS-10858 | In the Access Manager page, members who have joined the Portworx platform with an email containing the word admin cannot be removed. This issue arises because the current deletion process relies on the user's email address, which can trigger certain restrictions and block the deletion action. User Impact: Users who have been invited to the Portworx platform with an email address containing the word admin (for example, xyz-admin@purestorage.com) and accepted the invitation cannot be removed from the Access Manager page. This limitation affects all members irrespective of their assigned role, causing inconvenience in user management and potentially leading to security and access control challenges. Resolution: The deletion process in the IAM API has been modified to use the member's UID instead of their email address. Because the UID is a unique identifier for each member, the deletion action bypasses the restrictions associated with certain keywords in the email address. This approach aligns with the existing methodology used in the invitations delete API, ensuring consistency and reliability in the user management process. |
Known issues (Errata)
Issue Number | Issue Description |
---|---|
DS-10832 | In Teleport version 16.0.4, running on a Kubernetes cluster in AWS (EKS) behind an Application Load Balancer (ALB), the integration with a MiTM proxy server encounters connectivity problems. Specifically, when the Teleport agent routes all traffic through the MiTM proxy server to connect to the Teleport server, the connection is initially established but then gets abruptly dropped. |
DS-11844 | Issue: You can initiate data service deployments in PDS for clusters that are either disconnected or have unhealthy onboarded Portworx components. User Impact: In these scenarios, the data service deployment may fail and enter an Unknown state. Workaround: Before deploying data services, ensure that the target cluster is healthy by referring to the cluster status as described in the cluster information section. |
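The DS-11844 workaround amounts to gating deployments on cluster health. The sketch below shows that gate as a pure filter; the field names (`connected`, `portworx_healthy`) are illustrative assumptions, not the actual PDS API schema.

```python
def deployable_clusters(clusters: list[dict]) -> list[str]:
    """Return names of clusters that are safe deployment targets.

    A cluster qualifies only if it is connected and its onboarded
    Portworx components report healthy (field names are hypothetical).
    """
    return [c["name"] for c in clusters
            if c.get("connected") and c.get("portworx_healthy")]

clusters = [
    {"name": "prod-east", "connected": True,  "portworx_healthy": True},
    {"name": "prod-west", "connected": False, "portworx_healthy": True},
    {"name": "staging",   "connected": True,  "portworx_healthy": False},
]
print(deployable_clusters(clusters))  # only prod-east qualifies
```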
Release 24.09.01
September 02, 2024
This release includes a hotfix that addresses a specific issue to enhance system stability and performance.
Fixes
Issue Number | Issue Description |
---|---|
DS-11452 | If you are using a custom registry with a self-signed certificate to sync PDS charts and platform-agents charts, you might have encountered an error in the bootstrapper logs when attempting to add a target cluster. The error indicated a failure to verify the TLS certificate due to a bug in the code that failed to trust the registry certificate, even when configured correctly. User Impact: This issue would have prevented you from successfully adding target clusters when using a custom registry, causing failed deployments and blocking your setup process. Resolution: The code has been fixed to correctly trust the registry certificate, resolving the issue. |
Known issues (Errata)
Issue Number | Issue Description |
---|---|
PWX-38801 | Issue: When PDS deployments are created or running on a Kubernetes cluster with Portworx Enterprise and Portworx Security enabled, disabling Portworx Security can cause data service deployments and storage pools to become unresponsive. User Impact: You will not be able to disable Portworx Security on Portworx Enterprise in a cluster with active PDS deployments, as it will lead to issues with responsiveness. Workaround: To manage Portworx Security changes on a Portworx Enterprise cluster with PDS deployments, create new backup locations, backup schedules, or cloud credentials, then restore the deployments to different clusters with the appropriate Portworx Security settings. This approach ensures the continuity of your PDS deployments on clusters where Portworx Security has been disabled. |
Release 24.07.01
July 24, 2024
In this release, PDS is integrated as a vital component of the broader Portworx platform, transitioning from a standalone product to an integral part of the Portworx ecosystem. This integration brings enhanced capabilities, streamlined management, and an improved user experience across the platform.
As part of the Portworx platform, PDS version 24.07.01 offers unified access to data services and storage solutions, allowing for more cohesive and efficient management of your data infrastructure.
If you are using PDS version 123.3, access the relevant documentation here.
Key changes from the PDS version 123
User interface update
The user interface has undergone a significant update, transitioning from a dark theme to a light theme. This change aims to enhance readability and provide a more visually appealing experience for users.
Introduction of Projects
A new feature called Projects has been introduced to help you organize and manage your resources more effectively. Projects allow you to group related resources, such as data services, templates, and backup locations, into a single entity. This helps streamline resource management, improves accessibility, and enables more granular control over permissions and access within the platform. With Projects, you can easily monitor and manage the resources associated with specific initiatives or teams, ensuring better organization and efficiency.
Deprecation of ZooKeeper data service
The ZooKeeper data service has been deprecated in this release. Users who rely on ZooKeeper are encouraged to migrate their workloads to alternative services supported by the PDS platform. This change is part of our ongoing efforts to streamline the platform and focus on supporting a more robust and widely-used set of data services.
Refactored APIs
The APIs have been refactored to improve performance, security, and usability. This refactoring includes changes to the structure and behavior of existing APIs, as well as the introduction of new endpoints to support upcoming features. The updated APIs provide a more consistent and efficient interface for integrating with the PDS platform, making it easier for developers to build and maintain applications that leverage our data services. Detailed documentation on the new and updated APIs is available to help users transition smoothly.
New data service versions
The following new versions of data services are now supported:
Data Service | Versions |
---|---|
Cassandra | |
Consul | |
Couchbase | |
Elastic Search | |
Kafka | |
MongoDB Enterprise | |
MS SQL Server | |
MySQL | |
PostgreSQL | |
RabbitMQ | |
Redis | |
Known issues
Issue Number | Issue Description |
---|---|
DS-9272 | Issue: Updates to backup locations, backup schedules, and cloud credentials in target clusters consuming these resources are not supported in this release. However, you can edit these resources in the PDS version 123. User Impact: You will not be able to update backup locations, backup schedules, and cloud credentials across target clusters. This limitation requires you to manually handle changes by deleting and recreating resources with new data. Workaround: To manage changes to backup locations, backup schedules, and cloud credentials, you can create new backup locations, backup schedules, or cloud credentials as needed, then update the deployments to point to these new resources. This ensures the continuity of backup operations without depending on updates. |
DS-9298 | Issue: Metrics for data services are unavailable in this release. User Impact: The unavailability of metrics impacts the ability to monitor and analyze the performance and health of data services, hindering operational oversight and troubleshooting efforts. Workaround: Metrics for data services will be reinstated in a future release. For immediate needs, consider using external monitoring tools if metrics are critical to your operations. |
DS-9443 | Issue: Cross-cluster and same-cluster restores are failing with the error message: failed to create a new service: namespaces not found. This issue occurs when the source namespace is not present on the destination cluster during the restore process. User Impact: When you attempt to perform cross-cluster or same-cluster restores, you will encounter failures if the source namespace does not exist on the destination cluster. This prevents successful restoration of services and data. Workaround: Ensure that the source namespace is created and present on the destination cluster before initiating the restore process. This can be done by manually creating the namespace or by ensuring the namespace is included in any pre-restore setup procedures. |
DS-9653 | Issue: When attempting to delete a template that is associated with an existing deployment, you will encounter the following error message: unable to delete templates as they are currently in use by 1 application resources. This issue arises because PDS incorrectly treats templates as dependent on deployments, preventing their deletion while they are in use. User Impact: You will not be able to delete templates that are currently associated with any deployments. This limitation affects the ability to manage and clean up unused templates, potentially leading to clutter and confusion in the template management interface. Workaround: Manually dissociate or delete the deployments that are using the template before attempting to delete the template itself. |
DS-10068 | Issue: Renaming the target cluster does not fully propagate the new name across all components. For instance, if a cluster was originally named abc and a database was deployed, renaming the cluster afterwards does not update all references to the new cluster name. User Impact: When you rename your target cluster, you may encounter inconsistencies where some components or references still use the old cluster name. This can lead to confusion, difficulty in managing resources, and potential operational issues due to mismatched cluster names. Workaround: After renaming a target cluster, manually verify and update all components and references to ensure they reflect the new cluster name. |
DS-10229 | Issue: Backup locations are being added even when you enter invalid cloud configuration details such as an incorrect bucket name, region, or endpoint. Although these backup locations can be added successfully, any backup operations that attempt to use these invalid locations will fail. User Impact: You may unintentionally add backup locations with incorrect cloud configuration details. This can lead to backup operations failing when these invalid locations are selected, resulting in unsuccessful backups and potential data protection issues. Workaround: You should carefully verify the cloud configuration details (bucket name, region, and endpoint) before adding a backup location. If an invalid backup location has already been added, delete it and re-add the location with the correct configuration details. |
DS-10259 | Issue: Data service deployments become stuck in the Unavailable status if an incorrect cluster issuer is provided when TLS is enabled on the target cluster. In this scenario, the Custom Resource (CR) is observed in the deployed state, but the StatefulSet does not come up in the target cluster. Consequently, data services remain in an Unknown state. User Impact: You will experience data service deployments getting stuck in the Unavailable status, preventing the deployment from progressing. This results in the inability to use the deployed data services, leading to potential disruptions and delays in operations. Workaround: Ensure that the correct cluster issuer value is provided during the data service deployment process when TLS is enabled. Verify the validity of the cluster issuer before initiating the deployment. If a deployment is already stuck, correct the cluster issuer value and reattempt the deployment. |
DS-10282 | Issue: Backup configurations are observed to revert to the APPLIED state after being set to the DELETING state. User Impact: When you attempt to delete a backup schedule, the configuration does not remain deleted and returns to the APPLIED state. This can lead to confusion and potential mismanagement of backup schedules, affecting your backup operations and resource allocation. Workaround: This issue is under investigation and will be resolved in a future release. In the meantime, closely monitor backup configurations after attempting deletion to ensure they maintain the desired state. |
DS-10285 | Issue: Once scheduled, backup schedules cannot be deleted from the UI. However, you can delete backup schedules using the DeleteBackupConfig API or suspend them from the UI. User Impact: When you schedule backups through the UI, you will not be able to delete these schedules via the same interface. This limitation may lead to inconvenience, as you must resort to API calls to manage your backup schedules. Workaround: To delete backup schedules, use the DeleteBackupConfig API. This issue will be addressed in a future release to allow backup schedule deletions directly from the UI. |
DS-10342 | Issue: Unable to delete a backup policy after updating a schedule policy. This happens because when a schedule is updated, the backend creates a new schedule and suspends the existing one. The suspended schedule retains its association with the original backup policy, preventing the deletion of the old backup policy. User Impact: Users are unable to delete obsolete backup policies after updating schedule policies. This can lead to confusion and clutter in the backup policy management interface, as old policies that are no longer in use cannot be removed. Workaround: Manually identify and delete suspended schedules that reference the old backup policies before attempting to delete the backup policies. You can do this by listing all suspended schedules, deleting those that are no longer in use and still reference the old backup policies, and then attempting to delete the backup policies again. |
DS-10550 | Issue: Unable to delete a backup location if it was used in a failed backup operation. This occurs because the backup configuration object retains a reference to the backup location, even if the backup operation fails. This persistent reference prevents the deletion of the backup location and applies to both ad-hoc and scheduled backups. User Impact: You cannot delete backup locations that were used in failed backup operations. This can lead to clutter and confusion in the backup location management interface, as incorrect or unused backup locations cannot be removed. Workaround: Manually remove any backup configurations that reference the backup location before attempting to delete the location. You can do this by identifying the backup configurations that reference the failed backup location, deleting these backup configurations, and then attempting to delete the backup location again. |
DS-11585 | Issue: Volume creation for data services fails when Portworx Security is enabled in PDS. This bug causes the deployment operator to fail to annotate the storage class properly. As a result, the system does not detect that Portworx Security is enabled and does not apply the parameters needed to enable PDS with Portworx Security. When this issue occurs, Portworx displays an error message indicating that tokens are not available. This issue is particularly problematic when Portworx supports CSI and the user selects the in-tree storage option, causing the request to fail. User Impact: Users with Portworx Security enabled may experience failures when attempting to create volumes for data services. The absence of proper storage class annotation leads to issues in authenticating requests, resulting in failed deployments. Workaround: Currently, there is no available workaround for this issue. The problem arises because the system does not properly annotate the storage class when Portworx Security is enabled. Even though guest access is enabled in the CSI provisioner, it only accepts requests with no authentication token: a request with an incorrect token fails, while a request with no token passes. The PDS team is actively working on a fix for this issue, which will be included in the next release. In the meantime, users should be aware of this limitation when working with Portworx Security and data service deployments. |
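The workaround for DS-10229 above is to verify cloud configuration details before adding a backup location. The sketch below shows illustrative client-side pre-flight checks for an S3-style location; the bucket-name and region patterns are simplified approximations of AWS rules, and a real check would also call the cloud provider to confirm the bucket exists.

```python
import re

def validate_s3_location(bucket: str, region: str, endpoint: str) -> list[str]:
    """Collect obvious configuration mistakes before saving a backup location.

    Illustrative pre-flight checks only; patterns are simplified
    approximations of AWS naming rules.
    """
    errors = []
    # S3 bucket names: 3-63 chars, lowercase letters, digits, dots, hyphens.
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", bucket):
        errors.append(f"invalid bucket name: {bucket!r}")
    # Region shaped like us-east-1 (simplified pattern).
    if not re.fullmatch(r"[a-z]{2}-[a-z]+-\d", region):
        errors.append(f"invalid region: {region!r}")
    if not endpoint.startswith("https://"):
        errors.append(f"endpoint must use https: {endpoint!r}")
    return errors

print(validate_s3_location("my-backups", "us-east-1",
                           "https://s3.us-east-1.amazonaws.com"))  # no errors
```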