Manage clusters
Managing Kubernetes clusters involves a series of tasks that keep your applications running smoothly and securely within the Kubernetes ecosystem. Here is a guide to managing Kubernetes clusters effectively:
- Create clusters: use your cloud provider's CLI or web console to create clusters, or the relevant tools for on-premises environments.
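As an illustration, creating a cluster on GKE might look like the following sketch; the cluster name and zone are placeholders, and other providers offer equivalent commands (for example `eksctl create cluster` or `az aks create`):

```shell
# Create a three-node GKE cluster (assumes gcloud is installed and authenticated;
# "my-cluster" and the zone are illustrative placeholders)
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-a
```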
- Manage nodes: scale node pools based on resource demands using the cluster autoscaler. GKE and AKS provide automatic scaling, but you can also adjust node pool sizes manually.
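On GKE, for example, a node pool can be resized manually or put under the cluster autoscaler; the cluster, pool, and zone names below are placeholders:

```shell
# Resize a node pool manually to five nodes
gcloud container clusters resize my-cluster \
  --node-pool default-pool --num-nodes 5 --zone us-central1-a

# Or let the cluster autoscaler manage the pool within bounds
gcloud container clusters update my-cluster \
  --enable-autoscaling --min-nodes 1 --max-nodes 10 \
  --node-pool default-pool --zone us-central1-a
```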
- Deploy applications: define Deployments, Services, Ingress resources, and so on in Kubernetes YAML files and apply them. Use Helm to package applications into versioned charts for easier management of complex deployments. Use `kubectl rollout` for controlled updates and its rollback commands to revert faulty deployments.
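A typical deploy-and-rollback cycle, assuming a manifest file and a deployment named `my-app` (both illustrative), looks like this:

```shell
# Apply the manifest, then roll out a new image and watch the rollout
kubectl apply -f deployment.yaml
kubectl set image deployment/my-app my-app=my-app:v2
kubectl rollout status deployment/my-app

# Revert to the previous revision if the rollout misbehaves
kubectl rollout undo deployment/my-app
```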
- Scale applications: scale pods automatically based on CPU or memory usage. Define a Horizontal Pod Autoscaler (HPA) in YAML or create one with `kubectl autoscale`. Use the Vertical Pod Autoscaler (VPA) to adjust pod resource requests and limits dynamically.
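For instance, an HPA for the hypothetical `my-app` deployment can be created imperatively:

```shell
# Keep average CPU utilization around 50%, scaling between 2 and 10 replicas
kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10

# Inspect the autoscaler's current targets and replica count
kubectl get hpa my-app
```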
- Monitoring and logging: use monitoring tools for resource monitoring; some cloud services provide integrated solutions. Use the ELK stack (Elasticsearch, Logstash, Kibana) or the cloud provider's built-in logging solution.
-
Manage security: Define fine-grained access control for users and groups. Create
Role
andClusterRole
resources along withRoleBinding
andClusterRoleBinding
to associate them with users or service accounts. Manage sensitive data using Kubernetes Secrets. Use ConfigMaps for non-sensitive environment configurations. Define rules for pod-level security, such as controlling privileged containers, setting security contexts, and so on. -
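A minimal RBAC sketch: a namespaced `Role` that can read pods, bound to a service account. The names `pod-reader`, `read-pods`, and `app-sa` are illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```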
- Back up your cluster data: regularly back up the `etcd` database (the core of Kubernetes state). Ensure the cluster is set up to handle failover and data restoration.
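On a self-managed control plane, an `etcd` snapshot can be taken with `etcdctl`; the endpoint and certificate paths below are typical kubeadm defaults and may differ in your installation:

```shell
# Save a point-in-time snapshot of etcd to a local file
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```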
- Maintain and upgrade your clusters: regularly upgrade Kubernetes as new stable versions become available. On managed services such as GKE, AKS, and EKS, Kubernetes versions can be upgraded with minimal downtime. Drain nodes before performing maintenance and keep worker nodes updated with security patches.
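Draining a node before maintenance and returning it afterwards follows this pattern (the node name is a placeholder):

```shell
# Evict workloads safely before maintenance
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# ...perform the upgrade or patching on the node...

# Mark the node schedulable again
kubectl uncordon node-1
```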
- Networking and ingress management: use CNI (Container Network Interface) plugins for internal networking. Manage internal and external communication using Kubernetes Services (ClusterIP, NodePort, LoadBalancer). Manage external HTTP/S traffic with Ingress resources and use a load balancer to route traffic to your applications.
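A minimal Ingress sketch that routes HTTP traffic for an illustrative host to a Service; the ingress class, host, and service name are assumptions for the example:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
EOF
```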
- Monitor cluster health: regularly monitor cluster components such as etcd, the API server, the controller-manager, and the scheduler using built-in tools or third-party solutions.
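Quick health checks can be run against the API server itself:

```shell
# Query the API server's readiness endpoint, with per-check detail
kubectl get --raw='/readyz?verbose'

# Confirm that all nodes report Ready
kubectl get nodes
```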
- Troubleshooting: use `kubectl logs`, `kubectl describe`, and `kubectl get events` to debug pods and deployments. Inspect node resource consumption with `kubectl top nodes` or `kubectl describe nodes`.
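A typical debugging flow for a misbehaving pod might look like this (the pod name is a placeholder):

```shell
# Recent cluster events, newest last
kubectl get events --sort-by=.lastTimestamp

# Detailed state and conditions for the suspect pod
kubectl describe pod my-app-7d9f4c

# Logs from the previous (crashed) container instance
kubectl logs my-app-7d9f4c --previous

# Node-level resource pressure (requires metrics-server)
kubectl top nodes
```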
- Networking and service mesh: use service mesh tools such as Istio, Linkerd, or Consul for advanced networking features like traffic shaping, observability, and service-to-service security.
Note the following restrictions when managing clusters in Portworx Backup:
- You cannot create a backup schedule if the cluster is in Delete Pending or Offline state.
- The `DeleteBackup` flag is deprecated in PXB 2.7.3, so you cannot delete a cluster with this flag from PXB 2.7.3 onwards.
- You cannot share backups if the cluster is in Delete Pending state.
- You cannot delete a cluster if it has active backup schedules.
- Unless you are a super admin:
  - You cannot share a cluster if it is queued for deletion.
  - Only the cluster owner can share the cluster with the intended users.
- You cannot unshare a cluster if the restore deletion operation fails for that cluster.
Follow the topics in this section to manage your clusters.
- Connect clusters
- Delete clusters: delete a cluster in Portworx Backup