Upgrade EKS

Upgrading an EKS Cluster (Real-Time Scenario)

Upgrading an AWS EKS Cluster is a critical task to keep your cluster secure, performant, and compatible with new Kubernetes features.


πŸ›  Step 1: Pre-Upgrade Checklist

πŸ”Ή Check the current version of your cluster:

aws eks describe-cluster --name my-cluster --query cluster.version

πŸ”Ή Check the EKS platform version:

aws eks describe-cluster --name my-cluster --query cluster.platformVersion

πŸ”Ή Verify worker nodes version:

kubectl get nodes

πŸ”Ή Check deprecated APIs (important to avoid issues):

kubectl get apiservices | grep v1beta1

πŸ”Ή Back up the etcd database (self-managed Kubernetes clusters only — AWS manages etcd for EKS, so no manual snapshot is needed there):

etcdctl snapshot save /backup/etcd-snapshot.db

πŸš€ Step 2: Upgrade the EKS Control Plane

Upgrade the control plane using the AWS CLI:
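A minimal sketch with the AWS CLI (cluster name and target version are placeholders — EKS only allows upgrading one minor version at a time):

```shell
# Start the control plane upgrade; this runs asynchronously on the AWS side
aws eks update-cluster-version \
  --name my-cluster \
  --kubernetes-version 1.29
```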

Upgrade the control plane using eksctl:
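The equivalent with eksctl (cluster name and version are placeholders):

```shell
# Upgrade the control plane to the next minor version; --approve applies the change
eksctl upgrade cluster --name my-cluster --version 1.29 --approve
```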

πŸ”Ή Verify the upgrade status:
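For example (cluster name is a placeholder), poll the cluster status until the upgrade completes:

```shell
# Status shows UPDATING during the upgrade and returns to ACTIVE when done
aws eks describe-cluster --name my-cluster --query cluster.status --output text
```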

βœ… Cluster status should be ACTIVE.


πŸ–₯ Step 3: Upgrade Node Groups

Managed Node Groups:
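For managed node groups, either tool triggers a rolling replacement of the nodes (cluster, node group, and version names are placeholders):

```shell
# Option A: eksctl
eksctl upgrade nodegroup --cluster my-cluster --name my-nodegroup --kubernetes-version 1.29

# Option B: AWS CLI — rolls nodes to the latest AMI for the cluster's Kubernetes version
aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-nodegroup
```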

Self-Managed Nodes:

1️⃣ Drain nodes before upgrade

2️⃣ Upgrade the node AMI

3️⃣ Launch new worker nodes with the updated AMI

4️⃣ Reattach nodes to the cluster and uncordon
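The drain and uncordon steps look like this (the node name is a placeholder):

```shell
# Safely evict pods from a node before replacing it
kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets --delete-emptydir-data

# After the replacement node joins and is healthy, allow scheduling again
kubectl uncordon ip-10-0-1-23.ec2.internal
```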

βœ… Ensure all nodes are Ready.


⚠️ Step 4: Fix Common Issues After Upgrade

❌ Issue: API server errors

πŸ”Ή Check logs for API failures

Solution: Update deprecated API resources.
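One way to spot workloads still calling deprecated APIs, assuming you can reach the API server's metrics endpoint:

```shell
# Kubernetes 1.19+ exports a counter for requests hitting deprecated APIs
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
```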


❌ Issue: Some pods stuck in CrashLoopBackOff

πŸ”Ή Check pod logs

Solution: Restart workloads that depend on old API versions.
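For example (pod and deployment names are placeholders):

```shell
# Inspect logs from the previous (crashed) container instance
kubectl logs my-app-pod-abc123 --previous

# Recreate a deployment's pods after updating its manifests to current API versions
kubectl rollout restart deployment my-app
```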


❌ Issue: Nodes not joining after upgrade

πŸ”Ή Check node status

πŸ”Ή Check kubelet logs

Solution: Update kubelet and restart worker nodes.
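A sketch of the diagnostic commands (the node name is a placeholder; the last two run on the worker node itself):

```shell
# Look for NotReady nodes and inspect their conditions
kubectl get nodes
kubectl describe node ip-10-0-1-23.ec2.internal

# On the worker node: check kubelet health, then restart it if needed
sudo journalctl -u kubelet --since "30 min ago"
sudo systemctl restart kubelet
```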


πŸ”„ Step 5: Verify Cluster Post-Upgrade

πŸ”Ή Ensure cluster is on the latest version:

πŸ”Ή Check if workloads are running fine:

πŸ”Ή Confirm DNS resolution inside cluster:

πŸ”Ή Test application connectivity:
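The checks above can be sketched as follows (the test pod image and application URL are placeholders):

```shell
# Confirm the server version matches the upgrade target
kubectl version

# All pods across namespaces should be Running or Completed
kubectl get pods -A

# Verify in-cluster DNS from a throwaway test pod
kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- nslookup kubernetes.default

# Hit an application health endpoint
curl -I https://my-app.example.com/healthz
```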


πŸ“Œ Final Thoughts

βœ”οΈ Control plane upgraded successfully βœ”οΈ Worker nodes upgraded and stable βœ”οΈ Workloads running without issues

πŸš€ Next steps: automate this upgrade workflow with Terraform or GitHub Actions.
