Scenario and challenge

Here's a mock interview-style discussion for a Docker-to-Kubernetes migration: real-world scenarios, challenges, and solutions you could confidently share in an interview.


🚀 Context

You're telling the interviewer that your team migrated services running as standalone Docker containers (possibly on EC2 or VMs) into Kubernetes.


👤 Interviewer: You mentioned migrating from Docker to Kubernetes. Can you walk me through some real-world challenges you faced during this transition and how you addressed them?


👨‍💻 Pavan: Sure. One of the first challenges we faced was translating Docker Compose configurations into Kubernetes manifests. Our services were defined in docker-compose.yml, with networks, volumes, environment variables, and dependencies.

To migrate, we initially used kompose for conversion, but it generated very verbose and sometimes suboptimal YAML. So I rewrote most of the manifests manually, defining Deployments, Services, ConfigMaps, and PersistentVolumeClaims to better suit Kubernetes-native patterns. This also modularized the configuration and improved long-term maintainability.
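
To illustrate the idea, a minimal sketch of what a single docker-compose "web" service becomes in Kubernetes-native form; the names, image, and ports below are placeholders, not our actual manifests:

```yaml
# Hypothetical translation of a docker-compose "web" service (placeholder names and image).
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  APP_ENV: production            # replaces the compose "environment:" block
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: web-config
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```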


👤 Interviewer: How did you handle service discovery and networking? That tends to be quite different from Docker.


👨‍💻 Pavan: Exactly. In Docker, services communicated over bridge networks with hardcoded hostnames or container names. But in Kubernetes, we shifted to ClusterIP Services and DNS-based service discovery.

The challenge was refactoring legacy services that had hardcoded IPs or hostnames. We solved this by introducing environment variables for service URLs and using Kubernetes' built-in DNS resolution, such as http://service-name.namespace.svc.cluster.local. We also enforced readiness and liveness probes to ensure services didn't crash-loop due to network delays.
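
A minimal sketch of that pattern, assuming a hypothetical orders service calling a payments service in a shop namespace (service names, image, ports, and probe paths are illustrative):

```yaml
# Hypothetical sketch: "orders" reaches "payments" via Kubernetes DNS, injected
# through an env var instead of a hardcoded IP or container name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
  namespace: shop
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0.0   # placeholder image
          env:
            - name: PAYMENTS_URL
              value: http://payments.shop.svc.cluster.local:8080
          readinessProbe:       # keep the pod out of Service endpoints until it responds
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
          livenessProbe:        # restart only if the container stops responding
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 30
```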


👤 Interviewer: How did you manage persistent data, like volumes?


👨‍💻 Pavan: Great question. In Docker, we used local host volumes like /var/lib/mysql. But Kubernetes discourages hostPath for production use, so we moved to PersistentVolumes backed by AWS EBS (we were on EKS).

One key challenge was data migration: we had to ensure no data loss. So we took volume backups, moved them to EBS, and mounted them via PVCs in stateful pods. For services like MySQL, we used StatefulSets and scheduled downtime windows during migration to ensure data consistency.
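
A trimmed sketch of that shape, assuming the AWS EBS CSI driver is installed and a gp3 StorageClass exists on the cluster; the Secret, sizes, and names are hypothetical:

```yaml
# Hypothetical sketch: MySQL as a StatefulSet with an EBS-backed volume claim
# (assumes the AWS EBS CSI driver and a "gp3" StorageClass on the EKS cluster).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials   # hypothetical pre-created Secret
                  key: root-password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql     # same path the old host volume used
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp3
        resources:
          requests:
            storage: 50Gi
```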


👤 Interviewer: How did your team deal with container orchestration concerns like scaling and self-healing?


👨‍💻 Pavan: Previously, we had to manually restart Docker containers or use basic scripts to restart failed processes. With Kubernetes, we took advantage of horizontal pod autoscaling, ReplicaSets, and probes.

One real issue we faced was that some services would take longer to start due to initialization tasks. Without proper readiness probes, they were getting marked as ready too soon and started receiving traffic. We fixed this by implementing accurate readiness probes and startup probes, especially for databases and backend APIs with heavy init logic.
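
As an illustration, the container-level probe fields for a slow-starting backend might look like the sketch below; the paths, port, and thresholds are assumptions, not our actual values:

```yaml
# Hypothetical sketch: probe fields for a container with heavy init logic
# (these sit under spec.template.spec.containers[] in the Deployment).
startupProbe:               # blocks the other probes until init finishes, up to 30 x 10s = 5 min
  httpGet:
    path: /ready
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
readinessProbe:             # traffic is only sent once this passes
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
livenessProbe:              # restarts the container if it later stops responding
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
```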


👤 Interviewer: What about logging and monitoring? Migrating CI from Jenkins to GitHub Actions is one thing, but Docker to K8s affects your whole observability setup.


👨‍💻 Pavan: True, that was another big shift. With Docker, we used basic docker logs and some ELK stack setup. In Kubernetes, we adopted EFK (Elasticsearch, Fluent Bit, Kibana) and later switched to Loki with Grafana for better scalability.

One challenge was log retention and storage costs: initially, we collected everything. We optimized by filtering logs with Fluent Bit, dropping noisy logs, and adjusting retention policies. We also set up Prometheus + Grafana for metrics, using node exporters and application-level exporters for Nginx, PostgreSQL, etc.
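
A hedged sketch of the kind of Fluent Bit filtering meant here, dropping health-check noise before logs are shipped; the ConfigMap name, namespace, and regex are illustrative, not the actual production config:

```yaml
# Hypothetical sketch: a Fluent Bit grep filter that drops health-check noise,
# delivered as a ConfigMap mounted into the Fluent Bit DaemonSet.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-filters
  namespace: logging
data:
  filters.conf: |
    [FILTER]
        Name     grep
        Match    kube.*
        Exclude  log /healthz|/readyz|ELB-HealthChecker
```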


👤 Interviewer: How did you handle deployment strategies like zero-downtime or rollbacks?


👨‍💻 Pavan: Before Kubernetes, deployment was via shell scripts or manually stopping/restarting containers. In Kubernetes, we implemented rolling updates via Deployment strategies.

One tricky situation was a failed release causing cascading pod failures. To address this, we configured proper resource requests/limits and maxUnavailable, and later integrated ArgoCD for GitOps-based deployment with versioned rollbacks. This helped us revert faulty deployments quickly.
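
For illustration, a sketch of the relevant Deployment settings; the app name, image, and numbers are placeholders rather than the real service:

```yaml
# Hypothetical sketch: rolling-update limits plus resource requests/limits so a
# bad release can't take down all replicas or starve the node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # never take down more than one pod during a rollout
      maxSurge: 1
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      containers:
        - name: backend-api
          image: registry.example.com/backend-api:2.3.1   # placeholder image
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```

Before GitOps, a faulty rollout on a Deployment like this could also be reverted to the previous ReplicaSet with kubectl rollout undo deployment/backend-api.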


👤 Interviewer: Any learnings from the team collaboration or onboarding side?


👨‍💻 Pavan: Yes, Kubernetes was a steep learning curve for some team members. So I created internal documentation with diagrams for Pods, Services, Ingress, etc., and conducted weekly hands-on sessions. We also used Helm for packaging deployments, which made it simpler for the team to deploy without writing raw YAML.
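
For example, a values file for a hypothetical internal chart is all a team member would need to edit; the chart, registry, and keys below are illustrative, not the team's actual chart:

```yaml
# Hypothetical values.yaml for an internal "service" Helm chart; team members
# change these fields instead of editing raw Kubernetes manifests.
image:
  repository: registry.example.com/orders
  tag: "1.4.2"
replicaCount: 3
service:
  type: ClusterIP
  port: 80
resources:
  requests:
    cpu: 100m
    memory: 128Mi
ingress:
  enabled: true
  host: orders.example.com
```

With a file like this, deploying is a single helm upgrade --install orders ./charts/service -f orders-values.yaml (the chart path and file name here are hypothetical).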


