Kubernetes

Traditionally, applications were deployed as monolithic systems on virtual machines: a single large application ran on one VM, making scaling slow, updates risky, and resource utilization inefficient. As systems grew, teams moved toward microservices and containers to improve portability and independent scaling, but manual container management introduced new challenges such as inconsistent deployments, downtime, lack of health checks, and poor scalability. Kubernetes addresses these problems by providing an automated, declarative platform for deploying and managing containerized workloads, offering self-healing, auto-scaling, rolling updates, service discovery, and efficient resource management. As a result, organizations achieve higher availability, zero-downtime releases, better performance, faster development cycles, and a scalable foundation that works seamlessly across cloud, on-prem, and hybrid environments.

Control Plane

The Kubernetes Control Plane is the core of the cluster, responsible for managing the cluster’s overall state and ensuring that the desired state of resources is maintained. It orchestrates all operations in the cluster, from scheduling workloads to scaling applications and managing networking.

  1. API Server: Serves as the entry point for all cluster operations, handling RESTful API requests.

It is the gateway for all Kubernetes operations: whether a request comes from kubectl, a kubelet, or a controller, it first hits the API Server.

On EKS, a user must be mapped in the aws-auth ConfigMap to access the cluster.

kubectl -> API Server -> Authentication (IAM) -> Authorization (RBAC)
        -> Admission Controller -> etcd (Store state)
        -> Controller/Scheduler -> Node/Kubelet -> Pod Running
  2. etcd: A distributed key-value store that acts as the single source of truth for the cluster’s state and configuration.

  3. Controller Manager: Manages control loops that monitor the desired state of resources (e.g., Deployments, ReplicaSets) and make adjustments to achieve that state.

  4. Scheduler: Assigns pods to appropriate nodes based on resource requirements and constraints.

Worker Node

A Worker Node is a machine in a Kubernetes cluster that runs the actual application workloads. It hosts the components necessary to execute and manage containers: the kubelet, the container runtime, and kube-proxy.

  1. Kubelet:

  • The primary node agent that communicates with the control plane.

  • Ensures that the containers described in the pod specification are running and healthy.

  • Continuously monitors the pod’s status and reports back to the control plane.

Operations Performed by Kubelet on a Node

The Kubelet is the primary agent running on each node in a Kubernetes cluster. It ensures that the containers are running as expected by communicating with the Kubernetes control plane. Below are the key operations performed by Kubelet on a node:

Pod lifecycle:

  • Registers the node with the API server.

  • Pulls Pod definitions from the API server.

  • Starts containers using the container runtime (Docker, containerd, CRI-O).

  • Monitors Pod health and restarts failed containers.

  • Handles Pod eviction when the node is under resource pressure.

  • Reports Pod status back to the API server.

Container runtime operations:

  • Interacts with the container runtime using the CRI (Container Runtime Interface).

  • Pulls container images when needed.

  • Creates, starts, stops, and deletes containers as per the Pod definition.

  • Mounts volumes into containers based on Persistent Volume Claims (PVCs).

Networking:

  • Assigns IP addresses to Pods via the CNI (Container Network Interface).

  • Sets up networking rules for Pod communication.

  • Handles Service discovery by interacting with CoreDNS.

  • Manages liveness and readiness probes to check container health.

Storage:

  • Attaches Persistent Volumes (PVs) to Pods.

  • Mounts Persistent Volumes using PVCs.

  • Handles storage-related events (e.g., resizing a PVC).

Node health:

  • Monitors node resources (CPU, memory, disk, network).

  • Handles node pressure conditions (e.g., Out of Memory (OOM), disk pressure).

  • Reports node status to the API server.

  • Marks the node as NotReady if it becomes unhealthy.

Security:

  • Authenticates with the API server using TLS certificates.

  • Ensures proper RBAC permissions for accessing resources.

  • Runs containers with the correct security policies (e.g., seccomp, AppArmor, SELinux).

Logging and monitoring:

  • Collects logs from containers and writes them to /var/log/pods/.

  • Exposes metrics via the /metrics endpoint for monitoring systems (Prometheus).

  • Reports logs to centralized logging solutions (if configured).

Termination:

  • Handles Pod termination signals (SIGTERM).

  • Drains the node when it is being removed from the cluster.

  • Waits for Pods to gracefully terminate before shutting down.

You can manually check kubelet logs on a node using journalctl -u kubelet (on systemd-based nodes).

2. Container Runtime:

  • Responsible for pulling container images and running containers.

  • Examples include Docker, containerd, or CRI-O.

3. Kube Proxy:

  • Manages networking for the node, ensuring that services are reachable and load balancing is performed between pods.

  • Implements network rules to allow communication within the cluster.

  • Backend pod selection depends on the kube-proxy mode: iptables mode picks a pod at random, while IPVS mode defaults to round-robin, which cycles through the pods:

  • Request 1 → pod-1

  • Request 2 → pod-2

  • Request 3 → pod-3

  • Request 4 → pod-1 (cycle repeats)

A Pod is the smallest deployable unit in Kubernetes that represents a single instance of a running process in the cluster. It can contain one or more tightly coupled containers with the same network namespace, storage, and specifications.
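
A minimal Pod manifest, as a sketch (the name and nginx image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod            # placeholder name
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25      # placeholder image
      ports:
        - containerPort: 80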

A ReplicaSet in Kubernetes ensures that a specified number of identical pod replicas are running at any given time. It provides self-healing capabilities by maintaining the desired state of pods, replacing any that fail or are deleted unexpectedly.

A Deployment in Kubernetes is a resource that manages stateless applications by maintaining a specified number of replicas of a pod. It provides features like rolling updates, rollbacks, scaling and self-healing, ensuring high availability and declarative updates for applications. Deployments are ideal for managing applications that require consistent uptime and easy version management.
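
A sketch of a Deployment managing three replicas (names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
          ports:
            - containerPort: 80

Changing replicas or the image tag and re-applying the manifest triggers the rolling update behavior described below.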

A stateless system does not retain any client-specific data between requests.

A stateful system maintains session data or state across requests, e.g., a shopping cart that remembers selected items until checkout.

  • Rolling Updates: Incrementally update pods without downtime.

  • Rollbacks: Quickly revert to a previous state if an update fails.

  • Scaling: Dynamically increase or decrease the number of replicas based on workload demands.

  • Self-healing: Automatically replace failed pods to maintain the desired state.

If I can run a database using a Deployment, then why choose a StatefulSet?

You can technically run a database using a Kubernetes Deployment, but you shouldn't in most production use cases. Here's why:

💥 Why Deployments Are Not Suitable for Databases

  • No Stable Identity: Databases like PostgreSQL, MySQL, MongoDB, etc., expect a persistent identity — hostname, disk, etc. With Deployments, a pod might be rescheduled and come back up with a different name.

  • Storage Issues: If you use a Deployment and the pod restarts on a different node, it may get a new empty volume (unless you manage PVCs manually).

  • Clustered Databases: Databases like Cassandra, RabbitMQ, Elasticsearch, etc. need ordered startup, stable peers, and pod identity, which StatefulSet provides.

🔁 Example: PostgreSQL

If you run PostgreSQL using a Deployment:

  • On pod failure or reschedule, the pod name might change.

  • The PVC may not follow, and data loss is possible if not carefully configured.

If you use a StatefulSet:

  • Pod name postgres-0 always maps to the same PVC.

  • If postgres-0 restarts, it uses the same disk.

🧠 When is Deployment okay for DB?

Only when:

  • You're running a non-persistent or in-memory database (like Redis in ephemeral mode).

  • You're using a managed database externally (RDS, Cloud SQL), and the container is just a client.

  • It's just for testing or development environments.

Use StatefulSet for:

  • Databases

  • Queues (Kafka, RabbitMQ)

  • Clusters needing identity and storage

Both a Deployment and a StatefulSet need a PersistentVolumeClaim (PVC) for persistent data, but how they handle PVCs is very different.

✅ With Deployment + PVC:

You must:

  • Manually create a PVC (or template it with a higher-level tool like Helm).

  • Mount it in your Deployment definition.

  • Ensure it's ReadWriteOnce (RWO) and used by one pod only — or you'll get attach errors.

But:

  • If the pod is deleted and recreated, it may not get the same volume, unless you manage it tightly.

  • If you scale replicas >1, all pods share the same volume, which is dangerous for DBs.

✅ With StatefulSet:

You define a volumeClaimTemplates section, and Kubernetes does the rest:

K8s creates one PVC per pod, so each pod gets its own dedicated, persistent volume, e.g.:

  • db-0 → data-db-0

  • db-1 → data-db-1

Even if pods are deleted, their volumes remain — perfect for databases
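
A sketch of a StatefulSet with volumeClaimTemplates that produces exactly the data-db-0 / data-db-1 claims above (the postgres image and storage size are assumptions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                      # headless Service that gives pods stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16           # assumed image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data                     # PVCs become data-db-0, data-db-1, ...
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi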

A DaemonSet in Kubernetes ensures that a specific pod runs on all nodes in a cluster. It is used to deploy services or components that need to run on every node to provide functionalities like logging, monitoring, or networking.

A StatefulSet in Kubernetes is a resource used to manage stateful applications that require stable, unique network identities and persistent storage. Unlike a Deployment or ReplicaSet, a StatefulSet ensures the pods are created and scaled in a predictable order, making it ideal for applications like databases, distributed systems, and queues.

  • Stateful: Maintains client specific data or session state between requests (e.g., databases).

  • Stateless: Does not retain any state; each request is independent (e.g., REST APIs).

  • Stable Network Identity: Each pod has a predictable name (e.g., app-0, app-1).

  • Persistent Storage: Each pod gets its own persistent volume, which is retained even after the pod is deleted. Kubelet on the node attaches the volume to the Pod.

  • Ordered Scaling and Updates: Pods are created, deleted, and updated in a defined sequence.

  • Resilience: Ensures data consistency and availability in stateful workloads.

  • Note: a StatefulSet does not give a Pod a stable IP address; it provides a stable network identity (a unique pod name and DNS record).

A ConfigMap in Kubernetes is an API object that allows you to store configuration data as key-value pairs. It decouples configuration settings from application code, enabling you to manage environment-specific configurations without modifying the application. ConfigMaps are used to inject configuration data into containers as environment variables, command-line arguments, or configuration files.

E.g., Superset configuration.
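
A minimal ConfigMap sketch (the name and keys are placeholders, not the actual Superset configuration); it can be injected with envFrom or mounted as a volume:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config             # placeholder name
data:
  LOG_LEVEL: "info"            # placeholder keys and values
  APP_MODE: "production"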

A Kubernetes Secret is an object used to store sensitive information, such as passwords, OAuth tokens, SSH keys, or certificates. It helps manage confidential data securely by avoiding hardcoding sensitive values in application code or configuration files. Secrets are stored in base64-encoded format and can be accessed by pods and applications in a secure way.

A StorageClass in Kubernetes defines the parameters for dynamic storage provisioning. It acts as a blueprint for creating Persistent Volumes (PVs) based on the specified storage backend, such as AWS EBS, GCP Persistent Disks

  • Provisioner – Specifies the storage backend (e.g., kubernetes.io/aws-ebs).

  • Reclaim Policy – Defines what happens after a PersistentVolume is deleted (Retain, Delete, Recycle).

  • Volume Binding Mode – Controls when volumes are provisioned (Immediate, WaitForFirstConsumer).

  • Parameters – Backend-specific settings like storage type, IOPS, encryption, etc.

  • Mount Options – Extra options for mounting the volume (e.g., rw, noatime).

A Persistent Volume (PV) in Kubernetes is a storage resource that exists independently of pods and provides a way to store data persistently. Unlike ephemeral (Temporary) pod storage, a PV allows data to persist even if the pod is deleted or restarted

A Persistent Volume Claim (PVC) in Kubernetes is a request for storage by a user. It acts as a bridge between pods and Persistent Volumes (PVs), allowing pods to use storage without knowing the details of the underlying infrastructure. A PVC specifies the amount of storage and access modes needed

Access Modes in Kubernetes

Kubernetes defines three main PersistentVolume (PV) access modes:

  • ReadWriteOnce (RWO):

    • Volume can be mounted as read-write by a single node.

    • Multiple pods on the same node can share it, but not across nodes.

    • ✅ Best for most workloads (databases, Superset, ClickHouse, Prometheus, etc.), typically backed by block storage like EBS.

  • ReadWriteMany (RWX):

    • Volume can be mounted as read-write by multiple nodes at the same time.

    • ✅ Best for shared storage (WordPress, CMS, shared uploads, ML model files).

    • Requires a backend that supports RWX (like EFS).

  • ReadOnlyMany (ROX):

    • Volume can be mounted as read-only by multiple nodes.

    • ✅ Useful for distributing data (e.g., preloaded config, reference datasets).

Example

This YAML file includes:

  • StorageClass – Defines the type of storage (e.g., AWS EBS, GCE PD, local storage).

  • PersistentVolume (PV) – Represents the actual storage (e.g., a 10Gi disk).

  • PersistentVolumeClaim (PVC) – Requests storage from the PV.

  • Pod – Uses the PVC to mount storage.
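
A hedged sketch of those four objects together, using a hostPath-backed static volume so it runs on a single-node test cluster (names are placeholders; on AWS you would point the StorageClass at the EBS CSI provisioner and let the PV be created dynamically):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual-sc                          # placeholder name
provisioner: kubernetes.io/no-provisioner  # static provisioning; use ebs.csi.aws.com for dynamic EBS volumes
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual-sc
  hostPath:
    path: /mnt/data                        # node-local path, for testing only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx:1.25                    # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc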

  • ClusterIP: Exposes the Service internally within the cluster. It is the default Service type and cannot be accessed externally.

  • NodePort: Exposes the Service on each node’s IP at a static port. Useful for debugging or exposing services without a cloud load balancer. (30000 to 32767)

  • LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. Ideal for production workloads requiring external traffic access. (With the AWS Load Balancer Controller in IP target mode, the load balancer routes traffic directly to Pod IPs, so kube-proxy is not involved in that path.)

  • ExternalName: Maps the Service to an external DNS name. It does not create real endpoints but redirects traffic using a CNAME record.
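
A sketch of a Service (placeholder names; switch type to NodePort or LoadBalancer to expose it outside the cluster):

apiVersion: v1
kind: Service
metadata:
  name: web-svc                # placeholder name
spec:
  type: ClusterIP              # default; NodePort / LoadBalancer for external access
  selector:
    app: web                   # matches pod labels
  ports:
    - port: 80                 # Service port
      targetPort: 80           # container port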

An Ingress in Kubernetes is an API object that manages external access to services within a cluster, typically HTTP and HTTPS traffic. Unlike a LoadBalancer service, which exposes an application using an external IP, Ingress provides routing rules to direct traffic to different services based on paths or hostnames. It acts as a Layer 7 (HTTP/HTTPS) load balancer and can provide features such as SSL termination, name-based virtual hosting, and more

An Ingress Controller is a component in Kubernetes responsible for implementing the Ingress resource, processing its rules, and directing traffic accordingly. It acts as a reverse proxy, routing HTTP/HTTPS traffic based on defined Ingress rules. While an Ingress resource defines how traffic should be routed, the Ingress Controller enforces those rules and manages actual traffic flow within the cluster.

Kubernetes does not provide a default Ingress Controller; users need to deploy one such as Nginx, Traefik, HAProxy, AWS ALB, or GCP HTTP(S) Load Balancer based on their infrastructure requirements.
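
A sketch of an Ingress routing a host to a backend Service, assuming an NGINX Ingress Controller is installed (the host and service names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx          # assumes the NGINX Ingress Controller
  rules:
    - host: app.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc      # placeholder Service name
                port:
                  number: 80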

A Headless Service in Kubernetes is used to expose applications without providing a stable ClusterIP. Instead of routing traffic through a service proxy, it directly returns the IP addresses of the backend Pods, allowing clients to communicate with individual Pods directly. This is particularly useful for applications that require direct pod-to-pod communication, such as databases or stateful workloads.

A NetworkPolicy in Kubernetes is used to control the communication between Pods within a cluster. It allows fine-grained control over ingress (incoming) and egress (outgoing) traffic to and from Pods based on labels, IP blocks, namespaces, and ports. NetworkPolicies help enforce security by restricting unauthorized access to applications running inside the cluster.

A Container Network Interface (CNI) is a plugin interface used in Kubernetes to configure networking for containers, e.g., the VPC CNI plugin in EKS.

The Container Runtime Interface (CRI) is a plugin interface in Kubernetes that allows the kubelet to use different container runtimes for managing pods and containers, e.g., Docker, containerd, CRI-O.

The Container Storage Interface (CSI) is a plugin interface that allows Kubernetes to interact with various storage systems, e.g., the EBS CSI driver.

Resource requests and limits define how much CPU and memory a container can use. This ensures fair resource allocation among workloads and prevents any single pod from consuming excessive resources, which could impact other applications running in the cluster.

  • Requests: The minimum amount of resources (CPU/memory) guaranteed to a container. The scheduler uses this value to place the pod on a suitable node.

  • Limits: The maximum amount of resources (CPU/memory) a container can use. If a container exceeds this, Kubernetes restricts it (for CPU) or terminates it (for memory).
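
A container spec sketch with requests and limits (the values are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: nginx:1.25            # placeholder image
      resources:
        requests:
          cpu: "250m"              # guaranteed minimum, used for scheduling
          memory: "256Mi"
        limits:
          cpu: "500m"              # CPU is throttled above this
          memory: "512Mi"          # container is OOMKilled above this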

Node Selector is a simple way to constrain a pod to run on a specific set of nodes. It ensures that your workloads are scheduled on the appropriate nodes based on labels assigned to them.

  • Labels: Key-value pairs assigned to nodes that categorize them based on attributes like hardware type, region, or purpose.

  • nodeSelector: A field in the pod specification that matches node labels to schedule pods on specific nodes.

  • kubectl label nodes <node-name> disktype=ssd

  • Use Node Selector → for quick & simple constraints.

  • Use Node Affinity → when you need complex rules or soft preferences.

  • Use Taints & Tolerations → when you want to keep nodes exclusive and only allow certain pods in.

A taint is a property applied to a node that prevents Pods from being scheduled on it unless they have a matching toleration.

A toleration is a property set in a Pod specification that allows the Pod to be scheduled on a tainted node.

Running GPU workloads only on specialized GPU nodes.
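
A sketch of that pattern, assuming the GPU nodes were tainted with gpu=true:NoSchedule (the taint key/value and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
    - key: "gpu"                   # must match the node taint
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: trainer
      image: my-gpu-app:1.0        # placeholder image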

Node Affinity is a rule that ensures Pods are scheduled on specific nodes based on defined labels.

kubectl label node <node-name> storage=ssd

Use When: You want to schedule Pods on nodes that meet specific conditions.
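
A sketch using node affinity against the storage=ssd label above (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-app
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule
        nodeSelectorTerms:
          - matchExpressions:
              - key: storage
                operator: In
                values:
                  - ssd
  containers:
    - name: app
      image: nginx:1.25            # placeholder image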

Pod Affinity ensures that Pods are scheduled close to other Pods based on label selectors.

You want Pods to be scheduled on the same node (or nearby) as other Pods for better performance.

Placing backend and database on the same node for low-latency communication.

Pod Anti-Affinity prevents Pods from being scheduled on the same nodes or within a failure domain.

Spreading critical apps across different availability zones.

Priority

Higher-priority Pods get scheduled before lower-priority ones. If resources are insufficient, Kubernetes may preempt (evict) lower-priority Pods to make room for higher-priority ones.

Pod priorities are defined using the PriorityClass kind and referenced in the pod specification via priorityClassName.

Preemption (eviction - forcefully removing a running pod from a node to free up resources.) occurs when a high-priority pod cannot be scheduled due to resource constraints, and Kubernetes evicts lower-priority pods to make room for it.
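
A sketch of a PriorityClass and a Pod that uses it (the name and value are placeholders):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 100000                      # higher value = higher priority
globalDefault: false
description: "For critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-app
spec:
  priorityClassName: high-priority
  containers:
    - name: app
      image: nginx:1.25            # placeholder image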

A Role is a namespaced Kubernetes object that defines a set of permissions within a specific namespace. It grants access to resources such as Pods, ConfigMaps, and Deployments based on verbs (actions) like get, list, create, delete, and update.
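
A minimal sketch of such a Role (it matches the description that follows):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]                # "" = core API group
    resources: ["pods"]
    verbs: ["get", "list"]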

This Role named pod-reader allows users to get and list Pods within the default namespace.

A RoleBinding associates a Role with a user, group, or service account, granting the permissions defined in the Role.
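
A sketch binding the pod-reader Role to a hypothetical user named jane:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                     # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io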

A ClusterRole is similar to a Role but applies permissions cluster-wide instead of being restricted to a specific namespace.

A ClusterRoleBinding associates a ClusterRole with a user, group, or service account, granting them the specified permissions at the cluster level.

The manifest is similar to the RoleBinding above; use kind: ClusterRoleBinding and omit the namespace.

A Service Account in Kubernetes is used to authenticate Pods and provide them with permissions to access the API server securely.

Each Pod runs under a Service Account (Default), which can be assigned specific RBAC (Role-Based Access Control) permissions.

E.g., giving an Ingress Controller permission to provision a cloud load balancer (on EKS, via IAM Roles for Service Accounts):

  • Create an IAM Policy for the ALB Controller

  • Create an IAM Role for ALB Controller

  • Attach the IAM Policy to the Role

  • Create a Kubernetes Service Account annotated with the created Role ARN

  • Add SA to pod spec: serviceAccountName: my-custom-sa
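
A sketch of such a Service Account; on EKS the eks.amazonaws.com/role-arn annotation is what links it to the IAM role (the ARN and namespace here are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-custom-sa
  namespace: kube-system           # assumed namespace for the controller
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/alb-controller-role   # placeholder ARN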

A Horizontal Pod Autoscaler (HPA) is a Kubernetes resource that automatically scales the number of pod replicas in a deployment, replica set, or stateful set based on CPU, memory, or custom metrics

The scaling decision is based on data collected from the Metrics Server
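
A sketch of an HPA scaling a Deployment on CPU utilization (names and thresholds are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment           # placeholder target
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% average CPU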

A Vertical Pod Autoscaler (VPA) automatically adjusts the CPU and memory resource requests/limits of a pod based on real-time usage

Dynamically increases memory limits to prevent pod crashes due to memory starvation.

A Startup Probe in Kubernetes is a type of probe designed to determine if a container within a pod has started successfully. It is useful for applications that have long startup times or require significant initialization before they are ready to handle requests
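
A sketch of a Pod with such a probe; the image is a placeholder, and the endpoint and timing values match the explanation that follows:

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-demo
spec:
  containers:
    - name: app
      image: my-app:1.0            # placeholder image
      ports:
        - containerPort: 8080
      startupProbe:
        httpGet:
          path: /startup
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 10
        timeoutSeconds: 5
        failureThreshold: 5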

Step by step explanation:

  • httpGet → The probe hits the container’s http://<pod-ip>:8080/startup endpoint.

  • initialDelaySeconds: 15 → Kubernetes waits 15 seconds after the container starts before running the first probe.

  • periodSeconds: 10 → The probe runs every 10 seconds.

  • timeoutSeconds: 5 → If /startup does not respond within 5 seconds, the probe fails.

  • failureThreshold: 5 → If the probe fails 5 times in a row, the container is killed and restarted.

If the startup probe fails, Kubernetes kills and restarts the container. If the startup probe succeeds, Kubernetes assumes the container has started properly and then relies on the liveness and readiness probes (if configured) to check whether it stays healthy and whether the pod is ready to handle traffic.

A Liveness Probe in Kubernetes is a diagnostic tool used to check if a pod is still running properly. It helps Kubernetes determine whether a pod is healthy and should continue running. If a pod fails its liveness probe, Kubernetes will automatically restart it to restore functionality

The manifest is the same as for the Startup Probe, using livenessProbe in place of startupProbe.

A Readiness Probe in Kubernetes is a diagnostic tool used to determine if a pod is ready to serve traffic. Unlike the Liveness Probe, which checks if a pod is still running, the readiness probe checks whether the pod is ready to accept requests. If the readiness probe fails, the pod is considered not ready, and Kubernetes will stop sending traffic to it

Probe types: HTTP (httpGet), TCP (tcpSocket), and command (exec) probes.

We create a sample app that exposes a path whose HTTP status code indicates whether it is healthy.

HTTP probes treat any status code from 200 to 399 as success → the app has finished initializing.

Drain Node

Draining a node in Kubernetes means evicting all the running Pods from that node so it can be safely removed from the cluster or undergo maintenance. The kubectl drain command (typically run with --ignore-daemonsets) ensures that workloads are rescheduled onto other available nodes.

Helm is a package manager for Kubernetes that helps you define, install, and manage applications using reusable, versioned, and configurable templates. It simplifies deploying complex applications by packaging them into Helm charts.

🔒 Pod Disruption Budget (PDB)

  • Purpose → protect app availability during voluntary disruptions (node drain, upgrades).

  • Key fields (mutually exclusive):

    • minAvailable → minimum pods that must stay ready.

    • maxUnavailable → maximum pods that can be disrupted.

  • Checked during evictions (kubectl drain, cluster autoscaler).

  • Not enforced for involuntary failures (node crash, OOM).

  • Inspect with: kubectl describe pdb <name>.
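
A minimal PDB sketch (labels and counts are placeholders):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2                  # or maxUnavailable, but not both
  selector:
    matchLabels:
      app: web                     # placeholder label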

🌐 Network Policy

  • Controls traffic flow between pods, namespaces, and external endpoints.

  • Default = all allowed (no isolation).

  • Needs a CNI that supports it (Calico, Cilium).

  • Key fields:

    • podSelector → which pods this policy applies to.

    • ingress / egress rules → allowed sources/destinations.

    • namespaceSelector → isolate by namespace.

  • Example: allow traffic only from frontend → backend (see the sketch below).
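
A sketch of that frontend → backend rule, assuming the pods carry app: frontend / app: backend labels and the backend listens on port 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend                 # policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend        # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080               # assumed backend port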

🌍 FQDN in Kubernetes

  • Service DNS: <service>.<namespace>.svc.cluster.local

    • Example: nginx.default.svc.cluster.local

  • Pod DNS: <pod-ip>.<namespace>.pod.cluster.local

    • Example: 10-244-1-5.default.pod.cluster.local

  • Headless Service + StatefulSet: stable DNS per pod.

    • Example: mysql-0.mysql.default.svc.cluster.local

⚠️ Pod & Node Lifecycle / Common Errors

  • CrashLoopBackOff → app crashes repeatedly.

  • ImagePullBackOff / ErrImagePull → bad image/tag or registry auth error.

  • CreateContainerConfigError → config/secret/volume missing.

  • CreateContainerError → runtime/containerd failure.

  • OOMKilled → memory exceeded.

  • Completed → job finished successfully.

  • Evicted → pod removed due to node resource pressure.

  • Pending → unschedulable (no suitable node, taints, resources).

🛠️ Job & CronJob

  • Job → run once until completion (batch processing, db migration).

  • CronJob → schedule jobs like cron (*/5 * * * *).

  • Key fields: schedule, jobTemplate, restartPolicy.

  • Common restart policies: Never, OnFailure.
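
A CronJob sketch running every 5 minutes (the image and command are placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: busybox:1.36
              command: ["sh", "-c", "echo cleaning up"]   # placeholder command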

📦 Volume Types (beyond PV/PVC)

  • emptyDir → ephemeral, tied to pod lifecycle.

  • hostPath → mounts node filesystem (node-specific).

  • configMap → inject config as file/env.

  • secret → inject sensitive data (Base64 encoded).

  • projected → combine sources (configMap + secret + downwardAPI).

  • downwardAPI → expose pod metadata (name, labels) inside container.

  • Ephemeral volume → temporary storage, vanishes when pod ends.

🔐 Policy & Governance

  • OPA Gatekeeper / Kyverno → enforce org rules.

    • Example: block pods without resource limits.

  • Admission Webhooks:

    • Validating → reject invalid requests.

    • Mutating → inject/modify objects (e.g., add sidecar).

  • CRDs (Custom Resource Definitions) → extend API with new resource kinds (e.g., KafkaTopic).

  • Operators → controllers that automate app lifecycle using CRDs (e.g., DB backups, upgrades).

  • Admission Controllers (built-in) → enforce defaults/quotas (e.g., NamespaceLifecycle, LimitRanger, ResourceQuota).

🕸️ Service Mesh

  • Purpose → manage service-to-service traffic, security, and observability.

  • Core features:

    • Traffic control: retries, failovers, canaries.

    • Security: mTLS between services.

    • Observability: metrics, tracing, logging.

  • Popular tools: Istio, Linkerd, Consul.

  • Architecture:

    • Data plane → sidecar proxies (e.g., Envoy) inside each pod.

    • Control plane → config, policies, certificates, routing.

  • Use cases:

    • Zero-trust networking.

    • Canary/blue-green deployments.

    • Detailed telemetry per request.

Container types:

  • Main container → Runs the primary application or service. Lifecycle: runs as long as the Pod is alive. Examples: web server (nginx), app backend (Node.js).

  • Init container → Prepares the environment before main containers start (setup, config, migrations). Lifecycle: runs once and must complete successfully before the others start. Examples: wait for DB, copy config files.

  • Sidecar container → Provides supporting functionality to the main container. Lifecycle: runs alongside the main container(s). Examples: logging agent, Istio Envoy proxy, metrics collector.

  • Ephemeral container → Used for debugging a running Pod (temporary). Lifecycle: runs on demand, no restart policy. Example: kubectl debug troubleshooting container.

Pod statuses:

  • Running → Pod is running and at least one container is up.

  • Pending → Pod is accepted but not scheduled to a node (or waiting for image pull/volume).

  • Succeeded → Pod finished successfully (all containers exited with 0).

  • Failed → Pod finished but at least one container exited with a non-zero code.

  • Unknown → State can’t be determined (node communication issue).

  • CrashLoopBackOff → Container repeatedly crashes and restarts.

  • ImagePullBackOff → Failed to pull the image (wrong image, auth issue, registry problem).

  • ErrImagePull → Error pulling the container image.

  • CreateContainerConfigError → Invalid pod spec (bad config/secret/env).

  • RunContainerError → Pod scheduled, but the container failed to start.

  • OOMKilled → Container killed because it used more memory than allowed.

  • Completed → Same as Succeeded, mostly for Jobs.
