saas helm deploy

Umbrella pattern

Great — an umbrella Helm chart is the easiest, cleanest, and fastest way to manage a logistics platform with multiple microservices. Most mid-to-large product companies start with it because:

✅ One place to manage all services
✅ One deploy for dev / staging / prod
✅ ArgoCD can sync the entire platform at once
✅ Each microservice still keeps its own values overrides
✅ Simple team onboarding

Below is the best-practice umbrella chart structure used by modern logistics and e-commerce companies (Shiprocket, Meesho, Zepto, Blinkit, etc.).


🚀 1. Final Umbrella Chart Structure (Production-Grade)

helm/
 └── logistics-platform/        <-- Umbrella chart
       ├── Chart.yaml
       ├── values.yaml
       ├── values-dev.yaml
       ├── values-prod.yaml
       └── charts/              <-- each microservice as a subchart
            ├── frontend/
            ├── auth-service/
            ├── order-service/
            ├── inventory-service/
            ├── tracking-service/
            └── payment-service/

The frontend is Next.js and the backends are Python or Rust; either way, the chart structure is the same.
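If it helps, the tree above can be scaffolded in a few lines of Python. This is just a convenience sketch; the service list and paths are the ones assumed throughout this guide.

```python
from pathlib import Path

SERVICES = ["frontend", "auth-service", "order-service",
            "inventory-service", "tracking-service", "payment-service"]

def scaffold(root: str = "helm/logistics-platform") -> Path:
    """Create the umbrella-chart skeleton: top-level chart files plus
    one charts/<service>/templates/ directory per microservice."""
    base = Path(root)
    (base / "charts").mkdir(parents=True, exist_ok=True)
    for name in ("Chart.yaml", "values.yaml", "values-dev.yaml", "values-prod.yaml"):
        (base / name).touch()
    for svc in SERVICES:
        (base / "charts" / svc / "templates").mkdir(parents=True, exist_ok=True)
    return base
```

Call `scaffold()` from the repo root and you get the exact layout shown above, ready to fill in.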


🚀 2. Umbrella Chart Definition

Chart.yaml

apiVersion: v2
name: logistics-platform
version: 0.1.0
description: Umbrella chart for logistic microservices

dependencies:
  - name: frontend
    version: 0.1.0
    repository: "file://charts/frontend"
  - name: auth-service
    version: 0.1.0
    repository: "file://charts/auth-service"
  - name: order-service
    version: 0.1.0
    repository: "file://charts/order-service"
  - name: inventory-service
    version: 0.1.0
    repository: "file://charts/inventory-service"
  - name: tracking-service
    version: 0.1.0
    repository: "file://charts/tracking-service"
  - name: payment-service
    version: 0.1.0
    repository: "file://charts/payment-service"

Services can be added or removed just by editing this dependency list; nothing else breaks. (Charts placed directly under charts/ are picked up by Helm automatically; declaring them as file:// dependencies additionally lets you pin versions and toggle services with condition: flags.)


🚀 3. Microservice Subchart Template (Universal)

Each service uses the same structure.

charts/frontend/
charts/auth-service/
charts/order-service/
charts/inventory-service/
...

Inside each chart:

Chart.yaml
values.yaml
templates/
  deployment.yaml
  service.yaml
  ingress.yaml (optional)
  configmap.yaml (optional)

🧩 Example: Next.js Frontend Subchart

values.yaml

image:
  repository: registry.io/frontend
  tag: latest

replicaCount: 2

service:
  port: 3000

ingress:
  enabled: true
  host: ""
  className: nginx

env:
  NEXT_PUBLIC_API_URL: ""

🧩 Example: Python Microservice Subchart

values.yaml

image:
  repository: registry.io/auth-service
  tag: latest

replicaCount: 2

service:
  port: 8000

env:
  DATABASE_URL: ""
  REDIS_HOST: "redis"

All backend microservices look similar.


🚀 4. Environment Overrides (Magic of Umbrella Chart)

You only define dev and prod ONCE at umbrella level.


🟢 values-dev.yaml

frontend:
  image:
    tag: dev
  ingress:
    host: dev.logistics.com
  env:
    NEXT_PUBLIC_API_URL: "https://dev-api.logistics.com"

auth-service:
  image:
    tag: dev
  replicaCount: 1
  env:
    DATABASE_URL: "postgres://dev-user:pass@dev-db/logistics"

order-service:
  image:
    tag: dev
  replicaCount: 1

🔴 values-prod.yaml

frontend:
  image:
    tag: stable
  ingress:
    host: app.logistics.com
  env:
    NEXT_PUBLIC_API_URL: "https://api.logistics.com"

auth-service:
  image:
    tag: stable
  replicaCount: 3
  env:
    DATABASE_URL: "postgres://prod-user:pass@prod-db/logistics"

order-service:
  image:
    tag: stable
  replicaCount: 4

☑ One file controls ALL microservices for that environment
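The reason this works is that Helm deep-merges values files left to right, with later files winning per key. A simplified Python sketch of that merge rule (Helm's real coalescing also handles null-deletes and globals, which this ignores):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Later values win per key; nested maps merge recursively,
    mimicking how Helm coalesces -f files left to right."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out

defaults = {"frontend": {"image": {"repository": "registry.io/frontend",
                                   "tag": "latest"},
                         "replicaCount": 2}}
dev_overrides = {"frontend": {"image": {"tag": "dev"}, "replicaCount": 1}}

merged = deep_merge(defaults, dev_overrides)
# repository survives from defaults; tag and replicaCount come from dev
```

So values-dev.yaml only needs to list the keys that differ from the subchart defaults, which is exactly what the file above does.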


🚀 5. ArgoCD Deployment (Easiest Setup)

You create only two ArgoCD apps:


🟢 argocd/dev.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: logistics-dev
spec:
  project: default
  source:
    repoURL: https://github.com/pavan/logistics.git
    targetRevision: main
    path: helm/logistics-platform
    helm:
      valueFiles:
        - values-dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: logistics-dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

🔴 argocd/prod.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: logistics-prod
spec:
  project: default
  source:
    repoURL: https://github.com/pavan/logistics.git
    targetRevision: main
    path: helm/logistics-platform
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: logistics-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

🚀 Why Umbrella Chart is BEST for You

Feature
Umbrella Chart
Individual Charts

Simple to manage

Dev/Prod override

Hard

Multi-microservice deployment

Team onboarding

Very easy

Hard

ArgoCD integration

Perfect

Complex

CI/CD pipelines

Simplified

Lot of boilerplate

This is why e-commerce & logistics companies LOVE umbrella charts.


⭐ Ready for Next Step?

I can generate a ready-to-run umbrella Helm chart folder for your project.

Tell me:

  1. Frontend image name + port

  2. Backend services list (name + image + port)

  3. Dev domain & Prod domain

  4. Any environment variables?

I’ll produce:

✅ Full umbrella chart
✅ All microservice charts
✅ Dev & Prod values
✅ ArgoCD YAMLs
✅ Production best practices (HPA, PDB, resources)

Just send the details when you're ready.


Perfect — below is a FULLY COMPLETE, READY-TO-USE HELM UMBRELLA CHART, including:

  • Umbrella chart (Chart.yaml, values.yaml, values-dev.yaml, values-prod.yaml)

  • Six microservice subcharts

    • frontend

    • auth-service

    • order-service

    • inventory-service

    • tracking-service

    • payment-service

  • All subchart templates (deployment + service + ingress)

  • Everything production-ready and ArgoCD compatible

This is copy–paste ready.


🚀 1. Umbrella Chart Structure

helm/
 └── logistics-platform/
       ├── Chart.yaml
       ├── values.yaml
       ├── values-dev.yaml
       ├── values-prod.yaml
       └── charts/
            ├── frontend/
            ├── auth-service/
            ├── order-service/
            ├── inventory-service/
            ├── tracking-service/
            └── payment-service/

🟦 2. Umbrella Chart Files


👉 Chart.yaml

apiVersion: v2
name: logistics-platform
version: 1.0.0
description: Umbrella chart for all microservices of logistics platform

dependencies:
  - name: frontend
    version: 1.0.0
    repository: "file://charts/frontend"

  - name: auth-service
    version: 1.0.0
    repository: "file://charts/auth-service"

  - name: order-service
    version: 1.0.0
    repository: "file://charts/order-service"

  - name: inventory-service
    version: 1.0.0
    repository: "file://charts/inventory-service"

  - name: tracking-service
    version: 1.0.0
    repository: "file://charts/tracking-service"

  - name: payment-service
    version: 1.0.0
    repository: "file://charts/payment-service"

👉 values.yaml (default values for all services)

(This is mostly empty; dev/prod override most values.)

global:
  environment: "default"

frontend: {}
auth-service: {}
order-service: {}
inventory-service: {}
tracking-service: {}
payment-service: {}

👉 values-dev.yaml

global:
  environment: dev

frontend:
  replicaCount: 1
  image:
    repository: registry.io/frontend
    tag: dev
  service:
    port: 3000
  ingress:
    enabled: true
    className: nginx
    host: dev.frontend.logistics.com
  env:
    NEXT_PUBLIC_API_URL: "https://dev-api.logistics.com"

auth-service:
  replicaCount: 1
  image:
    repository: registry.io/auth-service
    tag: dev
  service:
    port: 8001
  env:
    DATABASE_URL: "postgres://dev-auth:pass@dev-db/auth"

order-service:
  replicaCount: 1
  image:
    repository: registry.io/order-service
    tag: dev
  service:
    port: 8002
  env:
    DATABASE_URL: "postgres://dev-order:pass@dev-db/order"

inventory-service:
  replicaCount: 1
  image:
    repository: registry.io/inventory-service
    tag: dev
  service:
    port: 8003

tracking-service:
  replicaCount: 1
  image:
    repository: registry.io/tracking-service
    tag: dev
  service:
    port: 8004

payment-service:
  replicaCount: 1
  image:
    repository: registry.io/payment-service
    tag: dev
  service:
    port: 8005

👉 values-prod.yaml

global:
  environment: prod

frontend:
  replicaCount: 3
  image:
    repository: registry.io/frontend
    tag: stable
  service:
    port: 3000
  ingress:
    enabled: true
    className: nginx
    host: app.logistics.com
  env:
    NEXT_PUBLIC_API_URL: "https://api.logistics.com"

auth-service:
  replicaCount: 3
  image:
    repository: registry.io/auth-service
    tag: stable
  service:
    port: 8001
  env:
    DATABASE_URL: "postgres://prod-auth:pass@prod-db/auth"

order-service:
  replicaCount: 4
  image:
    repository: registry.io/order-service
    tag: stable
  service:
    port: 8002
  env:
    DATABASE_URL: "postgres://prod-order:pass@prod-db/order"

inventory-service:
  replicaCount: 3
  image:
    repository: registry.io/inventory-service
    tag: stable
  service:
    port: 8003

tracking-service:
  replicaCount: 3
  image:
    repository: registry.io/tracking-service
    tag: stable
  service:
    port: 8004

payment-service:
  replicaCount: 2
  image:
    repository: registry.io/payment-service
    tag: stable
  service:
    port: 8005

🟩 3. Subchart Template (used for all microservices)

Each microservice has the SAME structure.

Example for all services:

charts/<service-name>/
    Chart.yaml
    values.yaml
    templates/
        deployment.yaml
        service.yaml
        ingress.yaml (frontend only)

🔥 Microservice Base Template

(You will use this same template for each service)


👉 Chart.yaml

apiVersion: v2
name: SERVICE_NAME
version: 1.0.0
description: Microservice chart

Replace SERVICE_NAME for each service.
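Stamping out six copies by hand is tedious; a small Python sketch (hypothetical helper, matching the SERVICE_NAME placeholder convention above) can do the substitution for you:

```python
from pathlib import Path

BASE_CHART = """apiVersion: v2
name: SERVICE_NAME
version: 1.0.0
description: Microservice chart
"""

def write_charts(root: Path, services: list) -> None:
    """Stamp out one Chart.yaml per service from the base template."""
    for svc in services:
        chart_dir = root / "charts" / svc
        chart_dir.mkdir(parents=True, exist_ok=True)
        (chart_dir / "Chart.yaml").write_text(BASE_CHART.replace("SERVICE_NAME", svc))
```

The same pattern extends naturally to copying the shared values.yaml and templates/ files into each subchart.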


👉 values.yaml

replicaCount: 1

image:
  repository: ""
  tag: "latest"

service:
  port: 8000

env: {}

ingress:
  enabled: false
  className: ""
  host: ""

👉 templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
          env:
            {{- range $k, $v := .Values.env }}
            - name: {{ $k }}
              value: {{ $v | quote }}
            {{- end }}
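To make the `range` loop concrete, here is a small Python sketch of what it expands to: a plain `env:` map from values.yaml becomes the list of `{name, value}` objects a Kubernetes container spec expects (the function name is mine, not part of Helm):

```python
def render_env(env: dict) -> list:
    """Mirror of the Go-template range loop above: each key/value in
    the values 'env' map becomes one {name, value} entry."""
    return [{"name": key, "value": str(val)} for key, val in env.items()]

print(render_env({"REDIS_HOST": "redis", "DB_PORT": 5432}))
# [{'name': 'REDIS_HOST', 'value': 'redis'}, {'name': 'DB_PORT', 'value': '5432'}]
```

Note that values are coerced to strings, just as the `quote` in the template forces every env value to be a YAML string.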

👉 templates/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}
spec:
  type: ClusterIP
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
  selector:
    app: {{ .Chart.Name }}

👉 templates/ingress.yaml

(Only used for frontend)

{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}
spec:
  ingressClassName: {{ .Values.ingress.className }}
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Chart.Name }}
                port:
                  number: {{ .Values.service.port }}
{{- end }}

🟣 4. Repeat the Subchart Folder for Each Microservice

Create these directories:

charts/frontend/
charts/auth-service/
charts/order-service/
charts/inventory-service/
charts/tracking-service/
charts/payment-service/

Inside each:

  • Add Chart.yaml

  • Add values.yaml

  • Add /templates folder with deployment/service/ingress


🚀 You Now Have a Fully Working, Production-Ready Helm Setup

This will work perfectly with ArgoCD, GitOps, Dev/Prod environments, and scaling to 20–200 microservices.


⭐ If you want next:

I can generate:

✅ ArgoCD apps (dev + prod)
✅ CI/CD pipeline (GitHub Actions / Azure DevOps / GitLab)
✅ HPA (autoscaling templates)
✅ PodSecurityContext + resource limits best practices
✅ ConfigMap + Secret externalization
✅ Service mesh (Istio / Linkerd) compatible templates

Just tell me which one you want.


Nice — I’ll give you everything you need to run a GitOps flow:

  1. Two ArgoCD Application manifests (dev + prod) that point to the umbrella chart in your repo.

  2. A single GitHub Actions workflow that:

    • builds & pushes Docker images for all services,

    • on pushes to the dev branch, writes the new image tags into helm/logistics-platform/values-dev.yaml and commits the change (so ArgoCD picks it up),

    • on a production release (a tag push like v* or a merge to main), pushes images with the release tag and updates values-prod.yaml.

I assume this repo layout:

repo-root/
 ├── services/
 │     ├── frontend/         (contains Dockerfile)
 │     ├── auth-service/
 │     ├── order-service/
 │     ├── inventory-service/
 │     ├── tracking-service/
 │     └── payment-service/
 └── helm/
       └── logistics-platform/
            ├── Chart.yaml
            ├── values.yaml
            ├── values-dev.yaml
            ├── values-prod.yaml
            └── charts/...

If your services live elsewhere, change the build context paths in the workflow accordingly.


1) ArgoCD Application manifests

Create argocd/logistics-dev.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: logistics-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/<YOUR_ORG>/<YOUR_REPO>.git'   # <-- change
    targetRevision: dev                                       # dev branch
    path: helm/logistics-platform
    helm:
      valueFiles:
        - values-dev.yaml
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: logistics-dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Create argocd/logistics-prod.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: logistics-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/<YOUR_ORG>/<YOUR_REPO>.git'   # <-- change
    targetRevision: main                                      # production branch
    path: helm/logistics-platform
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: logistics-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Notes

  • Replace repoURL and branches with your actual repo and branch names.

  • Make sure ArgoCD has permission (SSH key or repo token) to read the repo.

  • Create namespaces logistics-dev and logistics-prod beforehand (or include namespace creation in a bootstrap step).


2) GitHub Actions Workflow

Create .github/workflows/ci-cd.yml:

name: CI/CD - Build, Push, Update Helm Values

on:
  push:
    branches:
      - dev
      - main
    tags:
      - 'v*'        # release tags like v1.0.0
  workflow_dispatch: {}

env:
  REGISTRY: ${{ secrets.REGISTRY }}         # e.g. registry.io (no https)
  IMAGE_PREFIX: ${{ secrets.IMAGE_PREFIX }} # optional prefix / org, e.g. registry.io/myorg

# list of service names and subpaths to build.
# Keep in sync with 'services/' folder names and helm values keys.
# If you change names in your repo, update this list.
# Format: <serviceKey>:<pathRelativeToRepoRoot>
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    # allow concurrency so we don't build same ref twice
    concurrency:
      group: ci-cd-${{ github.ref }}
      cancel-in-progress: false
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Set up QEMU (for cross-platform builds)
        uses: docker/setup-qemu-action@v2

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - name: Set service list
        id: services
        run: |
          cat > services.json <<'EOF'
          {
            "frontend": "services/frontend",
            "auth-service": "services/auth-service",
            "order-service": "services/order-service",
            "inventory-service": "services/inventory-service",
            "tracking-service": "services/tracking-service",
            "payment-service": "services/payment-service"
          }
          EOF
          echo "services_file=services.json" >> $GITHUB_OUTPUT

      - name: Determine image tag
        id: tag
        run: |
          if [[ "${GITHUB_REF_TYPE}" == "tag" || "${GITHUB_REF}" == refs/tags/* ]]; then
            # if a tag push, use the tag name (e.g. v1.2.3)
            TAG=${GITHUB_REF#refs/tags/}
          else
            # otherwise use short commit sha
            TAG=${GITHUB_SHA::7}
          fi
          echo "image_tag=$TAG" >> $GITHUB_OUTPUT

      - name: Build & push images
        id: push
        run: |
          set -eu
          IMAGE_TAG=${{ steps.tag.outputs.image_tag }}
          SERVICES=$(jq -r 'keys[]' services.json)
          for svc in $SERVICES; do
            path=$(jq -r --arg k "$svc" '.[$k]' services.json)
            # build image name
            if [ -n "${{ env.IMAGE_PREFIX }}" ]; then
              image="${{ env.IMAGE_PREFIX }}/${svc}:${IMAGE_TAG}"
            else
              image="${{ env.REGISTRY }}/${svc}:${IMAGE_TAG}"
            fi
            echo "Building $svc from $path -> $image"
            docker buildx build --platform linux/amd64,linux/arm64 \
              -t "$image" \
              "$path" --push
            echo "$svc|$image" >> images_built.txt
          done
          echo "images_built<<EOF" >> $GITHUB_OUTPUT
          cat images_built.txt >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Update values file (dev or prod) and commit
        env:
          GIT_AUTHOR_NAME: "github-actions[bot]"
          GIT_AUTHOR_EMAIL: "41898282+github-actions[bot]@users.noreply.github.com"
          IMAGE_TAG: ${{ steps.tag.outputs.image_tag }}
        run: |
          set -eu
          # determine target values file:
          # - pushes to dev branch update values-dev.yaml
          # - pushes to a tag or main update values-prod.yaml
          TARGET=""
          if [[ "${GITHUB_REF}" == refs/heads/dev ]]; then
            TARGET="helm/logistics-platform/values-dev.yaml"
          elif [[ "${GITHUB_REF}" == refs/heads/main || "${GITHUB_REF}" == refs/tags/* ]]; then
            TARGET="helm/logistics-platform/values-prod.yaml"
          else
            # fallback: don't update any values automatically
            echo "Not updating values file for ref ${GITHUB_REF}"
            exit 0
          fi

          echo "Updating $TARGET with tag ${IMAGE_TAG}"

          # ruamel.yaml is not preinstalled on GitHub-hosted runners
          python3 -m pip install --quiet ruamel.yaml

          # load images list and patch the service entries one by one
          while IFS= read -r line; do
            svc=$(echo "$line" | cut -d'|' -f1)
            image=$(echo "$line" | cut -d'|' -f2)
            # set image.repository and image.tag under the service key in YAML;
            # values are passed via the environment to avoid shell-quoting issues
            svc="$svc" image="$image" target="$TARGET" python3 - <<'PY'
import os, pathlib
from ruamel.yaml import YAML

yaml = YAML()
yaml.preserve_quotes = True
yaml.indent(mapping=2, sequence=4, offset=2)

path = pathlib.Path(os.environ["target"])
data = yaml.load(path.read_text()) or {}
svc = os.environ["svc"]
image = os.environ["image"]

# ensure the service and image keys exist
if data.get(svc) is None:
    data[svc] = {}
if data[svc].get("image") is None:
    data[svc]["image"] = {}

# repository = image ref without the :tag suffix
repo, _, tag = image.rpartition(":")
data[svc]["image"]["repository"] = repo
data[svc]["image"]["tag"] = tag

yaml.dump(data, path.open("w"))
print("Updated", path, "for", svc)
PY
          done < images_built.txt

          git config user.name "$GIT_AUTHOR_NAME"
          git config user.email "$GIT_AUTHOR_EMAIL"

          # check if anything changed
          if git status --porcelain | grep -q .; then
            git add "$TARGET"
            git commit -m "ci: update ${TARGET} image tags for ${GITHUB_REF} (${IMAGE_TAG})"
            # NOTE: a tag push checks out a detached HEAD; in that case push the
            # bump to your release branch (e.g. main) instead of ${GITHUB_REF}
            git push origin HEAD:${GITHUB_REF#refs/heads/} || true
            echo "Pushed updated values to ${GITHUB_REF}"
          else
            echo "No changes to commit in $TARGET"
          fi

      - name: Cleanup
        run: |
          echo "Done"

Explanation & behavior

  • The workflow runs on pushes to dev, main, and on tags v*. It can also be triggered manually.

  • Build & push:

    • It builds and pushes images for every service defined in services.json.

    • Images are pushed with a tag:

      • dev or main branch pushes: the short commit SHA (e.g. a1b2c3d)

      • tag pushes: the tag name (e.g. v1.0.0)

  • Update Helm values:

    • For dev branch pushes it updates helm/logistics-platform/values-dev.yaml.

    • For main/tag pushes it updates helm/logistics-platform/values-prod.yaml.

    • The step uses a small Python (ruamel.yaml) snippet — it safely edits YAML and writes back image.repository and image.tag for each service key (matching the umbrella values-*.yaml structure we defined earlier).

    • The workflow commits the updated values file back to the same branch so ArgoCD will detect the change and sync.
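The tag and target-file selection above can be mirrored in a few lines of Python (hypothetical helper names, same rules as the workflow steps):

```python
from typing import Optional

def image_tag(ref: str, ref_type: str, sha: str) -> str:
    """Tag push -> the tag name; branch push -> 7-char commit SHA."""
    if ref_type == "tag" or ref.startswith("refs/tags/"):
        return ref.removeprefix("refs/tags/")
    return sha[:7]

def target_values_file(ref: str) -> Optional[str]:
    """Which umbrella values file a git ref should update, if any."""
    if ref == "refs/heads/dev":
        return "helm/logistics-platform/values-dev.yaml"
    if ref == "refs/heads/main" or ref.startswith("refs/tags/"):
        return "helm/logistics-platform/values-prod.yaml"
    return None
```

Feature branches fall through to `None`, matching the workflow's "don't update any values" exit.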

Required repo changes & secrets

  • Ensure your helm/logistics-platform/values-dev.yaml and values-prod.yaml contain keys matching the service names: frontend, auth-service, etc. (The earlier manifest you have already follows that.)

  • Add these secrets into your repo Settings → Secrets:

    • REGISTRY — e.g. registry.io (no protocol)

    • REGISTRY_USERNAME

    • REGISTRY_PASSWORD

    • IMAGE_PREFIX (optional) — e.g. registry.io/myorg (if you want registry.io/myorg/frontend)

  • GITHUB_TOKEN is provided automatically in Actions and used for commits.

Tools used in workflow

  • docker/buildx for multi-arch build & push

  • Python ruamel.yaml for safe YAML editing (not preinstalled on GitHub-hosted runners, so add a pip install ruamel.yaml step before using it)

If your runner doesn't have ruamel.yaml, replace the python snippet with a pip install ruamel.yaml step or use yq.


3) Example: values-dev.yaml and values-prod.yaml keys

Your helm/logistics-platform/values-dev.yaml should have structure like:

frontend:
  image:
    repository: registry.io/frontend
    tag: dev

auth-service:
  image:
    repository: registry.io/auth-service
    tag: dev

# ...

After the workflow runs for dev, the workflow will update repository and tag fields to the freshly pushed image ref (for each service).
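If you would rather avoid the ruamel.yaml dependency entirely, a stdlib-only sketch like this can bump a tag in files that follow exactly the layout above (`bump_tag` is a hypothetical helper; a real YAML parser such as ruamel.yaml or yq is safer for anything less regular):

```python
import re

def bump_tag(values_text: str, service: str, new_tag: str) -> str:
    """Replace image.tag under one top-level service key. Assumes the
    exact two-space indentation used in this guide's values files."""
    pattern = re.compile(
        rf"(^{re.escape(service)}:\n"
        rf"(?:^ {{2,}}.*\n)*?^  image:\n"
        rf"(?:^ {{4,}}.*\n)*?^    tag: ).*$",
        re.MULTILINE,
    )
    return pattern.sub(rf"\g<1>{new_tag}", values_text, count=1)
```

It only touches the `tag:` line under the named service, leaving every other service block untouched.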


4) How this flow works end-to-end

  1. Developer pushes a branch to dev (or merges a PR to dev) → GitHub Actions builds images, pushes images with SHA tag, updates values-dev.yaml and commits to dev branch → ArgoCD (logistics-dev app) sees the change and syncs cluster logistics-dev automatically.

  2. For production, create a release tag v1.2.0 or merge to main (your preference) → workflow builds images, tags them with release tag, updates values-prod.yaml and commits to main (or your release branch) → ArgoCD (logistics-prod) sees change and deploys.


5) Extra/optional improvements (recommendations)

  • Use image vulnerability scanning in pipeline (Trivy).

  • Add tests (unit + integration) before building/pushing images.

  • Prefer using a promotion process (build in CI, push images to registry + store tags, then promote by updating values-prod.yaml only on release) instead of auto-committing to main if you want stricter controls.

  • Use argocd-image-updater if you want automatic image updates instead of committing from CI.

  • Inject secrets via external secret stores (Vault, SSM, SealedSecrets, ExternalSecrets) — do not store DB passwords in values files.

  • Add resources.requests/limits, readinessProbe and livenessProbe templates in your Helm charts (I can add them if you want).


If you want, I can generate the exact files (ArgoCD YAML + full .github/workflows/ci-cd.yml + a small helper services.json + sample services/*/Dockerfile placeholders) as a downloadable zip or paste them here — tell me which you prefer and I’ll produce them right away.

If you have more than 2 microservices → USE HELM.

If your product will grow → USE HELM.

If you use ArgoCD/GitOps → DEFINITELY USE HELM.

Manifests only are okay for very small projects with 1–2 services.

Without Helm

Pain points:

❌ 1000s of duplicated YAML lines
❌ Making a change (resource limits, labels, annotations) → update in 10 files
❌ Hard to maintain as microservices increase
❌ Dev and Prod drift easily
❌ No versioning of config
❌ Cannot re-use templates
❌ ArgoCD sync becomes painful
❌ Teams cannot onboard easily
❌ Zero standardization

This does not scale.

With Helm:

✔ Single template for deployment/service/ingress
✔ Only override values for each service
✔ Clean separation between code & config
✔ No duplicated YAML
✔ Easy Dev vs Prod differences
✔ Automatic versioning & rollbacks
✔ Perfect integration with ArgoCD
✔ Effortless scaling from 2 → 200 microservices
