3-tier app architecture

Architecture

AWS VPC

├── EKS Cluster
│   ├── Namespace: prod
│   │   ├── Frontend (React App)
│   │   │   └── Service: ClusterIP
│   │   │       └── Exposed via ALB Ingress (Public, HTTPS via ACM)
│   │   │
│   │   ├── APIs (Cart, Orders, Checkout, Inventory, etc.)
│   │   │   ├── Service: ClusterIP
│   │   │   ├── DNS: <service>.<namespace>.svc.cluster.local
│   │   │   └── Routing:
│   │   │       - /cart  → cart-api
│   │   │       - /order → order-api
│   │   │       - /checkout → checkout-api
│   │   │
│   │   └── Service Discovery
│   │       └── CoreDNS resolves *.svc.cluster.local to ClusterIP
│   │
│   ├── Namespace: dev
│   │   ├── Frontend + APIs (same as prod but for testing)
│   │   └── Cloudflare Tunnel Agent (Pod)
│   │       └── Secure developer access to ClusterIP services
│   │
│   └── Ingress Controller
│       └── Routes external traffic → correct ClusterIP services

├── RDS (Private Subnet, Same VPC)
│   ├── Primary Instance (writes)
│   ├── Read Replica (reads)
│   └── Connected via private DNS endpoint
│       - APIs in EKS access RDS directly (VPC internal network)

└── Developer Access
    ├── Bastion Host (EC2 in public subnet)
    │   └── SSH tunnel → RDS (for debugging, never prod direct)
    └── Cloudflare Tunnel → dev namespace (for API/frontend testing)
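
Each API in the tree above sits behind its own ClusterIP Service. A minimal sketch of one of them, assuming the cart-api pods carry an app: cart-api label and listen on 8080 (both assumptions):

apiVersion: v1
kind: Service
metadata:
  name: cart-api              # resolves as cart-api.prod.svc.cluster.local inside the cluster
  namespace: prod
spec:
  type: ClusterIP             # internal-only virtual IP; not reachable from outside the cluster
  selector:
    app: cart-api             # assumed pod label; must match the cart-api Deployment
  ports:
    - port: 8080              # port the Ingress backend below points at
      targetPort: 8080        # container port on the cart-api pods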

Ingress Resource

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  namespace: prod
  annotations:
    # ALB settings
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789012:certificate/abc123
    alb.ingress.kubernetes.io/ssl-redirect: '443'

    # (Optional) Routing optimization
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true

spec:
  rules:
  - host: app.company.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: react-frontend
            port:
              number: 80
      - path: /cart
        pathType: Prefix
        backend:
          service:
            name: cart-api
            port:
              number: 8080
      - path: /order
        pathType: Prefix
        backend:
          service:
            name: order-api
            port:
              number: 8080
      - path: /checkout
        pathType: Prefix
        backend:
          service:
            name: checkout-api
            port:
              number: 8080

Or via service discovery (note: this only works from code running inside the cluster, not from the browser):

fetch("http://orders-api.default.svc.cluster.local:8080/api/v1/orders")

If your React frontend is exposed via ALB to the internet (what we built in the Ingress YAML):

  • End-user browsers cannot resolve *.svc.cluster.local; that is private DNS, available only inside Kubernetes.

  • Browsers must call something public, like https://app.company.com/cart.

  • The Ingress Resource handles this routing:

    • /cart → forwards to cart-api ClusterIP service

  • In this case, your React code just calls relative paths:

fetch("/cart/api/v1/items")

Traffic flow

Let’s walk through the full request flow for this architecture, from the moment you open your browser and type https://app.company.com all the way down to the right pod in Kubernetes, touching each component along the way.


🚦 Step-by-step traffic flow

1. Browser → DNS

  • You type https://app.company.com in your browser.

  • Your browser (via your local DNS resolver/ISP) queries DNS.

  • app.company.com is an alias (CNAME) that points to the AWS ALB DNS name (something long and generated like k8s-prod-alb-123456.elb.amazonaws.com).

So before any traffic reaches AWS, DNS sends your browser to the ALB endpoint.
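
If the zone is hosted in Route 53, that alias can be created as a plain record. A sketch in CloudFormation, where the hosted zone and the ALB DNS value are placeholders:

Resources:
  AppAliasRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: company.com.
      Name: app.company.com.
      Type: CNAME
      TTL: "300"
      ResourceRecords:
        - k8s-prod-alb-123456.elb.amazonaws.com   # the ALB's generated DNS name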


2. Client connection → AWS ALB

  • Browser initiates HTTPS connection to the ALB.

  • ALB presents the TLS certificate from ACM (SSL termination).

  • If client accepts → handshake done, secure channel established.

The ALB is now your front door.


3. ALB → Ingress Routing

  • The ALB is controlled by the AWS Load Balancer Controller (running in your cluster).

  • Ingress rules (your YAML) are read by the controller, and it configures the ALB’s routing accordingly.

  • Incoming request inspected by ALB:

    • Path / → goes to react-frontend Service.

    • Path /cart → goes to cart-api Service.

    • /order → order-api.

    • /checkout → checkout-api.

Here’s something important:

  • The ALB load balances across Kubernetes Service endpoints using target-type: ip.

  • That means the ALB talks directly to pod IPs (not NodePorts), because the AWS Load Balancer Controller registers the Pods’ IPs in target groups.

So the ALB ultimately pushes traffic straight to Pod IPs (traffic would only pass through kube-proxy in instance mode; here it is direct).
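
Under the hood, the controller records this pod-level registration as a TargetGroupBinding per backend Service. Roughly what the generated object for cart-api might look like (the name and target group ARN below are placeholders):

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: cart-api-tgb                  # created by the controller; name is illustrative
  namespace: prod
spec:
  targetType: ip                      # register pod IPs, not node instances
  serviceRef:
    name: cart-api                    # the ClusterIP Service whose endpoints get registered
    port: 8080
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/cart-api/0123456789abcdef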


4. Service abstraction in Kubernetes

  • Each backend is a ClusterIP Service.

  • The Service’s job: maintain a list of healthy pod endpoints (via Endpoints or EndpointSlice objects).

  • Normally, Services abstract away pod IPs and kube-proxy load balances internally.

  • But since alb.ingress.kubernetes.io/target-type: ip is set:

    • ALB bypasses the Service’s ClusterIP.

    • Instead, the controller registers pod IPs directly in the target group.

    • Health checks from the ALB also probe the pod directly.

This is more efficient — fewer hops.
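
For reference, the EndpointSlice the controller watches for cart-api looks roughly like this (names and IPs are illustrative; the two pod IPs reappear in the example flow below):

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: cart-api-abc12                      # generated name; illustrative
  namespace: prod
  labels:
    kubernetes.io/service-name: cart-api    # ties the slice to the cart-api Service
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 8080
endpoints:
  - addresses:
      - 10.244.5.18                         # pod IP; registered as an ALB target
    conditions:
      ready: true
  - addresses:
      - 10.244.7.22
    conditions:
      ready: true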


5. Pod response back

  • Once the traffic hits the right Pod (say, react-frontend-pod-1234), the Pod processes the request.

    • If React frontend → returns static files (HTML/JS/CSS).

    • If API request /cart → the cart-api pod processes logic, maybe queries RDS for data.

  • The Pod sends the response back over the same node network stack.

  • Packet goes back to ALB.

  • ALB sends it back to your client.


🧭 Quick Example of End-to-End Flow

Let’s say you load the shopping cart page:

  1. Browser: https://app.company.com/cart

  2. DNS → resolves app.company.com → ALB DNS name.

  3. Browser connects → HTTPS with ACM cert on ALB.

  4. ALB rule matches /cart → forwards request to cart-api target group.

  5. Target group has backend pod IPs like 10.244.5.18:8080, 10.244.7.22:8080.

  6. The ALB round-robins to one of them → suppose it lands on cart-api-pod-2.

  7. That pod queries RDS (private endpoint) to fetch cart items.

  8. Pod responds → flow goes Pod → ALB → Browser.

  9. You see your cart items.


💡 Who handles load balancing where?

Here’s where balancing magic happens at each layer:

  • ALB: External load balancer, HTTP aware, terminates SSL, routes based on path/host, distributes traffic across Pods.

  • Kubernetes Service: Normally would handle Pod balancing internally (via kube-proxy or iptables). But here, since ALB is in target-type: ip mode, ALB directly balances across Pods instead.

  • EKS Nodes: Provide the network, but don’t really “load balance” in this setup.

  • RDS: For DB traffic, Amazon manages its replication and load balancing for read replicas.


🕺 Simplified analogy

Imagine a nightclub (your app):

  • DNS = the address you tell your taxi driver.

  • ALB = the bouncer — checks ID, knows where each group of friends should go (dance floor = frontend, bar = cart-api, VIP lounge = checkout-api).

  • Service = the head usher’s map of which tables are taken (Pod IPs).

  • Pod = the bartender or DJ actually serving you.

  • RDS = the storeroom in the back with all ingredients.

You walk in, bouncer points you to the right spot, staff do the work, and you get your drink (HTTP response) back. Efficient, secure, and everyone stays in their lane.


✅ So to sum up:

  • Load Balancing is done by the ALB at the pod level.

  • Traffic flows Browser → DNS → ALB → Pod IP (skipping Service proxying, thanks to target-type: ip).

  • RDS access stays private inside VPC, directly from Pods.

That’s the clean, scalable flow you’ve designed.

Frontend is exposed via the Ingress controller and load balancer.

APIs connect to each other and to the web app via ClusterIP services.

Developers access test APIs through a Cloudflare Tunnel, reachable only via WARP.

PostgreSQL database runs on RDS, in the same VPC as EKS, with NAT access.

Inside the Cluster (API-to-API communication)

  • In Kubernetes (EKS), services usually talk to each other using ClusterIP Services.

  • Example:

    • react-frontend talks to order-api at http://order-api.prod.svc.cluster.local:8080.

    • order-api talks to inventory-api the same way.

  • This is internal-only, secured inside the VPC — no internet exposure needed.

What to Expose Publicly

  • Normally, only the frontend (React app) needs to be exposed externally.

  • APIs stay private and only accessible inside the cluster.

  • You can use:

    • Ingress Controller (NGINX, ALB Ingress Controller, or Istio Gateway)

    • or a Cloudflare Tunnel (safer, avoids direct public exposure).
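
A minimal sketch of what the tunnel agent’s configuration in the dev namespace might look like, assuming hypothetical dev hostnames with Cloudflare Access in front of them:

# cloudflared config.yaml mounted into the tunnel agent pod in the dev namespace
tunnel: 6ff42ae2-0000-0000-0000-000000000000   # placeholder tunnel ID
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  - hostname: cart-api.dev.company.com         # protected by Cloudflare Access (SSO, MFA)
    service: http://cart-api.dev.svc.cluster.local:8080
  - hostname: app.dev.company.com
    service: http://react-frontend.dev.svc.cluster.local:80
  - service: http_status:404                   # catch-all: reject anything unmatched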

Security best practices

  • APIs in prod are never exposed directly.

  • Cloudflare Access (SSO, MFA) for developer testing.

  • Limit exposure to staging namespaces, not prod.

How APIs Connect to RDS

  • Since your RDS is in a private subnet, it is only reachable inside the VPC.

  • Your EKS worker nodes are also in private subnets of the same VPC → they can directly connect.
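
As a sketch, an API Deployment can receive the private RDS endpoints as environment variables; the image, endpoint values, and Secret name below are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart-api
  namespace: prod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cart-api
  template:
    metadata:
      labels:
        app: cart-api
    spec:
      containers:
        - name: cart-api
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/cart-api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOST              # RDS primary endpoint (writes), private DNS inside the VPC
              value: appdb.abc123xyz456.us-east-1.rds.amazonaws.com
            - name: DB_READ_HOST         # read replica endpoint (reads)
              value: appdb-replica.abc123xyz456.us-east-1.rds.amazonaws.com
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cart-db-credentials   # assumed Secret holding the DB password
                  key: password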

Developers connect to the database via the bastion host.

Namespaces for Environment Isolation

  • In EKS, you can have:

    • dev namespace → for developers to deploy and test their APIs.

    • stage namespace → for staging/pre-prod workloads.

    • prod namespace → for production workloads.
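
A minimal sketch of those namespaces as manifests:

apiVersion: v1
kind: Namespace
metadata:
  name: dev        # developers deploy and test APIs here (reached via Cloudflare Tunnel)
---
apiVersion: v1
kind: Namespace
metadata:
  name: stage      # staging / pre-prod workloads
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod       # production workloads behind the public ALB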
