3-Tier App Architecture
Architecture
```
AWS VPC
│
├── EKS Cluster
│   ├── Namespace: prod
│   │   ├── Frontend (React App)
│   │   │   ├── Service: ClusterIP
│   │   │   └── Exposed via ALB Ingress (public, HTTPS via ACM)
│   │   │
│   │   ├── APIs (Cart, Orders, Checkout, Inventory, etc.)
│   │   │   ├── Service: ClusterIP
│   │   │   ├── DNS: <service>.<namespace>.svc.cluster.local
│   │   │   └── Routing:
│   │   │       ├── /cart     → cart-api
│   │   │       ├── /order    → order-api
│   │   │       └── /checkout → checkout-api
│   │   │
│   │   └── Service Discovery
│   │       └── CoreDNS resolves *.svc.cluster.local to ClusterIPs
│   │
│   ├── Namespace: dev
│   │   ├── Frontend + APIs (same as prod, but for testing)
│   │   └── Cloudflare Tunnel agent (Pod)
│   │       └── Secure developer access to ClusterIP services
│   │
│   └── Ingress Controller
│       └── Routes external traffic → correct ClusterIP services
│
├── RDS (private subnet, same VPC)
│   ├── Primary instance (writes)
│   ├── Read replica (reads)
│   └── Connected via private DNS endpoint
│       └── APIs in EKS access RDS directly (VPC-internal network)
│
└── Developer Access
    ├── Bastion host (EC2 in public subnet)
    │   └── SSH tunnel → RDS (for debugging; never direct to prod)
    └── Cloudflare Tunnel → dev namespace (for API/frontend testing)
```
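The dev-namespace access path relies on the Cloudflare Tunnel agent running as a pod. As a rough sketch (hostnames, service targets, and the tunnel ID are assumptions; the ID and credentials come from `cloudflared tunnel create`), its config could look like:

```yaml
# cloudflared config.yml mounted into the tunnel pod (values are hypothetical)
tunnel: <tunnel-id>                             # from `cloudflared tunnel create`
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  - hostname: dev.company.com                   # assumed hostname, gated by Cloudflare Access
    service: http://react-frontend.dev.svc.cluster.local:80
  - hostname: dev-api.company.com               # assumed hostname for API testing
    service: http://cart-api.dev.svc.cluster.local:8080
  - service: http_status:404                    # required catch-all rule
```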
Ingress Resource
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  namespace: prod
  annotations:
    # ALB settings
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789012:certificate/abc123
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    # (Optional) Routing optimization
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
spec:
  rules:
    - host: app.company.com
      http:
        paths:
          - path: /cart
            pathType: Prefix
            backend:
              service:
                name: cart-api
                port:
                  number: 8080
          - path: /order
            pathType: Prefix
            backend:
              service:
                name: order-api
                port:
                  number: 8080
          - path: /checkout
            pathType: Prefix
            backend:
              service:
                name: checkout-api
                port:
                  number: 8080
          # Catch-all for the frontend; listed last so the ALB rules the
          # controller creates don't shadow the API paths above.
          - path: /
            pathType: Prefix
            backend:
              service:
                name: react-frontend
                port:
                  number: 80
```
Or via service discovery, for code running inside the cluster:

```js
fetch("http://orders-api.prod.svc.cluster.local:8080/api/v1/orders")
```

If your React frontend is exposed via the ALB to the internet (what we built in the Ingress YAML), end-user browsers cannot resolve `*.svc.cluster.local`; that is private DNS that only exists inside Kubernetes. Browsers must call something public, like `https://app.company.com/cart`. The Ingress resource handles this routing:

- `/cart` → forwards to the `cart-api` ClusterIP Service

In this case, your React code just calls relative paths:

```js
fetch("/cart/api/v1/items")
```
Traffic flow
Let's walk through the full request flow for this architecture, from the moment you open your browser and type https://app.company.com all the way down to the right pod in Kubernetes, touching each component along the way.
🚦 Step-by-step traffic flow
1. Browser → DNS

- You type `https://app.company.com` in your browser.
- Your browser (via your local DNS resolver/ISP) queries DNS.
- `app.company.com` is an alias (CNAME) that points to the AWS ALB's generated DNS name (something long like `k8s-prod-alb-123456.elb.amazonaws.com`).
- So before any traffic reaches AWS, DNS sends your browser to the ALB endpoint.
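Keeping that CNAME in sync with the ALB's generated name can be automated. One option (an assumption here, not part of the original setup) is running ExternalDNS in the cluster, which watches Ingress resources and upserts the record for each rule host:

```yaml
# Hypothetical: with ExternalDNS deployed and granted access to the
# company.com hosted zone, this optional annotation pins the record name;
# by default ExternalDNS already uses spec.rules[].host from the Ingress.
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.company.com
```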
2. Client connection → AWS ALB

- The browser initiates an HTTPS connection to the ALB.
- The ALB presents the TLS certificate from ACM (SSL termination).
- If the client accepts it → handshake done, secure channel established.

The ALB is now your front door.
3. ALB → Ingress routing

- The ALB is controlled by the AWS Load Balancer Controller (running in your cluster).
- The controller reads the Ingress rules (your YAML) and configures the ALB's routing accordingly.
- The ALB inspects each incoming request:
  - Path `/` → goes to the `react-frontend` Service.
  - Path `/cart` → goes to the `cart-api` Service.
  - `/order` → `order-api`; `/checkout` → `checkout-api`.

Here's something important: because `target-type: ip` is set, the AWS Load Balancer Controller registers the pods' IPs in the ALB's target groups, so the ALB talks directly to pod IPs, not NodePorts. (In `instance` mode traffic would hop through NodePorts and kube-proxy; here it goes straight to the pods.)
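Under the hood, the controller expresses each Service-to-target-group mapping as a TargetGroupBinding resource. A sketch of what it generates for cart-api (the name is illustrative and the ARN is a placeholder):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: cart-api-tgb                # illustrative; the controller names its own bindings
  namespace: prod
spec:
  serviceRef:
    name: cart-api                  # Service whose pod endpoints get registered
    port: 8080
  targetType: ip                    # register pod IPs, not instances/NodePorts
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/...  # placeholder
```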
4. Service abstraction in Kubernetes

- Each backend is a ClusterIP Service.
- The Service's job: maintain the list of healthy pod endpoints (via Endpoints/EndpointSlice objects).
- Normally, Services abstract away pod IPs and kube-proxy load balances internally.
- But since `alb.ingress.kubernetes.io/target-type: ip` is set:
  - The ALB bypasses the Service's ClusterIP.
  - Instead, the controller registers pod IPs directly in the target group.
  - Health checks from the ALB also probe the pods directly.

This is more efficient: fewer hops.
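Those ALB health checks are tunable with annotations on the Ingress. A small sketch (the `/healthz` path is an assumption about what the APIs expose):

```yaml
# Optional health-check tuning on the Ingress (path is hypothetical)
alb.ingress.kubernetes.io/healthcheck-path: /healthz
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
alb.ingress.kubernetes.io/healthy-threshold-count: '2'
```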
5. Pod response back

- Once the traffic hits the right pod (say, `react-frontend-pod-1234`), the pod processes the request.
- React frontend → returns static files (HTML/JS/CSS).
- API request like `/cart` → the cart-api pod runs its logic, maybe querying RDS for data.
- The pod sends the response back over the same node network stack.
- The packet goes back to the ALB, and the ALB sends it back to your client.
🧭 Quick Example of End-to-End Flow

Let's say you load the shopping cart page:

1. Browser: `https://app.company.com/cart`.
2. DNS resolves `app.company.com` → the ALB DNS name.
3. The browser connects → HTTPS with the ACM cert on the ALB.
4. An ALB rule matches `/cart` → forwards the request to the cart-api target group.
5. The target group holds backend pod IPs like `10.244.5.18:8080` and `10.244.7.22:8080`.
6. The ALB round-robins to one → suppose it lands on `cart-api-pod-2`.
7. That pod queries RDS (private endpoint) to fetch the cart items.
8. The pod responds → flow goes Pod → ALB → Browser.
9. You see your cart items.
💡 Who handles load balancing where?

Here's where the balancing happens at each layer:

- ALB: external, HTTP-aware load balancer; terminates SSL, routes based on path/host, distributes traffic across pods.
- Kubernetes Service: would normally balance across pods internally (via kube-proxy/iptables). But here, since the ALB is in `target-type: ip` mode, the ALB balances across pods directly instead.
- EKS nodes: provide the network, but don't really "load balance" in this setup.
- RDS: Amazon manages replication to the read replica; your application directs reads at the replica's endpoint.
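Which pod IPs count as "healthy" ultimately comes down to readiness probes: a pod failing its probe is removed from the Service's endpoints, and the controller deregisters it from the target group. A hedged fragment of a pod template (image name and probe path are assumptions):

```yaml
# Fragment of the cart-api Deployment's pod template (hypothetical values)
containers:
  - name: cart-api
    image: example/cart-api:1.0      # hypothetical image
    ports:
      - containerPort: 8080
    readinessProbe:                  # failing pods drop out of endpoints/target groups
      httpGet:
        path: /healthz               # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```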
🕺 Simplified analogy
Imagine a nightclub (your app):
DNS = the address you tell your taxi driver.
ALB = the bouncer — checks ID, knows where each group of friends should go (dance floor = frontend, bar = cart-api, VIP lounge = checkout-api).
Service = the head usher’s map of which tables are taken (Pod IPs).
Pod = the bartender or DJ actually serving you.
RDS = the storeroom in the back with all ingredients.
You walk in, bouncer points you to the right spot, staff do the work, and you get your drink (HTTP response) back. Efficient, secure, and everyone stays in their lane.
✅ So to sum up:

- Load balancing is done by the ALB at the pod level.
- Traffic flows Browser → DNS → ALB → Pod IP (skipping Service proxying, thanks to `target-type: ip`).
- RDS access stays private inside the VPC, directly from the pods.

That's the clean, scalable flow you've designed.
- Frontend is exposed via the Ingress controller and load balancer.
- APIs connect to each other and to the web app via ClusterIP.
- Developers test APIs through a Cloudflare Tunnel, accessible only via WARP.
- The Postgres database runs on RDS, in the same VPC as EKS, with NAT access.
Inside the Cluster (API-to-API communication)

In Kubernetes (EKS), services usually talk to each other using ClusterIP Services.

Example:

- `react-frontend` talks to `orders-api` at `http://orders-api.prod.svc.cluster.local:8080`.
- `orders-api` talks to `inventory-api` the same way.

This is internal-only, secured inside the VPC, with no internet exposure needed.
What to Expose Publicly
Normally, only the frontend (React app) needs to be exposed externally.

APIs stay private and are accessible only inside the cluster.

You can use:

- an Ingress controller (NGINX, the ALB Ingress Controller, or Istio Gateway),
- or a Cloudflare Tunnel (safer, avoids direct public exposure).
Security best practices

- APIs in prod are never exposed directly (a NetworkPolicy sketch enforcing this follows below).
- Cloudflare Access (SSO, MFA) gates developer testing.
- Limit exposure to staging namespaces, never prod.
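One way to enforce the "never exposed directly" rule at the network layer is a NetworkPolicy like the sketch below (the `tier: api` label is an assumption, and enforcement requires a CNI that supports NetworkPolicy, e.g. Calico or the VPC CNI's network policy mode):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: apis-internal-only
  namespace: prod
spec:
  podSelector:
    matchLabels:
      tier: api            # assumed label on the API pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods within the prod namespace may reach the APIs
```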
How APIs Connect to RDS
Since your RDS is in a private subnet, it is only reachable inside the VPC.
Your EKS worker nodes are also in private subnets of the same VPC → they can connect directly, as in the sketch below.
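How a pod actually gets the RDS endpoint and credentials is up to you; a common minimal sketch (endpoint, user, and names are hypothetical, and a secrets manager is preferable to a plain Secret in practice):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rds-credentials
  namespace: prod
type: Opaque
stringData:
  DB_HOST: mydb.abc123xyz.us-east-1.rds.amazonaws.com  # hypothetical private RDS endpoint
  DB_USER: cart_api                                    # hypothetical user
  DB_PASSWORD: change-me                               # placeholder only
---
# Fragment of an API Deployment's pod spec consuming the secret as env vars
containers:
  - name: cart-api
    envFrom:
      - secretRef:
          name: rds-credentials
```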
Developers connect to the database via the bastion host (SSH tunnel, as in the Developer Access branch of the diagram above).
Namespaces for Environment Isolation

In EKS, you can have:

- `dev` namespace → for developers to deploy and test their APIs.
- `stage` namespace → for staging/pre-prod workloads.
- `prod` namespace → for production workloads.
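The namespaces themselves are plain Namespace objects, e.g.:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: stage
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
```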