Most people learn Kubernetes the wrong way. They see a list of concepts — Pods, Deployments, Services, Ingress, ConfigMaps, Secrets, HPA — and memorize them like a glossary.
That's backwards. Every single Kubernetes concept exists because the previous one wasn't enough. It's not a list. It's a story of things breaking in production.
A Pod runs your container. Simple. Clean. Done.
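Here's what that looks like, stripped to the bone (the name `web` and the image are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web          # this label matters later, when Services enter the story
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0   # hypothetical image
      ports:
        - containerPort: 8080
```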
Until the node it's running on dies, or the Pod gets evicted. Nothing recreates it. It's just gone. In production, that's not acceptable.
A Deployment watches your Pods. One dies, it creates another. You want 3 running, it keeps 3 running. You want to scale to 10, one command does it.
Pods were too fragile for production. Deployment fixed that.
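A minimal sketch, reusing the hypothetical `web` app from above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # keep exactly 3 Pods alive at all times
  selector:
    matchLabels:
      app: web                 # which Pods this Deployment owns
  template:                    # the Pod template it stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Scaling to 10 really is one command: `kubectl scale deployment web --replicas=10`.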
But now you have a new problem. Every Pod gets a new IP when it's recreated. You have 3 Pods running your app. Another service needs to talk to them. Which IP do you use? They keep changing.
A Service gives your app one stable IP address. It finds your Pods using labels, not IPs. Pods die and come back with new IPs. The Service doesn't care. It always finds them. It also load balances traffic across all healthy Pods automatically.
Pods had unstable IPs. Service fixed that.
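A sketch of that Service, matching the hypothetical `app: web` label from the Deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # finds Pods by label, never by IP
  ports:
    - port: 80          # the stable port other services call
      targetPort: 8080  # the port the container actually listens on
```

Inside the cluster, anything in the same namespace can just call `http://web`; cluster DNS and the stable ClusterIP handle the rest.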
But your app still needs to be accessible from the internet. So you use a LoadBalancer Service, which provisions a real cloud load balancer (on AWS an NLB or classic ELB, on Azure a Load Balancer, on GCP a network load balancer). Your app gets a public endpoint. Works perfectly.
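Same Service shape, one field changed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-public
spec:
  type: LoadBalancer    # asks the cloud provider to provision a real load balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```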
Until you have 10 services. Now you have 10 load balancers. Each one costs money every month. Your cloud bill doesn't care that 6 of them handle almost no traffic.
One load balancer. All your services behind it. Ingress routes traffic based on rules. Request comes in for /api, goes to the API service. Request comes in for /dashboard, goes to the frontend. One entry point. Smart routing. One cloud load balancer on your bill.
But Ingress is just a set of rules. Something has to execute them — Nginx, Traefik, AWS Load Balancer Controller. Ingress without a controller is just a config file nobody reads.
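Here's a sketch of those exact rules, assuming an NGINX ingress controller and hypothetical `api` and `frontend` Services:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main
spec:
  ingressClassName: nginx       # the controller that executes these rules
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api       # /api goes to the API service
                port:
                  number: 80
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: frontend  # /dashboard goes to the frontend
                port:
                  number: 80
```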
Your app needs configuration. Database URL. API keys. Environment name. Feature flags. So you do what feels natural — hardcode them inside the container.
Works on your laptop. Deploy to staging — wrong database URL. Deploy to production — wrong API key. You fix it by rebuilding the image every time config changes.
ConfigMap holds your configuration outside the container. You inject it into your Pod at runtime. Change the ConfigMap, redeploy. The image never changes. Same image runs in dev, staging, and production with different configs.
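A sketch with placeholder values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  DATABASE_URL: postgres://db.staging.internal:5432/app   # placeholder
  ENVIRONMENT: staging
```

In the Deployment's container spec, `envFrom` with a `configMapRef` to `web-config` injects every key as an environment variable. Swap the ConfigMap per environment; the image never changes.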
But then your database password is sitting in a ConfigMap. ConfigMaps aren't encrypted. Anyone with basic kubectl access can read them. That's not a mistake — that's a security incident.
Secrets hold sensitive data with separate access controls. Passwords, tokens, certificates, API keys. Your app reads them at runtime. Your image never sees them. One caveat: by default a Secret is base64-encoded, not encrypted, so you still want encryption at rest and tight RBAC around it.
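A sketch (the value is obviously a placeholder; in real life you create Secrets out of band rather than committing them):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: web-secrets
type: Opaque
stringData:
  DB_PASSWORD: changeme        # placeholder; never commit real secrets
```

In the container spec, an env var with `valueFrom.secretKeyRef` pointing at `web-secrets` / `DB_PASSWORD` hands the value to the app at runtime.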
Traffic starts growing. Manual scaling breaks down. Some days 100 users, some days 10,000. You're running 3 Pods. On a busy day all three are maxed out. Users see slow responses.
HPA (Horizontal Pod Autoscaler) watches your Pods continuously. CPU goes above 70%? It adds more Pods. Traffic drops? It scales back down. You define the minimum and maximum, Kubernetes does the rest.
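The sketch for exactly that policy, targeting the hypothetical `web` Deployment:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:              # what to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3               # never fewer than 3 Pods
  maxReplicas: 20              # never more than 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods above 70% average CPU
```

One detail worth knowing: that 70% is measured against each Pod's CPU request, which is one more reason requests matter (more on those below).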
But HPA adds Pods during a traffic spike, and your nodes are full. The new Pods sit in Pending state. They can't be scheduled because there's no capacity.
Cluster Autoscaler (or Karpenter on EKS) watches for Pods stuck in Pending. Not enough capacity — it adds a new node automatically. Load drops, nodes sit underutilized — it removes them. You only pay for the compute you actually need.
HPA scales your Pods. The cluster autoscaler scales your nodes. Together they make your cluster truly elastic.
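For the Karpenter flavor, a rough sketch. This assumes Karpenter v1 on EKS, and field names shift between versions, so treat it as a shape rather than a recipe:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]      # could also allow "spot"
      nodeClassRef:                  # EKS-specific node settings live here
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"                       # ceiling on total provisioned CPU
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m             # drain underutilized nodes quickly
```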
Everything is scaling. Pods are coming up. Nodes are being added. One Pod starts consuming 4GB of memory it was never supposed to. Nobody told Kubernetes that. It keeps consuming. It starves every other Pod on that node. Those Pods start failing. A cascade begins.
Requests tell the scheduler the minimum your Pod needs; that's what gets reserved on a node. Limits cap the maximum it's ever allowed to consume: blow past a memory limit and the container gets OOM-killed, hit a CPU limit and it gets throttled. Your cluster runs predictably. Every Pod gets what it needs. Nothing more.
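Inside the container spec of the Deployment from earlier, it's a few lines (values are illustrative):

```yaml
resources:
  requests:
    cpu: 250m        # scheduler reserves a quarter of a core
    memory: 256Mi    # and 256 MiB of memory
  limits:
    cpu: "1"         # throttled past one full core
    memory: 512Mi    # OOM-killed past 512 MiB
```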
Each concept exists because the previous one wasn't enough: Pod → Deployment → Service → Ingress → ConfigMap → Secret → HPA → Cluster Autoscaler → requests and limits.
That's how you stop memorizing Kubernetes and start understanding it. Every concept is a scar from production.
Don't memorize the glossary. Understand the failures.
— blanho