Docker & Kubernetes
Conceptual reference only — unlikely to appear as a coding exercise. Know these for verbal questions and system design discussions.
Docker
What it is
Containerization — package app + dependencies into an isolated, portable unit. Same container runs on any machine with Docker installed.
Key concepts
Image → read-only template (layers: base OS + app + deps)
Container → running instance of an image
Registry → store for images (DockerHub, ECR)
Dockerfile → instructions to build an image
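The day-to-day CLI that ties these concepts together (image name and registry path are illustrative placeholders):

```shell
docker build -t myapp:v1 .            # Dockerfile → image
docker run -p 8000:8000 myapp:v1      # image → running container
docker tag myapp:v1 <registry>/myapp:v1
docker push <registry>/myapp:v1       # image → registry (DockerHub, ECR)
```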
Dockerfile (know the pattern)
```dockerfile
# Multi-stage build — keep final image small
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

FROM python:3.11-slim
WORKDIR /app
# Copy both installed packages and console scripts (the uvicorn binary lives in /usr/local/bin)
COPY --from=builder /usr/local/lib/python3.11 /usr/local/lib/python3.11
COPY --from=builder /usr/local/bin /usr/local/bin
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
Multi-stage builds: build stage has compiler/tools, final stage has only runtime. Smaller image.
Layers and caching
Each RUN/COPY = new layer. Layers are cached.
Put rarely-changing lines first (COPY requirements.txt before COPY . .)
→ pip install cached unless requirements.txt changes
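Concretely, with the Dockerfile above, editing only app code and rebuilding reuses the cached dependency layer:

```shell
docker build -t myapp:v1 .   # first build: every layer executes
# ...edit main.py; requirements.txt unchanged...
docker build -t myapp:v2 .   # COPY requirements.txt + RUN pip install hit the cache
```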
Docker Compose (local multi-service dev)
```yaml
services:
  api:
    build: .
    ports: ["8000:8000"]
    depends_on: [db, redis]
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp
  redis:
    image: redis:7
```
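Typical usage, assuming the file above is saved as `docker-compose.yml`:

```shell
docker compose up --build -d   # build images and start all services in the background
docker compose logs -f api     # tail one service's logs
docker compose down            # stop and remove containers (add -v to also drop volumes)
```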
Kubernetes
What it is
Orchestration — run, scale, and manage containers across a cluster of machines.
Core objects
Pod → smallest deployable unit. 1+ containers sharing network/storage.
Usually 1 container per pod.
Deployment → manages a set of identical pods. Handles rolling updates, rollbacks.
"I want 3 replicas of this pod always running."
Service → stable network endpoint for pods (pods come and go, Service IP is stable).
Types: ClusterIP (internal), NodePort (external), LoadBalancer (cloud LB).
Ingress → HTTP routing rules. "Route /api/* to api-service, /* to frontend-service."
Needs an Ingress Controller (nginx, traefik).
ConfigMap → non-secret config (env vars, config files)
Secret → sensitive config (DB passwords, API keys) — only base64-encoded, not encrypted by default
Namespace → virtual cluster. dev/staging/prod namespaces on same cluster.
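A minimal Deployment + Service pair wiring these objects together (names `myapp` and `api-service` are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: myapp:v2
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: ClusterIP          # internal only; LoadBalancer for a cloud LB
  selector:
    app: myapp             # routes to any pod with this label
  ports:
    - port: 80
      targetPort: 8000
```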
How a deployment works
You write a Deployment manifest:
```yaml
# sketch, not a full manifest
replicas: 3
image: myapp:v2
resources:
  requests: { cpu: "500m", memory: "512Mi" }
```
K8s scheduler places pods on nodes with available resources.
If a pod crashes → K8s restarts it automatically.
Rolling update: bring up new pods, kill old ones one by one (zero downtime).
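The rolling-update behavior is tunable on the Deployment; this fragment shows a common zero-downtime setting (not the K8s defaults, which are 25%/25%):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod may exist during the update
      maxUnavailable: 0    # never drop below the desired replica count
```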
Probes (health checks)
```yaml
livenessProbe:          # K8s restarts the pod if this fails
  httpGet:
    path: /health
    port: 8000
  initialDelaySeconds: 10
readinessProbe:         # K8s removes the pod from the Service if this fails (stops traffic)
  httpGet:
    path: /ready
    port: 8000
```
Liveness: is the app alive? Restart it if dead.
Readiness: is the app ready for traffic? Remove it from load balancing if not.
Resource limits
```yaml
resources:
  requests:          # what the scheduler reserves when placing the pod
    cpu: "250m"      # 250 millicores = 0.25 CPU
    memory: "256Mi"
  limits:            # hard cap: CPU above this is throttled, memory above this gets the pod OOM-killed
    cpu: "500m"
    memory: "512Mi"
```
Interview Verbal Answers
"How does your app deploy?" → "Docker image built in CI (GitHub Actions), pushed to ECR. K8s Deployment pulls new image, does rolling update — brings up new pods, waits for readiness probe, then kills old ones. Zero downtime."
"What's a pod vs a deployment?" → "Pod is one running instance. Deployment manages a group of identical pods — handles scaling, rolling updates, self-healing."
"How do you handle secrets in K8s?" → "K8s Secrets for sensitive values, mounted as env vars. In production, use AWS Secrets Manager or HashiCorp Vault and inject at runtime — don't store secrets in git or docker images."
"What happens when a pod dies?" → "K8s controller notices desired state (3 replicas) != actual state (2 running) and schedules a new pod on an available node."
Related
- [[AWS/Lambda]] — serverless alternative to containers
- [[System Design/System Design Basics]] — deployment patterns