Tech Stack Overview
How to position skills in interviews and on the resume. LLM: update when new skills are added, certs achieved, or projects shipped.
The Three-Layer Stack
Python (Expert) ─────────────── 5 years, production RAG/LLM systems
Go (Growing) ────────────────── 3 months, goroutines/channels/interfaces
AWS (Production use) ────────── IAM, S3, EC2, Bedrock, Lambda — real usage
Most SDE-3 candidates own one layer. Owning all three = rare. The story: "Python got me here. Go makes me dangerous in backend/infra. AWS makes me deployable end-to-end."
Python — Expert Layer
Depth: 5 years, production systems
Domain: LLM/RAG pipelines, backend APIs, data engineering
| Area | Depth | Evidence |
|---|---|---|
| Core language | Expert | Decorators, OOP, generators, context managers |
| Async | Strong | asyncio, aiohttp, event loop internals |
| LLM/AI | Strong | LangChain agents, RAG pipelines (AskTGE) |
| Testing | Good | pytest, mocking, integration tests |
| Performance | Good | cProfile, memory_profiler, vectorized ops |
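The core-language row above is the easiest to evidence live in an interview. A minimal warm-up sketch (names are illustrative, not from any real project) combining a decorator and a context manager:

```python
import time
from contextlib import contextmanager
from functools import wraps

def timed(fn):
    """Decorator: report wall-clock time of each call."""
    @wraps(fn)  # preserves fn.__name__, docstring, etc.
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            print(f"{fn.__name__} took {time.perf_counter() - start:.4f}s")
    return wrapper

@contextmanager
def suppress_and_log(exc_type):
    """Context manager: swallow one exception type, logging it."""
    try:
        yield
    except exc_type as e:
        print(f"suppressed: {e!r}")

@timed
def slow_add(a, b):
    time.sleep(0.01)
    return a + b

with suppress_and_log(ZeroDivisionError):
    1 / 0

print(slow_add(2, 3))
```

Good follow-up talking point: why `functools.wraps` matters (without it, stacked decorators and debugging tools see `wrapper` instead of the real function).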
Interview story: Built and refactored AskTGE, a RAG + multiagent system. Owned SSE streaming, citation handling, and performance bottlenecks in production. Real complexity, real constraints.
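AskTGE itself is private, so as an illustrative stand-in for the SSE streaming talking point, here is the `text/event-stream` wire format a streaming answer endpoint emits (payload field names here are assumptions, not AskTGE's):

```python
import json

def sse_event(data, event=None, event_id=None):
    """Serialize one Server-Sent Events message (text/event-stream wire format)."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    if event_id:
        lines.append(f"id: {event_id}")
    # Multi-line payloads need one 'data:' line per line.
    for part in json.dumps(data).splitlines():
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"  # blank line terminates the event

def stream_answer(tokens):
    """Generator yielding one event per token, then a terminal 'done' event."""
    for i, tok in enumerate(tokens):
        yield sse_event({"token": tok}, event="token", event_id=str(i))
    yield sse_event({"status": "complete"}, event="done")

for frame in stream_answer(["Hel", "lo"]):
    print(frame, end="")
```

A generator like this plugs directly into a framework's streaming response (e.g. a FastAPI `StreamingResponse` with `media_type="text/event-stream"`).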
Resources in vault:
- [[Python/Language Core/Python Programming]]
- [[Python/Language Core/Object Oriented Programming]]
- [[Python/Language Core/Decorators]]
- [[Python/Libraries/Logging]]
- [[Python/Python Interview Questions]]
Go — Differentiator Layer
Depth: Growing (3–6 months), focused on concurrency and idiomatic patterns
Domain: Backend services, CLI tools, API servers
| Area | Depth | Evidence |
|---|---|---|
| Core syntax | Solid | Variables, types, structs, interfaces |
| Concurrency | Solid | Goroutines, channels, select, sync.Mutex |
| HTTP | Solid | net/http client, JSON decode, timeouts |
| Interfaces | Good | Structural typing, embedding, polymorphism |
| Generics | Learning | Type parameters, constraints |
Why Go matters for job switch: Most Python engineers applying to backend roles at Razorpay, Zerodha, CRED, etc. cannot discuss Go at all. Discussing goroutine scheduling vs OS threads in an interview is immediately differentiating.
The angle to use: "Go is my backend language of choice for services where Python's GIL would be a problem — concurrent I/O-heavy services, CLI tools, anything where goroutines make the design cleaner."
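A minimal runnable sketch of that angle, the fan-out pattern for concurrent I/O (work is simulated rather than calling net/http, so the snippet is self-contained):

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAll fans out one goroutine per URL and collects results on a channel.
// Real code would do an HTTP GET per URL; here the work is simulated.
func fetchAll(urls []string) []string {
	results := make(chan string, len(urls)) // buffered: senders never block
	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			results <- "fetched:" + u // stand-in for an HTTP call
		}(u)
	}
	wg.Wait()
	close(results) // safe: all sends finished before Wait returned
	out := make([]string, 0, len(urls))
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	got := fetchAll([]string{"a", "b", "c"})
	fmt.Println(len(got)) // result order is nondeterministic; the count is not
}
```

The interview hook: each goroutine costs a few KB of stack, so this scales to thousands of concurrent fetches where an OS-thread-per-request design would not.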
Resources in vault:
- [[Go/Go Topics]] — full index
- [[Go/Channels]] — goroutines + concurrency
- [[Go/Interfaces]] — structural typing
- [[Go/HTTP Clients]] — net/http
- [[synthesis/Concurrency Deep Dive]] — Go vs Python vs distributed
AWS — Cloud Layer
Depth: Broad foundation — production usage across multiple services
Note: MLA-C01 cert deferred until after the job switch. AWS is a talking point, not the lead.
| Service | Depth | Use case |
|---|---|---|
| IAM | Solid | Roles, policies, assume-role patterns |
| S3 | Solid | Storage classes, lifecycle, presigned URLs |
| EC2 | Good | Instance types, AMI, security groups |
| VPC | Good | Subnets, NAT gateway, security groups |
| Lambda | Good | Event-driven, cold start, layers |
| Bedrock | Real (AskTGE) | Managed LLMs, agents, knowledge bases — production use |
| SageMaker | Familiar | Training, endpoints — not production |
| Kinesis/Glue | Familiar | Streaming, ETL |
| Athena | Familiar | Serverless SQL on S3 |
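For the IAM assume-role row, a minimal trust policy sketch (account ID, role name, and external ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/app-role" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "example-id" } }
    }
  ]
}
```

This is the trust-policy half of the pattern; the assuming role still needs an identity policy granting `sts:AssumeRole` on the target role's ARN.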
Resources in vault:
- [[AWS/AWS Topics]] — full index
AI/ML — Domain Expertise
This is the moat for targeting AI companies.
| Area | Depth | Evidence |
|---|---|---|
| RAG systems | Production | Built AskTGE — ingestion, chunking, vector retrieval, generation |
| LangChain | Strong | Chains, agents, tools, LCEL → [[AI & ML/Langchain]] |
| MCP | Solid | Model Context Protocol, servers, tool calling → [[AI & ML/MCP]] |
| Vector DBs | Familiar | Embeddings, similarity search, indexing strategies |
| Prompt engineering | Strong | Production prompts, chain-of-thought, few-shot |
| AWS Bedrock | Learning | Managed models, agent framework |
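As a toy illustration of the chunk/embed/retrieve shape (bag-of-words stands in for a real embedding model; this is not the AskTGE pipeline):

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Fixed-size character chunking; production systems chunk by tokens or sections."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Rank chunks by similarity to the query; top-k feed the generation prompt."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = chunk("goroutines are cheap green threads. python uses an event loop for async io.")
print(retrieve("python async event loop", docs, k=1))
```

Swapping `embed` for an embedding-model call and the sort for a vector-index query gives the production shape; the control flow is the same.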
Resources in vault:
- [[synthesis/LLM & AI Stack]] — full AI synthesis page
- [[System Design/Problem Designs/RAG & LLM System]] — system design angle
Resume Positioning Guide
For AI/ML-heavy roles (Sarvam, Krutrim, AI startups)
Lead with: Python + LangChain + RAG production experience + MCP knowledge
Mention: AskTGE (private) as "production RAG + multiagent system"
De-emphasize: Go (frame as bonus, not primary)
For backend/infra roles (Razorpay, Zerodha, CRED, Groww)
Lead with: Go + Python + System Design coverage
Mention: Goroutines/channels knowledge, concurrent system design
De-emphasize: ML-heavy content
For cloud/data roles (AWS partner firms, data engineering)
Lead with: Python data stack + Kinesis/Glue/Athena + Bedrock production experience
Mention: AWS breadth (IAM→S3→EC2→Lambda→Bedrock), MLA-C01 planned post-offer
For FAANG/Big Tech
Lead with: NeetCode 150 completion, System Design depth, Python + Go
Mention: AWS cert, open source (vectorbuilds.dev projects)
Skills NOT on Stack (gaps to know about)
| Skill | Gap level | When it matters |
|---|---|---|
| Java/Kotlin | High | FAANG backend, Android |
| Kubernetes | Medium | Platform/infra roles |
| Spark/Hadoop | Low | Data engineering heavy roles |
| TypeScript/Next.js | None | Not a gap — already covered (portfolio site) |
| Rust | Low | Systems roles (not target) |
Related
- [[synthesis/Job Switch Hub]] — timeline and targets
- [[synthesis/Interview Prep Hub]] — round-by-round prep
- [[synthesis/Concurrency Deep Dive]] — Python vs Go deep dive
- [[synthesis/LLM & AI Stack]] — AI company targeting