Deep dives into algorithms, machine learning, system design, and engineering best practices.
Demystify LLMs — tokenization, pretraining objectives, scaling laws, emergent abilities, and the engineering behind training models with hundreds of billions of parameters.
Master the most fundamental data structures — arrays and strings. Learn traversal patterns, the two-pointer technique, sliding windows, and common interview patterns.
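A minimal sketch of the two techniques named above — function names are illustrative, not from any particular library:

```python
def pair_with_sum(nums, target):
    """Two pointers on a sorted array: find indices of a pair summing to target."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return lo, hi
        if s < target:
            lo += 1   # sum too small: advance the left pointer
        else:
            hi -= 1   # sum too large: retreat the right pointer
    return None

def longest_unique_window(s):
    """Sliding window: length of the longest substring with no repeated characters."""
    last_seen = {}
    start = best = 0
    for i, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1  # shrink the window past the repeat
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best
```

Both run in O(n) because each pointer only ever moves forward.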
Understand how hash tables work internally — hash functions, collision resolution, load factors — and master hash map patterns for solving problems efficiently.
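The canonical hash-map pattern in a few lines — trading O(n) space for a single pass instead of a nested loop (a sketch, not code from the article):

```python
def two_sum(nums, target):
    """Return indices of two numbers that sum to target, in one pass."""
    seen = {}  # value -> index of where we saw it
    for i, x in enumerate(nums):
        if target - x in seen:       # O(1) average lookup
            return seen[target - x], i
        seen[x] = i
    return None
```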
Master singly and doubly linked lists — insertion, deletion, reversal, cycle detection, and the fast/slow pointer technique that solves countless problems.
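The fast/slow pointer idea in a minimal sketch (Floyd's cycle detection; the `Node` class is just for illustration):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    """Floyd's tortoise and hare: the fast pointer laps the slow one iff a cycle exists."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # moves 1 step
        fast = fast.next.next     # moves 2 steps
        if slow is fast:
            return True
    return False                  # fast fell off the end: no cycle
```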
Go beyond basic push/pop — learn monotonic stacks for next-greater-element problems, queue-based BFS, and how to implement one with the other.
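A minimal monotonic-stack sketch for the next-greater-element problem mentioned above:

```python
def next_greater(nums):
    """For each element, the next strictly greater element to its right (-1 if none)."""
    result = [-1] * len(nums)
    stack = []  # indices whose values are still waiting for a greater element
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            result[stack.pop()] = x  # x resolves everything smaller on the stack
        stack.append(i)
    return result
```

Every index is pushed and popped at most once, so the whole pass is O(n) despite the inner loop.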
Build deep intuition for binary trees — inorder, preorder, postorder traversals, tree properties, and the recursive patterns that solve 90% of tree problems.
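The core recursive pattern in miniature — an inorder traversal, with a throwaway `TreeNode` class for illustration:

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node):
    """Left subtree, then node, then right subtree (sorted order for a BST)."""
    if node is None:
        return []
    return inorder(node.left) + [node.val] + inorder(node.right)
```

Preorder and postorder are the same recursion with `node.val` moved before or after the subtree calls.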
Understand the BST invariant, implement core operations, and learn why balanced trees (AVL, Red-Black) guarantee O(log n) performance.
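The BST invariant, made concrete as a validation sketch — each recursive call narrows the legal `(lo, hi)` range for the subtree (classes and names here are illustrative):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_valid_bst(node, lo=float('-inf'), hi=float('inf')):
    """Every key in the left subtree < node.val < every key in the right subtree."""
    if node is None:
        return True
    if not (lo < node.val < hi):
        return False
    return (is_valid_bst(node.left, lo, node.val) and
            is_valid_bst(node.right, node.val, hi))
```

Checking only `node.left.val < node.val` is the classic bug — the bounds must propagate down the whole subtree.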
Learn how heaps maintain a partial order for efficient min/max operations. Master the patterns: top-k elements, running median, merge k sorted lists, and task scheduling.
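The top-k pattern as a sketch using Python's `heapq`: keep a size-k min-heap so the root is always the weakest of the current top k:

```python
import heapq

def top_k(nums, k):
    """Largest k elements in descending order, O(n log k)."""
    heap = []
    for x in nums:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heapreplace(heap, x)  # evict the smallest of the top k
    return sorted(heap, reverse=True)
```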
A comprehensive guide to graph traversal, cycle detection, topological ordering, and shortest path algorithms from Dijkstra to Bellman-Ford.
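BFS in its simplest useful form — shortest path length on an unweighted graph stored as an adjacency dict (a minimal sketch; weighted graphs need Dijkstra instead):

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    """Minimum number of edges from start to goal, or -1 if unreachable."""
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nb in graph.get(node, []):
            if nb not in visited:
                visited.add(nb)
                queue.append((nb, dist + 1))
    return -1
```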
Understand every major sorting algorithm — their mechanics, time complexities, stability, and when to use each. Includes merge sort, quick sort, counting sort, and quick select.
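One of those algorithms as a sketch — merge sort, with the `<=` comparison that makes it stable:

```python
def merge_sort(nums):
    """Stable O(n log n) divide-and-conquer sort."""
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= preserves the order of equal keys
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```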
Master binary search and its powerful variants — search on rotated arrays, find boundaries, minimize/maximize with binary search on answer, and search in 2D matrices.
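The boundary-finding variant in miniature — a lower-bound search that keeps a `[lo, hi)` invariant (equivalent to Python's `bisect.bisect_left`):

```python
def lower_bound(nums, target):
    """Smallest index i with nums[i] >= target; len(nums) if no such index."""
    lo, hi = 0, len(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1   # answer is strictly right of mid
        else:
            hi = mid       # mid could be the answer; keep it in range
    return lo
```

Binary search on the answer uses the same skeleton, with `nums[mid] < target` replaced by a feasibility check on the candidate value `mid`.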
Demystify DP with 6 core patterns — linear, knapsack, string, grid, interval, and state machine. Learn to identify DP problems and build solutions from subproblems.
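One of those patterns as a sketch — the 0/1 knapsack with a 1-D table, iterating capacities downward so each item is used at most once:

```python
def knapsack(weights, values, capacity):
    """Max total value with total weight <= capacity, each item used 0 or 1 times."""
    dp = [0] * (capacity + 1)  # dp[c] = best value achievable with capacity c
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):  # descending: item not reused
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```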
Master the art of generating all possibilities — permutations, combinations, subsets, N-Queens, and Sudoku. Learn the backtracking template that solves them all.
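That template, at its smallest, generating all subsets — the choose/explore/un-choose rhythm is the same for permutations, N-Queens, and Sudoku:

```python
def subsets(nums):
    """All 2^n subsets of nums via backtracking."""
    result, path = [], []
    def backtrack(start):
        result.append(path[:])         # record the current partial choice
        for i in range(start, len(nums)):
            path.append(nums[i])       # choose
            backtrack(i + 1)           # explore
            path.pop()                 # un-choose
    backtrack(0)
    return result
```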
Learn when and why greedy works — interval scheduling, activity selection, Huffman coding, and the proof techniques that validate greedy choices.
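Activity selection as a sketch — greedily taking the interval that finishes earliest is provably optimal by an exchange argument:

```python
def max_non_overlapping(intervals):
    """Max number of mutually non-overlapping intervals (touching endpoints allowed)."""
    count, current_end = 0, float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):  # by finish time
        if start >= current_end:
            count += 1
            current_end = end
    return count
```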
Build a Trie from scratch and use it for autocomplete, spell checking, word search, and IP routing. Understand when tries beat hash maps for string problems.
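A dict-of-dicts trie sketch — insert and the prefix lookup that hash maps can't do without scanning every key:

```python
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True  # end-of-word marker

    def search(self, word):
        node = self._walk(word)
        return node is not None and '$' in node

    def starts_with(self, prefix):
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self.root
        for ch in s:
            if ch not in node:
                return None
            node = node[ch]
        return node
```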
Learn the Union-Find structure with path compression and union by rank — nearly constant amortized time per operation (inverse-Ackermann). Solve connected components, cycle detection, and Kruskal's MST.
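Both optimizations in one compact sketch — `find` compresses by path halving, `union` attaches the shorter tree under the taller:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already connected — in Kruskal, this edge closes a cycle
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```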
Learn the essential bit operations — XOR tricks, bit counting, power-of-two checks, bitmask DP, and how computers represent numbers at the lowest level.
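Two of those tricks as one-liners — the power-of-two check and the classic XOR pairing argument:

```python
def is_power_of_two(n):
    """n & (n - 1) clears the lowest set bit; a power of two has exactly one."""
    return n > 0 and n & (n - 1) == 0

def single_number(nums):
    """XOR cancels pairs (x ^ x == 0), leaving the element that appears once."""
    acc = 0
    for x in nums:
        acc ^= x
    return acc
```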
Step-by-step guide to building a Retrieval-Augmented Generation pipeline with vector embeddings, chunking strategies, and LLM integration.
Understand the architecture behind every modern LLM — self-attention, multi-head attention, positional encoding, and the encoder-decoder framework that started it all.
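Self-attention for a single query, stripped to arithmetic — a pure-Python sketch (real implementations are batched matrix multiplies, and these list-based names are illustrative):

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention: softmax(q . k / sqrt(d)) weighted sum of values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]      # softmax over positions
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

Multi-head attention runs this several times in parallel on learned projections of q, k, v and concatenates the results.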
Learn to adapt large language models to your domain without training from scratch. Master LoRA, QLoRA, and the full fine-tuning pipeline from data to deployment.
Explore the revolution in small language models (SLMs) — Phi, Gemma, Qwen, TinyLlama. Learn why sub-7B models are production-ready and how to deploy them on-device.
Master the techniques that make LLMs perform — chain-of-thought, few-shot learning, system prompts, structured output, and the patterns used by top AI engineers.
Understand the alignment techniques that turn base models into helpful assistants — Reinforcement Learning from Human Feedback, Direct Preference Optimization, and reward modeling.
Optimize LLM serving for production — KV caching, continuous batching, speculative decoding, PagedAttention (vLLM), and techniques that cut latency and cost by up to 10x.
Understand how text embeddings capture meaning, how vector databases enable semantic search, and how to build similarity systems that power recommendations, RAG, and search.
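The measure underneath all of it, as a sketch — cosine similarity between two embedding vectors (vector databases approximate exactly this search at scale):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```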
Build AI agents that can use tools, plan multi-step actions, and solve complex tasks autonomously. From ReAct to function calling to multi-agent systems.
Learn how to properly evaluate language models — MMLU, HumanEval, perplexity, LLM-as-judge, and why benchmarks alone don't tell the full story.
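Perplexity, at least, fits in a few lines — the exponential of the average negative log-likelihood per token (a sketch over per-token probabilities the model assigned):

```python
import math

def perplexity(token_probs):
    """exp(mean negative log-likelihood); lower is better, uniform-over-4 gives 4.0."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))
```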
Deep dive into how LLMs break text into tokens — BPE, WordPiece, SentencePiece algorithms. Understand why tokenization affects model performance, cost, and multilingual ability.
Go beyond basic self-attention — learn the modern attention variants that make LLMs efficient: grouped query attention, multi-query attention, sliding window, and rotary position embeddings.
A practical guide to pretraining your own language model — data collection and cleaning, training infrastructure, distributed training with FSDP, and the engineering challenges at scale.