☀️ Morning (~30 min): Quantum → ROS+SLAM → repeat
📚 Day (2–3 hrs): CS50P daily · 3B1B daily · AI Policy/Frontier · Andrew Ng · Build
—
☀️ Daily Morning Rotation — 25–30 min every day before everything else
Tue & Fri: Quantum Computing → Wed & Sat: ROS + SLAM → Sun: Review all three → Mon & Thu: Hardware (occasional)
Fresh start — May 4, 2026. You're starting from zero, and that's the perfect place to begin: you'll do it in the right order from day one. 1–2 hrs/day · Week 1: May 4–10 · Week 2: May 11–17 · Week 3: May 18–24
Python Syntax Transfer · NOW · 2 DAYS
You have Java. You do NOT need a "Python for AI" or "Python for OOP" course. You need a syntax transfer, then two data science libraries. CS50P + NumPy + Pandas = everything you need for ML/data science.
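What "syntax transfer" means in practice, as a toy sketch (values are made up): the same computation written with a Java reflex, then idiomatically, then with the two libraries this step is about.

```python
import numpy as np
import pandas as pd

nums = [3, 1, 4, 1, 5]

# Java reflex: index-based loop.
total = 0
for i in range(len(nums)):
    total += nums[i] * nums[i]

# Pythonic: iterate values directly with a generator expression.
total = sum(n * n for n in nums)

# NumPy: vectorized math on the whole array, no explicit loop.
arr = np.array(nums)
total = int((arr ** 2).sum())

# Pandas: labeled tabular data with built-in aggregation.
df = pd.DataFrame({"n": nums})
print(total, df["n"].mean())
```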
Build the Conceptual Anchor — MIT 6.034 · WEEK 1 · 2 HRS
Watch MIT 6.034 Lectures 1 and 10 by Patrick Winston. These answer WHY AI and ML exist before you see HOW any of it works. After each, write in your own words what AI/ML is. This is your foundation session — the layer that makes everything else click.
3Blue1Brown — Math Made Visual · WEEK 1–2 · ALL 7 CHAPTERS
Once Winston gives you the WHY, 3B1B makes the math intuitive before you write code. Neural networks, gradient descent, backprop, transformers, attention — visual first.
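Here is the core loop 3B1B animates, shrunk to one dimension. A toy function, not a real network:

```python
# Gradient descent on f(x) = (x - 3)^2, minimum at x = 3.
def grad(x):
    return 2 * (x - 3)      # derivative f'(x)

x, lr = 0.0, 0.1            # start away from the minimum; lr = learning rate
for _ in range(50):
    x -= lr * grad(x)       # step opposite the slope, downhill
print(round(x, 4))          # ≈ 3.0
```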
Andrew Ng — ML Specialization · WEEK 2–6 · COURSE 1 FIRST
The hands-on code layer. After concept (Winston) and math-visual (3B1B), implementation makes sense. You were on Course 2 — go back to Course 1. Course 1 = what ML is + linear/logistic regression. Course 2 = NNs in code. You can't learn 2 before 1.
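Course 1's centerpiece, linear regression trained by gradient descent, fits in a few lines of NumPy. A sketch on made-up toy data:

```python
import numpy as np

# Toy data: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.5, 100)

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    y_hat = w * x + b
    # Gradients of mean squared error with respect to w and b.
    dw = 2 * np.mean((y_hat - y) * x)
    db = 2 * np.mean(y_hat - y)
    w -= lr * dw
    b -= lr * db
print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```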
Call an API. Build a RAG prototype. Run fast.ai Lesson 1. Don't wait until you "know enough" — the moment something works is when everything crystallizes.
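On the RAG prototype: the retrieval mechanics are simpler than they sound. A minimal sketch where embed() is a fake bag-of-words stand-in; in a real prototype you'd swap in an actual embedding model (API or local):

```python
import numpy as np

# Fake embedding over a tiny fixed vocabulary, just to show the mechanics.
VOCAB = ["fpga", "chip", "pandas", "dataframes", "tabular", "slam", "map"]

def embed(text: str) -> np.ndarray:
    words = text.lower().split()
    v = np.array([sum(w.startswith(t) for w in words) for t in VOCAB], float)
    n = np.linalg.norm(v)
    return v / n if n else v

docs = ["FPGAs are reconfigurable chips.",
        "Pandas handles tabular data.",
        "SLAM builds maps while localizing."]
doc_vecs = np.stack([embed(d) for d in docs])

query = "which library handles tabular dataframes?"
scores = doc_vecs @ embed(query)           # cosine similarity (unit vectors)
context = docs[int(np.argmax(scores))]
# Retrieval-augmented prompt: feed this to an LLM API.
print(f"Answer using this context:\n{context}\n\nQ: {query}")
```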
Read Papers + Stay at the Frontier · WEEK 4+ · 1 PAPER/WEEK
Daily: 5 min on HuggingFace Papers, read one abstract. Weekly: one full paper using Ng's 3-pass method. This is how frontier intuition develops — not from one big study session but daily ambient exposure.
Zero to One clarification: Philosophy book about startups, not a how-to guide. Thiel argues the most valuable companies create something genuinely new (0→1) rather than copying (1→n). Read nightly. For actual tactics: YC Startup School.
On "LLM illusions" — what your friend was discussing: LLMs predict plausible-sounding text, not truth. Causes hallucinations (confident wrong facts), sycophancy (agreeing to please), and confabulation. Read the Anthropic sycophancy paper to go deep.
Ng's 3-Pass Method: Pass 1 = abstract + headers + conclusion (5 min). Pass 2 = read body, skip proofs (1 hr). Pass 3 = reproduce from scratch (for papers that really matter). Watch the 8-min video first.
Three layers, in order: (1) Conceptual — how hardware enables AI at scale. (2) Programming — Verilog/VHDL, building digital logic. (3) Inference acceleration — deploying NNs on FPGAs, quantization, HLS. Concept before code, always.
Layer 1 — Conceptual Foundation
CPU vs GPU vs FPGA vs ASIC · START HERE · 1 SESSION
The four hardware paradigms. CPUs are flexible but slow for parallel math. GPUs have thousands of cores — perfect for matrix ops that power NNs. FPGAs are reconfigurable hardware — you define the circuit in Verilog and it becomes that circuit. ASICs are custom chips (Google TPU, Apple Neural Engine) — fastest but fixed forever.
A neural network is fundamentally matrix multiplications + activation functions. GPUs excel at this because they run thousands of multiply-accumulate (MAC) operations in parallel. FPGAs let you build a custom MAC array tuned exactly to your network. This is why hardware matters for ML.
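In code, one dense layer really is just that. A toy-sized sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)         # 4 inputs
W = rng.normal(size=(3, 4))    # 3 neurons x 4 weights each
b = np.zeros(3)

# W @ x is 3*4 = 12 multiply-accumulate ops: the exact workload a GPU
# parallelizes and an FPGA MAC array hard-wires.
y = np.maximum(0, W @ x + b)   # ReLU activation
print(y)
```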
Threads, blocks, grids, shared memory, and whether a kernel is memory-bandwidth-bound or compute-bound. You don't need to write CUDA right now — understanding the execution model explains why PyTorch works the way it does and why .to('cuda') matters.
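A minimal sketch of what that looks like from PyTorch, assuming it's installed (falls back to CPU without a GPU):

```python
import torch

# .to() moves the tensor's storage onto the device; the matmul is then
# dispatched to a CUDA kernel running thousands of threads in parallel.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(1024, 1024).to(device)
b = torch.randn(1024, 1024).to(device)
c = a @ b
print(c.device)
```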
Layer 2 — Programming: Digital Logic + Verilog
Digital Logic Foundations · BEFORE VERILOG · 2 SESSIONS
Gates, flip-flops, combinational vs sequential logic, finite state machines. Don't skip this even though it feels basic. Every FPGA design is built from these primitives. Nand Game is the best way to learn — build a computer from NAND gates in-browser.
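To see how far one primitive goes, here is the Nand Game idea as a quick Python simulation (software only, no hardware semantics):

```python
# Everything below is built from a single NAND primitive.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# Half adder: one-bit sum and carry, the first step toward an ALU.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```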
Verilog describes hardware, not software. Key mindset shift: everything runs in parallel, not sequentially. You're defining a circuit, not telling a processor what to do. Modules, wires, registers, always blocks, clocking. HDLBits is the best interactive resource — do it like LeetCode.
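One way to internalize that mindset before touching Verilog: model nonblocking assignment in Python, where every register's next value is computed from the old state and then committed at once. A loose analogy, not full Verilog semantics:

```python
# A 2-stage shift register, clocked.
state = {"q0": 0, "q1": 0}

def tick(state, d):
    # Like nonblocking assignment (<=): every next value is computed from
    # the CURRENT state, then all registers update together. No statement
    # "runs first"; the circuit is parallel.
    return {"q0": d, "q1": state["q0"]}

for bit in [1, 0, 1, 1]:
    state = tick(state, bit)
    print(state)
```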
A multiply-accumulate unit is the fundamental building block of a neural network accelerator. Build one: takes A and B, computes A×B + accumulator. Simulate with a testbench. This bridges Verilog and NN hardware — you're literally building the circuit a neural network runs on.
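A Python model of that datapath can serve as the reference your testbench checks against. A sketch, with register width as an arbitrary assumption:

```python
class MAC:
    """Model of a multiply-accumulate unit: a register plus a multiplier
    and adder. The Verilog version updates acc on each clock edge."""
    def __init__(self, acc_bits=32):
        self.acc = 0
        self.bits = acc_bits

    def step(self, a: int, b: int) -> int:
        # One "clock cycle": acc <= acc + a*b, truncated to register width.
        self.acc = (self.acc + a * b) % (1 << self.bits)
        return self.acc

# Tiny "testbench": a dot product, one MAC operation per cycle.
mac = MAC()
weights, activations = [2, 1, 3], [5, 4, 1]
for w, a in zip(weights, activations):
    result = mac.step(w, a)
print(result)  # 2*5 + 1*4 + 3*1 = 17
```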
Layer 3 — Inference Acceleration
Training uses 32-bit floats. FPGAs prefer 8-bit integers (INT8) or even binary weights. Quantization = reducing precision while preserving accuracy. This is the bridge between ML and hardware deployment — why BERT can run on a phone or an FPGA.
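The core of symmetric INT8 quantization in a few lines, on made-up weights:

```python
import numpy as np

# Scale so the largest magnitude maps to 127, round to int8, then
# dequantize to measure how much precision was lost.
rng = np.random.default_rng(2)
w = rng.normal(0, 0.1, 1000).astype(np.float32)

scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_back = w_int8.astype(np.float32) * scale

print("max abs error:", float(np.abs(w - w_back).max()))
print("bytes: float32 =", w.nbytes, "| int8 =", w_int8.nbytes)  # 4x smaller
```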
Writing Verilog for complex NNs by hand is impractical. HLS tools let you write C/C++ and synthesize it to hardware automatically. This is how real ML inference accelerators are built in industry.
Python library from CERN that converts Keras/PyTorch models to FPGA firmware via HLS. Train a small model → run hls4ml → get synthesizable Verilog. Full stack: ML training → quantization → HLS → FPGA. Also used in CERN's real-time particle physics trigger systems.
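The flow looks roughly like this. A sketch based on the public hls4ml tutorials; the model file is hypothetical and exact arguments may differ by version, so check the current docs:

```python
import hls4ml
from tensorflow import keras

# "my_small_model.h5" is a placeholder for whatever small model you trained.
model = keras.models.load_model("my_small_model.h5")

# Generate a baseline HLS config, then convert the model to an HLS project.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls_project",
)

hls_model.compile()   # C simulation: check outputs still match Keras
# hls_model.build()   # HLS synthesis to RTL (needs vendor tools; slow)
```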