LearnAI

Hmatasmagen

@hmatasmagen
⚡ 47 🔥 2-day streak 🏅 Builder

What @hmatasmagen is learning

Hmatasmagen has made their learning discoverable in 3 topics. Each section below samples the LearnAI curriculum; open one to read what they're working through.

🧠 AI Foundations — What AI is, how it learns, and why it works (or doesn't). ⚡ 47 earned here

The first principles of modern AI: what neural nets actually do, why scaling laws matter, and how today's models went from autocomplete to apparent reasoning. No math required.

Modern AI rests on a tight stack of ideas: neural networks (universal function approximators), gradient descent (the learning rule), the transformer architecture (attention as the workhorse), and scaling laws (more data + more compute = better models, with surprisingly clean exponents). You'll learn why 'pattern, not magic' is the right mental model, where today's models genuinely break down (out-of-distribution data, multi-step reasoning that won't fit in context), and the difference between pre-training, fine-tuning, RLHF, and constitutional AI. Anchored on real models — GPT-4, Claude, Gemini, Llama — not abstractions.
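The "surprisingly clean exponents" point can be sketched as a simple power law. The constants below are invented for illustration, not from any published fit:

```python
# Illustrative scaling-law sketch: pretraining loss falls as a clean power law
# in model size. The constants (a, alpha) are made up for illustration only.

def loss(n_params: float, a: float = 400.0, alpha: float = 0.34) -> float:
    """Loss as a power law in parameter count: L(N) = a * N^(-alpha)."""
    return a * n_params ** -alpha

# The key property: doubling model size cuts loss by the SAME fixed ratio
# every time, regardless of where you start on the curve.
ratio_small = loss(2e8) / loss(1e8)   # 100M -> 200M params
ratio_large = loss(2e9) / loss(1e9)   # 1B   -> 2B params
# both ratios equal 2 ** -0.34, i.e. about a 21% loss reduction per doubling
```

That constant-ratio-per-doubling behavior is what made the "keep scaling" bet plannable: you could extrapolate the curve before spending the compute.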

Sample Sparks
AI is pattern, not magic
Modern AI is statistical pattern-matching at scale, not symbolic reasoning. Once you see that, half the hype evaporates and the real capabilities sharpen.
Why scaling worked
Bigger models + more data + more compute kept producing better results long after most researchers thought it would plateau. That bet built today's frontier.
Transformers in 60 seconds
The transformer is just attention + a feedforward block, stacked. Attention lets each token look at every other token. That single trick replaced RNNs, CNNs, and almost everything else.
Pre-training vs. fine-tuning vs. RLHF
Pre-training learns the world. Fine-tuning teaches a job. RLHF makes the model pleasant to talk to. Three stages, three very different signals.
Where models still break
Long-horizon reasoning, novel symbolic tasks, anything truly out-of-distribution. Knowing where the edges are is more useful than memorizing benchmarks.
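The "attention lets each token look at every other token" idea fits in a few lines. This is a toy sketch (weights tied to the identity for brevity; real layers learn separate query/key/value projections), not a production transformer layer:

```python
# Minimal self-attention sketch: each token's output is a weighted mix of
# every token's value vector, with weights from query-key similarity.
# This is the "attention" half of a transformer block; a small per-token
# feedforward net is the other half. Toy version: no learned projections.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (tokens, dim) -> (tokens, dim)."""
    q, k, v = x, x, x                        # real layers learn 3 projections
    scores = q @ k.T / np.sqrt(x.shape[1])   # every token scores every token
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ v                       # mix the value vectors

out = self_attention(np.random.randn(4, 8))  # 4 tokens, 8-dim embeddings
```

Because the mixing weights come from the data itself rather than from fixed positions, the same block handles text, code, and images, which is why it displaced RNNs and CNNs.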
🛠️ Being an AI Builder — Mindset, workflow, and stack of a modern AI builder.

How modern AI builders work: ship in 1-7 day loops, default to tiny prototypes, use Loom before code, and treat the AI itself as a teammate. The job isn't writing code — it's choosing the right problem to solve in public.

The 2025 AI builder runs a fundamentally different loop than a 2020 software engineer. You'll learn the weekly-ship cadence (compounding > batching), the Loom-before-code spec discipline (if you can't film the experience, the idea isn't crisp), the agentic IDE workflow (steering Claude Code / Cursor / Windsurf rather than typing), and the build-in-public muscle (every Spark you ship is a recruiting asset). Plus the meta-skill: knowing which tiny problem to attack so the AI's compounding leverage actually shows up in revenue or retention, not just commits.

Sample Sparks
Tiny ships > big plans
AI moves so fast that 6-month roadmaps are fiction. Weekly ships compound; quarterly plans rot.
Loom-before-code
Start every project with a Loom video of the experience you want. If you can't make the Loom, the idea isn't crisp enough.
The agentic IDE workflow
Claude Code, Cursor, Windsurf. Pick a goal, hand the tool the keys to your repo, and review at the diff level. The skill is steering, not typing.
Build in public, every week
Each ship is a recruiting asset. The portfolio compounds whether or not the project does.
Pick the right tiny problem
AI's compounding leverage only shows up where the bottleneck was actually 'time to write code'. Pick those problems on purpose.
🎯 AI Product Management — Ship AI features users actually use and trust.

AI product management is the discipline of shipping non-deterministic features without losing user trust. Eval-driven roadmaps, prompt-as-spec, hallucination guardrails, and how to size an AI bet you can actually pay for.

Classic PM tools (PRDs, A/B tests, NPS) break on AI features because the output is non-deterministic, the cost-per-call scales with usage, and 'wrong' isn't binary. You'll learn the eval-driven roadmap (build the rubric before the feature), the prompt-as-spec discipline (your prompt IS the product spec), trust-budget thinking (every wrong answer spends user trust — meter it), inference-cost unit economics (a viral AI feature can bankrupt you in a weekend), and the new metrics that matter (good-answer rate, time-to-confidence, refusal-quality, escape-hatch usage).
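The inference-cost point is back-of-envelope arithmetic. Every number below is hypothetical; plug in your own model pricing and usage data:

```python
# Hypothetical inference unit economics -- all numbers are made up for
# illustration. The point: cost scales with usage, price usually doesn't.
price_per_1k_tokens = 0.01       # blended input+output, $ per 1K tokens
tokens_per_call = 2_000
calls_per_user_per_day = 10
monthly_price_per_user = 20.00   # what you charge

cost_per_call = price_per_1k_tokens * tokens_per_call / 1_000      # $0.02
cost_per_user_per_month = cost_per_call * calls_per_user_per_day * 30  # $6.00
margin_per_user = monthly_price_per_user - cost_per_user_per_month     # $14.00
```

Notice the failure mode: if a viral moment triples `calls_per_user_per_day`, cost triples while revenue stays flat, which is exactly how a weekend can torch a runway.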

Sample Sparks
Evals are your roadmap
If you can't measure 'is this answer good?', you can't ship it. Build the eval before the feature.
Trust budget
Every wrong answer spends a unit of user trust. Decide how much you're willing to spend before you launch.
Prompt-as-spec
Your prompt is the product spec. Treat it that way: review it, version it, ship behind a flag.
Inference unit economics
A viral AI feature can torch your runway in 72 hours. Calculate cost-per-active-user before you scale.
Refusal quality
How an AI says 'I don't know' is half the product. A great refusal is better than a confident wrong answer.
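"Build the eval before the feature" can be made concrete with a toy harness. The rubric and the answers below are invented for illustration; a real eval would grade live model outputs against a versioned rubric:

```python
# Toy eval harness computing a "good-answer rate". The grading rule and the
# eval set are invented for illustration -- a real rubric is task-specific.
def grade(answer: str, must_contain: str, should_refuse: bool) -> bool:
    """Pass if the answer contains the required fact, or refuses when it should."""
    if should_refuse:
        return "don't know" in answer.lower()
    return must_contain.lower() in answer.lower()

eval_set = [
    # (model answer, required fact, should the model have refused?)
    ("Paris is the capital of France.", "Paris", False),
    ("The 2031 election winner is Smith.", "", True),   # confident hallucination
    ("I don't know; that hasn't happened yet.", "", True),  # good refusal
]
good_answer_rate = sum(grade(a, c, r) for a, c, r in eval_set) / len(eval_set)
```

Run the harness on every prompt change and the eval becomes the roadmap: the feature ships when the rate clears the bar you set before building.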

Last 14 days

7 Sparks completed

Start your own AI learning story

Sign in with Google. Pick the AI topics you want to grow in. Get personalized 5-minute Sparks every day.

Start with LearnAI →