Danshtr
What @danshtr is learning
Danshtr has made 3 topics discoverable. Each section below is a sample of the LearnAI curriculum; open one to read what they're working through.
🧠 AI Foundations — What AI is, how it learns, and why it works (or doesn't). ⚡ 70 earned here
The first principles of modern AI: what neural nets actually do, why scaling laws matter, and how today's models went from autocomplete to apparent reasoning. No math required.
Modern AI rests on a tight stack of ideas: neural networks (universal function approximators), gradient descent (the learning rule), the transformer architecture (attention as the workhorse), and scaling laws (more data + more compute = better models, with surprisingly clean exponents). You'll learn why 'pattern, not magic' is the right mental model, where today's models genuinely break down (out-of-distribution data, multi-step reasoning that can't fit in a single context window), and the difference between pre-training, fine-tuning, RLHF, and constitutional AI. Anchored on real models — GPT-4, Claude, Gemini, Llama — not abstractions.
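To make 'gradient descent as the learning rule' concrete, here is a minimal sketch: one weight, a handful of data points generated by the rule y = 2x, and repeated downhill steps on the squared error. All numbers (data, learning rate, iteration count) are illustrative choices, not anything from a real model.

```python
# Toy gradient descent: learn a single weight w so that w*x matches y.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by the "true" rule y = 2x

w = 0.0    # arbitrary starting guess
lr = 0.05  # learning rate: how big a step to take downhill

for _ in range(200):
    # derivative of loss = mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step against the gradient

print(round(w, 3))  # converges toward 2.0, the true coefficient
```

Real training does exactly this, just with billions of weights and a far noisier loss surface — which is why 'pattern, not magic' is the right framing.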
🛠️ Being an AI Builder — Mindset, workflow, and stack of a modern AI builder.
How modern AI builders work: ship in 1-7 day loops, default to tiny prototypes, use Loom before code, and treat the AI itself as a teammate. The job isn't writing code — it's choosing the right problem to solve in public.
The 2025 AI builder runs a fundamentally different loop than a 2020 software engineer. You'll learn the weekly-ship cadence (compounding > batching), the Loom-before-code spec discipline (if you can't film the experience, the idea isn't crisp), the agentic IDE workflow (steering Claude Code / Cursor / Windsurf rather than typing), and the build-in-public muscle (every Spark you ship is a recruiting asset). Plus the meta-skill: knowing which tiny problem to attack so the AI's compounding leverage actually shows up in revenue or retention, not just commits.
🎯 AI Product Management — Ship AI features users actually use and trust.
AI product management is the discipline of shipping non-deterministic features without losing user trust. Eval-driven roadmaps, prompt-as-spec, hallucination guardrails, and how to size an AI bet you can actually pay for.
Classic PM tools (PRDs, A/B tests, NPS) break on AI features because the output is non-deterministic, the cost-per-call scales with usage, and 'wrong' isn't binary. You'll learn the eval-driven roadmap (build the rubric before the feature), the prompt-as-spec discipline (your prompt IS the product spec), trust-budget thinking (every wrong answer spends user trust — meter it), inference-cost unit economics (a viral AI feature can bankrupt you in a weekend), and the new metrics that matter (good-answer rate, time-to-confidence, refusal-quality, escape-hatch usage).
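The inference-cost point is easy to verify with back-of-envelope arithmetic. The sketch below uses invented numbers throughout (blended token price, tokens per request, usage rates); plug in your own provider's pricing.

```python
# Back-of-envelope inference-cost unit economics.
# Every figure here is an illustrative assumption, not real pricing.
price_per_1k_tokens = 0.01      # assumed blended $/1K tokens
tokens_per_request = 1_500      # assumed prompt + completion size
requests_per_user_per_day = 20  # assumed feature usage
users = 50_000                  # assumed audience after a viral spike

daily_cost = (users * requests_per_user_per_day
              * tokens_per_request / 1_000 * price_per_1k_tokens)
print(f"${daily_cost:,.0f}/day")  # → $15,000/day at these assumptions
```

Unlike a classic feature, this line item scales linearly with usage, so a weekend of virality at these (made-up) numbers costs $30K — the reason inference-cost unit economics belongs in the roadmap, not the postmortem.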
Last 14 days
3 Sparks completed
Start your own AI learning story
Sign in with Google. Pick the AI topics you want to grow in. Get personalized 5-minute Sparks every day.
Start with LearnAI →