Everyone's using AI to work faster. Generate that document in seconds. Write that code in minutes. Crank out reports at lightning speed. We're all drowning in productivity gains.
But here's the problem: Anthropic's research on AI-assisted coding found that participants using AI scored 17% lower on comprehension tests than those who coded by hand. That's nearly two letter grades worse. They were faster, sure—but they learned almost nothing. And this lesson applies beyond coding tasks.
What if the real cost of AI isn't the subscription fee? What if it's your brain?
The Speed Trap
The promise is seductive. AI tools can write your documents, generate your code, research any topic in seconds. You're 10x more productive. Your output has never been higher. You're shipping faster than ever.
Then reality hits.
You're in a meeting, and someone asks you to explain that document you wrote—the one AI generated in 30 seconds. You stumble. The sentences are grammatically perfect but say too much and too little at the same time. No references, so you can't even check if the core claim is true. You've just shared AI slop with your team: verbose documents that pad single sentences into whole paragraphs of near-synonyms, grammatically correct but factually suspect.
Or worse: production breaks at 2am. You need to debug the code you "wrote" last week. You stare at it. The logic is clean, the patterns are solid. You have absolutely no idea what it does. The panicked realization: you can't fix what you don't understand.
Anthropic published a study this week that proved what many are discovering the hard way—using AI to produce content doesn't mean you're learning anything. The 17% comprehension gap isn't just a statistic. It's the difference between understanding your work and faking it.
Who's Really Getting Smarter Here?
Here's what the research actually found: using AI didn't guarantee lower scores. How you used it mattered.
The participants who retained knowledge didn't just copy-paste AI output. They asked follow-up questions. They requested explanations. They posed conceptual questions while working independently. They used AI to build comprehension and intent, not to replace them.
These are the AI natives—people who understand that the tool isn't magic, it's a collaborator.
If you're early in your career, this is critical. You're supposed to be building foundations—learning frameworks, absorbing concepts, understanding how things work. Delegate all that to AI, and you're building a career on quicksand. You'll be fast now and incompetent later.
If you're experienced, the trap is different but just as dangerous. You can't articulate the concepts in documents with your name on them. You can't explain your code in code reviews. You can't debug your own work. Your expertise—the thing that makes you valuable—atrophies while your output multiplies.
The uncomfortable question: Are you using AI to amplify your skills, or are you delegating your learning away?
The Smart PhD Student Framework
I shared the Anthropic research with my team, and it sparked something: we needed a better mental model for AI.
Here's what works: Treat AI like a smart PhD student. Brilliant at research and writing. Great at finding patterns and generating drafts. Also prone to confidently going down completely wrong paths.
You wouldn't hand your PhD student a task and blindly accept whatever they produce. You'd collaborate. Review their approach. Challenge their assumptions. Ask them to explain their reasoning. Learn from their research while applying your judgment.
That's exactly how to use AI—not as a replacement for thinking, but as a tool that accelerates learning while multiplying productivity.
Example 1: Code Understanding with Learning Modes
Claude Code has [three output styles](https://code.claude.com/docs/en/output-styles) that illustrate this perfectly:
Default mode completes your software engineering tasks efficiently. Fast. Clean. Done. You ship code quickly, but you learn nothing about why it works.
Explanatory mode provides "Insights" between completing tasks—educational moments that explain implementation choices and codebase patterns. You still get the work done, but now you understand the approach and you can make corrections. You're steering and learning the "why" behind the code.
Learning mode takes it further. It's collaborative, learn-by-doing. Claude shares Insights while coding, but also asks you to contribute strategic pieces yourself. It adds `TODO(human)` markers in the code for you to implement. You're not just watching the work happen—you're actively participating in it.
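Here's a rough idea of what that hand-off can look like. The example below is hypothetical (the class and the gap left for you are invented for illustration, not literal Claude Code output); the point is the `TODO(human)` marker that reserves a strategic decision for you:

```python
# Hypothetical sketch: the AI has scaffolded a sliding-window rate limiter
# and reserved the core policy decision for the human via a TODO(human) marker.
import time


class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float) -> None:
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps: list[float] = []

    def allow(self) -> bool:
        now = time.monotonic()
        # Keep only the timestamps that still fall inside the sliding window.
        self.timestamps = [t for t in self.timestamps if now - t < self.window_seconds]
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        # TODO(human): decide what happens at the limit: reject outright,
        # queue the request, or back off with jitter? Implementing this piece
        # yourself means you own the trade-off and can defend it later.
        raise NotImplementedError("strategic decision left to the human")
```

You fill in the marked gap, Claude reviews your choice, and the hardest judgment call in the file is one you actually made.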
This works for developers at all levels. Junior engineers can benefit from interacting and coding while learning patterns and best practices. Senior engineers can understand Claude's approach well enough to spot problems, make corrections, and improve the solution. Everyone can explain what the code does and debug it later.
The difference between these modes is the difference between delegation and collaboration.
Example 2: Debate and Challenge (Especially for Juniors)
When you are learning a new concept using AI tools, think about how you'd work with a smart PhD researcher in your field. You don't just accept everything they say. You debate. You bounce ideas back and forth. You challenge their assumptions and they challenge yours. You listen carefully to their perspective, then iterate on your own mental models.
That's exactly how to supercharge your learning with AI.
When AI explains a concept, don't just absorb it—question it. "Why this approach instead of that one?" "What are the tradeoffs?" "Where does this break down?" When it suggests a pattern, challenge it: "What if we did the opposite?" "How does this apply to edge cases?"
This is how you develop your own mental models and controversial takes, not just regurgitate what the AI gives you. You're not learning facts—you're building judgment. You're developing the ability to think critically about the domain, not just execute tasks in it.
For junior engineers and product managers, this is the difference between becoming someone who can only use the tools versus someone who understands when the tools are wrong. The best practitioners aren't the ones who accept AI output at face value. They're the ones who've developed strong enough mental models to know when to push back.
When AI tools don't give you what you consider a good answer, don't throw the baby out with the bathwater by quitting and going back to your traditional way of doing everything yourself. You'd be missing the opportunity to learn from and leverage these incredible tools.
Example 3: Research Without Hallucinations
For research and writing, I've found a two-step approach that kills hallucinations while building real understanding:
Step 1: Search phase. Use AI the way you'd use Google: ask for information on a topic (I find Perplexity really good at this), get a list of links, and skim for relevance. Fast triage to find what matters.
Step 2: Deep dive with RAG. Use RAG tools (NotebookLM, Copilot Notebooks, or Claude Projects) to synthesize and build the report from only the sources you've verified.
Why this works: The AI only uses sources you've approved. Hallucinations drop dramatically because it's not inventing facts; it's synthesizing information you've already validated. You're saving time by not reading and synthesizing 10-20 web pages and PDFs yourself, but you're still controlling the inputs and learning the new concepts as the AI explains them.
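If you'd rather wire this up yourself than use NotebookLM or Claude Projects, here's a minimal sketch of the same idea with the Anthropic Python SDK. The model name and source excerpts are placeholders; the only thing that matters is that the prompt contains nothing but sources you've already vetted:

```python
# Minimal sketch of "synthesize from approved sources only": the model sees
# nothing beyond the documents you have already read and vetted.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Placeholder sources: swap in the vetted excerpts from your search phase.
approved_sources = {
    "vendor-whitepaper.pdf": "Vetted excerpt goes here.",
    "team-postmortem.md": "Another vetted excerpt goes here.",
}

source_block = "\n\n".join(
    f'<source name="{name}">\n{text}\n</source>'
    for name, text in approved_sources.items()
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: use whatever model you have access to
    max_tokens=1500,
    system=(
        "Write a report using ONLY the sources provided. Cite the source "
        "name for every claim. If the sources don't support a claim, say so "
        "instead of filling the gap."
    ),
    messages=[{"role": "user", "content": f"{source_block}\n\nWrite the report."}],
)

print(response.content[0].text)
```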
You end up with a valuable document and actual understanding. You can have productive conversations with your peers because you learned something, not just because you generated something.
Your 30-Day Experiment
Here's the test: Use AI differently for 30 days and measure what actually matters.
If you're junior: Switch to Learning Mode. Use Claude Code's Learning or Explanatory modes for one project. Use ChatGPT Study Mode when learning new concepts. Force yourself to engage with the AI's explanations, not just its output. The goal isn't speed—it's understanding.
If you're experienced: Adopt the challenge protocol. Question anything that looks "okay but off." Always check the references. Verify that sources are actually trusted and correct for the topic. When something feels wrong, dig deeper instead of accepting it.
For managers: Watch for team members who can't explain their AI-generated work in reviews. That's not a performance problem - it's a learning problem that compounds over time.
The measurements that matter:
- Do you know what happens if a key assumption is wrong?
- Can you explain what the code is doing?
- Will you be able to debug it later?
If the answer to any of these is "no," you're doing it wrong. You've optimized for output instead of outcome. You're faster but dumber.
The win-win: When you get this right, you produce valuable documents and code while actually learning. You create work you can defend. You have productive conversations with peers because you understand the topic. Your expertise grows alongside your output.
That's the real productivity gain—not just doing more, but becoming more capable while you do it.
The Real Productivity Unlock
The companies that win with AI won't be the ones that generate the most content the fastest. They'll be the ones where people get smarter alongside the tools.
AI tools have gotten dramatically better. They hallucinate less. They include references. They can be concise when prompted correctly. But none of that matters if you're just using them to avoid learning.
The research is clear: every time you use AI, you're making a choice. Delegation makes you faster and dumber. Collaboration makes you faster and smarter.
The race isn't to produce the most content. It's to build the sharpest judgment. AI doesn't have to make you dumber. It can make you faster and smarter. But only if you treat it like a sparring partner, not a shortcut.
The contrarian truth: The productivity gains are real and sustainable - but only if you redesign workflows to preserve learning loops.