AI makes you faster. Anthropic's research proves it makes you dumber.

Everyone's using AI to work faster. Generate that document in seconds. Write that code in minutes. Crank out reports at lightning speed. We're all drowning in productivity gains.

But here's the problem: Anthropic's research on AI-assisted coding found that participants using AI scored 17% lower on comprehension tests than those who coded by hand. That's nearly two letter grades worse. They were faster, sure—but they learned almost nothing. And this lesson applies beyond coding tasks.

What if the real cost of AI isn't the subscription fee? What if it's your brain?

The Speed Trap

The promise is seductive. AI tools can write your documents, generate your code, research any topic in seconds. You're 10x more productive. Your output has never been higher. You're shipping faster than ever.

Then reality hits.

You're in a meeting, and someone asks you to explain that document you wrote—the one AI generated in 30 seconds. You stumble. The sentences are grammatically perfect but say too much and too little at the same time. No references, so you can't even check if the core claim is true. You've just shared AI slop with your team: verbose documents that spin whole paragraphs out of single sentences and near-synonyms, grammatically correct but factually suspect.

Or worse: production breaks at 2am. You need to debug the code you "wrote" last week. You stare at it. The logic is clean, the patterns are solid. You have absolutely no idea what it does. The panicked realization: you can't fix what you don't understand.

Anthropic published a study this week that proved what many are discovering the hard way—using AI to produce content doesn't mean you're learning anything. The 17% comprehension gap isn't just a statistic. It's the difference between understanding your work and faking it.

Who's Really Getting Smarter Here?

Here's what the research actually found: using AI didn't guarantee lower scores. How you used it mattered.

The participants who retained knowledge didn't just copy-paste AI output. They asked follow-up questions. They requested explanations. They posed conceptual questions while working independently. They used AI to build comprehension, not to replace it.

These are the AI natives—people who understand that the tool isn't magic, it's a collaborator.

If you're early in your career, this is critical. You're supposed to be building foundations—learning frameworks, absorbing concepts, understanding how things work. Delegate all that to AI, and you're building a career on quicksand. You'll be fast now and incompetent later.

If you're experienced, the trap is different but just as dangerous. You can't articulate the concepts in documents with your name on them. You can't explain your code in code reviews. You can't debug your own work. Your expertise—the thing that makes you valuable—atrophies while your output multiplies.

The uncomfortable question: Are you using AI to amplify your skills, or are you delegating your learning away?

The Smart PhD Student Framework

I shared the Anthropic research with my team, and it sparked something: we needed a better mental model for AI.

Here's what works: Treat AI like a smart PhD student. Brilliant at research and writing. Great at finding patterns and generating drafts. Also prone to confidently going down completely wrong paths.

You wouldn't hand your PhD student a task and blindly accept whatever they produce. You'd collaborate. Review their approach. Challenge their assumptions. Ask them to explain their reasoning. Learn from their research while applying your judgment.

That's exactly how to use AI—not as a replacement for thinking, but as a tool that accelerates learning while multiplying productivity.

Example 1: Code Understanding with Learning Modes

Claude Code has [three output styles](https://code.claude.com/docs/en/output-styles) that illustrate this perfectly:

Default mode completes your software engineering tasks efficiently. Fast. Clean. Done. You ship code quickly, but you learn nothing about why it works.

Explanatory mode provides "Insights" between completing tasks—educational moments that explain implementation choices and codebase patterns. You still get the work done, but now you understand the approach and you can make corrections. You're steering and learning the "why" behind the code.

Learning mode takes it further. It's collaborative, learn-by-doing. Claude shares Insights while coding, but also asks you to contribute strategic pieces yourself. It adds `TODO(human)` markers in the code for you to implement. You're not just watching the work happen—you're actively participating in it.
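
Here's a rough illustration of what that looks like in practice. The file and function are hypothetical; the `TODO(human)` marker is the convention Learning mode actually uses:

```python
# retry.py: hypothetical snippet from a Learning-mode session.
# Claude drafts the scaffolding and explains the pattern, then leaves
# a TODO(human) marker for the piece it wants you to reason through.

import time


def fetch_with_retry(fetch, max_attempts=3):
    """Call fetch() and retry transient failures with a backoff delay."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # TODO(human): pick the backoff policy. Why exponential rather
            # than fixed? What ceiling keeps worst-case latency acceptable?
            time.sleep(2 ** attempt)
```

The specific code doesn't matter; the marker does, because it forces you to make and defend a design decision instead of skimming past it.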

This works for developers at all levels. Junior engineers benefit from writing code alongside Claude while absorbing patterns and best practices. Senior engineers can understand Claude's approach well enough to spot problems, make corrections, and improve the solution. Everyone can explain what the code does and debug it later.

The difference between these modes is the difference between delegation and collaboration.

Example 2: Debate and Challenge (Especially for Juniors)

When you are learning a new concept using AI tools, think about how you'd work with a smart PhD researcher in your field. You don't just accept everything they say. You debate. You bounce ideas back and forth. You challenge their assumptions and they challenge yours. You listen carefully to their perspective, then iterate on your own mental models.

That's exactly how to supercharge your learning with AI.

When AI explains a concept, don't just absorb it—question it. "Why this approach instead of that one?" "What are the tradeoffs?" "Where does this break down?" When it suggests a pattern, challenge it: "What if we did the opposite?" "How does this apply to edge cases?"

This is how you develop your own mental models and controversial takes, not just regurgitate what the AI gives you. You're not learning facts—you're building judgment. You're developing the ability to think critically about the domain, not just execute tasks in it.

For junior engineers and product managers, this is the difference between becoming someone who can only use the tools versus someone who understands when the tools are wrong. The best practitioners aren't the ones who accept AI output at face value. They're the ones who've developed strong enough mental models to know when to push back.

When AI tools don't give you what you think is a good answer, don't throw the baby out with the bathwater. Don't quit and go back to your traditional way of working, doing everything yourself; you'd be missing the opportunity to learn and to leverage these incredible tools.

Example 3: Research Without Hallucinations

For research and writing, I've found a two-step approach that kills hallucinations while building real understanding:

Step 1: Search phase. Use AI the way you'd use Google to triage sources: ask for information on a topic (I find Perplexity really good at this), get a list of links, and skim them for relevance. Fast triage to find what matters.

Step 2: Deep dive with RAG. Use RAG tools (NotebookLM, Copilot Notebooks, or Claude Projects) to synthesize and build the report from only the sources you've verified.

Why this works: the AI only uses sources you've approved. Hallucinations drop dramatically because it's not inventing facts—it's synthesizing information you've already validated. You save time by not reading and synthesizing 10-20 web pages and PDFs yourself, but you still control the inputs and learn the new concepts as the AI explains them.
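
For the curious, here is a minimal sketch of what the "approved sources only" constraint looks like if you wire it up yourself. The directory layout is made up, and `call_llm` is a hypothetical stand-in for whatever model you use; tools like NotebookLM, Copilot Notebooks, and Claude Projects handle this step for you:

```python
# Build a prompt that grounds the model in only the sources you've verified.
from pathlib import Path


def build_grounded_prompt(question: str, source_dir: str) -> str:
    """Concatenate approved sources and instruct the model to stay within them."""
    sources = [
        f"--- {path.name} ---\n{path.read_text()}"
        for path in sorted(Path(source_dir).glob("*.txt"))
    ]
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not cover something, say so instead of guessing.\n\n"
        + "\n\n".join(sources)
        + f"\n\nQuestion: {question}"
    )


# report = call_llm(build_grounded_prompt("What are the tradeoffs of X?", "verified_sources/"))
```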

You end up with a valuable document and actual understanding. You can have productive conversations with your peers because you learned something, not just because you generated something.

Your 30-Day Experiment

Here's the test: Use AI differently for 30 days and measure what actually matters.

If you're junior: Switch to Learning Mode. Use Claude Code's Learning or Explanatory modes for one project. Use ChatGPT Study Mode when learning new concepts. Force yourself to engage with the AI's explanations, not just its output. The goal isn't speed—it's understanding.

If you're experienced: Adopt the challenge protocol. Question anything that looks "okay but off." Always check the references. Verify that sources are actually trusted and correct for the topic. When something feels wrong, dig deeper instead of accepting it.

For managers: Watch for team members who can't explain their AI-generated work in reviews. That's not a performance problem - it's a learning problem that compounds over time.

The measurements that matter:

- Why not alternative X?
- What happens if assumption Y is wrong?
- Can you explain what the code is doing?
- Will you be able to debug it later?

If the answer to any of these is "no," you're doing it wrong. You've optimized for output instead of outcome. You're faster but dumber.

The win-win: When you get this right, you produce valuable documents and code while actually learning. You create work you can defend. You have productive conversations with peers because you understand the topic. Your expertise grows alongside your output.

That's the real productivity gain—not just doing more, but becoming more capable while you do it.

The Real Productivity Unlock

The companies that win with AI won't be the ones that generate the most content the fastest. They'll be the ones where people get smarter alongside the tools.

AI tools have gotten dramatically better. They hallucinate less. They include references. They can be concise when prompted correctly. But none of that matters if you're just using them to avoid learning.

The research is clear: every time you use AI, you're making a choice. Delegation makes you faster and dumber. Collaboration makes you faster and smarter.

The race isn't to produce the most content. It's to build the sharpest judgment. AI doesn't have to make you dumber. It can make you faster and smarter. But only if you treat it like a sparring partner, not a shortcut.

The contrarian truth: The productivity gains are real and sustainable - but only if you redesign workflows to preserve learning loops.

The Enterprise Release Paradox: Why "Agile" Doesn't Mean Bombarding Customers

Enterprise infrastructure and platform teams are destroying customer trust in the name of agility. They've conflated 'ship fast internally' with 'release frequently to customers'—and it's killing retention.

The Fundamental Misunderstanding

Product teams conflating "agile development" with "frequent customer releases" are destroying value in enterprise software, particularly for infrastructure products that serve as foundational components in complex systems.

Agile is about internal development velocity and responsiveness—not about pushing every iteration to production customers.

When your enterprise platform releases monthly, enterprises face:

  • Unplanned testing cycles
  • Integration validation overhead
  • Deployment risk across critical workloads
  • Opportunity cost as engineering teams handle upgrades instead of building features

Enterprise Customers Don't Want You to Move Fast and Break Things

Enterprise infrastructure products succeed on a fundamentally different premise than consumer products: customers expect you to move consistently forward while ensuring nothing breaks in the process. Reliability, predictability, and backward compatibility aren't just features—they're the entire value proposition. When an image filter changes overnight, users shrug and move on. When a critical feature changes between releases without notice, enterprise customers don't lose a feature—they lose money. Their teams spend unplanned hours troubleshooting, workflows break, and the reliability they depend on evaporates.

This means breaking things isn't an acceptable trade-off for innovation. When you do need to shift things around—and you will—you must make the transition as easy and foolproof as possible. Provide clear migration paths, comprehensive documentation, and automated tools that do the heavy lifting. Give customers a way to roll back to previous behavior if they need to. This escape hatch isn't admitting defeat; it's acknowledging that you can't predict every edge case in their production environment.

This approach preserves the product's value. Customers can't confidently adopt new capabilities when they're uncertain about stability. They delay upgrades, defer feature adoption, and ultimately extract less value from their investment. Steady, reliable progress builds trust that enables faster adoption of genuinely valuable innovations.

I learned this firsthand as a teenager working in IT. Eager to provide "value," I upgraded the networking software at one of my clients—an accountant—with a major version bump to get them the latest features. The entire network stopped working immediately. For more than 24 hours, no one could work while we scrambled to restore the previous version. No one had asked me to do this upgrade. I thought I was being helpful. Instead, I destroyed value—the client lost a full day of productivity, and I destroyed their trust. That was the last time they called me. I had moved fast and broken everything.

A Better Framework: LTS, Semantic Versioning, and Two-Track Releases

The solution isn't to slow down development—it's to decouple internal agility from customer-facing stability through three complementary mechanisms.

Semantic Versioning makes the stability contract explicit through clear version numbering (major.minor.patch).

- Patch releases: Version 2.3.1 → 2.3.2 (Safe: only bug fixes, zero breaking changes; deploy immediately)

- Minor releases: Version 2.3.2 → 2.4.0 (Evaluate: new features and endpoints while maintaining backward compatibility; plan testing)

- Major releases: Version 2.4.0 → 3.0.0 (Plan carefully: breaking changes)

This transparency lets enterprise teams instantly understand the risk level of any upgrade and make informed decisions about timing and resource allocation. Because patch releases require less extensive testing, they can be paired with automatic update mechanisms, allowing security fixes to be deployed promptly without manual intervention or lengthy testing cycles.
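
As a rough sketch of how explicit that contract can be, here is a toy upgrade gate that classifies risk purely from the version numbers (the thresholds and wording are illustrative, not from any particular product):

```python
# Classify an upgrade by semantic-versioning risk so teams (or automation)
# can decide how much testing it needs before rollout.

def upgrade_risk(current: str, target: str) -> str:
    cur = tuple(int(part) for part in current.split("."))
    new = tuple(int(part) for part in target.split("."))
    if new <= cur:
        return "no upgrade needed"
    if new[0] > cur[0]:
        return "major: breaking changes, plan the migration carefully"
    if new[1] > cur[1]:
        return "minor: backward compatible, schedule a testing window"
    return "patch: bug and security fixes only, safe to auto-apply"


print(upgrade_risk("2.3.1", "2.3.2"))  # patch: safe to auto-apply
print(upgrade_risk("2.3.2", "2.4.0"))  # minor: schedule testing
print(upgrade_risk("2.4.0", "3.0.0"))  # major: plan carefully
```

Pair a gate like this with an automatic update mechanism and patch releases can flow out unattended while minor and major versions open a planning conversation.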

Long-Term Support (LTS) Releases establish a multi-year commitment to specific release lines. Instead of forcing customers to upgrade constantly, you designate certain releases as LTS versions with guaranteed support for 2-5 years. Customers can standardize on an LTS release and plan upgrades around their business cycles, not your development sprints. This removes the treadmill dynamic where staying current requires constant attention and testing resources. 

Two-Track Releases make it possible to be agile and predictable at the same time: a Preview track with monthly or bi-weekly releases for early adopters who want cutting-edge features, and a Stable track with annual or bi-annual releases paired with LTS designations, plus patch releases on demand for security hotfixes. Customers choose their risk tolerance—early adoption with Preview, or Stable versions that won't change unexpectedly (for years, in the case of LTS). Customers can also leverage both tracks strategically: running Preview releases in non-production environments to see what's coming, testing changes thoroughly, and making informed decisions about when to take minor or major upgrades in production. Validating changes in advance minimizes surprises during production upgrades and gives teams time to adapt workflows and prepare for breaking changes without emergency scrambling.

Reliability as Competitive Advantage

Enterprise customers using infrastructure software products don't want your latest features—they want infrastructure that works predictably for years. When your product releases break existing deployments, teams lose trust in the entire company. When your changes break customer workflows, you've just created a compelling reason for them to evaluate competitors.

Stability isn't a constraint on innovation—it's the foundation that makes innovation possible. It enables customers to confidently build on your platform knowing the ground won't shift.

Implementing this framework requires organizational alignment and a reframing of success metrics. Internally, this isn't about "slowing down"—it's about protecting customer value and building sustainable competitive advantage through trust. The metrics that matter shift dramatically: deployment frequency becomes less relevant; what matters now is customer upgrade rates and retention. Are customers actually adopting your stable releases? Are they staying longer? Are they expanding within your ecosystem?  

This shift requires alignment across three critical functions. Engineering needs to understand that backward compatibility and testing rigor are a core part of the value prop, not overhead. Sales needs messaging that emphasizes predictability and long-term partnership over feature velocity. Customer success teams need to shift from managing upgrade crises to proactively planning upgrade windows that fit customer timelines. When all three are aligned around retention and value extraction rather than churn and replacement, the entire organization pulls toward sustainable growth. 

The Path Forward

  1. Separate development cadence from release cadence
  2. Design releases around customer adoption timelines, not development sprints
  3. Invest heavily in backward compatibility and migration tools
  4. Measure success by customer retention and platform adoption, not feature delivery speed

Enterprise software succeeds when it becomes invisible infrastructure that teams can depend on. The moment customers start worrying about your release schedule is the moment you've lost their trust.

Your customers don't want to be your QA department. Give them the stable foundation they're paying for, and save the experimentation for customers who explicitly opt in.

An introduction to Solid, an evolution of the Web

Solid is a Web 3.0 protocol whose main goal is to reshape the relationship between users and their data. Solid is an evolution of the web as we know it, decoupling identities, data and applications. The goal of this new data architecture is to enable new user experiences where the data is organized around and under the control of the individual, with native consent and access control out of the box.

Solid is based on the following key concepts:

  • Decentralization: unlike traditional data architectures where data is stored around applications, creating silos of information, Solid introduces the concept of a Pod (Personal Online Datastore) where data is stored and organized around the identity who controls that data. This allows users to have fine-grained control over where the data is stored and who can access it.

  • Interoperability: data in Pods is stored using a standard and open format called RDF (Resource Description Framework) and can be accessed via common web protocols (HTTPS). This ensures that different applications can read and write data to the same Pod using a single universal API, promoting interoperability. (See the sketch after this list for what reading from a Pod looks like in practice.)

  • WebIDs: Web Identifiers are unique URIs that identify an entity in Solid. Authentication builds on OIDC through the Solid-OIDC extension, allowing frictionless integration with existing IdPs and a seamless transition to Solid apps for users with existing platform identities.

  • Solid Pods: Pods are secure data stores where users keep their data. A user can have multiple Pods; all of that data is discoverable by applications through the user's WebID and made available with the individual's consent. From a user's and an application's perspective, the Pod is the place where all of the user's information resides, but from an implementation perspective there can be multiple Pods, each storing different information based on regulations and requirements. For example, a user might have a banking Pod with their credit card transactions, a health Pod with their medical records, and a Photos Pod for family pictures, each with a different hosting provider, all linked to the user's WebID.

  • Privacy and security: data is stored in fewer locations and under the user’s control, minimizing data duplication, the risk of data breaches and unauthorized access to personal information.

  • An evolution, not a revolution: Solid is an evolution of the web as it sits on top of existing technologies like HTTP, Web Servers, OIDC for authentication and RDF for data formatting. This makes it easier to integrate with existing applications and services, while benefiting from 30 years of technology maturity in terms of security, robustness, scalability and performance.

  • Open specification: Solid, being a W3C open specification, prevents vendor lock-in and sets a strong foundation for a thriving ecosystem of developers and vendors in the long term.
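
To make the "standard web protocols" point concrete, here is a minimal sketch of reading a public resource from a Pod. The URL is made up, and authenticated access (via Solid-OIDC) is out of scope here:

```python
# Read a (hypothetical) public resource from a Solid Pod over plain HTTPS.
# Because Pods speak standard web protocols and serve RDF, any HTTP client
# can read data the owner has chosen to make accessible.

import urllib.request

POD_RESOURCE = "https://alice.example-pod.org/public/profile"  # made-up URL

request = urllib.request.Request(
    POD_RESOURCE,
    headers={"Accept": "text/turtle"},  # ask for RDF in Turtle syntax
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # RDF triples describing the resource
```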

In a world where innovation stagnates under extractive data models, Solid proposes a new data architecture that creates value for consumers and producers of data. A way to organize information in a more equitable and scalable fashion for a world that lives and thrives on data sharing.


The Role of a Product Manager: Passion for Problems, Not Technology

As a product manager, I was once criticized for not being passionate enough about a specific technology. This feedback took me by surprise and prompted me to reflect on the role of technology in product management.

This technology had a fervent following, with a vision of transforming the world. You either 'believed' in it or you didn't, almost like a religion. Yes, I'm talking about blockchain. In this case, a particular decentralized protocol based on Ethereum underpinned the main product, and I was told I was not the best product manager I could be if I didn't "believe" in it.

I don't have anything against blockchain; on the contrary, I am a firm believer in Web 3.0 and a future where individuals control their data, using it to their benefit securely and privately. I think blockchain is a great technology for solving many problems. But I am not married to it, or to any specific technology, and I'm not fanatical about it either. As a product person, I'm fanatical about users' problems, not about one technology or another. Let's talk about why that's important for a product manager.

The feedback came from a technically adept person. Sometimes, technical experts can interpret criticism of a technology's suitability for a specific problem as a critique of the technology itself.

In product development, the opposite issue can arise: engineers might become overly invested in their preferred technologies. With deep knowledge of their 'hammer,' everything starts to look like a nail, and they attempt to solve every problem with their preferred tool. It makes sense: it takes a long time to master a particular tool or technology, and once you do, you don't want to throw that away. And the truth is you don't have to. To a certain degree it's okay for engineers to apply this logic; the cost of researching and learning a new technology for a particular problem can be high compared to using the tools you already know, even if they aren't the most efficient ones for the job. You have to weigh the opportunity cost of hunting for a better tool against just getting the job done with the tool you know.

For a product manager, the approach should be different: you should not be tied to any specific tool or technology. Technologies evolve, and there is always something better on the horizon, driving progress.

As a product manager, your commitment should be to the problem and the customers it affects. Your quest is to identify demand. Your job as a PM is not to build something shiny to attract demand but to identify demand and build a solution for it. Product managers must thrive in the problem space: understanding its nuances, the market landscape, and the key players—the users—and connecting the dots between them. That's the input we use to put together a product strategy, product roadmap, and go-to-market strategy.

Products exist in the problem space, while engineering resides in the solution space. This separation is essential. So where do tools and technology belong? They belong to the solution space, to engineering, which means product people should detach themselves from any specific technology when it comes to understanding the difficulties customers face and bringing them a solution. A product manager, a technical one in particular, needs to understand the technology that underpins their product, but not be married to it. Being attached at the hip to a particular technology will only constrain your ability as a product manager to articulate and visualize a potential solution: if you are tied to a technology, the solution will be limited by that technology's capabilities, and you'll struggle to articulate the optimal answer to the problem at hand.

PMs should be passionate about identifying customer problems. Engineers should focus on crafting elegant solutions, choosing the right tools for the job.

Urgency is the lifeblood of startups

At the heart of every successful startup are a few fundamental truths: a relentless pursuit of goals, a team driven by mission over money, and a culture that thrives on stepping out of comfort zones.

The Essence of Urgency

Startups, inherently unstable and unproven, operate in a constant state of flux. Without the safety net of stable revenue, they burn through capital in the quest to deliver something revolutionary—a product that not only attracts customers but also retains enough value to fuel further innovation and growth. This journey often involves securing multiple rounds of investment to extend their runway, each tranche designed to propel them toward the next milestone.

The clock is always ticking for these ventures. They're in a perpetual race to create and capture value before their resources dry up. Every step is a leap towards uncharted territory, aiming to solve real-world problems and achieve product-market fit. In this relentless pursuit, there's no luxury of pause; learning and evolving must happen on the fly. Minor victories are acknowledged, but the focus swiftly returns to the overarching mission.

Imagine the startup journey as a marathon with an added twist: participants must invent their running gear en route, all while keeping pace. This race is not just about speed but also innovation—developing new strategies, technologies, and methodologies in real-time. And in this competitive landscape, only the most adaptive and cohesive teams stand a chance at victory.

Team dynamics are crucial. Each member must be prepared to go the extra mile, compensating for the inevitable gaps and inefficiencies with sheer perseverance. Egos and complaints have no place here; the collective goal transcends individual ambitions. Success hinges on the ability to innovate collaboratively, crafting novel solutions and forging paths previously untraveled.

In essence, the startup world demands more than just participation; it requires a commitment to constant evolution, a willingness to embrace the unknown, and an unyielding drive to push beyond conventional limits.