The Masters We’re Losing

Josh Anderson wrote an article that gained significant attention on both LinkedIn and Substack:

“I Went All-In on AI. The MIT Study Is Right.” (95% of corporate AI initiatives FAIL.)

It’s a very compelling read. I’ve experimented extensively with AI in both modes he describes:

“When AI helps you write better and faster while maintaining your voice, that’s augmentation. When AI writes for you in a voice that isn’t yours, that’s abdication.”

Augmentation can be extremely useful. When I am in the zone, on a good day, or with an interesting case, I feel like a mountain climber reaching peaks above my league: that feeling when you glimpse a plateau that was previously unknown to you. (This is the 5% group MIT talks about.)

With augmentation, you're developing skills alongside the AI. You maintain critical thinking, verify outputs, and build genuine understanding. The AI is like a very capable assistant who does the heavy lifting while you steer.

Abdication gets really scary when you understand what you are trading away.

The primary issue with abdication is illustrated very well in Josh’s article. To add to his point: when you outsource critical thinking to a Large Language Model, you will not catch it when it starts to hallucinate or give bad advice. This is not something that will go away with LLMs, and it is not an issue if you are in the 5% group.

Try the demonstration yourself: if you are skilled in, say, Python or Terraform, pretend you know nothing about the topic. Ask your LLM (I use Claude Code or Claude) to build something for you and simply observe the process. Run the code, complain that it is not working, and let the LLM figure it out for you. In the beginning I was amazed. But take the entire trip down the rabbit hole, and it eventually starts derailing: proposing bad solutions, dangerous solutions, really outdated solutions. Usually, though, the end result is indeed a working solution.
Sometimes the LLM won’t be able to solve a problem at all, and it will start trying to convince you to drop the task with motivational speeches such as «This is not worth doing», «We are wasting our time», «It cannot be solved».

However, if you start correcting your LLM, saying for example “This solution is implemented using outdated packages” or “Let’s make this implementation scale better using a Redis cache”, Claude or your preferred LLM will follow along: “Oh, you are absolutely right, these packages are outdated” or “Using a Redis cache is a good idea”. Of course, you can only offer those corrections if you carry the knowledge yourself. This is why augmentation is awesome while abdication leads to failure.
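To make the Redis example concrete, here is a minimal sketch of the kind of caching layer you might steer the model toward. Everything in it is my own illustration, not code from Josh’s article or from an actual LLM session: it assumes the redis-py package, a local Redis server, and a hypothetical expensive fetch_user_profile function.

```python
# Minimal sketch: cache expensive lookups in Redis (assumes redis-py is installed
# and a Redis server is running locally; fetch_user_profile is hypothetical).
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_profile(user_id: int) -> dict:
    # Stand-in for an expensive call (database query, external API, ...).
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user_profile(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)                     # cache hit: skip the expensive call
    if cached is not None:
        return json.loads(cached)
    profile = fetch_user_profile(user_id)   # cache miss: do the real work
    r.setex(key, 300, json.dumps(profile))  # keep the result for 5 minutes
    return profile
```

The code itself is beside the point. What matters is that you can only propose a change like this, and verify that the model implemented it sanely, if you already understand caching yourself. That is augmentation.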

Having experienced both abdication and augmentation myself, I found that his article reflects wisely on a topic that has concerned me for some time, and one I’m still pondering how to address.

Josh writes:

“The Masters We’re Losing

We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.”


Have you noticed this is already happening with kids today?


In Norwegian elementary schools, every child receives an iPad or PC from day one, and the device is the primary tool for teaching. With AI available 100% of the time the device is accessible (no exam mode is used on iPads or PCs in primary school), kids quickly see the possibilities. In practice, kids have access to AI for most of the school day.

Several students are already asking AI to answer all their teacher’s questions. Teachers can’t monitor each student’s screen constantly. Furthermore, most teachers are not that skilled in IT.

I fear we are already producing the next generation of lost masters. I have noticed some kids (and adults) gradually pulverize their own self-trust: it starts with «let’s just verify», moves on to «GenAI has better answers», and ends in «I am useless». I see smart kids doing this for various reasons, and it scares me that it happens in class.

Josh: «You can’t prompt your way to that knowledge. You can’t download that experience. You have to earn it. And if you’re letting AI do the work, you’re not earning anything except a dangerous dependency.»