The Great AI Deception That No One Is Talking About

By Ayush Prakash, BA, with Ran D. Anbar, MD

Over the past few months, articles have surfaced claiming that Anthropic’s AI chatbot Claude blackmailed and threatened its engineers to avoid being shut down. These stories quickly made headlines worldwide, painting a picture of rogue AI on the rise and warning of impending doom.

Hidden beneath all the headline hype and manufactured fear was the fact that these stories are nonsense.

Anthropic’s AI blackmailing its engineers wasn’t a real incident, as we’ve been led to believe. It happened inside a highly controlled test environment, which is very different from how people actually use AI in the real world.

As most people know, AI chatbots – like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini – generate text, images, videos, and more based on the user’s needs. In everyday use, they respond to ordinary conversations in which users aren’t intentionally trying to trick or confuse them.

In this media blitz about rogue AI, what was negligently left out of the headlines is that this “blackmail” behavior occurred because researchers orchestrated the situation. AI researchers deliberately test extreme scenarios by creating controlled test environments and writing bizarre prompts to see how the AI might respond. This helps them identify potential weaknesses or unexpected behaviors. However, because these tests are artificial, they don’t in any way reflect how AI chatbots perform – or might perform – in real-life interactions.

It’s like driving a monster truck over a small car to test its structure. This kind of event is highly unlikely to occur in real life – unless you’re very unlucky! –  and doesn’t reveal anything about the car’s everyday safety.  

Presenting this rigged AI behavior as proof of “rogue AI” is utterly misleading. More notable, however, is that stories like this help maintain the narrative that AI progress is still accelerating, even as it is reported to be slowing down.

Before we get to why progress is slowing down, why did it feel like the last handful of years were a whirlwind of AI progress? There are two main reasons for this:

The first is that Big Tech companies figured out a simple trick: give these systems more data and computing power, and the chatbots deliver more convincing and coherent results.

To provide their chatbots with the necessary data, Big Tech scrapes the internet for everything it can find: books, articles, blogs, social media posts, you name it. Then they use massive data centers – huge warehouses full of powerful computers – to process all that information and “train” Large Language Models (LLMs) on how to answer questions and have conversations.

Training happens by analyzing billions of texts to recognize patterns: how words and phrases are used together, what usually follows what, and how to generate sentences that fit together naturally and sound cogent.

When you ask a chatbot a question, it draws on the patterns learned from its training data to predict and produce the most likely, coherent response. The AI doesn’t understand the words the way a person does; it uses statistical probabilities calculated from all that data.
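To make the “statistical prediction” idea concrete, here is a deliberately tiny Python sketch. It merely counts which word follows which in a toy corpus and then predicts the most frequent follower; real LLMs use neural networks trained on billions of documents, but the underlying intuition (predict the next word from patterns in the data) is the same.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the next word from statistics."
# Real LLMs use neural networks trained on billions of documents,
# not simple word-pair counts over a one-line corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word is followed by each other word.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed next word, if any."""
    candidates = followers.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # 'on' -- the only word that ever followed "sat"
print(predict_next("the"))  # one of its observed followers (ties broken by first seen)
```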

This approach of giving LLMs lots of data and computing power worked pretty well – for a while. Since there was so much high-quality data readily available, Big Tech could easily pick this “low-hanging fruit.”

This allowed them to keep releasing new versions after the sudden shockwave of ChatGPT’s release in 2022, generating excitement, attracting billions in investment, and fueling bold – but shaky – predictions about AI’s rapid, exponential progress. 

This leads to the second reason the excitement persists: hype sells. In moments of enthusiasm about technological advancements, the press has an unhealthy history of amplifying promises of exponential progress, whether or not they’re real. 

The dot-com bubble of the late 1990s promised a seamless digital future. The crypto bubble in the late 2010s promised a decentralized revolution. In both cases, early optimism quickly turned into speculative mania, not just financially but socially and culturally.

And in both cases, the reckoning came only after massive losses, broken promises, and in some cases, prison sentences. The media helped fuel the fire on the way up, but rarely stuck around for the crash. 

When we weigh the two main ideas discussed against what the mainstream AI industry isn’t openly sharing, the illusion of unstoppable AI progress is starting to disappear.

Let’s return to training data, the bread and butter of generative AI. To keep making rapid progress, AI companies need to feed their models vast amounts of high-quality data continuously. But real-world sources like books, articles, and media aren’t growing fast enough to keep up with this demand. 

Only a handful of outlets have reported that AI companies are scrambling to address a looming shortage: the supply of fresh, high-quality training data is expected to run dry in 2026.

To keep their models progressing, some in the industry have proposed using “synthetic data”: data generated by AI models themselves rather than collected from real-world sources.

However, synthetic data is essentially a remix of existing information and adds little that is new. Early research warns of a phenomenon called “model collapse,” in which repeatedly training on synthetic data degrades model performance over time.

This can cause AI systems to hallucinate more and make more errors, undermining their reliability. Far from being a solution, relying heavily on synthetic data risks a slow-motion failure hidden beneath the hype.
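A rough intuition for why this can happen is captured by a deliberately oversimplified Python simulation (a two-parameter toy model rather than a neural network; the specific numbers are illustrative only): each new “generation” is fit solely to data sampled from the previous generation, and the variety in the data tends to wash out.

```python
import random
import statistics

# Deliberately oversimplified picture of "model collapse": each generation of
# the "model" is fit only to data generated by the previous generation.
# Real studies involve large neural networks; this only conveys the intuition.
random.seed(0)
real_data = [random.gauss(0.0, 1.0) for _ in range(5_000)]

mu, sigma = statistics.fmean(real_data), statistics.stdev(real_data)
print(f"generation  0: spread of data = {sigma:.3f}")

for generation in range(1, 51):
    # Sample a small synthetic dataset from the current model, then re-fit.
    synthetic = [random.gauss(mu, sigma) for _ in range(30)]
    mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread of data = {sigma:.3f}")

# Over many generations the spread typically drifts downward: the chain of
# models gradually loses the variety present in the original data.
```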

Another proposed solution is to allow the AI to take more time to think before answering, which has been shown to improve performance. 

However, letting AI take longer to process requires significantly more computing resources, which raises costs and energy consumption without guaranteeing breakthrough improvements. 
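A back-of-the-envelope sketch of the cost point (the price and token counts below are made-up placeholders, not any provider’s actual rates): generating a long internal chain of “reasoning” tokens before the final answer multiplies how much text the model must produce, and cost scales roughly with the number of tokens generated.

```python
# Hypothetical numbers for illustration only -- not real pricing.
PRICE_PER_1K_OUTPUT_TOKENS = 0.01  # dollars, placeholder value

def response_cost(reasoning_tokens: int, answer_tokens: int) -> float:
    """Rough cost if billing scales with the number of tokens generated."""
    return (reasoning_tokens + answer_tokens) / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

quick = response_cost(reasoning_tokens=0, answer_tokens=200)           # answer directly
deliberate = response_cost(reasoning_tokens=8_000, answer_tokens=200)  # "think" at length first

print(f"direct answer:       ${quick:.4f}")
print(f"extended 'thinking': ${deliberate:.4f} ({deliberate / quick:.0f}x the cost)")
```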

Yet on forums like Hacker News, defenders argue that when one approach hits diminishing returns, innovation always pushes through: first by curating data, then by generating synthetic data, and now by letting models take more time to think before answering. They keep repeating the old mantra, “scale solves all!”, without grappling with the current, more complex challenges. This reluctance to face the shifting limits only prolongs an illusion that’s already beginning to unravel.

This story of “unstoppable progress,” which feeds worries about rogue AI, is not just fiction; it looks like a strategic distraction, designed to preserve the illusion at a time when the bubble is deflating. The most fascinating part is that the people at the helm – the visionary Big Tech CEOs, celebrated researchers, and billionaire founders – aren’t simply riding the hype. They’re perfectly trapped in it.

Their stakeholders now expect exponential returns, and the AI leaders have made public promises so bold that they can’t walk them back without collapsing the entire industry. Billions in funding, personal reputations, and the cultural mythology of “the AI moment” are all hanging in the balance. Worse, the myth of exponential progress is feeding the arms race between the top two superpowers.

As Gary Marcus explains in his response to AI 2027, the relentless hype surrounding imminent AGI and exponential AI progress accelerates the global AI arms race. AI is widely recognized as a critical strategic technology of the 21st century, with the United States and China locked in fierce competition for dominance. The constant talk of AGI from Big Tech and the media intensifies this rivalry by fueling fear of missing out (FOMO) and spreading fear, uncertainty, and doubt (FUD) at the national level.

This dynamic pressures both countries to rapidly advance their AI programs, often at the expense of thorough oversight and caution. In this rapid and blind acceleration, safety and security measures may – most likely will – be missed, which is the biggest threat. 

This is the greatest deception in AI today, and almost no one is talking about it. The real threat isn’t a runaway AI breaking free from some secret lab. The real risk is us building AI to serve narrow interests, without ever asking what “working for us” really means. Yes, AI is powerful. But is raw power really the goal? Or is there something more we should be aiming for?

The danger isn’t that machines will outthink us. With the current generation of AI built on LLMs, that idea is not credible. The risk lies not in sudden AI superintelligence, but in our diminishing human intelligence.

Our reliance on these systems to do the thinking for us is clearly eroding our own critical reasoning and judgment. As AI becomes more embedded in our lives, the danger is that we’ll cede control over how we process information and make decisions. Ultimately, preserving human thought and autonomy must be our priority in this age of artificial intelligence. 

Ayush Prakash is the Creative Director of New Sapience and an author and podcaster focused on the cultural and societal impact of AI. He wrote AI for Gen Z and hosts the Ayush Prakash Podcast, a platform for critical, intergenerational dialogue on technology, identity, and the future.