source link: https://www.youtube.com/watch?v=iqVhUX4Vel8 | ColdFusion

The gist of it,

✅ What’s OK: AI as a tool for thinking, used in an intentional, aware, critical way.
Works well for simple, repetitive tasks where it doesn’t make mistakes.

❌ What’s NOT OK: Overuse; becoming reliant on AI and treating it as a replacement, a primary source of knowledge and thinking.
It is not as flawless and reliable as it makes us think.

We humans are, and deserve, better than to fall into idiocracy.

note: it’s quite repetitive…


i. AI today: literally everywhere bruh

“Everything you need is a click away”
might soon be replaced by "Everything you need is a prompt away"

Imagine this,

Everything is run by AI: corporate, work, songs, cinema, art, studying.
Everything is centred around AI, we no longer learn things, we learn how to use AI.

And in 2025, it doesn’t sound far-fetched at all; it’s easy to see how this could happen, with this technology seemingly baked into every single piece of tech that we use today.

Are we going to slowly stop relying on our brains?
Will we stop solving problems ourselves over the next decade?
Will AI gradually render us incapable of thinking for ourselves?

In other words, is AI making us dumber?

ii. Human brains vs. AI

1. Cognitive abilities, duh brain is duh muscle — do not skip the gym, or you’ll become a dum dum

Focusing on “consumer grade AI” here (i.e. used by the wide public), and more specifically its overuse.

a. technology a.k.a. pay convenience with your grey matter (brain capabilities)

The key point, and problem:
the human brain is a muscle, it naturally adapts to its environment, including technology.

Instances where technology damages our brain (proven by studies, not just assumptions):

  • GPS: application that helps with directions.
    • it weakens spatial memory, leading to a poorer sense of direction overall.
  • Calculator: application that helps with arithmetic.
    • weakened ability to perform even basic mathematical operations.
  • Auto-correct: technology that helps with spelling and punctuation.
    • inability to write without errors or to punctuate properly.

What do they have in common?
Convenience + Time-saving.
But they come at a price.

That price: dependence, weakened skills and cognitive abilities.

We delegate mental effort, and thereby reduce it, thinking we are doing the smart thing by redirecting that energy to problems that “matter more”, that are more complex, that cannot (yet) be done by a machine. This increased reliance results in a phenomenon called cognitive offloading.

And this is not even about AI yet, just tools helping us “save time” with more basic, repetitive, manual tasks.
Furthermore, it is tremendously hard to resist the temptation of having things done the easy way. It is a vicious trap.

We are saving time, but we are losing skills.

“Our mental and cognitive abilities are like muscles, so they need to be regularly used to remain strong and vibrant.”

Studies show that staying mentally active and in control is an important habit for preventing dementia and Alzheimer’s: high cognitive ability gives your brain strength and resilience against these diseases that develop with age.

b. the slippery slope with AI

So… when it comes to systems such as ChatGPT and other large language models (LLMs) that do the actual thinking for us, we are stepping into uncharted territory.

Studies and surveys showed that people who relied heavily on AI tools demonstrated reduced ability to critically evaluate information or develop nuanced conclusions.
Not surprising when you replace

  • independent critical thinking, problem solving, decision making
  • with prompts and LLMs.

This has already started to have concrete consequences on the real world, on society. Reliance on AI to perform and conclude on sensitive matters can lead to dangerous outcomes.
e.g. people being wrongly accused and detained because the facial recognition system failed to identify the right suspect from camera footage.
Is that the machine’s fault? Or the humans who used this information, relied on it, without even questioning or double checking if this was, in fact, even sensible?
spoiler: the police wrongly accused a pregnant lady while the footage clearly shows a full-on sprinting individual.

It is only fair to ask… With the chronic overuse of AI, would there be a kind of mental atrophy from a lack of cognitive exercise?

2. who decides for you? you or AI?

"This technology is being sold as a reliable alternative. And that's the real problem. People trust AI because it makes life easier."

Just like in the GPS study, the effects are hard to notice when it becomes part of our daily routine. It’s the convenience. It’s so tempting.

Many people simply can’t be bothered to think for themselves anymore. They would rather trust an AI to find the answer.

When you think about it, we are all surrendering to algorithms for decision-making daily.
Think of how algorithms work on platforms like Instagram, Twitter, TikTok, and even YouTube.
I’m not sure we realize just how infrequently we are actually deciding what we see, watch or do on the internet these days.
Today, we tend to surrender our agency all the time. We simply don’t realize it.

The more we rely on these algorithms, the less we ask ourselves what we actually want.

Ultimately, the algorithm decides, not us.
Alec Watson calls this algorithmic complacency.

Back then, Google was just a happy little search engine which helped you find websites. And when you found a cool website which you liked, you’d use your web browser to bookmark that website. That would make sure you could get back to it later without having to search for it again, like writing a note to yourself.

In other words, the internet was still very MANUAL and YOU were in charge of navigating it and curating your own experience with it.

For the generations entering adulthood in the 2020s,

they tend to trust algorithms more than they trust other humans.
Algorithms >>> HUMANS

This helps explain why students who used AI during and after the pandemic to skip basic learning skills often carried that habit into their jobs. Many now rely on extra tools to cover gaps in their abilities.

So, here is the question:

Is this “working smarter” or is this slowly eroding long-term mental strength?

… As always, it’s quite nuanced.

3. AI is NOT flawless (otherwise you’ll have to eat a rock 🪨 a day…)

TL;DR:

✔ Currently, for simple, repetitive tasks where AI doesn’t make mistakes, it can indeed save time.
❌ But AI isn’t that reliable (yet); it makes mistakes. If people continue to use AI to do all of their thinking for them, they’ll barely be thinking at all. And in that way, AI can make you dull.

Now, with AI synthesizing information into knowledge, we’ve entered the knowledge age.

And that sounds great in theory, but if that knowledge is flawed and most people can’t tell, our grasp on reality starts to slip. When Google’s AI Overviews launched in 2024, it was a disaster.

  • From calling Obama the first Muslim commander-in-chief,
  • to calling snakes mammals,
  • or saying that eating one rock a day is healthy. 😆

This reveals the glaring shortcomings of AI. It is not as reliable as we think.

Now, in a few years this technology could be way better,
BUT for now trust is compromised because at the time of writing (2025),

  • hallucinations
  • and bad sources

remain a fundamental issue.

It’s a real problem because people take this information and then post it on other platforms as fact. And that’s the crux here.

AI is fundamentally different from the other technologies that we mentioned earlier because it still gets a lot wrong.

  • 70% of people say that they trust AI summaries of news and
  • 36% believe that the models give factually accurate answers.

But a BBC investigation in 2024 found that over half of the AI-generated summaries from ChatGPT, Copilot, Gemini, and Perplexity had quite significant issues. Even simple tasks like asking ChatGPT to make a passage read nicer can end up distorting the original meaning of the text. And a lot of people wouldn’t know this.

It’s a vicious circle: the more you use it, the more you trust it, and the less “aware” and “able” you become to judge and think for yourself;
the more you trust it, the more often you’ll be wrong and think poorly, the more you’ll use it, …

In early 2023, researchers at Oxford University studied what happens when AI reads and rewrites AI-generated content. After just two prompts, the quality dropped noticeably. By the ninth, the output was complete nonsense.
They call this model collapse,

  • a steady decline in which AI pollutes its own training data,
  • distorting reality with each cycle.
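The mechanism can be illustrated with a toy sketch (my own illustration, not code from the Oxford study): repeatedly fit a simple Gaussian model to a small sample, then train the “next generation” only on samples drawn from the fit. Because every generation sees nothing but the previous generation’s output, estimation error compounds and the distribution’s spread steadily collapses, losing the tails of the original data first.

```python
import random
import statistics

def fit_and_resample(data, n):
    """Fit a Gaussian to `data` (mean + stdev), then draw a fresh sample
    from the fitted model: a stand-in for training a new model
    generation purely on the previous generation's output."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
generation = [random.gauss(0.0, 1.0) for _ in range(20)]  # the "real" data

for gen in range(1, 501):
    generation = fit_and_resample(generation, 20)
    if gen % 100 == 0:
        # watch the spread shrink as generations go by
        print(f"generation {gen:3d}: stdev = {statistics.stdev(generation):.4f}")
```

The small sample size (20) exaggerates the effect, but the direction is the point: the fitted spread drifts downward generation after generation, i.e. the model ends up describing its own output rather than reality.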

iii. the Internet — or rather the DEATH of it

But that isn’t the most troubling part of the story. According to a separate study conducted by researchers at Amazon Web Services, about 60% of internet content as of this year (2025) has been generated or translated by AI.

In other words, if these numbers are even close to accurate, this technology is causing the internet to slowly eat itself, producing more and more inaccurate information with each cycle.

Either AI technology improves so quickly that we avoid the worst-case scenario, or
the internet just becomes inaccurate, incomprehensible AI slop. :D

And it is not “in the future”; it is already happening. It’s not rare to pause and ask “wait… is this AI-generated?” when you come across any type of content now.

All of this AI content being generated feeds into the dead internet theory.
A theory that suggests the vast majority of internet content has been replaced by bots and AI.

iv. so… what can be done to avoid becoming stoopid?

Now, it’s not all doom and gloom.
The first step is to be aware of, understand, and acknowledge the limitations of current AI technology. Useful, yes; a replacement for your brain, nope.

Yes, AI has tremendous potential.
Today, it is NOT FLAWLESS, in fact it is pretty damn flawed, though useful in some cases.
🔑 In moderation, intentionally and without full reliance (consciously or unconsciously) use is perfectly OK. ✅

AI is marketed as flawless, so why not rely on it for information? After all, all it’s doing is just making our lives easier. What’s the harm? It’s what humans have always done. And if we’re given the chance to let our minds rest, who wouldn’t take it? The cost is barely noticeable anyway…

It’s not just students doing their homework with AI, it’s become a habit that they take with them to the workplace. On a massive scale.
Gen Z has obviously taken the lead with this one with some businesses finding out that over 90% of their employees use two or more AI tools weekly (daily…?).

Now, this in itself isn’t necessarily a bad thing.
AI can increase productivity.
It can help scale businesses, help with management, and cross team communication.
But we are talking about the overuse of and reliance on LLMs, the AI slop,

  • relying on it to such an extent that it substitutes for thinking with your own grey matter.

So before swearing off AI altogether or getting paranoid about losing free agency and brain juice, it’s important to remember that while these language models aren’t exactly like GPS or spell check, they are all tools: devices used to carry out a particular function. It might seem like an imminent threat, but the fear of automation isn’t new.

The key is to use AI more responsibly, more like a companion ✅ rather than doing the thinking for you ❌.
And any answers should be taken with a grain of salt.
It should be a tool that helps us get things done and done more efficiently, but we shouldn’t lose our own ability to understand complex problems.

No matter how sophisticated AI becomes, humans and their capacity to think critically will be necessary.
We have authentic experiences and a nuanced understanding of the world around us that is so complex.

Until the AI overlords come, humans should value and treasure their ability to think for themselves. After all, there’s a reason the phrase from the first principle of René Descartes’ philosophy is so popular: it truly defines the one thing that makes us human.

“We think, therefore we are.”