source link: https://www.youtube.com/watch?v=4xq6bVbS-Pw | Micode
(an older video, “AI is making you dumber”, tackles the same topic and concern, from a slightly different angle…)


AI dependence

chatgpt has become

  • a multi-tasking life assistant
  • a life confidant even

handles both personal and professional areas of life

the real issue (danger):

over-reliance → loss of cognitive abilities
in other words, high risk of becoming dumb

and not so much “AI taking over the world by becoming conscious etc.”

“one day, humans might stop thinking.”

gpt genesis

when it launched for public use at the end of 2022:
division among people:

  • skeptical ppl: “reliability? immediate + long-term consequences?”
  • enthusiastic ppl: “awesome, productivity +++”
    • esp. developers and students

the field of education took a big hit

what’s the point of studying and learning anymore if chatgpt can do anything for us?

early AI limitations + recent improvements

at the start, all of these were limitations

  • hallucination
  • inconsistencies
  • poor reliability
  • no personalisation
  • no memory
  • no serious sources
  • training cut-off date (2021)

but most of them have improved significantly:

  • great reduction in hallucinations and inconsistencies (apparently divided by ~8)
  • highly personalised + huge memory capability
  • connected to the internet
  • multimodality
  • crazy top performance in all areas (text, audio, image, even video)

early adopters

developers and students specifically

  • generate code with little knowledge → opens the door to immense possibilities and applications
  • do any type of homework on any topic for students → no need to spend hours researching, writing, learning

the majority of students (or ppl in general) use AI to do their work for them (reportedly 60–80%+)

learning process, brain activity

how do we learn even?

at school

what’s the point of having teachers and going to school?

e.g. learning a new language, say japanese:

  • phase 1: learn the theory
    • concretely speaking, study basic foundation: alphabet, vocabulary, grammar, etc.
    • i.e. collect information
    • under the hood,
      • the brain encodes the information
        • prefrontal cortex (~ RAM): deals with focus and reasoning
        • hippocampus (~ hard drive): stores new information
      • so when one learns something new: both zones get activated and interact to create connections!
  • phase 2: practice to consolidate
    • anything that requires you to recall and make use of the learnt information consolidates the learnings
      • the connections get stronger,
      • your brain can almost handle the information in autopilot mode (less effort required, faster, more efficient)
  • phase 3: last but not least, metacognition (kind of awareness + feedback loops!)
    • meta = beyond and cognition = thought process
    • metacognition = the process by which learners use knowledge of the task at hand, of learning strategies, and of themselves to plan their learning
      • i.e. knowledge and understanding of one’s own thinking (strengths, weaknesses)

so to summarise, school covers all 3 pillars:

  • provide theoretical knowledge and material
  • consolidate and evaluate knowledge with practical exercises
  • give personalised feedback by grading the work you do
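
(a toy sketch, not from the video: the three pillars mapped onto the simplest possible drill; presenting the material is theory, forcing recall is practice, and tracking your own hit rate is a crude form of metacognition)

```python
import random

# phase 1 (theory): the material to encode, e.g. basic japanese vocabulary
cards = {"水": "water", "火": "fire", "木": "tree"}

# phase 2 (practice): active recall, the step AI-written homework skips
items = list(cards.items())
random.shuffle(items)
missed = []
for prompt, answer in items:
    guess = input(f"what does '{prompt}' mean? ")
    if guess.strip().lower() != answer:
        missed.append(prompt)

# phase 3 (metacognition): feedback on your own strengths and weaknesses
print(f"recalled {len(items) - len(missed)}/{len(items)}")
if missed:
    print("to review:", ", ".join(missed))
```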

however, by using AI to do homework,
you skip “phase 2: practice” altogether →
you skip the consolidation part, so the brain connections you might have created are not sustained

skip practice → you actually don’t assimilate the learnings / knowledge, at all.

for coding

junior programmers are more impacted by this issue;
they seem to have lost the ability to code properly

with AI coding assistants,
one can generate a functional end-to-end product (website, application, tool)

the pitfalls:

  • code files are super verbose and can be overly complex (not optimised, not well designed; see the sketch below)
  • integrating additional complex features can become extremely challenging
  • bugs that cannot be figured out / fixed by the AI (nor by the programmer)

not scalable, not even maintainable

basically, nobody understands the code,
so nobody can actually manipulate it properly
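
(a hypothetical illustration, not from the video: the kind of verbose, duplicated logic an unreviewed AI assistant can leave behind, next to the same behaviour written by someone who understands the structure)

```python
# verbose "AI-generated" style: every case spelled out, branches copy-pasted
def shipping_cost(country, weight):
    if country == "FR":
        if weight <= 1:
            return 5.0
        else:
            return 5.0 + (weight - 1) * 2.0
    elif country == "DE":
        if weight <= 1:
            return 6.0
        else:
            return 6.0 + (weight - 1) * 2.5
    elif country == "ES":
        if weight <= 1:
            return 6.5
        else:
            return 6.5 + (weight - 1) * 2.5
    else:
        raise ValueError("unsupported country")

# same behaviour, designed instead of generated: logic once, cases as data
RATES = {"FR": (5.0, 2.0), "DE": (6.0, 2.5), "ES": (6.5, 2.5)}

def shipping_cost_clean(country, weight):
    if country not in RATES:
        raise ValueError("unsupported country")
    base, per_extra_kg = RATES[country]
    return base + max(0.0, weight - 1) * per_extra_kg
```

adding a fourth country to the first version means another copy-pasted branch (and another chance for a bug); in the second, it’s one line of data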

conclusion:

with AI, we manage to produce outputs and results,
but it can be at the expense of:

  • real knowledge, learning
  • real understanding

google effect

this is not a phenomenon unique to LLMs and AI;
it was already the case with other technologies such as GPS and calculators,
or even the internet as a whole.

Google effect: no longer bothering to remember information, since it is available at one’s fingertips

consequence:

  • less brain activity: “just google it” <<< soliciting the brain to encode and retrieve information (thus creating and reinforcing brain connections)
    • quantity: fewer brain connections
    • quality: weaker brain connections

outsourcing learning, memory and reflection processes to external tools
leads to immediate cognitive depreciation

cognitive debt

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (an experiment by MIT)

3 groups of ppl, essay-writing task, different conditions:

  • 1 must use AI only
  • 1 can use the internet but not AI
  • 1 must rely on their own knowledge only

the results of brain activity measurements during the essay-writing task show:

  • (lowest) AI only <<< internet <<< own knowledge (highest)

“This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.”

foundations matter

so are we all going to be stupid?

not really, there is hope.

a common aspect between junior programmers and students:
they haven’t finished the consolidation part of their learning/knowledge yet

junior vs. senior

there are meaningful differences between junior and senior programmers in terms of AI usage, and they can be counter-intuitive:

  • AI-generated code usage: senior >>> junior
  • but also, AI-generated code review: senior >>> junior

junior: use AI as a replacement for their work
senior: use AI as an augmentation tool for their work

kitchen brigade analogy

(good) analogy:

  • hands-on coding isn’t so much about directing / managing / orchestrating tasks from above (by prompting and asking AI to code on one’s behalf)
  • but more so similar to cooking and kitchen brigade dynamic
    • different members in a kitchen brigade, with hierarchy
      • ⭐⭐⭐ chef de cuisine (chief)
      • ⭐⭐ sous-chef (under-chief, right-hand of chief)
      • ⭐ chef de partie (senior)
      • cuisinier / commis de cuisine (most junior)
    • very structured: everyone has a set of tasks/responsibilities,
    • and an ordered level of expertise (the higher up, the more skilled; each can do all the work of those below)

in order to reach a certain level, one has to go through each of the roles below,
go through the basics, the practice, the experience, the failures, the learning, the refining, etc.
i.e. respect the whole learning journey (valid for any skill, not just cooking or coding)
3 pillars: theory → practice → metacognition (feedback)

AI can be considered the lower roles in the hierarchy
they are cooks, very good at achieving specific tasks

which means, the more complex the output they generate,
the more skilled one needs to be to manage and orchestrate it properly (… a chief ⭐)

so the problem with junior developers who rely heavily on AI coding assistants:

  • skip the practice phase by relying on AI,
  • thus weak foundational skills
  • so, when faced with too much complexity (managing AI outputs, or more difficult problems)
    • everything crumbles, because the foundational capabilities are weak

AI shrinks skills and capabilities, when foundations are not yet built

vs. senior people who seemingly use AI more than junior people:

  • strong foundational skills
  • take the time to review AI-generated outputs, correct, customise, etc.
  • so, when faced with the same level of complexity
    • manageable
    • + work productivity also increased!
    • all that thanks to a strong, assimilated foundation built over years of practice (but also the intentional act of reviewing and owning the AI-generated code)

AI multiplies outputs and capabilities, when foundations are strong

AI, your private tutor

good news is:

building strong foundations with AI, as a junior person, is actually possible!

since AI is so accessible, so skilled, so educational and patient,
just leverage it,

turn AI into your personal home teacher, your private tutor (at free.99$)

avoid asking for direct answers, outputs, solutions (same issue otherwise, cognitive debt boo)
instead, explicitly phrase it as “I want to learn XXX skill, guide me there without giving me the final solutions or hints.”
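
(a minimal sketch of what that looks like in practice, assuming the OpenAI Python SDK; the model name and the exact prompt wording here are illustrative, not from the video)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# tutor-mode system prompt: guide, explain, question; never hand over answers
TUTOR_PROMPT = (
    "You are my private tutor. I want to learn Python. "
    "Guide me with questions and explanations of the underlying concepts, "
    "but never hand me the final solution to an exercise."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": "how do I reverse a list in place?"},
    ],
)
print(response.choices[0].message.content)
```

the system prompt is the whole point: the constraint “never hand me the solution” is what turns the model from a replacement into a tutor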

when you think about it,

being able to afford such a private tutor, right at your fingertips, is absolutely unprecedented
a golden opportunity

one doesn’t need to be well-off and privileged anymore,
one just needs the right prompt to benefit from this, which is quite revolutionary

… truly accessible?

however,
some people (teachers included) are still skeptical of, or outright against, using AI as a private tutor,
namely because of the uncertainty around the reliability of this approach

what’s the real impact of using AI as a learning assistant?

well, some studies (PS2 Pal @ Harvard) actually showed that using AI tutors:

  • encourage a growth mindset (more confidence in one’s ability to succeed despite weaknesses and difficulties)
  • students feel more motivated and engaged in studying

these are positive effects, but it can be argued that human teachers can provide them as well,
which is true.

the real differentiator (for regular schools, not private tutors):

  • teachers can’t personalise their lessons to every single student (content, pace, review)
  • vs. AI tutors, which can tailor lessons to each student
    • + capacity to SCALE!

and the results of the study actually show better performance for students who had access to AI tutors vs. students who only attended regular classes

BUT, it only worked because of the training and fine-tuning those AI tutors actually underwent:
they were trained on specific, structured course material (physics, in this case), and thus follow a precise protocol to guide and supervise students within a pre-defined context.

… which is different from any basic, widely accessible LLM (it’s not going to strictly follow a framework for you to learn something specific; its general purpose is to assist humans broadly).

personal willingness to face hardships

so the question boils down to

are we willing to intentionally create and follow a framework while learning, whether by ourselves or with AI?

  • do we consciously choose the longer, more tedious, more challenging path of learning step by step, keeping our cognitive functions alive?
  • or do we choose instant gratification, easy and quick solutions, at the expense of shrinking our cognitive capacities?

this not only applies to students, individuals,
but also to businesses (immediate results vs. long-term and sustainable outcomes).

reality is… the majority of people would choose instant gratification, the easier, lower-effort path,
and this “laziness”, this tendency to choose comfort > effort, is pretty normal for humans,
it makes sense in terms of human evolution:
effort = energy, and we are wired to save as much energy as possible to survive

while being normal,
the potential risk of over-reliance on AI to “conserve energy with lower effort” is absurdly dizzying at this point,

because it’s no longer about specific tasks, like outsourcing the remembering of street names to GPS, or manual computations to Excel,

this is about outsourcing every little mental process that requires an ounce of thinking.
the scope is immensely different.

and yeah, it is our ancient human nature to be attracted to easy solutions,
but it’s the very real process of facing difficult problems that enabled humankind to grow and progress this much,

effort shapes the brain; the outcome doesn’t.

education = future

and sure, AI brings up many other major issues and concerns:
environment, privacy, digital sovereignty, job market shifts,

but at the core, this cognitive plague is even more pressing,
because a world that fails at educating and shaping the future (i.e. current) generation
will not be able to sustain any other field, however well that field is doing.

education isn’t just a sector among others,
it is the base of everything else.

AI is not going to make us more stupid,
we are going to make ourselves more stupid by systematically choosing the easiest, shortest path.

the good news: this is a choice.
a choice that is made with every prompt.

  • do we choose to use AI as a replacement?
  • or do we choose to use AI as an augmentation tool?

in the end, it’s not even about asking “is AI really that intelligent?”

but rather “do we humans still want to be intelligent?”


more sources: