Thursday, July 31, 2025

Artificial Intelligence (AI) for Schoolers, Undergraduates, Postgraduates and Professionals: A critical appraisal

Progressive development of the human brain, a marvel of evolution, underpins our intelligence. Over millions of years, driven by many factors, the capabilities of the human brain have expanded significantly, particularly in the frontal areas and the prefrontal cortex. This expansion facilitated advanced intellectual abilities such as abstract thought, language, and problem-solving, setting humans apart and enabling unprecedented levels of learning and cultural variability. In human communication, the development of languages, which are symbolic expressions of thought, has revolutionised the ability of humans to connect with one another in a deeply meaningful way.

From early childhood to adulthood, human intelligence undergoes continuous development, influenced by both genetic predisposition and environmental factors. In infancy and roughly the first two years of life, intelligence is largely sensorimotor, with babies learning through direct interaction with their environment and developing object permanence. As children progress into the preoperational stage, from about 2 to 7 years, symbolic thought and language emerge, though logical reasoning is still developing. The concrete operational stage, from about 7 to 11 years, brings logical thought for concrete situations, allowing children to grasp concepts like conservation. Finally, in the formal operational stage, from about age 12 onwards and throughout adulthood, abstract thinking, hypothetical reasoning, and systematic problem-solving become prominent. While fluid intelligence, such as problem-solving, may peak in early adulthood, crystallised intelligence, such as knowledge and vocabulary, continues to grow well into middle age, demonstrating that intellectual development is a lifelong process.

Now, enter Artificial Intelligence (AI) into this narrative. It represents the next frontier in mimicking and augmenting human capabilities. It is a broad field of computer science dedicated to creating machines that can perform tasks traditionally requiring human intelligence, from understanding natural language and recognising patterns in images to complex decision-making and problem-solving.

At its core, AI aims to replicate cognitive functions like learning, reasoning, perception, and creativity. Unlike traditional programming, where every step is explicitly coded, many AI systems, particularly those relying on machine learning and deep learning, are designed to learn from vast amounts of data. They identify intricate patterns and relationships that humans might sometimes miss, and they continually refine their performance without explicit reprogramming.
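
For readers who want to see the distinction concretely, here is a minimal sketch in Python; the spam-filtering example, the data and the rule are purely illustrative and are not drawn from the article. The first function encodes a rule by hand, in the traditional way; the second lets a small machine-learning model infer a similar rule from labelled examples, using the widely available scikit-learn library.

```python
# Toy illustration only: contrasting explicit rules with learning from data.

# Traditional programming: a human writes the rule out explicitly.
def is_spam_rule_based(message: str) -> bool:
    return "win a prize" in message.lower() or "free money" in message.lower()

# Machine learning: the "rule" is inferred from labelled examples instead.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["Win a prize now", "Meeting at 10 am", "Free money inside", "Lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)                       # the model studies the examples
print(model.predict(["Claim your free prize"]))   # and generalises to new text
```

The point is not the particular rule but that the second program is never told what makes a message spam; it works that out from the data, which is exactly the shift from explicit coding to learning described above.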

The ultimate goal of AI research and development is General AI, often called Artificial General Intelligence (AGI), which would possess human-level intelligence across a wide range of tasks, including abstract reasoning, common sense, and emotional understanding. While it is still largely theoretical, progress in areas such as generative AI, including large language models that can create human-like text or images, points towards the potential for more sophisticated AI systems in the future.

The applications of AI are already pervasive, transforming industries from healthcare and finance to manufacturing and entertainment. AI-powered tools assist in medical diagnoses, optimise supply chains, personalise online experiences, and drive autonomous vehicles. As AI continues to evolve, it promises to further enhance human productivity and capabilities, paving the way for a future where humans and intelligent machines collaborate to tackle increasingly complex challenges.

It is ever so important to critically assess some aspects of the use of AI by school students, undergraduates, postgraduates and qualified professionals. There is increasing evidence that students who are still in school are becoming overly dependent on AI. The worst scenario is one in which they use chatbots with Large Language Model (LLM) capabilities, such as ChatGPT and Gemini, to name just two, to do all their school academic work for them. Not to be left behind, adult students in universities and professionals seem to be picking up the same habit.

ChatGPT is a chatbot (a blend of ‘chat’ and ‘robot’) interface developed by OpenAI that is powered by their Generative Pre-trained Transformer (GPT) series of LLMs. So, while ChatGPT itself is the conversational AI application, the “brain” behind it, the part that understands your queries and generates human-like text responses, is an LLM, such as GPT-3.5, GPT-4, or GPT-4o.

Gemini is Google’s family of Large Language Models (LLMs). When you interact with the Gemini chatbot, you are interacting with an application that uses an underlying Gemini LLM. Gemini is also a multimodal AI model, meaning it can process and understand different types of data beyond just text, such as images, audio, and video.
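
For readers who wonder what “an application that uses an underlying LLM” looks like in practice, the following is a minimal sketch, assuming OpenAI’s publicly documented Python client; the model name and prompts are illustrative only, and Google’s Gemini models are reached through a similar but separate client.

```python
# A minimal chatbot loop: the application is just a thin wrapper,
# while the underlying LLM does the actual language understanding.
# Assumes the OpenAI Python client is installed and an API key is
# available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_text = input("You: ")
    if not user_text:
        break
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",        # illustrative model name
        messages=history,      # the full conversation so far
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```

Everything “chatbot-like” here, remembering the exchange, taking turns, displaying replies, lives in those few lines; the intelligence is supplied entirely by the LLM behind the single API call.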

The additional use of AI-powered writing assistants such as Grammarly and Jenni AI helps with all kinds of writing. Their great advantage is that they detect and point out language errors, and suggest suitable alternatives, without autocorrecting what is already written. That makes them a very valuable learning experience for the user.

What we should consider is whether there is evidence to suggest that excessive use of AI, and almost total dependence on it, in school and university work, which used to involve intensive studying, the exercise of memory and working things out, is interfering detrimentally with proper learning and the development of academic capabilities. Is it also likely to produce a generation that cannot do much without AI?

This has become one of the most significant and debated topics in education right now. There is indeed a growing body of evidence and widespread concern that excessive and uncritical dependence on AI in school and university work is detrimentally interfering with proper learning and the development of academic capabilities.

When students use AI to generate answers, summarise texts, or solve problems, they bypass the mental effort required for these tasks. This “cognitive offloading” prevents them from developing the neural pathways and cognitive strategies necessary for critical analysis, synthesis, and independent problem-solving. AI can provide correct answers, but it does not ensure the student understands why that answer is correct or the underlying principles. This superficial learning makes it harder to apply knowledge to new, complex situations. Research studies indicate that reliance on AI chatbots can impair the development of critical thinking, memory, and language skills. In one such study, participants who used chatbots showed reduced brain connectivity and lower theta brainwaves (which are associated with learning and memory), and a staggering 83% of them struggled to recall accurate quotes from their AI-generated work, compared with only 10% in non-AI groups.

The act of actively studying, reviewing, and trying to recall information strengthens memory. If AI provides instant answers, students do not engage in this effortful retrieval, leading to weaker long-term memory formation. This is an extension of the “Google Effect”, where people are less likely to remember information that they know they can easily look up. With AI, this effect is magnified, because AI can process and present information in highly structured ways, further reducing the need for personal recall.

While AI can “generate” text, it does so by mimicking patterns in its training data. True creativity often involves novel connections, imaginative leaps, and thinking outside existing patterns: skills that are not exercised when simply prompting AI. AI tends to supply the average or most common response rather than an insightful or in-depth one. This can lead to a homogenisation of thought and a decrease in truly original student work.

While AI can correct grammar and improve flow, over-reliance on it for drafting entire essays can prevent students from developing their own voice, mastery of sentence structure, and the ability to construct complex arguments logically and persuasively from scratch. If AI can instantly summarise research or find specific facts, students may not develop the diligence and discernment needed to evaluate sources, cross-reference information, or explore topics in depth.

The ease of generating content makes cheating and plagiarism more accessible and harder to detect, undermining academic integrity. Studies show significant percentages of students admitting to using AI for assignments in ways that constitute cheating.

Will it produce a generation that cannot do much without AI?

An equally serious concern is whether continued trends of excessive reliance could lead to a generation that struggles with tasks requiring the ability to analyse problems, identify assumptions, evaluate evidence, and form reasoned judgments without immediate AI input. Will that future generation lack the capacity to break down complex issues, brainstorm solutions, and execute strategies based on their own knowledge and reasoning? Will it lead to a breed of humans with a diminished ability to retrieve and synthesise information from their own minds, leading to a constant need for external tools? Will they have a significantly reduced capacity for generating novel ideas, arguments, or artistic expressions? Will they be quite uncomfortable in grappling with ill-defined problems or information gaps, as AI often provides a seemingly complete, albeit sometimes inaccurate, answer?

What we are trying to say is that AI is NOT inherently bad, but rather, misuse or overuse of it in developmental stages can stunt fundamental cognitive growth.

The Way Forward is Augmentation, Not Replacement

The consensus among educators and professionals is that AI should be viewed as a powerful tool for enhancing human intelligence, not as a replacement for fundamental human learning and skills development. It must be used to support, rather than substitute for, human decision-making, cognition, and the higher functions of the brain. Education needs to adapt by taking firm steps to equip students with the knowledge of how AI works, its capabilities, its limitations, and its ethical implications. It needs to move away from tasks easily completed by AI towards assignments that require critical thinking, creativity, synthesis of diverse sources, and the application of knowledge in complex, real-world scenarios.

We will need to prioritise and explicitly teach “human-centric” skills like critical thinking, creativity, emotional intelligence, communication, collaboration, and ethical reasoning, which AI cannot replicate. We should go that extra mile to encourage students to use AI outputs as a starting point, but always to verify, critique, and improve upon them with their own intellect. Young people need to recognise that the world will continue to change rapidly and that they will need to continuously adapt and learn new skills, even with AI present. AI should not be the be-all and end-all of everything in this world.

Finally, and in summary, the evidence points to a clear risk that unbridled and excessive AI dependence can undermine core academic capabilities. The challenge for educators and institutions is to harness AI’s benefits while safeguarding and cultivating the uniquely human intellectual and social skills essential for a competent and adaptable future workforce.

This author, even at his current age of three score and ten plus years, uses AI quite a lot. Yet, for all that, its use is carefully confined to gathering information, opinions and suggestions. He double-checks the information provided by AI to ensure the veracity of the material. Even after collating all the details and information, the writing itself is done with human intelligence, to “humanise” the end product, and it is then polished by correcting any faults in the language, if any, pointed out by writing assistants. An amalgamation of all these endeavours seems to be the way to go, enabling us to make the most of a valuable treasure trove.



from The Island https://ift.tt/i2aFjCw
