
Software developers reap a host of benefits from making use of artificial intelligence assistants, whether in the form of Large Language Model (LLM) code creators or agentic AI agents. But recent reports, highlighted by a new study at MIT, warn that heavy use of AI can result in a loss of critical thinking skills among users.
In a software ecosystem where AI-related risk factors have advanced in step with greater use of AI, that loss of cognitive fitness can have disastrous consequences. Developers and organizations have an ethical responsibility to identify, understand and mitigate security vulnerabilities early in the software development lifecycle (SDLC). Those who do so effectively can create a tenfold improvement in the safety and security of software. Those who don’t—which currently includes most organizations—face an equally dramatic rise in potential threats, many of them introduced by AI.
The question isn’t whether to use AI, since the advantages in productivity and efficiency are too great to ignore. The question is how to use it most effectively, maintaining security while allowing AI to increase output.
Heavy AI Use Can Erode Cognitive Engagement
The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston area universities while writing an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants’ brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.
This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers, with 83% of the students struggling to recall the content of their essays, even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the widest range of brain activity, they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.
From the point of view of developers, the key result is the diminished critical thinking that results from AI use. A single instance of relying on AI might not cause a loss of critical thinking skills, of course, but constant use of AI over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.
Developer Education Is Nonnegotiable in an AI-driven Environment
A study such as MIT’s isn’t going to stop AI adoption, which is hurtling forward in every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk—the report found that AI-related cybersecurity incidents grew by 56% over the same time.
Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, less than two-thirds are doing something about it, which leaves them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.
If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted on their own, showed higher recall and brain activity, in some areas similar to that of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.
A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge that enables them to write software code securely and critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.
This requires overhauling enterprise security programs to ensure a human-centric approach to the SDLC, in which developers receive effective, flexible, hands-on—and ongoing—training as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of quickly evolving, sophisticated threats, particularly those resulting from AI’s prominent place in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.
Conclusion
The MIT study focused on AI’s impact on education and the potential loss of critical thinking skills in students from kindergarten on up. But whether you’re talking about students or skilled professionals, the take-home from the study is the same: When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also result in decreased decision-making skills.
Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.
Join our LinkedIn group Information Security Community!















