
Study Finds Only 250 Documents Can Poison AI Models
For years, experts assumed that Large Language Models (LLMs) – the backbone of advanced AI systems like chatbots – were difficult to manipulate and required significant expertise to alter. However, new research from the UK-based Anthropic AI Security Institute, in collaboration with the prestigious Alan Turing Institute, has shattered this assumption, revealing that it only takes around 250 documents to poison AI models of various sizes.
The study, part of the Anthropic AI Data Poisoning Report, highlights a vulnerability in LLMs, including popular systems that power chatbots like ChatGPT, Claude, and Grok. These models, which can range from 600 million to 13 billion parameters in size, are surprisingly susceptible to attacks that aim to degrade their functionality or make them produce harmful content, such as malware.
The researchers found that it doesn’t take much for a perpetrator to manipulate a model’s behavior. By providing the system with a small set of carefully crafted documents – somewhere between 100 to 250 – the AI can be tricked into generating incorrect, harmful, or distorted outputs. This manipulation, known as “data poisoning,” is an alarming vulnerability, as it undermines the integrity and trustworthiness of the model.
What’s even more concerning is that AI models seem to have difficulty distinguishing between meaningful data and “gibberish.” This blind spot makes it easy for malicious actors to exploit the model’s algorithms with seemingly random input, creating the potential for widespread disruption.
Data poisoning doesn’t just affect well-known models like ChatGPT and Claude; it extends to other lesser-known models, such as those used in Retrieval Augmented Generation (RAG) systems. As AI continues to evolve, the risk of these vulnerabilities being exploited only grows. To prevent these kinds of attacks, experts stress the importance of training AI models with high-quality, carefully curated data. This would help ensure the AI is not only accurate but also safe and reliable for its intended purpose.
Verizon Report: AI-driven Mobile Threats and Human Errors Expose Organizations to Greater Cybersecurity Risks
In addition to concerns over AI vulnerabilities, the global cybersecurity landscape is facing a new set of challenges, as outlined in Verizon’s latest Mobile Security Index (MSI) report. The report reveals that human errors, combined with the rise of AI-driven cyber threats, are putting organizations worldwide at severe risk. The findings come from a survey of 760 cybersecurity professionals, over 500 of whom admitted that various sectors – including government, healthcare, finance, and manufacturing – are increasingly vulnerable to these types of attacks.
One of the primary drivers of this heightened risk is the rapid growth in mobile device usage within organizations. More than 70% of surveyed organizations acknowledged that their employees frequently use generative AI tools on mobile devices, which has created a gaping hole in their cybersecurity defenses. With mobile phones being the primary tool for many employees, this trend has led to a surge in attacks, with more than 85% of organizations experiencing an increase in cyber incidents.
The rise of sophisticated AI tools is also complicating the threat landscape. As AI-driven attacks become more automated and advanced, organizations face increasingly difficult challenges in defending their mobile networks. The report highlights that, despite the growing threat, only 17% of organizations have effective defenses against AI-based attacks, and an even smaller proportion – just 12% – have measures in place to protect against deepfake vishing attacks (where AI is used to create fake voices for fraudulent purposes).
Perhaps most troubling is the finding that over half of employees (50%) admitted to clicking on malicious links in nearly 80% of test scenarios. This illustrates the ongoing human error factor, which remains one of the weakest links in cybersecurity. Despite advanced tools and systems, the report underscores that the most effective form of defense may still be training employees to recognize threats and act accordingly.
Chris Novak, the VP of Global Cybersecurity Solutions at Verizon Business, remarked, “Mobile security is not just about perimeter defense. It’s a battle that’s fought with the palm of every employee’s hand.” His statement reflects the growing consensus that, as mobile devices continue to serve as primary gateways for both work and cyber threats, ensuring personal responsibility and robust cybersecurity training among employees is more critical than ever.
Conclusion: The Evolving Threats of AI and Mobile Devices
The findings from both the Anthropic AI Security Institute and Verizon’s Mobile Security Index reveal a stark reality: AI-driven threats and mobile device vulnerabilities are reshaping the global cybersecurity landscape. Whether it’s the alarming ease with which AI models can be poisoned or the growing risk posed by mobile devices used in corporate environments, organizations must adapt quickly. Addressing these issues requires not only advanced security tools but also comprehensive employee education to safeguard against both human error and malicious AI-driven attacks.
As AI continues to revolutionize industries, it’s crucial for companies to stay ahead of these evolving threats, ensuring they maintain the integrity of their systems while protecting sensitive data from both external attacks and internal lapses.
Join our LinkedIn group Information Security Community!














