
Recent findings from Palo Alto Networks' threat research team, Unit 42, reveal a concerning trend: every organization surveyed reported at least one attack targeting its AI systems over the past year. The finding points to a significant and growing vulnerability in modern technology: AI systems are increasingly targeted by malicious actors. While the research underscores the urgency of AI security, it also highlights the complex nature of the problem, which is fundamentally tied to cloud infrastructure.
Conducted in partnership with Wakefield Research between September 29 and October 17, 2025, the survey gathered responses from more than 2,800 participants across 10 countries: Australia, Brazil, France, Germany, India, Japan, Mexico, Singapore, the UK and the United States.
In their analysis, Unit 42 emphasized a crucial point: securing AI systems is not simply a matter of reacting to threats as they arise. AI security must instead be approached in a systematic, scientific way, with organizations adopting proactive, strategic methods for safeguarding AI systems rather than relying on a reactive, "solve-as-you-go" approach. Because of their complexity and the critical nature of their applications, AI systems require a rigorous, long-term security strategy to ensure their integrity and protect against evolving threats.
AI Security: A Cloud Infrastructure Issue
One of the key takeaways from the research is that AI security is fundamentally a cloud infrastructure problem. AI workloads, which are often resource-intensive and require significant computational power, are typically processed in cloud environments. These environments, while offering scalability and flexibility, also present unique security challenges: the very infrastructure that enables AI systems to run efficiently is the same infrastructure that is exposed to attack.
AI systems rely heavily on cloud environments for storing vast amounts of data, training machine learning models, and running AI applications. These cloud platforms, which include services like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, are often the prime targets for cyberattacks. Attackers can exploit weaknesses in cloud security to gain unauthorized access to AI systems, steal sensitive data, or disrupt operations.
The scale and complexity of cloud infrastructures also mean that traditional security measures may not be enough to address the unique challenges posed by AI systems. For example, securing the data pipelines that feed into AI systems requires robust encryption, identity management, and constant monitoring. Similarly, preventing unauthorized access to the cloud servers hosting AI workloads demands advanced network security protocols and multi-layered defenses.
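To make the data-pipeline point concrete, here is a minimal sketch, assuming Python and the third-party cryptography package, of encrypting records before they enter a training pipeline. The record layout and function names are illustrative only and are not drawn from the Unit 42 report; in a real deployment the key would come from a managed key service rather than being generated in code.

```python
# Minimal sketch: encrypting records before they enter an AI training pipeline.
# Assumes the third-party "cryptography" package (pip install cryptography).
# The record layout and helper names are illustrative, not from the report.
import json
from cryptography.fernet import Fernet

# In practice the key would come from a managed key service (KMS), not be generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_record(record: dict) -> bytes:
    """Serialize and encrypt a single training record."""
    return cipher.encrypt(json.dumps(record).encode("utf-8"))

def decrypt_record(token: bytes) -> dict:
    """Decrypt and deserialize a record on the consuming side."""
    return json.loads(cipher.decrypt(token).decode("utf-8"))

if __name__ == "__main__":
    sample = {"user_id": 42, "features": [0.1, 0.7, 0.3]}
    token = encrypt_record(sample)
    assert decrypt_record(token) == sample
    print("record encrypted and recovered successfully")
```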
The State of Cloud Security: 2025 Outlook
According to the State of Cloud Security Report 2025, the only viable way to prevent attacks on AI systems is to secure the cloud infrastructure that supports them. The report suggests that organizations must take a more holistic approach to cloud security, viewing it as a foundational element of AI security rather than a secondary concern. In practice, this means implementing strong cloud security policies, adopting encryption standards, conducting regular security audits, and ensuring that AI workloads are isolated from potential vulnerabilities in the cloud environment.
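One hypothetical way to operationalize those audits is to script checks against the cloud provider's APIs. The sketch below assumes AWS, the boto3 SDK, and credentials already configured; it flags S3 buckets that lack default server-side encryption. It illustrates the auditing idea only and is not a tool described in the report.

```python
# Illustrative audit sketch (assumes AWS credentials and the boto3 SDK are configured).
# Flags S3 buckets without default server-side encryption -- one small piece of the
# "regular security audits" the report recommends, not an official tool.
import boto3
from botocore.exceptions import ClientError

def buckets_without_default_encryption() -> list[str]:
    s3 = boto3.client("s3")
    unencrypted = []
    for bucket in s3.list_buckets().get("Buckets", []):
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            # This error code means no default encryption configuration exists.
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                unencrypted.append(name)
            else:
                raise
    return unencrypted

if __name__ == "__main__":
    for name in buckets_without_default_encryption():
        print(f"bucket lacks default encryption: {name}")
```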
As AI continues to evolve and become more deeply integrated into various industries, the need for secure cloud frameworks becomes even more critical. With AI playing a central role in sectors ranging from healthcare to finance to autonomous vehicles, the potential consequences of a cyberattack on these systems are far-reaching. A breach in an AI system could lead to the loss of sensitive data, a disruption of essential services, or even a compromise of human safety.
Moreover, as AI becomes more advanced, it may be vulnerable to new types of attacks that are tailored specifically to exploit weaknesses in machine learning algorithms or AI models. These so-called “adversarial attacks” are designed to manipulate AI systems in subtle ways, causing them to make incorrect predictions or decisions. Securing the cloud infrastructure that hosts these AI systems is critical to defending against such attacks.
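To illustrate what an adversarial attack looks like in practice, the following toy sketch applies the well-known fast gradient sign method (FGSM) idea to a hand-built logistic-regression classifier in NumPy: a small, structured nudge to the input flips the model's prediction. The weights, inputs, and perturbation size are invented for the example and exaggerated so the flip is visible on a three-feature model.

```python
# Toy illustration of an adversarial (FGSM-style) attack on a linear classifier.
# All weights and inputs are made up for illustration; only NumPy is required.
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, pre-"trained" logistic-regression model: p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x: np.ndarray) -> float:
    return sigmoid(float(w @ x + b))

x = np.array([0.4, 0.2, 0.3])      # clean input, classified as class 1
p_clean = predict(x)

# FGSM: for a linear model the gradient of the logit w.r.t. the input is just w,
# so nudging each feature against sign(w) lowers the class-1 score the fastest.
epsilon = 0.2                       # exaggerated so the flip is visible in this toy
x_adv = x - epsilon * np.sign(w)
p_adv = predict(x_adv)

print(f"clean score: {p_clean:.3f} -> class {int(p_clean > 0.5)}")
print(f"adversarial score: {p_adv:.3f} -> class {int(p_adv > 0.5)}")
```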
Moving Forward: A Scientific Approach to AI Security
Ultimately, the future of AI security lies in adopting a proactive, scientific approach to securing cloud environments. This requires ongoing collaboration between cloud service providers, AI developers, and security professionals to create and implement robust security frameworks that address the unique challenges posed by AI. It also means investing in advanced, AI-specific security tools and protocols that can detect and mitigate threats in real time.
As we move further into the age of artificial intelligence, the responsibility for securing AI systems cannot be taken lightly. The security of AI will continue to be intertwined with the security of the cloud infrastructure that powers it. Organizations that recognize this fact and take the necessary steps to protect their AI systems at the cloud level will be better equipped to navigate the evolving landscape of cyber threats.