Data privacy and security top the worries of AI adoption decision makers


Decision makers keen on integrating AI tools into their operations are expressing concern over data privacy and security. This sentiment extends to their cautious approach towards embracing generative AI, as revealed by a study conducted by Coleman Parkes Research, sponsored by SAS, a leading data analytics firm.

Despite the allure of increased productivity promised by Large Language Models (LLMs) in corporate settings, strategic AI advisor Marinela Profi from SAS acknowledges the array of business challenges they pose.

Compounding these concerns is the specter of data poisoning, which acts as a significant deterrent to swift AI adoption. The process of training generative AI relies heavily on extensive datasets to yield meaningful outputs, presenting an opportunity for threat actors to manipulate the information fed into LLMs.
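To make the threat concrete, here is a toy sketch (illustrative only, not drawn from the study): even flipping a single label in a small training set can shift a simple classifier's decision boundary. The data, labels, and classifier here are hypothetical stand-ins for a real training pipeline.

```python
def centroid(points):
    # Average of a list of one-dimensional feature values.
    return sum(points) / len(points)

def train(data):
    # data: list of (value, label) pairs; compute one centroid per label.
    by_label = {}
    for value, label in data:
        by_label.setdefault(label, []).append(value)
    return {label: centroid(values) for label, values in by_label.items()}

def predict(centroids, value):
    # Assign the label whose centroid is nearest to the input.
    return min(centroids, key=lambda label: abs(centroids[label] - value))

clean = [(1.0, "low"), (2.0, "low"), (9.0, "high"), (10.0, "high")]

# A threat actor flips one label during data collection:
poisoned = [(1.0, "low"), (2.0, "high"), (9.0, "high"), (10.0, "high")]

print(predict(train(clean), 5.0))     # mid-range input under the clean model
print(predict(train(poisoned), 5.0))  # same input after poisoning
```

The poisoned centroid for "high" drags the decision boundary toward the tampered point, so the same mid-range input is now classified differently. At LLM scale the mechanics are subtler, but the principle is the same: corrupted training examples silently reshape the model's outputs.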

Consider GPT-4, which is reportedly built as a mixture of eight expert models, each with roughly 220 billion parameters. A training effort of that scale necessitates intricate system interconnections, service integrations, and networked devices, thereby creating vulnerabilities ripe for exploitation by hackers through avenues such as configuration errors, backdoor tampering, flooding, API targeting, or other weaknesses.

The responsibility for maintaining provenance and integrity in how information is collected, stored, and utilized falls squarely on the organizations training and adopting these models, a challenge exacerbated by the interconnected nature of modern systems.

One potential way to mitigate the risk of data poisoning is Retrieval Augmented Generation (RAG), wherein the model's answers are grounded in a curated store of business-specific data retrieved at query time, reducing reliance on vast external training corpora that are harder to vet for manipulation.
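A minimal sketch of the retrieval step might look like the following. This is an illustration, not a production RAG system: the document store, tokenizer, and similarity scoring are simplified stand-ins, and real deployments typically use embedding models and vector databases instead of raw term overlap.

```python
import math
from collections import Counter

def tokenize(text):
    # Crude tokenizer: lowercase words with punctuation stripped.
    return [t.lower().strip(".,?!") for t in text.split()]

def cosine(a, b):
    # Cosine similarity between two term-frequency Counters.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def retrieve(query, documents, k=1):
    # Rank curated business documents by similarity to the query.
    q = Counter(tokenize(query))
    ranked = sorted(documents,
                    key=lambda d: cosine(q, Counter(tokenize(d))),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Ground the LLM in retrieved company data rather than its training set.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm on weekdays.",
]
print(build_prompt("What is the refund policy?", docs))
```

Because the context is drawn only from documents the business controls, an attacker would have to compromise that internal store rather than the model's sprawling training corpus, which is a much smaller and more auditable attack surface.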

Until companies are equipped to effectively address challenges like data poisoning, it’s prudent for them to exercise caution in adopting AI trends. Implementing measures such as model monitoring, routine data validation, and anomaly detection tools can bolster defenses against such threats. Otherwise, their investments in AI adoption may face significant jeopardy, potentially leading to the emergence of further vulnerabilities.
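As one example of the anomaly-detection measures mentioned above, a data-validation pipeline could screen incoming training values with a robust outlier test before they reach the model. The function below is a hypothetical sketch using the modified z-score (median and median absolute deviation), which, unlike a plain mean-based test, is not skewed by the poisoned points themselves.

```python
import statistics

def flag_anomalies(values, threshold=3.5):
    # Flag indices whose modified z-score exceeds the threshold.
    # The 0.6745 constant scales MAD to be comparable to a standard deviation.
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread to measure against
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > threshold]

# A batch of feature values with one injected outlier:
batch = [10, 11, 9, 10, 12, 10, 500]
print(flag_anomalies(batch))
```

Flagged records can then be quarantined for human review instead of being silently folded into the next training run, which is the essence of the routine data validation recommended above.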

Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security and Mobile Security.
