Research says 77 percent of corporate data is being shared with AI tools like ChatGPT by employees


Artificial intelligence (AI) tools, including large language models (LLMs) like ChatGPT, can be a transformative asset for businesses, enhancing productivity, innovation, and efficiency. However, their effectiveness and safety hinge on how responsibly they are used. Recent findings indicate a growing concern about the potential misuse of these tools in corporate settings, particularly regarding sensitive data leakage. According to research by LayerX Security, a staggering 77% of corporate data is being shared with AI tools by employees, often without them realizing the potential risks involved.

The LayerX Security Enterprise AI and SaaS Data Security Report for 2025 highlights a troubling trend: a significant portion of confidential company information is being inadvertently exposed through interactions with AI platforms like large language models. The report reveals that many employees, though well-meaning, are unknowingly contributing to data leaks that could have catastrophic consequences for their organizations.

The Scope of the Problem

Among the employees surveyed, a striking 50% admitted to pasting sensitive business data into generative AI tools. Even more concerning, 18% of these employees reported sharing highly sensitive information, including proprietary development data. This inadvertent sharing of confidential details poses a severe risk to companies, as once such information enters AI platforms, it can be stored or used in ways that are outside the control of the company.

Interestingly, despite these risks, many employees continue to use AI tools to boost their productivity. Some 45% of corporate staff acknowledged using AI tools to streamline their work processes, and within this group, nearly half turn to ChatGPT alone. This widespread adoption of AI for day-to-day tasks highlights the potential benefits but also underscores the urgency for organizations to develop better training, policies, and safeguards.

The Emerging Data Management Crisis

The findings point to an alarming trend: companies are facing a growing identity and data management crisis. If left unaddressed, this issue could escalate, exposing organizations to extreme cybersecurity risks. With so much sensitive data being shared without proper oversight, businesses could face not only intellectual property theft and data breaches but also severe reputational damage and legal consequences.

The issue is compounded by the fact that many employees are unaware of the risks associated with using AI tools. While they may be acting in good faith, their lack of awareness about how AI platforms store, analyze, and potentially share data can put the entire organization at risk.

The Need for Responsibility and Awareness

As AI continues to revolutionize the workplace, it is crucial for businesses to foster a culture of responsible AI use. This means implementing strict data management policies, providing employees with comprehensive training on AI tool usage, and ensuring that tools like ChatGPT are deployed with the appropriate security measures in place. Organizations must also consider employing AI auditing and data monitoring systems to track and control the flow of sensitive information.
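As a rough illustration of the kind of data-monitoring control described above, the sketch below shows how a gateway sitting between employees and an external AI service might scan outgoing prompts for sensitive patterns and redact them before the text leaves the organization. This is a minimal, hypothetical example: the pattern names, functions, and regexes are assumptions for illustration only, and real data loss prevention tooling relies on far richer detection than a few regular expressions.

```python
import re

# Hypothetical, illustrative patterns a prompt-scanning gateway might check.
# Real DLP systems use classifiers, document fingerprinting, and policy engines.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key_like": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def redact_prompt(prompt: str) -> str:
    """Replace matched sensitive spans with placeholders before forwarding."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt

if __name__ == "__main__":
    example = "Summarize: contact jane.doe@acme.com, API token sk-AbCdEf1234567890XYZ"
    findings = scan_prompt(example)
    if findings:
        # Policy decision: block the request, redact it, or log and warn.
        print(f"Flagged patterns: {findings}")
        print(redact_prompt(example))
```

Whether a flagged prompt is blocked outright, redacted, or merely logged is a policy choice; the point is that the check happens before the data reaches a third-party platform, where the organization loses control over it.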

In conclusion, while AI tools have the potential to be a boon for companies, their benefits can only be fully realized if they are used responsibly and securely. As businesses continue to embrace these advanced technologies, they must remain vigilant about the risks they pose to data security. Failure to do so could leave them exposed to vulnerabilities that undermine their very foundations.

Naveen Goud
Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security and Mobile Security
