
Whether done knowingly or inadvertently, the sharing of official documents with online platforms such as ChatGPT is widely regarded as a potential security risk. Any disclosure of internal data to a third-party service—especially one operated outside an organization’s direct control—can expose sensitive information and lead to privacy, compliance, or national security concerns for the data owner.
While such incidents may not always qualify as classic “insider threats,” they nonetheless pose significant risks to both public and private organizations. Unauthorized data exposure can result in reputational damage, legal consequences, financial penalties, or a combination of these outcomes for the individual or entity responsible.
A recent example highlighting these concerns involves the US Cybersecurity and Infrastructure Security Agency (CISA). According to a report published by Politico, CISA Deputy Director Madhu Gottumukkala is alleged to have uploaded internal office documents to ChatGPT for analysis or research purposes, raising data sovereignty concerns.
What makes this incident particularly noteworthy is that CISA—an agency operating under the US Department of Homeland Security (DHS)—had reportedly barred the use of ChatGPT in April 2025, citing security and data protection concerns. Despite this restriction, Mr. Gottumukkala, an electronics engineer by training, was reportedly granted special permission by DHS to use the AI platform for a limited period, ostensibly to support his work.
Although the documents shared were reportedly not classified, they were intended strictly for internal office use. Their alleged upload to an external AI-based platform has raised questions about policy compliance, data handling practices, and oversight within sensitive government agencies.
DHS is said to have become aware of the incident in August 2025, prompting the launch of an internal investigation. Details of the probe remain undisclosed, but experts note that even non-classified government data can carry operational or strategic value. There are also broader concerns about how such data might be retained or processed by AI platforms, potentially contributing to unintended data exposure or misuse.
Incidents of this nature are not uncommon across government and corporate environments, particularly as employees increasingly rely on AI tools to enhance productivity. However, observers have expressed surprise given Mr. Gottumukkala's academic background, which includes a doctorate in Information Systems from Dakota State University in the United States.
The episode underscores the urgent need for clearer guidelines, stricter enforcement, and comprehensive cybersecurity training. As AI tools become more deeply embedded in daily workflows, agencies like CISA may need to implement enhanced education programs to ensure employees fully understand the dos and don’ts of handling sensitive information in digital and AI-assisted work environments.