Google accused of data privacy invasion in the name of Gemini AI development


Google may be wading into a data privacy quagmire in its pursuit of AI advancement with the impending release of its ChatGPT rival, Gemini, an effort linked to ‘Project Ellmann’ and slated for an April 2024 debut. With Android smartphones deeply woven into daily life, and the digital troves of emails, documents, audio files, videos, and photos they generate housed in the tech giant’s cloud data centers, privacy concerns loom large.

Given the wealth of personal data Google holds on its servers, speculation has arisen that this information could be used to feed its AI chatbot. Gemini not only comprehends text but can also analyze and extract content from images, videos, and audio files stored by users.

Recent reports suggest that Google intends to grant its new Gemini chatbot access to user photos and search histories when constructing search responses, a move increasingly perceived as a privacy infringement under the guise of AI development.

Insiders from the Alphabet Inc subsidiary have reportedly leaked information on Telegram disclosing that Google has already embedded AI into users’ phones and Chrome devices to enhance service delivery. The apps ‘Android System Intelligence’ and ‘Private Compute Services,’ delivered as updates to Pixel phones and running inconspicuously in the background, raise questions about their true nature as AI tools.

The opacity surrounding what private companies do inside their data centers, and how they interact with devices integral to everyday life such as smartphones and chat assistants, adds a layer of uncertainty for users. Notably, Google faced allegations in 2019 of amassing millions of medical records on the U.S. public through ‘Project Nightingale.’ That controversy, however, faded amid the chaos of the Covid-19 pandemic and subsequent crises, eclipsed by more sensational headlines.

The pivotal question remains: is Gemini, Google’s expansive language AI model, a threat to the privacy of online users? Technology itself isn’t inherently at fault; the ethical weight rests on the decisions of the people developing and deploying it. An introspective examination by both employees and users becomes imperative in navigating the delicate balance between technological innovation and safeguarding privacy.

Naveen Goud
Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security and Mobile Security.