OpenAI says NO to election bot as another company suffers backlash from its own AI tool


OpenAI’s ChatGPT, renowned for its conversational capabilities and vast knowledge, has recently taken a proactive stance ahead of upcoming general elections in several nations, including India and the United States. To prevent potential misuse, the Microsoft-backed company has decided to exercise greater control over how its AI tool is used, hoping to avoid inadvertent complications.

Effective immediately, the AI model will refrain from responding to election-related queries, a precautionary measure intended to prevent unintended consequences or erratic behavior.

This decision coincides with OpenAI’s suspension of the account of Delphi, an app development firm entrusted with creating dean.bot, a virtual assistant designed to engage with voters in real time since May of this year. The suspension was attributed to Delphi’s failure to adhere to guidelines set by OpenAI. Given the potential impact of such projects on the 2024 US elections, similar initiatives have been suspended until further notice.

In a parallel development underscoring the risks of AI unpredictability, DPD, a France-based parcel delivery service, was compelled to suspend its recently implemented AI customer support service. The courier company’s AI-powered chatbot began delivering responses that mimicked human conversation; unfortunately, some of these responses were inappropriate and generated unwarranted negative publicity for the company.

In some instances, the DPD chatbot described the company it represented as ineffective at serving its customers, and its off-script personal exchanges left customers with the impression that they had reached the wrong customer service desk or were being encouraged to take their business elsewhere.

In response, DPD promptly restricted the use of its conversational bot until a resolution is found. The company is also investigating whether external influences, such as hacking or unauthorized programming, contributed to the bot’s unexpected behavior. The potential vulnerability of AI platforms to external manipulation underscores the need for heightened security measures in the rapidly advancing field of artificial intelligence.

Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security and Mobile Security
