Facebook AI algorithms have failed immensely, say UK Ministers


UK Ministers are being urged to rethink the amendments to the Online Safety Bill that give social media giant Facebook (FB) special powers to use AI technology to weed out hateful content and disinformation.

According to an article published in the Wall Street Journal, the Online Safety Bill grants tech companies such as FB powers to use Artificial Intelligence to curb the spread of harmful content.

In practice, however, Facebook's machine-learning-driven AI technology has failed to curb vile content.

“Mark Zuckerberg’s company cannot differentiate a human from a primate, or a car crash from a car parked in a mechanic shed. So, under such circumstances, it wouldn’t be wise to use it to filter harmful content like hate speech,” said Philip Bond of ResPublica, a Britain-based independent research firm.

He added that automating content moderation has not solved the underlying problem at all. Instead, it has created a new situation that needs special and immediate attention.

Mr. Bond was probably referring to the discussion Mr. Zuckerberg had with the US Senate almost two years ago, in which he claimed AI tools could help detect and remove harmful content.

ResPublica has therefore put a petition before a few decision-making parliamentarians who, after review, concluded that Facebook’s AI technology has failed to live up to the expectations of those in favor of automated content moderation tools.

Note: The revelation comes just a couple of weeks after Frances Haugen testified before Congress that Facebook consistently puts profits before people, shares its user information with ad companies, and assists some companies in spreading misinformation.

Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security and Mobile Security.
