Meta has introduced BlenderBot 3, an AI that can hold a conversation with anyone online without being a jerk.
Meta says BlenderBot 3 is designed to improve its conversational abilities and safety through user feedback, learning from helpful feedback while avoiding unhelpful or dangerous responses.
“Unhelpful or dangerous responses” may be an understatement. Microsoft had to shut down its Twitter bot, dubbed Tay, in 2016 because it “went from a happy-go-lucky, human-loving chat bot to a full-on racist.”
Meta requires would-be BlenderBot 3 testers to acknowledge that “this bot is for research and entertainment only and is likely to make untrue or offensive statements” before speaking with it.
BlenderBot 3 testers have asked the bot about Meta CEO Mark Zuckerberg and US politics. Because the bot learns from its conversations, its responses to the same prompt can be hard to reproduce. Meta claims BlenderBot 3 improved by 31% on conversational tasks, is twice as knowledgeable, and is 47% less factually inaccurate. Only 0.16 percent of BlenderBot’s responses were flagged as rude or inappropriate.
Meta’s AI team blogged about BlenderBot 3 and posted a FAQ on the chatbot’s website. The company hasn’t specified how long this US-only experiment will run.