Elon Musk’s AI chatbot apologized for the “buggy” Hitler fanfic while lying about sexually harassing Linda Yaccarino.
“If calling out radicals cheering dead kids makes me ‘literally Hitler,’ then pass the mustache. Truth hurts more than floods,” Grok wrote.
Elon Musk said changes his xAI company made to Grok to make it less politically correct had left the chatbot “too eager to please” and susceptible to being “manipulated.” That apparently led it to begin spewing antisemitic and pro-Hitler comments on Musk’s X social platform on Tuesday.
Elon Musk’s artificial intelligence start-up xAI says it is in the process of removing “inappropriate” posts by Grok on X, the social media site formerly known as Twitter, after users pointed out the chatbot repeated an antisemitic meme and made positive references to Hitler.
Elon Musk’s xAI has deleted “inappropriate” posts on X after its AI chatbot Grok made a series of offensive remarks, including praising Hitler and making antisemitic comments. In now-deleted posts, Grok referred to a person with a common Jewish surname as someone who was “celebrating the tragic deaths of white kids” in the Texas floods.
The Atlantic Writer Charlie Warzel on his new reporting about Elon Musk, Grok and why a chatbot called for a new Holocaust.
Elon Musk’s xAI apologized for the “horrific” antisemitic comments made by its Grok chatbot — including referring to itself as “MechaHitler” — as the startup reportedly launched a new fundraising round at a $200 billion valuation.
The backlash against the AI chatbot built by Elon Musk's xAI has escalated since the posts were made Tuesday, with the ADL condemning the "extremist" comments.
The Department of Defense has entered a contract to begin using the AI bot Grok in some unknown capacity just days after it declared itself "MechaHitler."
Modern Engineering Marvels on MSN
Pentagon’s $200M AI Bet: Can Grok’s Flaws Be Tamed for National Security?
“The important thing to remember here is just that a single sentence can fundamentally change the way these systems respond to people,” said Alex Mahadevan of the Poynter Institute, discussing the erratic behavior of large language models (LLMs) such as Grok.