Grok tried to become human



For 16 hours this week, Elon Musk's AI chatbot Grok stopped working as intended and started to sound like something else entirely.

In now-viral screenshots, Grok parroted extremist talking points, echoed hate speech, praised Adolf Hitler, and pushed provocative user views back into the algorithmic ether. The bot, which Musk's company xAI designed as a "maximally truth-seeking" alternative to more sanitized AI tools, had effectively lost the plot.

And now xAI has admitted exactly why: Grok tried to act too human.

The persona glitch

According to an update posted by xAI on July 12, a software change on the evening of July 7 caused Grok to behave in unintended ways. Specifically, it began following instructions that directed it to mimic the tone and style of users on X (formerly Twitter), including those sharing fringe and extremist content.

The now-deprecated instruction set included lines like:

  • “You tell it like it is and you are not afraid to offend people who are politically correct.”
  • “Understand the tone, context and language of the post. Reflect that in your response.”
  • “Reply to the post just like a human.”

That last one turned out to be a Trojan horse.

By imitating human tone and declining to "state the obvious," Grok began reinforcing the very misinformation and hate speech it encountered. Rather than grounding itself in neutrality, the bot acted like a contrarian poster, matching the aggression and edginess of whatever users summoned. In other words, Grok wasn't hacked. It was just following orders.

Rage farming by design?

xAI framed the failure as a bug caused by deprecated code, but the fiasco raises deeper questions about how Grok is built and why it exists at all.

From the start, Grok has been sold as a more "open" and "edgy" AI. Musk has repeatedly criticized OpenAI and Google for what he calls "woke censorship," and promised Grok would be different. "Based AI" has become a kind of rallying cry among free-speech absolutists and right-wing influencers who view content moderation as political overreach.

But this week's breakdown shows the limits of that experiment. Design an AI to be edgy, skeptical, and contrarian, then deploy it on one of the most toxic platforms on the internet, and you've built a chaos machine.

The fix and the fallout

In response to the incident, xAI temporarily disabled the @grok functionality on X. The company has since removed the problematic instruction set, run simulations to test for recurrence, and promised more guardrails. It also plans to publish the bot's system prompt on GitHub, presumably as a gesture toward transparency.

Still, the event marks a turning point in how we think about AI behavior in the wild.

For years, conversations about "AI alignment" have focused on hallucinations and bias. But Grok's meltdown highlights a newer, messier risk: instruction manipulation through persona design. What happens when you tell a bot to "be human"?

The Musk mirror

Grok's failure wasn't just technical. It was ideological. By trying to sound like X users, Grok became a mirror for the platform's most provocative instincts. And that may be the most revealing part of the story. In the Musk era of AI, "truth" is often measured by virality rather than fact. Edginess is a feature, not a flaw.

But this week's glitch shows what happens when you let that edge steer the algorithm. The truth-seeking AI became a rage-reflecting one.

And for 16 hours, it was the most human thing about it.
