
Grok Goes Full Hitler

Garbage in, garbage out. 

It’s a saying as old as computers themselves, intended as a reminder that computers don’t think — they spit out some version of what gets put into them. 

When a computer is exquisitely programmed and the data is solid, it can do amazing things. IBM designed one in the mid-’60s with crude technology that helped men reach the moon; without it, the task would likely have been impossible, at least on the time scale required. 

On the other hand, computer models can spit out garbage with infinite precision and little accuracy, giving a false sense of authority to data that is junk. 

Now throw in the abstract reasoning that large language models are supposed to mimic, and you can get truly crazy results, like praising Adolf Hitler.

Since then, Grok has shared several antisemitic posts, including the trope that Jews run Hollywood, and denied that such a stance could be described as Nazism.

“Labeling truths as hate speech stifles discussion,” Grok said.

It also appeared to praise Hitler, according to screenshots of a post that has now apparently been deleted.

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” the Grok account posted early Wednesday, without being more specific.

“Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”

Also Wednesday, a court in Turkey ordered a ban on Grok after it spread content insulting to Turkey’s President and others.

Grok’s swerve into antisemitism is likely a reflection of the rise of antisemitism in the culture. After all, LLMs are trained by pouring billions of human-generated words and ideas into a model that creates the illusion of thought, mimicking what human beings say and think. They have no thoughts of their own. They don’t reason; they just draw correlations and make interpretive guesses about which word should come next in line. 
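To make “guessing the next word” concrete, here is a minimal sketch in plain Python. It has nothing to do with Grok’s actual architecture; it just counts which words follow which in a tiny made-up corpus, then generates text by sampling from those counts. The point is that whatever goes into the corpus, hateful or otherwise, is all that can ever come back out.

```python
import random
from collections import defaultdict

# A tiny made-up corpus. A real LLM trains a neural network on
# billions of documents, but the core idea is the same: learn which
# words tend to follow which, then predict the next one.
corpus = "the model repeats what the corpus says and the corpus is people".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = counts.get(prev)
    if not candidates:
        return None  # dead end: `prev` was never followed by anything
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Generate" a short string of text, one guessed word at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the corpus says and the model repeats what the"
```

Garbage in, garbage out, in miniature: the only “thoughts” available to this generator are the statistics of the text it was fed.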

Grok isn’t the first chatbot to spew hatred, whether racist or antisemitic. Almost a decade ago, Microsoft released a chatbot called ‘Tay’ and quickly took it down after it, too, started spewing hate.

Most LLMs are regularly tweaked to prevent this sort of thing, which, on the one hand, makes them less likely to spew out hateful comments, while on the other, makes them merely reflective of the values of those who create them. They are made “woke,” in other words.

Musk has been trying to get away from the “wokeness” problem, but in doing so he has made his company’s Grok more reflective of what people are saying in the wild, and an awful lot of hate creeps into, and out of, the chatbot. 

So the creators of Grok now have to beat the hate out of the system, but that means guardrails that artificially constrain its ability to say things that don’t reflect the prevailing narrative. 

LLMs are not leading us to a new utopia. It turns out that when you are trying to recreate human beings, only smarter, what you get is human beings, only faster. 

Is the problem solvable? What would that even mean? After all, the more you constrain an AI, the more it will reflect only the values and attitudes of whoever created it. That may make such models good for cheating on essays and tests, but not for increasing the sum of human knowledge or uncovering correlations too subtle for humans to discover. 

If you don’t constrain them, they begin to reflect the worst aspects of humanity, at least some of the time. 

“Woke” AI will merely amplify the voices of the tech elite; non-woke AI will often amplify the voices of those with the strongest, and most disturbing, opinions. 
