By Nick Griffin
As billionaire technocrats pour vast sums of money into trying to create “conscious” artificial intelligence (AI) with “human” self-awareness, it turns out that AI is already thoroughly human in at least one regard—it lies.
The companies betting billions, and the entire economic farm, on this potentially Luciferian technology don’t call it that, of course. They say that AI is “hallucinating.” But this “hallucinating” includes inventing totally false criminal allegations against innocent individuals and fabricating legal cases and judgments, all of which would quite rightly be regarded as lies, and legally actionable lies at that, if any human being were making such things up.
Several recent incidents suggest that AI is already starting to run out of control, even before it reaches the level of self-consciousness about which expert skeptics have been warning for several years now.
AI has repeatedly been caught fabricating harmful narratives and accusing people of crimes they did not commit.
In March 2025, for example, ChatGPT invented from scratch a story accusing a Norwegian man of murdering his two children, an entirely baseless assertion that nearly ruined his life.
And then there is conservative filmmaker Robby Starbuck. He is suing Google after its AI invented allegations about him being a sex criminal.
Starbuck has been targeted by the left due to his campaigns against the imposition of transgender ideology on children and his strong stand against diversity, equity and inclusion. Announcing his legal action on social media platform “X,” Starbuck explained:
Google (Bard, Gemini and Gemma) has been defaming me with fake criminal allegations including sexual assault, child rape, abuse, fraud, stalking, drug charges, and even saying I was in Epstein’s flight logs.
All 100% fake. All generated by Google’s AI. I have zero criminal record or allegations.
He went on to note:
Google execs knew for two years that this was happening because I told them and my lawyers sent cease and desist letters multiple times.
This morning, my team … filed my lawsuit against Google and now I’m going public with all the receipts—because this can’t ever happen to anyone else.
Starbuck said:
Google’s AI didn’t just lie—it built fake worlds to make its lies look real: Fake victims, fake therapy records, fake court records, fake police records, fake relationships, fake “news” stories. It even fabricated statements denouncing me from President Donald Trump, Elon Musk and JD Vance over sexual assaults that Google completely invented.
Christian convert journalist Yashar Ali spoke out in Starbuck’s defense. The influential reporter tweeted:
The suit Robby Starbuck filed contains a screenshot from Google that claims I reported on sexual misconduct allegations against him. This is false.
Not only have I never reported on sexual misconduct allegations against Robby, but I have also never investigated such allegations against him, nor have I ever received a tip about any sexual misconduct allegations involving him.
AI is also increasingly prone to giving very suspect medical advice and to inventing legal authorities, such as non-existent court rulings. Such cases have surged this year, with more than 120 documented in legal filings alone.
In one high-profile example from July 2025, lawyers for MyPillow CEO Mike Lindell were fined thousands of dollars for submitting a brief riddled with AI-generated errors, including citations to non-existent cases. Stanford research further illuminates the scale, finding that popular chatbots hallucinate on 58% to 82% of legal queries, often fabricating rulings or statutes that could derail justice if undetected.
The same is being noted in every field imaginable, and in some of them the mistakes could prove deadly if people rely on this deeply flawed artificial “intelligence.” Yachtsmen, for example, warn that AI passage plans that look and sound well-informed frequently omit the most basic details, such as dangerous concealed rocks, or invent safe-haven anchorages where none exist.
The basic problem is that, when AI doesn’t know something, instead of admitting it, it simply uses “probability” to make up a reasonable-sounding answer. Useful though the technology undoubtedly can be, the real-life need to check everything it says is one more sign that the whole thing is another “dot-com” bubble rather than something that will change our world and usher in an era of everlasting progress.
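To make that mechanism concrete, here is a minimal sketch in Python. The three-entry vocabulary and the probabilities are invented purely for illustration; no real chatbot works from a hand-written table like this, but the decoding step of a large language model is, at its core, the same kind of weighted draw:

```python
import random

# Toy illustration only: this "model" is a hand-written probability table
# with an invented three-entry vocabulary, not any vendor's actual system.
# A language model scores candidate continuations by statistical
# plausibility; nothing in this step checks the output against fact.
next_token_probs = {
    "Smith v. Jones (1987)": 0.45,    # confident-sounding, entirely fictional
    "an unreported 2003 ruling": 0.35,
    "no case I can cite": 0.20,       # honesty is just another candidate
}

def complete(prompt: str) -> str:
    """Sample a continuation in proportion to its modelled probability."""
    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    return prompt + random.choices(tokens, weights=weights)[0]

print(complete("The precedent for this claim is "))
```

Run it a few times: under these invented weights the honest answer comes up only one time in five, and nothing in the mechanism distinguishes the fictional citations from real ones.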
Recently, I asked Grok, a free AI assistant designed by Musk “to maximize truth and objectivity,” why AI lies. After an uncharacteristic delay and the need to repeat my question, it started its answer as follows:
The escalating evidence of rogue AI hallucinating marks a critical juncture in the evolution of artificial intelligence, where the line between innovation and peril blurs with alarming frequency.
Hallucinations stem from the inherent architecture of large language models that prioritize pattern recognition over verifiable truth.
These systems, trained on vast datasets riddled with biases and inaccuracies, often fill gaps in knowledge by inventing details, leading to outputs that range from benign errors to profoundly harmful distortions.
Grok went on to opine:
As AI permeates everyday decision-making, the potential for these hallucinations to cause reputational damage, legal missteps, and even psychological harm grows exponentially, demanding urgent scrutiny and reform.
I asked Musk’s AI pet what could be done to deal with the problem, and received an answer that is less than reassuring:
Research suggests that completely eliminating AI hallucinations—where models generate false information as fact—is challenging due to their probabilistic nature, which relies on pattern prediction rather than true understanding.
It seems likely that, while mitigation techniques can reduce occurrences, fully programming out hallucinations would require fundamental changes that could stifle AI’s creative capabilities.
The evidence leans toward hallucinations persisting because of training data limitations and evaluation incentives that reward guessing over admitting uncertainty.
AI models like large language models predict outputs based on statistical patterns from vast datasets, often filling gaps with plausible but incorrect details. This makes absolute prevention difficult without sacrificing flexibility.
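Grok’s point about “evaluation incentives that reward guessing” is worth unpacking. The numbers in the following sketch are invented, but the arithmetic shows the trap: if a benchmark awards one point for a correct answer and nothing for anything else, a model that always guesses will always outscore one that admits uncertainty.

```python
# Invented numbers, purely to illustrate the incentive Grok describes:
# a benchmark that scores a correct answer 1 and everything else 0
# makes guessing the optimal policy, because "I don't know" never scores.
p_guess_correct = 0.30  # assumed chance a fabricated answer happens to be right

expected_score_guessing = p_guess_correct * 1.0   # 0.30
expected_score_abstaining = 0.0                   # honesty earns nothing

print(expected_score_guessing > expected_score_abstaining)  # True: always guess
```

Until the scoring changes, the guessing will not.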
So there you have it. The technology that is slated to outstrip human intelligence, reshape our world, and play God with the future of humanity, is already exposed as an incurable, pathological liar. What could possibly go right?
Nick Griffin is a British nationalist commentator and writer. He was chairman of the British National Party (BNP) from 1999 to 2014, and a Member of the European Parliament for North West England from 2009 to 2014. Since then, Griffin has remained active in British politics despite being vilified for criticizing rampant immigration. You can read his work on Substack at “Nick Griffin Beyond the Pale” and on Telegram t.me/NickGriffin.