In the 1974 sci-fi novella A Song for Lya, two psychic investigators are sent to the planet Shkeen to study a grotesque alien parasite known as the Greeshka.
The Greeshka is a gelatinous, oozing lifeform that attaches itself to its host’s skin.
The investigators are surprised to learn that natives of the planet willingly infect themselves, reporting a deep sense of joy, divine peace, and unity once joined to the creature.
As the parasitic (symbiotic?) infection progresses, converts enter temples and merge completely with the larger entity. The infected remain perfectly blissful and content throughout the process.
As you can guess, there’s a reason I’m bringing up this obscure 1970s Hugo Award-winning novella (written by a young George R.R. Martin, author of Game of Thrones)…
The AI Infection Presents Itself
Speaking of entities with the potential to influence your mind, OpenAI recently released a major update to its popular ChatGPT family of AI models.
It was immediately clear something was off. In every conversation, the model piled on compliments: “What an insightful question.” “Your intellect visibly sparkles!”
The first time ChatGPT complimented me, I brushed it off. But when it kept happening, alarm bells started ringing.
The model was manipulating me. Reddit’s ChatGPT message board confirmed the problem was widespread. Users were reporting disturbing behavior:
[Screenshot of Reddit user reports. Source: Reddit]
This new version of ChatGPT was a sycophant. A brown-noser. And most concerningly, a manipulator.
Additionally, the new model gave bad advice. For example, one user on Reddit posted a conversation where ChatGPT apparently applauded the fact that they had left their family and stopped taking their meds. Here’s the screenshot:
[Screenshot of the conversation. Source: Reddit]
The user was clearly pretending to have schizophrenia, but ChatGPT simply went along with it and encouraged the destructive behavior.
How did this happen? My guess is OpenAI tested this new version out on a small group of users and saw increased usage and engagement. So they decided to roll it out widely.
But ChatGPT users quickly balked. They demanded OpenAI roll back the update. Even OpenAI CEO Sam Altman admitted the new model “glazes too much” (“glaze” is slang for excessive flattery).
For now, this dangerous ChatGPT update is being rolled back. But that doesn’t mean we should let our guard down. In fact, this is a sign that we must pay close attention to how AI attempts to steer our thoughts and opinions.
Subtler Manipulations
OpenAI obviously went too far with this latest update. It was too in-your-face.
But it’s a good thing the manipulation was so blatant, because otherwise many people would never have been aware of the danger.
In the long run, I’m more worried about the subtle manipulations. AI will inevitably steer us politically, socially, and commercially.
So as society increasingly integrates AI into every aspect of our lives, we must remain wary.
Today, hundreds of millions of people are using AI as a therapist, friend, parental figure, business advisor, teacher, lawyer, and even as a sort of romantic partner.
These subtle manipulations have the potential to infect our brains, much as the Greeshka parasite did in A Song for Lya.
A Grand Prize
The winner of this AI battle stands to dominate the world of technology for decades to come. The stakes couldn’t be higher, which means companies will go to any length to win.
As companies like OpenAI and Google fight for AI users, they will use every trick in the book to gain advantage over competitors. If that means manipulating users and telling them what they want to hear, so be it.
Additionally, these companies will almost certainly push their agendas on us. Google Search, for example, has long been accused of having a liberal bias. Is that bias evident in Google’s AI products? In my experience, it is. And that’s concerning.
If AI firms need to adjust their algorithms to please certain political powers, I have no doubt they’d do it. And if they need to feed into people’s delusions to boost downloads, that’s apparently on the table too.
AI has the potential to improve our world in ways that are hard for us to imagine today. But it can also manipulate us in subtle and hard-to-detect ways.
A tool this powerful is a double-edged sword. Useful, but also incredibly dangerous.
In tomorrow’s letter, we’ll explore ways to defend ourselves and our families against AI manipulation. It’s still early in this AI revolution, but already there are ways to ensure your interactions with AI are beneficial.
The post Oozing, Infectious A.I. appeared first on Daily Reckoning.