
Superintelligence Will Never Arrive

Readers know at least two things about artificial intelligence (AI). The first is that an AI frenzy has been driving the stock market higher for the past three years even with occasional drawdowns along the way. The second is that AI is a revolutionary technology that will change the world and potentially eliminate numerous jobs, including jobs requiring training and technical skills.

Both points are correct, with numerous caveats. AI has been driving the stock market to record highs, but the market has the look and feel of a super-bubble. The crash could come at any time and take the market down by 50% or more.

That’s not a reason to short the major stock indices today. The bubble can last longer than anyone expects, and if you short the indices, you can lose a lot of money by being wrong. But it is advisable to lighten up on equity allocations and increase your allocation to cash to avoid the worst damage when the crash does come.

On the second point, AI will make some jobs obsolete or easily replaceable. Of course, as with any new technology, it will create new jobs requiring different skills. Teachers will not become obsolete. They’ll shift from teaching the basics of math and reading, which AI does quite well, to teaching critical thinking and reasoning, which computers do poorly or not at all. Changes will be pervasive, but they will still be changes and not chaos.

The Limitations

Artificial intelligence is a powerful force, but there’s much less there than meets the eye. AI may be confronting material constraints in terms of processing power, training sets and electricity generation. Semiconductor chips keep getting faster and new ones are on the way, but these chips consume enormous amounts of energy, especially when installed in huge arrays in new AI data centers. Advocates are turning to nuclear power plants, including small modular reactors, to supply the energy needs of AI. The demand is non-linear: each small advance in output requires a disproportionately larger amount of computing power and the energy to run it. AI is fast approaching practical limits on its ability to achieve greater performance.
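For readers who want to see the arithmetic behind that claim, here is a minimal sketch in Python. It assumes a power-law relationship between compute and model loss of the kind reported in the scaling-law literature; the constants `k` and `alpha` are illustrative placeholders, not measured values.

```python
# Illustrative sketch only: toy constants, not measured data.
# Scaling-law studies find that model loss falls roughly as a power law
# in compute, loss = k * C**(-alpha), with alpha small. Inverting that
# relationship shows why each small gain demands outsized compute (and
# therefore energy).

def compute_needed(target_loss: float, k: float = 1.0, alpha: float = 0.05) -> float:
    """Compute C required to reach target_loss under loss = k * C**(-alpha)."""
    return (k / target_loss) ** (1 / alpha)

baseline = compute_needed(1.0)
for target in (0.9, 0.8, 0.7):
    ratio = compute_needed(target) / baseline
    print(f"cutting loss from 1.0 to {target:.1f} takes ~{ratio:,.0f}x the compute")
```

With these toy numbers, a 10% improvement in loss costs roughly 8x the compute and a 30% improvement costs more than 1,000x. That is the non-linear dynamic described above.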

This near-insatiable demand for energy means that the AI race is really an energy race. That could make the U.S. and Russia the two dominant players (sound familiar?), since China depends on Russia for energy and Europe depends on the U.S. and Russia. Sanctions on Russian energy exports can actually help Russia in the AI race, because natural gas that cannot be exported can be stored and used inside Russia to support AI and cryptocurrency mining. It’s the law of unintended consequences applied to the short-sighted Europeans and the resource-poor Chinese.


Your Editor touring the HiPerGator AI computer located at the University of Florida in Gainesville, Florida. The HiPerGator is the third-fastest non-government computer in the world. It runs on NVIDIA semiconductors generously donated by an alum who is the co-founder of NVIDIA. I use the HiPerGator in connection with my work for the Florida Institute of National Security, which uses AI to explore kinetic and financial war fighting scenarios. I have built extensive neural networks that will be running on the HiPerGator.

AI Lacks Common Sense

Another limitation on AI, and one that is not well known, is the Law of Conservation of Information in Search. This law is backed up by rigorous mathematical proofs. What it says is that AI cannot find any new information. It can find things faster, and it can make connections that humans might find almost impossible to make. That’s valuable. But AI cannot find anything new; it can only seek out and find information that is already there for the taking. New knowledge comes from humans in the form of creativity, art, writing and original work. Computers cannot perform genuinely creative tasks. That should give humans some comfort that they will never be obsolete.
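A toy example makes the point concrete. The snippet below is an illustration, not a proof: the corpus, the query and the crude word-overlap ranking are all invented for the example. The thing to notice is that nothing outside the corpus can ever appear in the output, no matter how clever the ranking becomes.

```python
# Toy retrieval system: it can rank and cross-reference its corpus at
# machine speed, but every answer it returns was already in the corpus.
# The documents and query are invented for this example.
corpus = {
    "doc1": "interest rates and inflation move bond prices",
    "doc2": "semiconductor supply chains concentrate in East Asia",
    "doc3": "gold has historically hedged currency debasement",
}

def search(query: str) -> list[str]:
    """Return names of corpus documents sharing words with the query, best first."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), name)
              for name, text in corpus.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(search("why do bond prices fall when inflation rises"))  # ['doc1']
# The output set is bounded by the input set: search retrieves, it does
# not create.
```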

A further problem in AI is the dilution and degradation of training sets as more and more of their content consists of AI output from prior processing. AI is prone to errors, hallucinations (better called confabulations) and inferences that have no basis in fact. That’s bad enough. But when that output enters the training set (basically every page on the internet), the quality of the training set degrades, and future output degrades in sync. There’s no good solution to this except careful curation. And if you have to be a subject matter expert to curate training sets and then evaluate output, the value-added role of AI is greatly diminished.
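The degradation dynamic can be seen in a toy simulation. The sketch below is a deliberate caricature, assuming that each “generation” of AI is trained on the previous generation’s output and, like real models, underrepresents rare cases (here, by simply dropping the extreme 10% of data on each side). The numbers are invented; the direction of the effect is the point.

```python
import random
import statistics

# Toy sketch of training-set degradation ("model collapse"). Generation 0
# is original human-made data. Each later generation fits a Gaussian to
# samples produced by the previous generation, after the rare tail
# examples have dropped out of the training set. Diversity, measured as
# standard deviation, shrinks generation by generation.
random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: the original data distribution
for gen in range(8):
    samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
    kept = samples[100:-100]  # tails vanish from the "training set"
    mu, sigma = statistics.fmean(kept), statistics.stdev(kept)
    print(f"gen {gen}: spread of surviving data = {sigma:.3f}")
```

Each pass loses the tails, so the surviving data grows narrower and blander, which is exactly why curation by knowledgeable humans matters.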

Computers also lack empathy, sympathy and common sense. They process, but they do not really think like humans. In fact, AI does not think at all; it’s just math. In one recent experiment, an AI computer was entered into a competition with a group of 3- to 7-year-olds. The challenge was to draw a circle with the tools at hand: a ruler, a teapot and a third, irrelevant object such as a stove. The computer reasoned that a ruler was a drafting instrument like a compass and tried to draw a circle with the ruler. It failed. The children saw that the bottom of a teapot was a circle and simply traced the teapot to draw perfect circles. The AI system used associative logic. The children used common sense. The children won. That result will not vary in future contests, because common sense (technically, abductive logic) cannot be programmed.

High-flying AI companies are quickly finding that their systems can be outperformed by newer systems that simply use big-ticket AI output as a baseline training set. This is a shortcut to high performance at a small fraction of the cost. The establishment AI companies like Microsoft and Google call this theft of IP, but it’s no worse than those giants using existing IP (including my books, by the way) without paying royalties. It may be a form of piracy, but it’s easy to do and almost impossible to stop. This does not mean the end of AI. It means the end of sky-high profit projections for AI. The return on the hundreds of billions of dollars being spent by the AI giants may be meager.
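Mechanically, the shortcut works something like the sketch below, a crude stand-in for what the industry calls distillation. Everything here is invented for the example: `expensive_teacher` mimics a costly frontier model queried over an API, and the “student” simply learns from the teacher’s answers rather than from raw training data.

```python
# Minimal distillation sketch. A cheap student model learns from an
# expensive teacher model's outputs instead of from original data.

def expensive_teacher(prompt: str) -> str:
    """Stand-in for a costly frontier model behind an API."""
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
        "author of Hamlet?": "Shakespeare",
    }
    return canned.get(prompt, "unknown")

# Step 1: harvest the teacher's answers as a synthetic training set.
prompts = ["capital of France?", "2 + 2?", "author of Hamlet?"]
synthetic_data = {p: expensive_teacher(p) for p in prompts}

# Step 2: "train" the student on the teacher's outputs (here, trivially
# memorize them; a real student would fit a smaller model to them).
student = synthetic_data.copy()

# The student now matches the teacher on these prompts at a fraction of
# the cost, without ever seeing the teacher's original training data.
assert all(student[p] == expensive_teacher(p) for p in prompts)
print(student)
```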

Sam Altman: Innovator or Salesman?

The best-known figure in the world of AI is Sam Altman. He’s the head of OpenAI, which launched the ChatGPT app a few years ago. AI began in the 1950s, seemed to hit a developmental wall in the 1980s (a period known as the AI Winter), was largely dormant in the 1990s and early 2000s, then suddenly came alive again in the past ten years. ChatGPT was the fastest-growing consumer app in history over its first few months and has hundreds of millions of users today.

Altman was pushed out by the board of OpenAI last year. The company was intended to be a non-profit entity developing AI for the good of mankind; Altman wanted to turn it into a for-profit entity as a prelude to a multi-hundred-billion-dollar IPO. When the top engineers threatened to quit and follow Altman to a new venture, the board quickly reversed course and brought him back, although the exact legal structure remains under discussion.

Meanwhile, Altman has charged full speed ahead with his claims about superintelligence, also known as artificial general intelligence (AGI), with the key word being “general,” meaning the system can think like humans, only better. One way to understand superintelligence is the metaphor that humans will be to the computer as apes are to humans. We’ll be considered smart, but not smarter than our machine masters. Altman said that “in some ways ChatGPT is already more powerful than any human who ever lived.” He also said he expects AI machines “to do real cognitive work” by 2025 and to create “novel insights” by 2026.

This is all nonsense, for several reasons. The first, as noted above, is that training sets (the materials studied by large language models) are becoming polluted with the output of prior AI models, so the machines are getting dumber, not smarter. The second is the Law of Conservation of Information in Search, also described above. This law (supported by applied mathematics) says that computers may be able to find information faster than humans, but they cannot find any information that does not already exist. In other words, the machines are not really thinking and are not really creative. They just connect dots faster than we do.

A new paper from Apple concludes, “Through extensive experimentation across diverse puzzles, we show that frontier LRMs [Large Reasoning Models] face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.” This and other evidence point to AI reaching limits of logic that brute-force computing power cannot overcome.
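To see why an “adequate token budget” eventually stops being adequate, consider Tower of Hanoi, a classic puzzle of the kind used in such studies. The shortest solution grows exponentially with the number of disks, so any fixed budget for written-out reasoning is quickly overwhelmed. The tokens-per-move figure and the budget below are arbitrary assumptions for illustration, not numbers from the Apple paper.

```python
# Tower of Hanoi: the minimum solution is 2**n - 1 moves for n disks,
# so spelled-out reasoning grows exponentially with puzzle size. The
# token cost per move and the budget are hypothetical.

def hanoi_moves(n: int) -> int:
    """Minimum number of moves to solve Tower of Hanoi with n disks."""
    return 2 ** n - 1

TOKENS_PER_MOVE = 20   # rough assumption for one written-out move
BUDGET = 100_000       # hypothetical reasoning-token budget

for n in range(5, 26, 5):
    tokens = hanoi_moves(n) * TOKENS_PER_MOVE
    verdict = "fits" if tokens <= BUDGET else "exceeds budget"
    print(f"{n} disks: {hanoi_moves(n):>12,} moves ~ {tokens:>14,} tokens ({verdict})")
```

At 10 disks the puzzle fits comfortably; at 15 it already blows through the budget; at 25 it needs hundreds of millions of tokens. No practical budget keeps up with that curve.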

Finally, no developer has ever been able to code abductive logic, which is really common sense or gut instinct. That’s one of the most powerful reasoning tools humans possess. In short, superintelligence will never arrive. More and more, Altman looks like just another Silicon Valley salesman pitching the next big thing with not much behind it.
