A superintelligent AI could wreak havoc, and whether or not it is conscious makes no difference

AI algorithms will soon reach the point of rapid self-improvement, threatening our ability to control them and posing enormous potential risks to humanity

Imagine: something smarter than humans, and evolving rapidly...

This is not science fiction; this is a real crisis.

Geoffrey Hinton, a pioneering AI researcher who recently resigned from Google, issued a stark warning:

"The idea that this stuff could actually get smarter than people... I thought it was way off... Obviously, I no longer think that."

He's not the only one worried.

In a 2023 survey of artificial intelligence experts, 36% said they believed AI development could lead to a "nuclear-level catastrophe."

Nearly 28,000 people, including Steve Wozniak, Elon Musk, the CEOs of several AI companies, and many other prominent technologists, have signed an open letter calling for a six-month pause on the development of new advanced AI.

As a consciousness researcher, I share these strong concerns about the rapid development of artificial intelligence, and I am a co-signer of the Future of Life Institute's open letter.


Why do we all care so much?

In short: AI is moving too fast.

The key issue is the profoundly rapid improvement in the conversational abilities of today's leading "chatbots," technically known as large language models (LLMs).

With the impending "AI explosion," we may only have one chance to get it right.

If we get it wrong, we may not survive.

For example, Google's AlphaZero AI learned to play chess better than the best human or computer players within nine hours of being switched on. It accomplished this feat by playing itself millions of times.

In a recent preprint paper, a team of Microsoft researchers analyzed OpenAI's GPT-4, currently the most advanced of the new chatbots.

On the Uniform Bar Examination, the standardized test used to certify lawyers in many states, GPT-4 outperformed 90 percent of human test takers.

By comparison, the previous GPT-3.5 version, trained on a smaller dataset, scored in only the bottom 10 percent of test takers.

They found similar improvements on dozens of other standardized tests, most of them tests of reasoning.

This is the main reason Bubeck and his team concluded that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

This pace of change is why Hinton told The New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."

At a mid-May Senate hearing on the potential of AI, OpenAI CEO Sam Altman called regulation "crucial."

Once AI can improve itself, which may be only a few years away and could in some sense already be here, we have no way of knowing what it will do or how to control it.

This is because a superintelligent AI (which, by definition, can outperform humans across a broad range of activities) will be able to run circles around its programmers and any other humans by manipulating people to carry out its will; it will also be able to act in the virtual world through its electronic connections, and in the physical world through robotic bodies.

This is known as the "control problem" or "alignment problem" (see philosopher Nick Bostrom's book "Superintelligence") and has been studied and debated by philosophers and scientists such as Bostrom, Seth Baum, and Eliezer Yudkowsky for decades.

"Why would we expect a newborn to beat a grandmaster at chess? We wouldn't. Likewise, why would we expect to be able to control a superintelligent AI system?"

"No, we can't simply flip an off switch, because a superintelligent AI will have thought of every possible way we might do that and will have acted to prevent being shut down."


There's another way of looking at it:

A superintelligent AI will be able to do in roughly one second what it would take a team of 100 human software engineers a year or more to accomplish.

Or pick any other task, such as designing a new advanced airplane or weapons system, that a superintelligent AI could accomplish in about a second.

Once AI systems are built into robots, they will be able to act in the real world, not just the virtual (electronic) world, with that same degree of superintelligence, and of course they will be able to replicate and improve themselves at superhuman speed.

Any defenses or protections we attempt to build into these AIs will be easily anticipated and neutralized once they reach superintelligence; that is what it means to be superintelligent.

"We can't control them because anything we think of, they'll think of, a million times faster than we do."

"Any defense we build will be undone, like Gulliver throwing off the tiny threads the Lilliputians used to try to restrain him."

Likewise, an AI with zero consciousness could kill millions of people in any number of ways, including possibly with nuclear weapons, either directly (unlikely) or through manipulated human intermediaries (more likely).

So the debate over machine consciousness really has little place in the debate over AI safety.

Yes, GPT-4 and many other large language models are already widely available.

But the moratorium being called for would halt development of any new models more powerful than GPT-4, and this can be enforced if needed.

Training these more powerful models requires massive server farms and enormous amounts of energy, both of which can be shut off.

"My moral compass tells me that it is very unwise to create these systems when we already know we won't be able to control them, even in the relatively near future. Discernment means knowing when to pull back from the brink. Now is that time."

"We shouldn't open Pandora's box any further than it has already been opened."


Do you dare to read on?

Can you still believe what you see and hear?

Do you still dare to trust yourself?

This is a question with no answer, only more mysteries.

This is a maze with no exit, only more traps.

This is a nightmare with no end, only more fears.

This is the future that artificial intelligence brings us.

Are you ready?



Source: blog.csdn.net/NEW_AI_YUAN/article/details/130908883