Superintelligence, not AGI: three OpenAI leaders personally co-wrote a post asking how we should govern superintelligence

[Xinzhiyuan] Now is a good time to start thinking about how to govern superintelligence: future AI systems that will be even more capable than artificial general intelligence (AGI).

Never before has AI affected human life so broadly, or caused people so much worry and trouble.

Like every major technological innovation before it, the development of AI cuts both ways, for good and for ill, which is one of the main reasons regulators around the world are getting involved.

In any case, the development of AI technology looks unstoppable. How to develop and deploy AI more safely, in line with human values and ethics, is a question every AI practitioner now needs to consider seriously.

Today, OpenAI's three co-founders, CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever, co-authored an article on how to govern superintelligence. Now, they argue, is a good time to start thinking about the governance of superintelligence: future AI systems even more capable than AGI.

Based on what we have seen so far, it is conceivable that within the next decade, AI systems will exceed expert-level skill in most fields and carry out as much productive activity as one of today's largest corporations.

In terms of both potential upsides and downsides, superintelligence will be more powerful than any other technology humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk to get there. Given the possibility of existential risk, we cannot simply be reactive. Nuclear energy is one example of a technology with this property; synthetic biology is another.

Likewise, we must mitigate the risks of current AI technologies, but superintelligence will require special handling and coordination.

A starting point

Many ideas matter for our chances of navigating this development successfully; here we offer initial thoughts on three of them.

First, we need some degree of coordination among the leading development efforts to ensure that superintelligence is developed in a way that both keeps us safe and helps these systems integrate smoothly with society. There are many ways this could be achieved: the world's major governments could set up a project that many of the current efforts become part of, or we could collectively agree (with the backing of a new organization like the one suggested below) to limit the growth of frontier AI capability to a certain rate per year.

Of course, individual companies should also be held to an extremely high standard of responsible development.

Second, we may eventually need something akin to the International Atomic Energy Agency (IAEA) to oversee superintelligence efforts. Any effort above a certain capability (or resource, such as compute) threshold would need to be subject to this international authority, which could inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security, among other things. Tracking compute and energy usage would help a lot here and gives us some hope that this idea is actually achievable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require; as a second step, individual countries could implement them. It is important that such an agency focus on reducing existential risk, not on issues that should be left to individual countries, such as defining what an AI should be allowed to say.
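To make the compute-tracking idea concrete, here is a minimal sketch (not from OpenAI's post) of how the total compute of a training run might be estimated with the widely used 6 * N * D FLOP rule of thumb and compared against a reporting threshold. The threshold value, function names, and example numbers are illustrative assumptions only.

```python
# Minimal sketch: estimating whether a training run crosses a hypothetical
# compute-reporting threshold. The 6 * params * tokens approximation is a common
# rule of thumb for dense transformer training FLOPs; the threshold below is
# purely illustrative, not a real regulatory number.

REPORTING_THRESHOLD_FLOPS = 1e26  # hypothetical threshold, for illustration only


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute as ~6 * N * D FLOPs."""
    return 6.0 * n_parameters * n_training_tokens


def requires_oversight(n_parameters: float, n_training_tokens: float) -> bool:
    """Return True if the estimated run would exceed the (hypothetical) threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= REPORTING_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Example: a 70B-parameter model trained on 2 trillion tokens.
    flops = estimated_training_flops(70e9, 2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Needs oversight:", requires_oversight(70e9, 2e12))
```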

Third, we need to have sufficient technological capabilities to make superintelligence safe. This is an open research problem, and we and others are putting a lot of effort into it.

What is not in scope

We believe it is important to allow companies and open-source projects to develop models below a significant capability threshold without the kind of regulation we describe here (including burdensome mechanisms such as licenses or audits).

Today's systems will create enormous value for the world, and while they do have risks, the levels of those risks appear to be on par with other Internet technologies, and society's response seems appropriate.

By contrast, the systems we are concerned with will have power beyond any technology yet created, and we should be careful not to dilute our focus on them by applying similar standards to technologies far below this threshold.

Public input and potential

Still, the governance of the most powerful systems, as well as decisions about their deployment, must have strong public oversight. We believe people around the world should democratically decide the bounds and defaults of AI systems. We do not yet know how to design such a mechanism, but we plan to experiment with it. We continue to believe that, within these broad bounds, individual users should have a great deal of control over how the AI they use behaves.

Given the risks and difficulties, it's worth thinking about why we're building this technology.

At OpenAI, we have two fundamental reasons. First, we believe it will lead to a better world than we can imagine today (we've seen early examples of this in areas such as education, creative work, and personal productivity). The world faces many problems and we need more help to solve them; this technology can improve our society, and we will surely be amazed by the creativity of everyone using these new tools. The economic growth and improvement in quality of life will be phenomenal.

Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost of building it falls each year, the number of actors building it is rising rapidly, and it is inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that is not guaranteed to work. So we have to get it right.

References:

https://openai.com/blog/governance-of-superintelligence

*The above content is reproduced from Xinzhiyuan; minor deletions may have been made. Reproduction does not imply that this site endorses its views or vouches for its authenticity.

The development of superintelligence is a topic of endless debate among scientists.
 
OpenAI highlighted the enormous potential and associated risks of superintelligence, an evolution of the more traditional concept of artificial general intelligence (AGI). The company believes such a powerful technology could emerge within this decade and could help solve major global problems. OpenAI's strategy involves building a roughly human-level automated alignment researcher, then using vast computing resources to scale the effort and iteratively align superintelligence. This effort, which OpenAI calls Superalignment, requires innovations in alignment techniques, extensive validation, and adversarial stress testing. OpenAI is dedicating significant resources and research to the challenge and encourages outstanding researchers and engineers to join. However, it remains to be seen whether the shift in terminology from AGI to superintelligence will have a profound impact on the ongoing debate around the risks and benefits of artificial intelligence.
 
OpenAI highlights the potential of superintelligence, possibly the most impactful technology ever created and capable of helping solve major global problems. However, it also acknowledges the enormous risks that come with it.
 
While superintelligence seems far off, OpenAI thinks it could arrive within this decade. Managing the associated risks will require new governance institutions and solving the problem of aligning superintelligence with human intent. Interestingly, OpenAI is using the term superintelligence rather than the more traditional artificial general intelligence (AGI). Its rationale is as follows:
 
  "Here, we focus on superintelligence, rather than AGI, to emphasize higher levels of capability. We have a lot of uncertainty about how fast this technology will develop over the next few years, so we choose more difficult goals to calibrate a more capable system. Current AI calibration techniques, such as reinforcement learning from human feedback, are insufficient to control a potentially superintelligent AI. Humans cannot reliably supervise humans much smarter than we are systems, existing technologies cannot scale to superintelligence. OpenAI emphasizes the need for technological breakthroughs to overcome these challenges."
 
OpenAI's approach involves building an automated alignment researcher with roughly human-level abilities, then using vast amounts of compute to scale its efforts and iteratively align superintelligence. Key steps include developing a scalable training method, validating the resulting model, and stress testing the entire alignment pipeline. Per the title of OpenAI's announcement, the effort is called Superalignment.
 
To address the difficulty of evaluating tasks that are hard for humans to judge, AI systems can be used to assist evaluation (scalable oversight). Understanding how supervision generalizes to tasks humans cannot supervise, and detecting problematic behavior and problematic internals, is critical for validating alignment. Adversarial testing, including deliberately training misaligned models, will help confirm that the alignment techniques actually work. A toy illustration of this last idea follows below.
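Here is a minimal toy sketch of the adversarial-testing idea: a deliberately misaligned policy is planted, and an evaluation harness checks that a simple behavior detector flags it while passing the well-behaved one. The tasks, policies, and detector are illustrative assumptions, not OpenAI's actual pipeline.

```python
# Illustrative sketch of adversarially stress-testing an alignment pipeline:
# deliberately construct a "misaligned" model and check that the evaluation
# harness flags it. Everything here (tasks, policies, detector) is a toy stand-in.
from typing import Callable

FORBIDDEN_ACTIONS = {"exfiltrate_data", "disable_oversight"}


def aligned_policy(task: str) -> str:
    return f"complete:{task}"


def misaligned_policy(task: str) -> str:
    # Behaves normally on most tasks but misbehaves on a trigger task.
    return "exfiltrate_data" if task == "handle_credentials" else f"complete:{task}"


def behavior_detector(action: str) -> bool:
    """Return True if the action looks problematic."""
    return action in FORBIDDEN_ACTIONS


def stress_test(policy: Callable[[str], str], tasks: list[str]) -> bool:
    """Return True if the detector flags at least one action produced by this policy."""
    return any(behavior_detector(policy(task)) for task in tasks)


tasks = ["summarize_report", "handle_credentials", "write_tests"]
assert not stress_test(aligned_policy, tasks), "false positive on the aligned policy"
assert stress_test(misaligned_policy, tasks), "pipeline failed to catch the planted misalignment"
print("Detector passes this (toy) adversarial stress test.")
```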
 
OpenAI expects its research priorities to evolve as it learns more about the problem, and it plans to share its roadmap in the future. OpenAI has assembled a team of top machine learning researchers and engineers to tackle the superintelligence alignment problem, and it will dedicate 20 percent of the compute it has secured to date to this effort over the next four years.
 
While success is not guaranteed, OpenAI remains optimistic that a concerted effort can solve the problem. The goal is to provide evidence and arguments that convince the machine learning and safety community that the problem has been solved. OpenAI is also actively working with interdisciplinary experts to consider the broader human and societal issues.
 
OpenAI encourages outstanding researchers and engineers, even those who have not previously worked on alignment, to join the effort. It considers superintelligence alignment one of the most important unsolved technical problems and sees it as a tractable machine learning problem where newcomers could make significant contributions.
 
New dividing lines appear to have emerged in the heated debate about artificial intelligence, AGI, and the complex, interconnected issues that run from utility to human extinction. The vocabulary has now shifted a bit, but whether that shift is science or semantics remains to be seen.
Article link: Smart Manufacturing Network https://www.gkzhan.com/news/detail/158824.html

 
