In the Era of the Technology Explosion, How Can We Prevent AI from Going Out of Control?


Source: technologyreview

Compiled by Squidward

More than 350 technology company executives and scientists have signed a joint statement warning of the potential dangers of artificial intelligence (AI), going so far as to say that the risk of losing control of AI is on a par with pandemics and nuclear war. The statement, released by the nonprofit Center for AI Safety, was signed by some of the biggest names in tech, including leaders from OpenAI, Microsoft, and Google, the very companies that stand to benefit most from generative AI.

These worries are widely shared. A survey at the Yale CEO Summit found that 42% of participating executives believe AI has the potential to cause a catastrophe that wipes out humanity within the next ten years.

An AI system "going out of control" refers to unforeseen behavior or results produced while the system is running: outcomes that diverge from human expectations and may even cause harm. For example, a failure in a self-driving system can cause a vehicle to go out of control, posing a serious threat to the safety of the driver and other road users. Or an AI system can misjudge a person's behavior, subjecting that person to unfair treatment or punishment and seriously harming their rights and freedoms.

It is not yet clear exactly how AI could destroy humanity. Many experts speculate that bad actors could use its massive data sets to create biological weapons or engineer new viruses. AI could also be used to break into mission-critical computer systems, or to deliberately spread false information that triggers panic around the world. In another scenario, AI's sheer effectiveness could itself become the problem: imagine an algorithm so single-mindedly dedicated to eradicating a specific disease that it destroys everything in its path.

While many doomsday scenarios may never come to pass, AI really does have the capacity to cause the dangers being discussed. Part of the problem is that the technology is advancing faster than anyone expected. Take ChatGPT, the popular generative AI service launched by OpenAI: when Accounting Today gave it the CPA exam in April, it failed miserably, yet within a few weeks it was passing with flying colors.

01

Developing Regulations for AI

As technology companies large and small jump on the generative AI bandwagon, building data sets on a scale unimaginable just a few short months ago, the field will clearly require regulatory oversight.

In October 2022, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, calling for respect for privacy and fairness when building or using artificial intelligence. The blueprint identifies five principles to guide the design, use, and deployment of AI in order to protect the American public:

  • Safe and Effective Systems: AI solutions should be thoroughly tested to assess concerns, risks, and potential impacts.

  • Algorithmic Discrimination Protections: solutions should be designed fairly so as to remove the possibility of bias.

  • Data Privacy: people should have the right to decide how their data is used and should be protected from privacy violations.

  • Notice and Explanation: there should be clear transparency whenever AI is in use.

  • Human Alternatives, Consideration, and Fallback: people should be able to opt out of interacting with AI in favor of a human alternative.

Since the blueprint was published, and ChatGPT and other generative AI solutions were released, the Biden administration has been holding regular meetings to better understand the technology and develop a regulatory strategy.

In mid-June 2023, the European Parliament moved a step closer to adopting its own regulations on the safe use of artificial intelligence. The draft AI Act prohibits real-time facial recognition in public places, social scoring systems, and models that use manipulative techniques; it also requires full disclosure when content has been generated by an AI system, along with disclosure of data sources upon request.

02

How to Enforce Regulations

While it is clear what needs to be included in a code of conduct for transparent, fair, safe, and just AI, how to enforce it is the million-dollar question. Here are some considerations.

Create a Standards Body

As with the US Food and Drug Administration's (FDA's) Good Manufacturing Practice (GMP) regulations for life sciences companies, clear guidelines need to be developed and communicated to companies wishing to earn a "Good AI Practice" designation. That would require oversight by a federal agency akin to the FDA, tasked with inspecting any company developing an AI solution and collecting the required documentation.

Enforce Disclaimers

Whether generative AI is used to produce content, marketing materials, software code, or research, a highly visible public disclaimer should be required indicating that some or all of the output is machine-generated.
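As a minimal sketch of what such a requirement could look like in practice, the snippet below stamps model output with a visible provenance notice; the `run_model` helper and the disclaimer wording are hypothetical illustrations, not part of any standard.

```python
# A minimal sketch of machine-generated-content labeling.
# `run_model` stands in for any generative AI call; it and the
# disclaimer wording are hypothetical illustrations.
from datetime import datetime, timezone

DISCLAIMER = "NOTICE: Some or all of the following content was machine-generated."

def run_model(prompt: str) -> str:
    """Placeholder for a real generative-model call."""
    return f"Draft text responding to: {prompt}"

def generate_with_disclaimer(prompt: str, model_name: str) -> str:
    """Return model output prefixed with a visible provenance notice."""
    body = run_model(prompt)
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"{DISCLAIMER}\n(model: {model_name}, generated: {stamp})\n\n{body}"

if __name__ == "__main__":
    print(generate_with_disclaimer("Summarize Q2 results", "example-model-v1"))
```

Attaching the notice at generation time, rather than relying on downstream publishers to add it, would make such a requirement far easier to audit.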

Conduct Independent Risk Assessments

Google and its AI research lab DeepMind recommend several steps to ensure that "high-risk AI systems" ship with detailed documentation about how they work. Foremost among these recommendations is that risk assessments by independent bodies should be made mandatory.
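To make the idea concrete, here is a sketch of the kind of machine-readable documentation an independent assessor might collect; every field name below is hypothetical, since no regulation currently mandates such a schema.

```python
# A sketch of machine-readable documentation for a "high-risk AI system".
# All field names and example values are hypothetical illustrations.
from dataclasses import dataclass, asdict
import json

@dataclass
class RiskAssessmentRecord:
    system_name: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str]
    evaluated_risks: dict[str, str]          # risk -> mitigation
    independent_assessor: str = "unassigned"
    assessment_passed: bool = False

record = RiskAssessmentRecord(
    system_name="loan-screening-model-v3",   # hypothetical system
    intended_use="Rank consumer loan applications for human review",
    training_data_sources=["internal_applications_2018_2023"],
    known_limitations=["Not validated for applicants under 21"],
    evaluated_risks={"disparate impact": "quarterly fairness audit"},
)

# Serialize for submission to a (hypothetical) oversight body.
print(json.dumps(asdict(record), indent=2))
```

A standard, machine-readable format would let an oversight body compare assessments across vendors instead of parsing free-form reports.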

Make AI Explainable

When AI makes decisions that affect people's lives, individuals should be able to fully understand how the algorithm made those decisions.
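One simple form such an explanation can take, sketched below under toy assumptions, is reading per-feature contributions directly off a linear model; the loan-approval features and training data are invented for illustration.

```python
# A minimal explainability sketch: for a linear model, each feature's
# contribution to a single decision's log-odds is coefficient * value.
# The loan-approval features and data are invented toy examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_10k", "debt_ratio", "late_payments"]  # hypothetical
X = np.array([
    [5.5, 0.30, 1],
    [2.0, 0.80, 6],
    [9.0, 0.10, 0],
    [3.0, 0.65, 4],
])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied (toy labels)

model = LogisticRegression().fit(X, y)

applicant = np.array([[4.0, 0.50, 2]])
contributions = model.coef_[0] * applicant[0]  # per-feature pull on the log-odds
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
decision = "approved" if model.predict(applicant)[0] == 1 else "denied"
print("decision:", decision)
```

For nonlinear models, post-hoc attribution tools such as SHAP or LIME serve the same purpose, assigning each input a share of responsibility for an individual decision.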

Build AI Governance in the Cloud

When artificial intelligence is deployed in the public cloud, the federal government should not only require approval but also assign dedicated personnel to closely monitor the cloud environment and the projects deployed in it, so that malicious AI cannot slip through.

Make AI Ethics a Compulsory Course for All Data Scientists

All software engineering and data science students should be required to complete a course in AI ethics before working in the industry, and an AI ethics certification could be created and enforced. Just as doctors take the Hippocratic Oath and pledge to "first, do no harm," data scientists should take an equivalent oath when building AI solutions.

03

Conclusion

We are in a new cycle of technological development, and generative artificial intelligence has the potential to revolutionize every aspect of society, for good and for ill. As at every other major turning point in history, humanity needs to keep its hands on the "steering wheel" of AI development, make judgments grounded in fairness, transparency, and respect for human rights, and ensure that the potential of artificial intelligence is used to benefit humankind.





Origin: blog.csdn.net/tuoluocaijing/article/details/132439735