Will ChatGPT fizzle out? Buffett and Musk call for a pause

ChatGPT has quickly become popular around the world and is undoubtedly the highest-profile "star product" in the field of artificial intelligence. However, as ChatGPT has come into wider use, it has been linked to a series of negative news stories: academic fraud, the creation of hacking tools, and leaks of users' sensitive chat records. Society has therefore begun to re-examine artificial intelligence technologies like ChatGPT.
Compared with the intelligent robots that previously served industry, which could only replace humans in complex, mechanical, or dangerous manual labor, ChatGPT-like artificial intelligence offers stronger conversational ability, greater flexibility, and an apparent capacity for "independent thinking," reshaping society's perception of the field. AI products released by technology giants have already been integrated into industries such as writing, programming, and painting, setting off a new round of technological change.

It is beyond dispute that once artificial intelligence technology like ChatGPT matures fully, it will serve society well and benefit humanity. At the same time, such technology is a "double-edged sword": its security issues, compliance issues, and impact on the job market have attracted the attention of many industry leaders and governments.

Many public figures have called for reflection on the development of ChatGPT-like artificial intelligence

While society at large is immersed in the "fantasy" that ChatGPT-like artificial intelligence will set off a fourth industrial revolution, many leaders in the technology industry have voiced dissenting opinions, even expressing quiet unease about its rapid development.

Buffett doubts whether artificial intelligence technology will benefit society

Recently, Buffett, the godfather of the investment world, shared his thoughts on the rapid development of ChatGPT-like artificial intelligence in a media interview. Buffett told reporters that AI has undoubtedly made incredible progress in terms of technical capability, but that there are as yet no large-scale experimental results showing its development benefits humanity as a whole. From that standpoint, he argued, AI should be developed cautiously and rationally.

Buffett's concerns are not unfounded. ChatGPT's rapid rise in popularity has obscured the security and compliance issues behind it; most people's direct experience of the technology is simply that it is advanced and efficient, without any deep understanding of its potential threats.
Judging from information disclosed by the media so far, there is already evidence that hackers are using ChatGPT-like products to write malicious code and phishing emails, to say nothing of ChatGPT's own security problems (Samsung employees previously leaked confidential information by feeding it into ChatGPT for training, and ChatGPT itself has exposed users' chat history lists, among other incidents).

Musk joins call for moratorium on training of more powerful AI systems

Before Buffett voiced his concerns about AI development, Tesla CEO Elon Musk had already joined thousands of technologists in signing an open letter calling for a pause on training artificial intelligence systems more powerful than GPT-4.

On March 29, Musk acted on this by signing an open letter, co-signed by thousands of figures from industry and academia, calling on all AI laboratories to pause, for at least six months, the training of AI systems more powerful than GPT-4 so that security protocols can be developed and implemented.

Explaining the move, Musk said that given the powerful capabilities ChatGPT has demonstrated, its security issues need oversight. He has repeatedly stressed that artificial intelligence combines great capability with enormous risk, an undoubted double-edged sword, and has even pessimistically called AI one of the biggest risks facing the future of human civilization.
It is worth noting that, unlike Buffett and Musk, who take a "negative" attitude toward AI development, Bill Gates, the former world's richest man, is relatively optimistic. He has said that pausing development will not solve the problem, and that the rational use of AI technology is the best path forward.

Countries carefully examine the development of ChatGPT-like artificial intelligence technology

Many movies, TV series, and novels have imagined artificial intelligence robots serving society and benefiting humanity, and have explored questions of safety, reliability, loyalty, and whether AI will one day replace humans and rule the world. Products like ChatGPT are slowly pushing these ideas into reality. As humans begin to truly confront the "double-edged sword" of AI development, they must seriously consider its safety, compliance, and related issues.

Italy fires the "first shot" over ChatGPT's security issues

Ten days ago, the Italian government banned ChatGPT from operating in Italy, on the grounds that OpenAI had illegally collected large amounts of Italian users' personal data and had no age-verification mechanism to keep minors from accessing inappropriate material. A few days later, the Italian data protection authority softened its stance slightly and presented OpenAI with a series of requirements; if they are met, ChatGPT will be allowed to resume operating in the country.

With that, Italy had fired the "first shot" on ChatGPT security and compliance.

Italy's measures against ChatGPT have drawn the attention of other European countries, which are now studying whether strict restrictions on ChatGPT-like artificial intelligence are needed. Among them, Spain's data protection agency has asked EU privacy regulators to assess privacy compliance issues surrounding the use of ChatGPT.

Soon afterwards, German regulators said a ban on ChatGPT was possible, and European countries such as France, Ireland, and Spain also began considering stricter regulation of AI chatbots.

U.S. follows EU in plan to regulate artificial intelligence

One stone stirred a thousand waves: Italy's tightened supervision of ChatGPT not only raised concerns about artificial intelligence across EU countries but also reached the United States on the other side of the ocean.

On April 11, local time, the Wall Street Journal reported that the Biden administration had begun studying whether tools such as ChatGPT need to be restricted, given the potential security threats such AI technologies pose to American society.

On the same day, the U.S. Department of Commerce formally solicited public comment on accountability measures (the comment period runs 60 days and covers, among other things, whether potentially risky new AI models should go through approval and certification before release). The move is seen as the first step toward potential regulation of artificial intelligence in the United States.

On the urgency of setting rules for ChatGPT-like AI, Alan Davidson, administrator of the National Telecommunications and Information Administration under the U.S. Department of Commerce, pointed out that if AI is already this "advanced" while still in its initial stage, the government must consider the possibility of criminals using the technology for criminal activity, and some necessary boundaries have to be established.

The Chinese government issues administrative measures for the artificial intelligence sector

It is not only the United States: China has also begun tightening oversight of artificial intelligence research and development. On April 11, the Cyberspace Administration of China released the "Administrative Measures for Generative Artificial Intelligence Services (Draft for Comment)" (hereinafter the "Administrative Measures"), with the stated aim of promoting the healthy development and standardized application of generative AI technology. The Administrative Measures contain 21 articles, in which "generative artificial intelligence" covers technologies that generate text, images, sound, video, code, and more based on algorithms, models, and rules.

The Administrative Measures delineate responsibility for information generated by AI: organizations and individuals that use generative AI products to provide services such as chat or text, image, and sound generation, including those that help others generate such content by offering programmable interfaces, bear the responsibilities of the producer of the generated content; where personal information is involved, they also bear the statutory responsibilities of a personal information processor and must fulfill personal information protection obligations.

There are also requirements for the makers of AI products. The Administrative Measures propose that before a product goes to market, the provider must submit a security assessment to the national cyberspace authority in accordance with the "Regulations on the Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities," and complete algorithm filing, amendment, and cancellation procedures in accordance with the "Internet Information Service Algorithm Recommendation Management Regulations."

When society discusses AI products like ChatGPT, much of the attention falls on the data vendors use to train them. The Administrative Measures require that training and optimization data meet the following conditions:

(1) Comply with the requirements of the "Cybersecurity Law of the People's Republic of China" and other laws and regulations;
(2) Not contain content that infringes intellectual property rights;
(3) Where the data contains personal information, the consent of the personal information subject must be obtained, or other circumstances stipulated by laws and administrative regulations must apply;
(4) The authenticity, accuracy, objectivity, and diversity of the data can be guaranteed;
(5) Other regulatory requirements of the national cyberspace authority on generative artificial intelligence services.
Finally, the Administrative Measures stipulate that for generated content discovered during operation, or reported by users, that fails to meet these requirements, the provider must not only take measures such as content filtering but also prevent the content from being generated again, through model optimization training, within three months.
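To make the filtering requirement above concrete, here is a minimal sketch of the kind of output check a service provider might run before returning model text. This is purely illustrative: the blocklist, function name, and placeholder message are hypothetical, and the Administrative Measures do not prescribe any particular implementation.

```python
# Illustrative only: a toy keyword-based filter of the kind a generative-AI
# service might apply to model output. Real systems would use far more
# sophisticated classifiers; blocked items would also be logged so they can
# feed back into model optimization training, as the Measures require.

BLOCKLIST = {"leaked credential", "forbidden topic"}  # hypothetical terms

def filter_generated_text(text: str) -> tuple[bool, str]:
    """Return (allowed, output); withhold text containing blocked terms."""
    lowered = text.lower()
    for term in BLOCKLIST:
        if term in lowered:
            return False, "[content withheld pending review]"
    return True, text
```

In practice the interesting design question is the feedback loop: the filter only suppresses a single response, while the regulation additionally demands that the same content stop being generated at the model level within three months.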

At present, ChatGPT-like artificial intelligence has advanced rapidly through massive training iterations; by contrast, the existing legal systems of most countries clearly lag behind in regulating it.

Epilogue

With the rapid development of computing technology, a wave of AI-driven technological change is inevitable, and some human labor will inevitably be replaced. Moreover, the large-scale application of ChatGPT-like AI is bound to bring security problems such as data leaks, more effective hacking tools, large-scale fraud, and AI-fabricated images.

However, none of the above can be an excuse to suspend AI research and development, and humanity should not face the arrival of artificial intelligence with fear. We must always remember that "people" are the foundation of AI development, and formulate regulations that minimize the harm its "derivatives" can cause.

Origin: blog.csdn.net/java_cjkl/article/details/130304408