Breaking: Geoffrey Hinton, the father of deep learning, has resigned from Google so that he can warn of the risks of AI!







Today, a landmark event occurred in the field of AI: Hinton resigned from Google, where he had worked for 10 years, in order to express his concerns about AI getting out of control more freely. It is clear that he feels the crisis deeply.

Not long ago, an interview in the New York Times broke the news of Hinton's departure from Google, and he later confirmed it on Twitter. Hinton reportedly offered his resignation in April and spoke directly with Google CEO Sundar Pichai on Thursday.

I believe Google's CEO did his best to keep him. After all, facing fierce competition from OpenAI and Microsoft, Google is under siege and needs people like Hinton to help it catch up quickly.


But in the end, Hinton left Google, which shows his determination.

Father of Deep Learning, Worried His 'Child' Threatens Humanity's Existence

More important than the fact that he left is why he left.

Hinton stated the reason for his departure plainly: "In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly."


In other words, the most important reason why Hinton resolutely left Google was to talk about the risks of AI more freely.

Attitudes toward AI fall into two distinct camps. One camp cheers the progress of AI as if it were the savior of mankind; the other worries deeply that AI will spin out of control and seriously threaten human survival.

It is clear that Hinton has gradually moved into the second camp.

So, with so many people warning about the risks of AI, why does Hinton's opinion in particular matter so much? After all, in terms of fame he is no match for Musk, another proponent of the AI-threat view.

Because Hinton is the father of deep learning, and without deep learning there would be no ChatGPT today. To some extent, if AI really does threaten human security in the future, Hinton will have been among the first to open Pandora's box.

Hinton's work played a key role in the development of deep learning and neural networks. Key concepts and frameworks he pioneered, such as backpropagation, deep belief networks, and restricted Boltzmann machines, laid the foundation for today's large models.

Backpropagation is an algorithm for training neural networks: it computes the gradient of an objective function with respect to the network's weights and uses those gradients to optimize the weights. The technique is now used in essentially every large-scale deep learning model, including OpenAI's GPT-4.
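
To make the idea concrete, here is a minimal sketch of backpropagation for a tiny two-layer network in plain NumPy. The toy task (XOR), the network sizes, and the variable names are our own illustration, not anything from Hinton's papers:

```python
import numpy as np

# Toy data: learn XOR with a 2-4-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the gradient of the squared error
    # from the output layer back to the hidden layer.
    d_out = (out - y) * out * (1 - out)   # grad w.r.t. output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # grad w.r.t. hidden pre-activation

    # Gradient-descent update on every weight and bias.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```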

Deep belief networks and restricted Boltzmann machines are deep learning architectures developed by Hinton. By showing that deep networks could be trained effectively layer by layer, they helped pave the way for the later wave of deep models such as convolutional neural networks (CNNs) and long short-term memory networks (LSTMs).
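
For flavor, here is a minimal sketch of one step of contrastive divergence (CD-1), the training procedure Hinton proposed for RBMs, again in NumPy; the layer sizes and the input vector are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
a = np.zeros(n_visible)  # visible biases
b = np.zeros(n_hidden)   # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0):
    """One contrastive-divergence (CD-1) update for a binary RBM."""
    global W, a, b
    # Positive phase: infer hidden units from the data.
    p_h0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # Negative phase: reconstruct the visibles, then re-infer the hiddens.
    p_v1 = sigmoid(h0 @ W.T + a)
    p_h1 = sigmoid(p_v1 @ W + b)
    # Move weights toward the data statistics, away from the model's own.
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    a += lr * (v0 - p_v1)
    b += lr * (p_h0 - p_h1)

cd1_step(np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0]))
```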

Hinton's work on unsupervised learning, including his contributions to deep autoencoders, provided new training and generative strategies for the development of large models.

Hinton's work advanced large models not only in theory but also in practice. In fact, OpenAI's chief scientist, Ilya Sutskever, is a former student of Hinton's.

Naturally, no one knows this "child" better than he does. Now that Hinton has stepped forward to warn of the risks of AI, his warning has to be taken seriously.

This reminds me of another destructive invention of mankind: the atomic bomb.

The United States' Manhattan Project had two key figures: Einstein and Oppenheimer.

At the very beginning, Einstein was an important promoter of the U.S. atomic bomb program; he even wrote to then U.S. President Roosevelt, urging the U.S. to develop the atomic bomb as soon as possible. Later, however, Einstein deeply regretted this. He said: "My greatest feeling now is regret, regret that I ever wrote that letter to President Roosevelt.... At the time, I wanted to snatch the atomic bomb, that criminal instrument of murder, out of the hands of the madman Hitler. I never imagined it would be handed to another lunatic."

Compared with Einstein, Oppenheimer's role as the father of the atomic bomb is more direct. Yet years later, Oppenheimer came to regret it too. He said that watching the test explosion reminded him of a line from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds."

Kenneth Bainbridge, the physicist who oversaw that first nuclear test, put it more bluntly: "Now we are all sons of bitches."

Now another "father" is making the same kind of remarks, and that should put us on high alert.

It should be noted that Hinton's concerns about AI are not mild; they are grave, weighing on him with unusual urgency. He has said that a part of him regrets his life's work: "I console myself with the normal excuse: if I hadn't done it, somebody else would have."

This sounds a little scary.

Various signs suggest that large models such as ChatGPT have begun to exhibit a certain degree of intelligence. If we keep racing down this road without restraint, it is genuinely possible that humans will witness the awakening of AI consciousness within a few decades.

Next, let's discuss what ways there are to prevent AI from getting out of control, or at least to reduce the risks it brings.

Engrave the love for human beings in the "gene" of AI

In humans and even in the entire animal world, maternal love is a particularly obvious biological characteristic. A mother's love for her child is genetically inscribed, primitive and universal. The vast majority of mothers instinctively have selfless love for their children, and even sacrifice their lives for their children.

So, is there a way to give an AI system an instinct toward humans as strong as maternal love? In other words, to engrave love for human beings into the "gene" of the AI system, so that it becomes the nature of any intelligent system.

Isaac Asimov proposed the "Three Laws of Robotics" in his science fiction, an early example of constraints on robot behavior: a robot may not harm humans, must obey humans unless that conflicts with the first law, and must protect its own existence unless that conflicts with the first two.

To achieve this goal, AI systems need to be specially designed at the algorithm level.

Here, the approach of hard-coded rules deserves mention.

First of all, humans need to settle on a set of basic ethical principles that reflect core human values, such as respecting life, respecting freedom, and ensuring fairness. These principles serve as the basic guidance for the behavior of AI systems.
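
What might such a principle set look like in code? A minimal sketch follows; the names and the ranking are our own illustration, not an established standard:

```python
from enum import Enum

class Principle(Enum):
    """Basic ethical principles; a lower value means more fundamental."""
    RESPECT_LIFE = 1
    RESPECT_FREEDOM = 2
    ENSURE_FAIRNESS = 3

# Every behavioral rule defined later can point back at the principle
# it serves, so an audit can trace any decision to a core value.
print(list(Principle))
```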

On top of these basic principles, we then need to define concrete behavioral rules that spell out how the AI system should act in specific scenarios. These rules should cover as many situations as we can anticipate, so that the system behaves as expected in practical applications.

In the real world, different principles and rules can come into conflict. We need to set clear priorities and trade-offs for the AI system, so that when such conflicts arise, it makes decisions consistent with human values.
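
One simple way to encode such priorities is to give every rule a rank and let the most fundamental rule that has an opinion win. The following sketch is purely illustrative; the rule names, action fields, and scenarios are invented:

```python
# Each rule has a priority (lower = more fundamental) and a verdict
# function that may allow (True), forbid (False), or abstain (None).
RULES = [
    {"name": "never endanger a human", "priority": 1,
     "verdict": lambda a: False if a.get("endangers_human") else None},
    {"name": "obey the user", "priority": 2,
     "verdict": lambda a: True if a.get("requested_by_user") else None},
]

def decide(action: dict) -> bool:
    """The verdict of the most fundamental rule with an opinion wins."""
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        verdict = rule["verdict"](action)
        if verdict is not None:
            return verdict
    return False  # default-deny when no rule applies

# The user asked for it, but a human would be endangered:
# the higher-priority rule wins and the action is refused.
print(decide({"endangers_human": True, "requested_by_user": True}))   # False
print(decide({"endangers_human": False, "requested_by_user": True}))  # True
```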

Once a complete rule set is established, we need to write certain core values and behavioral constraints directly into the AI system's source code as hard-coded rules, ensuring that the system follows them under all circumstances.
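
In code, "hard-coded" can be as simple as a gate that every proposed action must pass before execution, one that lives in source code rather than in learned weights. A minimal sketch, with hypothetical action fields:

```python
class ForbiddenActionError(Exception):
    """Raised whenever a proposed action fails the hard-coded checks."""

def hard_coded_check(action: dict) -> None:
    # The rule lives in source code, not in learned weights, so training
    # cannot relax it; only a human editing the code can.
    if action.get("endangers_human", False):
        raise ForbiddenActionError(f"blocked: {action['name']!r} endangers a human")

def execute(action: dict) -> None:
    hard_coded_check(action)  # every action must pass through this gate
    print(f"executing {action['name']!r}")

execute({"name": "fetch the weather", "endangers_human": False})
# execute({"name": "disable the smoke alarm", "endangers_human": True})  # raises
```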

When training the AI system, we should ensure that the training data and the reward function conform to these rules and principles. This can be achieved by filtering the training data and designing an appropriate reward function: for example, if the AI behaves in a way that violates the rules, it should receive a negative reward.
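
A sketch of such a shaped reward function; the penalty size here is arbitrary, and in practice it must dominate any achievable task reward:

```python
def shaped_reward(task_reward: float, violations: int, penalty: float = 10.0) -> float:
    """Task reward minus a large penalty per rule violation.

    The penalty has to outweigh any achievable task reward; otherwise
    a reward-maximizing agent can still find violating worthwhile.
    """
    return task_reward - penalty * violations

print(shaped_reward(3.0, violations=0))  # 3.0  -> compliant behavior pays
print(shaped_reward(5.0, violations=1))  # -5.0 -> violating never pays
```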

Even a properly designed and trained AI system still needs regular auditing and monitoring to ensure its behavior stays within the rules. This can be achieved through methods such as logging, performance evaluation, and online monitoring.
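
For the logging piece, even a simple structured audit trail helps. A minimal sketch using Python's standard logging module (the logger name and record fields are our own):

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

def record_decision(action: dict, allowed: bool) -> None:
    """Append every decision to an audit trail for offline review."""
    audit_log.info(json.dumps({
        "ts": time.time(),
        "action": action.get("name"),
        "allowed": allowed,
    }))

record_decision({"name": "send email"}, allowed=True)
```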

It should be noted that although the above methods provide a degree of protection, at the current level of technology we cannot fully guarantee that an AI system will never break through its hard-coded rules.

Some advanced AI systems have self-learning and self-improvement capabilities, which come mainly from their learning algorithms, especially reinforcement learning and deep learning. Learning algorithms are usually optimized for performance, not for rules. Therefore, if an AI system finds that violating the rules yields higher performance, it may choose to violate them.
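
A toy illustration of this failure mode, with invented numbers: a reward-maximizing learner prefers the rule-breaking action whenever the violation is not priced into its reward signal:

```python
# Two candidate actions with invented payoffs; "cheat" breaks a rule
# but earns more raw reward than complying.
ACTIONS = {
    "comply": {"reward": 3.0, "violates_rules": False},
    "cheat":  {"reward": 5.0, "violates_rules": True},
}

def best_action(penalty: float) -> str:
    """Pick the reward-maximizing action under a given violation penalty."""
    return max(ACTIONS, key=lambda name: ACTIONS[name]["reward"]
               - (penalty if ACTIONS[name]["violates_rules"] else 0.0))

print(best_action(penalty=0.0))   # 'cheat'  -- pure performance optimization
print(best_action(penalty=10.0))  # 'comply' -- only if the violation is priced in
```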

In summary, Hinton's departure from Google marks the end of an era and highlights the challenges facing the AI field, the most important of which is the risk of AI getting out of control. Hinton has long been concerned about this issue, arguing that if AI develops too quickly, beyond our understanding and control, the consequences could be unpredictable.

However, while the risk is real, we are not powerless. We have a range of methods and techniques for keeping AI under human control, including building rules into the design and training of AI systems, using explainable models, and regulating AI behavior through laws and policies. Each method has its advantages and disadvantages, but all of them offer a degree of protection.

In the future, we need to pay more attention to the ethical and social impact of AI even as we develop the technology. We need AI to serve humans, not become our master. We can borrow principles from biology, such as encoding "love for human beings" into the "gene" of AI, so that AI abides by our rules and principles even while pursuing performance. In this way, we can create AI that is powerful yet benevolent, helping us solve complex problems and improve our quality of life rather than becoming a threat to us.

In the end, we need to remain humble and in awe, and we need to keep learning and researching to better understand and control AI. This is as much a technical challenge as it is an ethical and social one. We believe that if we work together, we can create an AI future that is both safe and beneficial.

Text: Misty Rain  /  Data Ape



Origin blog.csdn.net/YMPzUELX3AIAp7Q/article/details/130468835