The Three Laws of Robotics Are Just Human Wishful Thinking: AI May Not Comply


  [June 10] In the sci-fi blockbuster "Terminator," the robots want full possession of the world and are ruthless toward humans. This naturally raises the question: "If the movie scene repeated itself in reality, what would happen?" No one should be blamed for asking, because in movies and science fiction the scene of robots taking over the Earth is almost everywhere, and it has shaped our impression of the future of artificial intelligence (AI). However, since humans manage to survive and cooperate through laws, why can't laws also be applied to AI? This is where Asimov's Three Laws of Robotics come in. To understand the future, we must first look back at the past.

  The Three Laws

  Isaac Asimov (1920-1992), besides being a professor of biochemistry, is considered one of the "Big Three" science fiction writers of his time. In the mid-20th century, Asimov proposed three laws which, if complied with, would supposedly prevent a robot uprising. The three laws are:

  First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm;

  Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law;

  Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  Now, if you are familiar with programming, you will know that machines start counting from zero rather than from one. Hence, as computer fans put it, there is also a Zeroth Law, which refers to humanity as a collective rather than to individual humans. If these laws sound familiar, it is because they appeared in the story collection "I, Robot." Notably, they make no distinction between the "robot" (the body) and the "artificial intelligence" (the brain), because when Asimov spoke of robots, what he had in mind were intelligent humanoids.
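The zero-based counting convention mentioned above is easy to demonstrate. A minimal sketch, with the law texts abbreviated for brevity:

```python
# Machines count from zero: index 0 comes before index 1.
laws = [
    "Zeroth Law: protect humanity as a whole",
    "First Law: do not injure a human being",
    "Second Law: obey human orders",
    "Third Law: protect your own existence",
]

# The collective law sits at index 0, before the individual-focused laws.
print(laws[0])  # prints the Zeroth Law, not the First
```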

  So, if these laws were formulated as early as the 1950s, why are we so afraid of robots? What prompted Elon Musk and even Stephen Hawking to treat AI as "the biggest threat to human survival"? Long story short: Asimov's three laws do not work.

  Design Flaws

  Back to the present. In keeping with Asimov's point of view, let us assume that we do have AI agents complex enough for these laws to apply to. For ease of discussion, let us also assume that even though the laws were devised as narrative devices, they also apply to the real world.

  First, a technical problem: what happens if the laws are written in English and an AI agent can only process Chinese? Even if the agent is manufactured in the United States, how do we know how it understands the laws? We therefore need a way to (i) translate the laws into every possible language, and (ii) convey the meaning behind the words in every possible language (to cover all possible cases, this must include dead languages such as Latin, as well as binary machine language).

  For humans, these two tasks are closely related. But for machines, they are two very different tasks. The first task is to generate the corresponding string of a sentence in a different language; the second task is to understand such strings. If I tell you to sing "Despacito" in Spanish, that involves only the first task: you may sing it very well, but you have no idea what the words mean (assuming you do not understand Spanish). Doing only the second task, on the other hand, is like having an idea in mind but not knowing how to express it.

  Fortunately, the field of natural language processing (NLP) has made huge leaps in the past few years. For the first task, neural networks with Long Short-Term Memory (LSTM) cells can perform sequence-to-sequence conversion. In addition, Translatotron, an end-to-end speech-to-speech translation model, was published last month (May 2019). For the second task, the Word2Vec model has proved its worth by grouping related words together to produce semantic representations of sentences.
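The core idea behind Word2Vec can be sketched in a few lines: words are mapped to vectors, and semantic relatedness becomes cosine similarity between those vectors. The tiny three-dimensional vectors below are hand-made stand-ins for illustration only, not trained embeddings:

```python
import math

# Hand-made 3-dimensional "embeddings" (real Word2Vec vectors have
# hundreds of dimensions and are learned from large text corpora).
vectors = {
    "robot":   [0.9, 0.1, 0.0],
    "machine": [0.8, 0.2, 0.1],
    "banana":  [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, near 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related words end up close together in the vector space.
assert cosine(vectors["robot"], vectors["machine"]) > cosine(vectors["robot"], vectors["banana"])
```

This is the property the article relies on: meaning is captured as geometry, so "related word combinations" sit near each other.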


  Indeed, machines can now understand language to some degree. However, there are still many things they cannot do. One example is understanding idioms. Although "spill the beans" figuratively means "to reveal a secret," the metaphor itself cannot be translated directly. The machine therefore translates each word literally into its corresponding word. Rendered into French in the correct order, the expression becomes "jeter les haricots," which definitely sounds out of place.
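The word-by-word failure mode described above can be sketched in a few lines. The English-to-French dictionary here is a toy assumption covering only this one phrase:

```python
# Toy word-for-word dictionary (illustrative only; a real translation
# system must recognize "spill the beans" as a single idiomatic unit).
en_to_fr = {
    "spill": "jeter",
    "the": "les",
    "beans": "haricots",
}

def literal_translate(sentence: str) -> str:
    """Translate word by word, ignoring idiomatic meaning entirely."""
    return " ".join(en_to_fr.get(word, word) for word in sentence.split())

print(literal_translate("spill the beans"))  # prints "jeter les haricots" - the idiom is lost
```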

  Nevertheless, for the sake of discussion, let us boldly assume that the metaphor-translation problem will be solved in the coming years. With that, all the technical issues around making AI agents understand the laws would be resolved, and we would be safe, right? Not so fast, because this is where it gets interesting! When Asimov proposed these laws, he unknowingly based them on another assumption: that we humans know exactly where to draw the moral bottom line. But do we?

  Take the word "injure" in the First Law as an example. While we are at it, consider also the meaning of the words "human being" in the same law. Whom does the definition include? In the 14th century, for instance, slaves were considered closer to cattle and other livestock than to people. Today, the right to life of the fetus is the subject of much debate. And in the future, if a pregnant woman with a certain disease faces a high risk of death in childbirth, should her AI doctor recommend an abortion? Remember that although, logically, the woman has a higher chance of surviving the abortion than the birth, once the baby is born it has more years of life ahead of it. So, either way, the robot ends up harming a human.

  The next decision can push us into outright denial. Consider the scenario described in Dan Brown's "Inferno," with the Zeroth Law applied. The AI is shown a button and told that if it presses the button, half of humanity will die immediately, but the species will survive for centuries. If it does not press (and therefore acts through inaction), the human population will reach an overpopulated state within 50 years and our species will collapse. If you were in the AI's position, what would you do?
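To make the dilemma concrete, here is a toy comparison of the two choices. Every figure is a hypothetical placeholder invented for illustration, not a claim about real outcomes:

```python
# Toy model of the Inferno-style button dilemma under the Zeroth Law.
# All numbers are hypothetical placeholders.
POPULATION = 8_000_000_000

def outcome(press_button: bool) -> dict:
    """Return the (hypothetical) consequences of each choice."""
    if press_button:
        # Half of humanity dies now, but the species survives long-term.
        return {"immediate_deaths": POPULATION // 2, "species_survives": True}
    # No one dies now, but overpopulation collapses the species in ~50 years.
    return {"immediate_deaths": 0, "species_survives": False}

for choice in (True, False):
    print(choice, outcome(choice))
```

Under a literal reading of the Zeroth Law ("protect humanity as a whole"), the collective outcome dominates the individual deaths, which is exactly what makes the scenario frightening.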

  Conclusion

  Asimov's laws of robotics were an attempt to address the threat of an AI uprising. The technical barrier to making robots comply with these laws currently lies in making them understand the limits we define. The real obstacles, the philosophical and ethical ones, may lie in our assumption that under such vague restrictions, robots will act the way we want them to, even when we ourselves do not know what the restrictions mean. And even if the meaning we convey is correct, the robots might simply read these laws as "anyway, just try not to make things difficult, right?" and then cause irreparable harm.


Origin blog.csdn.net/ld109573496/article/details/91411686