How do we protect artificial intelligence?

  Universities around the world are conducting significant research on artificial intelligence (AI), as are organizations such as the Allen Institute and technology companies including Google and Facebook. One likely outcome is that we will soon have AIs as cognitively complex as a mouse or a dog. Now is the time to start thinking about whether, and under what conditions, these AIs might deserve the ethical protections we typically give to animals.

  So far, discussion of "AI rights" or "robot rights" has revolved around the question of what moral obligations we would owe to an AI of human-like or superior intelligence, such as the android Data in "Star Trek" or Dolores in "Westworld". But this starts in the wrong place, and it could have grave moral consequences. Before we create any AI sophisticated enough to deserve human-like ethical consideration, we will very likely create AIs that are less sophisticated than humans but still deserve some degree of ethical consideration.

  We are already very cautious about how we use certain non-human animals in research. Animal care and use committees evaluate research proposals to ensure that vertebrates are not killed unnecessarily or subjected to needless suffering. Where human stem cells are involved, and especially human brain cells, the regulatory standards are stricter still. Biomedical research is carefully scrutinized, but AI research, which might carry some of the same ethical risks, is currently not scrutinized at all. Perhaps it should be.

  You might think that AIs do not deserve ethical protection unless they are conscious, that is, unless they have a genuine stream of experience, with real joy and real suffering. We agree. But now we face a tricky philosophical question: how will we know when we have created something capable of joy and suffering? If an AI is like Data or Dolores, it can complain and defend itself, initiating a discussion of its rights. But if an AI is inarticulate, like a mouse or a dog, or if it is for some other reason unable to communicate its inner life to us, it may have no way to report that it is suffering.

  Here we run into a vexing difficulty, because the scientific study of consciousness has not yet reached a consensus on what consciousness is or on how to tell whether it is present. On some views - "liberal" views - consciousness requires nothing but a certain kind of well-organized information processing, such as a flexible informational model of the system in relation to objects in its environment, with capacities for guided attention and long-term action planning. We may already be on the verge of creating such systems. On other views - "conservative" views - consciousness might require very specific biological features, such as a brain much like a mammalian brain in its low-level structural details: in that case, we are nowhere near creating artificial consciousness.

  It is unclear which type of view is correct, or whether some other account will ultimately prevail. But if a liberal view is correct, we may soon be creating many sub-human AIs that deserve moral protection. Therein lies the moral risk. Discussions of "AI risk" usually focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least wrecking our banking system. Much less discussed is the moral risk we pose to the AIs themselves, through our possible mistreatment of them.

  This might sound like the stuff of science fiction, but insofar as researchers in the field of AI aim to develop conscious AI, or powerful AI systems that might ultimately become conscious, we ought to take the matter seriously. Research of that sort demands ethical review of the kind we already apply to animal research and to research on samples of human nerve tissue.

  In the case of research on animals and even on human subjects, appropriate protections were established only after serious ethical transgressions came to light (for example, in needless vivisections, the Nazi medical war crimes and the Tuskegee syphilis study). With AI, we have a chance to do better. We propose the founding of oversight committees to evaluate cutting-edge AI research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of scientists and non-scientists alike: AI designers, consciousness scientists, ethicists and interested community members. Their task would be to identify and evaluate the ethical risks of new forms of AI design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research.

  It is likely that such committees would judge all current AI research permissible. On most mainstream theories of consciousness, we are not yet creating AIs with conscious experiences that merit ethical consideration. But we might soon cross that crucial ethical line. We should be prepared.


Origin blog.51cto.com/14198725/2406460