The moral challenge of AI

Artificial intelligence (AI) is nearly ubiquitous: it has penetrated most aspects of our lives, from what we decide to read, which flight to book, and what to buy online, to whether a job application succeeds, whether we receive a bank loan, and even how cancer is treated. All of these decisions can now be made automatically by complex software systems. Given the remarkable progress AI has made in the past few years, it could improve our lives in many ways.

Over the past two years, the rise of AI has been unstoppable. Enormous sums have been invested in AI startups, and many established technology companies, including giants such as Amazon, Microsoft, and Facebook, have opened new research laboratories. It is hardly an exaggeration to say that software now means AI. Some predict that the changes AI brings will be even greater than those brought by the Internet.

We asked many technical experts how a world of countless brilliant machines will affect humans amid such rapid change. Notably, almost every answer centered on ethics. For Peter Norvig, Google's research director and a machine learning pioneer, data-driven AI has recently achieved many successes; the key question is how to ensure that these new systems improve society as a whole, not just those who control them. Norvig said: "AI has proven its value in many practical tasks, from labeling images and understanding language to helping diagnose diseases. The challenge now is to ensure that everyone can benefit from this technology."

The biggest problem is that the complexity of the software often makes it nearly impossible to explain exactly why an AI system made a particular decision. Today's AI is built mainly on a successful technique called machine learning, but you cannot lift the lid and look at its inner workings; we can only choose to trust it. The challenge that follows is to find new methods of monitoring and auditing for the many areas in which AI now plays an important role.

For Jonathan Zittrain, a professor of Internet law at Harvard Law School, one of the great dangers is that increasingly complex computer systems may escape the scrutiny they need. He said: "Aided by technology, our systems have become more and more complex, and I am very worried about the reduction of human autonomy. If we set up a system and then forget about it, the consequences of its self-evolution may be beyond our reach, with no clear moral consideration given to them."

Other technical experts share this worry. Missy Cummings, director of the Humans and Autonomy Laboratory at Duke University, asked: "How can we prove that these systems are safe?" Cummings was one of the US Navy's first female fighter pilots and is now a drone expert.

AI does need regulation, but we do not yet know how to regulate it. Cummings said: "At present, we have no generally accepted method or industry standard for testing these systems, which makes broad supervision of these technologies very difficult." In a rapidly changing world, regulators often find themselves at a loss. In many critical areas, such as criminal justice and medicine, companies are already using AI to explore making parole decisions or diagnosing diseases. But if we hand decision-making power to machines, we may lose control. Who can guarantee that a machine will make the right decision in every case?

Danah Boyd, a principal researcher at Microsoft Research, said that many serious questions about values are being written into these AI systems, and asked who will ultimately be held responsible. Boyd said: "Regulators, civil society, and social theorists increasingly want these technologies to be fair and ethical, but those concepts remain vague."

One area fraught with ethical issues is the workplace. AI will enable robots to perform more complex tasks, leading to more human workers being displaced. Foxconn, for example, plans to replace 60,000 of its factory workers in China with robots, and Ford's factory in Cologne, Germany, has deployed robots that work alongside human workers.

More importantly, if increasing automation has a major impact on employment, it will also harm people's mental health. Ezekiel Emanuel, a bioethicist and former medical adviser to President Obama, said: "If you think about the things that make people's lives meaningful, you will find three: meaningful interpersonal relationships, strong interests, and meaningful work. Meaningful work is an important part of how people define their lives, and in some regions, losing a job when a factory closes raises the risk of suicide, drug abuse, and depression."

As a result, we may see growing demand for ethical oversight. Kate Darling, an expert on law and ethics at the Massachusetts Institute of Technology, argues: "Companies are following market incentives. That is not a bad thing, but we cannot rely on ethics alone to keep it in check. It helps to have regulation in place; we have seen it in the areas of privacy and new technologies, and we need to figure out how to deal with it."

Darling pointed out that many big-name companies, such as Google, have established ethics boards to oversee the development and deployment of AI, and some believe this mechanism should be adopted more widely. Darling said: "We don't want to stifle innovation, but at some point we may want to create a certain structure."

Little is known about who sits on Google's ethics board or what it can actually do. But in September 2016, Facebook, Google, and Amazon, among others, formed a joint organization, the Partnership on AI, with the goal of finding solutions to the safety and privacy threats posed by AI. OpenAI is a similar organization that aims to develop and promote open-source AI for the benefit of everyone. Google's Norvig said: "Machine learning technology is researched openly and spread through open publications and open-source code. It is very important that we all share its rewards."

If we can set industry and ethical standards, fully understand the risks that AI poses, and then establish a regulatory mechanism built around ethicists, technical experts, and business leaders, that will be the best way to put AI to work for human welfare. Strand said: "Our job is to ease people's fears of the robot takeovers of science-fiction movies, and to focus instead on how technology can be used to help humans think and make decisions, rather than replace them entirely."
