The internal logic of neural networks: the AI "black box" that won't open

Reprinted from: http://www.elecfans.com/rengongzhineng/592200.html

 

Riding the wave of big data, AI has entered a new boom after years of dormancy. At the heart of this revolution, artificial neural networks have unleashed the potential of artificial intelligence. But scientists have found that the technique carries a key problem: the artificial neural network is a "black box."

We all know that no matter how complex an artificial neural network is, it can be viewed as three parts: the input layer, the hidden layers, and the output layer. Through deep learning, the network is trained layer by layer so that the connection weights of neurons at every level are effectively adjusted. The problem is that, apart from the input and the output, we know almost nothing about what happens in the hidden layers, and so have no way to understand the network's internal logic.

 


Marco Ribeiro, a graduate student at the University of Washington, uses a method called counterfactual probing to understand the "black box." The idea is to make small changes to the input, observe how the output changes, and record those changes. The drawback is that this approach requires thousands of trials, and it still cannot give us a comprehensive understanding of the artificial neural network.
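To make the idea concrete, here is a minimal sketch of perturbation-style probing in Python. The `predict` function stands in for a trained black-box model and is purely a hypothetical placeholder, not Ribeiro's actual tooling.

```python
import numpy as np

def predict(x):
    # Placeholder "black box": fixed random linear weights plus a sigmoid.
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[-1])
    return 1.0 / (1.0 + np.exp(-x @ w))

def probe(x, eps=0.05):
    """Perturb one feature at a time and record how much the output moves."""
    base = predict(x)
    effects = []
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps                          # small change to one input
        effects.append(predict(x_pert) - base)    # resulting change in output
    return np.array(effects)

x = np.random.default_rng(1).normal(size=10)
print(probe(x))   # larger magnitudes suggest more influential inputs
```

Even this toy version makes the cost visible: one perturbation per feature per example, multiplied across many examples, quickly adds up to thousands of queries.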

Mukund Sundararajan, a computer scientist at Google, designed a probe that requires far fewer inputs. Unlike Ribeiro's randomly perturbed inputs, Sundararajan's innovation is to introduce a blank reference.

Sundararajan first feeds in an array of zeros, then gradually shifts this input toward the actual data to be tested, studying how the output changes at each step in order to trace the network's internal logic. Notably, at every step the scientists can see the exact trajectory of the change and infer how each feature contributes to the prediction. Even so, the resulting explanations cannot be fully trusted, and a large margin of error remains.
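A rough numerical sketch of the blank-reference idea follows, in the spirit of path-based attribution: interpolate from an all-zero baseline to the real input, average the gradient along the path, and scale by the input difference. The `predict` model, the step count, and the finite-difference gradient are illustrative assumptions, not Sundararajan's actual implementation.

```python
import numpy as np

def predict(x):
    # Placeholder "black box" model: fixed linear weights plus a sigmoid.
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[-1])
    return 1.0 / (1.0 + np.exp(-x @ w))

def numerical_grad(x, eps=1e-4):
    """Finite-difference gradient of the scalar model output."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        up, down = x.copy(), x.copy()
        up[i] += eps
        down[i] -= eps
        grad[i] = (predict(up) - predict(down)) / (2 * eps)
    return grad

def blank_reference_attribution(x, steps=50):
    baseline = np.zeros_like(x)        # the "blank" reference input
    avg_grad = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)   # step toward x
        avg_grad += numerical_grad(point)
    avg_grad /= steps
    return (x - baseline) * avg_grad   # per-feature contribution estimate

x = np.random.default_rng(1).normal(size=8)
print(blank_reference_attribution(x))
```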

Rich Caruana, a computer scientist at Microsoft Research in Washington state, brings in the generalized additive model (GAM) to handle the complex relationships among data. GAM is a statistical technique, built on linear regression, for finding per-feature trends in a set of data. Caruana extends the process: he first trains a model with machine learning, then feeds the network's input and output data to a GAM and looks for the correlations between them, using those to study the neural network's internal logic.
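The sketch below illustrates this distillation idea in a crude form: query a stand-in black-box model on some data, then fit one simple curve per feature so the surrogate is additive and readable. The model, the data, and the cubic fit are assumptions for illustration, not Caruana's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Placeholder trained network: a fixed nonlinear function of three features.
    return np.tanh(X[:, 0]) + 0.5 * X[:, 1] ** 2 - 0.3 * X[:, 2]

X = rng.normal(size=(500, 3))
y = black_box(X)                          # labels produced by the black box

shape_functions = []
for j in range(X.shape[1]):
    grid = np.linspace(X[:, j].min(), X[:, j].max(), 50)
    X_ref = np.tile(X.mean(axis=0), (grid.size, 1))
    X_ref[:, j] = grid                    # vary one feature, hold the rest at mean
    coeffs = np.polyfit(grid, black_box(X_ref), deg=3)
    shape_functions.append(coeffs)        # each feature gets its own 1-D curve

# Inspecting each fitted curve shows roughly how that feature moves the output.
for j, c in enumerate(shape_functions):
    print(f"feature {j}: cubic coefficients {np.round(c, 2)}")
```

The appeal of the additive form is that each feature's effect can be read off as a single curve, which is far easier to inspect than the weights of a deep network.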

In addition, in image research, scientists have probed neural networks using generative adversarial networks (GANs). However, all of these efforts remain exploratory, and no universal method of study has yet emerged.

Scientists are not the only ones who recognize the urgency of the issue; many governments are aware of it as well. Under a European Union directive taking effect next year, all companies with significant influence will need to explain the internal logic of their models to the public. In addition, the US military's blue-sky research agency, the Defense Advanced Research Projects Agency, has announced plans to invest $70 million in a new "explainable AI" program.

Maya Gupta, a machine learning researcher at Google, says Silicon Valley researchers are also trying to open AI's "black box." Beyond a model's accuracy in operation, everyone harbors a serious misgiving: because they do not know what the model is doing, they cannot be sure whether to trust it.

With the security risks brought by the vigorous spread of artificial intelligence (AI) applications, this "blind spot" in the internal logic of neural networks is indeed a problem that urgently needs to be addressed.
