[Machine Learning] Two Minute Papers Digest: Verifying AI for Mission-Critical Tasks

Video resources for the Two Minute Papers series: http://www.insideai.cn/vedio/papers/reluplex.html
Next episode: Generating facial animation from speech - NVIDIA
Follow the WeChat account 【袋马AI】 for paper resources

The Two Minute Papers digest series is recorded by Károly Zsolnai-Fehér and covers highly cited, popular papers in machine learning.

This episode covers the Stanford paper "Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks", which introduces a method for verifying important properties of neural networks, helping identify possible adversarial inputs to a learning system. The transcript of the video follows (feedback is welcome via the WeChat account):

Dear Fellow Scholars, this is Two Minute Papers with Karoly Zsolnai-Feher.

This paper does not contain the usual fireworks that you’re used to in Two Minute Papers, but I feel that this is a very important story that needs to be told to everyone.

In computer science, we encounter many interesting problems, like finding the shortest path between two given streets in a city, or measuring the stability of a bridge.

Up until a few years ago, these were almost exclusively solved by traditional, handcrafted techniques.

This means a class of techniques that were designed by hand by scientists and are often specific to the problem we have at hand.
Different problem, different algorithm.

And, fast forward to a few years ago, we witnessed an amazing resurgence of neural networks and learning algorithms.

Many problems that were previously thought to be unsolvable, crumbled quickly one after another.

Now it is clear that the age of AI is coming, and clearly, there are possible applications of it that we need to be very cautious with.

Since we design these traditional techniques by hand, the failure cases are often known because these algorithms are simple enough that we can look under the hood and make reasonable assumptions.

This is not the case with deep neural networks.

We know that in some cases, neural networks are unreliable.

But it is remarkably hard to identify these failure cases.
For instance, earlier, we talked about a technique by the name of pix2pix, where we could make a crude drawing of a cat and it would translate it into a real image.

It worked spectacularly in many cases, but Twitter was also full of examples with really amusing failure cases.

Beyond the unreliability, we have a much bigger problem.

And that problem is adversarial examples.

In an earlier episode, we discussed an adversarial algorithm, where in an amusing example, they added a tiny bit of barely perceptible noise to this image, to make the deep neural network misidentify a bus for an ostrich.
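That noise is not random: it is typically computed from the gradient of the network's loss with respect to the input, as in the fast gradient sign method. Here is a minimal sketch on a hand-made logistic classifier (the weights, input, and epsilon are made up for illustration; with high-dimensional inputs such as images, a far smaller epsilon suffices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" linear classifier: p(class 1 | x) = sigmoid(w . x)
w = np.array([2.0, -1.0])

def predict(x):
    return int(sigmoid(w @ x) >= 0.5)

def fgsm(x, y, eps):
    # Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
    p = sigmoid(w @ x)
    grad = (p - y) * w
    # Step in the direction that increases the loss the most
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0])       # correctly classified as class 1
x_adv = fgsm(x, y=1, eps=0.4)  # worst-case perturbation of the input

print(predict(x), predict(x_adv))  # the small perturbation flips the prediction
```

The perturbed input stays close to the original, yet crosses the decision boundary; this is the mechanism behind the bus-to-ostrich example.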

We can even train a new neural network that is specifically tailored to break the one we have, opening up the possibility of targeted attacks against it.

To alleviate this problem, it is always a good idea to make sure that these neural networks are also trained on adversarial inputs as well.
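Concretely, adversarial training crafts fresh adversarial examples against the current model at every step and trains on them alongside the clean data. A toy sketch on synthetic 2-D data (the data, learning rate, and epsilon are all assumptions for illustration; real systems use deep networks and stronger attacks):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n = 100
# Toy two-class data: class 0 around (-1, -1), class 1 around (+1, +1)
X = np.vstack([rng.normal(-1.0, 0.5, size=(n, 2)),
               rng.normal(+1.0, 0.5, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w = np.zeros(2)
eps, lr = 0.3, 0.1
for _ in range(200):
    # Craft adversarial inputs against the *current* model (FGSM step)
    p = sigmoid(X @ w)
    X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
    # Take a gradient step on clean and adversarial examples alike
    for Xb in (X, X_adv):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)

acc = np.mean((sigmoid(X @ w) >= 0.5) == (y == 1))
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

Training on the perturbed copies forces the decision boundary to keep a margin around the data, which is exactly what makes finding new adversarial examples harder.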

But how do we know how many possible other adversarial examples exist that we haven’t found yet?

The paper discusses a way of verifying important properties of neural networks.

For instance, it can measure the adversarial robustness of such a network, and this is super useful, because it tells us whether there are possible forged inputs that could break our learning systems.

The paper also contains a nice little experiment with airborne collision avoidance systems.

The goal here is avoiding midair collision between commercial aircraft while minimizing the number of alerts.
As a small-scale thought experiment, we can train a neural network to replace an existing system, but in this case, such a neural network would have to be verified.
And it is now finally a possibility.

Now, make no mistake, this does not mean that there are any sort of aircraft safety systems deployed in the industry that are relying on neural networks.

No no no, absolutely not.

This is a small-scale “what if” kind of experiment that may prove to be a first step towards something really exciting.

This is one of those incredible papers that, even without the usual visual fireworks, makes me feel that I am a part of the future.

This is a step towards a future where we can prove that a learning algorithm is guaranteed to work in mission critical systems.

I would also like to note that even if this episode is not meant to go viral on the internet, it is still an important story to be told.



Reposted from blog.csdn.net/maerdym/article/details/83278786