Can't make it to the talks in person? You can still get the essentials from an academic report!
Hi everyone, and welcome to our academic report column. We attend the talks, digest the core content, and bring you the highlights of top science and technology research reports, along with ways to get the slides and video of each session. In short: come on over, and we hope the reports of these outstanding young scholars and experts help you pick up valuable knowledge in your spare time.
Today's Artificial Intelligence Forum is packed with material, and there are highlights among the highlights. Since its launch on January 19, 2019, the AI未来说 ("AI Speaks of the Future") Youth Academic Forum has been held twelve times, attracting thousands of attendees, with registrations from more than 30 provinces in China and thirteen countries abroad, covering four hundred universities and research institutes. The twelfth forum (a special session for the Baidu PhD Scholarship) was held on the afternoon of January 5, 2020 in Auditorium K6 of Baidu Technology Park, Beijing. Dong Yinpeng of Tsinghua University presented the report "Adversarial Robustness of Deep Learning."
Full video of Dong Yinpeng's report
Dong Yinpeng is a third-year PhD student at the Institute for Artificial Intelligence, Department of Computer Science, Tsinghua University, advised by Prof. Zhu. His main research interests are machine learning and computer vision, with a focus on the robustness of deep learning in adversarial settings.
Report summary: Addressing the problem that existing deep learning models are easily fooled by attackers' adversarial examples, Dr. Dong presented three pieces of work on the robustness of deep learning in adversarial settings.
Adversarial Robustness of Deep Learning
Deep learning has made great progress in the past couple of years, and deep models are being deployed in all kinds of systems. At the same time, however, the reliability of these models has come under scrutiny. Many studies have found that deep learning models are easily fooled by adversarial examples: an attacker adds a small perturbation to an original input, and the perturbed input causes the model to misclassify it. To the human eye some of these examples look identical to the originals, yet the model makes a wrong prediction, which raises very practical safety concerns. Adversarial examples also arise in real systems; for instance, adding some noise to traffic data can make an autonomous-driving system predict incorrectly.
Finding adversarial examples can be cast as an optimization problem, and many methods exist for solving it, either by searching for adversarial examples or by optimizing them directly. Many of these methods require the network's gradients, i.e., access to the model's internals; these are called white-box attacks, while methods that need no gradients are called black-box attacks. One route to black-box attacks exploits the transferability of adversarial examples: an example crafted against one model can often fool other models as well. Alternatively, adversarial examples can be found by estimating the model's gradients or by random search.
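The white-box setting above, where the attacker follows the loss gradient directly, can be sketched with the classic one-step fast gradient sign update on a toy linear score. Everything here (the model, the numbers, the `fgsm_perturb` name) is illustrative, not from the talk:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One-step FGSM: move each input component by eps in the
    direction of the sign of the loss gradient, then clip back
    to a valid image range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy linear "model": score = w . x, so the gradient of the score
# with respect to the input x is simply w (white-box access).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.8, 0.5])
x_adv = fgsm_perturb(x, w, eps=0.1)  # -> [0.3, 0.7, 0.6]
```

For a real network the gradient would come from backpropagation through the loss; the update rule itself is unchanged.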
Dr. Dong's first work is a momentum-based iterative method for generating adversarial examples. There is a trade-off between the transferability of an adversarial example and its white-box attack strength; the two are hard to achieve at once. Borrowing the momentum technique from the optimization literature, the method accumulates a momentum term over the iterations that generate the adversarial example. This improves transferability and thus the attack's success against black-box models, while avoiding overfitting to the white-box model.
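The momentum accumulation described above can be sketched as follows. This is a minimal sketch of the general momentum-iterative recipe, not the exact implementation from the talk: `grad_fn` stands in for a real model's loss gradient, and the L1 normalization, decay factor `mu`, and per-step budget are assumptions:

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps, steps=10, mu=1.0):
    """Momentum-iterative attack sketch: accumulate a decayed running
    gradient g, step in sign(g), and stay inside the eps-ball around x."""
    alpha = eps / steps            # spend the budget evenly over the steps
    g = np.zeros_like(x)           # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # Normalize by the L1 norm so step sizes stay comparable,
        # then fold into the momentum buffer.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Demo with a constant stand-in gradient.
x = np.array([0.5, 0.5])
x_adv = mi_fgsm(x, lambda z: np.array([1.0, -1.0]), eps=0.1)
```

The momentum buffer smooths the update direction across iterations, which is what reduces the overfitting to the source (white-box) model that plain iterative attacks suffer from.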
Some existing methods can strengthen a model's defenses. Dr. Dong's second work uses image transformations and frequency-domain transformations to reduce the attack's sensitivity to the current source model while keeping attack efficiency unchanged, so that defended black-box models can be attacked more successfully than with other algorithms.
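One way to read "reducing sensitivity to the source model via image transformations" is to average gradients over shifted copies of the input, as in translation-invariant attacks. The sketch below is written under that assumption; `grad_fn`, the shift range, and the circular shifting are all illustrative choices, not details from the talk:

```python
import numpy as np

def translation_averaged_grad(grad_fn, x, max_shift=1):
    """Average the loss gradient over small translations of the input,
    shifting each gradient back so it aligns with the original pixels.
    The averaged gradient depends less on one model's exact geometry."""
    total = np.zeros_like(x)
    count = 0
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(x, dx, axis=0), dy, axis=1)
            g = grad_fn(shifted)
            total += np.roll(np.roll(g, -dx, axis=0), -dy, axis=1)
            count += 1
    return total / count

# Sanity demo: with an identity "gradient" (grad equals the input),
# averaging over shifts and shifting back recovers the input exactly.
x = np.arange(9.0).reshape(3, 3)
g_avg = translation_averaged_grad(lambda z: z, x)
```

In practice this averaging can also be implemented as a convolution of the gradient with a small kernel, which is cheaper than evaluating the model on every shifted copy.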
The third work combines transfer-based attack methods, which need no access to the network's gradients, with gradient-estimation methods, making black-box attacks more effective.
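A common way to estimate gradients with only query access is random-direction finite differences (NES-style sampling). The sketch below illustrates that general technique, not the talk's specific combination; `loss_fn` is a placeholder for a model that can only be queried, and the quadratic demo loss is chosen so the true gradient is known:

```python
import numpy as np

def estimate_gradient(loss_fn, x, sigma=0.01, n_samples=100, rng=None):
    """Estimate the gradient of loss_fn at x by querying it along random
    Gaussian directions u and averaging the finite differences times u."""
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Antithetic pair (+u and -u) reduces the variance of the estimate.
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

# For the loss 0.5*||z||^2 the true gradient at x is x itself.
x = np.array([1.0, -2.0])
g_est = estimate_gradient(lambda z: 0.5 * np.sum(z ** 2), x,
                          n_samples=5000, rng=0)
```

Each estimate costs two queries per sampled direction, which is why query efficiency matters so much in black-box attacks and why combining this with transfer-based priors can help.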
AI未来说 ("AI Speaks of the Future") · Youth Academic Forum
Session 1: Data Mining
Session 2: Natural Language Processing
Session 3: Computer Vision
Session 4: Speech Technology
5. Liu Bin (Chinese Academy of Sciences): Robust End-to-End Speech Recognition Based on Joint Adversarial Augmentation Training
Session 5: Quantum Computing
1. Zhai Hui (Tsinghua University): Discovering Quantum Mechanics with Machine Learning
3. Li Yinan (Centrum Wiskunde & Informatica (CWI), the Netherlands): Quantum Computing in the Big Data Era
Session 6: Machine Learning
3. Hu Xiaoguang (Baidu): Core Technology and Applications of PaddlePaddle
4. Wang Yisen (Tsinghua University): Adversarial Machine Learning: Attack and Defence
5. Zhao Shenyi (Nanjing University): SCOPE - Scalable Composite Optimization for Learning
Session 7: Autonomous Driving
2. Deng Zhidong (Tsinghua University): Perception and Cognition in Autonomous Driving - Challenges and Opportunities
3. Zhu Fan (Baidu): Autonomous Driving in an Open Era - the Baidu Apollo Program
Session 8: Deep Learning
Session 9: Personalized Content Recommendation
Session 10: Video Understanding and Recommendation
Session 11: Information Retrieval and Knowledge Graphs