Paper Roundup: 8 Accepted NeurIPS 2019 Papers from BUPT, Xidian University, DeepMind, and More

NeurIPS 2019 will be held in Vancouver, Canada, starting December 8. As one of the top academic conferences in machine learning, it has attracted wide attention. This article compiles eight accepted NeurIPS 2019 papers, covering deep neural networks, generative adversarial networks, and other areas, for your reference.

Reflection Separation using a Pair of Unpolarized and Polarized Images (Spotlight paper)

• Authors: Youwei Lyu, Zhaopeng Cui, Si Li, Marc Pollefeys, Boxin Shi (Beijing University of Posts and Telecommunications, ETH Zurich, Peking University, Peng Cheng Laboratory)

• Paper address: https://papers.nips.cc/paper/9598-reflection-separation-using-a-pair-of-unpolarized-and-polarized-images.pdf

This paper addresses the reflection interference that inevitably arises when photographing through glass in daily life. It is the first to use a pair of polarized and unpolarized images as input for reflection removal (the former captures the changes caused by reflection, while the latter provides a high-quality, normally illuminated image), which greatly simplifies the capture requirements of existing methods that rely on three or more polarized images.

Based on the camera's polarization imaging principle and an analysis of light propagation, the paper derives the relationship among the physical parameters at each pixel of the captured image, the viewing angle, and the geometric position of the glass, and uses these physical constraints to guide the neural network toward more effective parameters to learn. It also proposes a new data generation scheme based on polarization imaging and designs a corresponding neural network architecture to separate a clean background layer and a reflection layer from the mixed image. The practical capture setup and effective removal algorithm make this method a promising candidate for intelligent image enhancement on mobile phones, surveillance cameras, and other devices.
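For readers who want a concrete picture of the two-input, two-output layout described above, here is a minimal PyTorch sketch. It is not the authors' architecture: the module names (`ReflectionSeparationNet`, `background_head`, `reflection_head`), the layer sizes, and the fusion-by-concatenation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReflectionSeparationNet(nn.Module):
    """Toy two-input / two-output layout: takes an unpolarized and a polarized
    image and predicts a background layer and a reflection layer."""
    def __init__(self, feat=32):
        super().__init__()
        # Separate encoders for the unpolarized and polarized inputs.
        self.enc_unpol = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU())
        self.enc_pol = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU())
        # Shared trunk over the fused features.
        self.trunk = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU())
        # Two heads: clean background layer and reflection layer.
        self.background_head = nn.Conv2d(feat, 3, 3, padding=1)
        self.reflection_head = nn.Conv2d(feat, 3, 3, padding=1)

    def forward(self, unpolarized, polarized):
        fused = torch.cat([self.enc_unpol(unpolarized), self.enc_pol(polarized)], dim=1)
        h = self.trunk(fused)
        return self.background_head(h), self.reflection_head(h)

net = ReflectionSeparationNet()
bg, refl = net(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(bg.shape, refl.shape)  # two 1x3x64x64 layers, summing (roughly) to the input
```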


Memory-oriented Decoder for Light Field Salient Object Detection (Poster paper)

• Authors: Miao Zhang, Jingjing Li, Wei Ji, Yongri Piao, Huchuan Lu (Dalian University of Technology)

• Paper address: https://papers.nips.cc/paper/8376-memory-oriented-decoder-for-light-field-salient-object-detection.pdf

This work innovatively combines 4D light field data with deep neural networks and designs a memory-oriented decoder to fully exploit the complementary information between the 4D light field and RGB images. Inspired by the rich spatial and multi-focus information in light field data, a memory-oriented spatial fusion module is designed to weight the strengths of different light field features, and a convolutional LSTM is used to summarize the features of the different light field slices. A global perception module and a deep supervision mechanism then strengthen the high-level information in the features so that the network learns useful features more explicitly. In the final decoding stage, a memory-oriented feature integration module uses a recursive attention network to progressively improve the network's localization ability and refine the spatial details of the result. This is also the first time in this field that a recursive network structure has been used for decoding.
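The ConvLSTM-based summarization of per-slice light field features can be sketched as follows. This is a generic, minimal ConvLSTM cell written for illustration only; the cell design, channel counts, and the way slices are fed in are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell used here to fuse a sequence of
    light-field feature maps into one summary feature."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        # One convolution produces the input/forget/output/candidate gates.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

# Fuse features from several (hypothetical) light-field slices sequentially.
cell = ConvLSTMCell(in_ch=32, hid_ch=32)
slices = [torch.rand(1, 32, 16, 16) for _ in range(5)]
h = torch.zeros(1, 32, 16, 16)
c = torch.zeros(1, 32, 16, 16)
for s in slices:
    h, c = cell(s, (h, c))
print(h.shape)  # fused light-field feature: torch.Size([1, 32, 16, 16])
```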


In addition, this work contributes a large-scale 4D light field dataset, providing important data support for applying deep learning to light field saliency and addressing the current shortage of light field data. Extensive experiments on three public datasets show that the method outperforms 25 state-of-the-art salient object detection methods, especially in complex scenes.

MarginGAN: Adversarial Training in Semi-Supervised Learning (Poster paper)

• Authors: Jinhao Dong, Tong Lin (Xidian University, Peking University, Peng Cheng Laboratory)

• Paper address: https://papers.nips.cc/paper/9231-margingan-adversarial-training-in-semi-supervised-learning.pdf

The paper proposes MarginGAN, a novel three-component generative adversarial network. From the perspective of classification margin, it addresses the problem that inaccurate pseudo-labels in semi-supervised learning hurt classifier performance, and improves semi-supervised accuracy. MarginGAN consists of three components: a generator, a discriminator, and a classifier. In addition to playing the usual adversarial game against the discriminator, the generator also plays against the classifier: the generator tries to maximize the classification margin of generated images, while the classifier tries to reduce it.
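A toy sketch of this margin game, assuming a simple "top-1 minus top-2 softmax probability" margin surrogate; the function name `classification_margin` and the stand-in logits are hypothetical, and the paper's actual losses may differ.

```python
import torch
import torch.nn.functional as F

def classification_margin(logits):
    """Toy margin proxy: top-class probability minus the runner-up.
    Any monotone margin surrogate would illustrate the same idea."""
    probs = F.softmax(logits, dim=1)
    top2 = probs.topk(2, dim=1).values
    return top2[:, 0] - top2[:, 1]

# Stand-in for classifier(G(z)): logits the classifier assigns to generated images.
logits_fake = torch.randn(8, 10, requires_grad=True)
margin = classification_margin(logits_fake).mean()

generator_loss = -margin   # generator: *increase* the margin of generated images
classifier_loss = margin   # classifier: *decrease* it, treating generated images as unreliable
```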


Episodic Memory in Lifelong Language Learning (Poster paper)

• Authors: Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, Dani Yogatama (DeepMind)

• Paper address: https://papers.nips.cc/paper/9471-episodic-memory-in-lifelong-language-learning.pdf

This paper introduces a lifelong language learning setting in which the model learns from a stream of text examples without any dataset identifiers. The key component is an episodic memory module that performs sparse experience replay and local adaptation, mitigating catastrophic forgetting. Experiments on text classification and question answering demonstrate the complementary benefits of sparse experience replay and local adaptation, enabling the model to keep learning from new datasets. The experiments also show that randomly choosing which examples to store in memory significantly reduces the space complexity of the episodic memory module (by 50-90%) with minimal performance loss. The authors view episodic memory as an important component of general linguistic intelligence, and the proposed model is a first step in that direction.
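A minimal sketch of an episodic memory with random (reservoir-style) writes and sparse replay, to make the "randomly selecting which examples to store" idea concrete. The class name, capacity, and replay schedule are illustrative assumptions, and the paper's local-adaptation step at inference time is not shown.

```python
import random

class EpisodicMemory:
    """Toy episodic memory: random (reservoir) writes keep the buffer small,
    and replay is triggered only sparsely during training."""
    def __init__(self, capacity, replay_every=100, replay_batch=32):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.replay_every = replay_every
        self.replay_batch = replay_batch

    def write(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Reservoir sampling: each example is kept with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def maybe_replay(self, step):
        if self.buffer and step % self.replay_every == 0:
            k = min(self.replay_batch, len(self.buffer))
            return random.sample(self.buffer, k)  # old examples to retrain on
        return []

memory = EpisodicMemory(capacity=1000)
for step, example in enumerate(("doc_%d" % i for i in range(5000)), start=1):
    memory.write(example)
    replay_batch = memory.maybe_replay(step)  # mixed into the current update when non-empty
```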


Numerically Accurate Hyperbolic Embeddings Using Tiling-Based Models (Spotlight paper)

• Author: Tao Yu, Chris De Sa (Cornell University)

• Paper address: https://papers.nips.cc/paper/8476-numerically-accurate-hyperbolic-embeddings-using-tiling-based-models.pdf

Hyperbolic space achieves excellent performance when embedding hierarchical data such as trees, but when points in hyperbolic space are represented with floating-point numbers, the embedding error can grow without bound due to floating-point rounding. To address this, the authors propose a new model of hyperbolic space that represents it with an integer-based tiling and has a provably bounded numerical error. The model matches the accuracy of high-precision floating-point embeddings using ordinary 32-bit floats, while storing embeddings in less space. A series of experiments shows that it not only compresses hyperbolic embeddings effectively (compressing WordNet embeddings to 2% of their original size) but also learns more accurate embeddings (improving the Mammals embedding performance by 10%).
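For context on why floating point is the bottleneck, here is the standard Poincaré-ball distance (a common hyperbolic model, not the paper's tiling-based one): the denominators underflow as points approach the boundary of the ball, which is exactly where deep tree nodes end up.

```latex
% Poincare-ball distance between points x, y with \lVert x \rVert, \lVert y \rVert < 1:
d(x, y) = \operatorname{arcosh}\!\left( 1 + \frac{2\,\lVert x - y \rVert^2}
          {(1 - \lVert x \rVert^2)\,(1 - \lVert y \rVert^2)} \right)
% As \lVert x \rVert \to 1 (points far from the origin), the factor
% (1 - \lVert x \rVert^2) underflows in floating point, so tiny rounding
% errors in x translate into unbounded errors in d(x, y).
```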


A New Defense Against Adversarial Images: Turning a Weakness into a Strength (Poster paper)

• Author: Shengyuan Hu, Tao Yu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger (Cornell University, Ohio State University)

• Paper address: https://papers.nips.cc/paper/8441-a-new-defense-against-adversarial-images-turning-a-weakness-into-a-strength.pdf

When a neural network is used for classification, natural images are surrounded by low-density regions of misclassification, and efficient gradient-based search through these regions is what makes adversarial examples possible. Although many methods for detecting such attacks have been proposed, once the attacker fully understands the detection mechanism and adapts the attack accordingly, these defenses are easily broken again.

The authors take a novel perspective and treat the ubiquity of adversarial directions as an advantage rather than a weakness. If an image has been tampered with, these adversarial directions either become difficult to find with gradient methods or appear at a higher density than around natural images. Exploiting this property, the authors develop a practical test that successfully detects adversarial attacks and achieves unprecedented accuracy in the white-box setting, where the attacker fully understands the detection mechanism.
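One way to turn the "adversarial directions become difficult to find with gradient methods" observation into a detector statistic is sketched below. This is a simplified illustration, not the paper's exact test: the function `steps_to_random_target`, the step size, and the step budget are all assumptions.

```python
import torch
import torch.nn.functional as F

def steps_to_random_target(model, x, num_classes, step_size=1e-2, max_steps=50):
    """Illustrative detector statistic: count the gradient steps needed to push x
    into a randomly chosen class. Natural images tend to reach it quickly;
    tampered images may need suspiciously many steps."""
    target = torch.randint(num_classes, (x.size(0),))
    x_adv = x.clone().detach().requires_grad_(True)
    for step in range(1, max_steps + 1):
        logits = model(x_adv)
        if (logits.argmax(dim=1) == target).all():
            return step                          # reached the random class
        loss = F.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step_size * grad.sign()  # FGSM-style step toward the target
        x_adv.requires_grad_(True)
    return max_steps + 1                         # flag: suspiciously hard to attack

# Toy usage with a hypothetical linear classifier on 32x32 RGB inputs.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
print(steps_to_random_target(model, torch.rand(1, 3, 32, 32), num_classes=10))
```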


Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder (Poster paper)

• Author: Ji Feng, QiZhi Cai, ZhiHua Zhou (Nanjing University, Innovation Works)

• Paper address: https://papers.nips.cc/paper/9368-learning-to-confuse-generating-training-time-adversarial-data-with-auto-encoder.pdf

This paper focuses on the security of today's artificial intelligence systems. Specifically, it proposes DeepConfuse, an efficient method for generating adversarial training data. By hijacking the training process of a neural network, it teaches a noise generator to add bounded perturbations to the training samples, so that a machine learning model trained on them generalizes as poorly as possible on test samples, cleverly realizing a form of "data poisoning".
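A minimal sketch of the "bounded perturbation from an auto-encoder-style noise generator" idea, assuming an L-infinity budget enforced with a tanh scaling. The generator architecture, the epsilon value, and the bi-level training loop that actually "hijacks" training (omitted here) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Toy auto-encoder-style noise generator: emits an L_inf-bounded
    perturbation that is added to each training image."""
    def __init__(self, epsilon=8 / 255):
        super().__init__()
        self.epsilon = epsilon
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, images):
        # tanh keeps the raw output in [-1, 1]; scaling bounds it by epsilon.
        delta = torch.tanh(self.decoder(self.encoder(images))) * self.epsilon
        return (images + delta).clamp(0.0, 1.0)   # "poisoned" training images

gen = NoiseGenerator()
clean = torch.rand(4, 3, 32, 32)
poisoned = gen(clean)
print((poisoned - clean).abs().max() <= gen.epsilon + 1e-6)  # perturbation stays bounded
```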

The aim of this research is not only to reveal the threat that such AI intrusion and attack techniques pose to system security, but also to study these techniques in depth and develop targeted defenses against "AI hackers", playing a positive guiding role in advancing the frontier of AI security attack and defense research.


The experimental results show that on datasets such as MNIST, CIFAR-10, and a reduced version of ImageNet, models trained on the "clean" training set and on the "poisoned" training set differ greatly in classification accuracy, and the effect is striking.

Optimal Stochastic and Online Learning with Individual Iterates (Spotlight paper)

• Authors: Yunwen Lei, Peng Yang, Ke Tang, Ding-Xuan Zhou (Southern University of Science and Technology, TU Kaiserslautern, City University of Hong Kong)

• Paper address: https://papers.nips.cc/paper/8781-optimal-stochastic-and-online-learning-with-individual-iterates.pdf

Model training in machine learning can often be cast as an optimization problem, and stochastic optimization algorithms provide simple, effective strategies for solving such problems at large scale. Convergence speed and sparsity are two important measures of algorithm performance. Classical stochastic optimization algorithms either sacrifice sparsity to achieve the optimal convergence rate or sacrifice efficiency to achieve sparsity. Through a careful analysis of stochastic optimization, the research team proposes an algorithm that attains the optimal convergence rate while guaranteeing sparse models, and verifies its performance with non-trivial theoretical analysis and experiments.
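For context on the sparsity-versus-rate trade-off, here is a generic proximal stochastic gradient method for an L1-regularized objective, whose soft-thresholding step keeps each iterate sparse. This is a standard textbook scheme shown only for illustration, not the algorithm proposed in the paper; all names and parameters below are assumptions.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of the L1 norm: shrinks weights toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def proximal_sgd(grad_fn, w0, lam=0.01, lr=0.1, steps=1000, seed=0):
    """Generic proximal SGD for  min_w  E[f(w; sample)] + lam * ||w||_1.
    Each iterate stays sparse because of the soft-thresholding step."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for t in range(1, steps + 1):
        g = grad_fn(w, rng)            # stochastic gradient of the smooth part
        step = lr / np.sqrt(t)         # decaying step size
        w = soft_threshold(w - step * g, step * lam)
    return w

# Toy least-squares problem with a sparse ground truth.
d, n = 50, 200
rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(n, d)), np.zeros(d)
w_true[:5] = 1.0
y = X @ w_true + 0.01 * rng.normal(size=n)

def grad_fn(w, rng):
    i = rng.integers(n)
    return (X[i] @ w - y[i]) * X[i]    # gradient of a single-sample squared loss

w_hat = proximal_sgd(grad_fn, np.zeros(d), lam=0.05)
print("nonzero coordinates:", int(np.count_nonzero(w_hat)))
```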


The NeurIPS 2019 official website has now published all accepted papers, and we have downloaded them for you. If you need them, go to the "Academic Headlines" WeChat official account, tap "Looking" on this article, and reply "NeurIPS2019" in the account's backend to get the download.

