Top AI conferences of 2019 (AAAI, ICML, ICLR, IJCAI, NeurIPS): best paper summary and accepted paper directories

AAAI 2019 accepted paper directory: http://www.aaai.org/Library/AAAI/aaai19contents.php

AAAI 2019 Outstanding Paper Award
How to Combine Tree-Search Methods in Reinforcement Learning

AAAI 2019 Outstanding Paper Award Honorable Mention
Solving Imperfect-Information Games via Discounted Regret Minimization

AAAI 2019 Outstanding Student Paper Award
Zero Shot Learning for Code Education: Rubric Sampling with Deep Learning Inference

AAAI 2019 Outstanding Student Paper Award Honorable Mention
Learning to Teach in Cooperative Multiagent Reinforcement Learning

 

 

ICLR 2019 accepted paper directory: https://chillee.github.io/OpenReviewExplorer/index.html?conf=iclr2019

ICLR 2019
Two Best Papers:
1. Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
2. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Several high-scoring papers:
Large Scale GAN Training for High Fidelity Natural Image Synthesis
Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow
ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA
Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
Slimmable Neural Networks
Posterior Attention Models for Sequence to Sequence Learning
 

 

ICML 2019 accepted paper directory: https://icml.cc/Conferences/2019/Schedule?type=Poster

Two Best Papers:
1. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
2. Rates of Convergence for Sparse Variational Gaussian Process Regression

Seven Honorable Mentions:
1. Analogies Explained: Towards Understanding Word Embeddings
2. SATNet: Bridging Deep Learning and Logical Reasoning Using a Differentiable Satisfiability Solver
3. A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks
4. Towards a Unified Analysis of Random Fourier Features
5. Amortized Monte Carlo Integration
6. Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning
7. Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement
 

 

IJCAI 2019 accepted paper directory: https://www.ijcai.org/Proceedings/2019/

IJCAI 2019
Distinguished Paper:
Boosting for Comparison-Based Learning

IJCAI-JAIR Best Paper:
Clause Elimination for SAT and QSAT

 

NeurIPS 2019 accepted paper directory: https://nips.cc/Conferences/2019/Schedule?type=Poster

NeurIPS 2019
Outstanding Paper Award
Paper: Distribution-Independent PAC Learning of Halfspaces with Massart Noise

Outstanding New Directions Paper Award
Paper: Uniform Convergence May Be Unable to Explain Generalization in Deep Learning

Outstanding Paper Award Honorable Mention
Paper: Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

Outstanding New Directions Paper Award Honorable Mention
Paper: Putting An End to End-to-End: Gradient-Isolated Learning of Representations

Source: blog.csdn.net/t20134297/article/details/104021780