DL research motivation

Hal Daumé III wrote a great piece about this issue. Here's an excerpt:
[...] There are lots of ways to be better than such a baseline, and so "beating" it does not teach me anything. I always tell students not to get too pleased when they get state of the art performance on some standard task: someone else will beat them next year. If the only thing that I learn from their papers is that they win on task X, then next year there's nothing to learn from that paper. The paper has to teach me something else to have any sort of lasting effect: what is the generalizable knowledge.
The point is that an evaluation is not an end in itself. An evaluation is there to teach you something, or to substantiate something that I want to teach you. If I want to show you that X is important, then I should show you an experiment that isolates X to the best of my ability and demonstrates an improvement, preferably also with an error analysis that shows that what I claim my widget is doing is actually what it's doing.
To deep learning researchers: stop throwing every trick in the book at the problem to push your numbers up (e.g., dropout, sophisticated optimizers, fancy parameter initialization, dataset augmentation). If you need these tricks to prove your point, then I'm sorry to say you don't have one. Your results are meaningless, since you've confounded them with so many other things. Instead of spending so much effort pushing your method to state-of-the-art performance, how about spending some time simplifying it? Show that the intuition you had for your idea is in fact what's happening. Coming from someone on the application side: if your idea isn't simple to implement or intuitive, no one will use it.
Occam's razor holds so much truth in this field. Too many times I have reproduced a result only to find that it works only under exactly the conditions described in the paper. That is a great way to waste another person's time.
[...] In other words, I just want to stop reading crappy papers.

Reposted from denniszjw.iteye.com/blog/2381257