Paper Reading - Im2Text: Describing Images Using 1 Million Captioned Photographs

Link of the Paper: http://papers.nips.cc/paper/4470-im2text-describing-images-using-1-million-captioned-photographs.pdf

Main Points:

  1. A large novel data set containing images from the web with associated captions written by people, filtered so that the descriptions are likely to refer to visual content.
  2. A description generation method that uses global image representations to retrieve and transfer captions from the data set to a query image (see the sketch after this list).
  3. A description generation method that uses both global representations and direct estimates of image content (objects, actions, stuff, attributes, and scenes) to produce relevant image descriptions.
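The retrieval baseline in point 2 and the content-based re-ranking in point 3 can be summarized in a short sketch. This is a minimal illustration, not the authors' code: the function names `transfer_caption` and `rerank_by_content` are invented here, extraction of the global descriptors (the paper uses GIST plus tiny-image color features) is assumed to happen elsewhere, and the Jaccard overlap is a stand-in for the paper's per-category content matching.

```python
import numpy as np

def transfer_caption(query_descriptor, dataset_descriptors, dataset_captions, k=1):
    """Retrieve the k nearest dataset images under L2 distance on a global
    image descriptor and transfer their captions to the query image."""
    # Distances between the query and every captioned dataset image
    dists = np.linalg.norm(dataset_descriptors - query_descriptor, axis=1)
    nearest = np.argsort(dists)[:k]
    return [dataset_captions[i] for i in nearest]

def rerank_by_content(candidate_ids, query_content, dataset_content):
    """Re-rank retrieved candidates by how much their estimated content
    (objects, actions, stuff, attributes, scenes) overlaps with the query's.
    Here a simple Jaccard overlap on label sets is used as the score."""
    def overlap(a, b):
        return len(a & b) / max(len(a | b), 1)
    return sorted(candidate_ids,
                  key=lambda i: overlap(query_content, dataset_content[i]),
                  reverse=True)
```

Usage would be: compute a global descriptor for the query, call `transfer_caption` with a large k to get candidate images, then pass their ids and estimated content labels to `rerank_by_content` and take the caption of the top-ranked candidate.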

Other Key Points:

  1. Image captioning will help advance progress toward more complex human recognition goals, such as telling the story behind an image.
