Low-Light Image Datasets

https://docs.activeloop.ai/datasets

[LOL] : paper , a paired dataset containing low-light/normal-light images

  1. Contains 500 low-light/normal-light image pairs.
  2. The original images were resized to 400×600 and converted to Portable Network Graphics (PNG) format.
  3. A three-step approach was used to remove misalignment between the image pairs in the dataset.

[Exclusive Dark] : paper

The Exclusive Dark (ExDark) dataset consists of low-light images captured under ten different lighting conditions (low, ambient, object, single, weak, strong, screen, window, shadow, and twilight), all in visible light, with image- and object-level annotations.

[SID] : paper , a paired dataset captured under extremely low light.

Light levels: outdoor scenes 0.2–5 lux; indoor scenes 0.03–0.3 lux.

The raw input data are captured in low light with short exposure times (typically 0.1 s or 0.04 s).
The ground truth is captured with long exposure times (typically 10 or 30 seconds), where noise is negligible.
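The exposure gap between input and ground truth defines an amplification ratio, which the SID paper uses to brighten the short-exposure raw input before feeding it to the network. A minimal sketch:

```python
def amplification_ratio(input_exposure_s: float, gt_exposure_s: float) -> float:
    """Ratio used to scale the short-exposure raw input so its overall
    brightness roughly matches the long-exposure ground truth."""
    return gt_exposure_s / input_exposure_s

# Typical SID pairing: 0.1 s input vs. 30 s ground truth -> a factor of about 300
ratio = amplification_ratio(0.1, 30.0)
```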

[HDR] : paper

The HDR dataset comprises 74 scenes captured with a Canon EOS-5D Mark III. The original images are 5760×3840; to reduce the risk of misalignment they are downsampled to 1500×1000. Data augmentation (rotations and flips) expands the 74 scenes to 740. For training, the images are cut into 40×40 patches with a stride of 20; the LDR inputs and the HDR ground truth are cropped at the same positions. After cutting, patches whose reference image is more than 50% over- or under-exposed are selected, yielding about 1 million patches in total.
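The patch arithmetic above can be checked with a small helper (a sketch; the paper's exact cropping scheme, e.g. boundary handling, may differ):

```python
def num_patches(height: int, width: int, patch: int = 40, stride: int = 20) -> int:
    """Number of patch positions when tiling a height x width image with a
    square patch at a fixed stride, without padding."""
    return ((height - patch) // stride + 1) * ((width - patch) // stride + 1)

# 1500 x 1000 downsampled images, 40 x 40 patches, stride 20:
per_image = num_patches(1000, 1500)   # 49 * 74 = 3626 positions per image
```

At 740 augmented images × 3,626 positions, there are roughly 2.7 million candidate patches, which is consistent with retaining about 1 million after the over/under-exposure selection.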

【RAISE】

RAISE is a challenging real-world image dataset mainly used to evaluate digital forgery detection algorithms. It consists of 8,156 high-resolution RAW images, uncompressed and guaranteed to be camera-native (i.e., never retouched or processed). The images were collected from 4 photographers over a period of 3 years (2011-2014), using 3 different cameras to capture different scenes and moments in more than 80 locations across Europe.

[GLADNet] : paper , a synthetic low-light paired dataset built from RAISE raw images.

780 raw images are collected from RAISE [12]; 700 are used to generate training pairs and 80 for validation. Adobe Photoshop Lightroom offers a range of parameters for RAW image adjustment, including exposure, vibrance, and contrast. A low-light image is synthesized by setting the exposure parameter E in [-5, 0], the vibrance parameter V in [-100, 0], and the contrast parameter C in [-100, 0]. To prevent color bias, 700 grayscale image pairs (converted from the color pairs) were added to the training set. To keep black and white regions unchanged before and after enhancement, five black-to-black and five white-to-white training pairs are added. Finally, all images were resized to 400×600 and converted to Portable Network Graphics format.
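Lightroom's actual operators are proprietary, but a degradation of this kind can be approximated in a few lines. The formulas below (exposure as a 2^E intensity scale, contrast as a blend toward mid-gray, vibrance omitted) are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def synthesize_low_light(img, exposure=-3.0, contrast=-50.0):
    """Approximate the Lightroom-style degradation described above.
    These formulas are illustrative assumptions, not Lightroom's
    proprietary operators. `img` is a float array in [0, 1].
    - exposure E in [-5, 0]: scale linear intensity by 2**E (stops)
    - contrast C in [-100, 0]: blend toward mid-gray 0.5
    """
    out = img * (2.0 ** exposure)           # darken by E stops
    alpha = 1.0 + contrast / 100.0          # 0 (flat) .. 1 (unchanged)
    out = 0.5 + alpha * (out - 0.5)         # reduce contrast around mid-gray
    return np.clip(out, 0.0, 1.0)
```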

[DPED] : paper , code , which contains real photos taken with three different mobile phones and a high-end DSLR camera.

DPED: DSLR Photo Enhancement Dataset

The paper's main task is image-to-image translation: converting ordinary phone photos into DSLR-quality images.

Figure 3: More than 22K photos were collected over 3 weeks: 4,549 from a Sony smartphone, 5,727 from an iPhone, and 6,015 from a BlackBerry; each smartphone photo has a corresponding photo from a Canon DSLR. The photos were taken during the day in a variety of locations and under various lighting and weather conditions. All cameras were used in automatic mode with default settings throughout the collection.

Figure 4: To remove pixel misalignment between image pairs (the network takes fixed-resolution input), a nonlinear transformation is used. First, SIFT keypoints are computed and matched for each (phone, DSLR) image pair; then both images are cropped to their intersection.
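The paper aligns pairs via SIFT keypoint matching; as a dependency-free illustration of the same idea (estimate the displacement, then crop both images to the overlap), here is a translation-only NumPy sketch — a simplified stand-in, not the paper's method:

```python
import numpy as np

def align_by_translation(phone, dslr, max_shift=8):
    """Brute-force the integer (dy, dx) shift of `dslr` that best matches
    `phone` (by correlation on the non-wrapped overlap), then crop both
    images to the overlapping region. Simplified stand-in for SIFT-based
    alignment; handles translation only, not rotation or scale."""
    h, w = phone.shape
    best_shift, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(dslr, dy, axis=0), dx, axis=1)
            ys = slice(max(dy, 0), h + min(dy, 0))   # valid (non-wrapped) rows
            xs = slice(max(dx, 0), w + min(dx, 0))   # valid (non-wrapped) cols
            score = np.corrcoef(phone[ys, xs].ravel(),
                                shifted[ys, xs].ravel())[0, 1]
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    dy, dx = best_shift
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    aligned = np.roll(np.roll(dslr, dy, axis=0), dx, axis=1)
    return phone[ys, xs], aligned[ys, xs], best_shift
```

Cropping both images to the valid overlap after alignment mirrors the "crop to the intersection" step described above.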

[SICE] : paper , a large-scale multi-exposure image dataset, including multi-scene high-resolution image sequences.

In this paper, MEF and HDR techniques are used to reconstruct the reference images (which the paper notes have higher contrast and better visibility). The pipeline runs from camera capture through image screening to reference-image generation: 1,200 sequences and 13 MEF/HDR algorithms produce 1200×13 = 15,600 fusion results. After careful screening, 589 high-quality reference images and their corresponding sequences (4,413 images in total) are retained.

[RELLISUR] : paper , containing real low-light low-resolution images paired with normal-light high-resolution reference images.

The paper presents RELLISUR, a real-world dataset for the task of low-light image super-resolution. It contains 850 distinct LLLR/NLHR sequences, collected by manipulating the camera lens and exposure time. Several state-of-the-art LLE and SR models are benchmarked on RELLISUR.

The RELLISUR dataset contains real low-light low-resolution images paired with normal-light high-resolution reference images. It aims to fill the gap between low-light image enhancement and super-resolution (SR): although the visibility of real-world images is often limited by both low light and low resolution, the two problems are currently addressed only separately in the literature. The dataset contains 12,750 paired images at different resolutions and low-light illumination levels, facilitating the training of deep-learning models that map directly from low-visibility degraded images to high-resolution, high-quality, detail-rich images.

[LLVIP] : a visible-infrared paired dataset for low-light vision.

[MIT-Adobe FiveK] : learning global tone adjustment for photographs using a database of input/output image pairs (about 4% low-light images).

[Dark Face] : a face detection dataset under low-light conditions.

The DARK FACE dataset provides 6,000 real-world low-light images taken at night in school buildings, streets, bridges, overpasses, parks, etc., all labeled with bounding boxes of human faces, as the main training and/or validation set. It also provides 9,000 unlabeled low-light images collected with the same setup. In addition, a unique set of 789 pairs of low-light/normal-light images captured under controlled realistic lighting conditions (but not necessarily containing faces) is provided, which participants may use as part of the training data at their discretion. There is a holdout test set of 4,000 low-light images annotated with face bounding boxes.

[UFDD] : UFDD is used for face detection under adverse conditions, including weather-based degradation, motion blur, focus blur, etc.

[NightOwls] : a dataset for pedestrian detection at night.

NightOwls contains 279,000 frames in 40 sequences, recorded with an industry-standard camera at night in 3 countries, spanning different seasons and weather conditions. All frames are fully annotated and include additional object attributes such as occlusion, pose, and difficulty, as well as tracking information to identify the same object across multiple frames. The dataset also contains a large number of background frames for evaluating detector robustness, a validation set for local hyperparameter tuning, and a test set for central evaluation on the submission server.

[AVA] : paper , a large-scale database for Aesthetic Visual Analysis, containing about 250,000 photos.

  1. Experiments show that not only the size of the training data matters for performance, but also the aesthetic quality of the training images.
  2. Semantic annotations: 66 text labels describing image semantics are provided. About 200,000 images have at least one label and 150,000 have two labels.


Origin blog.csdn.net/qq_39751352/article/details/126097765