OpenCV Project Development in Practice: Exposure Fusion (combining images taken with different exposure settings into a single image)

In this tutorial, we will learn exposure fusion using OpenCV. C++ and Python code is shared at the end of this article for readers to download and verify.

What is Exposure Fusion?

Exposure fusion is a method of combining images taken with different exposure settings into a single image that looks like a tone-mapped high dynamic range (HDR) image. The previous article covered HDR imaging in detail.


When we take a photo with a camera, there are only 8 bits per color channel to represent the brightness of the scene. However, the brightness of the world around us can theoretically vary from 0 (pitch black) to almost infinite (looking directly at the sun). A point-and-shoot or mobile camera therefore chooses exposure settings based on the scene, so that the camera's limited dynamic range (values 0–255) covers the most interesting parts of the image. For example, many cameras use face detection to find faces and set the exposure so that the faces look bright.

This begs the question - can we take multiple photos at different exposure settings and capture a wider range of scene brightness? The answer is yes! The traditional approach is to use HDR imaging and then tone mapping.

HDR imaging requires us to know the precise exposure time of each image. Exposure fusion, by contrast, blends the bracketed images directly and needs no exposure metadata at all.


Origin: blog.csdn.net/tianqiquan/article/details/133188785