Why do we limit image values to 0~255 with clip instead of proportional scaling?

1. Problem

For example, suppose a super-resolution model generates an image matrix whose pixel values fall roughly in the range -20~270 rather than strictly 0~255. Some method is therefore needed to limit the values to 0~255. Two common options (a side-by-side sketch of both follows the list):

  • clip: truncate values to 0~255; anything less than 0 becomes 0, anything greater than 255 becomes 255
img = np.clip(img, 0, 255)
  • Proportional scaling (min-max normalization): scale -20~270 to 0~255
img = (img - np.min(img)) / (np.max(img) - np.min(img)) * 255
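A minimal sketch of the two options side by side, assuming `img` is a NumPy float array holding the raw model output (the sample values below are made up for illustration):

```python
import numpy as np

# Raw model output: mostly 0~255, with a few values outside the range
img = np.array([-20.0, 0.0, 50.0, 128.0, 250.0, 270.0])

# Option 1: clip - truncate everything to [0, 255]
clipped = np.clip(img, 0, 255)     # [0, 0, 50, 128, 250, 255]

# Option 2: min-max scaling - stretch/squeeze the full range into [0, 255]
scaled = (img - img.min()) / (img.max() - img.min()) * 255
# here -20 maps to 0 and 270 maps to 255, so every "normal" pixel shifts too

print(clipped.astype(np.uint8))
print(scaled.astype(np.uint8))
```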

So why does almost all code use clip?

2. Reason

In fact, super-resolution images generated by a model contain some outliers: a handful of pixel values that are abnormally large or small. Clipping eliminates these abnormal points directly. Proportional scaling does confine the values to 0~255, but it does not remove the outliers; instead, the extreme values stretch the scaling range and shift all the normal pixels.

In short, the "normal" pixel values already lie in the range 0~255; values beyond this range can be treated as outliers, so clip is used to remove them.
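A quick illustration of the point above, with made-up values: a single extreme pixel dominates the min-max range and compresses all the normal pixels, whereas clip only touches the outlier itself.

```python
import numpy as np

img = np.array([0.0, 60.0, 120.0, 200.0, 255.0, 900.0])  # 900 is an outlier

# Min-max scaling: the outlier stretches the range, squashing normal pixels
scaled = (img - img.min()) / (img.max() - img.min()) * 255
print(scaled.round(1))   # [  0.   17.   34.   56.7  72.2 255. ]

# Clip: only the outlier changes, everything else is untouched
clipped = np.clip(img, 0, 255)
print(clipped)           # [  0.  60. 120. 200. 255. 255.]
```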

The figure below projects the pixel values onto the Z axis. You can see a few very high or very low points, which can be understood as abnormal points; truncating to 0~255 removes these abnormal values.
[Figure: pixel values projected onto the Z axis, with a few extreme outlier points above 255 and below 0]

Source: blog.csdn.net/qq_40243750/article/details/127904435