Fog Detection Algorithm

[34]      N. Hautière, J.-P. Tarel, J. Lavenant, and D. Aubert, ‘‘Automatic fog detection and estimation of visibility distance through use of an onboard camera,’’ Mach. Vis. Appl., vol. 17, no. 1, pp. 8–20, Apr. 2006.

 

[36]      G. Li, J.-F. Wu, and Z.-Y. Lei, ‘‘Research progress of image haze grade evaluation and dehazing technology,’’ (in Chinese), Laser J., vol. 35, no. 9, pp. 1–6, Sep. 2014.

Li et al. pointed out that, for assessing image visibility, the intensity of the dark channel and the contrast of the image can be used as features for classifying blurred and sharp images [36].
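The dark-channel intensity feature mentioned above can be sketched as follows: take the per-pixel minimum over the RGB channels, apply a local minimum filter, and average the result. This is a minimal illustrative implementation, not the authors' code; the patch size is an assumed parameter.

```python
import numpy as np

def dark_channel_intensity(img, patch=15):
    """Mean intensity of the dark channel: per-pixel minimum over RGB,
    followed by a local minimum over patch x patch neighborhoods."""
    min_rgb = img.min(axis=2)          # minimum across color channels
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark.mean()
```

In hazy images all channels are lifted by airlight, so the dark channel is bright; in clear images at least one channel is usually dark, so the feature stays low.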

 

[38]      D. J. Jobson, Z.-U. Rahman, G. A. Woodell, and G. D. Hines, ‘‘A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes,’’ in Proc. SPIE, May 2006, pp. 624601-1–624601-8.

[39]      Y. Zhang, G. Sun, Q. Ren, and D. Zhao, ‘‘Foggy images classification based on features extraction and SVM,’’ in Proc. Int. Conf. Softw. Eng. Comput. Sci., Sep. 2013, pp. 142–145.

A method for measuring the visual contrast of images was first proposed by Jobson et al. [38]. Using the atmospheric scattering model, Zhang et al. studied the angular deviation of different hazy images, given a clear image of the same scene as the hazy image to be classified [39]. They also used an SVM to classify foggy images. Although their method achieves good classification performance, it is difficult to obtain both a clear image and a hazy image of the same scene in practical applications.

 

[37]      X. Yu, C. Xiao, M. Deng, and L. Peng, ‘‘A classification algorithm to distinguish image as haze or non-haze,’’ in Proc. IEEE Int. Conf. Image Graph., Aug. 2011, pp. 286–289.

Yu et al. extract the visibility of the image, the visual contrast of the image, and the intensity of the dark channel as features and classify foggy images using a support vector machine (SVM) [37].
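The feature-plus-SVM scheme described above can be sketched as follows. The feature vectors here (visibility, visual contrast, dark-channel intensity) are hypothetical values made up for illustration, not data from [37].

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-image features: [visibility, visual contrast,
# dark-channel intensity]; the numbers below are illustrative only.
X = np.array([
    [0.2, 0.10, 0.8],   # hazy: low contrast, bright dark channel
    [0.3, 0.15, 0.7],   # hazy
    [0.9, 0.80, 0.1],   # clear: high contrast, dark dark channel
    [0.8, 0.70, 0.2],   # clear
])
y = np.array([1, 1, 0, 0])  # 1 = foggy, 0 = clear

clf = SVC(kernel='linear').fit(X, y)
pred = clf.predict([[0.25, 0.12, 0.75]])  # features of a new image
```

A linear kernel suffices here because the toy features are linearly separable; the original work would tune the kernel and parameters on real data.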

 

[8] M. Pavlic, H. Belzner, G. Rogoll, and S. Ilic, “Image based fog detection in vehicles,” IEEE Intelligent Vehicles Symposium, pp.1132–1137, June 2012.

Pavlic et al. proposed a fog image classification method for highway vehicle vision systems that uses global features derived from the power spectrum of the Fourier transform together with a support vector machine [8].
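A common way to turn the Fourier power spectrum into a global feature is to average it over radial frequency bands; fog suppresses high-frequency detail, so the energy distribution shifts toward low frequencies. This is a generic sketch of that idea, not the exact descriptor of [8]; the number of bins is an assumed parameter.

```python
import numpy as np

def power_spectrum_feature(gray, n_bins=8):
    """Radially averaged power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(f) ** 2
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx)      # distance from the DC component
    r_max = r.max()
    feat = np.empty(n_bins)
    for k in range(n_bins):
        mask = (r >= k * r_max / n_bins) & (r < (k + 1) * r_max / n_bins)
        feat[k] = power[mask].mean() if mask.any() else 0.0
    return feat
```

The resulting fixed-length vector can be fed directly to an SVM classifier.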

[9] C. Busch and E. Debes, “Wavelet transform for visibility analysis in fog situations,” IEEE Intelligent Systems, vol.13, no.6, pp.66–71, Nov. 1998.

 

[10] L. Caraffa and J.-P. Tarel, "Daytime fog detection and density estimation with entropy minimization," ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol.2, no.3, pp.25–31, Aug. 2014.

First, a Canny-Deriche filter is used to extract the image edges and highlight the edges of the road. Then a region-growing algorithm is used to find the road-surface region. Third, four conditions are established to obtain the target area. Finally, the visible distance of the image is obtained by calculating the measurement bandwidth.
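The region-growing step in the pipeline above can be illustrated with a minimal 4-connected grower on a grayscale image. This is a toy stand-in for the road-surface extraction, not the authors' implementation; the seed position and intensity tolerance are assumed parameters.

```python
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=0.1):
    """Grow a 4-connected region from `seed`, accepting pixels whose
    intensity is within `tol` of the seed intensity."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = gray[seed]
    q = deque([seed])
    while q:
        i, j = q.popleft()
        if mask[i, j] or abs(gray[i, j] - ref) > tol:
            continue
        mask[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                q.append((ni, nj))
    return mask
```

In the road-detection setting the seed would be placed near the bottom center of the frame, where the road surface is almost always visible.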

[11] N. Hautière, J.-P. Tarel, H. Halmaoui, R. Brémond, and D. Aubert, "Enhanced fog detection and free space segmentation for car navigation," Machine Vision and Applications, vol.25, no.3, pp.667–679, April 2014.

 

[12] J. Mao, U. Phommasak, S. Watanabe, and H. Shioya, “Detecting foggy images and estimating the haze degree factor,” Journal of Computer Science & Systems Biology, vol.7, no.6, pp.226–228, 2014.

 

[13] C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, "A fast semi-inverse approach to detect and remove the haze from a single image," Proc. Asian Conf. Comput. Vis. (ACCV), pp.501–514, 2010.

Ancuti et al. first proposed a haze-area detection algorithm based on a "semi-inverse" image. By selecting, per pixel, the maximum of the original image and its inverse, a semi-inverse image is obtained; the method is formulated as

Sc(x) = max[Ic(x), 1 − Ic(x)]    (1)

where c represents one of the RGB channels, Ic(x) is the original image, and 1 − Ic(x) represents the inverse of the original image.

After renormalizing the inverse image, Ancuti et al. detect the foggy area in the h* channel of the Lch color space: pixels with a large difference between the semi-inverse image and the original image are treated as clear pixels, and the remaining pixels as foggy pixels. The basis of this fog-detection method is that the intensity values of pixels in foggy areas of an image are usually much larger than those in clear areas. In sky or foggy regions of an image, pixels typically have high intensities in all color channels, i.e., Ic(x) > 0.5. Therefore, the semi-inverse image has the same values as the original image in these regions. In clear regions, however, at least one channel has a low intensity, and its pixel values are replaced by those of the inverse image. In other words, formula (1) outputs the original image in foggy areas and the inverse image in clear areas, so the foggy area can be easily detected from the difference between the original image and its semi-inverse image. The algorithm is simple and effective and can be used to detect fog areas in foggy images, but it is not suitable for judging whether the current scene is foggy, because sky areas or white areas of clear images will be mistaken for foggy areas by this algorithm.
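Formula (1) and the resulting fog mask can be sketched directly in a few lines. This simplified version compares the semi-inverse and original images per RGB channel rather than in the h* channel of the Lch space used by Ancuti et al.; the difference threshold is an assumed parameter.

```python
import numpy as np

def semi_inverse(img):
    """Eq. (1): per-pixel, per-channel max of the image and its inverse."""
    return np.maximum(img, 1.0 - img)

def fog_mask(img, thresh=0.1):
    """Flag a pixel as foggy when the semi-inverse barely differs from the
    original in every channel (all channels bright, Ic(x) > 0.5)."""
    diff = np.abs(semi_inverse(img) - img)
    return (diff < thresh).all(axis=2)
```

For a bright foggy pixel the semi-inverse equals the original, so the difference is zero; any pixel with one dark channel gets that channel inverted and is flagged as clear.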

 

[14] C. Liu, X. Lu, S. Ji, and W. Geng, “A fog level detection method based on image HSV color histogram,” IEEE International Conference on Progress in Informatics and Computing, pp.373–377, May 2014.

 

[15] S. Bronte, L.M. Bergasa, and P.F. Alcantarilla, “Fog detection system based on computer vision techniques,” Proc. IEEE International Conference on Intelligent Transportation Systems, pp.1–6, Oct. 2009.

 

[16] S. Alami, A. Ezzine, and F. Elhassouni, “Local fog detection based on saturation and RGB-correlation,” Proc. IEEE International Conference Computer Graphics, Imaging and Visualization, pp.1–5, March 2016

 

[17] K. Jeong, K. Choi, D. Kim, et al., "Fast fog detection for de-fogging of road driving images," IEICE Transactions on Information & Systems, vol.101, no.2, pp.473–480, 2018.

 
