Hands-on | OpenCV image watermark removal example (with source code)


Source | https://davidsteccieblog.blogspot.com/2017/10/removing-watermarks.html?view=flipcard

Translation and editing | OpenCV and AI Deep Learning

Introduction

This article shares an example of using OpenCV to remove an image watermark. The approach used in the code is worth learning from.

Background introduction

The author was preparing a lecture on Ballintoor Castle when he came across an original painting from 1860 on the website of the Royal Institute of British Architects (RIBA). He thought the painting was fantastic, but it had a watermark. Since he has a background in image processing, he decided it would be fun to try writing an algorithm to remove the watermark, and the end result turned out rather well!

Implementation steps and results

Let's take a look at the original image containing the watermark:

[Image: original painting with the watermark]

There are basically 3 different areas in the image:

  • Untouched area outside the letters

  • Black lines around the letters

  • Interior of the letters, with reduced color and contrast

At first glance, the contrast between the letters (the watermark) and the background is low. After some experimentation, the S (saturation) channel of the HSV color space turned out to highlight the letters very well:

[Image: S channel of the HSV image, with the letters clearly visible]
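For reference, here is a minimal sketch of extracting the saturation channel (the image loading and filename are illustrative assumptions; image_saturation matches the variable name used in the snippet further below):

import cv2

# Load the watermarked painting and convert it to the HSV color space
image = cv2.imread("watermarked_painting.jpg")    # illustrative filename
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
image_hue, image_saturation, image_value = cv2.split(hsv)

# The saturation (S) channel makes the watermark letters stand out
cv2.imwrite("saturation_channel.png", image_saturation)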

Threshold the S channel, then label the connected components:

[Image: connected components of the thresholded S channel]

import cv2

# make_random_colour_map_with_stats, display_and_output_image and
# coloured_image_to_edge_mark are helper functions defined in the full source.

connectivity = 4
ret, thresh_s = cv2.threshold(image_saturation, 42, 255, cv2.THRESH_BINARY_INV)  # 50 too high, 25 too low
output = cv2.connectedComponentsWithStats(thresh_s, connectivity, cv2.CV_32S)
blob_image = output[1]
stats = output[2]
pop_thresh = 50
big_blob_colour_map = make_random_colour_map_with_stats(stats, pop_thresh)
all_blob_colour_map = make_random_colour_map_with_stats(stats)
big_blob_coloured_image = big_blob_colour_map[blob_image]    # output
all_blob_coloured_image = all_blob_colour_map[blob_image]    # output
display_and_output_image("big_blob_coloured_image", big_blob_coloured_image)
display_and_output_image("all_blob_coloured_image", all_blob_coloured_image)
letter_mask = coloured_image_to_edge_mark(big_blob_coloured_image)

Filter the connected components by size and keep only the letter regions:

[Image: filtered connected components, letters only]
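The size-based filtering happens inside the author's make_random_colour_map_with_stats helper (see the full source). Purely as an illustration, a binary mask of the large components could also be built directly from the stats like this (variable names follow the snippet above; this is not the author's exact code):

import cv2
import numpy as np

# Keep only components whose pixel area exceeds pop_thresh (label 0 is the background)
big_labels = [label for label in range(1, stats.shape[0])
              if stats[label, cv2.CC_STAT_AREA] >= pop_thresh]

# Binary mask of the large (letter) components, roughly the author's letter mask
letter_mask = np.isin(blob_image, big_labels).astype(np.uint8) * 255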

The letter edge contours can be obtained by the morphological gradient method:

[Image: letter edge contours obtained with the morphological gradient]
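A minimal sketch of that morphological gradient step, assuming letter_mask is the binary letter mask from the previous step (the 3x3 kernel size is an illustrative guess, not the author's value):

import cv2
import numpy as np

# Morphological gradient = dilation minus erosion, leaving only the letter outlines
kernel = np.ones((3, 3), np.uint8)
edge_mask = cv2.morphologyEx(letter_mask, cv2.MORPH_GRADIENT, kernel)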

The result at this stage may look only slightly different from the original watermarked image, but the black border is now the "edge mask" that was created above. Inside the letters, the contrast of the black-and-white content was increased to exactly match the contrast of the surrounding sepia image. This is achieved with a histogram technique that matches the intensity histogram of the region inside the letters to that of the region outside them.
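The exact histogram code is in the linked source. As a rough, NumPy-only illustration of the idea, a CDF-based histogram matching of the letter interior to the surrounding region could look like this (function and variable names are mine, not the author's):

import numpy as np

def match_histogram(source, reference):
    """Remap the values in `source` so their distribution matches `reference`."""
    src_vals, src_counts = np.unique(source, return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, pick the reference intensity at the same CDF position
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return np.interp(source, src_vals, matched_vals)

In the pipeline described here, this kind of matching is applied to the intensity values inside the letter mask, with the pixels outside the letters serving as the reference distribution.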


In addition to the "edge mask", there is also a "letter mask" covering the interior of the letters, essentially the filled-in letter regions from earlier.

[Image: letter mask covering the letter interiors]

Inside the letters, the missing hue and saturation information needs to be restored. In the edge-mask region, hue, saturation and intensity all need to be restored. Inpainting can be used to fill in these missing areas; it is the same technique used to remove scratches from old photos.
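OpenCV exposes this directly as cv2.inpaint. A minimal sketch (the 3-pixel radius and the choice of cv2.INPAINT_TELEA over cv2.INPAINT_NS are illustrative assumptions):

import cv2

# Fill the masked pixels from their surroundings (mask must be 8-bit, single channel)
inpainted = cv2.inpaint(image, edge_mask, 3, cv2.INPAINT_TELEA)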


In simple terms, hue and saturation are inpainted inside the letter mask, while intensity is inpainted only inside the edge mask. The eye is relatively insensitive to hue and saturation, so a fair amount of inaccurate guessing over the large letter areas is not a problem. Inpainting the intensity over the whole letter mask, however, looks terrible, because the eye is very sensitive to intensity. The intensity is therefore inpainted over the smallest possible area (the edge mask only), and the intensity inside the letters is kept and corrected with the histogram matching described above (see the source for the histogram-based color migration).
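Putting the pieces together, here is a hedged sketch of that per-channel strategy, reusing the channel and mask names from the sketches above (a reconstruction of the described approach, not the author's exact code):

import cv2

# Hue and saturation: inpaint over the whole letter mask (the eye is forgiving here)
hue_filled = cv2.inpaint(image_hue, letter_mask, 3, cv2.INPAINT_TELEA)
sat_filled = cv2.inpaint(image_saturation, letter_mask, 3, cv2.INPAINT_TELEA)

# Intensity: histogram-match the letter interiors first, then inpaint only the
# thin edge mask, where the eye would notice errors the most
val_matched = image_value.copy()
inside = image_value[letter_mask > 0]
outside = image_value[letter_mask == 0]
val_matched[letter_mask > 0] = match_histogram(inside, outside).astype("uint8")
val_filled = cv2.inpaint(val_matched, edge_mask, 3, cv2.INPAINT_TELEA)

# Recombine the channels and convert back to BGR
restored = cv2.cvtColor(cv2.merge([hue_filled, sat_filled, val_filled]), cv2.COLOR_HSV2BGR)
cv2.imwrite("restored_painting.png", restored)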

Final processing result:

[Image: final result after watermark removal]

The complete source code can be obtained from the public account [OpenCV and AI Deep Learning]: reply "remove watermark" to get it.


Origin: blog.csdn.net/stq054188/article/details/121981533