How to reduce the color difference between stitched images to achieve a smooth color transition?

Reducing differences in tone and color shade across image data in machine learning

Keywords: eliminating tonal differences across image data / eliminating color-depth differences / pathology / machine learning / deep learning / artificial intelligence

Differences in color depth and hue between training images can affect both training and prediction results in machine learning. The method below reduces the impact of these color-depth/hue differences on training.

This example uses color images.


import numpy as np
import cv2
import histomicstk as htk

root = 'list.txt'        # text file listing the paths of the images to normalize
infer_path = './abc.png' # reference image whose color/tone the others should match
infer = cv2.imread(infer_path)

# Compute the LAB mean/std of the reference image once, before the loop.
# Note: cv2.imread returns BGR, while histomicstk examples generally work on RGB,
# so a cv2.cvtColor(..., cv2.COLOR_BGR2RGB) conversion may be appropriate here.
meanRef, stdRef = htk.preprocessing.color_conversion.lab_mean_std(infer)

for path in open(root):
    path = path.replace('\n', '')
    print('>>>', path)
    input_img = cv2.imread(path)
    # Reinhard color normalization: shift this image's color statistics to the reference's
    imNmzd = htk.preprocessing.color_normalization.reinhard(input_img, meanRef, stdRef)
    cv2.imwrite(path, imNmzd)

Here, histomicstk is a library that can be installed with pip install histomicstk; root points to the list of target files to traverse (see the earlier note I wrote), and infer_path is the reference image used for standardization (an image whose color/tone looks good when judged by eye).
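If you do not yet have list.txt, a minimal sketch for generating it (the ./images folder and the .png extension are assumptions on my part, not from the earlier note):

import glob

# Write one image path per line into list.txt (folder name and extension are assumed)
with open('list.txt', 'w') as f:
    for p in sorted(glob.glob('./images/*.png')):
        f.write(p + '\n')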

I am stitching multiple images, and I want to improve the color transition between them. Here are two of the images:

  • http://imgur.com/nG5I0nr
  • http://imgur.com/EZFzNeL

This is the stitched image:

  • http://imgur.com/C23iOqJ

You can see that the color transition is very poor. I want them to look like a single image (or at least close to it).

    My current approach:

    I first use filter2D to remove the seams, then use a Laplacian transform to get a mask of the region where the images join, and then use this mask for inpainting:

    Seam removal:
    # 5x5 averaging (box) kernel: blurs data_map to smooth across the seam
    kernel = np.ones((5, 5), np.float32) / 25
    seam_removal = cv2.filter2D(data_map, -1, kernel)
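    The Laplacian-mask and inpainting step is not shown above; a minimal sketch of what it might look like (the threshold, dilation size, inpaint radius, and the stitched variable name are all assumptions on my part):

    # Hypothetical sketch: build a seam mask from the Laplacian, dilate it, then inpaint over it
    gray = cv2.cvtColor(stitched, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    mask = (np.abs(lap) > 10).astype(np.uint8) * 255    # strong edges along the seam (assumed threshold)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))  # the "dilation" mentioned below
    repaired = cv2.inpaint(stitched, mask, 3, cv2.INPAINT_TELEA)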
    

    This is the mask of the joint region that I obtained; after some dilation, I used it for inpainting:

  • http://imgur.com/L3tmlGy

    However, as you can see in the final image, this does not improve the blending at all.

    Best answer

    I don't know if this is a good idea, but I think you can use K-means to "adjust" the colors of the images.

    First, convert the image to RGB:

    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    

    Second, use the K-means algorithm on the first image to find its "cluster colors":
    from sklearn.cluster import KMeans
    # Flatten the image to an (N_pixels, 3) array before clustering
    pixels1 = img1.reshape(-1, 3)
    clt = KMeans(n_clusters=20)
    clt.fit(pixels1)
    colors = clt.cluster_centers_
    

    Third, transform the colors in image 2 using the cluster colors found in step 2. You can refer to this tutorial.
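    A minimal sketch of this mapping, assuming img2 is the second image (already converted to RGB) and that each pixel is simply replaced by its nearest cluster color (the nearest-center replacement is my assumption, not necessarily what the tutorial does):

    # Replace every pixel of img2 with its nearest cluster color learned from img1 (assumed mapping)
    pixels2 = img2.reshape(-1, 3)
    labels = clt.predict(pixels2)            # index of the nearest cluster center for each pixel
    img2_mapped = colors[labels].reshape(img2.shape).astype(np.uint8)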

    Finally, just merge the two images into one.
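    For the merge itself, the simplest sketch is a side-by-side concatenation (this assumes the two images have the same height and that no overlap blending is needed):

    # Place the first image and the color-matched second image side by side
    merged = cv2.hconcat([img1, img2_mapped])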

    There is also a second method: you can change the tones of the two images so that they are the same. You can take a look at that as well.
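    One way to read this second suggestion, borrowing the histomicstk code from earlier in this post, is to Reinhard-normalize the second image against the first before stitching (treating image 1 as the color reference is my assumption):

    import histomicstk as htk

    # Use image 1 as the color/tone reference and normalize image 2 to match it
    mean_ref, std_ref = htk.preprocessing.color_conversion.lab_mean_std(img1)
    img2_norm = htk.preprocessing.color_normalization.reinhard(img2, mean_ref, std_ref)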


Origin blog.csdn.net/c2a2o2/article/details/110929297