Weighted average fusion to eliminate the seam in image stitching (Python code)

Here is an algorithm that uses weighted average fusion to eliminate the seam left by image stitching.

Original source: https://blog.csdn.net/xiaoxifei/article/details/103045958

As shown in the figure below, if two images are stitched together directly, a visible seam appears at the junction. The seam is caused by the difference in the light fields of the two images: although the images are spatially continuous, the actual pixel values in the overlapping region differ. For two images with an overlapping region, stitching is usually driven by feature-point matching, but weighted average fusion is the simplest and most effective way to remove the seam itself. The Python code for this algorithm is given below.
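To make the idea concrete, here is a minimal sketch of the per-pixel blend applied inside the overlap region (the names region1, region2 and w are placeholders, not part of the original post): each output pixel is a weighted average, fused = (1 - w) * img1 + w * img2, where w rises smoothly from 0 to 1 across the overlap so the result transitions from one image to the other without a hard edge.

    import numpy as np

    def blend_overlap(region1, region2, w):
        # region1, region2: overlapping strips of the two images, same shape (rows, overlap)
        # w: 1-D array of weights in [0, 1], one per overlap column, broadcast over the rows
        return (1.0 - w) * region1 + w * region2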

The first step is to compute the weights. Here the weights are generated with a sigmoid (S-shaped) curve; their distribution across the overlap is shown in the figure below.
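As a quick check of the weight shape, the sigmoid values can be printed directly. This is only an illustration, assuming the calWeight function defined later in the post; the overlap width of 128 and k = 0.05 mirror the values used there.

    import numpy as np

    def calWeight(d, k):
        # Sigmoid weights over an overlap of width d; k controls the steepness.
        x = np.arange(-d / 2, d / 2)
        return 1 / (1 + np.exp(-k * x))

    w = calWeight(128, 0.05)
    print(w[0], w[64], w[-1])  # roughly 0.04, 0.5, 0.96: a smooth ramp from near 0 to near 1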

The second step is the fusion itself. The fused result is shown below; the seam is no longer visible.

 

The code is as follows:

 
    import cv2
    import numpy as np


    def calWeight(d, k):
        '''
        :param d: width of the overlapping (fusion) region
        :param k: steepness parameter of the weight curve
        :return: 1-D array of sigmoid weights over the overlap
        '''
        x = np.arange(-d / 2, d / 2)
        y = 1 / (1 + np.exp(-k * x))
        return y


    def imgFusion(img1, img2, overlap, left_right=True):
        '''
        Weighted image fusion.
        :param img1: first (left or top) image, single channel, float
        :param img2: second (right or bottom) image, same shape as img1
        :param overlap: length of the overlapping region in pixels
        :param left_right: True for left-right fusion, False for top-bottom fusion
        :return: fused image
        '''
        # For now only fusion along a single axis is considered.
        w = calWeight(overlap, 0.05)  # k = 0.05 is a hyperparameter

        if left_right:  # left-right fusion
            row, col = img1.shape
            img_new = np.zeros((row, 2 * col - overlap))
            img_new[:, :col] = img1
            w_expand = np.tile(w, (row, 1))  # expand the weights to every row
            img_new[:, col - overlap:col] = (1 - w_expand) * img1[:, col - overlap:col] + w_expand * img2[:, :overlap]
            img_new[:, col:] = img2[:, overlap:]
        else:  # top-bottom fusion
            row, col = img1.shape
            img_new = np.zeros((2 * row - overlap, col))
            img_new[:row, :] = img1
            w = np.reshape(w, (overlap, 1))
            w_expand = np.tile(w, (1, col))  # expand the weights to every column
            img_new[row - overlap:row, :] = (1 - w_expand) * img1[row - overlap:row, :] + w_expand * img2[:overlap, :]
            img_new[row:, :] = img2[overlap:, :]
        return img_new
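The imgFusion above assumes single-channel images (img1.shape unpacks into exactly two values). For a color image, one simple option is to run the same fusion on each channel separately; the helper below is only a sketch under that assumption and is not part of the original post.

    def imgFusionColor(img1, img2, overlap, left_right=True):
        # Hypothetical helper (not in the original post): fuse a 3-channel image
        # by applying imgFusion to each channel independently, then restacking.
        channels = [imgFusion(img1[:, :, c], img2[:, :, c], overlap, left_right)
                    for c in range(img1.shape[2])]
        return np.stack(channels, axis=-1)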

 
    if __name__ == "__main__":
        img1 = cv2.imread(r".\test_new1.png", cv2.IMREAD_UNCHANGED)
        img2 = cv2.imread(r".\test_new2.png", cv2.IMREAD_UNCHANGED)
        # normalize both images to [0, 1] before fusion
        img1 = (img1 - img1.min()) / np.ptp(img1)
        img2 = (img2 - img2.min()) / np.ptp(img2)
        img_new = imgFusion(img1, img2, overlap=128, left_right=False)
        # scale back to 16-bit and save
        img_new = np.uint16(img_new * 65535)
        cv2.imwrite(r'.\test_new3.png', img_new)
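The demo above fuses the images top and bottom (left_right=False). For a left-right stitch the call would look like the sketch below; the file names are placeholders, and the two inputs are assumed to have the same shape, as the function requires.

    # Hypothetical left-right example (file names are placeholders):
    left = cv2.imread(r".\left.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
    right = cv2.imread(r".\right.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
    left = (left - left.min()) / np.ptp(left)
    right = (right - right.min()) / np.ptp(right)
    fused = imgFusion(left, right, overlap=128, left_right=True)
    cv2.imwrite(r".\fused_lr.png", np.uint16(fused * 65535))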

Running the script produces the seamless result shown above.


Origin blog.csdn.net/c2a2o2/article/details/111039596