OpenCV Image Pyramids: Image Fusion

Image pyramid

Usually we work with an image of a single, fixed size. In some cases, however, we need the same image at several resolutions. For example, when searching for an object (such as a face) in an image, we cannot be sure at what size the object will appear. In that case we create a set of copies of the same image at different resolutions and search all of them. Such a set is called an "image pyramid": stacked with the highest-resolution image at the bottom and the lowest-resolution image at the top, it looks like a pyramid.

Gaussian pyramid: downsampling (shrinking)


Gaussian pyramid: upsampling (enlarging)


Laplacian pyramid


Example: Image Fusion

Goal

Fuse the left half of an apple image with the right half of an orange image to get a new "hybrid" fruit.

Code

1. First, read the apple and orange images and resize them to 448×448. Since the pyramid has 6 levels (five pyrDown steps), the width and height must divide evenly through five halvings, i.e. be multiples of 2^5 = 32; otherwise fractional sizes would occur during downsampling.

import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

img1 = cv.imread('Apple.png', 3)
A = cv.resize(img1, (448, 448), interpolation=cv.INTER_CUBIC)  # 6 levels, so the size must divide evenly through five halvings
img2 = cv.imread('Orange.png', 3)
B = cv.resize(img2, (448, 448), interpolation=cv.INTER_CUBIC)
print(A.shape)
print(B.shape)
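The 448×448 size chosen above divides evenly through five halvings, which is what keeps every pyrDown step exact. A quick check of the level sizes:

```python
# Verify that 448 halves exactly through five pyrDown steps.
size = 448
sizes = [size]
for _ in range(5):
    assert size % 2 == 0   # an odd size here would force rounding
    size //= 2
    sizes.append(size)
print(sizes)  # [448, 224, 112, 56, 28, 14]
```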

2. Generate the Gaussian pyramid of the apple image.

G = A.copy()
gpA = [G]
plt.subplot(231), plt.imshow(cv.cvtColor(gpA[0], cv.COLOR_BGR2RGB))
for i in range(5):
    G = cv.pyrDown(G)
    gpA.append(G)
    plt.subplot(2, 3, i+2), plt.imshow(cv.cvtColor(gpA[i+1], cv.COLOR_BGR2RGB))

plt.show()

Result:

3. Generate the Gaussian pyramid of the orange image.

G = B.copy()
gpB = [G]
plt.subplot(231), plt.imshow(cv.cvtColor(gpB[0], cv.COLOR_BGR2RGB))
for i in range(5):
    G = cv.pyrDown(G)
    gpB.append(G)
    plt.subplot(2, 3, i+2), plt.imshow(cv.cvtColor(gpB[i+1], cv.COLOR_BGR2RGB))

plt.show()

Result:

4. Generate the Laplacian pyramid of the apple image.

lpA = [gpA[5]]  # top level: the smallest Gaussian image
plt.subplot(231), plt.imshow(cv.cvtColor(lpA[0], cv.COLOR_BGR2RGB))
for i in range(5, 0, -1):
    GE = cv.pyrUp(gpA[i])
    L = cv.subtract(gpA[i-1], GE)  # detail lost between adjacent Gaussian levels
    lpA.append(L)
    plt.subplot(2, 3, 6-i+1), plt.imshow(cv.cvtColor(L, cv.COLOR_BGR2RGB))
plt.show()

Result:

5. Generate the Laplacian pyramid of the orange image.

lpB = [gpB[5]]
plt.subplot(231), plt.imshow(cv.cvtColor(lpB[0], cv.COLOR_BGR2RGB))
for i in range(5, 0, -1):
    GE = cv.pyrUp(gpB[i])
    L = cv.subtract(gpB[i-1], GE)
    lpB.append(L)
    plt.subplot(2, 3, 6-i+1), plt.imshow(cv.cvtColor(L, cv.COLOR_BGR2RGB))
plt.show()

Result:

6. Stitch each pyramid level: the left half comes from the apple pyramid, the right half from the orange pyramid.

LS = []
i = 1
for la, lb in zip(lpA, lpB):
    print(la.shape)
    rows, cols, dpt = la.shape
    ls = np.hstack((la[:, :int(cols/2)], lb[:, int(cols/2):]))  # left half of apple level + right half of orange level
    LS.append(ls)
    plt.subplot(2, 3, i), plt.imshow(cv.cvtColor(ls, cv.COLOR_BGR2RGB))
    i += 1
plt.show()

Result:

7. Reconstruct: start from the smallest stitched image, then repeatedly upsample and add the corresponding detail level.

ls_ = LS[0]  # start from the smallest (blurriest) stitched level
plt.subplot(2, 3, 1), plt.imshow(cv.cvtColor(ls_, cv.COLOR_BGR2RGB))
for i in range(1, 6):
    ls_ = cv.pyrUp(ls_)  # upsample: insert zeros, then Gaussian blur
    ls_ = cv.add(ls_, LS[i])  # add the detail level to restore resolution
    plt.subplot(2, 3, i+1), plt.imshow(cv.cvtColor(ls_, cv.COLOR_BGR2RGB))
plt.show()

Result:

8. Compare with direct stitching.

# For comparison: stitch the two halves directly, without pyramid blending
rows, cols, _ = A.shape
real = np.hstack((A[:, :int(cols/2)], B[:, int(cols/2):]))
plt.subplot(1, 2, 1), plt.imshow(cv.cvtColor(real, cv.COLOR_BGR2RGB))
plt.subplot(1, 2, 2), plt.imshow(cv.cvtColor(ls_, cv.COLOR_BGR2RGB))
plt.show()

Result:


Origin blog.csdn.net/qq_36758914/article/details/104023971