Simple panorama stitching: code modification

Foreword

Original code: github

I'm writing this post because I ran into problems when modifying and running the code, so I rebuilt the environment from scratch and redid the modifications.

1. Create a virtual environment

conda create -n newCv python=3.7.0


2. Activate the virtual environment

conda activate newCv


3. Install opencv and corresponding packages

3.1 View the installable opencv version

To see which opencv-python versions are installable, run pip with an empty version specifier; the resulting error message lists all available versions:

pip install opencv-python==

Then install a specific version:

pip install opencv-python==3.4.2.16

Test the installation, e.g. by importing cv2 and printing cv2.__version__ to confirm the installed version.

3.2 Install the corresponding package

pip install imutils

3.3 Install opencv-contrib-python

If running the code raises an error (shown in the original screenshot), the fix is to install the contrib package with a version matching opencv-python:

pip install opencv-contrib-python==3.4.2.16

4. Add launch.json

{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Current File",
      "type": "python",
      "request": "launch",
      "program": "${file}",
      "console": "integratedTerminal",
      "justMyCode": true,
      "args": ["-f", "images/bryce_left_01.png", "-s", "images/bryce_right_01.png"]
    }
  ]
}
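For these arguments to take effect, the script presumably parses them with argparse. Below is a minimal sketch; the long option names `--first`/`--second` are an assumption, not confirmed from the repo:

```python
import argparse

# Build a parser the way the stitching script is assumed to:
# -f/--first is the left image, -s/--second the right image.
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--first", required=True,
                help="path to the first (left) image")
ap.add_argument("-s", "--second", required=True,
                help="path to the second (right) image")

# Parse the same argument list that launch.json passes in.
args = vars(ap.parse_args(["-f", "images/bryce_left_01.png",
                           "-s", "images/bryce_right_01.png"]))
print(args["first"], args["second"])
```

With this shape, F5 in VS Code hands the two image paths from launch.json straight to the script.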

Press F5 to run the script with these arguments.

5. Running results

Original images, the feature-matching visualization, and the final stitching result are shown in the screenshots (omitted here).

6. Walking through the stitching code


The main script is actually quite simple: it delegates to the encapsulated stitcher.stitch method, which returns the stitched result and, optionally, the feature-match visualization.
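The ratio parameter of stitch (default 0.75, visible in its signature below) drives Lowe's ratio test inside matchKeypoints, which is not reproduced in this post. A minimal numpy sketch of that test, independent of OpenCV, with toy descriptors I made up for illustration:

```python
import numpy as np

def ratio_test(descA, descB, ratio=0.75):
    """Lowe's ratio test: keep a match only when the nearest descriptor
    in descB is clearly closer than the second nearest. A simplified
    stand-in for the kNN matching matchKeypoints does via OpenCV."""
    matches = []
    for i, d in enumerate(descA):
        # Euclidean distance from descriptor i to every descriptor in B.
        dists = np.linalg.norm(descB - d, axis=1)
        # Indices of the two nearest neighbours.
        first, second = np.argsort(dists)[:2]
        if dists[first] < ratio * dists[second]:
            matches.append((i, int(first)))
    return matches

# Toy descriptors: A[0] has one unambiguous match in B; A[1] is
# equally close to two candidates, so the ratio test rejects it.
descA = np.array([[1.0, 0.0], [0.5, 0.5]])
descB = np.array([[1.0, 0.1], [0.0, 1.0], [0.45, 0.55], [0.55, 0.45]])
print(ratio_test(descA, descB))  # → [(0, 0)]
```

Raising ratio toward 1.0 admits more (and noisier) matches; lowering it keeps only the most distinctive ones.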

6.1 stitcher.stitch method

	def stitch(self, images, ratio=0.75, reprojThresh=4.0,
		showMatches=False):
		# unpack the images, then detect keypoints and extract
		# local invariant descriptors from them
		(imageB, imageA) = images
		(kpsA, featuresA) = self.detectAndDescribe(imageA)
		(kpsB, featuresB) = self.detectAndDescribe(imageB)

		# match features between the two images
		M = self.matchKeypoints(kpsA, kpsB,
			featuresA, featuresB, ratio, reprojThresh)

		# if the match is None, then there aren't enough matched
		# keypoints to create a panorama
		if M is None:
			return None

		# otherwise, apply a perspective warp to stitch the images
		# together
		(matches, H, status) = M
		result = cv2.warpPerspective(imageA, H,
			(imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
		result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

		# check to see if the keypoint matches should be visualized
		if showMatches:
			vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches,
				status)

			# return a tuple of the stitched image and the
			# visualization
			return (result, vis)

		# return the stitched image
		return result
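The last few lines of stitch warp imageA onto a canvas wide enough for both images and then paste imageB into the left portion. The canvas bookkeeping can be sketched with plain numpy (no OpenCV); here an identity homography is assumed, so the "warp" is just a copy:

```python
import numpy as np

# Toy stand-ins for the two photos (height x width x channels).
imageA = np.full((4, 6, 3), 2, dtype=np.uint8)   # right image
imageB = np.full((4, 5, 3), 7, dtype=np.uint8)   # left image

# cv2.warpPerspective renders imageA onto a canvas of size
# (wA + wB, hA); with an identity H that is a straight copy into
# the left part of the canvas.
result = np.zeros((imageA.shape[0],
                   imageA.shape[1] + imageB.shape[1], 3), dtype=np.uint8)
result[:, :imageA.shape[1]] = imageA

# The line `result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB`
# then overwrites the left region with the untouched left image.
result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

print(result.shape)  # → (4, 11, 3): hA x (wA + wB) x 3
```

In the real pipeline H is not identity, so the warped imageA lands to the right of (and overlapping) imageB, and the paste covers the seam on the left side.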


Origin: blog.csdn.net/m0_47146037/article/details/126840842