Installing a Python and OpenCV Development Environment on Windows 10

This article sets up the equivalent development environment on Windows, following the guide at https://www.pyimagesearch.com/2018/08/15/how-to-install-opencv-4-on-ubuntu/

1. Download the Python installer. On the first screen of the installer, remember to check the option that adds Python to the PATH environment variable. I used the 64-bit version.

(I did not use the Python from the Windows 10 Store, because I could not copy the OpenCV library file into that Python's site-packages directory.)

2. After installation, open PowerShell and run python --version; you should see the version you just installed (a fallback check is shown below in case the command is not found).
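
If python --version is not recognized because the PATH checkbox was missed, you can call the interpreter by its full path instead; the path below assumes the default per-user install location used later in this article:

C:\Users\Zgram\AppData\Local\Programs\Python\Python38\python.exe --version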

3. Run the following directly in PowerShell:

pip install imutils

pip install numpy

This step installs two of the required libraries; a quick import check is shown below.
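
As a sanity check that both packages installed correctly, you can import them from the command line (the exact numpy version printed depends on what pip installed):

python -c "import numpy, imutils; print('numpy', numpy.__version__, '- imutils imported OK')"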

4. Download the Windows release of OpenCV from https://opencv.org/releases/. After extracting it, copy "C:\Users\Zgram\Downloads\opencv\build\python\cv2\python-3.8\cv2.cp38-win_amd64.pyd" (note that the 3.8 here must match the Python version you installed) into C:\Users\Zgram\AppData\Local\Programs\Python\Python38\Lib\site-packages (that is, your Python installation directory\Lib\site-packages); see below for a way to locate this directory.
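
The paths above are specific to my machine. If you are not sure where your Python's site-packages directory is, you can ask Python itself; this is a general standard-library query, not something from the original guide:

python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])"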

5. Run the command below; it appears to be required before OpenCV can be imported. Without it, the import fails with the error:

DLL load failed while importing cv2: The specified module could not be found

pip install opencv-contrib-python
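
After this step you can confirm that cv2 now imports without the DLL error (the version string printed depends on the OpenCV release you installed):

python -c "import cv2; print(cv2.__version__)"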

Run the Python code below with python ball_tracking.py. If you have a webcam, it will open the camera and track the ball.

If you do not have a webcam, you can run python ball_tracking.py --video ball_tracking_example.mp4 instead; the download link for the example video is here.
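
Before running the full script, a minimal webcam smoke test can confirm the camera is reachable. This is only a sketch using the same imutils VideoStream API as the code below; press q to quit:

# webcam_test.py - check that the default webcam (src=0) can be read
from imutils.video import VideoStream
import cv2
import time

vs = VideoStream(src=0).start()  # open the default camera
time.sleep(2.0)                  # give the camera time to warm up

while True:
	frame = vs.read()            # grab the current frame
	if frame is None:
		break
	cv2.imshow("Webcam test", frame)
	if cv2.waitKey(1) & 0xFF == ord("q"):
		break

vs.stop()
cv2.destroyAllWindows()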

# USAGE
# python ball_tracking.py --video ball_tracking_example.mp4
# python ball_tracking.py

# import the necessary packages
from collections import deque
from imutils.video import VideoStream
import numpy as np
import argparse
import cv2
import imutils
import time

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video",
	help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=64,
	help="max buffer size")
args = vars(ap.parse_args())

# define the lower and upper boundaries of the "green"
# ball in the HSV color space, then initialize the
# list of tracked points
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)
pts = deque(maxlen=args["buffer"])

# if a video path was not supplied, grab the reference
# to the webcam
if not args.get("video", False):
	vs = VideoStream(src=0).start()

# otherwise, grab a reference to the video file
else:
	vs = cv2.VideoCapture(args["video"])

# allow the camera or video file to warm up
time.sleep(2.0)

# keep looping
while True:
	# grab the current frame
	frame = vs.read()

	# handle the frame from VideoCapture or VideoStream
	frame = frame[1] if args.get("video", False) else frame

	# if we are viewing a video and we did not grab a frame,
	# then we have reached the end of the video
	if frame is None:
		break

	# resize the frame, blur it, and convert it to the HSV
	# color space
	frame = imutils.resize(frame, width=600)
	blurred = cv2.GaussianBlur(frame, (11, 11), 0)
	hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

	# construct a mask for the color "green", then perform
	# a series of dilations and erosions to remove any small
	# blobs left in the mask
	mask = cv2.inRange(hsv, greenLower, greenUpper)
	mask = cv2.erode(mask, None, iterations=2)
	mask = cv2.dilate(mask, None, iterations=2)

	# find contours in the mask and initialize the current
	# (x, y) center of the ball
	cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
		cv2.CHAIN_APPROX_SIMPLE)
	cnts = imutils.grab_contours(cnts)
	center = None

	# only proceed if at least one contour was found
	if len(cnts) > 0:
		# find the largest contour in the mask, then use
		# it to compute the minimum enclosing circle and
		# centroid
		c = max(cnts, key=cv2.contourArea)
		((x, y), radius) = cv2.minEnclosingCircle(c)
		M = cv2.moments(c)
		center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

		# only proceed if the radius meets a minimum size
		if radius > 10:
			# draw the circle and centroid on the frame,
			# then update the list of tracked points
			cv2.circle(frame, (int(x), int(y)), int(radius),
				(0, 255, 255), 2)
			cv2.circle(frame, center, 5, (0, 0, 255), -1)

	# update the points queue
	pts.appendleft(center)

	# loop over the set of tracked points
	for i in range(1, len(pts)):
		# if either of the tracked points are None, ignore
		# them
		if pts[i - 1] is None or pts[i] is None:
			continue

		# otherwise, compute the thickness of the line and
		# draw the connecting lines
		thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
		cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)

	# show the frame to our screen
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF

	# if the 'q' key is pressed, stop the loop
	if key == ord("q"):
		break

# if we are not using a video file, stop the camera video stream
if not args.get("video", False):
	vs.stop()

# otherwise, release the camera
else:
	vs.release()

# close all windows
cv2.destroyAllWindows()

Reposted from blog.csdn.net/rzdyzx/article/details/103696985