AI Accompanies You to Watch "Animal World"

Artificial Intelligence Recognizes Small Animals in Video

Project address (a fork): AI Accompanies You to Watch "Animal World"

Bilibili: (video link)

GitHub: (repository link)

Principle: use PaddleHub's mobilenet_v2_animals model to recognize the animals in each frame of the video, then generate a new video with the recognition labels drawn on it.

Remember "Animal World" on China Central Television? Childhood memories~~

Each episode has a new theme, so you never tire of it. And now that AI is so hot, how could there be no AI!!!

Let me show you how about fifty lines of code can take you into AI's "Animal World".


Before starting, a reminder: some of the imported Python packages may not be installed in your environment.

If a package is missing, install it manually with `pip install <package-name>`; the cmd demonstration is not shown here.
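As a sketch of the setup step, the commands below assume the usual PyPI distribution names for the packages imported in this article (verify the names against PyPI if your environment differs):

```shell
# Install the third-party packages used in this article.
pip install opencv-python          # provides cv2
pip install paddlepaddle paddlehub # PaddleHub and its PaddlePaddle backend
pip install numpy moviepy pillow   # numpy, moviepy.editor, PIL
```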

1. Import the packages

These are the commonly used OpenCV, PaddleHub, numpy, and time, plus moviepy and PIL for editing the video and drawing on frames.

import cv2
import paddlehub as hub
import numpy
import time
from moviepy.editor import *
from PIL import Image, ImageDraw, ImageFont

2. Image type conversion

Because we are going to draw the AI recognition result onto the video next, I save each frame of the video as an image, convert it into an object that can be drawn on, and add the recognition result, which makes the whole thing easy to work with.

# --------------------------------- Convert image ------------------------------
def cv2ImgAddText(img, text, left, top, textColor=(0, 255, 0), textSize=20):
    if isinstance(img, numpy.ndarray):  # check whether this is an OpenCV image (numpy array)
        img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    # create an object that can draw on the given image
    draw = ImageDraw.Draw(img)
    # font settings
    fontStyle = ImageFont.truetype(
        "font/simsun.ttc", textSize, encoding="utf-8")
    # draw the text
    draw.text((left, top), text, textColor, font=fontStyle)
    # convert back to OpenCV (BGR) format
    return cv2.cvtColor(numpy.asarray(img), cv2.COLOR_RGB2BGR)
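A quick note on the two cvtColor calls above: OpenCV stores images in BGR channel order while PIL expects RGB, so the function swaps channels on the way in and on the way out. The swap itself can be sketched in pure numpy (this snippet is illustrative only, not part of the article's code):

```python
import numpy

# A 1x2 "image": one blue pixel and one red pixel, in OpenCV's BGR order.
img_bgr = numpy.array([[[255, 0, 0], [0, 0, 255]]], dtype=numpy.uint8)

# Reversing the channel axis converts BGR -> RGB,
# which is what cv2.cvtColor(img, cv2.COLOR_BGR2RGB) does.
img_rgb = img_bgr[:, :, ::-1]

print(img_rgb.tolist())  # [[[0, 0, 255], [255, 0, 0]]]
```

Applying the same reversal twice returns the original image, which is why the function can safely convert back with COLOR_RGB2BGR at the end.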

3. Read the video frames and write the new video

Read every frame of the video, run recognition once every 5 frames, add the latest result to each frame, and write the output video while reading.

# ------------------------------ Read video frames ------------------------------
def open(path):
	cap = cv2.VideoCapture(path)      # open the source video
	fps = cap.get(cv2.CAP_PROP_FPS)   # read the frame rate
	print(fps)
	fourcc = cv2.VideoWriter_fourcc(*'mp4v')  # avi: DIVX, mp4: mp4v
	videoWriter = cv2.VideoWriter('saveVideo.mp4', fourcc, fps, (1280, 720))
									# output path, codec, frame rate, frame size (must match the frames)
	print(cap.isOpened())
	success = True
	i = 1
	while success:
		success, frame = cap.read()
		if not success:
			break
		cv2.imwrite("video.jpg", frame)
		if i % 5 == 1:   # classify once every 5 frames
			result = classifier.classification(images=[cv2.imread(r'video.jpg')])
			print(result)
		img = cv2ImgAddText(frame, str(result[0]), 50, 130, (255, 255, 255), 25)
									# image, text, x, y, color, font size
		videoWriter.write(img)   # write the frame into the output video
		# cv2.imshow("animals", img)
		if cv2.waitKey(100) & 0xff == ord('1'):  # press 1 to quit
			break
		i += 1
	cap.release()             # release the capture
	cv2.destroyAllWindows()   # close any windows
	videoWriter.release()     # finalize the output video
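The frame-skipping counter above can be checked in isolation. This is a minimal pure-Python sketch (no OpenCV needed) of which 1-based frame indices trigger classification when we classify on `i % 5 == 1`:

```python
def frames_to_classify(total_frames, every=5):
    """Return the 1-based frame indices on which classification runs,
    mirroring the counter in open(): classify when i % every == 1."""
    picked = []
    i = 1
    for _ in range(total_frames):
        if i % every == 1:
            picked.append(i)
        i += 1
    return picked

print(frames_to_classify(12))  # [1, 6, 11]
```

So for a 12-frame clip, classification runs on frames 1, 6, and 11; every other frame simply reuses the most recent result when the label is drawn.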

4. Run the script to identify the animals in the video

if __name__=='__main__':
	classifier = hub.Module(name="mobilenet_v2_animals")
	path = r"C:\Users\Skr-Skr-Skr\Desktop\dw1_1.mp4"   # path to the source video
	video = VideoFileClip(path)
	audio = video.audio
	audio.write_audiofile('test.mp3')   # extract the source video's audio
	open(path)

After running the script, test.mp3 (the extracted audio) and saveVideo.mp4 (the labeled video) are our AI-recognized results.

 

Q: Why extract an mp3?

Answer: Because the final video is rebuilt from the individual frames used for AI recognition, it has no audio, so the original video's audio has to be extracted separately (mp3 is just one audio format; others work too). Combining the new video with the audio gives the result we want.

 

Q: Python is so powerful; can't it merge the video and audio itself?

Answer: I tried sox for merging before and couldn't get it working after a whole night (being a noob is the original sin). If you are interested, give it a try yourself, hahaha.
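For readers who want to try the merge in Python anyway: moviepy, which this article already uses for audio extraction, can also attach the extracted audio back onto the silent annotated video. A minimal sketch, assuming the saveVideo.mp4 and test.mp3 files produced by the steps above exist (the function name and output path here are made up for illustration):

```python
def mux_audio(video_path, audio_path, out_path):
    """Attach an audio track to a silent video using moviepy."""
    from moviepy.editor import VideoFileClip, AudioFileClip  # imported lazily
    clip = VideoFileClip(video_path).set_audio(AudioFileClip(audio_path))
    clip.write_videofile(out_path)

# e.g. mux_audio('saveVideo.mp4', 'test.mp3', 'final.mp4')
```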

Personal homepage: I reached Diamond level on AI Studio and lit up 9 badges; let's follow each other~ Ula__

Origin blog.csdn.net/qq_38758774/article/details/114868301