Java audio and video processing: JavaCV

Table of contents

 

Introduction

Maven

Software Environment

JavaCV-Examples

OpenCV Cookbook Examples

Overview

Example

OpenCV documentation

How to use the JavaCV examples

Organization of sample code

Example list

Why Scala?

Learning resources

Simple image processing code example

1. Open and save a picture

2. Draw a straight line

3. Draw a circle

4. Draw a polyline

5. Add text watermark

6. Crop and partially enlarge

7. Face detection

Simple video processing code example

1. Open the video file

2. Capture the frames at the specified time of the video and save them as images

3. Record screen

4. Add watermark to video


Introduction

JavaCV uses wrappers from the JavaCPP Presets of libraries commonly used by researchers in the field of computer vision (OpenCV, FFmpeg, libdc1394, FlyCapture, Spinnaker, OpenKinect, librealsense, CL PS3 Eye Driver, videoInput, ARToolKitPlus, flandmark, Leptonica and Tesseract), and provides utility classes that make their functionality easier to use on the Java platform, including Android.

JavaCV also features hardware-accelerated full-screen image display (CanvasFrame and GLCanvasFrame), easy-to-use methods for executing code in parallel on multiple cores (Parallel), user-friendly geometric and color calibration of cameras and projectors (GeometricCalibrator, ProCamGeometricCalibrator, ProCamColorCalibrator), feature point detection and matching (ObjectFinder), a set of classes that implement direct image alignment for projector-camera systems (mainly GNImageAligner, ProjectiveTransformer, ProjectiveColorTransformer, ProCamTransformer and ReflectanceInitializer), a blob analysis package (Blobs), and miscellaneous functionality in the JavaCV class. Some of these classes also have OpenCL and OpenGL counterparts, whose names end with CL or start with GL, for example JavaCVCL and GLCanvasFrame.

To learn how to use the API, as documentation is currently lacking, please refer to the Sample Usage section below as well as the sample programs, including two for Android (FacePreview.java and RecordActivity.java), which can also be found in the samples directory. You can also refer to the source code of ProCamCalib and ProCamTracker, as well as examples ported from the OpenCV2 Cookbook and related wiki pages.

Maven

  <dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>javacv-platform</artifactId>
    <version>1.5.8</version>
  </dependency>

Software Environment

To use JavaCV, you first need to download and install the following software:

Java SE 7 or newer implementation:

Additionally, although not always required, some features of JavaCV also rely on:

Finally, make sure everything uses the same bitness: 32-bit and 64-bit modules cannot be mixed under any circumstances.

JavaCV-Examples

This project contains examples of using JavaCV and other library wrappers from the javacpp-presets project.

  • OpenCV_Cookbook - JavaCV version of the examples provided in Robert Laganière's book "OpenCV Computer Vision Application Programming Cookbook". The original examples in the cookbook were written in C++; here they are translated to use the JavaCV API.

  • Example using flandmark library.

  • Example of using the JVM wrapper for the FLIR/Point Grey FlyCapture SDK.

  • Example of using the JVM wrapper for the FLIR/Point Grey Spinnaker SDK.

OpenCV Cookbook Examples

Overview

The OpenCV Cookbook examples illustrate the use of OpenCV and JavaCV. The examples started as ports of the C++ code from Robert Laganière's book "OpenCV 2 Computer Vision Application Programming Cookbook" and were later updated for the 4th edition, "OpenCV Computer Vision Application Programming Cookbook, 4th Edition". The examples in the book use the OpenCV C++ API; here they are converted to use the JavaCV and JavaCPP-Presets APIs.

OpenCV (Open Source Computer Vision) is a library of hundreds of algorithms for computer vision and video analysis. OpenCV can be run on the JVM in two ways. The first is the official Java wrapper provided by OpenCV. The second is a wrapper based on JavaCPP (a C++ wrapper engine for the JVM), called the OpenCV JavaCPP presets. There are also JavaCPP presets for other computer-vision-related libraries, such as FFmpeg, libdc1394, PGR FlyCapture, OpenKinect, videoInput, ARToolKitPlus, flandmark, etc. JavaCV combines the libraries from the JavaCPP presets and adds some extra features that make them easier to use on the JVM.

The OpenCV Cookbook Examples project demonstrates the use of OpenCV through JavaCV and the OpenCV JavaCPP presets. The current version has been updated to match the second edition of Robert Laganière's book, "OpenCV Computer Vision Application Programming Cookbook, 2nd Edition", and is intended to be used with OpenCV 4 (JavaCV 1.5).

Although the code in the examples is written primarily in Scala, one of the leading JVM languages, it can easily be converted to Java and other languages running on the JVM, such as Groovy. The use of the JavaCV API is very similar in most JVM languages. Some of the examples are also provided in a Java version.

Example

Below is a quick preview comparing the original C++ example to the Scala and Java code using the JavaCV wrapper.

Below is the original C++ example, which opens an image (without error checking), displays it in a window, applies a Laplacian filter, displays the result, and waits 5 seconds before exiting.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgcodecs/imgcodecs.hpp>
#include <opencv2/imgproc/imgproc.hpp>

void display(const cv::Mat& image, const char* caption);

int main() {
    // Read an image
    cv::Mat src = cv::imread("data/boldt.jpg");
    display(src, "Input");

    // Apply Laplacian filter
    cv::Mat dest;
    cv::Laplacian(src, dest, src.depth(), 1, 3, 0, cv::BORDER_DEFAULT);
    display(dest, "Laplacian");

    // Wait for a key press for up to 5000 ms
    cv::waitKey(5000);

    return 0;
}

//---------------------------------------------------------------------------

void display(const cv::Mat& image, const char* caption) {
    // Create an image window with the given caption
    cv::namedWindow(caption);

    // Show image in the window
    cv::imshow(caption, image);
}

Here is the C++ example above converted to Scala using the JavaCV wrapper:

import javax.swing._
import org.bytedeco.javacv._
import org.bytedeco.opencv.global.opencv_core._
import org.bytedeco.opencv.global.opencv_imgcodecs._
import org.bytedeco.opencv.global.opencv_imgproc._
import org.bytedeco.opencv.opencv_core._

object MyFirstOpenCVApp extends App {

  // Read an image.
  val src = imread("data/boldt.jpg")
  display(src, "Input")

  // Apply Laplacian filter
  val dest = new Mat()
  Laplacian(src, dest, src.depth(), 1, 3, 0, BORDER_DEFAULT)
  display(dest, "Laplacian")

  //---------------------------------------------------------------------------

  /** Display `image` with given `caption`. */
  def display(image: Mat, caption: String): Unit = {
    // Create image window named "My Image."
    val canvas = new CanvasFrame(caption, 1)

    // Request closing of the application when the image window is closed.
    canvas.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE)

    // Convert from OpenCV Mat to Java Buffered image for display
    val converter = new OpenCVFrameConverter.ToMat()
    // Show image on window
    canvas.showImage(converter.convert(image))
  }
}

Now the same example expressed in Java. Note that the use of the JavaCV API is essentially the same in the Scala and Java code. The main practical difference is that the Java code is more verbose: you have to provide an explicit type for each variable, whereas in Scala this is optional.

import org.bytedeco.javacv.CanvasFrame;
import org.bytedeco.opencv.opencv_core.Mat;

import javax.swing.*;
import java.awt.image.BufferedImage;

import static opencv_cookbook.OpenCVUtilsJava.toBufferedImage;
import static org.bytedeco.opencv.global.opencv_core.BORDER_DEFAULT;
import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;
import static org.bytedeco.opencv.global.opencv_imgproc.Laplacian;

public class MyFirstOpenCVAppInJava {

    public static void main(String[] args) {

        // Read an image.
        final Mat src = imread("data/boldt.jpg");
        display(src, "Input");

        // Apply Laplacian filter
        final Mat dest = new Mat();
        Laplacian(src, dest, src.depth(), 1, 3, 0, BORDER_DEFAULT);
        display(dest, "Laplacian");
    }

    //---------------------------------------------------------------------------

    static void display(Mat image, String caption) {
        // Create image window named "My Image".
        final CanvasFrame canvas = new CanvasFrame(caption, 1.0);

        // Request closing of the application when the image window is closed.
        canvas.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);

        // Convert from OpenCV Mat to Java Buffered image for display
        final BufferedImage bi = toBufferedImage(image);

        // Show image on window.
        canvas.showImage(bi);
    }
}

OpenCV documentation

If you are looking for a specific OpenCV operation, please use the OpenCV documentation (OpenCV documentation index); the quick search box is particularly useful. The documentation describes the C/C++ OpenCV API.

How to use the JavaCV examples

The OpenCV Cookbook sample project was created as a companion to Robert Laganière's book "OpenCV Computer Vision Application Programming Cookbook, 2nd Edition". The recommended approach is to read the Cookbook and refer to the JavaCV examples when you have questions about how to convert the Cookbook's C++ code to JavaCV. The book explains how the algorithms work; the JavaCV examples only provide very brief comments related to the details of the JavaCV API.

The easiest way to use the JavaCV examples is to browse the code online, located under src/main. You can also download it to your computer using Git or as a ZIP file.

With minimal setup you can easily execute the examples on your own computer. This is one of the benefits of JavaCV: it provides all the binaries needed to run OpenCV on various platforms. The setup is explained in the Chapter 1 README.

Organization of sample code

The code is organized into packages that correspond to the chapters of the first edition of the Cookbook, for example opencv_cookbook.chapter8; the layout of the second edition is very similar. Individual examples roughly correspond to the sections within each chapter of the book.

Chapter 1 describes the IDE setup for running these examples, gives a basic example of loading and displaying images, and a basic GUI example of performing basic image processing.

Example list

Chapter 1: Working with Images

  • Ex1MyFirstOpenCVApp - loads an image and displays it in a window (CanvasFrame)

  • Ex2MyFirstGUIFXApp - Simple GUI application built using ScalaFX (JavaFX wrapper). The application has two buttons "Open Image" and "Process" on the left side. The open image is shown in the center. When the Process button is pressed, the image is flipped over and its red and blue channels are swapped. For the Swing version, see Ex2MyFirstGUIApp.

  • Ex2MyFirstGUIApp - Simple GUI application built with Scala Swing. The application has two buttons "Open Image" and "Process" on the left side. The open image is shown in the center. When the Process button is pressed, the image is flipped over and its red and blue channels are swapped. For JavaFX version, see Ex2MyFirstGUIFXApp.

  • Ex3LoadAndSave - Read, save, display and draw images.

Chapter 2: Manipulating Pixels

  • Ex1Salt - Sets individual, randomly selected, pixels to a fixed value. Use ImageJ's ImageProcessor to access pixels.

  • Ex2ColorReduce - Reduces the colors in an image by modifying the color values of all bands in the same way.

  • Ex3Sharpen - Sharpens an image using kernel convolution: filter2D().

  • Ex4BlendImages - Blends two images using weighted addition: cvAddWeighted().

  • Ex5ROILogo - Pastes a small image into a larger image using a region of interest: IplROI and cvCopy().

Chapter 3: Processing images with classes

  • Ex1ColorDetector - Compares RGB colors to the target color, colors similar to the target color are assigned white in the output image, other pixels are set to black.

  • Ex2ColorDetectorSimpleApplication - Same processing as the first example, but demonstrates a simple UI.

  • Ex3ColorDetectorMVCApplication - Same processing as the first example, but demonstrates a more elaborate UI.

  • Ex4ConvertingColorSpaces - Similar to the first example, but the color distance is computed in the L*a*b* color space. Illustrates the use of the cvtColor function.

Chapter 4: Calculating pixels using histograms

  • Ex1ComputeHistogram - Computes a histogram using the utility class Histogram1D and prints the values to the screen.

  • Ex2ComputeHistogramGraph - Displays a graph of a histogram created using the utility class Histogram1D.

  • Ex3Threshold - Separates pixels in an image into foreground and background using the OpenCV threshold() method.

  • Ex4InvertLut - Creates an inverted image by inverting its lookup table.

  • Ex5EqualizeHistogram - Enhance images using histogram equalization.

  • Ex6ContentDetectionGrayscale - Uses the histogram of a region in a grayscale image as a "template" to detect similar pixels across the entire image. Illustrates the use of the cvCalcBackProject() method.

  • Ex7ContentDetectionColor - Uses the histogram of a region in a color image as a "template" to detect similar pixels across the entire image. Relies on the helper classes ColorHistogram and ContentFinder.

  • Ex8MeanShiftDetector - Uses the histogram of a region in a color image as a "template", and the mean-shift algorithm to find the best matching location of that "template" in another image. Illustrates the use of the cvMeanShift() method.

  • Computes an image similarity measure using the helper class ImageComparator.

  • Histogram1D - Helper class for histogram and lookup-table operations, corresponding to the sample code of the C++ class Histogram1D in the OpenCV Cookbook. Illustrates the OpenCV methods cvLUT(), cvEqualizeHist(), cvCreateHist(), cvCalcHist(), cvQueryHistValue_1D() and cvReleaseHist().

  • ColorHistogram - Helper class that simplifies the use of the cvCalcHist() method for color images.

  • ContentFinder - Helper class for template matching using the cvCalcBackProject() method.

  • ImageComparator - Helper class for calculating image similarity using cvCompareHist().

Chapter 5: Transforming images using morphological operations

  • Erosion and Dilation - Morphological erosion and dilation: cvErode() and cvDilate().

  • Ex2OpeningAndClosing - Morphology opening and closing: cvMorphologyEx().

  • Ex3EdgesAndCorners - Detect edges and corners using morphological filters.

  • Ex4WatershedSegmentation - Image segmentation using watershed algorithm.

  • Ex5GrabCut - Use grabCut() to separate objects from the background.

  • MorphoFeatures - Equivalent to the C++ class of the same name, containing methods for morphological corner detection.

  • WatershedSegmenter - Helper class for the section "Segmenting images using watersheds".

Chapter 6: Filtering Images

  • Ex1LowPassFilter - Blur and Gaussian filter.

  • Ex2MedianFilter - Remove noise with a median filter.

  • Ex3DirectionalFilters - Use Sobel edge detection filters.

  • Ex4Laplacian - Edge detection using a Laplacian filter.

  • LaplacianZC - Computes the Laplacian and its zero-crossings; used by Ex4Laplacian.

Chapter 7: Extracting lines, contours and components

  • Ex1CannyOperator - Detects contours with the Canny operator.

  • Ex2HoughLines - Detect lines using standard Hough transform methods.

  • Ex3HoughLineSegments - Detect line segments using the probabilistic Hough transform method.

  • Ex4HoughCircles - Detect circles using the Hough transform method.

  • Ex5ExtractContours - Extract contours from binary images using connected components.

  • Ex6ShapeDescriptors - Compute various shape descriptors: bounding box, enclosing circle, approximate polygon, convex hull, centroid.

  • LineFinder - A helper class for detecting line segments using the probabilistic Hough transform method, for use with Ex3HoughLineSegments.

Chapter 8: Discover points of interest

  • Ex1HarrisCornerMap - Computes the Harris corner intensity image.

  • Ex2HarrisCornerDetector - Uses Harris corner intensity images to detect well-positioned corners, replacing several closely located detections (blurring) with one corner. Use the HarrisDetector helper class.

  • Ex3GoodFeaturesToTrack - Example of using the GoodFeatures tracking detector.

  • Ex4FAST - Example using the FAST detector.

  • Ex5SURF - Example using SURF detector.

  • Ex6SIFT - Example using SIFT detector.

  • HarrisDetector - Helper class that detects and locates corners in the Harris corner intensity image; several closely located detections are replaced by a single one.

Chapter 9: Describing and matching points of interest

  • Ex2TemplateMatching - Finds the best match between a small patch from the first image (template) and the second image.

  • Ex7DescribingSURF - Computes SURF features, extracts their descriptors, and finds the best matching descriptors between two images of the same object.

Chapter 10: Estimating projection relationships in images

  • Ex1FindChessboardCorners - Demonstrates one of the camera calibration steps, detecting checkerboard patterns in a calibration board.

  • Ex2CalibrateCamera - Camera calibration example showing how to correct for geometric distortions that may be introduced by optics. Use the CameraCalibrator helper class.

  • Ex3ComputeFundamentalMatrix - Computes a fundamental matrix describing the projective relationship between two images using detected and matched features between the two images.

  • Ex4MatchingUsingSampleConsensus - Illustrates the use of the RANSAC (random sample consensus) strategy. Most of the computation is done by the RobustMatcher helper class.

  • Ex5Homography - Another way to describe the relationship between points in two images, using homography. An example showing how to stitch together two images of partial views of an object. Most calculations are done by the RobustMatcher helper class.

  • CameraCalibrator - Helper class that implements the camera calibration algorithm.

  • RobustMatcher - Implements RANSAC-based matching; used by Ex4MatchingUsingSampleConsensus and Ex5Homography.

Chapter 11: Processing Video Sequences

  • Ex1ReadVideoSequence - Reads and displays video.

  • Ex2ProcessVideoFrames - Process frames from a video file using a Canny edge detector; display the output video on the screen. Use the helper class VideoProcessor.

  • Ex3WriteVideoSequence - Process frames from a video file using a Canny edge detector; writes the output to the video file. Use the helper class VideoProcessor.

  • Ex4TrackingFeatures - Track moving objects in video, marking tracking points in the video displayed on the screen. Most of the implementation is done in the FeatureTracker helper class.

  • Ex5ForegroundSegmenter - Detect moving objects in videos via background estimation. The background is modeled using the "simple" moving average method, implemented in the helper class "BGFBSegmenter".

  • Ex6MOGMotionDetector - A more sophisticated motion detector that uses a mixture of Gaussians method to model the background.

  • BGFBSegmenter - Separates the "static" background from the "moving" foreground by modeling the background with a moving average. Used by Ex5ForegroundSegmenter.

  • FeatureTracker - Tracks moving features using an optical flow algorithm; used by Ex4TrackingFeatures.

  • VideoProcessor - Helper class for processing video files: it loads individual frames and applies processing to each. Used by Ex2ProcessVideoFrames, Ex3WriteVideoSequence, Ex4TrackingFeatures and Ex5ForegroundSegmenter.

Chapter 15: OpenCV Advanced Features

  • Ex1FaceDetection - Detect faces in images using pre-trained deep learning neural network models.

Other common technologies

  • OpenCVUtils - Read and write image files, display images, draw features of images, convert between OpenCV images and data representations.

Why Scala?

Scala was chosen because it is more expressive than Java: you can achieve the same result with less code, and less boilerplate makes the examples easier to read and understand. Compiled Scala code is fast, comparable to Java and C++.

Unlike Java or C++, Scala supports scripting: code can be executed without explicit compilation. Scala also has a console, called the REPL, where single lines of code can be entered and executed on the spot. These two features make prototyping OpenCV-based programs easier in Scala than in Java. Last but not least, IDE support for Scala has reached a level of maturity that allows easy creation, modification, and execution of Scala code. In particular, the Scala plugin for JetBrains IDEA works very well. There is also Scala support for Eclipse and NetBeans.

Learning resources

https://github.com/bytedeco/javacv

Welcome to OpenCV Java Tutorials documentation! — OpenCV Java Tutorials 1.0 documentation

GitHub - bytedeco/javacv-examples: Examples of using JavaCV / OpenCV library on Java Virtual Machine

If you have any questions about classes, APIs and so on, you can read the OpenCV and FFmpeg documentation for the corresponding methods (JavaCV is just a wrapper around them), or ask ChatGPT directly. It is simple and efficient!

Simple image processing code example

The image processing API is mainly concentrated in the opencv-4.6.0-1.5.8.jar package. This jar contains two top-level namespaces, "bytedeco.opencv" and "opencv", with many classes and static methods of the same name under both; prefer the classes and methods under the "bytedeco.opencv" package.
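
For reference, here is a minimal sketch of the imports that the helper methods below assume. The core types come from org.bytedeco.opencv.opencv_core and the static methods from the org.bytedeco.opencv.global.* classes; the exact set may vary slightly with your JavaCV version.

// Core types used by the drawing helpers
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.Point;
import org.bytedeco.opencv.opencv_core.Rect;
import org.bytedeco.opencv.opencv_core.Scalar;
import org.bytedeco.opencv.opencv_core.Size;

// Static methods for image I/O, drawing and resizing
import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;
import static org.bytedeco.opencv.global.opencv_imgcodecs.imwrite;
import static org.bytedeco.opencv.global.opencv_imgproc.line;
import static org.bytedeco.opencv.global.opencv_imgproc.circle;
import static org.bytedeco.opencv.global.opencv_imgproc.putText;
import static org.bytedeco.opencv.global.opencv_imgproc.resize;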

1. Open and save a picture

// Open an image
Mat image = imread("D:\\2projects_database\\javacvdemo\\src\\main\\java\\com\\example\\img\\future_city.jpg");
// Save the image
imwrite("D:\\2projects_database\\javacvdemo\\src\\main\\java\\com\\example\\img\\future_city_add_text.jpg", image);
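
To preview the result in a window instead of just saving it, the CanvasFrame and OpenCVFrameConverter classes shown in the earlier examples can be reused. A minimal sketch, assuming the Mat loaded above:

// Display the Mat in a window (classes from org.bytedeco.javacv)
CanvasFrame canvas = new CanvasFrame("Preview", 1);
canvas.setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
OpenCVFrameConverter.ToMat converter = new OpenCVFrameConverter.ToMat();
canvas.showImage(converter.convert(image));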

2. Draw a straight line

/**
 * Draw a straight line
 * @param image the image
 * @param x1 x coordinate of the start point
 * @param y1 y coordinate of the start point
 * @param x2 x coordinate of the end point
 * @param y2 y coordinate of the end point
 * @param color line color
 */
public static void drawLine(Mat image, int x1, int y1, int x2, int y2, Scalar color) {
    Point pt1 = new Point(x1, y1);
    Point pt2 = new Point(x2, y2);
    line(image, pt1, pt2, color);
}
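
A hypothetical call to the helper above: draw a red diagonal line across the image and save it. OpenCV stores channels in BGR order, so red is (0, 0, 255); the file paths are placeholders.

Mat image = imread("input.jpg");                         // placeholder path
Scalar red = new Scalar(0, 0, 255, 0);                   // BGR(A): red
drawLine(image, 0, 0, image.cols() - 1, image.rows() - 1, red);
imwrite("line_output.jpg", image);                       // placeholder path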

3. Draw a circle

/**
 * Draw a circle
 * @param image the image
 * @param x x coordinate of the center
 * @param y y coordinate of the center
 * @param radius radius
 * @param color line color
 * @param thickness line thickness
 * @param lineType line type
 * @param shift number of fractional bits in the coordinate values
 */
public static void drawCircle(Mat image, int x, int y, int radius, Scalar color, int thickness, int lineType, int shift) {
    Point center = new Point(x, y);
    circle(image, center, radius, color, thickness, lineType, shift);
}

4. Draw a polyline

/**
 * Draw a polyline
 * @param image the image
 * @param points array of vertices
 * @param color line color
 */
public static void drawCurve(Mat image, Point[] points, Scalar color) {
    for (int i = 0; i < points.length - 1; i++) {
        line(image, points[i], points[i+1], color);
    }
}

5. Add text watermark

/**
 * Add a text watermark
 * @param image the image
 * @param text the text content
 * @param position text position
 * @param fontFace font type
 * @param fontScale font scale
 * @param color font color
 */
public static void addTextWatermark(Mat image, String text, Point position, int fontFace, double fontScale, Scalar color) {
    putText(image, text, position, fontFace, fontScale, color);
}
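
A hypothetical call to the helper above. It assumes the FONT_HERSHEY_SIMPLEX constant (value 0) is available from the opencv_imgproc global class, which is where the presets expose the drawing enums; the paths are placeholders.

// Draw white text near the bottom-left corner of the image
Mat image = imread("input.jpg");                         // placeholder path
addTextWatermark(image, "JavaCV demo", new Point(20, image.rows() - 20),
        org.bytedeco.opencv.global.opencv_imgproc.FONT_HERSHEY_SIMPLEX,   // assumed constant, value 0
        1.5, new Scalar(255, 255, 255, 0));
imwrite("text_watermark.jpg", image);                    // placeholder path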

6. Crop and partially enlarge

/**
 * Crop the image and enlarge the cropped region
 * @param image the image
 * @param x x coordinate of the top-left corner of the crop
 * @param y y coordinate of the top-left corner of the crop
 * @param width crop width
 * @param height crop height
 * @param zoomFactor zoom factor
 */
public static void cropAndZoomImage(Mat image, int x, int y, int width, int height, int zoomFactor) {
    Rect roi = new Rect(x, y, width, height);
    Mat croppedImage = new Mat(image, roi);
    resize(croppedImage, croppedImage, new Size(width*zoomFactor, height*zoomFactor));
    // Save the cropped image
    imwrite("D:\\2projects_database\\javacvdemo\\src\\main\\java\\com\\example\\img\\cropAndZoomImage.png", croppedImage);
    System.out.println("resize rows:" + croppedImage.rows());
    System.out.println("resize cols:" + croppedImage.cols());
}
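
A hypothetical call: cut a 200x150 region whose top-left corner is at (100, 50) and enlarge it 2x. Note that the method above saves its result to a hard-coded path.

Mat image = imread("input.jpg");                         // placeholder path
cropAndZoomImage(image, 100, 50, 200, 150, 2);           // 200x150 region at (100, 50), enlarged 2x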

7. Face detection

package com.example.img.code;

import org.bytedeco.opencv.opencv_core.*;
import org.bytedeco.opencv.opencv_objdetect.CascadeClassifier;
import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;
import static org.bytedeco.opencv.global.opencv_imgcodecs.imwrite;
import static org.bytedeco.opencv.global.opencv_imgproc.LINE_8;
import static org.bytedeco.opencv.global.opencv_imgproc.rectangle;

/**
 * @Author yrz
 * @create 2023/5/12 11:22
 * @Description Face detection example
 */

public class FaceDetector {
    public static void main(String[] args) {
        // Load the image
        Mat image = imread("D:/2projects_database/javacvdemo/src/main/java/com/example/img/face.jpg");

        // Load the face cascade classifier
        CascadeClassifier faceCascade = new CascadeClassifier("D:/2projects_database/javacvdemo/src/main/java/com/example/img/haarcascade_frontalface_alt.xml");

        // Detect faces in the image
        RectVector faceDetections = new RectVector();
        faceCascade.detectMultiScale(image, faceDetections);

        // Draw a rectangle around each detected face
        for (Rect rect : faceDetections.get()) {
            rectangle(image, new Point(rect.x(), rect.y()), new Point(rect.x() + rect.width(), rect.y() + rect.height()),
                    new Scalar(0, 255, 0, 0), 2, LINE_8, 0);
        }
        // Save the image with the detected faces
        imwrite("D:/2projects_database/javacvdemo/src/main/java/com/example/img/face_output.jpg", image);
    }
}

Simple video processing code example

The video processing API is mainly concentrated in the javacv-1.5.8.jar package.
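
As with the image examples, the snippets below omit their imports. A minimal sketch of what they assume (package names as of JavaCV 1.5.x), in addition to the usual javax.swing, javax.imageio, java.awt and java.time imports:

import org.bytedeco.ffmpeg.global.avcodec;        // AV_CODEC_ID_H264 and other codec constants
import org.bytedeco.javacv.CanvasFrame;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.Java2DFrameConverter;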

1. Open the video file

/**
 * Open a video file and display it frame by frame
 * @param filename path to the video file
 */
public static void readDisplayVideo(String filename){
    FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(filename);
    // Open the video file
    try {
        grabber.start();
    } catch (FFmpegFrameGrabber.Exception e) {
        e.printStackTrace();
    }
    // Prepare window to display frames
    CanvasFrame canvasFrame = new CanvasFrame("Extracted Frame", 1);
    canvasFrame.setCanvasSize(grabber.getImageWidth(), grabber.getImageHeight());
    // Exit the example when the canvas frame is closed
    canvasFrame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
    long delay = Math.round(1000d / grabber.getFrameRate());
    // Read frame by frame, stop early if the display window is closed
    Frame frame;
    try {
        while ((frame = grabber.grab()) != null && canvasFrame.isVisible()) {
            // Capture and show the frame
            canvasFrame.showImage(frame);
            // Delay
            Thread.sleep(delay);
        }
        // Close the video file
        grabber.release();
    } catch (FFmpegFrameGrabber.Exception e) {
        e.printStackTrace();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

2. Capture the frames at the specified time of the video and save them as images

/**
 * Grab the frame at the given time in the video and save it as an image
 * @param videoPath path to the video file
 * @param imagePath path for the output image
 * @param timeInSeconds timestamp, in seconds
 */
public static void grabFrameAtTime(String videoPath, String imagePath, long timeInSeconds) {
    FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(videoPath);
    Java2DFrameConverter converter = new Java2DFrameConverter();
    try {
        grabber.start();
        // setTimestamp() expects microseconds, so convert from seconds
        grabber.setTimestamp(timeInSeconds * 1_000_000L);
        Frame grab = grabber.grabImage();
        // Convert the grabbed frame to a BufferedImage (the no-arg overload uses gamma 1.0)
        BufferedImage bufferedImage = converter.getBufferedImage(grab);
        saveImage(bufferedImage, imagePath);
        grabber.stop();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
public static void saveImage(BufferedImage image, String imagePath) {
    try {
        ImageIO.write(image, "jpg", new File(imagePath));
    } catch (IOException e) {
        e.printStackTrace();
    }
}
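
A hypothetical call, grabbing the frame at the 10-second mark (the paths are placeholders). Note that FFmpegFrameGrabber.setTimestamp() expects microseconds, which is why the method above converts the seconds argument.

grabFrameAtTime("input.mp4", "frame_at_10s.jpg", 10);    // placeholder paths, 10 seconds in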

3. Record screen

/**
 * Record the screen
 * @param filename output file name
 * @param seconds recording duration, in seconds
 */
public static void recordScreen(String filename, int seconds) {
    final int FRAME_RATE = 30;
    final Dimension SCREEN_SIZE = Toolkit.getDefaultToolkit().getScreenSize();
    // Create the screen recorder and set its properties
    FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(filename, SCREEN_SIZE.width, SCREEN_SIZE.height);
    recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
    recorder.setFormat("mp4");
    recorder.setFrameRate(FRAME_RATE);
    Java2DFrameConverter converter = new Java2DFrameConverter();
    try {
        // Initialize the recorder
        recorder.start();
        Robot robot = new Robot();
        BufferedImage screenShot;

        // Current system time
        LocalDateTime now = LocalDateTime.now();
        System.out.println(now);
        // Stop time: 'seconds' seconds from now
        LocalDateTime plus = now.plus(seconds, ChronoUnit.SECONDS);
        System.out.println(plus);

        // Start recording
        while (true) {
            // Take a screenshot and feed it to the recorder
            screenShot = robot.createScreenCapture(new Rectangle(SCREEN_SIZE));
            recorder.record(converter.getFrame(screenShot));
            // Check whether the stop time has been reached
            LocalDateTime time = LocalDateTime.now();
            if(plus.isBefore(time)){
                System.out.println(time);
                break;
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // Close the recorder
        try {
            recorder.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
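
A hypothetical call, recording 30 seconds of the primary screen to an MP4 file (the path is a placeholder):

recordScreen("screen_capture.mp4", 30);                  // placeholder path, 30-second recording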

4. Add watermark to video

/**
     * Add a watermark to a video
     * @param filename input video file
     * @param outname output video file
     * @param picName watermark image file
     * @throws Exception
     */
    public static void addWatermark(String filename, String outname, String picName) throws Exception {
        // Load the video
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(filename);
        grabber.start();
        // Create a new video recorder
        FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(outname, grabber.getImageWidth(), grabber.getImageHeight(), grabber.getAudioChannels());
        recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
        recorder.setFormat("mp4");
        recorder.setFrameRate(grabber.getFrameRate());
        // Start the recorder
        recorder.start();
        // Create a new Java2DFrameConverter
        Java2DFrameConverter converter = new Java2DFrameConverter();
        // Create a new BufferedImage to hold the watermark image
        // Image watermark loaded from a file
        BufferedImage watermarkImage = ImageIO.read(new File(picName));
        // Alternatively, a custom text watermark:
//        BufferedImage watermarkImage = createWatermarkImage("Hello, world!", new Font("Arial", Font.BOLD, 50),
//                Color.WHITE, new Color(0, 0, 0, 0));
        // Loop through each frame in the video
        Frame frame;
        while ((frame = grabber.grabFrame()) != null) {
            // Convert the frame to a BufferedImage
            BufferedImage image = converter.getBufferedImage(frame);
            if(image == null){
                continue;
            }
            // Create a new Graphics2D object to draw the watermark
            Graphics2D g2d = image.createGraphics();
            // Draw the watermark on the image
            g2d.drawImage(watermarkImage, 0, 0, null);
            // Dispose of the Graphics2D object
            g2d.dispose();
            // Convert the BufferedImage back to a Frame and write it to the output video
            recorder.record(converter.convert(image));
        }
        // Stop the grabber and recorder
        grabber.stop();
        grabber.release();
        recorder.stop();
        recorder.release();
    }

    private static BufferedImage createWatermarkImage(String text, Font font, Color foreground, Color background) {
        FontRenderContext frc = new FontRenderContext(null, true, true);
        TextLayout layout = new TextLayout(text, font, frc);
        Rectangle2D bounds = layout.getBounds();

        BufferedImage image = new BufferedImage(
                (int) bounds.getWidth(), (int) bounds.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2d = image.createGraphics();

        g2d.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING, RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
        g2d.setRenderingHint(RenderingHints.KEY_FRACTIONALMETRICS, RenderingHints.VALUE_FRACTIONALMETRICS_ON);

        if (background != null) {
            g2d.setBackground(background);
            g2d.clearRect(0, 0, image.getWidth(), image.getHeight());
        }

        g2d.setFont(font);
        g2d.setColor(foreground);

        layout.draw(g2d, 0, -(float) bounds.getY());

        if (background != null) {
            ColorConvertOp colorConvert = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_sRGB),
                    null);
            colorConvert.filter(image, image);
        }

        g2d.dispose();

        return image;
    }
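
A hypothetical call, overlaying a PNG logo on a video (the paths are placeholders). To use a text watermark instead, replace the ImageIO.read(...) line in addWatermark with a call to createWatermarkImage(...), as in the commented-out code above.

addWatermark("input.mp4", "output_watermarked.mp4", "logo.png");    // placeholder paths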

Source: blog.csdn.net/qq_27890899/article/details/130968922