Foreword
1.1 Introduction to Video Recognition Scenarios
In home security monitoring, one of the most important application scenarios is using real-time video motion detection to discover people, pets, packages, and other objects appearing in the monitored environment, and to notify users of the detection results in real time wherever they are. However, implementing this scenario faces two main challenges. First, camera-based video detection produces a large number of false alarms caused by events users do not care about, such as wind, rain, and passing cars, which seriously degrades the user experience. Second, the implementation involves many technical domains and considerable complexity, such as device-side event detection and triggering, video codec processing, video storage, and machine vision, and so requires a team with strong engineering and domain expertise.
Amazon Rekognition Streaming Video Events, the recently launched feature introduced in this article, addresses these challenges well. In the following sections we describe the solution's implementation principle and integration in detail, analyze how it differs from other implementation approaches, and, based on this analysis, offer suggestions for selecting a technical solution.
1.2 Introduction to Amazon Kinesis Video Streams
Amazon Kinesis Video Streams (KVS) is a fully managed Amazon service that you can use to capture video from millions of sources, including smartphones, security cameras, webcams, vehicle cameras, and drones. It transfers massive amounts of real-time video data to the Amazon cloud, where you can build applications for real-time or batch-oriented video processing. Advantages of Amazon KVS include:
- It provides real-time video transmission services for massive numbers of devices.
- It integrates with managed services such as Amazon Rekognition, making it very convenient to build intelligent vision applications.
- Amazon KVS HTTP Live Streaming (HLS) lets you easily stream live and recorded media from Amazon KVS to your browser or mobile application.
- Amazon KVS enables you to control access to streams using IAM and secures data at rest and in motion. It is fully managed, with no infrastructure to manage.
- Amazon KVS uses Amazon S3 as the underlying data store and can quickly search and retrieve video clips based on timestamps generated by devices and services.
Amazon KVS consists of three components: Producer, Stream, and Consumer. It provides the Producer SDK, the KVS Stream API, and the Consumer SDK respectively, making it easy for developers to integrate with each component.
1.3 Introduction to Amazon Rekognition
Amazon Rekognition provides pre-trained and customizable computer vision (CV) capabilities to extract information and gain insights from your images and videos. It offers high-precision face analysis and detection, face comparison and face search, label and text detection, and other functions. Built on proven, highly scalable deep learning technology developed by Amazon's computer vision scientists, Amazon Rekognition can analyze billions of images and videos every day and requires no machine learning expertise to use. It includes an easy-to-use API for quickly analyzing any image or video file stored in Amazon S3.
Amazon Rekognition Streaming Video Events Solution Analysis
Amazon Rekognition Streaming Video Events is a newly launched feature. When the device detects a specific event in the monitored environment, it pushes its video stream to KVS in the cloud. With the help of Amazon Rekognition Video, the stream is analyzed for the labels you want to detect, the detection results are saved to Amazon S3, and the results are sent to SNS. Currently it supports detecting person, package, and pet labels in real-time video.
1. Architecture Description
The overall architecture of the Amazon Rekognition Streaming Video Events solution is divided into three parts: the device side, the Amazon cloud, and the user application side.
- Device side
Usually this is a camera device with automatic video detection capability. By integrating the Amazon KVS SDK as an Amazon KVS Producer, it pushes its video stream to the Amazon KVS Stream in the cloud and triggers the video detection API provided by Amazon Rekognition on demand.
- Amazon cloud
Amazon KVS Stream: provides persistent storage of device video, video retrieval, online viewing, and video download, and supplies the video data that Rekognition analyzes.
Amazon Rekognition: provides automatic video and image analysis, including fully managed capabilities such as content moderation, face detection, face comparison, and label detection, and integrates easily with services such as Amazon KVS, Amazon S3, and Amazon SNS.
Amazon S3: stores the video detection results.
Amazon SNS: fully managed pub/sub messaging with SMS, email, and mobile push notifications; in this scenario it provides notification delivery.
The overall architecture diagram of the solution is as follows:
2. Execution Process Description
As shown in the figure below, the execution process of Amazon Rekognition Streaming Video Events is divided into a preset phase and an event processing phase.
- Preset phase
Step 1: Create an S3 bucket and an SNS topic for storing and fanning out the video detection results, respectively.
Step 2: When the device is registered, create the corresponding KVS Stream and Rekognition stream processor, and bind the stream processor to the S3 bucket and SNS topic from Step 1.
- Event processing phase
Step 1: The IPC device detects an event.
Step 2: The IPC device calls the PutMedia API of the Amazon KVS Producer SDK to stream the video to the KVS stream, and at the same time calls the API to trigger the Rekognition stream processor to analyze the video data.
Step 3: The Amazon Rekognition stream processor analyzes the video based on its startup parameters, including the start and stop conditions for processing the video, the processing duration, and other information.
Step 4: The Amazon Rekognition stream processor automatically saves the video analysis results to Amazon S3 and triggers the SNS topic.
Step 5: The user retrieves the video information through the application or receives a notification that drives other business processes.
Solution Integration Verification
Amazon Rekognition Streaming Video Events is currently available in the following regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), and Europe (Ireland); it will roll out to other regions over time. For this experiment, we chose Europe (Ireland).
1. Experiment Preparation
The Amazon cloud technology resources used in this experiment are all created in the Europe (Ireland) region, and you need to prepare an Amazon cloud technology account in advance.
Create an admin user in Amazon IAM, attach the arn:aws:iam::aws:policy/AdministratorAccess policy, and use this user for subsequent resource creation.
2. Simulate an Amazon KVS Producer and Push Video to an Amazon KVS Stream
The experimental steps in this section follow the Amazon KVS Workshop; only a brief description is given below. For the detailed procedure, refer to the "Collecting and Storing Videos Using Amazon Kinesis Video Streams with Amazon Cloud9" experiment at https://catalog.us-east-1.prod.workshops.aws/workshops/b95b9381-baf0-4bef-ba31-63817d54c2a6/en-US/.
Create Amazon Cloud9 as the Amazon KVS Producer
- For how to create Amazon Cloud9, refer to the link below:
https://catalog.us-east-1.prod.workshops.aws/workshops/b95b9381-baf0-4bef-ba31-63817d54c2a6/en-US/lab-1/cloud9/step-1-b
Build and Start the Amazon KVS Producer Application in Amazon Cloud9
- Execute the following commands to download the Amazon KVS Producer SDK
cd
git clone --recursive https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp.git
- Execute the following commands to compile the Amazon KVS Producer SDK
mkdir -p ~/amazon-kinesis-video-streams-producer-sdk-cpp/build
cd ~/amazon-kinesis-video-streams-producer-sdk-cpp/build
cmake -DBUILD_GSTREAMER_PLUGIN=ON ..
make
Create an Amazon KVS video stream
- Create an Amazon KVS Stream; for how to do this, see:
https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/gs-createstream.html
Record the ARN of the Amazon KVS Stream, which will be used later in the experiment.
arn:aws:kinesisvideo:eu-west-1:*AccountID*:stream/* stream-name */16*****5
- Set environment variables in Amazon Cloud9 to prepare for uploading video to the Amazon KVS Stream later
Execute the following command in the Amazon Cloud9 terminal to obtain temporary IAM credentials
aws sts get-session-token
Execute the following command in Amazon Cloud9 Terminal to set environment variables
export AWS_DEFAULT_REGION="The region you use (e.g. ap-northeast-1, us-west-2)"
export AWS_ACCESS_KEY_ID="The AccessKeyId value of the result above"
export AWS_SECRET_ACCESS_KEY="The SecretAccessKey value of the result above"
export AWS_SESSION_TOKEN="The SessionToken value of the result above"
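If the device-side or backend code needs to set these variables programmatically, the mapping from the get-session-token response to the export commands above can be sketched in Python. This is an illustrative helper (the function name and the credential values are placeholders, not real credentials):

```python
import json

# Sample shape of the `aws sts get-session-token` output
# (placeholder values, not real credentials).
sample = """
{
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "secretExample",
        "SessionToken": "tokenExample",
        "Expiration": "2022-05-05T21:00:00+00:00"
    }
}
"""

def to_export_lines(sts_json, region="eu-west-1"):
    """Map the get-session-token response onto the export commands above."""
    creds = json.loads(sts_json)["Credentials"]
    return [
        f'export AWS_DEFAULT_REGION="{region}"',
        f'export AWS_ACCESS_KEY_ID="{creds["AccessKeyId"]}"',
        f'export AWS_SECRET_ACCESS_KEY="{creds["SecretAccessKey"]}"',
        f'export AWS_SESSION_TOKEN="{creds["SessionToken"]}"',
    ]

print("\n".join(to_export_lines(sample)))
```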
Upload local video to Amazon KVS Stream
- Set environment variables for running the Amazon KVS application
Execute the following command in Amazon Cloud9 Terminal to set environment variables
export GST_PLUGIN_PATH=$HOME/amazon-kinesis-video-streams-producer-sdk-cpp/build
export LD_LIBRARY_PATH=$HOME/amazon-kinesis-video-streams-producer-sdk-cpp/open-source/local/lib
- Execute the following commands in the Amazon Cloud9 terminal to start the Amazon KVS application
cd ~/amazon-kinesis-video-streams-producer-sdk-cpp/build
while true; do ./kvs_gstreamer_file_uploader_sample kvs-workshop-stream ~/sample.mp4 $(date +%s) audio-video && sleep 10s; done
- Open the Amazon KVS stream in the Amazon Console and click Media Playback to check whether the video was uploaded successfully
As shown in the figure below, the video is successfully displayed, indicating that the video stored on Amazon Cloud9 has been successfully uploaded to Amazon KVS Stream.
3. Prepare the Resources Required for Amazon Rekognition Streaming Video Events
Create an Amazon S3 Bucket
Create an Amazon S3 bucket named video-event-analytics, create a folder named video-result in the bucket, and keep the default configuration for the other parameters. After creation succeeds, the bucket information is as follows.
As shown in the figure below, record the Amazon S3 bucket ARN, which will be needed in subsequent experiments.
Create an SNS Topic and Configure an Email Subscription
Create a topic with type Standard and name Video-Event-SNS.
- Record the ARN of the topic, which will be used in subsequent experiments
- Create an SNS topic subscription
For Topic ARN, select the ARN of the SNS topic created above, and select Email for Protocol. In a real application, choose the protocol type that matches the subscription method you need.
Create an Amazon Rekognition Service Role
Create an Amazon Rekognition service role to grant the Amazon Rekognition stream processor permission to operate other services. For this experiment, it needs permission to operate Amazon S3 and Amazon SNS.
- As shown in the figure below, create an Amazon IAM policy that grants the necessary permissions to access Amazon KVS, SNS, and S3
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KVSPermissions",
            "Effect": "Allow",
            "Action": [
                "kinesisvideo:GetDataEndpoint",
                "kinesisvideo:GetMedia"
            ],
            "Resource": [
                "arn:aws:kinesisvideo:eu-west-1:your-accountid:stream/your-stream-name/*"
            ]
        },
        {
            "Sid": "SNSPermissions",
            "Effect": "Allow",
            "Action": [
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:sns:eu-west-1:your-accountid:video-event-sns"
            ]
        },
        {
            "Sid": "S3Permissions",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::your-s3-bucket-name/*"
            ]
        }
    ]
}
- Create an Amazon Rekognition service role as shown in the figure below and attach the policy created in the previous step
Select the trusted entity type as Amazon service, and select Rekognition as the supported service use case
- Attach the Amazon IAM policy created in the previous step to the Amazon Rekognition service role
Start the Amazon Rekognition Streaming Video Events Processing Flow
Upgrade the Amazon CLI on Amazon Cloud9 to the Latest Version
Amazon Rekognition Streaming Video Events is a new feature released at the end of April 2022. To create a Rekognition stream processor with the Amazon CLI and use this feature, you need a CLI version that includes the new Rekognition capability, so here we upgrade the Amazon CLI to the latest version.
- Execute the following commands to upgrade the Amazon CLI to the latest version (if your Amazon CLI is already the latest version, you can skip this step)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update
Create an Amazon Rekognition Stream Processor
- Prepare the JSON file for creating the Amazon Rekognition stream processor
The JSON parameter file for creating the stream processor is shown below; the KinesisVideoStream Arn, S3Destination Bucket, S3Destination KeyPrefix, and RoleArn values need to be changed to the resources created in your own experimental environment.
ConnectedHome is a new parameter introduced with this feature. Its Labels field currently supports "PERSON", "PET", "PACKAGE", and "ALL"; fill in the labels you want to detect. For details, refer to:
https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html
{
"DataSharingPreference": {
"OptIn":true
},
"Input": {
"KinesisVideoStream": {
//ARN of the KVS video stream source
"Arn": "your kvs stream arn"
}
},
"Name": "video_event_stream_processor",
"Output": {
"S3Destination": {
"Bucket": "video-event-analytics",
"KeyPrefix": "video-result"
}
},
"NotificationChannel": {
//SNS topic ARN for sending detection result notifications
"SNSTopicArn": "your sns topic arn"
},
//Role that the stream processor executes with
"RoleArn": "your streaming processor role",
"Settings": {
"ConnectedHome": {
"Labels": [
"PERSON"
],
"MinConfidence": 80
}
},
"RegionsOfInterest": [
{
"BoundingBox": {
"Top": 0.11,
"Left": 0.22,
"Width": 0.33,
"Height": 0.44
}
},
{
"Polygon": [
{
"X": 0.11,
"Y": 0.11
},
{
"X": 0.22,
"Y": 0.22
},
{
"X": 0.33,
"Y": 0.33
}
]
}
]
}
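The request body above can also be assembled in code before being written to createstreamprocessor.json, which is convenient when provisioning one stream processor per device. A minimal Python sketch (the helper name is ours; RegionsOfInterest is omitted for brevity and can be added the same way):

```python
import json

def create_stream_processor_input(kvs_arn, bucket, key_prefix, role_arn, sns_arn,
                                  labels=("PERSON",), min_confidence=80):
    """Assemble the same request body as createstreamprocessor.json
    (RegionsOfInterest omitted here; add it the same way if needed)."""
    return {
        "DataSharingPreference": {"OptIn": True},
        "Input": {"KinesisVideoStream": {"Arn": kvs_arn}},
        "Name": "video_event_stream_processor",
        "Output": {"S3Destination": {"Bucket": bucket, "KeyPrefix": key_prefix}},
        "NotificationChannel": {"SNSTopicArn": sns_arn},
        "RoleArn": role_arn,
        "Settings": {"ConnectedHome": {"Labels": list(labels),
                                       "MinConfidence": min_confidence}},
    }

body = create_stream_processor_input(
    "your kvs stream arn", "video-event-analytics", "video-result",
    "your streaming processor role", "your sns topic arn")

# Write the file consumed by:
#   aws rekognition create-stream-processor --cli-input-json file://createstreamprocessor.json
with open("createstreamprocessor.json", "w") as f:
    json.dump(body, f, indent=2)
```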
- Description of the main parameters of the stream processor request
The following are the main parameters required by Amazon Rekognition Streaming Video Events
- Save the JSON file to Amazon Cloud9, then execute the following command in the Amazon Cloud9 terminal to create the Amazon Rekognition stream processor
aws rekognition create-stream-processor --region eu-west-1 --cli-input-json file://createstreamprocessor.json
- Execute the following command to view the details of the Amazon Rekognition stream processor
aws rekognition describe-stream-processor --name video_event_stream_processor --profile sunny --region eu-west-1
The describe-stream-processor output is shown below; the stream processor is currently in the STOPPED state.
{
"Name": "video_event_stream_processor",
//StreamProcessorArn
"StreamProcessorArn": "your stream processor arn",
"Status": "STOPPED",
"CreationTimestamp": "2022-05-05T09:50:07.013000+00:00",
"LastUpdateTimestamp": "2022-05-05T09:50:07.013000+00:00",
"Input": {
//ARN of the KVS video stream source
"KinesisVideoStream": {
"Arn": "your kvs stream arn"
}
},
//S3 location where detection results are saved
"Output": {
"S3Destination": {
"Bucket": "your s3 Bucket arn",
"KeyPrefix": "your key prefix"
}
},
//Role that the stream processor executes with
"RoleArn": "your streaming processor role",
"Settings": {
"ConnectedHome": {
"Labels": [
"PERSON"
],
"MinConfidence": 80.0
}
},
//SNS topic ARN for sending detection result notifications
"NotificationChannel": {
"SNSTopicArn": "Your SNSTopicArn"
},
"RegionsOfInterest": [
{
"BoundingBox": {
"Width": 0.33000001311302185,
"Height": 0.4399999976158142,
"Left": 0.2199999988079071,
"Top": 0.10999999940395355
}
},
{
"Polygon": [
{
"X": 0.10999999940395355,
"Y": 0.10999999940395355
},
{
"X": 0.2199999988079071,
"Y": 0.2199999988079071
},
{
"X": 0.33000001311302185,
"Y": 0.33000001311302185
}
]
}
],
"DataSharingPreference": {
"OptIn": true
}
}
Start the Amazon Rekognition Stream Processor
- Prepare the JSON file for starting the Amazon Rekognition stream processor, as shown below
{
"Name": "video_event_stream_processor",
"StartSelector": {
//Set the start time for analyzing the KVS stream video
"KVSStreamStartSelector": {
"ProducerTimestamp": 1651702500
}
},
//Set the stop condition for analyzing the KVS stream video
"StopSelector": {
"MaxDurationInSeconds": 30
}
}
*Swipe left to see more
StartSelector
In addition to setting the start position of the video analysis through ProducerTimestamp (the time at which the producer generated the video), you can also set the start position through the Amazon KVS FragmentNumber (the video fragment number).
StopSelector
In addition to setting MaxDurationInSeconds (the maximum video detection duration) to stop detection and analysis, you can also use NoDetectionForDuration (stop detection if the specified labels are not detected within the set duration).
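For example, the device-side trigger can compute these selectors at event time and write the start request programmatically. A small sketch (the helper name is ours, not part of the Rekognition API):

```python
import json
import time

def build_start_request(name, start_epoch_seconds, max_duration=30):
    """Mirror startstreamprocessor.json: analyze up to max_duration seconds
    of video beginning at the given producer timestamp."""
    return {
        "Name": name,
        "StartSelector": {
            "KVSStreamStartSelector": {"ProducerTimestamp": start_epoch_seconds}
        },
        "StopSelector": {"MaxDurationInSeconds": max_duration},
    }

# e.g. start analyzing from "now" when the device reports motion:
req = build_start_request("video_event_stream_processor", int(time.time()))
print(json.dumps(req, indent=2))
```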
- Execute the following command to start the Amazon Rekognition stream processor
aws rekognition start-stream-processor --profile sunny --region eu-west-1 --cli-input-json file://startstreamprocessor.json
{
    "SessionId": "16d15f11-02a1-4955-8248-f3184d66cb94"
}
Observe execution results
After the stream processor finishes detecting the configured target labels (PERSON/PACKAGE/PET/ALL) according to the video detection start and stop conditions set in start-stream-processor, it saves the detection results to Amazon S3 and sends them out through the configured Amazon SNS topic.
- Observe the detection results saved in Amazon S3
Execute aws s3 ls s3://video-event-analytics --recursive --profile sunny to display the detection results saved to Amazon S3, as shown below:
video-result: corresponds to the ObjectKeyPrefix configured in stream-processor
video_event_stream_processor: corresponds to the name configured in stream-processor
7aa6f62b-dfba-42bb-aa80-b5e50e7403e8: corresponds to the SessionId returned by start-stream-processor.
2022-05-05 12:08:37 25072 video-result/video_event_stream_processor/7aa6f62b-dfba-42bb-aa80-b5e50e7403e8/notifications/5_1.0_heroimage.jpg
2022-05-05 08:58:29 0 video-result/
2022-05-06 08:32:45 69316 video-result/video_event_stream_processor/16d15f11-02a1-4955-8248-f3184d66cb94/notifications/5_1.0.jpg
2022-05-06 08:32:46 25072 video-result/video_event_stream_processor/16d15f11-02a1-4955-8248-f3184d66cb94/notifications/5_1.0_heroimage.jpg
2022-05-05 10:15:11 69316 video-result/video_event_stream_processor/4923874d-049f-40e4-a38e-5df20955b809/notifications/5_1.0.jpg
2022-05-05 10:15:11 25072 video-result/video_event_stream_processor/4923874d-049f-40e4-a38e-5df20955b809/notifications/5_1.0_heroimage.jpg
2022-05-05 12:08:37 69316 video-result/video_event_stream_processor/7aa6f62b-dfba-42bb-aa80-b5e50e7403e8/notifications/5_1.0.jpg
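The key layout above is regular, so an application can recover the session and processor from each object key; a small sketch (the function name is ours):

```python
def parse_result_key(key):
    """Split a detection-result object key into its components:
    <KeyPrefix>/<stream processor name>/<SessionId>/notifications/<file>"""
    key_prefix, processor, session_id, _notifications, filename = key.split("/")
    return {"key_prefix": key_prefix, "processor": processor,
            "session_id": session_id, "file": filename}

info = parse_result_key(
    "video-result/video_event_stream_processor/"
    "16d15f11-02a1-4955-8248-f3184d66cb94/notifications/5_1.0_heroimage.jpg")
print(info["session_id"])  # prints the SessionId returned by start-stream-processor
```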
- Observe the results sent by SNS
After the stream processor detection completes, two types of notification messages are sent to Amazon SNS. The following shows the result of video label detection:
{
"inputInformation": {
"kinesisVideo": {
"streamArn": "your kvs stream arn"
}
},
//Notification message type: LABEL_DETECTED
"eventNamespace": {
"type": "LABEL_DETECTED"
},
//Label descriptions
"labels": [
{
"id": "704ecb70-0ec5-4f14-966d-90a1837ae1b5",
"confidence": 97.671364,
"name": "PERSON",
"frameImageUri": "s3://video-event-analytics/video-result/video_event_stream_processor/16d15f11-02a1-4955-8248-f3184d66cb94/notifications/5_1.0.jpg",
"croppedImageUri": "s3://video-event-analytics/video-result/video_event_stream_processor/16d15f11-02a1-4955-8248-f3184d66cb94/notifications/5_1.0_heroimage.jpg",
"videoMapping": {
"kinesisVideoMapping": {
"fragmentNumber": "91343852333196827415010615634393321213860465930",
"serverTimestamp": 1651739574510,
"producerTimestamp": 1651739574000,
"frameOffsetMillis": 1000
}
},
"boundingBox": {
"left": 0.45954865,
"top": 0.4983085,
"height": 0.48800948,
"width": 0.099167705
}
}
],
"eventId": "ba763aa1-821c-3592-91e2-dec731329d1a",
"tags": {},
"sessionId": "16d15f11-02a1-4955-8248-f3184d66cb94",
"startStreamProcessorRequest": {
"name": "video_event_stream_processor",
"startSelector": {
"kvsProducerTimestamp": 1651702500
},
"stopSelector": {
"maxDurationInSeconds": 30
}
}
}
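A backend SNS subscriber can pull the fields it needs out of this message before pushing a user-facing notification. A minimal sketch, using an abbreviated copy of the message above (the S3 URIs are shortened with "..." and the helper name is ours):

```python
import json

# Abbreviated LABEL_DETECTED message using the field layout shown above
# (the S3 URIs are shortened with "...").
message = json.loads("""
{
  "eventNamespace": {"type": "LABEL_DETECTED"},
  "labels": [
    {"name": "PERSON", "confidence": 97.671364,
     "frameImageUri": "s3://video-event-analytics/.../5_1.0.jpg",
     "croppedImageUri": "s3://video-event-analytics/.../5_1.0_heroimage.jpg"}
  ],
  "sessionId": "16d15f11-02a1-4955-8248-f3184d66cb94"
}
""")

def summarize(msg):
    """Return one (label, confidence, cropped image URI) tuple per detection,
    e.g. for assembling a user-facing push notification."""
    if msg["eventNamespace"]["type"] != "LABEL_DETECTED":
        return []
    return [(label["name"], round(label["confidence"], 1), label["croppedImageUri"])
            for label in msg["labels"]]

for name, confidence, image in summarize(message):
    print(f"{name} detected ({confidence}%): {image}")
```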
The following shows the message indicating that the detection process has completed:
{
"inputInformation": {
"kinesisVideo": {
"streamArn": "your kvs stream arn",
"processedVideoDurationMillis": 30000.0
}
},
"eventNamespace": {
//Notification message type: STREAM_PROCESSING_COMPLETE
"type": "STREAM_PROCESSING_COMPLETE"
},
"streamProcessingResults": {
"message": "Stream Processing Success."
},
"eventId": "16d15f11-02a1-4955-8248-f3184d66cb94",
"tags": {},
"sessionId": "16d15f11-02a1-4955-8248-f3184d66cb94",
"startStreamProcessorRequest": {
"name": "video_event_stream_processor",
"startSelector": {
"kvsProducerTimestamp": 1651702500
},
"stopSelector": {
"maxDurationInSeconds": 30
}
}
}
With the help of the above detection results, you can expand and enrich your business application scenarios.
4. Production Application Reference Architecture
Architecture Description
The following components are added on top of the reference architecture described in the solution analysis above:
Backend services: provide APIs for specific business scenarios, such as device registration, detection information storage, detection result viewing, and video event processing. These services can be deployed on Amazon EC2, container services, or Amazon Lambda as needed.
Database: persistently stores video detection results and business information (such as user device information) for subsequent queries. Based on the data type and read/write requirements, Amazon DocumentDB or DynamoDB is recommended.
SNS subscription processing: according to business needs, SNS can be integrated with SMS, email, and app push, and can also be integrated with other devices such as Alexa to enrich your business functions.
The expanded architecture is shown in the figure below:
Execution Process Description
The following figure shows the execution process of Amazon Rekognition Streaming Video Events, which is divided into the preset phase and the event processing phase
- Preset phase
The processing in the preset phase is similar to the execution process described earlier. The backend service provides a unified API that, when a device is registered, creates the Amazon KVS Stream and the Amazon Rekognition stream processor and binds them together.
- Event processing phase
Step 1: The IPC device detects an event.
Step 2: The IPC device calls the PutMedia API of the Amazon KVS Producer SDK to stream the video to the Amazon KVS stream, and at the same time calls the backend service API to trigger the Amazon Rekognition stream processor to analyze the video data.
Step 3: The Amazon Rekognition stream processor analyzes the video according to its startup parameters, including the start and stop conditions for processing the video, the processing duration, and other information.
Step 4: The Amazon Rekognition stream processor automatically saves the video analysis results to Amazon S3 and triggers the SNS topic.
Step 5: The SNS topic subscription triggers the backend service API, which saves the detection results and business information to the database according to the business logic.
Step 6: The SNS topic subscription triggers a message push that delivers the processing result to end users.
Step 7: The SNS topic subscription triggers other integrated devices.
Step 8: The terminal application calls the backend service API to view the processing results, display the images stored in Amazon S3, or review the video clips stored in Amazon KVS.
Summary
For video detection based on specified labels, Amazon provides the following four solutions.
Solution 1: Detecting labels in a video, based on Amazon KVS + Rekognition
Solution 2: Detecting labels in an image, based on Amazon KVS + Rekognition
Solution 3: Detecting labels with a self-built model, based on Amazon KVS
Solution 4: Detecting labels in streaming video events, based on Amazon KVS + Rekognition
Solutions 1, 2, and 3 need to continuously read video or extract images from Amazon KVS and then call the Amazon Rekognition API to complete the analysis. In this mode, Rekognition API calls are very frequent, and the corresponding cost is high. To reduce cost, you can combine the device's own event detection capability with an event processing API in the backend service that starts and stops Rekognition analysis when motion is detected, but implementing such an API involves many considerations and significant technical difficulty. Solution 4 uses the newly released Amazon Rekognition Streaming Video Events feature, which provides exactly this capability: developers can use the stream processor APIs to precisely control the creation, start, and stop of the Amazon Rekognition stream processor, reducing the difficulty of implementing label-based video detection and bringing the function online quickly. Solution 3 requires you to build your own video detection model and manage model training, inference, and automatic scaling; consider it only if the detection capability of the Amazon Rekognition API cannot meet your specific business needs and your development team has strong AI capabilities. If you are adding intelligent vision capabilities to your products for the first time, Amazon Rekognition Streaming Video Events can help you realize the requirement quickly, benefit from managed video detection capabilities, and greatly reduce the initial R&D investment as well as ongoing development and maintenance costs.
References
Amazon KVS official introduction
https://aws.amazon.com/cn/kinesis/video-streams/?amazon-kinesis-video-streams-resources-blog.sort-by=item.additionalFields.createdDate&amazon-kinesis-video-streams-resources-blog.sort-order=desc
Amazon Rekognition documentation: Working with streaming video events
https://docs.aws.amazon.com/rekognition/latest/dg/streaming-video.html
https://docs.aws.amazon.com/rekognition/latest/dg/streaming-labels-detection.html
Amazon Rekognition Streaming Video Events Blog
https://aws.amazon.com/cn/blogs/machine-learning/abode-uses-amazon-rekognition-streaming-video-events-to-provide-real-time-notifications-to-their-smart-home-customers/
Real Time Face Identification on Live Camera Feed using Amazon Rekognition Video and Amazon KVS
https://medium.com/zenofai/real-time-face-identification-on-live-camera-feed-using-amazon-rekognition-video-and-kinesis-video-52b0a59e8a9
Amazon KVS Workshop
https://catalog.us-east-1.prod.workshops.aws/workshops/b95b9381-baf0-4bef-ba31-63817d54c2a6/en-US/
Easily perform facial analysis on live feeds by creating a serverless video analytics environment using Amazon Rekognition Video and Amazon KVS
https://aws.amazon.com/cn/blogs/machine-learning/easily-perform-facial-analysis-on-live-feeds-by-creating-a-serverless-video-analytics-environment-with-amazon-rekognition-video-and-amazon-kinesis-video-streams/
Amazon Rekognition API Update Notes
https://awsapichanges.info/archive/changes/34d853-rekognition.html
The author of this article
Sun Jinhua
Senior Solution Architect, Amazon Cloud Technology
He is responsible for helping customers with cloud architecture design and consulting. Before joining Amazon Cloud Technology, he founded a startup, where he was responsible for building an e-commerce platform, and designed the overall architecture of a car company's e-commerce platform. He also worked as a senior engineer at a world-leading communication equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in high-concurrency and high-availability system architecture design, microservice architecture design, databases, middleware, IoT, and more.
Shen Shaoyong
Senior Solution Architect, Amazon Cloud Technology
He is mainly responsible for architecture consulting and design of cloud computing solutions based on Amazon Cloud Technology. He currently serves customers in industries such as mobile Internet (including media, games, advertising, e-commerce, and blockchain), finance, and manufacturing, providing consulting and architecture design for cloud computing, AI, big data, IoT, and high-performance computing solutions.