Today a younger friend suddenly added me on WeChat, saying he wanted to consult me about a technical problem (finally, a chance to show off). From his description of the requirements, he needed a Java web application with face recognition: store each person's facial features, then scan a face and compare it against them. Which I had never done. . .
Still, as a warm-hearted guy who loves his readers, how could I say no? Even if there were no difficulty I would create some just to join in. If someone consults me this sincerely, it means I'm still needed. And it turned out to be an unexpectedly rewarding exercise~
Reading his situation, I was suddenly reminded of how helpless I looked doing my own final-year project. Whenever I see a request like this, I help however I can. After all, that is how I got here myself.
Face Recognition SDK
Face recognition technology is complicated, and hand-rolling a recognition algorithm in Java is a bit impractical. My skills simply don't allow me to be that arrogant, so a third-party SDK it is!
After looking around, I found a free face recognition SDK: ArcSoft, at https://ai.arcsoft.com.cn.
On the official site: home page -> Developer Center (upper right) -> select "Face Recognition" -> add an SDK. This generates the APPID and SDK KEY used later. Select the environment you need (this article uses the Windows environment), then download the SDK as a compressed package.
Java project construction
Finally, after much searching, I found a Java version of an ArcSoft demo. Open source really is a beautiful thing; enough said!
1. Download the demo project
GitHub address: https://github.com/xinzhfiu/ArcSoftFaceDemo. Build a database locally and create the table user_face_info. This table stores portrait features; the key field is face_feature, which uses the binary blob type to hold the face feature data.
SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;
-- ----------------------------
-- Table structure for user_face_info
-- ----------------------------
DROP TABLE IF EXISTS `user_face_info`;
CREATE TABLE `user_face_info` (
`id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
`group_id` int(11) DEFAULT NULL COMMENT 'group id',
`face_id` varchar(31) DEFAULT NULL COMMENT 'unique face id',
`name` varchar(63) DEFAULT NULL COMMENT 'name',
`age` int(3) DEFAULT NULL COMMENT 'age',
`email` varchar(255) DEFAULT NULL COMMENT 'email address',
`gender` smallint(1) DEFAULT NULL COMMENT 'gender, 1=male, 2=female',
`phone_number` varchar(11) DEFAULT NULL COMMENT 'phone number',
`face_feature` blob COMMENT 'face feature',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
`fpath` varchar(255) DEFAULT NULL COMMENT 'photo path',
PRIMARY KEY (`id`) USING BTREE,
KEY `GROUP_ID` (`group_id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 ROW_FORMAT=DYNAMIC;
SET FOREIGN_KEY_CHECKS = 1;
2. Modify the application.properties file
The project is fairly complete as-is; you only need to change a few configuration values to start it, but there are a few points to watch, highlighted below.
config.arcface-sdk.sdk-lib-path: path to the directory holding the three .dll files from the SDK package
config.arcface-sdk.app-id: the APPID from the developer center
config.arcface-sdk.sdk-key: the SDK Key from the developer center
config.arcface-sdk.sdk-lib-path=d:/arcsoft_lib
config.arcface-sdk.app-id=8XMHMu71Dmb5UtAEBpPTB1E9ZPNTw2nrvQ5bXxBobUA8
config.arcface-sdk.sdk-key=BA8TLA9vVwK7G6btJh2A2FCa8ZrC6VWZLNbBBFctCz5R
# druid: local database address
spring.datasource.druid.url=jdbc:mysql://127.0.0.1:3306/xin-master?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=UTC
spring.datasource.druid.username=junkang
spring.datasource.druid.password=junkang
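For reference, in a Spring Boot project the three `config.arcface-sdk.*` properties above are typically bound to a configuration class. Below is a minimal sketch of such a binding POJO; the class name is my own invention, and in the real project it would carry `@ConfigurationProperties(prefix = "config.arcface-sdk")`, omitted here to keep the snippet dependency-free:

```java
// Hypothetical binding class for the config.arcface-sdk.* properties.
// Spring's relaxed binding maps sdk-lib-path -> sdkLibPath, app-id -> appId, etc.
public class ArcFaceProperties {
    private String sdkLibPath; // directory holding the SDK's native .dll files
    private String appId;      // APPID from the ArcSoft developer center
    private String sdkKey;     // SDK Key from the ArcSoft developer center

    public String getSdkLibPath() { return sdkLibPath; }
    public void setSdkLibPath(String sdkLibPath) { this.sdkLibPath = sdkLibPath; }
    public String getAppId() { return appId; }
    public void setAppId(String appId) { this.appId = appId; }
    public String getSdkKey() { return sdkKey; }
    public void setSdkKey(String sdkKey) { this.sdkKey = sdkKey; }
}
```

This keeps the SDK paths and keys out of the code and in one place, so switching machines only means editing application.properties.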
3. Create a lib folder in the root directory
Create a folder named lib in the project root directory and put arcsoft-sdk-face-2.2.0.1.jar from the downloaded SDK archive into it.
4. Introduce the arcsoft dependency
<dependency>
<groupId>com.arcsoft.face</groupId>
<artifactId>arcsoft-sdk-face</artifactId>
<version>2.2.0.1</version>
<scope>system</scope>
<systemPath>${basedir}/lib/arcsoft-sdk-face-2.2.0.1.jar</systemPath>
</dependency>
The pom.xml must also configure the includeSystemScope property on the Spring Boot Maven plugin; otherwise arcsoft-sdk-face-2.2.0.1.jar may not be packaged and referenced at runtime:
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<includeSystemScope>true</includeSystemScope>
<fork>true</fork>
</configuration>
</plugin>
</plugins>
</build>
5. Start the project
The configuration is now complete; run the Application class to start the project.
Test it at http://127.0.0.1:8089/demo — if the following page appears, startup succeeded.
Operation
1. Register a face image
Enter a name on the page and click "Camera Registration" (摄像头注册) to activate the local camera. After submitting, the current frame is sent to the backend, which detects and extracts the facial features and saves them to the database.
2. Face comparison
After registering a face, test whether recognition works: submit the current image, and recognition succeeds with a similarity of 92%. But as a programmer you have to be skeptical about everything — isn't this just a result hard-coded into the page?
To verify further, I covered my face and tried again; this time it reported "face does not match", which proves a real comparison is happening.
Source code analysis
Take a brief look at the project source code and analyze the implementation process:
The pages and JS were obviously written by a back-end programmer. Don't ask me how I can tell — I just know, hahaha~
1. JS opens the local camera, takes a picture, and uploads the image as a Base64 string
function getMedia() {
$("#mainDiv").empty();
let videoComp = " <video id='video' width='500px' height='500px' autoplay='autoplay' style='margin-top: 20px'></video><canvas id='canvas' width='500px' height='500px' style='display: none'></canvas>";
$("#mainDiv").append(videoComp);
let constraints = {
video: {width: 500, height: 500},
audio: true
};
    // get the video element that shows the camera feed
    let video = document.getElementById("video");
    // getUserMedia returns a Promise;
    // the success callback receives a MediaStream object as its argument.
    // then() is a method on the Promise object and runs asynchronously,
    // only after the preceding call completes,
    // so we avoid using the stream before it is available.
let promise = navigator.mediaDevices.getUserMedia(constraints);
promise.then(function (MediaStream) {
video.srcObject = MediaStream;
video.play();
});
// var t1 = window.setTimeout(function() {
// takePhoto();
// },2000)
}
// take-photo handler
function takePhoto() {
let mainComp = $("#mainDiv");
if(mainComp.has('video').length)
{
let userNameInput = $("#userName").val();
if(userNameInput == "")
{
alert("姓名不能为空!");
return false;
}
        // get the canvas and draw the current video frame onto it
let video = document.getElementById("video");
let canvas = document.getElementById("canvas");
let ctx = canvas.getContext('2d');
ctx.drawImage(video, 0, 0, 500, 500);
var formData = new FormData();
var base64File = canvas.toDataURL();
var userName = $("#userName").val();
formData.append("file", base64File);
formData.append("name", userName);
formData.append("groupId", "101");
$.ajax({
type: "post",
url: "/faceAdd",
data: formData,
contentType: false,
processData: false,
async: false,
success: function (text) {
var res = JSON.stringify(text)
if (text.code == 0) {
alert("注册成功")
} else {
alert(text.message)
}
},
error: function (error) {
alert(JSON.stringify(error))
}
});
}
else{
var formData = new FormData();
let userName = $("#userName").val();
formData.append("groupId", "101");
var file = $("#file0")[0].files[0];
var reader = new FileReader();
reader.readAsDataURL(file);
reader.onload = function () {
var base64 = reader.result;
formData.append("file", base64);
formData.append("name",userName);
$.ajax({
type: "post",
url: "/faceAdd",
data: formData,
contentType: false,
processData: false,
async: false,
success: function (text) {
var res = JSON.stringify(text)
if (text.code == 0) {
alert("注册成功")
} else {
alert(text.message)
}
},
error: function (error) {
alert(JSON.stringify(error))
}
});
location.reload();
}
}
}
2. Parse the image in the backend and extract portrait features
The backend parses the image sent from the front end, extracts the portrait features, and stores them in the database. Feature extraction relies on the FaceEngine provided by the SDK; I read through the source, and honestly the underlying algorithm is beyond me.
/*
 * Face registration
 */
@RequestMapping(value = "/faceAdd", method = RequestMethod.POST)
@ResponseBody
public Result<Object> faceAdd(@RequestParam("file") String file, @RequestParam("groupId") Integer groupId, @RequestParam("name") String name) {
try {
        // decode the image
byte[] decode = Base64.decode(base64Process(file));
ImageInfo imageInfo = ImageFactory.getRGBData(decode);
        // extract face features
byte[] bytes = faceEngineService.extractFaceFeature(imageInfo);
if (bytes == null) {
return Results.newFailedResult(ErrorCodeEnum.NO_FACE_DETECTED);
}
UserFaceInfo userFaceInfo = new UserFaceInfo();
userFaceInfo.setName(name);
userFaceInfo.setGroupId(groupId);
userFaceInfo.setFaceFeature(bytes);
userFaceInfo.setFaceId(RandomUtil.randomString(10));
        // insert the face features into the database
userFaceInfoService.insertSelective(userFaceInfo);
logger.info("faceAdd:" + name);
return Results.newSuccessResult("");
} catch (Exception e) {
logger.error("", e);
}
return Results.newFailedResult(ErrorCodeEnum.UNKNOWN);
}
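The handler above calls base64Process(file) before Base64-decoding. Its implementation isn't shown in this post, but its job is evidently to strip the `data:<mime>;base64,` prefix that canvas.toDataURL() and FileReader.readAsDataURL() prepend on the front end. A minimal sketch of what such a helper might look like (the method name matches the call site; the body is my guess at the behavior, not the demo's actual code):

```java
import java.util.Base64;

public class DataUrlUtil {

    // Strip an optional "data:<mime>;base64," prefix so only the raw
    // Base64 payload remains for decoding.
    public static String base64Process(String dataUrl) {
        int comma = dataUrl.indexOf(',');
        if (dataUrl.startsWith("data:") && comma >= 0) {
            return dataUrl.substring(comma + 1);
        }
        return dataUrl; // already a bare Base64 string
    }

    // Convenience: strip the prefix and decode to raw image bytes.
    public static byte[] decode(String dataUrl) {
        return Base64.getDecoder().decode(base64Process(dataUrl));
    }
}
```

If the prefix were left in place, Base64.decode would fail on the `data:image/png;base64,` header characters, which is why this normalization happens before decoding.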
3. Portrait feature comparison
Face recognition: features are extracted from the image sent by the front end, then compared against the portrait records already in the database.
/*
 * Face search
 */
@RequestMapping(value = "/faceSearch", method = RequestMethod.POST)
@ResponseBody
public Result<FaceSearchResDto> faceSearch(String file, Integer groupId) throws Exception {
byte[] decode = Base64.decode(base64Process(file));
BufferedImage bufImage = ImageIO.read(new ByteArrayInputStream(decode));
ImageInfo imageInfo = ImageFactory.bufferedImage2ImageInfo(bufImage);
    // extract face features
byte[] bytes = faceEngineService.extractFaceFeature(imageInfo);
if (bytes == null) {
return Results.newFailedResult(ErrorCodeEnum.NO_FACE_DETECTED);
}
    // compare face features and get the match results
List<FaceUserInfo> userFaceInfoList = faceEngineService.compareFaceFeature(bytes, groupId);
if (CollectionUtil.isNotEmpty(userFaceInfoList)) {
FaceUserInfo faceUserInfo = userFaceInfoList.get(0);
FaceSearchResDto faceSearchResDto = new FaceSearchResDto();
BeanUtil.copyProperties(faceUserInfo, faceSearchResDto);
List<ProcessInfo> processInfoList = faceEngineService.process(imageInfo);
if (CollectionUtil.isNotEmpty(processInfoList)) {
            // face detection
List<FaceInfo> faceInfoList = faceEngineService.detectFaces(imageInfo);
int left = faceInfoList.get(0).getRect().getLeft();
int top = faceInfoList.get(0).getRect().getTop();
int width = faceInfoList.get(0).getRect().getRight() - left;
int height = faceInfoList.get(0).getRect().getBottom() - top;
Graphics2D graphics2D = bufImage.createGraphics();
            graphics2D.setColor(Color.RED); // red
BasicStroke stroke = new BasicStroke(5f);
graphics2D.setStroke(stroke);
graphics2D.drawRect(left, top, width, height);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
ImageIO.write(bufImage, "jpg", outputStream);
byte[] bytes1 = outputStream.toByteArray();
faceSearchResDto.setImage("data:image/jpeg;base64," + Base64Utils.encodeToString(bytes1));
faceSearchResDto.setAge(processInfoList.get(0).getAge());
faceSearchResDto.setGender(processInfoList.get(0).getGender().equals(1) ? "女" : "男");
}
return Results.newSuccessResult(faceSearchResDto);
}
return Results.newFailedResult(ErrorCodeEnum.FACE_DOES_NOT_MATCH);
}
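compareFaceFeature above delegates to the SDK, whose scoring is a black box. As a rough mental model only — this is not ArcSoft's actual proprietary algorithm — comparing two face features usually boils down to a similarity measure between two fixed-length feature vectors, for example cosine similarity:

```java
public class FeatureCompare {

    // Cosine similarity between two equal-length feature vectors.
    // Returns a value in [-1, 1]; values near 1 mean "very similar",
    // which a recognizer would then compare against a match threshold.
    public static double cosineSimilarity(float[] a, float[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("feature lengths differ");
        }
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

The "92% similarity" the demo reports is conceptually a score like this, mapped onto a percentage scale and checked against a cutoff to decide match vs. no match.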
The general flow chart of the entire face recognition function is as follows:
Summary
The design of the whole project is fairly clear. The difficult parts are the face recognition engine and the front-end JS; the remaining functionality is fairly routine.
Source address: https://github.com/xinzhfiu/ArcSoftFaceDemo/ — if you have any technical questions, feel free to reach out.