Review of Fingerprint Recognition (10): Deep Learning Methods

This article will be updated from time to time

1. Introduction

In the field of pattern recognition, fingerprint recognition is one of the few subfields where traditional technology already achieves a high recognition rate. As early as the 1970s, automatic fingerprint identification technology could help the police solve crimes. Perhaps because the traditional technology was so successful, the application of deep learning to fingerprint recognition started relatively late. However, with the vigorous development of deep learning, researchers have gradually reimplemented the various modules of fingerprint recognition with deep learning techniques and achieved better and better performance.

In 2019, when Springer contacted the authors of the first two editions of the Handbook of Fingerprint Recognition (first edition 2004, second edition 2009) about writing a third edition, Professor Maltoni was a little hesitant. A 10-year-old monograph on face recognition would have to be completely rewritten, because deep learning has subverted traditional face recognition technology over the past decade. In fingerprint recognition, however, deep learning has not yet been a revolution, let alone a subversion of traditional techniques; the Handbook did not seem to need a major update.

I have not done market research, but I believe that quite a few fingerprint recognition products are not based on deep learning. A time-attendance terminal costing tens of yuan, or a fingerprint lock costing a little over a hundred yuan with no Wi-Fi module, cannot run deep learning algorithms. Even 10 years from now, I do not believe deep learning will completely replace traditional fingerprint technology: many application scenarios require extremely low-cost, extremely low-power identification solutions. It is precisely because traditional fingerprint technology achieves very good performance on low-cost computing platforms that fingerprint recognition can be deployed so pervasively. This is a major advantage of fingerprints over biometrics such as faces.

Nevertheless, deep learning has had a huge impact on the fingerprint field. Consider latent fingerprint identification. The ideal latent fingerprint identification system for the police should be as accurate and convenient as a live-scan identification system: the user inputs a latent fingerprint image; if the finger is not in the database, the system says no; if it is, the system returns only the correct database fingerprint. It is difficult to keep improving algorithms designed from hand-crafted experience, and they are unlikely to reach such an ideal. In recent years, latent fingerprint recognition algorithms in both academia and industry have all adopted deep learning-based solutions. In other application areas, deep learning is likewise the preferred solution whenever the hardware permits.

This article does not attempt to list every fingerprint recognition paper that uses deep learning; it introduces only representative papers that I am familiar with. It is a selective review.

The following sections discuss feature extraction, matching, synthesis, and fake fingerprint detection in turn.

2. Fingerprint Feature Extraction

Fingerprint features can be divided into three levels from coarse to fine. Level 1: the ridge orientation field and frequency map (singular points are special points of the orientation field); Level 2: the ridge skeleton (minutiae are special points of the ridges); Level 3: the inner and outer contours of ridges (sweat pores lie on the inner contour). These features are all defined around the ridges and are anatomically meaningful.

Three levels of fingerprint features (Feng and Jain, 2011)

Pose is not a fingerprint feature in itself, but it defines a coordinate system for fingerprint features; if it had to be assigned a level, it could be counted as a Level-0 feature. Pose is also anatomically defined: the center lies at the center of the fingertip pattern, and the direction points toward the fingertip.


The pose defines the coordinate system of a fingerprint and can be regarded as its Level-0 feature

In recent years, researchers have proposed a variety of deep learning-based fingerprint feature extraction methods.

2.1 Pose Estimation

Fingerprint pose can play a fundamental role in fingerprint feature extraction and in various matching methods. For a long time, however, the pose estimation problem was neglected (or avoided) in the fingerprint field, because poses were generally believed to be hard to measure accurately, and using an inaccurate pose harms recognition performance. In recent years the accuracy of pose estimation has improved steadily, and it now plays a significant role in orientation field estimation, fingerprint retrieval, and fingerprint matching. Fingerprint pose estimation is similar to the object detection problem in computer vision; early approaches were based on voting (Yang et al., 2014) or traditional classifiers (Su et al., 2016). Recently, researchers have proposed a variety of pose estimation methods based on deep networks.

2.1.1 Method Based on Faster R-CNN

Ouyang et al. (2017) were the first to apply deep learning to fingerprint pose estimation. The authors estimate the fingerprint center and direction with the Faster R-CNN object detection framework, and ensure a single accurate pose output through an intra-class and inter-class combination strategy. Experiments on rolled fingerprint datasets show that the estimated poses are more accurate than those of previous methods (Yang et al., 2014; Su et al., 2016) (smaller deviation from the true pose; shorter distances between matched minutiae after alignment), and the method runs fast. The experiments also show that adding pose constraints to a minutiae-based fingerprint retrieval algorithm yields higher retrieval accuracy.

The Fingerprint Pose Estimation Method of Ouyang et al. (2017)

2.1.2 Joint Estimation of Pose and Singular Points

Yin et al. (2021) designed a unified deep network for jointly extracting the fingerprint pose and singular points, based on an analysis of traditional pose and singular point extraction algorithms. The network structure is shown in the figure below. It consists of four parts: a feature extraction backbone, a singular point estimation module, an attention module, and a pose regression module. The feature extraction backbone consists of three convolutional blocks and a dilated convolution pyramid for extracting low-level features. The singular point estimation module outputs probability heat maps of the core and delta points. The attention module identifies the regions of the feature maps that are meaningful for pose estimation. The pose regression module finally outputs the fingerprint center position and angle through fully connected layers.

Yin et al. (2021) proposed to jointly estimate the pose and singularity of fingerprints
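As a concrete picture of this kind of architecture, below is a minimal PyTorch sketch of a joint pose-plus-singularity network in this spirit. All module names, channel widths, dilation rates, and the attention-pooling detail are illustrative assumptions, not the exact design of Yin et al. (2021).

```python
# Minimal sketch of a joint pose + singular-point network (illustrative only).
import torch
import torch.nn as nn

class JointPoseSingularityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Backbone: plain conv blocks followed by a dilated-conv pyramid.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pyramid = nn.ModuleList([
            nn.Conv2d(128, 64, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        # Heatmaps for the core and delta singular points (2 channels).
        self.singularity_head = nn.Conv2d(192, 2, 1)
        # Spatial attention over the features used for pose regression.
        self.attention = nn.Sequential(nn.Conv2d(192, 1, 1), nn.Sigmoid())
        # Pose regression: center (x, y) and orientation as (cos, sin).
        self.pose_head = nn.Linear(192, 4)

    def forward(self, img):
        f = self.backbone(img)
        f = torch.cat([p(f) for p in self.pyramid], dim=1)   # B x 192 x h x w
        heatmaps = torch.sigmoid(self.singularity_head(f))   # core/delta maps
        att = self.attention(f)                              # B x 1 x h x w
        pooled = (f * att).flatten(2).mean(-1)               # attention-weighted pooling
        pose = self.pose_head(pooled)                        # (x, y, cos, sin)
        return heatmaps, pose
```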

The authors trained the network on the 2,000 file fingerprints of NIST SD4 with hand-labeled singular points and poses. For latent fingerprints, 200 latents from the Hisign latent fingerprint database were used for retraining. During training, each fingerprint undergoes random translation and rotation for data augmentation. The authors tested on the NIST SD4 and NIST SD14 rolled fingerprint databases, the FVC2004 DB1A plain fingerprint database, and the NIST SD27 latent database, evaluating poses by the position differences of hand-marked minutiae and by pose-based indexing results. The examples in the figure below show that the method outperforms existing pose estimation algorithms and can output the positions of the core and delta at the same time.

Comparison of the results of three pose estimation algorithms. The blue line is the hand-marked pose, and the red line is the pose estimated by the algorithm. (a, b) are the outputs of two existing algorithms, and (c) is the result of the algorithm of Yin et al. (2021).

2.1.3 Pose Estimation with Dense Voting

Duan et al. (2023) argue that the incompleteness problem widespread in fingerprints (e.g., plain fingerprints and latents) can be alleviated by voting strategies. The authors combine a voting strategy with deep learning, converting the estimation of the fingerprint center position and direction into dense offset-vector estimation, and achieve accurate pose estimation on many types of fingerprint images. The network structure is shown in the figure below. It consists of three modules: a feature extraction backbone, a dense prediction module, and a pose (voting) aggregation module. The feature extraction backbone is built on ResNet-18 (with the final classification layer removed) and extracts local features. The dense prediction module makes a prediction in each local region and outputs 6 channels in total: a direction-independent offset vector field, a direction-dependent offset vector field, a fingerprint foreground segmentation, and a fingerprint direction attention map. The pose (voting) aggregation module integrates these dense outputs into the final fingerprint pose.

Duan et al. (2023) proposed to estimate the center and orientation of fingerprints based on dense voting
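The voting step itself is simple. Below is an illustrative NumPy sketch of how dense offset votes could be aggregated into a center estimate; this is my reading of the idea, not the exact aggregation of Duan et al. (2023).

```python
# Every foreground location predicts an offset vector pointing at the
# fingerprint center; votes are combined as a weighted average.
import numpy as np

def aggregate_center(offsets, seg, att):
    """offsets: (2, H, W) predicted (dy, dx) toward the center;
    seg, att: (H, W) foreground and attention weights in [0, 1]."""
    ys, xs = np.mgrid[0:seg.shape[0], 0:seg.shape[1]]
    w = seg * att
    votes_y = ys + offsets[0]            # each pixel's guess of the center row
    votes_x = xs + offsets[1]            # and of the center column
    cy = (votes_y * w).sum() / w.sum()   # weighted average of all votes
    cx = (votes_x * w).sum() / w.sum()
    return cy, cx
```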

The authors train the network jointly on rolled, plain, and latent fingerprint data, and do not fine-tune the model for specific datasets. During training, each fingerprint undergoes random translation, rotation, horizontal flipping, and additive Gaussian noise for data augmentation. The authors tested on 10 datasets covering rolled, plain, contactless, and latent fingerprints, evaluating pose estimation by the consistency of minutiae alignment, by fingerprint retrieval with pose constraints, and by fingerprint matching performance. The examples below show that this method produces more accurate and consistent pose estimates than previous algorithms.

Comparison of fingerprint pose predictions, where each row shows different impressions of the same finger. Cyan and blue are the outputs of existing algorithms, and red is the result of the algorithm of Duan et al. (2023).

2.2 Orientation Field Estimation

Estimating the local ridge orientation is a key step in a fingerprint recognition system and is very important for subsequent steps such as ridge enhancement, fingerprint classification, and fingerprint matching. Traditional orientation field estimation methods do not use machine learning and perform poorly on low-quality fingerprints such as latents. Dictionary-based methods (Feng et al., 2013; Yang et al., 2014) learn fingerprint priors from a large number of samples, which improves estimation performance. Recently, researchers have turned to deep learning-based schemes to further improve the inference capability of orientation field estimation algorithms.

2.2.1 Orientation Patch Classification

Cao and Jain (2015) treated the orientation field estimation of fingerprint image patches as a classification problem and proposed a convolutional network (ConvNet) based method for latent fingerprint orientation field estimation. Given image patches extracted from a latent image, their orientation patches are predicted by a trained ConvNet and stitched together to form the orientation field of the whole latent fingerprint.

Specifically, orientation fields with a block size of 16×16 pixels are first computed from the NIST SD4 database using traditional algorithms. The database contains approximately 400 rolled fingerprints for each of the five fingerprint pattern types. Orientation patches of size 10×10 blocks are selected from these orientation fields and clustered into 128 orientation patterns using fast K-means (a subset is shown in the figure below).

A subset of the representative orientation patterns learned from the NIST SD4 database (Cao and Jain, 2015)
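As a sketch of this clustering step: orientations are angles, so a common trick is to embed each patch as (cos 2θ, sin 2θ) before running K-means. The doubling trick and the use of scikit-learn below are my assumptions, not details from the paper.

```python
# Illustrative clustering of 10x10 orientation patches into 128 patterns.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def cluster_orientation_patches(patches, n_patterns=128):
    """patches: (N, 10, 10) array of ridge orientations in radians."""
    feats = np.concatenate(
        [np.cos(2 * patches), np.sin(2 * patches)], axis=-1
    ).reshape(len(patches), -1)                     # N x 200 feature vectors
    km = MiniBatchKMeans(n_clusters=n_patterns, n_init=3).fit(feats)
    centers = km.cluster_centers_.reshape(n_patterns, 10, 20)
    cos2, sin2 = centers[..., :10], centers[..., 10:]
    return 0.5 * np.arctan2(sin2, cos2)             # back to angles: 128 x 10 x 10
```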

From NIST SD14, another larger rolled fingerprint database, a large number of fingerprint patches of size 160×160 pixels are selected and assigned to orientation patterns by computing the orientation similarity with each pattern. For each orientation pattern, 10,000 fingerprint patches were collected to train the 128-class ConvNet classifier (structure shown in the figure below). To simulate latent fingerprints, noise such as lines is superimposed on these patches to obtain more training samples.

Network structure for estimating the orientation field (Cao and Jain, 2015)

Given a latent image, the orientation field is estimated as follows (see the figure below): (1) a preprocessing step removes large-scale background noise and enhances the ridge structure; (2) the preprocessed image is divided into overlapping patches, and each patch is sent to the trained ConvNet to predict its orientation pattern; (3) all predicted orientation patterns are stitched together to form the whole orientation field.

Flowchart of the orientation field estimation method of Cao and Jain (2015)

2.2.2 Orientation Field Residual Regression

Duan et al. (2021) argued that previous latent fingerprint orientation field extraction algorithms focus mainly on local image features, while the global regularity of the fingerprint orientation field has received little attention. The orientation field of fingerprints is remarkably regular: across fingerprints, the distribution of orientations in the same region is similar, while it differs between regions. This regularity carries rich prior information that helps orientation field extraction for latent fingerprints.

The authors analyzed the distribution of orientation fields under different pattern types, computed per-pattern-type statistics of the orientation fields of high-quality fingerprints in NIST SD4, and used the resulting average orientation field as an initialization. A deep network then predicts the residual between the true orientation field and the average field. This keeps the overall orientation field distribution reasonable, improves the accuracy of local orientation estimation, and lets the average field serve as prior knowledge where the image noise is strong. To simulate latents for training, the authors randomly selected 8,100 high-quality rolled fingerprints from NIST SD14 and extracted their orientation fields and segmentations as annotations, then synthesized latent fingerprints by randomly combining these fingerprints with natural grayscale images through minimum projection, as training data for the orientation field prediction network.

Orientation field estimation network of Duan et al. (2021)
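The residual idea can be written down compactly. Below is a minimal PyTorch sketch in which the network predicts a correction to the pattern-type mean orientation field rather than the field itself; the 2θ vector encoding and all shapes are my assumptions.

```python
# Sketch: regress a residual on top of a statistical mean orientation field.
import torch
import torch.nn as nn

class ResidualOrientationNet(nn.Module):
    """`enc` maps the image to features at the orientation-field resolution;
    `head` regresses a 2-channel residual (delta cos 2θ, delta sin 2θ)."""
    def __init__(self, enc: nn.Module, head: nn.Module):
        super().__init__()
        self.enc, self.head = enc, head

    def forward(self, image, mean_field):
        # mean_field: B x 2 x h x w holding (cos 2θ, sin 2θ) of the
        # per-pattern-type average orientation field used as the prior.
        feats = self.enc(image)
        residual = self.head(torch.cat([feats, mean_field], dim=1))
        vec = mean_field + residual                       # corrected vectors
        return 0.5 * torch.atan2(vec[:, 1], vec[:, 0])    # angles in radians
```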

The authors evaluated orientation field prediction with the 258 pairs of rolled and latent fingerprints in the NIST SD27 database. Prediction performance is first evaluated within the true fingerprint foreground area; then 27,000 rolled fingerprints from NIST SD14 are added as a background gallery to evaluate the influence of orientation field extraction on recognition performance. The experimental results show that the method outperforms previous algorithms.

2.3 Ridge and Minutiae Extraction

Ridges and minutiae are closely related features, so their extraction is discussed together. Two methods are described here (Tang et al., 2017; Dabouei et al., 2018a).

2.3.1 FingerNet

Tang et al. (2017) combined fingerprint domain knowledge with the representational power of deep learning to design a deep convolutional network for minutiae extraction. First, the traditional fingerprint processing pipeline, comprising orientation estimation, segmentation, enhancement, and minutiae extraction, is expressed as a convolutional network with fixed weights (as shown in the figure below).

The traditional fingerprint feature extraction pipeline (orientation field estimation, segmentation, Gabor enhancement, and minutiae extraction) expressed as a convolutional network (Tang et al., 2017)

The network is then relaxed into FingerNet (below), whose weights are learnable, to enhance its representational power. FingerNet is fully differentiable and can learn its weights from large amounts of data. For the input fingerprint image, pixel-wise normalization first fixes the mean and variance of the input. The network is divided into three parts: orientation field estimation and segmentation, enhancement, and minutiae extraction.

FingerNet network structure (Tang et al., 2017)

The backbone of the orientation field and segmentation modules is a VGG-style network consisting of several convolution-BatchNorm-PReLU blocks and max pooling layers. After basic feature extraction, an Atrous Spatial Pyramid Pooling (ASPP) layer gathers multi-scale information, with dilation rates of 1, 4, and 8. On the feature map of each scale, parallel orientation regression directly predicts, for each input pixel, a probability over 90 discrete angles, producing an orientation distribution map; segmentation regression predicts the probability that each input pixel belongs to the region of interest, producing a segmentation score map.

Gabor enhancement serves directly as the enhancement module. The ridge frequency is fixed, and the ridge orientation is discretized into 90 angles corresponding to the orientation distribution map. The enhanced fingerprint image is obtained by multiplying the group of Gabor-filtered phase images with the upsampled orientation distribution map. The Gabor filter parameters are settable and are fine-tuned during training.
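The orientation-weighted enhancement step can be sketched as follows: the image is filtered with a fixed-frequency Gabor bank at 90 discrete orientations, and the responses are blended by the predicted orientation probabilities. The shapes and filter-bank handling below are generic assumptions, not Tang et al.'s exact parameters.

```python
# Orientation-weighted Gabor enhancement (illustrative sketch).
import torch
import torch.nn.functional as F

def gabor_enhance(image, ori_prob, gabor_bank):
    """image: B x 1 x H x W; ori_prob: B x 90 x H x W softmax over angles
    (already upsampled to image resolution); gabor_bank: 90 x 1 x k x k
    real Gabor kernels, one per discrete angle (k odd)."""
    responses = F.conv2d(image, gabor_bank, padding=gabor_bank.shape[-1] // 2)
    # Each pixel's enhanced value is the expectation of the 90 filter
    # responses under the predicted orientation distribution.
    return (responses * ori_prob).sum(dim=1, keepdim=True)
```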

The enhanced fingerprint image is sent to the minutiae extraction module, whose backbone is also a VGG-style network followed by an ASPP layer. After feature extraction, the minutiae extraction part outputs four maps. The first is the minutiae score map, giving the probability that each 8×8 block contains a minutia. The second and third are X/Y probability maps that finely localize the minutia within the block through two 8-way position classification tasks. The last is the minutiae direction distribution map, which represents the minutiae direction analogously to the orientation distribution map.
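Those four maps are decoded into a minutiae list roughly as below. The score threshold and the absence of non-maximum suppression are simplifying assumptions on my part.

```python
# Decoding minutiae from block-level score / position / direction maps.
import math
import torch

def decode_minutiae(score, px, py, ori, thr=0.5):
    """score: h x w block probabilities; px, py: 8 x h x w in-block position
    classifications; ori: K x h x w direction distribution per block."""
    minutiae = []
    for i, j in torch.nonzero(score > thr):
        x = int(j) * 8 + int(px[:, i, j].argmax())   # refine column in block
        y = int(i) * 8 + int(py[:, i, j].argmax())   # refine row in block
        theta = int(ori[:, i, j].argmax()) * 2 * math.pi / ori.shape[0]
        minutiae.append((x, y, theta))
    return minutiae
```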

The loss function for each output is shown in the figure below. Minutiae annotated by fingerprint experts serve as ground truth. Since the orientation field and segmentation map have no ground truth, weak and strong labels are generated from the minutiae and from matched file fingerprints, respectively. The weak orientation label is the orientation field of the aligned file fingerprint extracted by traditional methods; the strong orientation label is the minutiae direction. Finally, the weak segmentation label of the latent is obtained by dilating the convex hull of the minutiae set.

The loss function of FingerNet consists of 9 parts, corresponding to the three outputs: orientation field, segmentation map, and minutiae. (Tang et al., 2017)

The authors conducted experiments on the NIST SD27 and FVC2004 databases. On NIST SD27, the average position and angle errors between the extracted minutiae and the ground truth are 4.4 pixels and 5.0°, respectively; on FVC2004, they are 3.4 pixels and 6.4°. The figure below shows an example of the orientation field, enhanced foreground region, and minutiae extracted by FingerNet.

Orientation fields, enhanced foreground regions, and minutiae extracted from latent fingerprints by FingerNet (Tang et al., 2017)

In addition, the authors conducted recognition experiments to test whether fingerprint matching benefits from FingerNet. The results show that FingerNet's recognition rate outperforms other methods thanks to better minutiae extraction; for example, compared with the VeriFinger minutiae extractor, FingerNet's rank-1 recognition rate is about 19% higher.

2.3.2 Conditional Generative Adversarial Network

Dabouei et al. (2018a) proposed a conditional generative adversarial network (cGAN) model for direct latent fingerprint reconstruction. The authors made two modifications to cGAN to adapt it to this task. First, the model is forced to generate three additional maps on top of the ridge map, which ensures that orientation and frequency information are considered during generation and prevents the model from filling in large missing regions and producing spurious minutiae. Second, a perceptual ID preservation method forces the generator to preserve ID information during reconstruction. Using a synthetic latent fingerprint database, the authors train a deep network to predict the missing information from the input latent image.

The model consists of three networks: a generator, a fingerprint perceptual ID information (PIDI) extractor, and a discriminator (see the figure below). The generator is a U-Net that takes the input latent fingerprint and simultaneously generates the ridge map, frequency map, orientation map, and segmentation map. The reconstruction error is the weighted sum of the errors between the generated maps and their respective ground truths. The generated maps are then concatenated with the input latent to condition the discriminator. Ground-truth maps are extracted from raw clean fingerprints that are first warped to simulate latent images; during training, these maps supervise the discriminator.

Schematic of the cGAN model for latent fingerprint reconstruction (Dabouei et al., 2018a)

The fingerprint PIDI extractor is a branch of a deep Siamese fingerprint verifier trained with a contrastive loss. It is trained as a fingerprint verifier to extract the perceptual ID information (PIDI) of the generated maps. The extracted PIDI consists of the output feature maps of the verifier's first four convolutional layers, which are connected to the corresponding layers of the discriminator to emphasize ID information in the discriminator's decision.

The discriminator is a deep CNN that maps the generator's conditioned output of size 256×256×5 to a 16×16×1 discrimination matrix. The corresponding latent fingerprint is concatenated with the generated or ground-truth maps to act as the condition, and the PIDI obtained by the fingerprint verifier is also passed to the discriminator.

The authors conducted experiments on the IIIT-Delhi MOLF database. The rank-50 accuracy of matching latents against live-scan fingerprints is 70.89%, and the rank-10 accuracy of latent-to-latent matching is 88.02%. Furthermore, measuring the quality of the reconstructed fingerprints with NFIQ shows that the generated fingerprints are of significantly higher quality than the original latent images.

2.4 3D Finger Reconstruction

A finger is itself a three-dimensional object, and a 3D fingerprint is the most original form of a fingerprint. The advantages of 3D over 2D fingerprints include: (1) skin deformation is avoided; (2) a complete fingerprint can be captured at once without rolling the finger; (3) the 3D information provides additional discriminative power. Researchers have proposed a variety of 3D fingerprint acquisition techniques (Kumar, 2018). However, because of bulky hardware, high cost, and the lack of a clear advantage in recognition performance, these acquisition technologies have not yet seen large-scale application.

Cui et al. (2023) proposed a technique for reconstructing a 3D fingerprint from a single contactless fingerprint image, which requires only an ordinary camera and significantly reduces hardware cost. Unlike previous 3D acquisition schemes that relied heavily on hardware, this scheme uses machine learning to learn, from a large number of samples, the 3D shape prior of fingers and the 3D information contained in 2D contactless images. Experiments show that the reconstructed 3D fingerprints are very close to those produced by bulky and expensive structured-light 3D imaging equipment.


Cui et al. (2023) proposed to reconstruct a 3D fingerprint from a contactless fingerprint image

The 3D fingerprint reconstruction algorithm uses a neural network to estimate the surface gradient, and from it the surface shape, from a single contactless fingerprint image. The core of the algorithm is the gradient estimation network shown in the figure below. The network takes the preprocessed image and mask as input and outputs the orientation field, period map, and gradient. The first part of the network is image normalization, which adjusts the brightness of the image; the second part is a feature extraction network for the orientation field and period map, with 3 convolution-and-pooling blocks extracting a feature map at 1/8 of the original size; the third part regresses the orientation field and the period map respectively; the fourth part finally regresses the gradient from the orientation field and period map.

Finger Surface Gradient Estimation Network proposed by Cui et al. (2023)
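Once the surface gradients (p = ∂z/∂x, q = ∂z/∂y) are predicted, the height map is recovered by integration. A common choice for this step is the Frankot-Chellappa Fourier-domain least-squares integrator, sketched below; whether Cui et al. (2023) use exactly this integrator is my assumption.

```python
# Frankot-Chellappa surface-from-gradient integration.
import numpy as np

def integrate_gradients(p, q):
    """p, q: H x W surface gradients; returns height map z up to a constant."""
    H, W = p.shape
    wx = np.fft.fftfreq(W) * 2 * np.pi
    wy = np.fft.fftfreq(H) * 2 * np.pi
    u, v = np.meshgrid(wx, wy)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                          # avoid division by zero at DC
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                              # fix the free constant offset
    return np.real(np.fft.ifft2(Z))
```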

3. Fingerprint Matching

Given the feature representations of two fingerprints, a fingerprint matching algorithm aligns the two, compares the features for consistency, and outputs a matching score (e.g., a score between 0 and 1, with 1 most similar). The impact of the feature representation on the design of the matching algorithm is fundamental. Two common representations are minutiae sets and fixed-length feature vectors. Most fingerprint matching algorithms must first solve the registration (alignment) problem.

Since the deformation of finger skin is elastic, rigid registration cannot remove it. Dense registration techniques can measure the pixel-level deformation field between fingerprint images and remove the deformation, which benefits various matching methods. However, dense registration and dewarping make matching against a large database very slow, whereas distortion self-rectification can remove the possible distortion of a single fingerprint image before matching, making it well suited to large-database comparison.

3.1 Matching Based on Minutiae

3.1.1 Deep Minutiae Descriptors

The minutia descriptor is a crucial component of minutiae matching. In the past, minutiae descriptors were designed empirically; well-designed descriptors (such as MCC) perform quite well in matching live-scan and inked fingerprints. In latent fingerprint matching, however, the performance of these descriptors degrades greatly, because minutiae are scarce and automatic minutiae extraction is unreliable. Cao and Jain (2019) proposed to extract minutiae descriptors with ConvNets.

The minutiae descriptor is learned from 14 image patches of different scales and positions (as shown in the figure below). For each patch location and scale around a minutia, one ConvNet is trained to produce a feature vector, and a subset of the 14 feature vectors output by the 14 ConvNets is concatenated into the minutiae descriptor.

Cao and Jain (2019) compute minutiae descriptors from multiple image patches of different scales and positions

The training minutiae patches are extracted from the Michigan State Police fingerprint database, which contains the ten fingerprints of 1,311 subjects with at least 10 rolled impressions per finger. Each distinct minutia is treated as a class, and only classes with more than 8 samples are kept. Each ConvNet is then trained as a multi-class classifier. At test time, the output of the last fully connected layer of each ConvNet is taken as the feature vector of the input patch.
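Assembling the descriptor at test time is then straightforward. Below is an illustrative PyTorch sketch; which of the 14 networks are kept in the subset, the per-branch normalization, and the cosine scoring are assumptions of mine.

```python
# Assembling a minutia descriptor from several patch networks (sketch).
import torch
import torch.nn.functional as F

def minutia_descriptor(nets, patches):
    """nets: list of trained ConvNets (one per patch scale/position);
    patches: list of matching 1 x C x h x w tensors cropped around the
    minutia and rotated so the minutia direction is canonical."""
    feats = [F.normalize(net(p).flatten(1), dim=1)
             for net, p in zip(nets, patches)]
    return torch.cat(feats, dim=1)     # concatenated fixed-length descriptor

# Descriptor similarity can then be a simple dot product:
# sim = (desc_a * desc_b).sum(dim=1)
```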

3.2 Matching Based on Fixed-Length Representations

Representing fingerprints as fixed-length vectors, as is done for faces and irises, is a very attractive idea. Compared with the traditional minutiae representation, a fixed-length representation has fundamental advantages in matching speed and template encryption. But this path is difficult: after FingerCode (Jain et al., 2000), there was no major progress in this direction for a long time. Thanks to deep learning techniques and large-scale training data, great progress has been made in recent years.

3.2.1 DeepPrint

DeepPrint, proposed by Engelsma et al. (2021), took research on fixed-length fingerprint representations a big step forward.

DeepPrint has three main modules. The first is the alignment module, which uses a spatial transformer network to align fingerprints into a common coordinate system. The aligned fingerprint image is fed to a base network, whose output feeds two branches. The first branch directly extracts a texture feature used for loss computation; this feature is highly correlated with ridge orientation and frequency. The second branch is a custom network that captures minutiae features, with two loss functions: a minutiae reconstruction loss and a classification loss. The authors use the Michigan State Police fingerprint database mentioned in Section 3.1.1 for network training.

DeepPrint fingerprint fixed-length representation method proposed by Engelsma et al. (2021)

The texture representation and the minutiae representation are both 96-dimensional, so the final fingerprint representation is their concatenation, a 192-dimensional feature vector. Before concatenation, the two representations are normalized to unit length to remove the effect of the norm. For matching, cosine distance is used to compute the similarity between two fingerprint representations. In the two examples below, the left pair is a genuine pair that a minutiae matching algorithm incorrectly rejected; the right pair is an impostor pair that the minutiae matcher wrongly accepted. DeepPrint's matching scores discriminate both examples correctly, indicating that DeepPrint is robust to wet fingers and skin distortion and learns discriminative features.

DeepPrint can correctly identify examples where minutiae matching algorithms fail (Engelsma et al., 2021)
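The matching arithmetic described above is tiny; here is a sketch that follows the shapes in the text (two 96-d branches, unit normalization, cosine scoring). The function names are mine.

```python
# DeepPrint-style representation assembly and cosine matching (sketch).
import torch
import torch.nn.functional as F

def deepprint_representation(texture_feat, minutiae_feat):
    """texture_feat, minutiae_feat: B x 96 branch outputs."""
    t = F.normalize(texture_feat, dim=1)
    m = F.normalize(minutiae_feat, dim=1)
    return torch.cat([t, m], dim=1)            # B x 192

def match_score(repr_a, repr_b):
    # Cosine similarity of the concatenated representations.
    return F.cosine_similarity(repr_a, repr_b, dim=1)
```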

3.2.2 Multi-Scale Fixed-Length Representation

Latent fingerprints collected from crime scenes are widely used to identify criminals. In practice, latent fingerprint identification usually requires comparing a query latent one by one against a large-scale database, which places high demands on the accuracy and efficiency of matching. For this purpose, a fast retrieval algorithm is usually combined with an accurate but slower matching algorithm: retrieval selects a short candidate list before matching, reducing the search space and time complexity while maintaining recognition accuracy.

Despite its importance, latent fingerprint retrieval has been studied much less than rolled and plain fingerprint retrieval. Because latents have small area, poor image quality, and hugely varying information content, existing rolled and plain fingerprint retrieval methods cannot easily be transferred to latents. For the latent retrieval problem, Gu et al. (2022) proposed a retrieval algorithm based on a multi-scale fixed-length representation. For large-scale database comparison, retrieval efficiently excludes most gallery fingerprints and filters out a short candidate list in advance, reducing the number of subsequent comparisons and improving the accuracy and efficiency of subsequent matching.


Latent fingerprint retrieval based on fixed-length representations (Gu et al., 2022)

Fingerprint retrieval based on fixed-length representations needs only a few arithmetic operations to compute the distance between two representations, which makes it well suited to fast large-scale comparison. However, previous fixed-length representations (such as DeepPrint) did not adequately consider fingerprint incompleteness and easily let background noise leak into the representation. Gu et al. (2022) proposed to extract deep features from image patches at different positions and scales, representing an incomplete latent by its local patches, weighting the importance of different patches, and computing fingerprint similarity only over the foreground area, thereby improving recognition performance on incomplete latents.


The multi-scale fixed-length fingerprint representation scheme of Gu et al. (2022)

The figure below shows cases from the Hisign and NIST SD27 latent databases. When the image quality is poor and the enhanced fingerprint becomes incomplete or even fragmented, or when the quality is high but a side impression leaves little fingerprint area near the center, DeepPrint, which extracts global features around the fingerprint center, may absorb background and perform worse than minutiae-based methods; this algorithm instead reliably extracts features from the fingerprint foreground and computes similarity over the overlapping area, effectively adapting to incomplete fingerprints. When the quality is poor and the minutiae contain many errors, minutiae-based retrieval degrades severely, while the fixed-length method, which extracts deep features from the enhanced image, can still handle such cases.

Results of four retrieval methods on 8 latent fingerprint retrieval cases (Gu et al., 2022)

Retrieval performance on the Hisign and NIST SD27 latent databases is shown in the figure below. The multi-scale fixed-length representation has a clear advantage: at the same penetration rate its error rate is significantly lower than that of the other methods, reflecting that the method is better suited to latents with a small effective area.


Retrieval performance curves on latent fingerprint databases (Gu et al., 2022)

3.2.3 Siamese Network

Lin and Kumar (2019) proposed a fixed-length representation for cross-modal matching between contact and contactless fingerprints. Probably because of the lack of training samples, the authors used Siamese networks. The framework consists of three sub-networks, each with two parameter-sharing branches (for contact and contactless fingerprints, respectively); the structure is shown below. The input of the first sub-network is the ridge enhancement map and the minutiae map; the input of the second is the region below the blurred core point; the input of the third is the core-point region. Each sub-network outputs a 1024-dimensional feature vector, and the three are concatenated directly into a fixed-length feature of length 3072. Before entering the network, the fingerprint is automatically cropped around its center position, enhanced, and its minutiae extracted. During training, positive and negative contact/contactless pairs are constructed and a contrastive loss function is used. For matching, the Euclidean distance between the fixed-length features of the two fingerprints is computed directly.

Fixed-length representation method for cross-modal fingerprint comparison proposed by Lin and Kumar (2019)

The sub-network structure for fixed-length feature extraction is shown in the figure below. The image passes through four convolutional layers, a max pooling layer, and a fully connected layer in sequence to obtain the feature vector, and the contact/contactless feature extraction branches share parameters within each sub-network. Sub-network 1 has two inputs, so the two feature maps are merged after the first convolutional layer and then fed into the next convolutional layer.

Lin and Kumar (2019) use Siamese networks to extract feature representations from fingerprints of different modalities
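The contrastive training objective mentioned above is standard and easy to write down; a minimal sketch follows, where the margin value is an illustrative assumption.

```python
# Contrastive loss for a Siamese contact/contactless pair (sketch).
import torch
import torch.nn.functional as F

def contrastive_loss(feat_contact, feat_contactless, is_genuine, margin=1.0):
    """feat_*: B x D fixed-length features; is_genuine: B floats in {0, 1}."""
    d = F.pairwise_distance(feat_contact, feat_contactless)
    loss_pos = is_genuine * d.pow(2)                         # pull genuine pairs
    loss_neg = (1 - is_genuine) * F.relu(margin - d).pow(2)  # push impostors
    return (loss_pos + loss_neg).mean()
```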

3.3 Fingerprint Rigid Registration

Traditional rigid fingerprint registration methods are mainly based on minutiae matching, orientation field matching, image correlation, and the like. For fingerprints of good quality and large area, traditional methods perform well. The cases they struggle with are low-quality fingerprints (especially latents) and small-area fingerprints (such as those from the small sensors in mobile phones).

3.3.1 Registration Based on Dense Sampling Points

The virtual minutiae proposed by Cao and Jain (2019) suffer from the unstable estimation of local ridge orientation that is common in latents. In addition, virtual minutiae are not salient features and cannot be located accurately. To overcome these shortcomings, Gu et al. (2021) proposed a latent fingerprint registration algorithm based on dense sampling points.

The flowchart of the algorithm is shown in the figure below. For a pair of images to be registered (a latent and a rolled fingerprint), dense sampling points are placed uniformly in the fingerprint area of each image, and the alignment parameters and similarity between sampling-point pairs in the two images are estimated by aligning and matching local image patches. From the pairwise similarities, candidate correspondences between the sampling points are obtained, and the final result is computed with a global matching method based on spectral clustering.

Flowchart of the latent fingerprint registration method of Gu et al. (2021)

The core of the whole algorithm is the alignment and matching of local image patches; this module's pipeline is shown in the figure below. Given a pair of local patches as input, the patch alignment network estimates the translation and rotation parameters between them. Deep descriptors are then extracted from the aligned patches, and the match is judged by descriptor similarity.

Local patch alignment and matching pipeline of Gu et al. (2021)

In addition, considering registration accuracy and time complexity, the authors propose a coarse-to-fine registration scheme. The two stages share the same procedure, but fine registration takes the coarse result as input; the figure above shows one stage of the process. In the coarse stage, the interval between sampling points is relatively large, and all sampling points on the two fingerprints are compared pair by pair to obtain candidate correspondences. In the fine stage, the sampling points are denser, but each point is compared only with its neighbors. Thus coarse registration performs more comparisons over fewer points, while fine registration uses dense points but needs fewer comparisons per point.

The authors conducted registration experiments on the NIST SD27 latent database. For each fingerprint pair, the corresponding minutiae pairs are known, and the position and direction differences between matched minutiae after registration are used as the evaluation metrics. The method outperforms the previously best-performing latent fingerprint registration algorithm.

The ultimate goal of registration is to improve matching performance. The authors conducted a matching experiment on NIST SD27: after a pair of fingerprints is registered, the similarities of local keypoint descriptors at corresponding points in the overlapping area are computed, and the mean of all descriptor similarities in the overlap is used as the matching score of the pair. The figure below shows the CMC curves of the matching results. The registration algorithm greatly improves matching performance over the previous algorithm, raising the rank-1 recognition rate on NIST SD27 from 61.6% to 70.1%.

Comparison of recognition performance on NIST SD27 using different latent fingerprint registration methods (Gu et al., 2021)

3.3.2 Registration Based on a Spatial Transformer Network

Plain fingerprints are widely used in civilian applications, such as authentication and interaction on mobile phones, smart watches, and other devices. For portability and cost, fingerprint scanners have been miniaturized, which limits the capture area of fingerprint images and greatly degrades traditional recognition methods. Matching small fingerprints is therefore becoming a new problem for smart portable devices.

He et al. (2022) proposed a small-fingerprint matching method based on a spatial transformer network (STN) and a local self-attention mechanism. The overall pipeline is shown in the figure below. The input pair of partial fingerprints is first enhanced and then fed to a relative pose estimation network (AlignNet), which predicts the relative rigid transformation parameters. One fingerprint is rigidly transformed according to the prediction, and the aligned enhanced fingerprints are finally fed to a comparison network (CompareNet) to obtain the recognition result.

The small fingerprint matching method proposed by He et al. (2022)

The structure of the relative pose estimation network is as follows. Input image 2 is rotated by 0°, 90°, 180°, and 270°, and each rotation is paired with input image 1 to bound the relative angle difference within each pair. The four fingerprint pairs are fed into a weight-sharing ResNet34 to extract features, which are merged and passed through a multi-layer perceptron to output the predicted rigid transformation parameters.


Relative pose estimation network proposed by He et al. (2022)
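The four-rotation pairing trick is easy to express in code. Below is an illustrative PyTorch sketch; the tensor layout fed to the shared backbone is my assumption.

```python
# Build the four rotation-paired inputs for a weight-sharing backbone.
import torch

def four_rotation_pairs(img1, img2):
    """img1, img2: B x 1 x H x W enhanced fingerprints (square images)."""
    pairs = [
        torch.cat([img1, torch.rot90(img2, k, dims=(2, 3))], dim=1)
        for k in range(4)                 # rotations of 0/90/180/270 degrees
    ]                                     # four B x 2 x H x W pair tensors
    return torch.stack(pairs, dim=1)      # B x 4 x 2 x H x W for the backbone
```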

The comparison network of He et al. (2022) is shown below. The two aligned fingerprint images are fed into an encoder network and the corresponding multi-layer perceptrons at three resolutions; the features are merged and passed through a multi-layer perceptron for multi-scale fusion, which finally outputs the classification result.

Comparison network proposed by He et al. (2022)

The authors collected partial fingerprints with a capacitive sensor and an under-display optical sensor, and conducted experiments on this dataset and on the public FVC2004 dataset. The method achieves better performance and is more robust to different sensor types.

3.4 Fingerprint Dense Registration

The accuracy of dense fingerprint registration methods is challenged by fingerprint self-similarity, noise, and distortion. Dense registration based on image correlation (Si et al., 2017) relies on image correlation coefficients and is vulnerable to these challenges; dense registration based on phase demodulation (Cui et al., 2018) is susceptible to distortion and noise, and its phase unwrapping is also limited by error accumulation.

Cui et al. (2021) applied deep learning to dense fingerprint registration for the first time, training an end-to-end network to directly estimate the deformation field from a fingerprint pair. The algorithm has two steps: initial registration based on minutiae and fine registration based on the network. The input fingerprints are first coarsely registered with a spatial transformation computed from the matched minutiae, and the dense deformation field for fine registration is then obtained from the network.

Fingerprint dense registration algorithm proposed by Cui et al. (2021)

The network structure follows optical flow estimation networks, consisting of two parallel feature extraction branches and an encoder-decoder. The network is trained end to end, taking two coarsely registered fingerprints as input and outputting the corresponding deformation field. To generate training data, the authors obtained distortion deformation fields by video tracking of the Tsinghua distorted fingerprint videos, then applied these fields to live-scan and low-quality fingerprints, producing a large number of fingerprint pairs as training data.

Fingerprint dense registration network proposed by Cui et al. (2021)
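Applying the predicted deformation field is the standard warping operation from optical flow; a sketch with bilinear sampling follows. This is the conventional way such fields are used, not necessarily Cui et al.'s exact code.

```python
# Warp one fingerprint onto the other with a dense displacement field.
import torch
import torch.nn.functional as F

def warp_with_field(img, flow):
    """img: B x 1 x H x W; flow: B x 2 x H x W pixel displacements (dx, dy)."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=img.dtype, device=img.device),
        torch.arange(W, dtype=img.dtype, device=img.device),
        indexing="ij",
    )
    x = xs + flow[:, 0]                        # displaced sampling columns
    y = ys + flow[:, 1]                        # displaced sampling rows
    grid = torch.stack(
        [2 * x / (W - 1) - 1, 2 * y / (H - 1) - 1], dim=-1
    )                                          # normalized to [-1, 1]
    return F.grid_sample(img, grid, align_corners=True)
```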

Registration and matching experiments on FVC2004, the Tsinghua Distorted Fingerprint database (TDF), and the NIST SD27 latent database show that the dense registration algorithm outperforms previous methods in both registration error and matching error. Thanks to the parallel computing power of the GPU, the algorithm is also much faster than previous serial registration algorithms.

3.5 Fingerprint Distortion Correction

Skin distortion is a long-standing challenge in fingerprint matching and leads to false non-matches. The study by Si et al. (2015) showed that the recognition rate can be improved by estimating the distortion field of a distorted fingerprint and then rectifying it into a normal fingerprint. A similar problem is correcting the perspective distortion of contactless fingerprints.

3.5.1 Skin Distortion Correction

3.5.1.1 Regressing Principal Component Coefficients of the Distortion Field

Dabouei et al. (2018b) were the first to use deep learning for fingerprint distortion correction. The pipeline is shown in the figure below. A deep fully convolutional network predicts the principal component coefficients of the input fingerprint's distortion; the coefficients are combined with the corresponding templates to obtain the predicted distortion field, and the fingerprint is rectified with thin-plate spline (TPS) interpolation. The authors conducted recognition experiments on the public distorted fingerprint database TDF and on FVC2004 DB1, which contains distorted fingerprints. The results show that network prediction of the distortion principal components outperforms the earlier nearest-neighbor prediction method.

The distortion correction method proposed by Dabouei et al. (2018b)
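The principal-component representation used here reduces to field = mean + Σᵢ cᵢ · basisᵢ; a sketch of the reconstruction follows, with illustrative array shapes.

```python
# Rebuild a dense distortion field from predicted PCA coefficients.
import numpy as np

def distortion_from_coeffs(coeffs, mean_field, basis):
    """coeffs: (K,) predicted PCA coefficients; mean_field: (H, W, 2) mean
    displacement; basis: (K, H, W, 2) principal distortion modes."""
    field = mean_field + np.tensordot(coeffs, basis, axes=1)
    return field   # dense (dx, dy) displacements, fed to TPS rectification
```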

3.5.1.2 Regressing a Dense Distortion Field

Guan et al. (2022) argue that previous distortion correction methods (Si et al., 2015; Gu et al., 2018; Dabouei et al., 2018b) rely on principal component representations of the distortion field: the limited principal component templates can only roughly approximate the deformation, are not accurate, and are very sensitive to finger pose, making it difficult to handle fingerprints with complex, multi-angle distortion.

The authors propose a deep network based on self-reference information that directly estimates and rectifies the dense distortion field of a distorted fingerprint. The scheme is end to end and does not require the fingerprint pose to be exactly correct, so it is robust to multi-pose fingerprints. It also uses dense estimation instead of the existing low-dimensional PCA representation, which is more expressive and estimates the distortion more accurately in detail. The network strengthens its use of neighborhood context through multi-scale dilated convolutions and channel attention modules, and the ground-truth deformation fields are normalized to remove the rigid transformation component and keep only the elastic distortion.

Fingerprint Distortion Correction Network proposed by Guan et al. (2022)

The authors collected 480 fingerprint distortion videos covering many different poses and distortion types. Experiments on this database and the public distorted fingerprint database TDF measure deformation estimation accuracy, matching performance, model complexity, and inference efficiency. The results show that the method outperforms existing PCA-based distortion correction algorithms.

Performance comparison of several fingerprint distortion correction algorithms (Guan et al., 2022)

3.5.2 Perspective Distortion Correction

Contactless fingerprinting has emerged as a convenient, inexpensive, and hygienic way to acquire fingerprint samples. However, cross-matching contactless and conventional contact fingerprints is challenging because of the elastic and perspective distortion between contact and contactless fingerprints.

Dabouei et al. (2019) proposed a perspective distortion correction scheme for contactless fingerprints that reduces the distortion by combining a ridge correction network and a ridge enhancement network, and does not require ground-truth values of the distortion parameters. The pipeline is shown in the figure below. For an input contactless fingerprint, a simple fully convolutional network estimates the displacement vectors of its grid sampling points, and the image is rectified with TPS; a U-Net model then produces the distortion-corrected binary ridge map of the contactless fingerprint. During training, a distortion assessment score map S is generated from the variation of the predicted distortion, and S is used as a weight in the cross-entropy between the corrected and enhanced contactless fingerprint y and the previously aligned undistorted contact fingerprint y*. Experiments show that, compared with the original image, the model recovers richer details from contactless fingerprints, greatly improving contact/contactless matching performance.

Perspective distortion correction scheme for contactless fingerprints proposed by Dabouei et al. (2019)

4. Fingerprint Synthesis

In the era of deep learning, large-scale training data is critical to model performance. However, collecting a large-scale fingerprint database is very expensive and raises privacy issues. Compared with face recognition, the public fingerprint databases are far too small. The largest publicly available fingerprint database was NIST SD14 (withdrawn by NIST a few years ago), which contains only 27,000 different fingers (with only 2 images per finger). Fingerprint image synthesis technology is therefore very valuable.

SFinGe, proposed by Cappelli et al., is a classic fingerprint synthesis technique (Cappelli et al., 2000; Cappelli, 2022). The authors carefully designed a complete synthesis pipeline that first generates a master fingerprint and then synthesizes its various impressions. They considered many factors, including a mathematical model of each fingerprint feature, the inter-class variation of the features, and the intra-class variation in real images. But however rich the designers' experience and however clever the modeling, it is always difficult to capture the regularities of real samples exactly.

In recent years, researchers have proposed several fingerprint synthesis techniques based on generative adversarial networks (GANs).

4.1 PrintsGAN

PrintsGAN, the fingerprint synthesis method proposed by Engelsma et al. (2023), operates in two stages. The first stage generates a master fingerprint (a binary fingerprint image at 250 ppi). The master fingerprint is then passed to a nonlinear deformation and cropping module that simulates fingers pressing on the sensor at different angles and pressures. Finally, the deformed and cropped master print is passed to the second stage, which renders realistic texture details at 500 ppi. By sampling different identity, deformation, and texture noise vectors, PrintsGAN can generate a large number of distinct master fingerprints and multiple impressions of each. In this way PrintsGAN models the inter-class and intra-class variation of a large fingerprint database and synthesizes large amounts of realistic fingerprint data, which is used to train a deep network to extract a fixed-length representation suitable for matching.

PrintsGAN fingerprint synthesis method proposed by Engelsma et al. (2023)

The authors synthesized a database of 525,000 fingerprint images (35,000 different fingers with 15 impressions per finger), and compared two ways of training the DeepPrint fixed-length feature extraction network: (1) pretraining on the synthetic fingerprints and fine-tuning on a smaller real database (25,000 images from NIST SD302); (2) training on the real database only. On NIST SD4, DeepPrint trained the first way achieves TAR = 87.03% at FAR = 0.01%, while the second way reaches only TAR = 73.37%. However, the authors do not report the performance of training DeepPrint directly on the real data used to train PrintsGAN.

5. Fake Fingerprint Detection

With the spread of fingerprint recognition technology, cases of deceiving fingerprint recognition systems with fake fingerprints have become more and more common. A variety of materials can be used to create fake fingerprints that fool many types of fingerprint sensors and systems. In recent years, fake fingerprint detection (also called fingerprint liveness detection or presentation attack detection) has become a hot research direction in the fingerprint field. Software-based detection methods have received particular attention, because they require no additional hardware and can be improved through software updates.


Images of fake fingerprints made from various materials, captured by an optical fingerprint scanner (Chugh and Jain, 2021)

Chugh et al. (2018) observe that fabricating a fake fingerprint usually introduces defects such as missing ridges, cracks, and bubbles, which in turn produce spurious minutiae. The regions around these spurious minutiae provide salient cues for distinguishing real fingerprints from fake ones. The authors therefore propose a binary classifier over minutiae-centered fingerprint patches. First, a minutiae extraction algorithm estimates the position and direction of each minutia; a patch is then cropped around each minutia and rotated to a canonical pose according to the minutia direction. Each aligned patch is fed into a MobileNet-v1 network for binary classification, which outputs a per-patch spoofness score. Finally, the scores of all patches are fused into a single spoofness score for the whole fingerprint. The method outperformed existing approaches on several public datasets.

The fake fingerprint detection method of Chugh et al. (2018)
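A minimal sketch of the minutiae-centered patch pipeline: crop around each minutia, rotate to a canonical pose, score each patch with a CNN, and average the scores. The patch size, the `cnn` callable, and the border handling here are assumptions, not the authors' exact settings.

```python
# Sketch of minutiae-centered patch scoring in the spirit of Chugh et al.
# (2018); minutiae extractor, patch size, and CNN are placeholders.
import numpy as np
import cv2

def aligned_patch(img, x, y, angle, size=96):
    """Crop a patch centered on a minutia, rotated to a canonical pose."""
    # Rotate the image about the minutia so its direction becomes horizontal.
    rot = cv2.getRotationMatrix2D((float(x), float(y)), np.degrees(angle), 1.0)
    rotated = cv2.warpAffine(img, rot, (img.shape[1], img.shape[0]))
    half = size // 2
    return rotated[int(y) - half:int(y) + half, int(x) - half:int(x) + half]

def spoof_score(img, minutiae, cnn):
    """Fuse per-patch spoofness scores into one score for the whole print."""
    scores = []
    for (x, y, angle) in minutiae:        # minutia position and direction (radians)
        patch = aligned_patch(img, x, y, angle)
        if patch.shape[:2] == (96, 96):   # skip minutiae too close to the border
            scores.append(cnn(patch))     # cnn returns P(spoof) for one patch
    return float(np.mean(scores)) if scores else 0.5  # 0.5 = no decision
```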

Fake fingerprints made with different methods and materials, and real fingerprints captured by different sensors, often differ in image style. The performance of deep-learning-based fake fingerprint detectors therefore depends heavily on the styles of the real and fake fingerprints seen during training, and detection accuracy drops sharply on spoof materials that never appeared in the training set. To address this, Chugh and Jain (2021) proposed a fingerprint style transfer module (a universal material generator) that augments the training data and improves the generalization of the detector. As shown in the figure below, to train the style transfer module, two fake fingerprints of known materials are first fed into a network consisting of an encoder, a style transfer block, and a decoder, which generates a new fake fingerprint. A content loss and a style loss are then computed with the same encoder, and an adversarial loss is computed with a DCGAN-style discriminator. Once trained, the module takes randomly selected pairs of known-material fake fingerprints and synthesizes new fake fingerprints; a counterpart module trained on real fingerprints likewise synthesizes new real fingerprints. Detectors trained on the augmented data improve, to varying degrees, on a range of public databases.

The fake fingerprint detection method of Chugh and Jain (2021)
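The content and style terms follow the usual neural style-transfer recipe: feature distance for content, Gram-matrix distance for style. A minimal sketch, assuming a shared encoder that returns a single feature map; the exact layers and loss weights in the paper differ.

```python
# Sketch of content/style losses for style-transfer-based augmentation, in
# the spirit of Chugh and Jain (2021); encoder and weights are assumptions.
import torch

def gram(feat):
    """Gram matrix of a feature map: captures texture 'style'."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer_losses(encoder, generated, content_src, style_src):
    g = encoder(generated)
    c = encoder(content_src)
    s = encoder(style_src)
    content_loss = torch.nn.functional.mse_loss(g, c)            # keep ridge content
    style_loss = torch.nn.functional.mse_loss(gram(g), gram(s))  # match material style
    return content_loss, style_loss

# Total generator objective (adversarial term from a DCGAN-style discriminator):
# loss = content_loss + lambda_s * style_loss + lambda_a * adversarial_loss
```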

References

  1. Cao, K., & Jain, A. K. (2015). Latent orientation field estimation via convolutional neural network. In 2015 International Conference on Biometrics (ICB) (pp. 349-356). IEEE.
  2. Cao, K., & Jain, A. K. (2019). Automated latent fingerprint recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(4), 788-800.
  3. Cappelli, R. (2022). Fingerprint synthesis. In Handbook of Fingerprint Recognition (pp. 385-426). Springer, Cham.
  4. Cappelli, R., Erol, A., Maio, D., & Maltoni, D. (2000). Synthetic fingerprint-image generation. In Proceedings 15th International Conference on Pattern Recognition (ICPR-2000) (Vol. 3, pp. 471-474).
  5. Chugh, T., Cao, K., & Jain, A. K. (2018). Fingerprint spoof buster: Use of minutiae-centered patches. IEEE Transactions on Information Forensics and Security, 13(9), 2190-2202.
  6. Chugh, T., & Jain, A. K. (2021). Fingerprint spoof detector generalization. IEEE Transactions on Information Forensics and Security, 16(1), 42-55.
  7. Cui, Z., Feng, J., Li, S., Lu, J., & Zhou, J. (2018). 2-D phase demodulation for deformable fingerprint registration. IEEE Transactions on Information Forensics and Security, 13(12), 3153-3165.
  8. Cui, Z., Feng, J., & Zhou, J. (2021). Dense registration and mosaicking of fingerprints by training an end-to-end network. IEEE Transactions on Information Forensics and Security, 16, 627-642.
  9. Cui, Z., Feng, J., & Zhou, J. (2023). Monocular 3D fingerprint reconstruction and unwarping. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  10. Dabouei, A., Kazemi, H., Iranmanesh, S. M., Dawson, J., & Nasrabadi, N. M. (2018a). ID preserving generative adversarial network for partial latent fingerprint reconstruction. In 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS) (pp. 1-10).
  11. Dabouei, A., Kazemi, H., Iranmanesh, S. M., Dawson, J., & Nasrabadi, N. M. (2018b). Fingerprint distortion rectification using deep convolutional neural networks. In 2018 International Conference on Biometrics (ICB).
  12. Dabouei, A., Soleymani, S., Dawson, J., & Nasrabadi, N. M. (2019). Deep contactless fingerprint unwarping. In 2019 International Conference on Biometrics (ICB) (pp. 1-8).
  13. Duan, Y., Feng, J., Lu, J., & Zhou, J. (2021). Orientation field estimation for latent fingerprints with prior knowledge of fingerprint pattern. In 2021 IEEE International Joint Conference on Biometrics (IJCB) (pp. 1-8).
  14. Duan, Y., Feng, J., Lu, J., & Zhou, J. (2023). Estimating fingerprint pose via dense voting. IEEE Transactions on Information Forensics and Security.
  15. Engelsma, J. J., Cao, K., & Jain, A. K. (2021). Learning a fixed-length fingerprint representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6), 1981-1997.
  16. Engelsma, J. J., Grosz, S. A., & Jain, A. K. (2023). PrintsGAN: Synthetic fingerprint generator. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  17. Feng, J., & Jain, A. K. (2011). Fingerprint reconstruction: From minutiae to phase. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(2), 209-223.
  18. Feng, J., Zhou, J., & Jain, A. K. (2013). Orientation field estimation for latent fingerprint enhancement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(4), 925-940.
  19. Gu, S., Feng, J., Lu, J., & Zhou, J. (2018). Efficient rectification of distorted fingerprints. IEEE Transactions on Information Forensics and Security, 13(1), 156-169.
  20. Gu, S., Feng, J., Lu, J., & Zhou, J. (2021). Latent fingerprint registration via matching densely sampled points. IEEE Transactions on Information Forensics and Security, 16, 1231-1244.
  21. Gu, S., Feng, J., Lu, J., & Zhou, J. (2022). Latent fingerprint indexing: Robust representation and adaptive candidate list. IEEE Transactions on Information Forensics and Security, 17, 908-923.
  22. Guan, X., Duan, Y., Feng, J., & Zhou, J. (2022). Direct regression of distortion field from a single fingerprint image. In 2022 IEEE International Joint Conference on Biometrics (IJCB).
  23. He, Z., Zhang, J., Pang, L., & Liu, E. (2022). PFVNet: A partial fingerprint verification network learned from large fingerprint matching. IEEE Transactions on Information Forensics and Security, 17, 3706-3719.
  24. Jain, A. K., Prabhakar, S., Hong, L., & Pankanti, S. (2000). Filterbank-based fingerprint matching. IEEE Transactions on Image Processing, 9(5), 846-859.
  25. Kumar, A. (2018). Contactless 3D fingerprint identification. Springer.
  26. Lin, C., & Kumar, A. (2019). A CNN-based framework for comparison of contactless to contact-based fingerprints. IEEE Transactions on Information Forensics and Security, 14(3), 662-676.
  27. Ouyang, J., Feng, J., Lu, J., Guo, Z., & Zhou, J. (2017). Fingerprint pose estimation based on faster R-CNN. In 2017 IEEE International Joint Conference on Biometrics (IJCB) (pp. 268-276).
  28. Si, X., Feng, J., Zhou, J., & Luo, Y. (2015). Detection and rectification of distorted fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), 555-568.
  29. Si, X., Feng, J., Yuan, B., & Zhou, J. (2017). Dense registration of fingerprints. Pattern Recognition, 63, 87-101.
  30. Su, Y., Feng, J., & Zhou, J. (2016). Fingerprint indexing with pose constraint. Pattern Recognition, 54, 1-13.
  31. Tang, Y., Gao, F., Feng, J., & Liu, Y. (2017). FingerNet: An unified deep network for fingerprint minutiae extraction. In 2017 IEEE International Joint Conference on Biometrics (IJCB) (pp. 108-116).
  32. Yang, X., Feng, J., & Zhou, J. (2014). Localized dictionaries based orientation field estimation for latent fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 955-969.
  33. Yin, Q., Feng, J., Lu, J., & Zhou, J. (2021). Joint estimation of pose and singular points of fingerprints. IEEE Transactions on Information Forensics and Security, 16, 1467-1479.
