Beware of quietly being made a company's "legal representative"! Major security vulnerabilities exposed in multiple apps

Table of contents

Multiple government apps have security vulnerabilities

Analysis of Face Recognition Risks

Securing facial recognition applications


Ms. Zhang had never been to Zhuzhou, Hunan, yet she discovered that an individually owned business had been registered in her name, with its place of business thousands of miles away. After she reported it to the police and filed a complaint, she was told that the business had been registered online with real-name verification and that the registration was therefore legal and compliant. This is not an isolated case: a man in Heilongjiang became, for no apparent reason, the sole shareholder of a company in Hebei and was held liable for its 6.41 million yuan in tax arrears; a teacher in Chongqing found he had been made the legal representative of a company in Shandong and, as a result, was placed on the list of dishonest persons.

In late June, an investigation by the Beijing News found that illegal intermediaries were unlawfully obtaining identity information and facial photos and using AI face-swapping technology to crack the relevant government apps. These activities allow criminals to use other people's identity information to register companies or to replace the shareholders of companies that already exist.



Multiple government apps have security vulnerabilities

Since March 1, 2019, market supervision and administration departments in many parts of the country have implemented a "real-name system" for the identity information used in company registration: when establishing, changing, or deregistering a company, the parties must not only fill in their identity information but also complete "facial recognition" authentication through the company registration app.

The Beijing News investigation showed that although changing a company's shareholders requires all shareholders to scan a QR code and complete electronic signatures, illegal intermediaries "cheated" the authentication on government service websites by cracking facial recognition with AI technology, allowing a company's shareholders to be secretly replaced.

The facial recognition functions of many government apps are easily cracked. Illegal intermediaries use AI technology to extract facial information from a static picture and make the subject appear to blink, nod, and perform other actions, passing the real-name authentication of one such app in only three minutes. The investigation also found criminals selling "file checking" services: given only a name and ID number, they can obtain the person's household registration information, including ID photo, gender, ethnicity, and the city and county of household registration.

While registering an individually owned business in one city, a Beijing News reporter gave an illegal intermediary nothing more than a person's name and ID number. The intermediary scanned a code with the province's handheld registration app, which jumped to an electronic platform where the person's ID card photo, already submitted to the platform, could be found.

Illegal intermediaries claim they can crack not only multiple provincial enterprise registration apps but also the facial recognition of some provincial tax apps. Companies set up with other people's information are often used for illegal activities such as issuing false invoices, money laundering, and fraud. Both faces and fingerprints can be copied, so facial recognition is not a particularly secure verification method: AI technology can make the person in a photo perform actions and fool facial recognition systems that rely on liveness detection.



Analysis of Face Recognition Risks

Because the technical capabilities and management levels of the parties deploying facial recognition technology are uneven, criminals even develop cheating tools to crack, interfere with, and attack the applications and algorithms behind facial recognition, leading to identity theft, fraud, financial losses, and even threats to personal safety.

According to the "Face Recognition Security White Paper" released by Dingxiang in 2022, the facial recognition risks in the above government apps stem mainly from inaccurate facial recognition algorithms and insecure facial recognition systems.

Inaccurate facial recognition algorithms

Attackers wear glasses, hats, or masks, make high-fidelity masks or 3D models from 2D facial photos, or use AI technology to turn static photos into dynamic ones, deceiving facial recognition and liveness detection algorithms (a minimal sketch after the list below illustrates why a matching step without liveness checks is vulnerable to all of these).

Fake faces. Using still photos, playing pre-recorded videos, or using image processing or 3D modeling software to turn photos into dynamic videos in order to confuse facial recognition.

Face modification. Wearing glasses, hats, masks, or other disguises, or making high-fidelity masks, 3D models of 2D face photos, "activated" photos, and so on, to deceive facial recognition detection.

AI face swapping. AI algorithms replace the face of the person in a video with someone else's face, or turn an ordinary static photo into a vividly expressive face that can even be mapped onto another person's face and move automatically with that person's motions and expressions.
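To see why these spoofs succeed, here is a minimal, hypothetical sketch of a verification step that only compares face embeddings against the ID photo; it is not the code of any of the affected apps. Any printed photo, replayed video, or AI face swap that reproduces the victim's face satisfies it, because nothing checks that a live person is in front of the camera. It assumes the open-source face_recognition Python library; the 0.6 threshold is that library's conventional default.

```python
# A naive verification step: match the submitted camera frame against the ID photo
# and nothing else. Photos, replayed videos, and AI face swaps all pass, because
# no liveness check is performed. Assumes the open-source `face_recognition` library.
import face_recognition

def naive_verify(id_photo_path: str, submitted_frame_path: str, threshold: float = 0.6) -> bool:
    id_image = face_recognition.load_image_file(id_photo_path)
    frame = face_recognition.load_image_file(submitted_frame_path)

    id_encodings = face_recognition.face_encodings(id_image)
    frame_encodings = face_recognition.face_encodings(frame)
    if not id_encodings or not frame_encodings:
        return False  # no face detected in one of the images

    # Distance between 128-dimensional embeddings; smaller means "same person".
    distance = face_recognition.face_distance([id_encodings[0]], frame_encodings[0])[0]
    return distance < threshold
```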

Insecure facial recognition systems

Attackers crack facial recognition applications or their protections, tamper with the verification process and communication messages, hijack the objects being accessed, modify the software flow, and replace real data with fake data to deceive facial recognition verification (a sketch of a common countermeasure follows the list below).

The system is hacked. Criminals crack the code of the facial recognition system or application, tamper with its logic, or inject attack scripts to change its execution flow, so that the system accesses data and returns results along the path set by the attacker.

The device is hijacked. Attackers break into the facial recognition device or plant a backdoor on it, hijack the camera, or flash a specific program onto the device to hijack the facial recognition app or application, bypassing face verification.

Communications are tampered with. Attackers crack or break into the facial recognition system or device, hijack the messages exchanged between the facial recognition system and the server, tamper with the facial information, or replace real information with false information.
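A common countermeasure to replayed or tampered messages is to bind every verification to a server-issued, single-use challenge and to authenticate the returned result. The sketch below is an assumption for illustration, not the design of any specific government app: it uses a shared HMAC key and an in-memory nonce store.

```python
# Bind each verification to a single-use, server-issued nonce and sign the result,
# so that replayed or altered messages are rejected. Key and nonce store are
# illustrative stand-ins for a real key-provisioning scheme and server storage.
import hashlib
import hmac
import os
import time

SECRET_KEY = os.urandom(32)            # provisioned shared key; illustrative only
_issued_nonces: dict[str, float] = {}  # nonce -> issue time (in-memory for the sketch)

def issue_nonce() -> str:
    """Server issues a single-use challenge before each verification."""
    nonce = os.urandom(16).hex()
    _issued_nonces[nonce] = time.time()
    return nonce

def sign_result(nonce: str, passed: bool) -> str:
    """Client signs (nonce, result) so the server can detect tampering."""
    message = f"{nonce}:{int(passed)}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_result(nonce: str, passed: bool, signature: str, max_age: float = 60.0) -> bool:
    """Reject replays (nonce reuse or expiry) and any tampered result."""
    issued_at = _issued_nonces.pop(nonce, None)  # single use: a replayed nonce fails
    if issued_at is None or time.time() - issued_at > max_age:
        return False
    expected = sign_result(nonce, passed)
    return hmac.compare_digest(expected, signature)
```

A hijacked client that replays an old "pass" message fails because the nonce has already been consumed or expired, and one that flips the result without the key fails the HMAC comparison.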



Securing facial recognition applications

In early August, the Cyberspace Administration of China issued a notice soliciting public opinions on the "Regulations on Security Management of Facial Recognition Technology Applications (Trial) (Draft for Comment)". The draft makes specific provisions on the collection and use of facial information and on the security of facial recognition technology: facial recognition technology services must meet the requirements of network security protection level 3 or above, and the relevant equipment must undergo risk detection and assessment every year.

Based on the "Regulations on Security Management of Facial Recognition Technology Applications (Trial) (Draft for Comment)" and on the facial recognition risks in government apps, Dingxiang suggests improving the accuracy of facial recognition in government apps and strengthening the security of the facial recognition system.

Improve facial recognition accuracy. Texture-based methods analyze microscopic texture patterns in face image samples to better distinguish photos from real people; calculating the Fourier spectrum of the hair rather than the face improves the accuracy of face video detection. Further measures include adding live lip-reading detection; checking image texture, lighting, background, and screen reflections; and using a dedicated infrared camera to capture the three-dimensional structure of the face.
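As a concrete illustration of the texture-based idea, here is a minimal sketch (an assumption for illustration, not Dingxiang's implementation) that extracts a local binary pattern (LBP) histogram from a grayscale face crop; printed photos and screen replays tend to show flatter, more regular micro-texture than live skin, so such histograms are normally fed to a classifier trained on genuine versus spoofed samples. It assumes scikit-image and NumPy.

```python
# Texture-based liveness feature: a uniform LBP histogram of the face crop.
# The histogram itself is not a decision; it is the input to a classifier
# trained on labelled live/spoof samples. Assumes scikit-image and NumPy.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Return a normalized uniform-LBP histogram for a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns 0..points plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```

The histogram alone is only a feature vector; the accept/reject decision would come from the trained model, typically combined with the other cues listed above (lighting, screen reflection, infrared depth).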

Strengthen facial recognition system security. Apply code obfuscation, encrypted packaging, and permission controls to facial recognition applications, apps, and clients, and run terminal environment security checks; obfuscate and encrypt data in transit to prevent eavesdropping, tampering, and fraudulent use; use a risk-control decision engine to comprehensively inspect the device environment and detect injection, repackaging, hijacking, and other risks and abnormal operations in real time; and build a dedicated risk-control model to provide strategic support for discovering potential risks and unknown threats and for ensuring face recognition security.
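To make the role of the risk-control decision engine concrete, here is a minimal, hypothetical rule-based sketch of how device-environment signals might be combined into a decision before a face verification result is accepted; the signal names, weights, and thresholds are invented for illustration and are not Dingxiang's actual model.

```python
# Hypothetical device-environment signals and a rule-based decision.
# Field names, weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class DeviceReport:
    is_rooted: bool           # rooted / jailbroken device
    app_repackaged: bool      # app signature differs from the official build
    hooking_detected: bool    # injection or hooking framework present
    virtual_camera: bool      # camera feed supplied by software, not hardware

def risk_score(report: DeviceReport) -> int:
    score = 0
    if report.virtual_camera:
        score += 40           # strongest signal of camera hijacking
    if report.app_repackaged:
        score += 30
    if report.hooking_detected:
        score += 20
    if report.is_rooted:
        score += 10
    return score

def decide(report: DeviceReport, reject_at: int = 50, step_up_at: int = 20) -> str:
    score = risk_score(report)
    if score >= reject_at:
        return "reject"       # refuse the face verification result outright
    if score >= step_up_at:
        return "step-up"      # require an additional verification factor
    return "accept"
```

In practice such engines use many more signals and learned models rather than fixed weights; the point is that a verification result coming from a high-risk environment is rejected or stepped up even if the face itself matches.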

The Dingxiang business security perception and defense platform is built on advanced technologies such as threat probes, stream computing, and machine learning. It is an active security defense platform that integrates device risk analysis, operational attack identification, abnormal behavior detection, early warning, and protection and disposal, and it can detect camera hijacking, device forgery, and other malicious behaviors in real time, effectively preventing and controlling various facial recognition system risks.


Origin blog.csdn.net/dingxiangtech/article/details/132453785