Discussion on Personal Information Security in the Big Data Era (2): Facing the "Face"

The Internet is like a road: users leave footprints wherever they go.

Everyone generates data all the time; while consuming data, we are also being consumed by it.

Recently, news that a college graduate stole data from his school's intranet and compiled the personal information of every student on campus renewed public attention to personal information security in the big data era. In this era, recommendation algorithms and AIGC pose new challenges to personal information security.

1. Giving every face a role to play

In 2017, an anonymous user going by "Deepfakes" posted program code on an online forum, shared a tutorial on making face-swapping videos, and attached the deep learning code and related datasets. Since then, simple, easy-to-use face-swapping has spread: an ordinary person needs only a computer with an Nvidia GPU and some data to produce a fake face-swap video.

"Deepfakes" (a blend of "deep learning" and "fake") is built on generative adversarial network (GAN) technology, which can forge not only faces but also voices.
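The adversarial idea behind a GAN can be shown without any deep learning at all. The toy below is a minimal sketch, not a face or voice model: the "generator" is a single shifted Gaussian whose mean `mu` it learns, and the "discriminator" is a logistic classifier. All hyperparameters are illustrative choices. The two networks play the same game as in a deepfake system — the discriminator learns to tell real samples from fakes, and the generator learns to fool it.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

REAL_MEAN, BATCH, LR = 4.0, 32, 0.05
w, b = 0.0, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
mu = 0.0          # generator parameter: G(z) = mu + z, z ~ N(0, 1)

for step in range(4000):
    real = [random.gauss(REAL_MEAN, 1.0) for _ in range(BATCH)]
    fake = [mu + random.gauss(0.0, 1.0) for _ in range(BATCH)]

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    gw = gb = 0.0
    for x in real:
        d = sigmoid(w * x + b)
        gw += (1.0 - d) * x
        gb += (1.0 - d)
    for x in fake:
        d = sigmoid(w * x + b)
        gw -= d * x
        gb -= d
    w += LR * gw / BATCH
    b += LR * gb / BATCH

    # Generator step (non-saturating loss): gradient ascent on log D(fake).
    gmu = 0.0
    for x in fake:
        d = sigmoid(w * x + b)
        gmu += (1.0 - d) * w
    mu += LR * gmu / BATCH

print(round(mu, 2))  # generator mean drifts from 0 toward the real mean of 4
```

Real deepfake generators replace the one-parameter Gaussian with deep convolutional networks over pixels, but the training loop has exactly this shape.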

In 2018, a fake video of former U.S. President Barack Obama calling Trump an idiot went viral on Twitter. The creator took a video of an Obama speech and grafted a synthesized mouth onto Obama's face, making the fake Obama say: "President Trump is a total and complete dipshit."


A Bilibili uploader called "Face Changer" used AI to swap Yang Mi's face onto "Huang Rong" in a TV drama. The resulting "Yang Mi version of Huang Rong" moves and emotes so smoothly that no flaw can be seen; precisely because it was so realistic, it quickly caused an uproar online. The same uploader had reportedly released several earlier videos swapping the faces of film and TV characters, including the Mother of Dragons in Game of Thrones, Rose in Titanic, Liu Piaopiao in The King of Comedy, Lara Croft in Tomb Raider, and so on, with every face replaced by Yang Mi's.

In 2019, an app called "ZAO" (from the Chinese zào, "to make") was born and set off a wave of face-swapping on social networks. It promised to let you "make popular memes with your own face", "star in classic movies with your face", "act alongside your idols", and "swap faces with friends", while stating clearly that "only face-swaps from real portrait photos taken on the spot are supported" and that "portrait photos from the Internet or public sources may not be used".


2. Biometric information security

2.1 Biometric information is collected in bulk

After "ZAO" and similar face-swapping apps set the Internet alight, the public began to scrutinize the text of ZAO's User Agreement and Privacy Policy, especially the clauses touching on legal issues such as personal information protection and intellectual property.

As the agreement shows, the "ZAO" face-swapping service collected a large amount of user face data, and:

  • a user's portrait could be licensed to third parties at will, with the scope of that license entirely at "ZAO"'s discretion;
  • a user's portrait could be modified and face-swapped arbitrarily, and redistributed on the network;
  • any revenue earned as the reworked portrait spread did not belong to the user;
  • the license was free of charge, permanent, and irrevocable.

On May 4th Youth Day one year, an app offered several Republic-of-China-era avatar templates, and many people posted their "youth photos from a past life" to their WeChat Moments. The app collected more than 80 million user photos in a short time; the service provider also harvested the time and location information embedded in the photos, and even read users' address books, text messages, and call logs.

Many online quiz apps and mini-programs also collect user photos, and some even request high-level permissions such as the address book, call logs, text messages, photo library, and camera.

2.2 Biometric information is illegally misused

Static and dynamic face information (which ZAO collects from its users), fingerprints, voiceprints, and irises are all biometric information, and all count as sensitive personal information. Biometric information is sensitive precisely because it naturally and genuinely identifies an individual and cannot be changed for life. In a sense, these biometrics truly define each of us.

Once biometric information leaks, the loss is total and there is no channel for remedy. A leaked email password or phone number can still be changed, but a face, voiceprint, or iris cannot; the individual is exposed to the risk of attack and harassment for virtually a lifetime.
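The difference between a replaceable secret and an unchangeable one can be made concrete. The sketch below (an illustration, not any particular system's design) shows the normal recovery path for a leaked password: rotate the secret and re-salt, and the stolen hash becomes worthless. A face, fingerprint, or iris has no such rotation step — the underlying "value" is fixed for life.

```python
import hashlib
import os

def hash_credential(secret: bytes, salt: bytes) -> bytes:
    # PBKDF2 with SHA-256; the iteration count is an illustrative choice.
    return hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)

# Normal life cycle of a password: enroll it as a salted hash.
salt = os.urandom(16)
stored = hash_credential(b"old-password", salt)

# Breach: an attacker copies `stored`. Recovery: pick a new secret, new salt.
salt = os.urandom(16)
stored = hash_credential(b"new-password", salt)

# The stolen credential no longer matches anything the server accepts.
assert stored != hash_credential(b"old-password", salt)

# There is no equivalent of this rotation for a leaked face or iris:
# the biometric itself cannot be re-issued, so the exposure is permanent.
```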

In April 2023, police in Baotou, Inner Mongolia reported a case of AI-assisted fraud: Mr. Guo, the legal representative of a company in Fuzhou, was defrauded of 4.3 million yuan within 10 minutes. According to the report, the scammers used AI face-swapping and voice-imitation technology to impersonate an acquaintance.

Biometric technologies such as fingerprint payment, face-recognition unlocking, and voice unlocking have penetrated our daily lives. The biggest danger of face-swapping and voice-changing apps is that biometric information collected at scale may leak and be misused, enabling all kinds of fraud and forgery. As AI iterates ever faster, the threshold for synthesis keeps dropping and the risk of fraud keeps accumulating, which demands a high degree of vigilance.
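Why does a convincing synthetic face threaten face unlock? At its core, recognition compares a feature embedding of the camera frame against an enrolled template and accepts when similarity exceeds a threshold. The sketch below uses made-up four-dimensional vectors and a made-up threshold; real systems use learned embeddings with hundreds of dimensions plus liveness checks. The point is that anything — live person or deepfake — that reproduces the victim's embedding closely enough clears the same bar.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

THRESHOLD = 0.8                       # illustrative acceptance threshold
enrolled = [0.9, 0.1, 0.4, 0.3]       # template captured at enrollment
live     = [0.88, 0.12, 0.41, 0.28]   # same person, slightly different frame
stranger = [0.1, 0.9, 0.2, 0.7]       # a different person

assert cosine_similarity(enrolled, live) > THRESHOLD      # accepted
assert cosine_similarity(enrolled, stranger) < THRESHOLD  # rejected
```

A deepfake built from leaked face data aims to land in the "accepted" region, which is exactly why bulk leakage of biometric data is so dangerous.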

At present, AI fraud relies mainly on voice synthesis and AI face swapping.

Type 1: voice synthesis

Scammers extract a person's voice from harassing phone calls and other recordings, synthesize speech from the captured material, and then use the fake voice to deceive the target, who believes what they hear.

Type 2: AI face swapping

Scammers first mine the information people publish online and screen target groups according to the scam they plan to run. They then use AI face-swapping to disguise themselves and "confirm" their identity over video, winning the victim's trust.


3. Governments strengthen supervision of deep synthesis

There is a stark asymmetry in how biometric information is collected and used: the individual may get nothing more than an entertaining video clip, while the collector walks away with that person's most fundamental identifying information. Face data in particular is public-facing and easy to harvest. Internationally, the trend toward tighter supervision is clear: many countries are legislating to constrain both the collection of face data and the scenarios in which recognition technology may be applied.

In June 2019, lawmakers from both parties introduced the "Deepfakes Report Act of 2019" simultaneously in the U.S. House and Senate. The bill defines "digital content forgery" and requires the Department of Homeland Security to publish regular reports on deepfake technology. It designates DHS as the coordinating agency and assigns production of the reports to the DHS Under Secretary for Science and Technology, with subordinate divisions such as the Science and Technology Directorate, the cybersecurity units, and the Homeland Security Advanced Research Projects Agency also involved in deepfake research.

China's "Provisions on the Administration of Deep Synthesis of Internet Information Services" were adopted at the 21st executive meeting of the Cyberspace Administration of China on November 3, 2022, approved by the Ministry of Industry and Information Technology and the Ministry of Public Security, and took effect on January 10, 2023.

Among them, Article 7 specifies:

Deep synthesis service providers shall fulfill their primary responsibility for information security; establish and improve management systems for user registration, algorithm-mechanism review, science-and-technology ethics review, information release review, data security, personal information protection, anti-telecom-and-online-fraud measures, and emergency response; and maintain secure and controllable technical safeguards.

Article 17 specifies:

Where deep synthesis service providers offer any of the following deep synthesis services that may cause public confusion or misidentification, they shall place a prominent label in a reasonable position or area of the generated or edited content to alert the public to the deep synthesis:

(1) text generation or editing services that simulate natural persons, such as intelligent dialogue and intelligent writing;
(2) speech generation or editing services that significantly alter personal identity characteristics, such as voice synthesis and voice imitation;
(3) image or video generation or editing services that significantly alter personal identity characteristics, such as face generation, face replacement, face manipulation, and gesture manipulation;
(4) generation or editing services for immersive realistic scenes;
(5) other services that generate or significantly alter information content.

Where deep synthesis service providers offer deep synthesis services other than those listed in the preceding paragraph, they shall provide a prominent-labeling function and remind users that they may apply a prominent label.
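In engineering terms, Article 17 amounts to a labeling gate in the publishing pipeline. The sketch below is an assumption about how a provider might implement it, not text from the regulation: the category names and label wording are made up, and a real system would also watermark the media itself rather than only prepend a notice.

```python
from typing import Optional

# Hypothetical category names for the five classes of services listed
# in Article 17; a real provider would define its own taxonomy.
SYNTHESIS_CATEGORIES = {
    "text_generation",    # (1) intelligent dialogue / writing
    "voice_synthesis",    # (2) synthesized or imitated voices
    "face_swap",          # (3) face/gesture generation or manipulation
    "immersive_scene",    # (4) immersive realistic scenes
    "other_generation",   # (5) other content generation
}

def publish(content: str, category: Optional[str]) -> str:
    """Attach a prominent deep-synthesis notice before release when required."""
    if category in SYNTHESIS_CATEGORIES:
        return f"[AI-generated content] {content}"
    return content

assert publish("hello", "face_swap").startswith("[AI-generated")
assert publish("hello", None) == "hello"  # non-synthetic content passes through
```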

4. Personal Information Protection Tips

  • Be highly vigilant toward service apps that require uploading frontal photos or ID-card photos, or taking photos or videos while holding your ID card.
  • Be cautious about uploading personal photos to the Internet or posting them to Moments, and do not casually agree to strangers' requests for photos or group shots when out.
  • When posting photos that show hand gestures such as the "V sign", take care not to expose fingerprint information, to reduce unnecessary risk.
  • Do not enroll your fingerprint on devices you do not trust or whose purpose is unclear. After unlocking a door lock or phone with a fingerprint, wipe the sensor surface to smear the fingerprint image and prevent others from lifting it.

Origin: blog.csdn.net/apr15/article/details/131750573