How can we prevent the fraud risks posed by technologies such as AI? Approaches from technology, law, education, and other angles

Preface

As we often say, the Internet is a double-edged sword.

With the rapid development of artificial intelligence, AI-enabled fraud has become an emerging threat to today's society. Criminals use AI to carry out fraud in more efficient and intelligent ways, posing huge risks to the security of individuals and organizations. Telecom and online fraud is already among the crimes with the highest case counts, the largest losses, and the strongest public reaction.

Recently, telecom fraud cases that use AI-generated face-swapping videos and synthesized voices to impersonate relatives, friends, and colleagues have kept appearing. In some cases, fraudsters use another person's real name and photos to impersonate them, add the victim on a social account, and then use "AI face-swapping" technology to hold a short video call with the victim to win their trust before committing fraud. In other cases, fraudsters extract a person's voice from harassing phone-call recordings, synthesize a voice from that material, and use the forged voice to talk to the victim. In these cases, the highly deceptive formats, precisely targeted scripts, and realistic forged audio and video make the schemes hard to detect. They have drawn society's attention and vigilance and have become a new pain point in the governance of telecom and online fraud. This article explains the dangers of AI fraud and discusses the challenges it poses and the measures that can prevent it.

What is AI fraud?

AI fraud refers to the use of artificial intelligence to commit fraud: for example, using speech synthesis, image synthesis, and similar technologies to forge another person's voice, face, or identity in order to deceive or induce victims into transferring money, leaking private information, or providing services. This kind of fraud is often so sophisticated that it is difficult to tell the genuine from the fake.

Cases

Case 1

Not long ago, police in Baotou reported a fraud case involving AI: Mr. Guo, the legal representative of a company in Fuzhou, was defrauded of 4.3 million yuan within 10 minutes. According to the report, a "friend" of Mr. Guo suddenly contacted him via WeChat video call, saying that a client of his was bidding on a project in another city, needed a 4.3 million yuan deposit transferred between corporate accounts, and wanted to route the money through Mr. Guo's company account. Trusting his friend, and believing he had already verified the person's identity over the video call, Mr. Guo transferred 4.3 million yuan in two installments to the designated bank card without further verification. Only when he later phoned his friend did Mr. Guo discover he had been deceived: criminals had used AI face-swapping and voice-imitation technology to impersonate the friend. "The criminals simulated the friend's face and voice on the video call. It looked like the same person and sounded like his voice, but it was not the victim's friend at all."

Case 2

Coincidentally, in February 2022 a Mr. Chen went to a police station to report that he had been defrauded of nearly 50,000 yuan by a "friend". Police verification showed that the fraudsters had taken a video that one of Mr. Chen's friends had posted on a social platform, captured the friend's facial image, and used "AI face-swapping" to synthesize it, creating the illusion that Mr. Chen was on a video call with his "friend" in order to win his trust and commit the fraud.

On October 7 this year, the Beijing bureau of the National Financial Regulatory Administration issued a risk warning: some criminals illegally obtain personal information and use computer algorithms to simulate and synthesize the portraits, faces, and voices of a target's relatives, leaders, colleagues, or public officials, then impersonate those people to commit fraud. After gaining the victim's trust, they follow pre-prepared scripts to send fraudulent requests such as bank card transfers, virtual investment and wealth-management schemes, or invoice rebates, and use video calls, voice bombardment, and other means to further lower the victim's guard. Victims often fail to notice anything abnormal within the short time window, and once they fall for the trick and complete the transfer, the other party disappears without a trace.

Characteristics of AI fraud

  1. High technical content.
    AI fraud relies on advanced AI technologies such as voice synthesis, face-swapping, and image generation to produce realistic audio and video material, and on big data analysis and natural language processing to obtain and exploit the victim's personal information, social relationships, consumption habits, and so on.
  2. Strong concealment.
    AI fraud is usually carried out through online platforms or social software, making it difficult to detect and trace. Because AI can imitate or forge another person's identity characteristics, victims struggle to distinguish the real from the fake and easily develop a false sense of trust and security.
  3. High success rate.
    AI fraud can select targets deliberately and tailor its inducements or threats to each target's characteristics and needs. According to some reports, the success rate of such scams approaches 100%.

How to prevent and respond to AI fraud

First, we should squarely face the existence and harm of crimes committed with AI technology and strengthen the formulation and improvement of relevant laws and regulations. At present, China still has gaps and deficiencies in defining, identifying, convicting, and sentencing crimes that use AI technology. Drawing on international practice and practical experience, we should enact laws and regulations specifically targeting AI-enabled crime and revise them in a timely manner to keep pace with technological development and social change. At the same time, we should strengthen public education about these laws and regulations to raise legal awareness and literacy.

Second, we should strengthen the supervision and review of AI technology and establish effective coordination and accountability mechanisms. At present, China's supervision of AI technology remains fragmented and lacks unified standards and norms. We should build cross-departmental, cross-regional, and cross-domain coordination mechanisms and clarify the responsibilities and authority of each party. We should also strengthen the review and oversight of AI developers, providers, and users, and establish corresponding reward and punishment mechanisms.
Finally, we should improve our own awareness and ability to prevent fraud, and seek help and protection in time. As members of the general public, we should remain vigilant and rational in the face of possible AI fraud and follow these principles:

  • Do not trust strangers or information from unfamiliar sources; verify the authenticity of the information.
  • Do not casually disclose personal or private information; protect the security of your personal data.
  • Do not casually transfer money or provide services; protect the safety of your property.
  • Do not casually click links or download software; guard against viruses and Trojans.

If you encounter anything suspicious or are unfortunately deceived, call the police promptly and preserve the relevant evidence.

In short, AI fraud is a new form of crime that poses new challenges and tests. We should neither deny or resist the genuine value of AI technology out of fear, like giving up eating for fear of choking, nor take it lightly and overlook the negative effects AI may bring. We should act on multiple fronts and take effective measures to promote the healthy development of AI while protecting our own rights and interests.

Suggestions

  1. Improve technical awareness: Understand the basic principles and common applications of AI technology, be wary of information from unknown sources and unfamiliar numbers, and do not trust them lightly.

  2. Strengthen security precautions: In the online environment, strengthen network security measures, such as installing antivirus software, firewalls, and other security tools. Delete spam messages and messages from unfamiliar numbers promptly, and if information involving personal privacy may have been exposed, change the relevant credentials in time. If you encounter a suspicious situation, seek help or call the police immediately.

  3. Strengthen supervision and cooperation: AI-enabled fraud is a crime that requires the relevant departments to strengthen supervision and crack down on it. It also requires all sectors of society to stay vigilant, report similar incidents, and build AI anti-fraud platforms to jointly resist and combat this kind of crime.

  4. Research AI recognition technology and countermeasures: To better identify and prevent AI fraud, it is essential to research detection technology and countermeasures. Detection models can be trained on datasets of known fraud cases so that they learn to flag forged or fraudulent content accurately (see the sketch after this list). On the countermeasure side, detection and blocking mechanisms can be developed to contain malicious programs.

  5. Strengthen public education and media publicity: In the face of increasingly sophisticated AI fraud techniques, public education must keep pace so that people can better understand and identify AI technology. At the same time, the media should publicize the dangers of AI-enabled fraud through multiple channels, so that more people learn about this kind of crime, public vigilance rises, and the crackdown on it becomes comprehensive.

  6. Strengthen international cooperation: Fraud committed with AI technology is a form of transnational crime, and international cooperation is needed to combat it jointly. Countries should strengthen information sharing, jointly fight transnational criminal organizations, and safeguard the security and stability of the international community.
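
As an illustration of suggestion 4, here is a minimal sketch of training a text classifier to flag fraudulent messages. It assumes a hypothetical labeled dataset (`fraud_texts.csv` with `text` and `label` columns) and uses scikit-learn as a simple baseline; a real anti-fraud system would also need audio and video deepfake detection and far more data.

```python
# Minimal sketch: flag likely fraudulent messages with a text classifier.
# Assumes a hypothetical CSV "fraud_texts.csv" with columns: text, label (1 = fraud, 0 = legitimate).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("fraud_texts.csv")  # hypothetical fraud dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# TF-IDF features plus logistic regression: a simple, interpretable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Score a new incoming message: a higher probability means more likely fraud.
msg = ["I am bidding on a project and urgently need you to transfer a 4.3 million yuan deposit."]
print("fraud probability:", model.predict_proba(msg)[0][1])
```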

In addition, preventing AI-enabled fraud requires the government, enterprises, and the public to strengthen supervision and cooperation and fight the crime together. The government should increase oversight of AI technology and crack down on its abuse; enterprises should strengthen protection of their own information and their users' information and prepare emergency response plans; and individuals should raise their security awareness and take preventive measures, such as enabling two-factor authentication and using strong, properly protected passwords to improve account security (a brief sketch of one-time-password verification follows below). In particular, stay highly vigilant about any request that involves a transfer of money.
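
For example, here is a minimal sketch of time-based one-time-password (TOTP) two-factor verification using the `pyotp` library; the account name and issuer are placeholders, and a production setup would store the secret server-side and pair it with an authenticator app.

```python
# Minimal sketch of TOTP-based two-factor authentication with pyotp.
import pyotp

# 1. Generate a per-user secret once at enrollment and store it securely server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# 2. Show the user a provisioning URI (usually as a QR code) to add to an authenticator app.
#    The account name and issuer below are placeholders.
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleBank")
print("Scan this in your authenticator app:", uri)

# 3. At login, verify the 6-digit code the user types in addition to their password.
code = input("Enter the one-time code from your authenticator app: ")
if totp.verify(code, valid_window=1):  # small window tolerates clock drift
    print("Second factor accepted.")
else:
    print("Invalid or expired code; deny login.")
```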
In short, preventing AI-enabled fraud is a long-term and arduous task that requires joint effort from all walks of life. Industries, government departments, and countries need to work together and strengthen information sharing while promoting the healthy development of AI technology.

Postscript

AI fraud is an emerging threat that poses serious risks to the security of individuals and organizations. In order to respond to and prevent AI fraud, we need to be more vigilant, strengthen security awareness, adopt technical preventive measures, and strengthen cooperation and supervision. Only through multi-party cooperation and joint efforts can we effectively deal with the challenge of AI fraud and protect the interests of individuals and organizations.

Reprinted from: https://blog.csdn.net/u014727709/article/details/134099413