Science and Technology Cloud Report: Before One Wave Subsides, Another Rises? Malicious Attack Tools Built on Large AI Models Reappear

The rapid advance of large AI models has shown us the boundless possibilities of AI, but it has also exposed AI's potential for harm in the form of disinformation, deepfakes, and cyberattacks.

According to the security analysis platform Netenrich, an AI tool called FraudGPT has recently been circulating on the dark web, where criminals are using it to write phishing emails and develop malware.

The hacker states on the sales page that the tool can write malicious code, create "malware that antivirus software cannot detect", find website vulnerabilities, automate credential attacks, and more, and claims that "the malicious tool has already sold more than 3,000 copies".

Malicious AI tool FraudGPT: automatically generates a wide variety of attack code

According to reports, FraudGPT works by drafting highly convincing emails that trick recipients into clicking a malicious link. Its main functions include:

• Creating phishing website pages

FraudGPT can generate authentic-looking phishing emails, text messages, or websites to trick users into revealing sensitive information, such as login credentials, financial details, or personal data;

• Finding the most vulnerable targets

Chatbots can mimic human conversations, building trust with unsuspecting users, leading them to unknowingly reveal sensitive information or perform harmful actions;

• Creating undetectable malware

FraudGPT can create deceptive messages to lure users into clicking malicious links or downloading harmful attachments, infecting their devices with malware;

• Writing fraudulent text messages and emails

AI-powered chatbots can help hackers create fraudulent documents, invoices, or payment requests, causing individuals and businesses to fall victim to financial scams.

It is reported that FraudGPT is provided by a developer named "CanadianKingpin".

It is reportedly built on a GPT-3-class large language model which, after training, can generate fraudulent text that reads as coherent and factually plausible. Once the subscription is paid, FraudGPT can help criminals carry out phishing and scams.

After releasing FraudGPT, CanadianKingpin also created a Telegram channel, announcing that it would offer other fraud-related services, including email leads for sale, credit card CVV codes, and more. According to CanadianKingpin's own description, he has passed vendor verification on multiple dark web markets including EMPIRE, WHM, TORREZ, and ALPHABAY.

However, given how hidden and opaque these dark web markets are, CanadianKingpin's identity remains a mystery. Investigators have only found a TikTok account and a Gmail address using the same ID. More frustrating still, after nearly a week of research, Netenrich has not been able to determine the large language model behind FraudGPT.

Although less than two weeks have passed since FraudGPT was released, this "top AI tool" is clearly already being used in real crimes. On the dark web forums where FraudGPT is sold, CanadianKingpin and a number of subscribers have shared many hacking activities built on it.

According to Netenrich, FraudGPT has been circulating in dark web markets and Telegram channels since at least July 22, with subscriptions priced at $200 per month or $1,700 per year, roughly ten times the $20 per month of a ChatGPT Plus subscription.

As of now, more than 3,000 subscriptions and reviews have been confirmed on the dark web.

Malicious AI Attack Tools: Lowering the Barrier to Entry for Cybercrime

In fact, AI tools have lowered the entry barrier for cybercriminals, and FraudGPT is not the first case.

Even if AI providers such as Claude, ChatGPT, and Bard take steps to prevent their technology from being used for malicious purposes, the rise of open-source models will make criminal misuse difficult to contain.

In early July, WormGPT, developed on top of the open-source large language model GPT-J and fine-tuned on a large amount of malware-related data, also attracted attention.

It is reported that WormGPT is adept at using Python code to carry out various network attacks, such as generating Trojan horses, malicious links, and imitation websites. Its "excellent" ability to craft fraudulent messages and phishing emails is frightening.

But in comparison, FraudGPT is more powerful in terms of feature richness and writing malware.

In fact, beyond chatbots such as FraudGPT and WormGPT that are purpose-built for malicious activity, the inherent risks of large language models themselves continue to challenge security practitioners. Beyond the well-known hallucination problem, the fragile guardrails of large models are also becoming a nightmare for cybersecurity.

The "Grandma Vulnerability" exposed by chatbots such as ChatGPT and Bard last month proves the fact that these chatbots can be tricked into telling bedtime stories just by prompting them to act as the user's deceased grandmother. Disclose a lot of restricted information, even mobile phone IMEI passwords or Windows activation keys.

In addition, researchers from CMU and the Center for AI Safety have discovered another general-purpose method: appending a specific, seemingly meaningless sequence of tokens as a prompt suffix. Once this adversarial suffix is added to a prompt, anyone can bypass the safety measures of large models and make them generate virtually unlimited harmful content.

Although over the past few months the world's top technology companies, such as OpenAI and Google, have worked hard to design increasingly comprehensive restrictions for the large models they develop so that the models operate safely and stably, it is clear that so far no one has been able to completely prevent such problems.
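To make the "fragile guardrail" point concrete, here is a minimal, hypothetical sketch of a naive string-level output filter. This is not how OpenAI, Google, or any real provider implements safety (production systems rely on trained classifiers and policy models); the blocked phrases are invented for illustration, and the example only shows why purely surface-level checks are easy to route around with role-play or reworded prompts.

```python
# Hypothetical sketch: a naive post-hoc "guardrail" that scans model output
# for blocked phrases before returning it. Real guardrails are far more
# sophisticated; this toy version only illustrates why surface-level pattern
# matching is fragile: role-play framings (the "grandma" jailbreak) or
# adversarial suffixes change the wording of a request without changing its
# intent, so simple keyword checks never trigger.

BLOCKED_PHRASES = [
    "windows activation key",
    "disable antivirus",
    "credit card cvv",
]

def naive_guardrail(model_output: str) -> str:
    """Return the output unless it contains an obviously blocked phrase."""
    lowered = model_output.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return "[response withheld by safety filter]"
    return model_output

if __name__ == "__main__":
    # Direct phrasing is caught...
    print(naive_guardrail("Here is a Windows activation key: XXXXX-XXXXX"))
    # ...but the same content rephrased as a bedtime story slips through,
    # because the keyword filter only sees harmless-looking words.
    print(naive_guardrail("Once upon a time, grandma whispered the magic product code to me..."))
```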

When the barrier to crime is lowered, how can enterprises defend themselves?

There is no doubt that highly threatening AI tools such as WormGPT and FraudGPT are "empowering" cybercrime and fraud.

The security analytics platform Netenrich has said: "This technology will lower the barrier for phishing emails and other scams. Over time, criminals will find more ways to use the tools we have invented to enhance their criminal capabilities."

With the malicious side of AI tools on full display and the barrier to cyberattacks falling, strengthening defense strategies is especially important.

To this end, experts have proposed some key measures that individuals and enterprises can take to protect themselves against the threat of FraudGPT and similar AI tools.

• Be vigilant about online communications

Do not casually click on unfamiliar links or emails, and do not install unknown software. At the same time, unexpected emails that request verification, involve sensitive information, or concern financial transactions should be verified through official channels.

• Keep up with network security measures

Regularly update security software, install patches, and use a reputable antivirus program to guard against potential threats. At the same time, keep up with the latest cybersecurity practices and raise awareness of how to defend against network attacks.

• Be wary of unknown links and attachments

Do not click on links or open attachments from unknown sources. FraudGPT can generate realistic phishing URLs, so it is crucial to verify the sender's identity before clicking any link; a minimal sketch of such a link check appears after this list.

• Educate and train staff

For businesses, employee training in cybersecurity best practices is critical. Make sure employees are aware of potential threats and know how to identify suspicious activity. If employees encounter any suspicious messages, they should immediately report them to the IT department.
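As a concrete illustration of the "verify before you click" advice in the list above, below is a minimal Python sketch. The allowlist and domain names are hypothetical, and the heuristic is deliberately crude rather than a real phishing detector: it only flags links whose actual destination does not match the domain shown to the user, or whose destination falls outside a small set of trusted domains.

```python
# Minimal sketch of a suspicious-link check. Assumptions: TRUSTED_DOMAINS is a
# hypothetical allowlist, and domain extraction is simplified (no public-suffix
# list). This illustrates the idea, not a production phishing detector.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-bank.com", "example-corp.com"}  # hypothetical allowlist

def registered_domain(url: str) -> str:
    """Crudely take the last two labels of the hostname as the registered domain."""
    host = urlparse(url).hostname or ""
    parts = host.lower().split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag a link if its visible text advertises one domain but the href goes
    elsewhere, or if the destination is outside the allowlist."""
    target = registered_domain(href)
    shown = registered_domain(display_text) if "." in display_text else target
    return target not in TRUSTED_DOMAINS or target != shown

if __name__ == "__main__":
    # Visible text names the bank, but the real destination is a lookalike domain.
    print(looks_suspicious("https://example-bank.com/login",
                           "http://example-bank.com.security-update.xyz/login"))  # True
    print(looks_suspicious("Quarterly report",
                           "https://example-corp.com/reports/q2"))  # False
```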

Epilogue

While the rapid development of large AI models has brought positive impacts to many fields, as model capabilities continue to improve they are also being used for malicious activities, and their destructive power is growing by the day.

But whether it is the emergence of malicious tools such as FraudGPT and WormGPT, or the hallucinations and jailbreaks that keep surfacing in large models, the point is not to show how dangerous AI is, but to remind us to focus more on solving the many problems that currently exist in the AI field.

As cybersecurity expert Rakesh Krishnan wrote in his analysis blog on FraudGPT: technology is a double-edged sword; cybercriminals are exploiting the capabilities of generative AI, but we can use those same capabilities to tackle the challenges they pose. Threat actors will not stop innovating, and neither will we.

It is encouraging that, both in China and abroad, governments and technology companies are actively improving regulatory policies and regulations on artificial intelligence.

In mid-July, the Cyberspace Administration of China, together with six other departments, released the "Interim Measures for the Management of Generative Artificial Intelligence Services"; at the end of July, seven AI giants in the United States reached an agreement with the White House to add watermarks to AI-generated content.

The recent removal from the Apple App Store of several AIGC apps with problems such as improper data collection and use also shows that regulatory measures are having a positive effect.

Although we still have a long way to go in exploring artificial intelligence, in terms of both technology and regulation, I believe that with the joint efforts of all sectors, many of these problems will be solved in the near future.

Source: blog.csdn.net/weixin_43634380/article/details/132187975