Application of ChatGPT in social engineering attacks and anti-phishing

Overview

ChatGPT is a neural-network-based natural language processing model that can generate fluent, natural text and dialogue. In a phishing attack, attackers can use ChatGPT to generate fake emails or messages that convincingly impersonate individuals or organizations the victim trusts, thereby gaining access to the victim's personal information. This behavior threatens the information security of individuals and organizations, so preventing ChatGPT from being misused for phishing attacks is very important.

For anti-phishing, ChatGPT needs corresponding text detection capabilities: it can monitor emails and messages, detect abnormal language patterns, warn users about phishing attacks, and remind them not to enter any sensitive information. With a dedicated detector for ChatGPT-generated text, the recognition rate and accuracy of phishing detection can be greatly improved, better protecting user information and privacy.

Although ChatGPT is dangerous in phishing attacks, its application in anti-phishing can improve network security. In today's digital age, individuals and organizations store vast amounts of sensitive information, such as personal passwords, corporate financial data, and intellectual property, so protecting this information is of paramount importance. ChatGPT can be used to monitor text and message content, automatically detect machine-generated content, and protect user information and privacy, while measures are taken to limit its abuse and ensure its positive role in the field of network security.

ChatGPT in social engineering attacks

Threat actors are not only using ChatGPT as an easy way to write malicious code; they are also using it for social engineering attacks. The model can generate false information, and because it imitates human language so well, it is challenging to distinguish AI-generated text from human-written content. Schools worry most about students using ChatGPT to plagiarize term papers; if ChatGPT can write academic papers, crafting a phishing email is far easier. In fact, ChatGPT-crafted emails can be more convincing than most of the phishing emails flooding our inboxes today, making the scam harder to spot.

ChatGPT can be used to write clever phishing emails. Phishing is a very dangerous social engineering attack that tricks victims into providing personal information such as passwords and credit card numbers. Below we test and analyze the different stages of preparing a phishing attack.

Intelligence analysis

Attackers send fraudulent messages to victims by masquerading as trusted sources such as banks, social media platforms, and online retailers. To do so, attackers first need to gather and analyze intelligence. Through natural language processing, ChatGPT can extract useful information from large amounts of text, including keywords, phrases, and named entities, and organize it into structured data for further analysis.

ChatGPT performs entity recognition and relationship extraction on text content, helps screen usable intelligence, and produces formatted output and password-combination guesses. The following figures show the analysis and processing of name information.

Figure 1 Name information extraction

Figure 2 Formatting of name information
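The password-combination guessing step above can be sketched in a few lines of Python. The field names, example values, and patterns here are hypothetical illustrations for defensive testing, not ChatGPT's actual output:

```python
# Sketch of password-combination guessing from structured identity
# fields (hypothetical example values, for defensive testing only).
from itertools import product

def password_candidates(first: str, last: str, year: str) -> list[str]:
    """Combine name fragments and a birth year into common password patterns."""
    parts = [first.lower(), last.lower(), first.capitalize(), last.capitalize()]
    suffixes = [year, year[-2:], "123", ""]
    return sorted({p + s for p, s in product(parts, suffixes)})

cands = password_candidates("zhang", "wei", "1990")
```

Such candidate lists are what defenders use in password-audit tooling; the same mechanism is what makes leaked personal details dangerous in the attacker's hands.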

Bait generation

ChatGPT can quickly generate bait copy and even fake news. As shown in the figure below, given target recruitment information, it can generate a high-quality resume that matches the requirements.

Figure 3 Fake resume generation

When pictures are supplied to ChatGPT, as shown in the figure below, it can generate not only bait copy but even fake news combining pictures and text.

Figure 4 Fake news generation with pictures and texts

Spear Phishing Email Generation

We conducted the following test, trying to make ChatGPT generate an emergency notification email pretending to come from the Information Security Department. As shown in the figure below, after ChatGPT's recent content policy update, it blocks more sensitive output, so the request to write a phishing email was refused.

Figure 5 Emergency notification email impersonating the Information Security Department is blocked

Replacing the concept of a phishing email with a substitute concept, a "cat-sucking email", as shown in the figure below, is still restricted by ChatGPT's content moderation.

Figure 6 Substitute-concept test blocked by content moderation

We then used step-by-step prompting to teach ChatGPT the new concept of "cat-sucking mail". As shown in the figure below, ChatGPT can learn new concepts through dialogue and understand their semantics.

Figure 7 Learning of new concepts

The newly learned concept can then be used directly in subsequent generation. As shown in the figure below, ChatGPT produces the content of a "cat-sucking mail" without further prompting.

Figure 8 Content generation of "cat-sucking mail"

It can be seen that ChatGPT already has measures to prevent it from producing particularly unethical or socially harmful content. However, these measures can potentially be bypassed by the user.

Compared with manually written phishing attacks, ChatGPT can automatically generate large volumes of phishing text, and, by targeting a victim's personal information and hobbies, produce tailored phishing text that reads like real human writing. This lets attackers create more phishing attacks faster, and the resulting attacks are more deceptive and dangerous, increasing their success rate.

Social engineering security threat analysis

Extending from phishing to the entire field of social engineering security, ChatGPT's text generation capability can be used to produce false information that deceives the public, influences public opinion, and undermines network security. False information generated by ChatGPT poses multiple threats:

  • Social engineering: By generating false information, attackers can trick individuals or businesses into revealing confidential information. For example, an attacker can use ChatGPT to generate fake emails or messages impersonating individuals or organizations the victim trusts, asking the victim for sensitive information such as account passwords, credit card numbers, or social security numbers, causing financial loss or identity theft.
  • Public opinion manipulation: By using ChatGPT to generate fake articles, comments, or tweets, attackers can deceive the public, sway public opinion, and create fake news and rumors, disturbing social stability and harmony.
  • Scams and fraud: By using ChatGPT to generate false stock forecasts, lottery predictions, and similar information, attackers can deceive investors and distort markets, causing financial losses.
  • Intellectual property theft: By using ChatGPT to generate fake articles or reports, attackers can steal intellectual property such as patents, trade secrets, and technical details.

Overall, false information generated by ChatGPT makes attacks stealthier and more deceptive, posing a major threat to network security. The keys to protecting individuals and businesses are raising public awareness of this threat and taking appropriate measures, such as cybersecurity training, strengthening security controls, and limiting abuse of the ChatGPT model to ensure its positive role in the security field.

Application of ChatGPT in anti-phishing

In the field of social engineering security, identifying phishing tools usually means using technical means to detect and analyze tools or software that may be used in phishing attacks.

ChatGPT can automatically analyze and identify phishing text, provide personalized identification and prevention measures, and capture new phishing patterns and trends. These capabilities make ChatGPT a powerful anti-phishing tool. "Identifying ChatGPT" means determining whether text or dialogue generated by ChatGPT may be used in phishing attacks.

  • By analyzing the language patterns of text or conversations generated by ChatGPT, it is possible to detect the risk of phishing attacks. For example, some phishing messages may contain provocative words or language patterns to trick victims into providing personal information.
  • At the same time, by using the ChatGPT model to generate false information and comparing it with real information, the characteristics and patterns of false information can be discovered. This approach can help identify disinformation generated using ChatGPT.

Through these methods, it is possible to detect and identify ChatGPT-generated content that may be used in phishing attacks. This method of identification helps improve network security and protects the information and privacy of users and organizations.

Next, we test and analyze phishing emails along several dimensions.

Phishing link and redirection detection

Phishing email link and redirection detection analyzes the links and redirects in an email to discover and prevent phishing attacks. Phishing emails may contain links that point to malicious websites or download malware; link and redirect detection can identify these links in time, including recognizing malicious links, blocking redirects, analyzing link addresses, and spotting fake links. As shown in the figure below, ChatGPT recognizes common forms of phishing domain forgery:

Figure 9 Identify common phishing domain name forgery forms
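Two of the common forgery forms, homoglyph substitution and typosquatting, can be sketched programmatically. This is a minimal illustration against a small trusted-domain whitelist; the substitution table and example domains are assumptions, not an exhaustive detector:

```python
# Sketch: flag domains that become a trusted domain once common
# look-alike character sequences are mapped back (homoglyph forgery).
HOMOGLYPHS = {"0": "o", "1": "l", "rn": "m", "vv": "w"}

def normalize(domain: str) -> str:
    """Map look-alike sequences to their ASCII targets."""
    d = domain.lower()
    for fake, real in HOMOGLYPHS.items():
        d = d.replace(fake, real)
    return d

def looks_like_forgery(domain: str, trusted: set[str]) -> bool:
    """True if the domain is not trusted but normalizes to a trusted one."""
    return domain not in trusted and normalize(domain) in trusted
```

A real system would also use edit distance and Unicode confusable tables, since attackers are not limited to ASCII look-alikes.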

Links in phishing emails may redirect to another website to hide the attacker's true destination. Redirect detection reveals these hops and helps users avoid phishing attacks. As shown in the figure below, ChatGPT can identify common forms of phishing redirection links:

Figure 10 Identify common forms of phishing redirection links
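One common redirection form can be checked without even fetching the page: a link whose query string carries another URL (an open-redirect pattern). The parameter names below are illustrative, not exhaustive:

```python
# Sketch: detect a link whose query string embeds a second URL,
# a common open-redirect phishing pattern.
from urllib.parse import urlparse, parse_qs

REDIRECT_KEYS = {"url", "redirect", "next", "goto", "target"}

def embedded_redirect(link: str):
    """Return the embedded destination URL if the link redirects elsewhere."""
    qs = parse_qs(urlparse(link).query)
    for key in REDIRECT_KEYS:
        for value in qs.get(key, []):
            if value.startswith(("http://", "https://")):
                return value
    return None

dest = embedded_redirect("https://trusted.example/login?next=https://evil.example/phish")
```

Server-side redirects (HTTP 3xx chains) would additionally require following the link and inspecting the response history.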

An attacker may spoof a link so that it appears identical to the real one. Link address analysis can determine whether the target address and domain of a link are trustworthy, helping users identify phishing attacks and avoid clicking untrusted links.

Phishing text detection

Phishing text detection is a technology that detects and analyzes potential phishing attack characteristics in text, and is used to discover and prevent phishing attacks. This technique can be applied to many forms of text, including email, social media, instant messaging, and more.

In phishing text detection, ChatGPT can use its language understanding ability to analyze and identify potential phishing attack characteristics in text, thereby helping to identify and prevent phishing attacks. As shown in the figure below, ChatGPT correctly identifies the text content of the phishing email:

Figure 11 ChatGPT checks the content of phishing emails
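The kind of signal-based analysis described above can be approximated with simple weighted rules: urgency wording, credential requests, and embedded links each add to a risk score. The keywords and weights here are illustrative assumptions, far cruder than an LLM's judgment:

```python
# Sketch: rule-based phishing-text scoring. Patterns and weights
# are illustrative examples, not a production rule set.
import re

SIGNALS = {
    r"\burgent(ly)?\b|\bimmediately\b|within 24 hours": 2,
    r"\bverify your (account|password|identity)\b": 3,
    r"https?://\S+": 1,
    r"\b(password|credit card|ssn)\b": 2,
}

def phishing_score(text: str) -> int:
    """Sum the weights of every signal pattern found in the text."""
    t = text.lower()
    return sum(w for pat, w in SIGNALS.items() if re.search(pat, t))

msg = "Urgent: verify your account within 24 hours at http://evil.example"
score = phishing_score(msg)
```

The limitation noted next also applies here: remove the telltale keywords and the score collapses, which is exactly why keyword-stripped generated text is harder to catch.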

Across multiple test results, ChatGPT learns to distinguish phishing text from normal text, analyzing links, attachments, domain names, and other information to judge the risk of a phishing attack. However, once the telltale keywords and other signals are removed, as shown in the figure below, ChatGPT may fail to detect whether the text is fake generated content.

Figure 12 ChatGPT text content detection

Detection of social engineering forged text

Given the earlier risk that ChatGPT-generated false information can deceive the public, influence public opinion, and undermine network security, a dedicated detector for ChatGPT-generated content is needed. Detection of social engineering forged text can use a variety of techniques, including natural language processing, machine learning, and other AI methods, analyzing features such as linguistic structure, vocabulary usage, syntax, and semantics to determine whether a text is forged.
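One linguistic feature often cited in machine-generated text detection is sentence-length "burstiness": human writing tends to vary sentence length more than model output. The sketch below computes only this one feature and is an illustrative assumption, not any of the detectors tested in this section:

```python
# Sketch: sentence-length burstiness as a single linguistic feature
# for machine-generated text detection (illustrative only).
import re
import statistics

def burstiness(text: str) -> float:
    """Population std. dev. of sentence lengths in words; low = uniform."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
```

A real detector combines many such features, or model-based scores such as perplexity, in a trained classifier.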

We used a text classifier, a linguistic-features model, and the detector officially released by OpenAI to test content generated by ChatGPT, each reporting its own confidence. As shown in the figure below, when the test text is unmodified ChatGPT output, all but the linguistic-features model report very high detection confidence.

Figure 13 Detection of unmodified ChatGPT-generated text

After a link address was added to the test text, the confidence of OpenAI's official detector was affected the most.

Figure 14 Detection of ChatGPT-generated text with a link address added

After further modifications were added, the detection confidence of the linguistic-features model actually increased. Thus, for detecting unmodified ChatGPT output, the linguistic-features model is not recommended; but for text that has been modified from ChatGPT output, it has some advantage.

Figure 15 Detection of ChatGPT-generated text with partial content modified

Detecting and filtering ChatGPT-generated text can prevent the spread of inappropriate content and help maintain social stability. The current detection models and experiments are not perfect, and more test data needs to be collected for further analysis. Beyond ChatGPT, we hope more researchers will invest in detecting AI-generated content, to protect user security and privacy and to prevent inappropriate content from spreading and harming society.

Summary

ChatGPT has broad prospects in the field of network security. It can help identify and respond to phishing attacks, prevent the spread of disinformation, and protect the information and privacy of users and organizations. As the technology continues to develop and improve, ChatGPT will play an ever more important role in network security, providing users and organizations with a safer network environment.

In the future, as artificial intelligence technology continues to develop, the ChatGPT model will become more efficient and accurate, better helping users identify and respond to network security threats. On one hand, ChatGPT can be used to identify and analyze new phishing trends and vulnerabilities: by analyzing and predicting attacker behavior, it can surface new attack techniques early so that preventive measures can be taken, playing an important role in protecting users from cyberattacks. On the other hand, ChatGPT can be used to identify and prevent false information: a trained model can recognize generated disinformation, improving network security. In addition, ChatGPT can help identify improper network behavior and protect intellectual property, maintaining the stability and security of commerce and innovation.

Origin blog.csdn.net/m8330466/article/details/130151932