These five types of business fraud threats are predicted to surge in 2024

Table of contents

AI-driven attacks will multiply

Accounts and identities will become harder to verify

Malicious data theft remains rampant

Account takeover fraud will surge

Internal data leakage has become a major risk for enterprises


Fraud losses worldwide are growing at an alarming rate, causing enormous harm to businesses and consumers. Global fraud losses are estimated at as much as $5.4 trillion, with UK fraud losses estimated at $185 billion. In the U.S., financial services companies saw a 9.9% increase in fraud costs, underscoring the scale of the problem.

The main forces behind this trend are technological advancement and the growth of social engineering. As people increasingly shop through online and mobile channels, fraudsters follow suit, using advanced technical means to carry out fraud. Social engineering, meanwhile, exploits the most complex and persistent security vulnerability of all: people. Many users lack basic security awareness, making them easy targets for fraudsters.

Social media platforms have become an important tool for fraudsters. The world's 4.8 billion social media users offer a vast pool of potential targets, yet most lack the security awareness and training needed to recognize and avoid fraud. Phishing campaigns continue, and as AI tools mature and see wider use, phishing lures are becoming ever more convincing. These tools can generate realistic text and content, making people more susceptible to scams.

Dingxiang Defense Cloud Business Security Intelligence Center predicts that business fraud risks in 2024 will follow five main trends.



AI-driven attacks will multiply

With the widespread application of artificial intelligence (AI) across industries, the security threats it brings are drawing growing attention. AI is becoming a new attack vector: attackers equipped with AI will bring unprecedented risks to enterprises and individual users. In 2024, every industry will face a surge in cyberattacks that use machine learning tools.


Spreading misinformation. Generative AI can be used to spread misinformation or craft realistic phishing emails. Some criminals have reportedly begun using generative AI tools such as ChatGPT to write phishing emails that sound as professional as messages from legitimate businesses. These emails, often posing as communications from a bank or other institution, ask victims to hand over personal information or money; victims who comply risk financial loss or identity theft.

Undermining network security. Malicious or erroneous code generated by AI can be devastating to cybersecurity. As more enterprises use AI for data analysis, healthcare applications, user-interface customization, and other business functions, hackers may exploit vulnerabilities in these applications. According to reports, AI security research papers have exploded in the past two years, and the 60 most commonly used machine learning (ML) models each contain at least one security vulnerability on average. Hackers can exploit these vulnerabilities to control or compromise the devices and systems that rely on these models.

Increasing fraud risk. Fraudsters can also use AI to mimic legitimate ads, emails, and other communications, increasing the risk of fraud. This AI-driven approach will lead to a surge in low-quality malicious activity, because the barriers to entry for cybercriminals are lowered while the potential for deception grows.

Manipulating data and decisions. Beyond cybersecurity threats, generative AI may also be used to manipulate data and decision-making. Attackers may try to poison the source data that AI systems are trained on, so that organizations relying heavily on AI are systematically misled in their decisions. This can involve removing key information from the data or injecting false information, causing the AI system to produce inaccurate results.
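To make the data-poisoning risk concrete, here is a minimal sketch in Python (using scikit-learn on a synthetic dataset; the dataset, model, and flip rate are illustrative assumptions, not drawn from any real incident). It shows how silently flipping a fraction of training labels degrades a simple classifier that downstream decisions might depend on.

```python
# Minimal illustration of training-data poisoning via label flipping.
# The synthetic dataset, logistic-regression model, and 40% flip rate
# are assumptions chosen only to make the effect visible.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: trained on unmodified labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: an attacker silently relabels 40% of class-0 training
# examples as class 1, biasing the model's decisions toward class 1.
rng = np.random.default_rng(0)
flip = (y_train == 0) & (rng.random(len(y_train)) < 0.40)
y_poisoned = np.where(flip, 1, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", round(clean.score(X_test, y_test), 3))
print("poisoned test accuracy:", round(poisoned.score(X_test, y_test), 3))
```

Running the sketch, the poisoned model's test accuracy typically falls noticeably below the clean model's, which is the kind of systematic distortion described above.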



Accounts and identities will become harder to verify

Impersonation fraud has long been a common tactic, but with the advance of generative AI (GenAI), synthetic identity theft and fraud are easier than ever. The technology lets fraudsters create identities at scale, making it far easier to generate convincing synthetic IDs.

By fusing deep learning with forgery techniques, AI can create convincing fake audio, video, and images. This lets fraudsters quickly build new identities that appear more trustworthy, stitching together fragments of real personal information and combining them with fabricated identifiers. These identities can then be used for credit card fraud, online fraud, and other schemes.


According to the McKinsey Institute, synthetic identity fraud has become the fastest-growing type of financial crime in the United States and is on the rise globally; it now accounts for 85% of all fraud. In addition, GDG research shows that more than 8.6 million people in the UK have used a false identity, or someone else's, to obtain goods, services, or credit.

Detecting synthetic identity theft is extremely challenging because these identities typically mix real elements (such as a genuine address) with fabricated information, and this blend of legitimate components and false details makes detection and prevention very difficult. Moreover, because these fraudulent identities have no prior credit history or associated suspicious activity, traditional fraud detection systems struggle to identify them.

Social media platforms have become a major channel for synthetic identity fraud. Using AI, fraudsters can create and distribute highly customized, convincing content that targets individuals based on their online behavior, preferences, and social networks. This content blends seamlessly into users' feeds and spreads quickly and widely, making cybercrime more efficient for criminals and harder for users and platforms to counter.

For financial institutions the concern is even greater. Fraudsters use AI to learn the business processes of different institutions; understanding how each organization operates, they can write scripts that rapidly fill out forms and create identities that look trustworthy enough to commit credit fraud.

This is especially worrying for new-account fraud and application fraud. Every bank has its own account-opening workflow, its own technology, and its own onboarding language, and an applicant must appear trustworthy to open an account. Criminals can use GenAI tools to learn the screen layouts and stages of different banks' onboarding, then script the process of filling out forms with credible-looking identities to commit new-account fraud. Banks will no longer have to answer only the question "Is this the right person?" but also "Is my customer a human or an AI?"

It is expected that in 2024, as AI technology develops and spreads further, impersonation fraud will grow, and false identities and fake accounts will become more common. Enterprises and individual users need to stay vigilant and raise their security awareness, while organizations must also keep strengthening their AI security measures and internal training to reduce the risk.



Malicious data theft remains rampant

As artificial intelligence develops, its appetite for data keeps growing, and generative AI and large models have placed unprecedented demands on data. Take OpenAI's GPT models as an example: GPT-1 was pre-trained on only 5 GB of data, GPT-2 on 40 GB, and GPT-3 on a full 45 TB. The market has gradually reached a consensus: whoever holds the data wins, and data is the key to competition among large models.

There are currently two main sources of AI training data: collecting it yourself and crawling it. Self-collection demands significant manpower, resources, and time, so its cost is high; crawled data is comparatively easy to obtain, which creates huge data-security challenges, and problems such as data leaks and privacy violations keep emerging. With the rise of Cybercrime-as-a-Service in particular, malicious crawler services and tools can be bought more easily than ever. The threat of malicious crawlers stealing data is therefore expected to keep growing in 2024.


A malicious crawler is an automated program that accesses and scrapes website data by simulating user behavior. Malicious crawlers are typically used to illegally obtain personal information, trade secrets, and other data. What they steal includes not only public data, such as information openly available on the internet and user data on social media, but also unauthorized data such as internal corporate data, personal privacy data, and sensitive data such as financial and medical records. In 2022, the US National Security Agency (NSA) released a report stating that data stolen by malicious crawlers has become an important source of cyber attacks.

Malicious crawlers steal data in several ways: automatically accessing large numbers of websites to harvest user information or data in bulk; crawling targeted websites that match specific conditions; and analyzing a website's internal structure in depth to deep-crawl more hidden data. Data indicates that global data theft will reach 190 billion in 2023, more than 80% of it attributable to malicious crawlers. Because such programs access websites automatically and at scale, they not only violate users' privacy but also cause enormous economic losses to companies.
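The bulk, scripted access patterns described above are also what defenders watch for. Below is a deliberately simplified, hypothetical rate-based heuristic in Python (not Dingxiang's actual detection logic; the log format and threshold are assumptions): it counts requests per client IP per minute and flags any client whose volume far exceeds plausible human browsing.

```python
# Simplified, hypothetical crawler-detection heuristic: flag any client IP
# that sends more requests per minute than a human plausibly could.
# The threshold and log format are assumptions for illustration only.
from collections import defaultdict
from datetime import datetime

REQUESTS_PER_MINUTE_LIMIT = 120  # assumed threshold, tune per site

def flag_suspected_crawlers(log_entries):
    """log_entries: iterable of (client_ip, timestamp) tuples, where
    timestamp is a datetime. Returns IPs that exceed the limit in any
    one-minute bucket."""
    buckets = defaultdict(int)
    for ip, ts in log_entries:
        minute = ts.replace(second=0, microsecond=0)
        buckets[(ip, minute)] += 1
    return sorted({ip for (ip, _), count in buckets.items()
                   if count > REQUESTS_PER_MINUTE_LIMIT})

# Example: a scripted client firing 300 requests in one minute is flagged,
# while a normal visitor making 5 requests is not.
if __name__ == "__main__":
    now = datetime(2024, 1, 1, 12, 0, 0)
    entries = [("203.0.113.9", now)] * 300 + [("198.51.100.4", now)] * 5
    print(flag_suspected_crawlers(entries))  # ['203.0.113.9']
```

Real anti-crawler systems combine many more signals, such as device fingerprints, behavioral patterns, and proxy reputation, but the rate check illustrates the basic idea.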



Account takeover fraud will surge

Account takeover (ATO) fraud is a form of account theft and identity fraud in which fraudsters obtain legitimate user credentials through phishing or malware, or buy private information on the dark web, and then use technical means to take over and use the stolen accounts.

ATO fraud has been rising for years. In 2022, ATO fraud accounted for more than one-third of all fraudulent activity reported to the FTC. In 2020, ATO fraud jumped 350% year over year, and 72% of financial services companies suffered such attacks. In 2021, account takeovers caused 20% of data breaches and cost consumers and businesses more than $5.1 billion.

Account theft and fraudulent use are intensifying for two reasons. On one hand, the rise in phishing attacks and the frequency of data breaches make it easier for fraudsters to obtain users' personal information and passwords. On the other, AI technology helps fraudsters quickly identify target accounts and automatically generate effective attack tools.
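A common defensive response, sketched below purely as an illustration (the field names and decision rule are assumptions, not a description of any specific product), is to compare each login against the devices and locations previously seen on the account and require step-up verification when both are unfamiliar.

```python
# Illustrative account-takeover check: a login from a device AND a country
# never seen before on this account triggers step-up verification.
# The field names and decision rule are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class AccountHistory:
    known_devices: set = field(default_factory=set)
    known_countries: set = field(default_factory=set)

def assess_login(history: AccountHistory, device_id: str, country: str) -> str:
    new_device = device_id not in history.known_devices
    new_country = country not in history.known_countries
    if new_device and new_country:
        return "challenge"  # e.g. require a one-time code before granting access
    # Familiar enough: remember what we saw and allow the login.
    history.known_devices.add(device_id)
    history.known_countries.add(country)
    return "allow"

# Example usage
history = AccountHistory(known_devices={"laptop-01"}, known_countries={"US"})
print(assess_login(history, "laptop-01", "US"))  # allow
print(assess_login(history, "phone-99", "BR"))   # challenge
```

In practice such checks are layered with velocity rules, credential-stuffing detection, and multi-factor authentication rather than used alone.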


Fraudsters will transfer account balances, points, and vouchers, and also use stolen accounts to send phishing emails, post false information, place fake orders, and write fake reviews that look authentic. They may also use stolen accounts to build fake websites that sell goods or services, or to post threats, harassment, or hate speech. In short, account theft causes serious losses on both sides: individuals face financial loss, identity theft, and other harm, while businesses face financial loss, reputational damage, and more.

With the further development and popularization of AI technology, account theft and fraudulent use are expected to become more common and harder to prevent in 2024.



Internal data leakage has become a major risk for enterprises

Insider threats have become a major challenge for businesses. Data shows that insider threats have surged 44% in recent years. These threats can stem from the malicious or negligent actions of employees, customers, or suppliers, and employees with privileged access are the greatest source of fraud risk.


Earlier research has found that language models may leak private information, and the adoption of generative AI in business brings new risks and challenges. Internal data leakage will be one of the major business risks in 2024.

The increasingly common practice of bring-your-own AI (BYOAI), in which employees use personal AI tools in the workplace, risks exposing sensitive company secrets, even if only inadvertently. In addition, fraudsters use AI to mount sophisticated attacks such as deepfakes, which are harder for victims to recognize and prevent. For example, fraudsters can create fake emails or documents to deceive employees or bypass security systems.

Origin blog.csdn.net/dingxiangtech/article/details/135388512