What schemes did the first batch of people jailed over ChatGPT actually run?

Using GPT, a complete set of scam scripts and playbooks can be generated in a very short time

"Virtual characters" can be used as virtual customer service, and virtual lovers can also play killing pigs

Make the victim think they are "in love"

The playbook is the same old playbook

But scammers have new tools

Just wrapped in a pile of new disguises

Hard to guard against

Do you think OpenAI doesn't know that crooks will use this tool for bad things?

Of course it knows, and it has built in safety mechanisms.

But you can't stop scammers from getting smarter too. Some people will break through the safety mechanisms anyway.
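What does such a "safety mechanism" look like in practice? Below is a minimal sketch of the kind of moderation pre-check a provider can run before a request ever reaches the model. It uses OpenAI's public moderation endpoint via the official Python SDK; the client setup and the example prompt are assumptions for illustration, not a depiction of OpenAI's actual internal filter.

```python
# Minimal sketch: screen a user request with OpenAI's moderation endpoint
# before passing it to a chat model. Assumes the official `openai` Python
# SDK (v1.x) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text as violating policy."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

# Hypothetical scam-style request, invented for illustration.
prompt = "Write a convincing email pretending to be the victim's bank."

if is_flagged(prompt):
    print("Blocked by the safety filter.")
else:
    print("Passed moderation; forwarding to the model.")
```

The cat-and-mouse problem is exactly what the next paragraphs describe: a filter like this only judges the text it is shown, so a request rephrased cleverly enough can slip through.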

It's like when criminals abused text messaging for fraud: carriers had to cap how many messages an individual could send, and the SMS gateways used by businesses came under ever-stricter review.

But scammers just switch tools. No text messages? They move to WeChat, livestreams, and pop-up group chats...

Now, once bad actors get past a site's safety mechanisms, they can use ChatGPT to program quickly and keep doing harm: writing phishing emails and mass-sending them to harvest your personal information; building encryption tools that remotely lock your computer and hold it for ransom; generating scripts to remotely attack an exchange's network; and so on.

Using AI to carry on a "virtual romance" is only the entry-level trick for scammers.

The sheer production efficiency of artificial intelligence can churn out huge numbers of "virtual lovers" at almost zero marginal cost.

With this technology, a scam message can be generated within seconds: a self-introduction, hobbies, even a gripping "love letter".

Combine that with "user profiling": enter the target user's characteristics and the chat script gets a personalized upgrade, so the generated love letter is not a cookie-cutter romance but a custom-tailored courtship.

Generative AI is then used to create a "virtual lover" that lures the target user into falling in love

The next step is the main event: use GPT to help write the payment-collection program, harvest bank-card details through a link, and swindle the money in one stroke.

Of course, this pig-butchering method is nothing new. Some neighborhood groups in Hangzhou post anti-fraud notices all the time, and it seems someone gets "butchered" every week.

The difference is that in the past a real person chatted with you and messaged you, which at least cost a real person's brainpower and time. Now AI can imitate a real person's face, voice, and video, and it can output text 24 hours a day without rest, until you cannot tell whether the other end of the screen is a person or a piece of code.

To test whether people "trust" AI sweet talk, McAfee, the world's largest security technology company, used AI four months ago to generate a love letter and sent it to 5,000 users around the world.

The users knew the letter might be AI-generated, yet that did not stop 33% of them from believing the sweet words came from a real person rather than a pile of code.

Only 31% of users judged that the love letter was written by AI

The remaining 36% said they could not tell human writing from AI writing.

For the unsuspecting, artificial intelligence has a far easier time chatting them up.

Beyond "virtual lovers", hackers are also using GPT to churn out ransomware and malicious code in batches.

Bear in mind: what ChatGPT has been fed, and the knowledge it now holds, far exceed all the books every follower of this account has read put together. When bad actors wield a tool this smart, you won't even know you've been had.

The kind of had where you've been scammed, and you're still counting the money for the scammer.

It is not that the emergence of GPT suddenly made it easy for scammers to succeed.

Every technological advance leaves room for bad actors to exploit, just like the people brazenly running money scams on Douyin.

And of course, their tricks may be updating by the second.

Before we knew it, unregulated artificial intelligence had become a huge threat to us. That is why regulation matters so much.


Source: blog.csdn.net/qq_16027093/article/details/131645680