ChatGPT turns into an "AI spy": everything you say online will be monitored

Most people use ChatGPT to chat, or to help with studying or office work.

However, some "spyware" companies are now exploring how to use ChatGPT and other emerging AI tools to spy on social media users.

One of the companies, Social Links, founded by a Russian entrepreneur, is using ChatGPT as an assistant to monitor user communications on platforms such as Facebook, Instagram, Twitter and Telegram.


At a security exhibition in Paris, Social Links demonstrated how it uses ChatGPT for "sentiment analysis": the AI assesses the mood of social media users and highlights the topics a group discusses most, helping analysts predict whether online activity will escalate into real-world violence and whether law enforcement action is warranted.

Meta had already listed the company as a spyware vendor at the end of 2022 and banned 3,700 Facebook and Instagram accounts that it said the company used to scrape information from its social networks.

But Social Links denies any connection to those accounts, and Meta's accusations have not hurt its business: the company now has more than 500 clients, half of them in Europe and more than 100 in North America.

Can AI become more just?

A Social Links analyst demonstrated the tool by assessing online reaction to a recent controversial deal struck by Spain's acting prime minister. The tool scans Twitter for posts containing chosen keywords and hashtags, has ChatGPT rate each one as positive, negative, or neutral, and then displays the results in an interactive chart. It can also quickly summarize and analyze discussions on major platforms such as Facebook and extract the topics most commonly raised.
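To make that workflow concrete, here is a minimal sketch of LLM-based sentiment classification along the lines described above, using OpenAI's official Python client. The model name, prompt, and sample posts are illustrative assumptions; Social Links' actual pipeline is proprietary.

```python
# Minimal sketch of LLM-based sentiment classification.
# Model name, prompt, and sample posts are illustrative assumptions,
# not Social Links' actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

posts = [  # in the demo, these would come from a Twitter keyword/hashtag scan
    "The deal is a disgrace and people should take to the streets.",
    "Honestly, I don't care either way.",
]

for post in posts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-completion model works
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's post as exactly "
                        "one word: positive, negative, or neutral."},
            {"role": "user", "content": post},
        ],
    )
    print(response.choices[0].message.content.strip(), "->", post)
```

Aggregating these per-post labels over time and by keyword is what would produce the kind of interactive sentiment chart shown in the demo.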

Once its tools flag someone for expressing "negative" sentiment on social media, Social Links can also look for facial recognition matches: the user selects a mugshot and runs Social Links' own algorithm to find matching faces across social media, giving police a broader picture of an individual's identity.

But Talavera, speaking for the company, said Social Links doesn't store facial images and instead searches the web for matches, much like running a reverse image search on Google. Social Links also uses its own facial recognition software, which can search the public photo collections of social media groups and users.
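As a rough illustration of how face matching of this kind works in general, here is a minimal sketch using the open-source face_recognition library as a stand-in; the file names are placeholders, and Social Links' actual matching algorithm is not public.

```python
# Minimal sketch of embedding-based face matching, using the open-source
# face_recognition library (built on dlib). File names are placeholders;
# this is not Social Links' proprietary algorithm.
import face_recognition

# Encode the query face (e.g., a mugshot) as a 128-dimensional embedding.
# Assumes exactly one face is detected in the image.
query_image = face_recognition.load_image_file("mugshot.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

# Encode every face found in a scraped profile photo.
candidate_image = face_recognition.load_image_file("profile_photo.jpg")
candidate_encodings = face_recognition.face_encodings(candidate_image)

# Lower distance = more similar; 0.6 is the library's default match threshold.
distances = face_recognition.face_distance(candidate_encodings, query_encoding)
matches = [d <= 0.6 for d in distances]
print(list(zip(distances, matches)))
```

Running this across a large collection of scraped photos is, in effect, a reverse image search keyed on a face rather than a whole picture.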

A senior U.S. policy analyst told Forbes that using AI tools like ChatGPT for this kind of technical eavesdropping may amplify misinformation and bias, or may simply chill online discussion.


After all, everyone may come to feel that "they are being monitored, not necessarily by humans, but by AI, which has the ability to report everything you do online to a human and bring unfair consequences down on you."

OpenAI did not respond to requests for comment, but ChatGPT’s usage policy states that it is not allowed to be used for “activities that invade the privacy of others,” such as “tracking or monitoring individuals without their consent.”

Social Links says:

"We strictly adhere to OpenAI's policy. We use ChatGPT only to analyze text: summarizing content, identifying themes, classifying text as positive, neutral, or negative, and assessing the sentiment of individual elements within it."

On the question of countering such scraping, Meta spokesperson Ryan Brack said:

Meta has a team of more than 100 people focused on combating unauthorized web crawlers and potentially taking legal action against unauthorized scrapers.

AI’s “cannibalism”

An Italian surveillance company is using its new Gens.AI tool to create lifelike social media personas from a list of desired characteristics. It not only generates legitimate-looking avatars but also releases them onto platforms such as Facebook and Telegram as autonomous AI entities. The company claims that an avatar realistic enough can "get close to the target and build a trusting relationship" while avoiding detection, and undercover investigators use such fake profiles to learn more about suspects.

Combine Social Links' tools with something like Gens.AI and you get a very real but bizarre "AI echo chamber": AI-powered social media monitoring software made by one spyware company ends up surveilling AI personas that another company created with ChatGPT and other large language models.

Perhaps this is the "cannibalism" of AI spyware.


Finally, be cautious about what you say online, and watch out for AI spies~


Origin: blog.csdn.net/xixiaoyaoww/article/details/134634072