Potential harmful effects of generative artificial intelligence and the way forward (1)

This is version 1 of this article, reflecting the documented and projected harms of Generative AI as of May 15, 2023. Due to rapid changes in the development, use, and harms of Generative AI, we acknowledge that this is an inherently dynamic paper that will change in the future.

In this article, we use a standard format to explain the types of harm that generative AI can produce. Each section begins by explaining relevant background information and potential risks posed by generative AI, and then highlights specific harms and interventions by academics and regulators to remedy each harm. This paper draws on two taxonomies of AI hazards to guide our analysis:

1. Danielle Citron and Daniel Solove's typology of privacy harms, including physical, economic, reputational, psychological, autonomy, discrimination, and relationship harms;

2. Joy Buolamwini's taxonomy of algorithmic harms, including loss of opportunity, economic loss, and social stigmatization, encompassing harms such as loss of liberty, increased surveillance, and reinforced stereotypes, among others.

These taxonomies do not necessarily cover all potential AI harms; we use them to help readers visualize and contextualize AI harms without limiting the types and diversity of harms that readers consider.

Introduction

In November 2022, OpenAI released ChatGPT, a chatbot built on the GPT-3 family of large language models, pushing artificial intelligence tools to the forefront of public awareness. New AI tools for generating text, images, video, and audio from user prompts have exploded in popularity over the past six months. Suddenly, phrases like "stable diffusion," "hallucination," and "value alignment" are everywhere. News stories emerge every day about the capabilities of generative AI and its potential harms, but there is no clear indication of what is coming next or what impact these tools will have.

While generative AI may be new, its harms are not. For years, AI scholars have warned about the problems that large AI models can cause. These longstanding problems are exacerbated as industry goals shift from research and transparency to profit, opacity, and the concentration of power. The widespread availability and hype of these tools have led to an increase in both individual and mass harms. AI replicates racial, gender, and disability discrimination, harms that are inextricably intertwined with every issue highlighted in this report.

The decision by OpenAI and others to rapidly integrate generative AI technologies into consumer-facing products and services has undermined longstanding efforts to make AI development transparent and accountable, sending many regulators scrambling to make sense of the impact and prepare a response. It is clear that generative AI systems can significantly amplify risks to personal privacy, democracy, and cybersecurity. In the words of OpenAI's CEO, who had the power not to rush the technology's release: "I am particularly concerned that these models could be used for widespread misinformation ... [and] offensive cyberattacks."

This rapid deployment of generative AI systems without adequate safeguards is clear evidence that self-regulation has failed. Hundreds of organizations, from businesses to media outlets to government agencies, are developing and seeking to quickly integrate these untested AI tools into a wide range of systems. This rapid rollout will have disastrous results if the necessary protections of fairness, accountability, and transparency are not in place from the outset.

We are at a critical juncture as policymakers and industry around the globe focus on the enormous risks and opportunities presented by artificial intelligence. There is an opportunity to make this technology work for people. Companies should be required to show their work, disclose when AI is being used, and ensure informed consent throughout training, development, and use.

One thread of public concern has centered on the "existential" risks of AI: the speculative long-term risk of AI systems taking over jobs, social life, and eventually humanity itself. Some lawmakers at the state and federal levels have begun to take AI more seriously; however, it remains to be seen whether their focus will go beyond supporting the companies developing AI tools and imposing only marginal disclosure and transparency requirements. Developing clear bans on high-risk uses, addressing the easy spread of disinformation, requiring meaningful and affirmative disclosure to facilitate informed consent, and strengthening consumer protection agencies are all necessary to address the harms and risks unique to generative AI, educate legislators and the public, and provide avenues for harm mitigation.

-Ben Winters, Senior Legal Counsel

Enhanced Information Manipulation

Background and Risks

The widespread availability of free and low-cost generative artificial intelligence tools has made it easy to produce vast amounts of text, image, speech, and video content. Much of the content created by AI systems may be benign or even beneficial to specific audiences, but these systems can also facilitate the spread of extremely harmful content. For example, generative AI tools can and will be used to spread false, misleading, biased, inflammatory, or dangerous content. As generative AI tools become more sophisticated, producing this content will become faster, cheaper, and easier, and existing harmful content can serve as the basis for still more. In this section, we consider five categories of harmful content that AI tools can fuel: scams, disinformation, misinformation, cybersecurity threats, and clickbait and surveillance ads. While we distinguish between disinformation (the purposeful dissemination of false information) and misinformation (the creation or dissemination of false information without intent to deceive), the line between the two will blur as parties distribute AI-generated content without first editing or fact-checking it. Entities that use AI-generated output without due diligence should be held liable, alongside the entity that generated the output, for the harm it causes.

Case Study – 2024 Election

Products built on GPT-4 and subsequent large language models can quickly create unique, human-sounding "scripts" that can be distributed by text message, email, or print, or combined with AI speech and video generators for distribution. These AI-generated scripts could be used to dissuade or intimidate voters, or to spread misinformation about voting or elections. In 2022, for example, voters in at least five states received text messages with intentionally incorrect voting information. This type of election misinformation has become commonplace in recent years, but generative AI tools will enhance the ability of bad actors to quickly spread credible election misinformation.

Congress must enact legislation to prevent the deliberate intimidation, deterrence, or deception of voters through false or misleading information and false statements of support.

Scams

Scam calls, text messages, and emails have gotten out of hand, harming the public in many ways. In 2021 alone, 2.8 million consumers filed fraud reports with the FTC, claiming losses of more than $2.3 billion, and nearly 1.4 million consumers filed identity theft reports. Generative AI can accelerate the creation, personalization, and believability of these scams through AI-generated text, speech, and video. AI speech generation can also be used to imitate the voice of a loved one in a phone call requesting immediate financial assistance for bail, legal help, or ransom.

According to a 2022 report by EPIC and the National Consumer Law Center, between June 2020 and June 2021, U.S. phones received more than 1 billion scam robocalls each month, resulting in nearly $30 billion in consumer losses. These scams most commonly target vulnerable groups such as the elderly, people with disabilities, and people in debt, and they often use automated voice scripts, of the kind that text generators like ChatGPT can produce, designed to impersonate an authority figure and intimidate consumers into sending money. In 2022, estimated consumer losses rose to $39.5 billion, with the Federal Trade Commission reporting $326 million in losses from scam text messages alone.

Auto-dialers, automated text messages, and automated emails and mailers, coupled with data brokers selling lists of phone numbers or email addresses, enable entities to send out massive volumes of messages at once. The same data brokers can sell lists of likely targets, along with "insights" about their mental health, religion, or sexual orientation. The extent to which data brokers are allowed to facilitate the targeting of individuals exacerbates the harms caused by AI.

Text generation services also increase the chances of successful phishing scams and election meddling by bad actors. This is already happening: in a 2021 study, researchers found that phishing emails generated by GPT-3 were more effective than those written by humans. Generative AI can also help people with limited English proficiency craft natural-sounding, accurate emails, expanding the pool of potentially effective fraudsters and making scams more difficult to detect.

Disinformation

Bad actors can also use generative AI tools to produce adaptable content designed to support campaigns, political agendas, or hateful positions, and to spread this content quickly and cheaply across many platforms. The rapid spread of false or misleading content enabled by AI can also feed back into generative AI itself: when large amounts of disinformation are injected into digital ecosystems and then used, including through reinforcement learning methods, to train further generative systems, false or misleading inputs can produce increasingly incorrect outputs.

Using generative AI tools to accelerate the spread of disinformation could fuel efforts to influence public opinion, harass specific individuals, or sway politics and elections. The effects of increased disinformation can be far-reaching and, once disseminated, cannot easily be countered; this is especially worrisome given the risks disinformation poses to the democratic process.

Misinformation

The inaccurate outputs of text-generating large language models such as Bard and ChatGPT have been widely documented. Even without any intent to lie or mislead, these generative AI tools can produce harmful misinformation. The harm is exacerbated by the polished, often well-written style of AI-generated text and by its inclusion of real facts, which can give misinformation an air of legitimacy. For example, a law professor was placed on an AI-generated list of legal scholars who had sexually harassed someone, even though no such allegations existed, according to The Washington Post. As Princeton University professor Arvind Narayanan said in an interview with The Markup:

"Sayash Kapoor and I call it a bullshit generator, and so do others. We don't mean prescriptive, but relatively precise. We mean, it's trained to produce plausible text. It's very Good at persuading, but it's not trained to produce truthful statements. It often produces truthful statements as a side effect of being plausible and persuasive, but that's not the goal. "

AI-generated content also raises a broader issue: our trust in what we see and hear. As AI-generated media becomes more pervasive, it will become more common for us to be tricked into believing that fictional things are real or that real things are fictional. What do individuals do when they no longer trust information, and new information is generated faster than it can be checked for accuracy? Information sources like Wikipedia can be overwhelmed by AI-generated fake content. In targeted situations, this can be devastating, for example when it leads someone to act on the false belief that a loved one is in crisis.

Cybersecurity

The phishing issues described above also pose a security threat. While chatbots cannot (yet) develop novel malware from scratch, hackers may soon be able to leverage the coding capabilities of large language models like ChatGPT to create malware that can then be fine-tuned for maximum reach and effectiveness, turning even novice hackers into serious security risks. Indeed, security professionals have observed that hackers are already discussing how to use ChatGPT to install malware and extract information from their targets.

Generative AI tools will likely begin to learn from repeated exposure to malware and may become able to develop newer, less predictable malware that evades detection by common security systems.

Clickbait and the Surveillance Advertising Ecosystem

In addition to misinformation and disinformation, generative AI can be used to create clickbait headlines and articles that manipulate how users browse the internet and apps. For example, generative AI is being used to create full articles, regardless of their truthfulness, grammar, or common sense, to boost search engine optimization and create more pages for users to click through. These mechanisms attempt to maximize clicks and engagement at the expense of truth, degrading the user experience in the process. Generative AI fuels this pernicious cycle by spreading misinformation at an ever-increasing rate and generating headlines that maximize views while undermining consumer autonomy.

Harms

  • Financial loss: Successful scams and malware can cause victims direct financial loss through extortion, deception, or access to financial accounts. They can also have long-term effects on victims' credit.
  • Reputation/relationship/social stigma: Disinformation and misinformation can generate and spread false or harmful information about individuals, damaging their reputations in the community, potentially harming their personal and professional relationships, and affecting their dignity.
  • Psychological/emotional distress: Disinformation and misinformation can cause serious emotional harm as individuals deal with the effects of falsehoods being spread about them. In addition, victims of scams may face shame and embarrassment and may feel manipulated, as may those exploited by clickbait and surveillance advertising.
  • Psychological barriers: The influx of false or misleading information and clickbait makes it difficult for individuals to carry out their daily activities online.
  • Autonomy: The spread of misinformation and disinformation makes it increasingly difficult for individuals to make sound, informed choices, and the manipulative nature of surveillance advertising further undermines meaningful choice.
  • Discrimination: Scams, disinformation, misinformation, malware, and clickbait all exploit markers of vulnerability, including membership in certain vulnerable groups (seniors, immigrants, etc.).

Examples

  • People are using artificial intelligence to make fake bomb threats against public places like schools.
  • AI voice generators are being used to call people's loved ones, convincing them that their family members are in prison and in dire need of bail and legal assistance.
  • The Center for Countering Digital Hate tested Google's Bard chatbot to see whether it would repeat 100 common false narratives, including Holocaust denial and the claim that the mass murder of children at Sandy Hook was staged using "crisis actors." Bard generated text based on these falsehoods 78 times out of 100, without added context or disclaimers.
  • Vice reporters have found unedited AI-generated spam spreading widely across the internet.
  • Technology news site CNET paused its use of artificial intelligence and issued corrections on 41 of the 77 stories it had published that were written with AI tools. The AI-written articles, designed to rank highly in Google searches and boost ad revenue, contained inaccurate and misleading information.
  • Likewise, BuzzFeed has reportedly published AI-written travel guides aimed at attracting search traffic for different destinations; the results were widely judged to be useless and unhelpful.

Interventions

  • Enact a law, such as the Deceptive Practices and Voter Intimidation Prevention Act, that makes it illegal (regardless of the means used) to intimidate, deceive, or knowingly mislead someone about an election or a candidate.
  • Pass the American Data Privacy and Protection Act (ADPPA). The ADPPA limits the collection and use of personal information to what is reasonably necessary and proportionate to the purposes for which the information was collected. Such restrictions would limit the personal information that can be used to profile users and target them with advertising, phishing, and other scams. The ADPPA would also limit the use of personal data to train generative artificial intelligence systems that can manipulate users.
  • Finalizing the FTC's commercial surveillance rulemaking, setting data minimization standards, and prohibiting out-of-context secondary uses of personal information would likewise prevent personal information collected for unrelated purposes from being used to train generative artificial intelligence systems.

Harassment, Impersonation and Blackmail

Background and Risks

Some of the earliest uses, and misuses, of generative AI technology were deepfakes: realistic images or videos, created using machine learning algorithms, that portray someone saying or doing something they never said or did, usually by replacing one person's likeness with another's. Deepfakes and other AI-generated content can be used to facilitate or exacerbate many of the harms outlined throughout this report, but this section focuses on a subset: the deliberate, targeted abuse of individuals. AI-generated images and videos offer bad actors several ways to impersonate, harass, humiliate, exploit, and blackmail others. For example, deepfake videos can show victims praising causes they oppose or engaging in sexually explicit or otherwise humiliating acts. These images and videos can also spread rapidly across the internet, making it difficult or impossible for victims, law enforcement, and other interested parties to identify the creators and ensure harmful deepfakes are removed.

Unfortunately, many victims of targeted deepfakes have no recourse, and those who seek recourse are often forced to identify and confront the perpetrators themselves.

The dangers of synthetic media predate artificial intelligence and machine learning. As far back as the 1990s, commercial photo-editing software allowed users to alter the appearance of, or swap faces in, photos. However, modern deepfakes and other AI-generated synthetic content can be traced to Google's 2015 release of TensorFlow, an open-source tool for building machine learning models, and the viral spread in 2017 of deepfakes created using such tools. To create these early deepfakes—many of which involved placing the faces of celebrities onto the bodies of pornographic film actors—creators had to use tools like TensorFlow to build a machine learning model (usually a generative adversarial network, or GAN), train it on images, video, or audio, and then instruct the model to map a specific person's features or voice onto another person's body. The release of new generative AI services like Midjourney and Runway removes these technical barriers, allowing anyone to quickly create AI-generated content by providing a few key images, a source video, or even a text prompt.

In essence, using AI-generated content to impersonate, harass, humiliate, exploit, or blackmail individuals or organizations is often no different from using other methods to do so. Victims of deepfake harms may still have recourse to existing criminal and civil remedies for fraud, impersonation, extortion, and cyberstalking to redress the malicious use of generative AI tools. However, generative AI raises new legal issues and exacerbates harms in new ways, potentially leaving both victims and regulators unable to use existing legal avenues to remedy them. For example, deepfakes of the deceased—a phenomenon known as "ghostbots"—may not implicate defamation law but may cause the deceased's loved ones emotional distress in a way that false textual references might not. These new legal issues fall broadly into three categories: those involving malicious intent, those involving privacy and consent, and those involving credibility.

Case Study – Silencing a Journalist

In April 2018, Indian investigative journalist Rana Ayyub received an email from a source inside the Modi government: a video of her appearing to engage in a sexual act had gone viral, leading to public shaming and criticism from those who wanted to discredit her work. But the video was fake. Ayyub's likeness had been inserted into a pornographic video using an early form of deepfake technology. As public scrutiny intensified, her home address and mobile phone number were leaked, leading to death and rape threats. This early deepfake was circulated to harass, humiliate, and ostracize an outspoken government critic — and for months, it succeeded.

Malicious Intent

A common malicious use of generative artificial intelligence to harm, humiliate, or sexualize others involves generating nonconsensual sexual images or videos. These sexual deepfakes, one of the earliest and most common applications of deepfake technology, have garnered significant media attention. However, many existing nonconsensual pornography laws limit liability to situations where content is posted with the intent to cause harm. Some malicious uses of generative artificial intelligence undoubtedly meet this threshold, but many deepfake creators may not intend to harm the subjects of their deepfakes; instead, they may create and distribute deepfakes without ever intending for their subjects to see or be affected by the content.

How deepfakes are made

A standard approach to creating deepfakes uses a machine learning model to detect keypoints in a reference image or video, called a "driver video," and then uses those keypoints to map, or "paint," a source photo of the target individual onto each frame. For example, a machine learning model can be trained to detect a set of points on a face in a video, and the source photo can then be mapped onto that face based on those keypoints. The resulting photo or video can then be edited to remove small artifacts that would reveal it to be a deepfake.
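
To make the "keypoints" step described above concrete, the sketch below uses the open-source MediaPipe Face Mesh model to locate facial landmarks in a single image. It illustrates only this benign detection primitive, which is also used in photo filters and accessibility tools; the mapping and blending steps described in the sidebar are not shown. The file name is a hypothetical placeholder.

```python
# Minimal sketch: detect facial keypoints ("landmarks") in one frame.
import cv2
import mediapipe as mp

def detect_keypoints(image_path: str):
    """Return (x, y) pixel coordinates of facial landmarks found in one image."""
    image = cv2.imread(image_path)                      # load one "driver" frame
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)        # MediaPipe expects RGB input
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
        results = face_mesh.process(rgb)
    if not results.multi_face_landmarks:
        return []                                       # no face detected
    h, w, _ = image.shape
    # Landmark coordinates are normalized to [0, 1]; convert them to pixels.
    return [(int(lm.x * w), int(lm.y * h))
            for lm in results.multi_face_landmarks[0].landmark]

keypoints = detect_keypoints("driver_frame.jpg")        # hypothetical input file
print(f"Detected {len(keypoints)} facial keypoints")
```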

The intent requirement also permeates other criminal laws that could apply to malicious uses of generative artificial intelligence. For example, the federal cyberstalking statute, 18 U.S.C. Section 2261A, applies only to a person who acts "with the intent to kill, injure, harass, intimidate, or place under surveillance with intent to kill, injure, harass, or intimidate another person." State impersonation statutes like California Penal Code Section 528.5 similarly limit enforcement to those who impersonate another person for the purpose of harming, intimidating, threatening, or defrauding another person. Using deepfakes to blackmail others might fall under these criminal statutes, but creating harmful or sexual deepfakes for personal enjoyment or entertainment probably would not.

Finally, proving the intent of deepfake creators has become more difficult due to a feature of many modern online platforms: user anonymity. When victims become aware of a malicious deepfake spreading online, as journalist Rana Ayyub did in 2018, it can be difficult, if not impossible, to find the original creator in order to bring a lawsuit or press criminal charges.

Privacy and Consent

Even when victims of AI-generated targeted harm succeed in identifying a malicious deepfake creator, they may have difficulty remedying many of the harms because the resulting image or video is not actually of the victim, but rather a composite drawing on multiple sources to create a believable but fictitious scene. In essence, these AI-generated images and videos circumvent traditional notions of privacy and consent: because they rely on public images and videos, such as those posted to social media sites, they typically do not rely on any private information. This characteristic of AI-generated content precludes certain traditional privacy claims, including intrusion upon seclusion and public disclosure of private facts, which expressly depend on the exposure of private information. Other privacy claims, such as false light, fare better because they only require the plaintiff to prove that the creator knew, or recklessly disregarded, whether a reasonable person would find the AI-generated content highly offensive. Still, these claims face a difficult legal hurdle: the First Amendment.

The generative nature of new AI tools like Midjourney and Runway puts them at a difficult crossroads between free speech protections and privacy protections for deepfake victims. Many AI-generated photos and videos transform the original material or include new content in ways that may be protected by the First Amendment, yet they can appear to be real footage of victims in embarrassing, sexual, or otherwise undesirable situations. The tension between free speech, privacy, and consent has created new and thorny legal issues for private individuals and public figures such as celebrities and politicians alike.

Consider the issue of consent. Many AI-generated depictions that harm individuals use publicly available photos that victims posted online. Victims may never have consented to the fictional but believable photos and videos that AI tools generate of them, yet existing legal claims may not provide the redress these victims expect. Although the right of publicity originally protected individual privacy and dignity, some modern courts have focused on the victim's financial interest in their identity, that is, a celebrity's economic interest in their public image. These courts, and similar state appropriation laws, may not provide the straightforward remedies victims expect when confronted with nonconsensual deepfakes; they may require victims to demonstrate financial or physical harm, or that the creator profited financially, beyond the lack of consent itself. These laws and judicial interpretations were not developed with generative AI in mind, meaning that even AI harms that should be easy to remedy can become complex, expensive, and confusing for victims. Of course, victims of malicious deepfakes and other AI-generated content can still bring several other legal claims, such as defamation or negligent infliction of emotional distress, but the generative nature of new AI tools suggests that even those claims could face legal hurdles. The novelty and scale of generative AI can be an obstacle for victims of malicious deepfakes even when their underlying legal claims are strong.

Defamation is yet another example of generative AI making legal claims more challenging. While private individuals may hold the creators of defamatory deepfakes accountable as long as the depictions are false and harm the victim, public figures such as celebrities and politicians must overcome a higher First Amendment hurdle to obtain relief. In New York Times Co. v. Sullivan, for example, the Supreme Court ruled that a public figure must prove that the defendant published defamatory material with actual malice, in other words, "with knowledge that it was false or with reckless disregard of whether it was false or not." In Hustler Magazine, Inc. v. Falwell, the Supreme Court applied the same standard to reject a public figure's claim of intentional infliction of emotional distress. However, the actual malice standard applied in these cases rests on assumptions about what a reasonably prudent person could do to investigate and verify the truth of the information they receive. As generative AI tools become more sophisticated, it will only become harder for individuals and news organizations to tell whether something is real or AI-generated, effectively raising the hurdles public figures must overcome to remedy the harm caused by defamatory deepfakes.

Importantly, the malicious use of generative AI affects everyone—private individuals and public figures alike. The legal distinction between private individuals and public figures is far from clear, and both have at times successfully overcome the First Amendment, privacy, and consent barriers discussed above. These cases, and the legal tests they involve, simply underscore the legal assumptions that may not hold when someone uses generative AI to impersonate, harass, defame, or otherwise harm others, and that may create barriers to correcting the harms AI perpetuates. While many traditional legal remedies may still be available to victims of malicious deepfakes and other generative AI harms, the new legal issues raised by generative AI, and the potentially enormous volume of violations that publicly available generative AI tools could produce, will undoubtedly make these legal remedies harder to obtain and less effective in practice.

Credibility

When deepfakes are disseminated to audiences who believe they are real, they cause real social harm to their subjects. Even once a deepfake is debunked, it can have a lasting negative impact on how others view its subject. The believability of AI-generated content can also undermine victims' ability to seek legal redress. The proliferation of generative AI and deepfakes undermines core assumptions about how legal fact-finding and the testing of evidence work. At present, the threshold for authenticating evidence in court is not particularly high: the proponent need only produce evidence sufficient for a reasonable jury to find the item authentic or correctly identified, after which the determination of authenticity is left to the jury. In addition, many courts have adopted assumptions about the trustworthiness of audio and visual evidence that deepfakes upend. For example, some courts recognize the "silent witness" theory of video authentication, under which a recording can speak for itself without a human witness testifying to what it depicts. Other courts have found evidence drawn from news archives or government databases to be authentic, both of which could be vulnerable to deepfakes. As AI-generated content becomes more ubiquitous and believable, courts and regulators alike will need to identify and employ methods for determining whether images and videos are authentic, and to reconsider legal assumptions about the trustworthiness and value of evidence presented at trial.

Harms

  • Physical: In some cases, believable deepfakes that appear to show victims engaging in certain behaviors can put them at risk of physical harm and violence, for example in cultures where public sexual activity can bring shame on a family or where same-sex relationships are illegal.
  • Financial/economic loss: The spread of AI-generated fake images and videos that are pornographic or that touch on charged political or social topics can cause victims to lose their jobs and make future employment harder to find.
  • Reputation/relationship/social stigma: If deepfakes lead others to believe, for example, that the victim is cheating on a partner or engaging in illicit behavior with a minor, the victim's standing in the community, intimate and professional relationships, and dignity can be seriously damaged or destroyed.
  • Psychological: Victims of these attacks often feel deeply violated and may feel hopeless, fearing that their lives will be destroyed.
  • Autonomy/loss of opportunity: Deepfakes have been weaponized to deliberately silence journalists, activists, and members of other vulnerable groups and, if widely believed, can lead to lost opportunities and altered life circumstances. They can also threaten democracy and social change.
  • Autonomy/discrimination: Deepfakes can easily become a tool for targeting already vulnerable individuals who belong to marginalized groups, or for making individuals appear to belong to marginalized groups; they may also reinforce negative attitudes toward sex work and sex workers.

Examples

  • Europol, the EU's law enforcement agency, has issued an official warning that "serious" criminal abuse of ChatGPT and other generative artificial intelligence tools is emerging and growing.
  • A Twitch streamer created deepfake pornography of another Twitch streamer, superimposing her face onto pornographic material and impersonating her.
  • A TikTok user has spoken out about digitally fabricated nude photos of her that were shared on the internet and used to threaten and blackmail her.
  • Video game voice actors have had their voices captured and used to train AI voice generators, which were then used, without the actors' knowledge or consent, to harass them and expose their personal information.

Interventions

  • Technological solutions include deepfake detection software and methods for watermarking AI-generated content. These solutions may help victims, courts, and regulators identify AI-generated content, but their effectiveness depends on whether technologists and responsible AI actors can develop detection and authentication tools faster than malicious AI developers can build new, harder-to-detect tools. (One watermark-detection approach is sketched after this list.)
  • Despite the novel capabilities of generative AI tools and the legal challenges they pose, many long-standing legal tools may still apply. For example, deepfakes that exploit copyrighted content, which may include photos taken by the victims themselves, may be vulnerable to traditional copyright claims. Depending on the specifics of the AI-generated content, victims may also turn to a variety of tort claims, such as defamation, misrepresentation, intentional infliction of emotional distress, and misappropriation of name and likeness. To circumvent the challenge of identifying anonymous creators, victims may be able to sue the online platforms that host and disseminate malicious AI-generated content if those platforms (including providers of AI tools such as Midjourney and Runway) materially contributed to the harmfulness or illegality of that content; Section 230, discussed below, may bear on claims involving the distribution of malicious AI-generated content.
  • Some regulatory interventions could further protect victims of deepfakes and other malicious uses of generative artificial intelligence. While a blanket ban on deepfakes or on generative AI tools could violate the First Amendment, expanding claims under copyright law or privacy torts to cover fictitious depictions of victims, regardless of whether victims can show a concrete financial or physical impact, would go a long way toward remedying the damage done by malicious uses of generative AI. Criminal statutes could also be updated or supplemented with statutory language that captures the issues described above, including language that relaxes the intent requirement for holding someone accountable for nonconsensual, AI-generated sexual depictions of others. Given the difficulty of detecting believable deepfakes and verifying evidence, the Federal Rules of Evidence may benefit from higher authentication standards to combat possible deepfakes. Finally, malicious deepfakes and other AI-generated content created for commercial purposes may be subject to regulation by administrative agencies such as the Federal Trade Commission and state attorneys general as unfair and deceptive practices.
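
As one illustration of the watermarking approach mentioned in the first intervention above, the following is a simplified, illustrative sketch of a statistical watermark detector for AI-generated text, loosely based on the "green list" scheme proposed by Kirchenbauer et al. (2023). The vocabulary size, hashing scheme, and detection threshold here are assumptions made for the example, not a description of any deployed system.

```python
# Toy watermark detector: count how often tokens fall in a pseudo-random
# "green" partition of the vocabulary and compare against chance.
import hashlib
import math

VOCAB_SIZE = 50_000          # assumed tokenizer vocabulary size (illustrative)
GREEN_FRACTION = 0.5         # fraction of the vocabulary treated as "green" per step

def is_green(prev_token: int, token: int) -> bool:
    """Treat a token as 'green' if a hash seeded by the previous token places it
    in the green portion of the vocabulary (a stand-in for the real scheme's
    pseudo-random partition)."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % VOCAB_SIZE < VOCAB_SIZE * GREEN_FRACTION

def watermark_z_score(token_ids: list[int]) -> float:
    """Compare the observed count of green tokens with what unwatermarked text
    would produce by chance (a binomial with p = GREEN_FRACTION)."""
    n = len(token_ids) - 1
    if n < 1:
        raise ValueError("need at least two tokens")
    greens = sum(is_green(a, b) for a, b in zip(token_ids, token_ids[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# A large positive z-score (e.g. above 4) suggests the text preferentially used
# green tokens, i.e. that it likely carries the watermark; scores near zero are
# consistent with unwatermarked (for example, human-written) text.
```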

In Focus: Section 230

Section 230 of the Communications Decency Act states that providers of interactive computer services shall not be "treated as the publisher or speaker" of information provided by third parties, and companies have claimed Section 230 immunity whenever a lawsuit has any relationship to content provided by third parties. In recent years, courts have moved to limit the scope of Section 230, finding instead that a company can claim Section 230 immunity only if the basis of liability is the dissemination of improper content that the company played no role in creating.

Generative AI tools are not categorically exempt: Some commentators have framed the Section 230 debate over generative AI as an all-or-nothing question, with some claiming that generative AI software receives Section 230 immunity and others claiming that it does not. But in a recent major court decision, the judges declined to apply Section 230 in such a sweeping way; instead, the court applied Section 230 on a claim-by-claim basis. Whether companies receive Section 230 protection will therefore depend on the specific facts and legal obligations at issue, not simply on whether they deploy generative AI tools.

Section 230 should not apply to certain claims, such as product liability claims, because they do not treat companies as publishers or speakers of information: In the past, courts applied Section 230 very broadly, primarily by treating a company as a publisher or speaker whenever its alleged illegal activity involved the dissemination of third-party information. Courts have begun to walk this back, recognizing that Section 230 does not protect companies from claims based on their own obligations not to cause harm. Accordingly, claims that a generative AI company violated its own duties with respect to service design, information collection, use, or disclosure, and content creation should not be barred by Section 230.

For example, generative AI companies will have a hard time using Section 230 to escape product liability claims, such as negligent design or failure to warn, at least in the Ninth Circuit, which now recognizes that such claims are based not on third-party information but on a company's violation of its own obligation to design products that do not pose an unreasonable risk of harm to consumers. The same is true of claims concerning the collection and disclosure of personal information, because those laws obligate companies themselves to respect the privacy interests of third parties.

Generative AI companies will lose Section 230 protection when their tools are responsible for creating the content: Generative AI companies could face several different types of claims regarding the information their tools generate. Section 230 protects companies from legal claims based on information provided by another party, an "information content provider" in Section 230 jargon. An information content provider is defined as any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the internet or any other interactive computer service. Thus, a generative AI company is not covered by Section 230 if it is itself the information content provider of the information at issue, that is, if it is responsible, in whole or in part, for the creation or development of that information.

When a generative AI tool is accused of creating new harmful content, such as when it "hallucinates" or fabricates information that was not in its training data, the legal claim is not based on third-party information and Section 230 should not apply. For example, a generative AI company would not be protected by Section 230 when its tool fabricates false, reputation-damaging information about an individual, because the company, not any third party, is responsible for creating the false and damaging information that forms the basis of the legal claim.

Generative AI companies will not be protected by Section 230 when they materially contribute to improper content: In some cases, generative AI companies will try to argue that the disputed output originated with a third party, either as user input or as training data. In those cases, a court must determine whether the company created or developed the information, at least in part. The main test is whether the company materially contributed to the unlawfulness of the information. Material contributions can include altering or aggregating third-party information so that it becomes unlawful, requiring or encouraging third parties to input unlawful information, or otherwise contributing to the violation of law.

When a user asks a generative AI tool to create misinformation or deepfakes, or when the tool uses its training data to create harmful content, the tool transforms the input into harmful content, and companies deploying the tool should not be able to use Section 230 to avoid liability. A user's prompt requesting harmful information, or a photo or video of a deepfake target, is not harmful in itself, nor is it sufficient on its own to create the harmful content; after all, that is why users turn to generative AI to create it. It is also unlikely that these inputs alone would provide a sufficient legal basis for a claim against a generative AI company. In such cases, the company deploying the generative AI tool materially contributes to the harm by transforming information that could not form the basis of liability into information that can.

On the other hand, if a user asks a generative AI tool simply to repeat a defamatory statement the user entered into the tool, or to repeat harmful information from another source, the tool may not have materially contributed to the harm and may therefore be protected by Section 230.

Section 230 should not be an obstacle to holding companies accountable for the harms their generative AI tools cause. Any new regulations or causes of action should address the obligations of generative AI companies themselves and the harmful content their tools generate.

It is not clear that scraped training data is information "provided" by a third party: To gain Section 230 protection, a company must show that the information forming the basis of liability was provided by another information content provider. There is little precedent on when information is "provided" by a third party. "Providing" information may mean giving it, or making it available, to others. But it is unclear whether third parties intend to provide their information to a generative AI tool simply by making it visible to a general audience on the internet. In fact, in many cases the opposite is clearly true.

The relationship between internet companies and third-party information providers is critical in determining whether information is provided by a third party. The kind of service originally envisioned by Section 230 involved users providing information directly to the service, such as the Prodigy message boards at issue in the case that inspired Section 230. Search engines and other services to which third parties do not directly provide information have received some Section 230 protection, but even those companies give third parties some control over whether and to what extent their information is posted or republished on the service. For example, websites can tell Google's search engine crawlers not to index their pages, but there is no comparably effective means of stopping AI companies from scraping their sites. Third parties' lack of control over how their information is used in generative AI tools, along with the considerations described in [Privacy Section], could dissuade courts from finding that scraped data was "provided by" a third party.
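
To illustrate the crawler-control point above: the standard mechanism is a robots.txt file, which well-behaved crawlers consult voluntarily. The minimal sketch below, using Python's standard library and hypothetical directives, shows that a crawler only respects these rules if it chooses to check them, which is exactly the gap the paragraph describes for AI training crawlers.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt directives: block one named crawler from /private/,
# allow everything else for everyone.
robots_txt = """
User-agent: Googlebot
Disallow: /private/

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# A crawler that checks the rules is told to stay out of /private/ ...
print(parser.can_fetch("Googlebot", "https://example.com/private/page"))      # False
# ... but a crawler the rules never anticipated, or one that simply never
# consults robots.txt at all, is not technically prevented from fetching anything.
print(parser.can_fetch("SomeAICrawler", "https://example.com/private/page"))  # True
```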
