A lawyer used ChatGPT to prepare a court filing and was misled into citing non-existent cases.

DoNews reported on May 29 that an American lawyer recently relied on the chatbot ChatGPT for legal research in a court case, resulting in the submission of false information. The incident highlights the potential risks of AI in the legal field, including the spread of misinformation.

In a case in which a man sued an airline over a personal injury, the plaintiff's legal team submitted a brief citing several previous court cases to support their argument and establish precedent for the claim. Lawyers for the airline, however, discovered that some of the cited cases did not exist and immediately notified the trial judge.

Judge Kevin Castel expressed surprise at the situation, calling it "unprecedented," and ordered the plaintiff's legal team to explain itself.

Steven Schwartz, one of the plaintiff's attorneys, admitted that he had used ChatGPT to search for similar legal precedents. In a written statement, Schwartz expressed deep regret, saying: "I had never used artificial intelligence for legal research before, nor was I aware that its content could be false."

The court filing was accompanied by screenshots of a conversation between Schwartz and ChatGPT in which Schwartz asked whether a particular case, Varghese v. China Southern Airlines Co Ltd, was real.

ChatGPT replied that the case was real and could be found in legal reference databases such as LexisNexis and Westlaw. A follow-up investigation, however, revealed that the case did not exist, and further inquiry showed that ChatGPT had fabricated six non-existent cases in total.

In light of this incident, the two attorneys involved, Peter LoDuca and Steven Schwartz of Levidow, Levidow & Oberman LLP, will appear at a disciplinary hearing on June 8 to explain their actions.

The incident has also sparked debate in the legal community about the appropriate use of AI tools in legal research and the need for comprehensive guidelines to prevent similar situations from recurring.


Source: blog.csdn.net/zhaomengsen/article/details/130926511