ChatGPT "illusion" problem can not be solved

Technologists are starting to wonder whether ChatGPT and AI "hallucination" problems will ever go away: "This isn't fixable"

The more people interact with artificial intelligence chatbots such as ChatGPT, the more they find that these systems are prone to producing misinformation. Described as hallucination, confabulation, or simply making things up, the phenomenon is now a problem for every business, organization, and high school student trying to get a generative AI system to compose documents and get work done. Some rely on it for tasks with potentially high-stakes consequences, such as psychotherapy and legal research and writing.

"I don't think there's a model out there that isn't subject to some hallucination," said Daniela Amodei, co-founder and president of Anthropic, which makes the chatbot Claude 2. "They're primarily designed to predict the next word," Amodei said, "so the model will always have a certain error rate."
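Amodei's description points at the underlying mechanism: an autoregressive language model scores every candidate next token by plausibility and emits a likely one, with no built-in step that checks whether the result is true. The sketch below illustrates that next-word step; it assumes the Hugging Face transformers library and the small, publicly available GPT-2 model purely as a stand-in, since Claude 2 and ChatGPT are built on the same principle but cannot be inspected this way.

```python
# Minimal sketch of next-word prediction using an open model (GPT-2 via the
# Hugging Face "transformers" library) as an illustrative stand-in -- not a
# claim about how Claude 2 or ChatGPT are actually served.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on the moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for every possible next token, given the prompt so far.
    logits = model(**inputs).logits[0, -1]

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# The model ranks continuations by plausibility, not by truth: some
# probability mass always sits on wrong continuations, which is the
# "certain error rate" Amodei describes.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")
```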

Anthropic, ChatGPT maker OpenAI, and the developers of other large AI systems say they are working to make these systems hallucinate as little as possible. How long that will ultimately take, and whether the systems will ever be reliable enough to safely give out advice, remains to be seen.

Emily Bender, a professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington, said: "This isn't fixable. The problem lies in the mismatch between the technology and the proposed use cases." A great deal rides on the reliability of generative AI: the McKinsey Global Institute projects that it will add the equivalent of $2.6 trillion to $4.4 trillion to the global economy. Chatbots are only one part of that frenzy, which also includes technology that can generate new images, video, music, and computer code.

Google has pitched an AI news-writing platform to news organizations. The AP is also exploring use of the technology through a partnership with OpenAI, which is paying to use part of AP's text archive to improve its AI systems.

Indian computer scientist Ganesh Bagler, working with the Indian Institute of Hotel Management, has spent years building AI systems that invent novel recipes for South Asian cuisine, such as new versions of rice-based biryani. A single "hallucinated" ingredient can be the difference between a delicious meal and an inedible one.

When OpenAI CEO Sam Altman visited India in June, the professor at the Indian Institute of Technology in Delhi had some pointed questions for him. "I guess hallucinations in ChatGPT are still acceptable, but when a recipe comes out hallucinated, it becomes a serious problem," Bagler told Altman, standing up in a packed campus auditorium.

Altman offered an optimistic response but stopped short of a guarantee. "I think we will largely solve the hallucination problem," Altman said. "I think it will take a year and a half to two years. After that we probably won't be talking about it anymore. It will take time for the model to learn the balance between creativity and perfect accuracy."

However, for some experts who study the technology, such as linguist Bender of the University of Washington, that improvement is not enough. 

Bender said that when used to generate text, language models are designed to make things up, and they are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets. "But since they only ever make things up, when the text they produce happens to be correct, that is by chance," Bender said. Such errors are hard for a human reader to catch because they tend to be obscure.
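Bender's point can be seen directly by letting such a model continue a prompt in some familiar form. The sketch below (again using the public GPT-2 model only as a stand-in for the larger commercial systems) samples a fluent-sounding continuation of a contract-style prompt token by token; nothing in the loop consults any source of facts, so whether the output is correct is incidental.

```python
# Sampled continuation with the same open GPT-2 stand-in: the generation
# loop produces fluent text in whatever form the prompt suggests, with no
# step that checks the claims it produces against reality.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Clause 4. The Licensee shall"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True draws each next token from the model's distribution,
# so every run yields a different, plausible-sounding continuation.
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```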

For some marketing firms, this type of error isn't a huge problem. One Texas start-up works with OpenAI, Anthropic, Google and Facebook parent Meta to offer customers a range of tailored AI language models. For customers who care most about accuracy, it might offer Anthropic's model; for those who care about the security of their proprietary source data, it might offer a different one.

The hallucination problem is indeed not easy to solve. But some believe that a company like Google, which needs to hold its search engine to a "very high standard of factual accuracy," can find a solution.

Some in the technology world, including Microsoft co-founder Bill Gates, remain upbeat. "I'm optimistic that AI models will one day be able to distinguish fact from fiction," Gates said of AI in a July blog post, citing a 2022 OpenAI paper as an example of "promising work on this front."

Yet even as Altman pitches the products for all manner of uses, he does not count on the models to be truthful when he is looking up information for himself. "Maybe I'm the person in the world who trusts ChatGPT's answers the least," Altman told the audience at Bagler's school, drawing laughter.

References

https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/

https://www.94c.cc/info/chatgpt-and-ai-hallucinations-will-ever-go-away.html

Origin: blog.csdn.net/2302_76860168/article/details/132588405