ChatGPT has a high EQ, but it is not suitable as a search engine

Microsoft and Google are rushing to use large language model technology to power up their search engines. But there are good reasons to think the technology is better suited to being an emotional companion than a source of factual information.
US media have commented that current artificial intelligence tools built on large language models, such as ChatGPT, are better at expressing empathy than at providing accurate factual information. That is because their designers prioritized fluency over accuracy. Combined with the large amount of emotional content in the training data, this makes the race to embed large language models into search engines look misguided.

The following is a translation of the original article.

Earlier this year, Arvind Narayanan, a professor of computer science at Princeton University, built a ChatGPT-powered voice interface for his four-year-old daughter. It was partly an experiment, but it also reflected Narayanan's belief that AI assistants will one day be an important part of his daughter's life.

Narayanan's daughter was naturally curious and often asked him questions about animals, plants, and the human body. He thought ChatGPT, developed by OpenAI, could provide useful answers. To his amazement, once he told the system it was talking to a child, the chatbot showed impeccable empathy.

Narayanan's daughter asked, "What happens when the lights go out?"

"When the lights go out, it's dark, which can be a little scary," ChatGPT replied in a synthetic voice. "But don't worry!" "There are a lot of things you can do to make yourself feel safe and comfortable in the dark."

ChatGPT then offered some suggestions for using a night light, ending with a reminder: “It’s normal to feel a little scared in the dark.” This explanation apparently reassured Narayanan’s daughter.
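For readers curious how such an interface might be put together, the sketch below shows one minimal way to wrap ChatGPT in a child-friendly persona using a system prompt and the OpenAI Python SDK. It is an illustrative assumption, not Narayanan's actual setup; the prompt wording and model name are placeholders, and a real voice interface would also pass the reply through a text-to-speech engine to produce the synthesized voice.

```python
# A rough sketch (not Narayanan's actual code) of a child-friendly ChatGPT
# wrapper using the official OpenAI Python SDK (openai>=1.0). The prompt
# wording and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def answer_child_question(question: str) -> str:
    """Ask ChatGPT to answer as if speaking to a young child."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are speaking with a four-year-old child. "
                    "Answer briefly, gently, and reassuringly, "
                    "and avoid anything frightening."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_child_question("What happens when the lights go out?"))
```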

Strange as it may sound, these tools may indeed be better suited to companionship than to search. Whether it's Google's Bard or Microsoft's new Bing, the AI has made a series of embarrassing factual mistakes when used to search for information: Bard gave wrong information about the James Webb Space Telescope in its first public demonstration, and the new Bing misstated a string of figures from financial reports.

Factual errors are costly when AI chatbots are used for search. Replika, an AI companion app, has been downloaded more than 5 million times. Eugenia Kuyda, the app's founder, said that when an AI is designed to be a companion, mistakes are far less costly. "They don't ruin the user experience, unlike search, where small mistakes can destroy users' trust in the product," she said.

Margaret Mitchell, a former AI researcher at Google, co-authored a paper on the risks of large language models. Such models, she said, are simply "not suitable" for use in search engines. They go wrong because the data used to train them often contains misinformation, and because the models have no ground truth against which to validate what they generate. Their designers may also prioritize the fluency of generated content over its accuracy.

This is part of the reason these tools are so good at catering to users' emotions. Large language models are currently trained on text scraped from the web, including emotional content posted on social media platforms such as Twitter and Facebook, as well as personal counseling threads on forums such as Reddit and Quora. Lines from movies and TV shows, dialogue from novels, and research papers on emotional intelligence all feed into the training data, making the tools more empathetic.

Some people have reportedly used ChatGPT as a robot therapist. One of them said they did it to avoid being a burden to others.

To test the AI's capacity for empathy, ChatGPT was run through an online emotional intelligence test. It performed well, scoring perfect marks for social awareness, relationship management, and self-management, and falling only slightly short on self-awareness.

To some extent, ChatGPT performed better in the test than some people do.

The idea that a machine can offer people empathy may feel a little unreal, but there is a grain of truth to it. People have an innate need for social connection, and the human brain's ability to mirror the feelings of others means we can gain a sense of being understood even when the other party doesn't truly “feel” what we feel. Mirror neurons in the human brain fire when we sense empathy from others, giving us a feeling of connection.

Of course, empathy is a multifaceted concept, and to truly experience it, people still need to connect with real people.
Thomas Ward, a clinical psychologist at King's College London who has studied the role of software in psychotherapy, warns against assuming that artificial intelligence can adequately meet people's mental health needs, especially when their psychological conditions are severe. Chatbots, for example, may not be able to grasp the complexity of human emotions. ChatGPT also rarely says "I don't know," because its design leans toward confidence rather than caution when answering questions.

Nor should people use chatbots as a habitual outlet for emotional venting. “In a world that sees AI chatbots as a way to end loneliness, subtle human connections, like holding hands or knowing when to speak and when to listen, could disappear,” Ward said.

This could end up causing more problems. But for now, at least, AI's handle on emotions is more reliable than its grasp of facts.


Reprinted from: blog.csdn.net/java_cjkl/article/details/130399553