AI learns language much like the human brain! New research suggests language is not a uniquely human ability, and machines can learn it too | Nature sub-journal


Xifeng, from Aofei Temple | Qubit, official account QbitAI

The way artificial neural networks (ANNs) learn is strikingly similar to the way the human brain learns!

You read that right: at least in terms of language processing, machines appear to be quite human-like.

First look at the picture below:


Blue represents human brain wave signals, and red represents artificial neural network signals.

This is what the human brain and the machine produce when they hear the same speech.

Remarkably similar, aren't they?

In fact, this picture comes from a recent study, whose paper has been published in Scientific Reports, a sub-journal of Nature.


Until now, how the human brain and the "machine brain" learn has remained something of a mystery.

The topic of "whether neural networks learn in the same way as humans" has also been controversial.

So what evidence is there that humans and machines "likely process language in similar ways"?

Artificial neural network signals closely match human brainwave signals

To demystify learning in artificial neural networks, Gašper Beguš, a computational linguist at the University of California, Berkeley, conducted the study with Alan Zhou, a doctoral student at Johns Hopkins University, and Christina Zhao, a neuroscientist at the University of Washington.

In the study, they played a simple sound to human participants and recorded the brain waves the participants produced on hearing it. At the same time, the same sound was fed into a neural network, and the signals the network generated were analyzed.

Comparing the two, the results are surprisingly similar.

Most importantly, the researchers tested general-purpose networks on a variety of tasks, and even very general networks (with no bias toward speech or any other sounds) still showed a correspondence to human neural encoding.


So how exactly was this research carried out?

First, to compare how human brains and artificial neural networks respond, the researchers recruited 14 English speakers and 15 Spanish speakers.

Each participant was then played a monosyllabic sound, "bah", for eight minutes, repeated twice.

During playback, the researchers recorded fluctuations in the average electrical activity of neurons in each listener's brainstem (the part of the brain that first processes sound).

Separately, the researchers fed the same "bah" sound into two different sets of neural networks: one set trained on English, the other on Spanish.

The neural network architecture chosen by the researchers is a generative adversarial network (GAN).

GANs were first proposed in 2014 for image generation. A GAN consists of two modules, a generator and a discriminator, which improve through an adversarial game with each other to produce better output.

Specifically, the generator creates a sample (an image or a sound), and the discriminator judges how close it is to the training samples and provides feedback; the generator then adjusts, and the cycle repeats until the GAN can produce the desired output.
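The adversarial loop just described can be sketched in a few lines of code. The example below is a deliberately minimal, hypothetical illustration: a linear generator and a logistic-regression discriminator play the game over a 1-D Gaussian "dataset" standing in for the study's sound samples. It is not the architecture used in the paper, only the generator-versus-discriminator dynamic.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" training data: samples from a 1-D target distribution
# (a stand-in for the English/Spanish sound samples in the study).
def real_samples(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Generator: maps random noise z to a sample, g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=(32, 1))
    fake = a * z + b
    real = real_samples(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    g_grad = (d_fake - 1) * w       # gradient of -log D(fake) w.r.t. fake
    a -= lr * np.mean(g_grad * z)
    b -= lr * np.mean(g_grad)

print(f"generator output mean: {np.mean(a * rng.normal(size=(1000, 1)) + b):.2f}")
```

At equilibrium the generator's samples drift toward the real distribution (mean around 4 here), while the discriminator's judgments approach chance, which is the "and so on, until the GAN outputs the desired result" loop in miniature.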


In this study, the discriminator was first trained on a collection of English or Spanish sounds.

Then a generator that had never been exposed to those sounds had to find a way to produce them. It started out making random sounds, and after about 40,000 rounds of interaction with the discriminator it gradually converged on the correct ones. Through this training, the discriminator also became better at distinguishing real sounds from generated ones.

After the discriminator was trained, the researchers played it the "bah" sound. They measured fluctuations in the average activity of the discriminator's artificial neurons and recorded the network's processing activity, focusing on the layers that analyze sound (to mimic the brainstem readout).
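As a rough sketch of that measurement step, the toy code below pushes a synthetic waveform through a single convolutional layer and averages the unit activations over time. The random filters and the decaying sine-wave "sound" are placeholder assumptions for illustration; they are not the trained discriminator or stimulus from the study, only the averaging procedure itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 1-D "sound": a decaying tone playing the role of the "bah" stimulus.
t = np.linspace(0, 1, 1600)
sound = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)

# One convolutional layer standing in for an early, sound-analyzing layer
# of a trained discriminator; the weights are random because this sketches
# only the measurement procedure, not a real trained model.
kernels = rng.normal(size=(8, 25))   # 8 filters ("units"), each 25 samples wide
relu = lambda x: np.maximum(x, 0.0)

# Convolve the sound with each filter and apply the nonlinearity:
# activations has shape (units, time).
activations = np.stack(
    [relu(np.convolve(sound, k, mode="valid")) for k in kernels]
)

# The readout: fluctuations in the *average* activity of the layer's units
# over time, analogous to an averaged brainstem response.
mean_signal = activations.mean(axis=0)
print(mean_signal.shape)
```

The resulting 1-D time series is what would then be plotted against the averaged human brainstem recording for comparison.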

Comparing the collected human brain wave signal with the artificial neural network signal, the results are shown in the figure below:

Results for English: blue is the human brain-wave signal, red is the artificial neural network signal
Results for Spanish: blue is the human brain-wave signal, red is the artificial neural network signal

It can be found that these artificial neural network signals match the human brain wave signals very well!

This also suggests that the two systems are engaging in similar activity.

In addition, the experiment revealed another intriguing similarity between humans and machines: the brain waves showed that English and Spanish speakers perceived the "bah" sound differently (Spanish speakers more often heard it as "pah").

The GAN signal also showed that the network trained in English processed sounds differently than the network trained in Spanish.

"And these differences shift in the same direction," Beguš explains.

The brainstem of English speakers responded to the sound of "bah" slightly earlier than Spanish speakers, and the English-trained GAN also responded to the same sound slightly earlier than the Spanish-trained model.

This gap in reaction time was nearly identical for humans and machines, on the order of one-thousandth of a second.

This gives Beguš further evidence that humans and artificial networks "likely process language in similar ways."

Was Chomsky wrong?

The study's conclusion actually runs counter to the view, advanced by linguist Noam Chomsky in the 1950s, that humans are born with an innate ability to understand language, hard-wired into the brain.

Chomsky also proposed the concept of Universal Grammar: the idea that the human brain comes equipped with a language acquisition mechanism that enables people to learn and use language.

Could it be that Chomsky is wrong?

In this regard, some netizens said:

ChatGPT has proven that grammar is not needed to learn and understand a language.


There are also netizens who remain skeptical:

The human brain and neural networks are not the same thing; we should be skeptical of drawing conclusions about the brain from the computation times of neural networks.

Moreover, the neural network was trained on human language, so the claim cannot be inferred from the observed timing either.


Beguš said: The debate is not over yet.

Gašper Beguš

He says he will further explore the parallels between the human brain and neural networks. For example, he is testing whether brain waves emitted by the cortex (after the brainstem completes some of its auditory processing) correspond to signals produced by the deeper layers of the GAN.

Ultimately, they hope to develop a robust language-acquisition model that describes how machines and humans learn language, allowing for experiments not possible with human subjects.

Christina Zhao, a neuroscientist at the University of Washington and a member of Beguš's research team, said:

For example, we can create an adverse environment, similar to that of a neglected infant, and see whether that leads to something like a speech disorder.

In addition, Beguš said he is trying to see how far this road can go, and how close general-purpose neurons can come to human language.

Can we scale up and enhance our computing architectures until they reach human-level performance, or will it turn out that this is never possible?

More work is needed before we can answer that question with certainty. But we are continually surprised by the inner workings of these systems, and by the similarities between humans and artificial neural networks.

What do you think about the human brain and artificial neural networks?

Portal:
[1] https://www.nature.com/articles/s41598-023-33384-9 (link to the paper)

Reference links:
[1] https://www.quantamagazine.org/some-neural-networks-learn-language-like-humans-20230522/


Origin blog.csdn.net/QbitAI/article/details/131820427