What Is Deep Learning? From the Turing Test to ChatGPT

The Turing Test: Are Machines Intelligent?


In 1950, the British mathematician Alan Turing posed this question and proposed the Turing test as a way to judge whether a machine has human-level intelligence.
The basic idea of the Turing test is this: a judge holds a text conversation with an unseen party in a separate room and must decide, from the conversation alone, whether that party is a machine or a human. If the judge cannot reliably tell the machine from a human, the machine can be considered to have human intelligence.

Specifically, the Turing test is divided into two forms:

  •  Standard Turing test: the interrogator converses with the subject through a teleprinter and does not know whether the subject is a machine or a human;
  •  Modified Turing test: the interrogator converses with a machine and a real person at the same time, knows which is which, and must judge whose answers sound more like a human's.

The Origins of AI: The Dartmouth Conference

Six years after the question of machine intelligence was raised, in the summer of 1956, ten scholars including John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester gathered at Dartmouth College in Hanover, New Hampshire, USA, to discuss the research directions and methods of artificial intelligence. This was the famous Dartmouth conference.
The Dartmouth conference is a landmark event in the history of artificial intelligence research. It not only laid the foundation for the field but also had a profound influence on its later development. At the conference, the concept of artificial intelligence was formally proposed for the first time, and the main research directions of the field were identified, such as

  •  logical reasoning
  •  machine learning
  •  natural language processing
  •  …

These remain central directions of artificial intelligence research to this day.

The Dartmouth conference also spurred the rapid development of the field. In their later research, the participants put forward many important concepts and methods, such as artificial neural networks, expert systems, and computer vision, laying a solid foundation for subsequent artificial intelligence research.

The Dartmouth conference therefore played a pivotal role in the development of artificial intelligence: it is not only the starting point of the field, but it also set the direction and methods for the field's future development.

Imitating the Brain: The Artificial Neural Network

To give machines human-level intelligence, one straightforward idea is to have the machine imitate the way humans think, that is, to reproduce the function of the human brain. The brain's basic functional unit is the neuron, a specialized cell that transmits electrical and chemical signals, and the neuron thus became one of the basic modeling objects of artificial intelligence.
Biologically, a neuron works as follows: the dendrites receive signals from other neurons, and each incoming signal is weighted at a synapse, which determines whether its influence on the cell is excitatory or inhibitory.

The influences from many neurons are pooled as a weighted sum in the cell body, and the result is passed along the axon. The signal then either reaches its destination (such as a muscle) or travels across a synapse to the dendrites of the next neuron. Many such neurons, combined together, produce far more interesting and complex behavior than any single neuron.


Inspired by biological neural theory, artificial neural networks (ANNs) were constructed: massively parallel interconnected networks of simple adaptive units that can simulate the interactive responses of biological nervous systems to external inputs. The basic unit of a neural network is called a neuron, and each neuron is connected to several others to form a network. When a neuron's weighted input exceeds a bias threshold, it is activated and produces an output, sending its signal on to other parts of the network.
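
As a concrete illustration, here is a minimal Python sketch of such a threshold unit (a McCulloch-Pitts style neuron; the weights and bias below are hand-picked for illustration rather than learned):

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> int:
    """A threshold neuron: weighted sum of inputs, then a step activation."""
    # Weighted sum, analogous to signals pooled in the cell body.
    z = np.dot(weights, inputs) + bias
    # The neuron "fires" (outputs 1) only if the summed signal exceeds 0.
    return 1 if z > 0 else 0

# Example: with these hand-picked parameters, the unit computes logical AND.
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", neuron(np.array(x, dtype=float), w, b))
```

Here the parameters are fixed by hand; learning algorithms such as the perceptron rule adjust the weights from data instead.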

AI Leaps: Deep Learning

Early neural network models, chiefly the perceptron and multi-layer feedforward networks, had limited power: a single-layer perceptron can only handle linearly separable problems and cannot even solve XOR, while multi-layer networks long lacked an effective training method.
In 1986, the famous backpropagation algorithm was born. Backpropagation remains the cornerstone of neural network training to this day: it propagates the error from the output layer back toward the input layer and uses it to update the weights, allowing neural networks to handle much more complex nonlinear problems.
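
A minimal NumPy sketch of this process, training a one-hidden-layer network on XOR (the layer size, learning rate, and step count are illustrative assumptions; a different random seed may need more steps):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a single-layer perceptron fails on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units (sizes are illustrative).
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: propagate the error from output layer to input layer.
    d_out = (out - y) * out * (1 - out)    # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)     # hidden-layer error signal

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
```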

In the 2000s, deep learning began to rise. Researchers started using deeper neural network models, that is, deep learning models, to tackle more complex tasks such as speech recognition and image recognition. However, because training deep models demands large amounts of computing power and data, the application range of deep learning remained limited during this period, and AI once again fell into a trough.


In 2012, a deep convolutional neural network called AlexNet achieved astonishing results in the ImageNet image classification competition, drawing the attention of the global scientific and technological community and marking a new stage for deep learning. With continual advances in computer hardware and big data technology, deep learning went on to make major breakthroughs in natural language processing, computer vision, speech recognition, and other fields.
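
To make the idea concrete, here is a toy convolutional network written with PyTorch (an assumed dependency here; the layer sizes are illustrative and far smaller than AlexNet itself):

```python
import torch
from torch import nn

# A toy convolutional network for 32x32 RGB images with 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local visual features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample: 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # class scores
)

x = torch.randn(1, 3, 32, 32)   # a dummy one-image batch
print(model(x).shape)           # torch.Size([1, 10])
```

This pattern of convolution, nonlinearity, and pooling, repeated and followed by a classifier, is the essence of the architecture that won in 2012.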


On May 11, 1997, when the supercomputer "Deep Blue" moved a pawn to c4 on the board, Garry Kasparov, one of the greatest chess players in history, had to admit in frustration that he had lost. This turn-of-the-century man-machine contest ended in a narrow victory for the computer.

In 2016, in the five-game match between Google's AlphaGo and the Korean Go player Lee Sedol, AlphaGo won the third game for its third consecutive victory, thereby clinching the match.

In 2018, OpenAI released GPT-1, a language model with about 117 million parameters capable of generating text relevant to a given context. A year later, OpenAI released GPT-2, a more powerful family of models ranging up to 1.5 billion parameters, whose language ability began to approach human-level fluency. Another year later, OpenAI released GPT-3, with a staggering 175 billion parameters.

Soon afterwards, OpenAI released ChatGPT, a language model specialized for chat and dialogue. ChatGPT was developed from the GPT-3 line of models; it has strong conversational ability, can track the context of a dialogue, and generates expressive, coherent responses. Its release put a powerful natural language processing tool in everyone's hands, applicable to many practical scenarios such as customer service, intelligent assistants, and automatic replies.
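
As a sketch of the mechanism, the snippet below shows how a ChatGPT-style auto-reply service can maintain dialogue context: every turn is appended to a running history that is passed to the model on each request. `generate_reply` is a hypothetical stand-in, not a real API:

```python
def generate_reply(history: list[dict]) -> str:
    # Hypothetical stand-in: a real service would send the entire message
    # history to a dialogue model here and return its answer.
    return "(model reply would appear here)"

def customer_service_bot() -> None:
    # The running history is what lets the model resolve references like
    # "it" or "my last order" across turns.
    history = [{"role": "system",
                "content": "You are a polite customer-service assistant."}]
    while True:
        user_msg = input("customer> ")
        history.append({"role": "user", "content": user_msg})
        reply = generate_reply(history)   # the model sees all prior turns
        history.append({"role": "assistant", "content": reply})
        print("bot>", reply)

if __name__ == "__main__":
    customer_service_bot()
```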


In the future, as the science of intelligence continues to develop, artificial intelligence will go further still...

With the rapid development of the field, new technologies, new architectures, and new paradigms emerge in an endless stream.

The wide application of deep learning has advanced the three traditional pillars of artificial intelligence: computer vision, natural language processing, and speech recognition. On the application side, medical imaging diagnosis, disease prediction, autonomous driving, robotics, and more are being put into practice, while successive waves of AIGC, such as AI composition, AI painting, and AI writing, are profoundly changing how people work and think...

Artificial intelligence remains at the forefront of both theoretical and applied research. As the saying goes, a workman who wants to do his job well must first sharpen his tools: to learn artificial intelligence well, one must first lay a solid foundation.
