Artificial intelligence neural network concept stocks, neural network chip concept stocks

1. What are the artificial intelligence concept stocks? Who is the leader in artificial intelligence chips?

Artificial intelligence concept stocks include hardware intelligence, software intelligence, and other categories.

Hardware intelligence includes: Hanvon Technology, CANNY Elevator, Cixing, NetPosa, Gosuncn, and Tsinghua Unigroup.

Software intelligence includes: Jinzi Tianzheng, HKUST Xunfei.

Other categories include: Zhongke Sugon and Jingshan Light Machinery.


2. What are the artificial intelligence stocks?

1. Suzhou Keda: Suzhou Keda Technology Co., Ltd. is a leading provider of video and security products and solutions. It is committed to helping government and enterprise customers solve visual communication and management problems with video conferencing, video surveillance, and rich video application solutions.

In 2012, the company was restructured into a joint stock limited company; on December 1, 2016, the company was listed on the main board of the Shanghai Stock Exchange.

2. Jiadu Technology: Jiadu Technology (PCI) was founded in 1986 and is headquartered in Guangzhou, China. It has branches or offices in more than 30 regions across China, more than 2,000 employees, and a research and development team of scientists.

The company has established the Jiadu Technology Global Artificial Intelligence Technology Research Institute and the Transportation Brain Research Institute, and has built or participated in building two national joint laboratories, one national enterprise technology center, and four provincial engineering technology centers.

3. Qianfang Technology: Beijing Peking University Qianfang Technology Co., Ltd. counts Peking University among its legal-person shareholders and is a high-tech software enterprise established in full cooperation with the university.

The company's transportation business has developed rapidly. Building on transportation informatization, it has expanded into traffic information services, travel media operations, and other areas.

4. Weining Health: Founded in 1994, the company is the first listed company in China that focuses on medical and health informatization. It is committed to providing medical and health informatization solutions, and continuously improving people's medical experience and health level.

Through continuous technological innovation, Weining Health has independently developed products and solutions suitable for different application scenarios, and its business covers smart hospitals, regional health, primary health, public health, medical insurance, health services and other fields.

5. Shensi Electronics

Shensi Electronics is a well-known domestic provider of identity recognition solutions and services, and a designated manufacturer of resident ID card readers certified by the Ministry of Public Security.

6. HKUST Xunfei

HKUST Xunfei is mainly engaged in intelligent voice and language technology research, software and chip product development, voice information service and e-government system integration, etc.

7. Sugon

Sugon is a domestic leader in the field of high-performance computing and the largest high-performance computer manufacturer in Asia. It is mainly engaged in research, development, and manufacturing of high-performance computers, general-purpose servers and storage products, and provides software development, system integration and technical services around high-end computers.

8. Inspur Information

Inspur is one of the earliest IT brands in China, and it is a leading cloud computing and big data service provider in China. It has four business groups: cloud data center, cloud service and big data, smart city and smart enterprise. Inspur servers also rank first in the Chinese market and among the top three in the world.

3. I heard that Yunzhisheng (Unisound) is "the first AI voice company". Is that true? How is Yunzhisheng doing?

It is true. Within artificial intelligence, intelligent voice is the most mature track. As an early entrant in the AI voice industry, Yunzhisheng (Unisound) is indeed known as "the first AI voice company". Through continuous, deep work in artificial intelligence, Yunzhisheng has grown into a leading AI service provider for the Internet of Things. With full-stack AI technology at its core and its cloud-core integrated platform as the foundation, it provides intelligent IoT products and services for smart IoT, healthcare, and other scenarios, and has won the trust and praise of many partners.

4. How did artificial intelligence originate

Artificial intelligence (AI) is a comprehensive new discipline formed by the mutual penetration of computer science, cybernetics, information theory, linguistics, neurophysiology, psychology, mathematics, philosophy, and other disciplines. AI has gone through ups and downs since its inception, but it has finally been recognized worldwide as an emerging interdisciplinary subject and has attracted increasing interest and attention. Not only have many other disciplines begun to introduce or borrow AI techniques, but expert systems, natural language processing, and image recognition within AI have also become three breakthroughs of the emerging knowledge industry.
The germination of the idea of artificial intelligence can be traced back to Pascal and Leibniz in the seventeenth century, who already entertained early notions of intelligent machines. In the nineteenth century, the British mathematicians Boole and De Morgan proposed the "laws of thought", which can be regarded as the beginning of artificial intelligence. Also in the nineteenth century, the British scientist Babbage designed the first "computing machines", which are considered the predecessors of computer hardware and of artificial intelligence hardware. The advent of electronic computers made research on artificial intelligence truly possible.
As a discipline, artificial intelligence came into being in 1956, when the term was first proposed at a conference held at Dartmouth College by the "father of artificial intelligence" McCarthy together with a group of mathematicians, information scientists, psychologists, neurophysiologists, and computer scientists. Research on artificial intelligence has formed different schools owing to different research perspectives: the symbolist school, the connectionist school, and the behaviorist school.
Traditional artificial intelligence is symbolism, which is based on the physical symbol system hypothesis proposed by Newell and Simon. A physical symbol system is composed of a set of symbolic entities, all of which are physical patterns; they can appear as components of symbol structures and can generate other symbol structures through various operations. The physical symbol system hypothesis holds that a physical symbol system is a necessary and sufficient condition for intelligent behavior. The school's main work was the General Problem Solver (GPS): through abstraction, a real system is transformed into a symbol system, and the problem is then solved on this symbol system using search methods.
The connectionist school starts from the structure of the human brain's nervous system and studies the nature and capability of non-programmed, adaptive, brain-style information processing, in particular the collective information-processing ability and dynamic behavior of large numbers of simple neurons.
This approach is also called neural computing. Its research focuses on simulating and realizing perception, imagery-based thinking, distributed memory, and the self-learning and self-organizing processes of human cognition.
The behaviorist school starts from behavioral psychology and believes that intelligence is only manifested in the interaction with the environment.
The research of artificial intelligence has gone through the following stages:
The first stage: the rise and fall of artificial intelligence in the 1950s.
After the concept of artificial intelligence was first proposed, a number of remarkable achievements appeared one after another, such as machine theorem proving, the checkers program, the General Problem Solver, and the LISP list-processing language. However, owing to the limited reasoning ability of the resolution method and the failure of machine translation, artificial intelligence entered a trough. The characteristic of this stage was an emphasis on problem-solving methods while neglecting the importance of knowledge.
The second stage: From the end of the 1960s to the 1970s, the emergence of expert systems brought a new upsurge in artificial intelligence research
The research and development of expert systems such as the DENDRAL chemical mass spectrometry analysis system, the MYCIN disease diagnosis and treatment system, the PROSPECTOR mineral exploration system, and the Hearsay-II speech understanding system moved artificial intelligence toward practical application. Moreover, the International Joint Conference on Artificial Intelligence (IJCAI) was established in 1969.
The third stage: In the 1980s, artificial intelligence developed greatly along with fifth-generation computers.
In 1982, Japan launched the "Fifth-Generation Computer Development Program", that is, the "Knowledge Information Processing System (KIPS)", whose goal was to make logical inference as fast as numerical computation. Although the project ultimately failed, it set off a wave of artificial intelligence research.
The fourth stage: the rapid development of neural networks in the late 1980s
In 1987, the United States held the first international conference on neural networks, announcing the birth of this new discipline. Since then, countries have gradually increased their investment in neural networks, and neural networks have developed rapidly.
The fifth stage: In the 1990s, a new wave of artificial intelligence research appeared.
Owing to the development of network technology, especially Internet technology, artificial intelligence began to shift from research on single intelligent agents to research on distributed artificial intelligence in networked environments. Researchers studied not only distributed problem solving toward a common goal, but also multi-objective problem solving among multiple agents, making artificial intelligence more practical. In addition, with the proposal of the Hopfield neural network model, research on and application of artificial neural networks flourished. Artificial intelligence penetrated every field of social life.
IBM's "Deep Blue" computer defeated the human world chess champion. The United States has formulated an information superhighway plan with multi-Agent system application as an important research content. Softbot (soft robot) based on Agent technology is widely used in the software field and network search engines. At the same time, Sandia Laboratories in the United States has established the largest "virtual reality" laboratory in the world, intending to achieve more friendly human-computer interaction and build a better intelligent user interface through data helmets and data gloves. Image processing and image recognition, sound processing and sound recognition have achieved good development. IBM has launched ViaVoice sound recognition software to make sound an important information input medium. Major international computer companies have begun to use "artificial intelligence" as their research content. It is generally believed that computers will develop in the direction of networking, intelligence, and parallelization. The field of information technology in the 21st century will center on intelligent information processing.
At present, the main research topics of artificial intelligence are: distributed artificial intelligence and multi-agent systems; models of artificial thinking; knowledge systems (including expert systems, knowledge-base systems, and intelligent decision-making systems); knowledge discovery and data mining (extracting useful knowledge from large amounts of incomplete, fuzzy, and noisy data); genetic and evolutionary computation (revealing the evolutionary laws of intelligence by simulating biological genetics and evolution); artificial life (building artificial life systems, such as robotic insects, and observing their behavior to explore the mysteries of elementary intelligence); and artificial intelligence applications (such as fuzzy control, intelligent buildings, intelligent human-machine interfaces, and intelligent robots).
Although research on and application of artificial intelligence have achieved a great deal, there is still a long way to go before it can be promoted and applied across the board. Many problems remain to be solved, and the cooperation of experts from multiple disciplines is required. The main future research directions of artificial intelligence include: artificial intelligence theory, machine learning models and theory, representation of and reasoning with imprecise knowledge, common-sense knowledge and reasoning, models of artificial thinking, intelligent human-machine interfaces, multi-agent systems, knowledge discovery and knowledge acquisition, and the foundations of artificial intelligence applications.

5. What is AI deep learning?

Deep learning (DL) is a newer research direction within machine learning (ML); it was introduced into machine learning to bring it closer to the field's original goal, artificial intelligence (AI).
Deep learning learns the internal regularities and levels of representation of sample data. The information obtained during learning is of great help in interpreting data such as text, images, and sounds. Its ultimate goal is to give machines the ability to analyze and learn like humans and to recognize data such as text, images, and sounds. Deep learning is a complex family of machine learning algorithms that has achieved results in speech and image recognition far exceeding earlier related techniques.
Deep learning has achieved many results in search technology, data mining, machine learning, machine translation, natural language processing, multimedia learning, speech, recommendation and personalization technology, and other related fields. It enables machines to imitate human activities such as seeing, hearing, and thinking, and solves many complex pattern recognition problems, driving great progress in artificial intelligence-related technologies.
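As a rough illustration of the "levels of representation" idea mentioned above, here is a minimal NumPy sketch (a toy with randomly initialized weights, not any real trained system): each stacked layer transforms the previous layer's output into a new, more abstract representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One fully connected layer followed by a tanh nonlinearity."""
    return np.tanh(x @ w + b)

# Toy input: 4 samples with 8 raw features each.
x = rng.normal(size=(4, 8))

# Randomly initialized weights for three stacked layers (8 -> 16 -> 8 -> 3).
w1, b1 = rng.normal(size=(8, 16)) * 0.5, np.zeros(16)
w2, b2 = rng.normal(size=(16, 8)) * 0.5, np.zeros(8)
w3, b3 = rng.normal(size=(8, 3)) * 0.5, np.zeros(3)

h1 = layer(x, w1, b1)   # first-level representation
h2 = layer(h1, w2, b2)  # second-level representation
y  = layer(h2, w3, b3)  # final, most abstract representation

print(h1.shape, h2.shape, y.shape)  # (4, 16) (4, 8) (4, 3)
```

In a real deep learning system the weights would of course be learned from data rather than drawn at random; the point here is only the layer-by-layer transformation of representations.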

6. Artificial neural networks: what does "artificial neural network" mean?

1. The concept of artificial neural network
An artificial neural network (ANN), often simply called a neural network (NN), is based on the basic principles of biological neural networks. After the structure of the human brain and its response mechanism to external stimuli are understood and abstracted, a mathematical model is built, using knowledge of network topology, that simulates the way the human nervous system processes complex information. The model is characterized by parallel distributed processing, high fault tolerance, intelligence, and the ability to learn; it combines information processing with information storage, and its distinctive knowledge representation and adaptive learning capabilities have attracted attention in many subject areas. It is, in essence, a complex network of a large number of simple components connected to one another: highly nonlinear, and capable of complex logical operations and of expressing nonlinear relationships.
A neural network is a computational model consisting of a large number of nodes (neurons) connected to one another. Each node represents a particular output function, called an activation function. Each connection between two nodes carries a weighted value for the signal passing through it, called a weight; this is how a neural network simulates memory. The output of the network depends on its structure, the way it is connected, the weights, and the activation functions. The network itself is usually an approximation of some algorithm or function in nature, or it may be the expression of a logical strategy. The idea of neural network construction is inspired by the operation of biological neural networks: an artificial neural network combines knowledge of biological neural networks with mathematical and statistical models and is realized with the help of statistical tools. In the perception-related areas of artificial intelligence, statistical methods give neural networks human-like decision-making and simple judgment abilities; this approach is a further extension of traditional logical calculus.
In artificial neural networks, neuron processing units can represent different objects, such as features, letters, concepts, or meaningful abstract patterns. The processing units in a network fall into three categories: input units, output units, and hidden units. Input units receive signals and data from the outside world; output units deliver the results of the system's processing; hidden units lie between the input and output units and cannot be observed from outside the system. The connection weights between neurons reflect the strength of the connections between units, and the representation and processing of information are embodied in the connection relationships among the processing units. An artificial neural network is a non-programmed, adaptive, brain-style approach to information processing; its essence is to obtain a parallel, distributed information-processing capability through the network's transformations and dynamic behavior, imitating, to varying degrees and at various levels, the information-processing functions of the human brain and nervous system.
A neural network is a mathematical model that processes information in a manner similar to the brain's synaptic connection structure. It is built on our understanding of how the brain is organized and how thinking works, and it is a technology rooted in neuroscience, mathematics, the science of thinking, artificial intelligence, statistics, physics, computer science, and engineering.
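A minimal sketch of what a single node computes under the description above: a weighted sum of its inputs passed through an activation function. The input values, weights, and the choice of a sigmoid activation are purely illustrative.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # signals arriving at the node
w = np.array([0.8, 0.1, -0.4])   # connection weights (strength of each input)
print(neuron(x, w, bias=0.2))    # output signal sent to downstream nodes
```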
2. The development of artificial neural networks
The development of neural network has a long history. Its development process can be roughly summarized as the following four stages.
1. The first stage - the enlightenment period
(1) MP neural network model: In the 1940s, people began to study neural networks. In 1943, the American psychologist McCulloch and the mathematician Pitts proposed the MP model, which is relatively simple but of great significance. In this model, a neuron is treated as a functional logic device in order to realize algorithms, and the theoretical study of neural network models began from that point.
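A sketch of the "neuron as a functional logic device" idea: binary inputs, fixed weights, and a hard threshold. The weights and thresholds below are hand-picked simply to realize AND and OR; this is an illustration of the style of the MP model, not its original formulation verbatim.

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of binary inputs reaches the threshold."""
    return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

# Hand-picked weights/thresholds realizing simple logic functions.
for a in (0, 1):
    for b in (0, 1):
        and_out = mp_neuron((a, b), (1, 1), threshold=2)
        or_out  = mp_neuron((a, b), (1, 1), threshold=1)
        print(f"a={a} b={b}  AND={and_out}  OR={or_out}")
```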
(2) Hebb's rule: In 1949, the psychologist Hebb published "The Organization of Behavior", in which he put forward the hypothesis that the strength of synaptic connections is variable. On this hypothesis, learning ultimately takes place at the synapses between neurons, and the strength of a synaptic connection changes with the activity of the neurons on either side of the synapse. The hypothesis developed into the famous Hebb rule of neural networks. The rule tells us that the connection strength of synapses between neurons is variable, and that this variability is the basis of learning and memory. Hebb's rule laid the foundation for building neural network models with learning capability.
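A sketch of the Hebbian idea that connection strength changes with the activity of the neurons on both sides of a synapse: the weight between two units grows in proportion to the product of their activations. The learning rate and activity values are illustrative.

```python
import numpy as np

def hebb_update(w, pre, post, lr=0.1):
    """Hebb's rule: delta_w = lr * (postsynaptic activity) * (presynaptic activity)."""
    return w + lr * np.outer(post, pre)

pre  = np.array([1.0, 0.0, 1.0])   # activity of three input neurons
post = np.array([1.0, 0.5])        # activity of two output neurons
w = np.zeros((2, 3))               # initial connection weights

for _ in range(3):                 # repeated co-activation strengthens the links
    w = hebb_update(w, pre, post)
print(w)
```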
(3) Perceptron model: In 1957, Rosenblatt proposed the perceptron model on the basis of the MP model. The perceptron embodies some basic principles of modern neural networks, and its structure is very consistent with neurophysiology. It is an MP-style neural network model with a continuously adjustable weight vector; after training, it can classify and recognize certain classes of input vector patterns. Although relatively simple, it was the first genuine neural network. Rosenblatt proved that two-layer perceptrons can classify inputs, and he also proposed an important research direction: three-layer perceptrons with a hidden layer of processing elements. Rosenblatt's neural network model contained some of the basic principles of modern neural computers and thus constituted a major breakthrough in neural network methods and technology.
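A minimal training loop in the spirit of the perceptron rule: for each misclassified example, nudge the weights toward the example. The data set (a linearly separable OR problem) and the learning rate are illustrative assumptions, not Rosenblatt's original experiments.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])          # logical OR: linearly separable

w = np.zeros(2)
b = 0.0
lr = 0.5

for epoch in range(10):
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)
        error = target - pred        # -1, 0, or +1
        w += lr * error * xi         # move the decision boundary toward the mistake
        b += lr * error

print(w, b)
print([int(np.dot(w, xi) + b > 0) for xi in X])  # expected: [0, 1, 1, 1]
```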
(4) ADALINE network model: In 1959, the American engineers B. Widrow and M. Hoff proposed the adaptive linear element (ADALINE) and the Widrow-Hoff learning rule (also known as the least mean square algorithm, or δ rule) for training neural networks, and applied them to practical engineering. This became the first artificial neural network used to solve real-world problems and promoted the research, application, and development of neural networks. The ADALINE network model is an adaptive linear neuron network model with continuous values and can be used in adaptive systems.
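A sketch of the Widrow-Hoff (least mean square, or δ) rule in its usual textbook form, assumed here for illustration: the weights are adjusted in proportion to the error between the desired output and the raw linear output, before any thresholding.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0.0, 1.0, 1.0, 1.0])   # desired outputs

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(50):
    for xi, target in zip(X, d):
        y_lin = np.dot(w, xi) + b     # ADALINE uses the raw linear output
        err = target - y_lin
        w += lr * err * xi            # LMS / delta rule update
        b += lr * err

print(np.round(w, 2), round(b, 2))
print(np.round(X @ w + b, 2))         # linear outputs after training (least-squares compromise)
```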
2. The second stage - low tide period
Minsky, one of the founders of artificial intelligence, and Papert studied in depth the capabilities and limitations of network systems represented by the perceptron. In 1969 they published the influential book "Perceptrons", pointing out that the simple linear perceptron has limited power and cannot solve classification problems in which the two classes of samples are not linearly separable; for example, a simple linear perceptron cannot realize the logical relationship "XOR". This assertion dealt a heavy blow to research on artificial neural networks at the time, and a roughly ten-year low tide in the history of neural network development began.
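The XOR limitation can be checked directly. The sketch below runs the same perceptron rule on XOR data: because no single straight line separates the two classes, some examples remain misclassified no matter how long training runs (the epoch count here is arbitrary).

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])            # XOR: not linearly separable

w, b, lr = np.zeros(2), 0.0, 0.5
for epoch in range(100):              # far more epochs than the OR problem needed
    for xi, t in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)
        w += lr * (t - pred) * xi
        b += lr * (t - pred)

preds = [int(np.dot(w, xi) + b > 0) for xi in X]
errors = sum(p != t for p, t in zip(preds, y))
print(preds, "errors:", errors)       # at least one error always remains
```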
(1) Self-organizing map (SOM) model: In 1972, Professor T. Kohonen of Finland proposed the self-organizing feature map (SOM) network, and much later work on self-organizing neural networks built on Kohonen's work. The SOM network is a class of unsupervised learning networks, mainly used for pattern recognition, speech recognition, and classification problems. It adopts a "winner takes all" competitive learning algorithm and is very different from the previously proposed perceptron. Its training is unsupervised, making it a self-organizing network; this kind of training is often used to extract classification information when the existing class types are not known in advance.
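A bare-bones sketch of one SOM training pass under the "winner takes all" idea: find the unit whose weight vector is closest to the input (the best matching unit), then pull it and its grid neighbors toward the input. Grid size, learning rate, and neighborhood radius are illustrative choices, and a real SOM would decay them over time.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 5, 5, 2
weights = rng.random((grid_w, grid_h, dim))        # one weight vector per map unit
data = rng.random((200, dim))                      # unlabeled 2-D inputs

lr, radius = 0.5, 1.5
coords = np.array([[(i, j) for j in range(grid_h)] for i in range(grid_w)], dtype=float)

for x in data:
    # 1. Competition: find the best matching unit (BMU).
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # 2. Cooperation: Gaussian neighborhood around the BMU on the grid.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
    h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
    # 3. Adaptation: move the winner and its neighbors toward the input.
    weights += lr * h[..., None] * (x - weights)

print(weights.shape)   # (5, 5, 2): the learned weight grid roughly covers the data
```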
(2) Adaptive resonance theory (ART): In 1976, Professor Grossberg of the United States proposed the famous adaptive resonance theory (ART), whose learning process is self-organizing and self-stabilizing.
3. The third stage - the period of revival
(1) Hopfield model: In 1982, the American physicist Hopfield proposed a discrete neural network, the discrete Hopfield network, which strongly promoted neural network research. In this network he introduced the Lyapunov function for the first time (later researchers also called it the energy function) and proved the stability of the network. In 1984, Hopfield proposed a continuous neural network, in which the activation function of the neurons is continuous rather than discrete. In 1985, Hopfield and Tank used the Hopfield neural network to solve the famous Traveling Salesman Problem. The Hopfield neural network is described by a set of nonlinear differential equations. Hopfield's model not only captured the nonlinear information storage and retrieval functions of artificial neural networks and proposed dynamic equations and learning equations, but also provided important formulas and parameters for network algorithms, giving the construction and learning of artificial neural networks theoretical guidance. Under the influence of the Hopfield model, a large number of scholars became enthusiastic about neural networks and devoted themselves to this field. Because the Hopfield network has great potential in many areas, neural network research attracted great attention and many more researchers, greatly promoting the development of the field.
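A minimal discrete Hopfield sketch: weights are built from stored ±1 patterns by Hebbian outer products, and recall repeatedly updates units until the state settles; the energy function shown plays the role of the Lyapunov function mentioned above. The patterns, the noisy probe, and the network size are illustrative assumptions.

```python
import numpy as np

patterns = np.array([[1,  1, 1,  1, 1,  1],
                     [1, -1, 1, -1, 1, -1]])       # two stored +/-1 patterns
n = patterns.shape[1]

# Hebbian storage: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def energy(s):
    """Lyapunov-style energy; it does not increase under the update below."""
    return -0.5 * s @ W @ s

def recall(s, steps=20):
    s = s.copy()
    for _ in range(steps):
        for i in range(n):                         # asynchronous unit-by-unit update
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = np.array([-1, 1, 1, 1, 1, 1])              # pattern 0 with its first bit flipped
print("energy before:", energy(noisy))
restored = recall(noisy)
print("restored:", restored, "energy after:", energy(restored))
```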
(2) Boltzmann machine model: In 1983, Kirkpatrick et al. recognized that the simulated annealing algorithm could be used to solve NP-complete combinatorial optimization problems; this method of finding a global optimum by simulating the annealing of heated solids was first proposed by Metropolis et al. in 1953. In 1984, Hinton and younger scholars such as Sejnowski proposed a large-scale parallel network learning machine and explicitly introduced the concept of hidden units; this learning machine was later called the Boltzmann machine.
Using the concepts and methods of statistical physics, Hinton and Sejnowski first proposed a multi-layer network learning algorithm, known as the Boltzmann machine model.
(3) BP neural network model: In 1986, D. E. Rumelhart and others, working with the multi-layer neural network model, proposed the error back-propagation (BP) algorithm for adjusting the weights of multi-layer networks. It solved the learning problem of multi-layer feedforward neural networks and demonstrated that such networks have strong learning ability, able to complete many learning tasks and solve many practical problems.
(4) Parallel distributed processing theory: In 1986, Rumelhart and McClelland edited "Parallel Distributed Processing: Explorations in the Microstructure of Cognition", in which they established the theory of parallel distributed processing, devoted mainly to the microstructure of cognition. The book also analyzed in detail the error back-propagation (BP) algorithm for multi-layer feedforward networks with nonlinear continuous transfer functions, solving the long-standing problem that no effective algorithm existed for adjusting the weights of such networks. The approach can solve problems the perceptron cannot, answered the questions about the limitations of neural networks raised in "Perceptrons", and showed in practice that artificial neural networks have strong computing power.
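A compact sketch of error back-propagation applied to the XOR problem that the single-layer perceptron could not solve: a 2-4-1 network with sigmoid units, where the output error is propagated back to adjust both weight layers. The layer sizes, learning rate, epoch count, and random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])            # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)         # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)         # hidden -> output
lr = 1.0

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)               # error at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)                # error at the hidden pre-activation
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))                       # typically close to [0, 1, 1, 0]
```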
(5) Cellular neural network model: In 1988, Chua and Yang proposed the cellular neural network (CNN) model, a large-scale nonlinear computational simulation system with the characteristics of cellular automata. Kosko built the bidirectional associative memory (BAM) model, which has unsupervised learning capability.
(6) Darwinism model: The Darwinism model proposed by Edelman had a great influence in the early 1990s. He established a neural network system theory.
(7) In 1988, Linsker proposed a new self-organization theory for perceptron networks and, building on Shannon's information theory, formed the maximum mutual information theory, sparking information-theoretic applications of neural networks.
(8) In 1988, Broomhead and Lowe proposed a design method for layered networks using radial basis functions (RBF), thereby linking the design of neural networks with numerical analysis and linear adaptive filtering.
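A sketch in the spirit of an RBF network: hidden units are Gaussian radial basis functions centered on chosen points, and the output layer is fitted by linear least squares, which is where the link to numerical analysis and linear filtering comes from. The centers, width, and toy regression task are assumptions for illustration only.

```python
import numpy as np

# Toy regression task: approximate sin(x) on [0, 2*pi].
x = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(x).ravel()

centers = np.linspace(0, 2 * np.pi, 10)[None, :]   # hand-picked RBF centers
width = 0.8

def rbf_features(x):
    """Gaussian radial basis activations of each hidden unit for each input."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

Phi = rbf_features(x)                               # (50, 10) hidden-layer design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # output weights by least squares

pred = rbf_features(x) @ w
print("max abs error:", np.max(np.abs(pred - y)).round(3))
```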
(9) In 1991, Haken introduced synergetics into neural networks. In his theoretical framework, he held that the cognitive process is spontaneous, and asserted that the process of pattern recognition is the process of pattern formation.
(10) In 1994, Liao Xiaoxin proposed a mathematical theory and foundation for cellular neural networks, bringing new progress to the field. By expanding the class of activation functions, he gave more general models of delayed cellular neural networks (DCNN), Hopfield neural networks (HNN), and bidirectional associative memory networks (BAM).
(11) In the early 1990s, Vapnik et al. proposed the concepts of the support vector machine (SVM) and the VC (Vapnik-Chervonenkis) dimension.
After years of development, hundreds of neural network models have been proposed.
