With the advent of the artificial intelligence era, surging demand for computing power has become a new trend

Direction 1: AI and computing power complement each other

Artificial intelligence (abbreviated AI) is a new technical science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is an important driving force for the new round of technological revolution and industrial transformation.

Artificial intelligence is an important part of the discipline of intelligence. It attempts to understand the essence of intelligence and to produce new intelligent machines that can respond in ways similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since the birth of artificial intelligence, its theory and technology have become increasingly mature, and its application fields have continued to expand. It is imaginable that the technological products brought by artificial intelligence in the future will be "containers" of human wisdom. Artificial intelligence can simulate the information processes of human consciousness and thinking. Artificial intelligence is not human intelligence, but it can think like human beings and may eventually surpass human intelligence.
Artificial intelligence is an extremely challenging science, and those engaged in it must understand computer science, psychology, philosophy, and more. It is a very broad field consisting of different areas such as machine learning and computer vision. In general, a major goal of artificial intelligence research is to enable machines to perform complex tasks that normally require human intelligence; but different eras and different people have understood this "complex work" differently. In December 2017, "artificial intelligence" was selected as one of the "Top Ten Buzzwords in Chinese Media in 2017". On September 25, 2021, to promote the healthy development of artificial intelligence, the "New Generation Artificial Intelligence Code of Ethics" was released.


Regarding what "intelligence" is, the question involves issues such as consciousness, the self, and the mind (including the unconscious mind). It is generally accepted that the only intelligence humans understand is their own, yet our understanding of our own intelligence is very limited, as is our understanding of the elements that constitute it, so it is difficult to define what artificial intelligence is. Research on artificial intelligence therefore often involves the study of human intelligence itself. Other intelligence, in animals or in other man-made systems, is also generally considered an AI-related research topic.
Professor Nilsson defined artificial intelligence as "a discipline about knowledge: the science of how to represent knowledge and how to acquire and use it." Professor Winston of the Massachusetts Institute of Technology held that "artificial intelligence is the study of how to make computers do intelligent work that in the past only humans could do." These statements reflect the basic ideas and content of the discipline: artificial intelligence studies the laws of human intelligent activity, constructs artificial systems with a degree of intelligence, and investigates how to make computers complete tasks that previously required human intelligence; that is, it studies the basic theories, methods, and techniques for simulating certain intelligent human behaviors with computer hardware and software.
Since the 1970s, artificial intelligence has been called one of the world's three cutting-edge technologies (alongside space technology and energy technology), and it is also considered one of the three cutting-edge technologies of the 21st century (alongside genetic engineering and nanoscience). This is because it has developed rapidly over the past 30 years, has been widely applied across many disciplines, and has achieved fruitful results. Artificial intelligence has gradually become an independent branch, with its own system in both theory and practice.

Artificial intelligence is a discipline that studies how to use computers to simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning). It mainly concerns the principles of computer-based intelligence, the construction of computers resembling human-brain intelligence, and the realization of higher-level applications on computers. Artificial intelligence involves computer science, psychology, philosophy, linguistics, and more; indeed it touches almost every discipline of the natural and social sciences, and its scope extends far beyond computer science. The relationship between artificial intelligence and the science of thinking is that of practice to theory: artificial intelligence sits at the technical, applied level of the science of thinking and is one of its applied branches. From the standpoint of thinking, artificial intelligence need not be limited to logical thinking; taking imagery thinking and inspirational thinking into account could promote breakthrough development. Mathematics is often regarded as the foundational science of many disciplines, and the science of intelligence must also borrow mathematical tools. Mathematics already plays a role in standard logic and fuzzy mathematics; as it enters the discipline of artificial intelligence, the two will promote each other and develop faster.

For example, heavy scientific and engineering calculations were originally undertaken by the human brain. Now computers can not only complete such calculations, but do them faster and more accurately than human brains; consequently, people today no longer regard such calculations as "complex tasks that require human intelligence to complete". Evidently, the definition of complex work changes with the development of the times and the advancement of technology, and the specific goals of the science of artificial intelligence naturally develop with the times as well: on the one hand it keeps making new progress, and on the other it turns to more meaningful and more difficult goals.
Generally, the mathematical foundations of machine learning are statistics, information theory, and control theory, along with other, non-mathematical subjects. This kind of machine learning depends heavily on "experience": the computer must continually acquire knowledge and learning strategies from the experience of solving one class of problems, use that empirical knowledge when encountering similar problems, and accumulate new experience, just as ordinary people do. We can call this learning style "continuous learning". But besides learning from experience, human beings also create; this might be called "leap learning", known in some contexts as "inspiration" or "epiphany". For a long time, the hardest thing for a computer to learn has been "epiphany". More strictly, computers struggle with "qualitative change that does not depend on quantitative change": it is hard for them to go directly from one "quality" to another, or from one "concept" to another. Because of this, "practice" here is not the same as human practice: the human practical process includes both experience and creation.
This is something that intelligence researchers dream of.
In 2013, SC Wang, a data researcher at Teijin Data General Data Center, developed a new data analysis method, which led to a new way of studying the properties of functions. The author found that the new data analysis method offers a way for computers to learn to "create"; in essence, it provides a fairly effective model of human "creativity". This ability is conferred by mathematics, and it is an "ability" that ordinary people cannot possess but computers can. Since then, computers are good not only at calculating but, by virtue of being good at calculating, also at creating. Computer scientists should categorically deny "creative" computers overly comprehensive operating capabilities; otherwise computers may one day truly "retaliate" against humans.
Looking back at the derivation of the new method and its mathematics broadened the author's understanding of thinking and of mathematics. Mathematics is concise, clear, reliable, and strong in modeling. Throughout the history of mathematics, the creativity of its masters shines everywhere. That creativity is presented in the form of mathematical theorems and conclusions, and the defining feature of a mathematical theorem is a logical structure, rich in information, built on a few basic concepts and axioms expressed in a formal language. It is fair to say that mathematics is the subject that most simply and directly reflects (at least one model of) creativity.

In the summer of 1956, a group of far-sighted young scientists led by McCarthy, Minsky, Rochester, and Shannon gathered to study and discuss a series of issues around using machines to simulate intelligence, and for the first time put forward the term "artificial intelligence", marking the official birth of the new discipline. IBM's "Deep Blue" computer defeating the human world chess champion was a vivid demonstration of artificial intelligence technology.
Since the formal introduction of artificial intelligence as a subject in 1956, the field has made great progress over more than 50 years and has become a broad, cross-cutting frontier science. In general, the goal of artificial intelligence is to make computers, machines, think like humans. To build a machine that can think, one must know what thinking is, and further, what wisdom is. What kind of machine counts as intelligent? Scientists have made cars, trains, airplanes, radios, and so on, which imitate the functions of our bodily organs; but can the functions of the human brain be imitated? So far, we know only that the thing inside our skulls is an organ composed of billions of nerve cells. We know very little about it, and imitating it may be the most difficult undertaking in the world.
When the computer appeared, humanity finally had a tool that could simulate human thinking, and in the years that followed countless scientists worked toward this goal. Today, artificial intelligence is no longer the preserve of a few scientists; almost every university computer science department in the world has people studying it, and computer science students take courses on the subject. Thanks to these unremitting efforts, computers now seem to have become quite smart. For example, in May 1997, the Deep Blue computer developed by IBM defeated the chess grandmaster Kasparov. You may not notice that, in some places, computers already help people perform tasks that originally belonged to humans, serving them with high speed and accuracy. Artificial intelligence has always been at the frontier of computer science, and computer programming languages and other software owe much of their progress to advances in artificial intelligence.
On March 4, 2019, the Second Session of the Thirteenth National People's Congress held a press conference. The spokesperson of the conference, Zhang Yesui, said that legislative items closely related to artificial intelligence had been included in the legislative plan.
The "Deep Learning Platform Development Report (2022)" holds that, with the maturing of the technological, industrial, and policy environment, artificial intelligence has passed the preparatory period of accumulating technical theory and building tool platforms, and has begun to enter a golden decade of large-scale application and value release aimed at empowering industry.

In April 2023, the US "Science Times" published an article introducing five leading technologies that are currently profoundly changing the healthcare field: wearable devices and applications, artificial intelligence and machine learning, telemedicine, robotics, and 3D printing.


The main material basis for studying artificial intelligence, and the machine platform on which artificial intelligence technology can be realized, is the computer; the history of artificial intelligence is bound up with the history of computer science and technology. Besides computer science, artificial intelligence involves information theory, cybernetics, automation, bionics, biology, psychology, mathematical logic, linguistics, medicine, philosophy, and many other disciplines. The main content of artificial intelligence research includes knowledge representation, automatic reasoning and search methods, machine learning and knowledge acquisition, knowledge processing systems, natural language understanding, computer vision, intelligent robotics, and automatic programming.

Research Methods
There is no unified principle or paradigm guiding AI research today, and researchers have debated many issues. Several long-standing questions are: Should AI be modeled psychologically or neurologically? Or is human biology as irrelevant to AI research as bird biology is to aerospace engineering? Can intelligent behavior be described by simple principles such as logic or optimization, or must a large number of completely unrelated problems be solved one by one?
Can intelligence be expressed using high-level symbols, such as words and ideas, or does it require the processing of "sub-symbols"? John Haugeland proposed the concept of GOFAI (Good Old Fashioned Artificial Intelligence) and also proposed that artificial intelligence should be classed as synthetic intelligence, a notion later adopted by some non-GOFAI researchers.

Brain simulation
Main article: Cybernetics and computational neuroscience

During the 1940s and 1950s, many researchers explored the links between neurology, information theory, and cybernetics. Some rudimentary intelligences were built from electronic networks, such as W. Grey Walter's turtles and the Johns Hopkins Beast. These researchers often held technical meetings at Princeton University and at the Ratio Club in the UK. By 1960, most had abandoned this approach, although its principles were taken up again in the 1980s.

Symbol processing
Main article: GOFAI

When digital computers were developed in the 1950s, researchers began to explore whether human intelligence could be reduced to symbol processing. The research was concentrated at Carnegie Mellon University, Stanford University, and MIT, each with its own research style. John Haugeland called these methods GOFAI (Good Old Fashioned Artificial Intelligence). In the 1960s, symbolic methods achieved great success at simulating high-level thinking in small demonstration programs, and methods based on cybernetics or neural networks were put on the back burner. Researchers of the 1960s and 1970s were convinced that symbolic methods would eventually succeed in creating machines with strong artificial intelligence, and this was their goal.


Cognitive simulation
The economist Herbert Simon and Allen Newell studied human problem-solving abilities and attempted to formalize them; in doing so they laid foundations not only for artificial intelligence but also for cognitive science, operations research, and management science. Their research team used the results of psychology experiments to develop programs simulating human problem-solving methods. This approach was carried on at Carnegie Mellon University and reached its peak with Soar in the 1980s.

Logic-based
Unlike Allen Newell and Herbert Simon, John McCarthy believed that machines need not simulate the human mind; they should instead try to find the essence of abstract reasoning and problem solving, whether or not people use the same algorithms. His laboratory at Stanford University focused on using formal logic to solve a wide variety of problems, including knowledge representation, intelligent planning, and machine learning. Work on logical methods at the University of Edinburgh and elsewhere in Europe led to the programming language Prolog and the science of logic programming.

"Anti-logic"
Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving hard problems in computer vision and natural language processing required specialized solutions; they argued that no simple, universal principle (such as logic) could capture all intelligent behavior. Roger Schank described their "anti-logic" approach as "scruffy". Common-sense knowledge bases (such as Doug Lenat's Cyc) are examples of "scruffy" AI, because they must encode complex concepts by hand, one at a time.

Knowledge-based
When computers with large memories appeared around 1970, researchers began to build knowledge into application software along these three approaches. This "knowledge revolution" led to the development of expert systems, the first truly successful form of artificial intelligence software.
The "knowledge revolution" also made people realize that a great deal of seemingly simple artificial intelligence software may require vast amounts of knowledge.

Sub-symbolic approach
In the 1980s, progress in symbolic artificial intelligence stalled, and many came to believe that symbolic systems could never imitate all the processes of human cognition, especially perception, robotics, machine learning, and pattern recognition. Many researchers turned to sub-symbolic methods for solving specific artificial intelligence problems.
Bottom-up, embodied approaches: interface agents, embedded environments (robotics), and behaviorism. Researchers in the new AI robotics field, such as Rodney Brooks, rejected symbolic artificial intelligence and focused on basic engineering problems such as robot movement and survival. Their work returned to the ideas of the early cybernetics researchers and proposed the use of control theory in artificial intelligence. This accords with the embodied-cognition argument in cognitive science: higher intelligence requires individual representations (such as of movement, perception, and imagery).

Computational intelligence
In the 1980s, David Rumelhart and others revived neural networks and connectionism. This and other sub-symbolic methods, such as fuzzy control and evolutionary computation, belong to the research field of computational intelligence.

Statistics
In the 1990s, artificial intelligence research developed sophisticated mathematical tools for solving specific sub-problems. These tools are genuinely scientific, in the sense that their results are measurable and verifiable, and they are one reason for artificial intelligence's success. The shared mathematical language also allows collaboration with established disciplines (such as mathematics, economics, and operations research). Stuart J. Russell and Peter Norvig describe these advances as nothing less than a "revolution" and "the victory of the neats". Some criticize these techniques for being too focused on specific problems while ignoring the long-term goal of strong artificial intelligence.


Integrated Approach
The intelligent agent paradigm
An intelligent agent is a system that perceives its environment and takes actions to achieve its goals. The simplest intelligent agents are programs that solve specific problems; more complex agents include humans and human organizations (such as corporations). The paradigm lets researchers study individual problems and find useful, verifiable solutions without committing to a single method. An agent that solves a particular problem can use any method that works: some agents use symbolic and logical methods, others use sub-symbolic neural networks or other novel approaches. The paradigm also gives researchers a common language for communicating with other fields, such as decision theory and economics, which likewise use the concept of abstract agents. The intelligent agent paradigm became widely accepted in the 1990s.

Agent architectures and cognitive architectures
Researchers have designed systems to handle the interactions among intelligent agents in multi-agent systems. A system containing both symbolic and sub-symbolic components is called a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. Hierarchical control systems provide a bridge between reactive-level sub-symbolic AI and traditional symbolic AI at the highest level, while relaxing the time constraints on planning and world modeling. Rodney Brooks' subsumption architecture was an early hierarchical-system proposal.

Intelligence simulation
Simulation of machine sight, hearing, touch, feeling, and modes of thinking: fingerprint recognition, face recognition, retina recognition, iris recognition, palmprint recognition, expert systems, intelligent search, theorem proving, logical reasoning, game playing, and information sensing and dialectical processing.

Disciplinary scope
Artificial intelligence is a frontier discipline lying at the three-way intersection of natural science, social science, and technical science.

Involved disciplines
Philosophy and cognitive science, mathematics, neurophysiology, psychology, computer science, information theory, cybernetics, uncertainty theory, bionics, social structure and scientific outlook on development.

Research Areas
Language learning and processing, knowledge representation, intelligent search, reasoning, planning, machine learning, knowledge acquisition, combinatorial scheduling problems, perception, pattern recognition, logic programming, soft computing, management of imprecision and uncertainty, artificial life, neural networks, complex systems, genetic algorithms, and human thinking; the most critical problem is shaping and improving machines' capacity for independent creative thought.


Security issues
Artificial intelligence is still being studied, but some scholars believe that letting computers possess intelligence is very dangerous: they might resist humanity. This hidden danger has appeared in many films, where the key factor is allowing machine autonomous consciousness to arise and persist. If a machine possesses autonomous consciousness, it means the machine has creativity, self-preservation awareness, emotion, and spontaneity equal or similar to that of humans.

Implementation methods
There are two different ways to implement artificial intelligence on computers. One is to use traditional programming techniques to make the system appear intelligent, regardless of whether the methods used are the same as those used by humans or other organisms. This is called the engineering approach, and it has made achievements in fields such as character recognition and computer chess. The other is the modeling approach, which cares not only about the effect but also requires the implementation to be the same as, or similar to, the methods used by humans or other organisms. Genetic algorithms (GA) and artificial neural networks (ANN) both belong to this latter type: genetic algorithms simulate the genetic and evolutionary mechanisms of humans or other organisms, while artificial neural networks simulate the activity of nerve cells in human or animal brains. To achieve the same intelligent effect, either method can usually be used. With the former, the program logic must be specified manually and in detail. If the application is simple, this is convenient; if the application is complex, with more roles and a larger activity space, the corresponding logic becomes very complicated (growing exponentially), and manual programming becomes cumbersome and error-prone. Once an error occurs, the original program must be modified, recompiled, and debugged, and a new version or patch must be delivered to users, which is troublesome. With the latter method, the programmer designs an intelligent system (a module) to control each role. At first this intelligent system understands nothing, like a newborn baby, but it can learn, gradually adapt to its environment, and cope with various complex situations.
Such a system often makes mistakes at first, but it can learn from them and may get things right the next time it runs; at the very least, it will not repeat the same mistake forever, and there is no need to release a new version or apply a patch. Implementing artificial intelligence this way requires the programmer to think in a biological way, which makes it harder to get started; but once mastered, it can be applied widely. Because this method does not require specifying a role's activity rules in detail, it usually saves more labor than the former method when applied to complex problems.
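As an illustration of the modeling approach described above, here is a minimal genetic-algorithm sketch in Python. The objective function, population size, and operators are illustrative choices for this article, not a standard implementation:

```python
import random

def evolve(fitness, pop_size=50, generations=100,
           lo=-10.0, hi=10.0, mutation_scale=0.5, seed=0):
    """Minimal genetic algorithm over real-valued genomes:
    rank selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness
        parents = pop[:pop_size // 2]              # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                    # crossover: blend two parents
            child += rng.gauss(0, mutation_scale)  # mutation: random perturbation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)

# Maximize a toy objective whose optimum (x = 3) is known in advance.
best = evolve(lambda x: -(x - 3.0) ** 2)  # best converges near 3
```

Note the design choice: the fitter half survives unmutated each generation (elitism), so the best solution found never gets worse, matching the "learns from its mistakes" behavior described above.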

In a classic digital computer, the central processing unit (CPU) includes an arithmetic unit and an instruction controller, where the arithmetic unit performs the main computations.
The narrow definition of computing power is the theoretical maximum number of floating-point operations per second (FLOPS) a computer can perform. But computers have more than computing power: they also have data storage and access capabilities, the ability to exchange data with the outside world, and data display capabilities.
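To make this narrow definition concrete, theoretical peak FLOPS can be estimated as cores × clock rate × floating-point operations per cycle. The figures below describe a hypothetical CPU, not any specific product:

```python
def peak_flops(cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak = cores x clock x FLOPs issued per cycle.
    Real workloads reach only a fraction of this upper bound."""
    return cores * clock_hz * flops_per_cycle

# Hypothetical 16-core CPU at 3 GHz, 32 double-precision FLOPs
# per cycle per core (e.g. two 512-bit fused multiply-add units):
peak = peak_flops(16, 3.0e9, 32)  # 1.536e12, about 1.5 TFLOPS
```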
In a broad sense, computing power is the ability of computing equipment or computing/data centers to process information: the ability of computer hardware and software to cooperate in meeting a given computational demand.
Currently, computing power is rendered in English as "computing power" or "hash rate", among other terms, but such names do not reflect the essence of the concept.
Academician Sun Ninghui of the Institute of Computing Technology, Chinese Academy of Sciences, and others have proposed expressing the ability to compute with the term "computility", by analogy with electricity. The connotation of computing power is far more complex than that of electric power, and "computility" fits the essence of the term well.

Capability needs to be measured, and how a computing device's power is measured depends on the type of information it processes. In high-performance computing, computing power is measured in double-precision floating-point operations per second; in artificial intelligence scenarios, in single-precision, half-precision, or integer operations; in blockchain mining, by how many hash collisions can be attempted per second, i.e. the speed at which hash function outputs are computed; and in high-throughput scenarios, by the number of bytes processed per second.

The "computing power network" proposed by China Mobile and other operators is, in essence, a matter of connecting clouds into a network: cloud-network integration. Computing power becomes infrastructure, a utility used on demand and paid for as used. For users, the cloud shifts from buying to renting, and the computing power network shifts from renting to using: what you rent is equipment; what you use is capability. The computing power network, like a power grid for computation, thus conveys the meaning of power-style service.

The computing power economy is the part of the digital economy composed of computing power equipment and computing power networks.

Computing power (also known as hash rate) is a measure of the processing power of the Bitcoin network: the speed at which computers calculate the outputs of a hash function. The Bitcoin network must perform intensive mathematical and cryptographic operations for security purposes. For example, when the network reaches a hash rate of 10 TH/s, it can perform 10 trillion hash calculations per second.
In the process of obtaining Bitcoin through "mining", we need to find a corresponding solution m. For any 64-character hash value, there is no fixed algorithm for finding its solution m; it can only be found through random hash collisions performed by computers. How many hash collisions a mining machine can perform per second represents its "computing power", written in units of hash/s. This is the so-called proof-of-work mechanism, PoW (Proof of Work).
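The hash-collision search just described can be sketched in a few lines of Python. This is a toy illustration with a made-up block header and an artificially low difficulty, not the real Bitcoin block format:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int):
    """Brute-force a nonce so that SHA-256(block_data + nonce) begins
    with `difficulty_bits` zero bits. There is no shortcut, which is
    why attempts per second ("hash rate") is the natural measure."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

# Toy difficulty of 16 leading zero bits: ~65,536 attempts on average.
nonce, digest = mine(b"example block header", 16)
```

Note that verification is cheap (one hash), while finding the nonce is expensive; that asymmetry is the essence of proof of work.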

Recently, the computing power of the entire Bitcoin network fully entered the P era (1 PH/s = 1,000 TH/s, 1 TH/s = 1,000 GH/s, and so on, using the decimal prefixes listed below). In an environment of soaring computing power, the arrival of the P era means Bitcoin has entered a new phase of the arms race.
Computing power is also a measure of the total computational capacity a network expends to generate new blocks. The blockchain of each individual coin varies in the time it takes to generate a new block of transactions.

1 kH/s = 1,000 hashes per second.
1 MH/s = 1,000,000 hashes per second.
1 GH/s = 1,000,000,000 hashes per second.
1 TH/s = 1,000,000,000,000 hashes per second.
1 PH/s = 1,000,000,000,000,000 hashes per second.
1 EH/s = 1,000,000,000,000,000,000 hashes per second.
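A small helper makes these conversions concrete; the function name and prefix table below are illustrative, not from any particular library:

```python
# SI prefixes for hash rate, matching the table above.
PREFIX = {"k": 1e3, "M": 1e6, "G": 1e9, "T": 1e12, "P": 1e15, "E": 1e18}

def to_hashes_per_second(value: float, unit: str) -> float:
    """Convert a quantity like (10, "TH/s") to raw hashes per second."""
    return value * PREFIX[unit[0]]

rate = to_hashes_per_second(10, "TH/s")  # 1e13, i.e. 10 trillion hashes/s
```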

The early Bitcoin blockchain adopted a proof-of-work (PoW) mechanism highly dependent on node computing power to ensure the consistency of the network's distributed ledger. With the development of blockchain technology and the emergence of various competing currencies, researchers have proposed a variety of mechanisms that can reach consensus without relying on computing power, such as the proof-of-stake (PoS) consensus pioneered by Peercoin and the delegated proof-of-stake (DPoS) consensus mechanism pioneered by BitShares.

The security and immutability of the Bitcoin blockchain are guaranteed by the massive computing power behind the PoW consensus mechanism. Any attack that tampers with block data must recompute the SHA-256 proof of work for that block and all subsequent blocks, and must do so fast enough for the forged chain to outgrow the main chain; the cost of such an attack far exceeds its benefits. It is estimated that as of January 2016, the computing power of the Bitcoin blockchain reached 800,000,000 GH/s, i.e. 8×10^17 hash operations per second, exceeding the combined computing power of the world's Top500 supercomputers.

Security threats are by far the most important problem facing blockchains. Blockchains based on PoW consensus mainly face the 51% attack problem: nodes that control more than 51% of the network's total computing power are able to tamper with and forge blockchain data. Taking Bitcoin as an example, statistics show that large mining pools in China have accounted for more than 60% of the network's total computing power; in theory, these pools could cooperate to mount a 51% attack and double-spend bitcoins. Although in practice the cost of controlling 51% of the network's computing power far exceeds the gains from a successful attack, the threat of a 51% attack always exists. PoS-based consensus solves the 51% attack problem to a certain extent, but introduces the nothing-at-stake (N@S) attack problem when blocks fork. Researchers have proposed partially addressing the 51% attack by constructing PoW consensus algorithms that depend on both high computing power and high memory; more secure and effective consensus mechanisms still require further research and design.
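The relationship between an attacker's share of computing power and the chance of overtaking the honest chain can be quantified with the calculation given in section 11 of the Bitcoin whitepaper, sketched here in Python:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker holding fraction q of total hash
    power ever catches up from z blocks behind (Bitcoin whitepaper,
    section 11). For q >= 0.5 the attack always succeeds eventually."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

# With 10% of the hash power and 6 confirmations, the success
# probability is well under 0.1%; with >= 50% it is certain.
p_small = attacker_success(0.1, 6)
p_majority = attacker_success(0.51, 6)
```

This is why the text above stresses the 51% threshold: below it, each extra confirmation shrinks the attacker's chances exponentially; at or above it, success is only a matter of time.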

AI needs the support of computing power to reach its full potential. Here are some reasons why AI needs computing power:

  • Large-scale data processing: AI tasks usually involve a large amount of data processing, including data collection, storage, cleaning, and analysis. Powerful computing power can accelerate the data processing process, enabling AI systems to acquire and process large-scale data sets faster, thereby improving the efficiency of model training and inference.
  • Deep learning model training: Deep learning is one of the most commonly used technologies in the field of AI, but deep neural network models usually have a large number of parameters and complex structures. The training of these models requires a lot of computing resources and time. Powerful computing power can accelerate the model training process, enabling the model to converge faster and achieve better performance.
  • Complex models and algorithms: More complex and precise models and algorithms are constantly emerging in the field of AI. These models and algorithms may contain more layers, more complex calculation operations and larger parameter quantities. Powerful computing power can support the computing needs of these complex models and algorithms, improving their efficiency and accuracy.
  • Real-time application requirements: Some AI application scenarios require real-time response and inference capabilities, such as autonomous driving and intelligent voice assistants. These tasks have high requirements on computing power, and need to process a large amount of data and perform complex computing operations in a short time. Powerful computing power enables real-time performance and provides faster response and decision-making capabilities.
  • Model optimization and tuning: In order to improve the performance and effects of AI systems, model optimization and tuning are often required. This involves extensive experimentation and computational resources to try out different model architectures, hyperparameters, and optimization algorithms. Powerful computing power can speed up this process, improve optimization efficiency, and help find better model configurations and parameter settings.
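To make the scale of these training costs concrete, here is a back-of-the-envelope sketch using the common "≈ 6 × parameters × tokens" rule of thumb for transformer training FLOPs. The model size, GPU count, per-GPU throughput, and utilization figures below are hypothetical, chosen only for illustration:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough rule of thumb: total training compute ~= 6 * parameters * tokens
    (forward plus backward pass per token), commonly used for transformers."""
    return 6.0 * params * tokens

def days_on_cluster(flops: float, gpus: int, flops_per_gpu: float,
                    utilization: float = 0.4) -> float:
    """Wall-clock days at a given sustained hardware utilization."""
    return flops / (gpus * flops_per_gpu * utilization) / 86_400

# Hypothetical example: a 7-billion-parameter model trained on 1 trillion
# tokens, on 256 GPUs rated at a nominal 3e14 FLOP/s each.
total = training_flops(7e9, 1e12)
print(f"{total:.1e} FLOPs, roughly {days_on_cluster(total, 256, 3e14):.1f} days")
```

Even modest changes to model or dataset size multiply the total directly, which is why computing power is the binding constraint on how fast models can be trained and iterated.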

All in all, AI needs the support of computing power to process large-scale data, train complex models, serve real-time applications, and optimize algorithms. Strong computing power enables AI systems to realize their full potential and achieve better performance and results.

The increase in computing power can bring the following benefits:

  • Improve efficiency: Strong computing power can speed up the computing speed of AI systems and shorten the time required to process data and train models. This increases productivity and speeds up decision-making and response.
  • Improved accuracy: Increased computing power enables AI systems to process larger data sets and apply more complex models and algorithms. This improves the accuracy and predictive power of the model, making the results of the AI system more precise and reliable.
  • Supporting innovation: Powerful computing power provides technical support for innovation in the AI field. It enables researchers and developers to explore more complex and challenging problems and achieve higher levels of intelligent functionality.
  • Accelerated deployment: The increase in computing power helps to rapidly deploy AI solutions. The time required to process large-scale data and train complex models is reduced, and AI technology can be applied to actual scenarios more quickly to achieve commercialization and actual value.
  • Promoting cross-fields: Strong computing power provides the possibility for cross-applications between different fields. It can support the application of AI in industries such as medical care, finance, and manufacturing, as well as the integration with other technical fields, such as the Internet of Things, big data analysis, etc.

The relationship between AI and computing power is complementary. Powerful computing power provides AI with computing and processing capabilities, enabling AI systems to run more efficiently, process large-scale data, train complex models, and achieve higher-level intelligent functions. The improvement of computing power has promoted the development and innovation of AI technology, providing a basis for wider application and commercialization.

Artificial intelligence needs the support of computing power, and the bottom layer of computing power is the chip. Applications of artificial intelligence are in a stage of rapid technological development and demand integration, and application scenarios will become more diversified, so the AI chip market in China and worldwide will develop further.

"The world is entering a period of economic development dominated by the information industry. We must seize the opportunity of the integrated development of digitalization, networking, and intelligence, and use informatization and intelligence as leverage to cultivate new growth drivers. We must highlight the leading and pillar industries, prioritize the cultivation of and vigorously develop a number of strategic emerging industrial clusters, and build new pillars of the industrial system. It is necessary to promote the deep integration of the Internet, big data, and artificial intelligence with the real economy to make the digital economy bigger and stronger."

China has always attached great importance to the development of the digital economy. Especially since 2017, driven by a series of top-level designs, China's digital economy has achieved remarkable results and has increasingly become a new engine of the national economy. At the same time, the digital economy has entered a new stage of development; coupled with the push of "new infrastructure", its application scenarios will become more diverse and industrial needs more complex and varied.

As the digital economy has become a key measure for economic recovery and the commanding heights of economic development in major countries around the world, where is the next track for global competition in the digital economy? Where is the new driving force for its transformation and development? How can China build on its existing advantages and promote more advanced digital technology, stronger computing power, and a higher level of digital industry?

Therefore, the research points to "computing power".

The size of computing power represents the strength of the ability to process digital information. From manual calculation in primitive society, to mechanical calculation in antiquity, to electronic computing in modern times, and to digital computing today, computing power reflects humanity's ability to process data and also represents the development level of human intelligence.

At present, computational science is moving from traditional computational simulation and digital simulation toward a fourth paradigm based on the deep integration of high-performance computing, scientific big data, and deep learning. Computing power is measured by indicators such as computing speed, algorithms, big-data storage, communication capabilities, and cloud computing service capabilities, and it empowers the digital transformation and upgrading of all industries through digital software and hardware infrastructure such as artificial intelligence, big data, satellite networks, optical fiber networks, the Internet of Things, cloud platforms, and near-earth communication.

Under this trend, the demand for data capacity and computing power shows a state of cyclical enhancement, and the continuous increase in the amount of data requires the matching evolution of computing power.

The "New Moore's Law" also points out that every 18 months, humanity generates as much new data as the total amount accumulated in the entire prior history of computing. The demand for computing power driven by data at this scale has reached unprecedented heights and intensity, and computing power has become an important driving force for the continued, deepening development of the digital economy. Without computing power, all digital technologies are out of the question.
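The compounding implied by this 18-month doubling can be computed directly. A simple sketch of the growth factor, assuming the doubling rate holds exactly:

```python
def data_multiple(years: float, doubling_months: float = 18.0) -> float:
    """Cumulative growth factor if data volume doubles every `doubling_months`."""
    return 2.0 ** (years * 12.0 / doubling_months)

# Under an 18-month doubling: 4x in 3 years, 64x in 9 years.
print(data_multiple(3), data_multiple(9))
```

Exponential data growth like this is exactly why demand for computing power compounds rather than grows linearly.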

Many experts believe that the "computing age" has arrived. On the one hand, computing power is expected to replace heat and electricity as the new engine that drives the digital economy forward; on the other hand, computing power is becoming a key factor affecting a country's comprehensive strength and international discourse power, and the core competitiveness between countries is increasingly focused on computing power as represented by computing speed, computing methods, communication capabilities, and storage capabilities. Whoever masters advanced computing power in the future will hold the initiative in development.

Fields of application of artificial intelligence (AI):

  • Computer vision: image recognition, object detection and tracking, face recognition, image generation, etc.
  • Natural language processing: text analysis, speech recognition, machine translation, sentiment analysis, etc.
  • Machine learning: data mining, pattern recognition, predictive analytics, recommender systems, etc.
  • Autonomous driving: self-driving cars, drone navigation, etc.
  • Intelligent voice assistant: voice control, intelligent dialogue, voice recognition and synthesis, etc.
  • Medical and health: medical image analysis, disease diagnosis, intelligent auxiliary diagnosis and treatment, etc.
  • Fintech: risk assessment, credit scoring, fraud detection, quantitative trading, etc.
  • Education field: personalized learning, intelligent educational software, online tutoring, etc.
  • Social media: content recommendation, sentiment analysis, social network analysis, etc.
  • Smart manufacturing: industrial automation, quality control, supply chain optimization, etc.

Application areas of computing power:

  • Scientific research: scientific simulation, astrophysics, climate simulation, etc.
  • Data analysis: big data processing, data mining, pattern recognition, statistical analysis, etc.
  • Graphics rendering: game development, movie special effects, virtual reality (VR) and augmented reality (AR), etc.
  • Simulation and simulation: flight simulation, urban planning, architectural design, etc.
  • Quantitative finance: high-frequency trading, financial model analysis, risk management, etc.
  • Biomedicine: genomics analysis, protein folding, drug screening, etc.
  • Weather Forecasting: Meteorological models, weather forecasts and climate simulations, etc.
  • Encryption and security: password cracking, network security analysis, etc.
  • Artificial intelligence training: deep learning model training, neural network optimization, etc.
  • Cloud computing: large-scale data storage and processing, distributed computing, etc.

Direction 2: AI + Computing Power Creates the "Strongest Leader"

"The strongest leader" refers to a company or organization that occupies a monopoly or leadership position in a certain industry. These companies usually have strong strength, market share and technological advantages, and can lead the trend in the industry and affect the development and pattern of the entire market. Here are some key points to introduce the concept of "the strongest leader":

  • Market monopoly position: The strongest leader occupies a monopoly position in a specific industry and has a relatively high market share. This enables them to dominate the development of the market and grasp more resources and customers.
  • Technical or innovative advantages: The strongest leaders usually have advanced technical capabilities and innovative strength. They invest heavily in technology research and development, actively promote technological progress and innovation in the industry, so as to maintain a competitive advantage.
  • Strong brand influence: The strongest leader has a wide range of popularity and brand influence in the market. Their products or services are widely recognized and trusted, and consumers tend to choose their products or services.
  • Advantages in resources and scale: The strongest leader has abundant resources and strong capital strength. They usually have huge production capacity, globalized supply chains and extensive sales networks, and can gain cost advantages through economies of scale.
  • Industry influence and rule-making ability: The strongest leader has great influence on the industry and can guide and formulate the development trend and rules of the industry. They have a high status in industry associations, standards organizations and other institutions, and can participate in decision-making and formulate industry standards.

The emergence of the strongest leader is often the result of market competition and development, but it may also attract the attention of antitrust and competition policies. On the one hand, the market advantages of the strongest leaders can bring innovation, reduce costs, improve efficiency, and promote the development of the industry; on the other hand, their monopoly position may also lead to unfair market, lack of competition and innovation. Therefore, regulators usually keep a close eye on the behavior of the strongest leaders in order to keep the market competition fair and healthy.

In the field of AI, there are also some companies or organizations known as "the strongest leaders", which occupy a leading position in AI technology, market share and innovation. Here are some examples of some of the strongest leaders in the field of AI:

  • Google (Alphabet): As one of the largest Internet companies in the world, Google has strong technical strength and resources in the field of AI. Its artificial intelligence division, Google AI, is dedicated to the research and development of various AI technologies and applications, such as natural language processing, computer vision, deep learning, and more. Google's products and services, such as Google Search, Google Assistant, Google Translate, etc., all incorporate AI technology.
  • Microsoft: As the world's leading technology giant, Microsoft also plays an important role in the field of AI. Its AI research organization, Microsoft Research, has made many breakthroughs in the field of artificial intelligence. Microsoft's AI technology is widely used in its products and services, such as the intelligent assistant Cortana, Azure machine learning platform, etc.
  • IBM: IBM is a technology company with a long history and strong strength in the field of AI. IBM's AI platform Watson is known for its powerful cognitive computing capabilities and intelligent analysis capabilities. Watson has a wide range of applications in medical diagnosis, natural language processing, image recognition, etc.
  • Amazon: As one of the largest e-commerce platforms in the world, Amazon also has extensive applications in the field of AI. Its artificial intelligence assistant Alexa has strong capabilities in speech recognition and natural language processing. In addition, Amazon also provides a series of AI services through its AWS cloud computing platform, such as machine learning, image analysis, speech synthesis, etc.

In addition to the above companies, there are many companies that play an important role in the field of AI, such as Facebook, Apple, OpenAI, etc. Through continuous research and innovation, these companies have promoted the development and application of AI technology, leading the development direction of the entire industry. Their advantages in data sets, algorithm research, talent attraction, etc. make them have an important position and influence in the field of AI.

The rise of AI + computing power in different industries has its own characteristics. The following is an analysis of the cloud computing, logistics and financial fields:

Cloud Computing

The rise of AI+computing power in the field of cloud computing provides enterprises and individuals with powerful computing power and resources. The cloud computing platform provides large-scale distributed computing and storage capabilities, making the training and reasoning of AI algorithms more efficient.

Cloud computing provides elasticity and flexibility, and the scale of computing resources can be adjusted according to demand, so that AI applications can be expanded and contracted according to actual conditions.

AI services and tools on the cloud computing platform allow developers to build and deploy AI models more conveniently, lowering the development threshold.

AI algorithms and models can be deployed on the cloud to achieve remote access and real-time response, providing more intelligent functions for various application scenarios.

Logistics field

The application of AI+computing power in the logistics field brings more efficient logistics management and operation. By using AI technology, logistics data can be analyzed and optimized to improve the accuracy and efficiency of the logistics process.

AI applications in logistics include route planning, cargo tracking, inventory management, and more. By utilizing big data and machine learning algorithms, intelligent scheduling and forecasting can be achieved, reducing transportation time and costs.
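Route planning of the kind mentioned above is classically solved with shortest-path algorithms such as Dijkstra's. A minimal sketch over a toy depot network (the graph and the distances are made up purely for illustration):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts adjacency map:
    graph[u][v] = travel cost from u to v. Returns (cost, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

# Toy network: the direct depot->A->C route costs 5,
# but going via B costs only 4.
network = {
    "depot": {"A": 4, "B": 1},
    "B": {"A": 2, "C": 5},
    "A": {"C": 1},
    "C": {},
}
print(shortest_route(network, "depot", "C"))
```

Production logistics systems layer much more on top (time windows, vehicle capacities, live traffic data), but the computational cost of exploring these route graphs at scale is precisely where strong computing power pays off.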


AI can also be applied to security inspection and risk management in logistics. For example, using computer vision technology for cargo scanning and identification, improving security and preventing fraud.

The financial sector

The rise of AI+ computing power in the financial field has brought about smarter and more efficient financial services. By using AI algorithms and big data analysis, financial institutions can conduct risk assessment, transaction analysis and forecasting more accurately.

AI applications in the financial sector include credit assessment, portfolio optimization, risk management, and more. AI models can analyze large amounts of financial data, identify potential risks and opportunities, and provide corresponding decision support.


Financial technology (Fintech) companies use AI + computing power technology innovation to launch various intelligent financial products and services, such as intelligent investment advice, intelligent payment, anti-fraud, etc., which have changed the pattern of the traditional financial industry.

In general, the rise of AI+ computing power in industries such as cloud computing, logistics, and finance has promoted the digital transformation and intelligent development of these industries. By making full use of AI technologies such as big data, machine learning, and deep learning, combined with powerful computing support, it is possible to improve efficiency, reduce costs, optimize decision-making, and create smarter and more convenient services and products.

The combination of AI+computing power can bring huge competitive advantages to enterprises and organizations, make them rise faster, and pose a greater competitive obstacle to latecomers. Here are some reasons:

  • Efficient processing of large-scale data: AI needs to process a large amount of data for training and learning, and powerful computing power can accelerate data processing and analysis. Enterprises with high computing power can collect, process, and utilize data faster, improving the accuracy and performance of their models.
  • Fast training of complex models: Complex AI models, such as deep neural networks, require massive computing resources for training. Powerful computing power accelerates model training, enabling enterprises to develop high-quality AI models faster and take the lead in the market.
  • Real-time applications and decision support: Certain application scenarios require AI to make decisions and respond in real time, such as autonomous driving and financial transactions. Powerful computing power provides instant computation, enabling the AI system to process large amounts of input data in real time and make accurate decisions and responses.
  • Algorithm optimization and innovation: Optimizing and innovating AI algorithms requires many experiments and iterations, and powerful computing power supports a more efficient optimization process. Leaders can leverage computing power to iterate and innovate on algorithms faster, maintaining their technological edge.

By making full use of the advantages of AI+computing power, leaders can build high-quality AI solutions faster, improve efficiency and reduce costs, thereby establishing competitive barriers in the market. Latecomers often need to invest a lot of resources and time to catch up with the leader's technology and market position, which enables the leader to maintain a competitive advantage for a certain period of time.

Therefore, AI + computing power has played a key role in the rapid rise of enterprises and the construction of competitive advantages. 

Direction 3: Challenges brought by the combination of AI+computing power

The hegemony phenomenon created by AI+computing power may have the following impacts and disadvantages on the market and the competitive environment:

  • Monopoly risk: When a certain company or organization achieves a dominant position in the field of AI + computing power, there is a risk of monopolizing the market. This can lead to restricted competition, barriers to entry for other competitors, and suppression of innovation. Monopolies may use their dominance to control markets, limit competition, and stifle innovation.
  • Data monopoly and privacy issues: The training and application of AI requires a large amount of data. When an enterprise holds a large amount of data and applies powerful computing power to AI analysis, it may monopolize data resources. This can make it difficult for other businesses to obtain enough data to compete, exacerbating market imbalances. Data monopoly also raises issues of privacy and data security, requiring enhanced regulatory and protection measures.
  • Technical barriers and difficulty of entry: When a company establishes a powerful AI + computing power platform and achieves a dominant position, other companies face higher technical barriers when entering the market. Strong computing power and resource advantages let leaders launch innovative products and solutions faster, while latecomers must invest heavily to catch up. This can lead to unfair competition, limiting the possibility of innovation.
  • Reliance risk and single point of failure: When only a few companies in the market hold dominant AI + computing power, other companies and users may over-rely on them. This creates a dependency risk: if one of these businesses encounters problems or fails, it could negatively affect the entire market and its users. Over-reliance on a few players also limits market diversity and innovation.

In order to mitigate these impacts and disadvantages, regulation and policy guidance need to be strengthened to ensure fair competition in the market, data privacy protection and technological innovation. At the same time, encouraging multiple companies to participate in the competition in the field of AI + computing power and promoting technology sharing and open cooperation will help break the monopoly situation and promote the development of innovation and market diversity.


Alright, this is the end of Xiao Yalan's blog content shared today, let's keep going!!!

Origin blog.csdn.net/weixin_74957752/article/details/131425879