Definition and relevance of artificial intelligence

  • Artificial intelligence (AI), also known as machine intelligence, is intelligence demonstrated by machines built by humans. The term usually refers to human-like intelligence realized by means of ordinary computer programs.

  • The core problems of AI include building abilities for reasoning, knowledge, planning, learning, communication, perception, and the movement and manipulation of objects that approach or even exceed those of humans.

  • The term "artificial intelligence" can be split into two parts, "artificial" and "intelligence". "Intelligence" covers questions of consciousness, the self, and the mind, including the unconscious mind.

  • Research in artificial intelligence is divided into several sub-fields:

    • Deduction, reasoning, and problem solving: research on embodied agents emphasizes the importance of sensorimotor skills, and neural network research attempts to reproduce these skills by simulating the structure of human and animal brains.

    • Knowledge representation: one of the core research problems in artificial intelligence. Its goal is to let machines store knowledge and derive new knowledge from it by reasoning according to certain rules. Many of the problems machines are expected to solve require extensive knowledge of the world, both prior knowledge stored in advance and knowledge obtained through reasoning.

    • Planning: an intelligent agent must be able to set goals and achieve them, which requires a way to build a predictive model of the world.

    • Machine learning: enables a machine to acquire knowledge from users and from input data, so that it can automatically make judgments and produce the corresponding outputs. It is mainly divided into two categories: supervised learning and unsupervised learning.

    • Natural language processing: studies how to process and use natural language; natural language understanding, in particular, means making computers "understand" human language.

    • Movement and control, perception, social ability, creativity, ethical management, etc.
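
The supervised/unsupervised split mentioned under machine learning can be made concrete with a small sketch. The following self-contained Python script is purely illustrative (the data and all function names are invented for this example): it fits a nearest-centroid classifier from labelled points, then recovers the same two-group structure with a tiny one-dimensional k-means that never sees the labels.

```python
import random

random.seed(0)

# Toy 1-D data: two groups centred near 0 and 10.
data = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(10, 1) for _ in range(50)]
labels = [0] * 50 + [1] * 50

# Supervised learning: a nearest-centroid classifier fitted on labelled data.
def fit_centroids(xs, ys):
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(y, []).append(x)
    return {y: sum(g) / len(g) for y, g in groups.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Unsupervised learning: 1-D k-means recovers the same structure without labels.
def kmeans(xs, k=2, iters=20):
    centres = random.sample(xs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            clusters[min(range(k), key=lambda i: abs(x - centres[i]))].append(x)
        centres = [sum(c) / len(c) if c else centres[i] for i, c in enumerate(clusters)]
    return sorted(centres)

centroids = fit_centroids(data, labels)
print(predict(centroids, 9.5))  # the label whose learned centroid is nearest
print(kmeans(data))             # two centres, found without any labels
```

Both methods find the same two groups; the difference is only in whether the labels are used during learning.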

  • Schools of thought: strong AI and weak AI

    • Strong artificial intelligence :

      • The view that it is possible to create genuinely intelligent machines that can reason and solve problems, and that such machines can be regarded as sentient and self-aware. There are two types of strong artificial intelligence:

        • Human-like artificial intelligence: machines that think and reason just as humans do.

        • Non-human-like artificial intelligence: machines that develop perception and consciousness entirely different from humans' and that reason in entirely different ways.

    • Weak AI

      • The view that it is impossible to build machines that genuinely reason and solve problems; such machines merely appear intelligent, but are not truly intelligent and have no autonomous consciousness.

    • Related discussion

      • Point of contention: if a machine's only job is to transform encoded data, is the machine thinking? John Searle argues that it is not. His Chinese Room thought experiment illustrates that if a machine merely converts data, and the data are an encoded representation of something, then without grasping the correspondence between the encoding and the real things, the machine cannot have any understanding of the data it processes. On this argument, Searle holds that even a machine that passes the Turing test is not necessarily thinking and conscious in the way a human is.

      • Philosophers' views

        • In his book Consciousness Explained, Daniel Dennett argues that a human is just a machine with a soul: why should we believe that "a human can be intelligent while an ordinary machine cannot"? He holds that a data-transforming machine such as the one described above could indeed have a mind and consciousness.

        • In his introductory philosophy textbook Think, Simon Blackburn points out that behavior which appears "intelligent" does not prove that the actor really is intelligent: I can never know whether another person is genuinely intelligent as I am, or merely looks intelligent. On this argument, since weak AI holds that machines can be made to appear intelligent, it cannot completely deny that such machines are really intelligent. Blackburn regards this as a matter of subjective attribution.

Note: The above is summarized from Wikipedia.

Discussion on artificial intelligence

  • Academia-related discussions

    • In "Uncertainty Artificial Intelligence"  , the uncertainty of artificial intelligence is fully discussed. The main points are as follows:

      1. Randomness and fuzziness are the most basic forms of uncertainty

        • Randomness and random mathematics:

          • Randomness, also called contingency, refers to the indeterminacy of an event's occurrence: because the conditions for the event are insufficient, there is no deterministic causal relation between the conditions and the outcome. Random (stochastic) mathematics is the tool for studying it.

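
As a minimal illustration of this point (entirely illustrative, standard library only): a single die roll is indeterminate, but random mathematics describes the regularity of the aggregate.

```python
import random

random.seed(42)

# A single roll is unpredictable (randomness), but probability theory
# ("random mathematics") describes the long-run aggregate: the frequency
# of any face converges to 1/6 as the number of rolls grows.
def frequency_of_six(n_rolls):
    hits = sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 6)
    return hits / n_rolls

print(frequency_of_six(10))       # small sample: erratic
print(frequency_of_six(100_000))  # large sample: close to 1/6
```

The individual event has no decisive cause-effect link, yet the ensemble obeys a precise law, which is exactly the division of labor the text describes.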
        • Fuzziness and fuzzy mathematics:

          • Fuzziness, also called non-clarity, arises because a concept itself is vague: it is hard to determine whether an object falls under the concept, there being no clear meaning in quality and no clear boundary in quantity. This lack of clear boundaries is not caused by people's subjective cognition; it is an objective attribute of things. The indeterminacy of a concept's extension can be studied with fuzzy mathematics as the tool.

          • AI research on fuzziness usually fuzzifies existing precise knowledge-processing methods in various ways: fuzzy predicates, fuzzy rules, fuzzy frames, fuzzy semantic networks, fuzzy logic, and so on. Fuzzy logic later developed into possibility-based reasoning, which handles fuzziness better by means of possibility measures and necessity measures.
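
A tiny sketch of what fuzzification means in practice (the "tall" concept and its thresholds are invented for this example): membership is a grade in [0, 1] rather than a crisp yes/no, and the classical connectives are replaced by Zadeh's min/max operators.

```python
# Fuzzy set "tall" over heights in cm: membership is a grade in [0, 1]
# rather than a crisp yes/no, capturing the concept's unclear boundary.
def tall(height_cm):
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between the two bounds

# Standard (Zadeh) fuzzy connectives: AND = min, OR = max, NOT = 1 - x.
def f_and(a, b): return min(a, b)
def f_or(a, b): return max(a, b)
def f_not(a): return 1.0 - a

print(tall(175))                           # 0.5: partially tall
print(f_and(tall(175), f_not(tall(175))))  # 0.5, not 0 as in crisp logic
```

Note that "tall AND not tall" has grade 0.5 rather than 0: the law of non-contradiction weakens, which is precisely how fuzzy logic accommodates boundary cases.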

        • The relationship between randomness and fuzziness

          • People use stochastic mathematics and fuzzy mathematics to study randomness and fuzziness respectively, understanding uncertainty from different perspectives, each with its own axiom system. When studying uncertainty in the broad sense, however, whether the preconditions of these axioms hold is often a big question.

      2. Chaos, Fractals and Complex Networks

        • Chaos

          • Since the 1990s, researchers have combined chaos theory with neural networks, proposing a variety of chaotic neural network models and exploring information-processing methods based on chaos theory: the chaotic response of neuron models, the scaling parameters of the refractory term, the properties of uncertain time-decay constants, and the relationship between these parameters and the chaotic response of the network. The plots produced by such chaotic neural network models closely resemble EEG recordings.

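
The defining property of chaos that these models exploit can be shown with the simplest chaotic system, the logistic map (a standard textbook example, not taken from the chaotic-neural-network literature itself):

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n). At r = 4 it is chaotic:
# orbits from nearly identical starting points diverge exponentially,
# the "sensitive dependence on initial conditions" that chaotic neural
# network models exploit.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)  # perturb the start by one part in 10^9
print(abs(a[5] - b[5]))         # still tiny
print(abs(a[50] - b[50]))       # the orbits have fully separated
```

The rule is fully deterministic, yet long-term prediction is impossible in practice, which is the sense in which chaos is a source of uncertainty.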
        • Fractals

          • By analysing large amounts of real data, scientists have found that price changes are scale-free and self-similar. Fractals can be used to model price changes over time, and multifractals can further describe market uncertainty. Unlike conventional statistical methods, fractal analysis decomposes a complex system and reveals its internal fine structure and the information it contains, whereas statistical methods yield only macroscopic, coarse estimates.

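
Statistical self-similarity can be checked numerically on the simplest self-similar process, a Brownian-style random walk (a stand-in for real price data; the multifractal models mentioned above generalise this single scaling exponent):

```python
import random

random.seed(1)

# A Brownian-style random walk is statistically self-similar: increments
# over lag k have variance proportional to k (Hurst exponent H = 0.5).
# Fractal and multifractal price models generalise this scaling law.
walk = [0.0]
for _ in range(20_000):
    walk.append(walk[-1] + random.gauss(0, 1))

def increment_variance(series, lag):
    diffs = [series[i + lag] - series[i] for i in range(0, len(series) - lag, lag)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

print(increment_variance(walk, 1))    # close to 1
print(increment_variance(walk, 100))  # close to 100: variance grows with lag
```

Zoomed in or out by a factor of 100, the walk looks statistically the same once rescaled by the square root of the lag; departures from this law are what multifractal analysis measures.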
        • Complex networks

          • Complex networks with small-world effects and scale-free properties have attracted much attention in recent years. Many real networks, such as the Internet, the World Wide Web, power grids, aviation networks, food chains, and interpersonal networks, are complex networks of this kind.

          • Humans are among the most complex systems capable of perceiving, transmitting, processing, developing, and using uncertain information, and it is through the senses that we perceive the external world. The human brain has memory, thought, consciousness, and intelligence; structurally, it is a complex network of more than 14 billion nerve cells that can flexibly process all kinds of complex, uncertain information. Researchers are trying to construct cognitive models with complex-network characteristics that imitate the central nervous system, with neurons as the basic unit. In recent years they have also tried to combine neural network models, fuzzy reasoning and representation, evolutionary algorithms, and other techniques into new models that reflect the uncertainty of human cognition.
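
The scale-free property mentioned above can be reproduced with a few lines of code. This is a minimal sketch of preferential attachment in the style of the Barabási-Albert model (one edge per new node; real studies use richer variants):

```python
import random
from collections import Counter

random.seed(7)

# Preferential attachment: each new node links to an existing node chosen
# with probability proportional to its degree. This "rich get richer" rule
# yields the heavy-tailed, scale-free degree distributions observed in the
# Internet, the Web, and social networks.
def preferential_attachment(n_nodes):
    targets = [0, 1]        # node ids repeated once per unit of degree
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        old = random.choice(targets)  # degree-proportional selection
        edges.append((new, old))
        targets += [new, old]
    return edges

edges = preferential_attachment(5_000)
degree = Counter(v for e in edges for v in e)
print(max(degree.values()))  # a few heavily connected hubs emerge
print(sum(1 for d in degree.values() if d == 1) / len(degree))  # most nodes stay leaves
```

Despite every node following the same simple local rule, a few hubs accumulate a large share of the links while most nodes remain leaves, which is the signature of a scale-free network.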

      3. Uncertainty in human cognitive processes

        • Uncertainty of perception

        • Uncertainty of memory

        • Uncertainty of thinking

      4. Uncertainty in natural language

      5. Computer simulation of uncertainty

        • Computer Simulation of Randomness, Fuzziness and Their Correlations

        • Computer Simulation of Fractal Uncertainty

        • Computer Simulation of Network Topology Uncertainty

    • AlphaGo and artificial intelligence: powerful computing, or genuine intelligent creation?

    • Are machines creative?

    • Can artificial intelligence replace humans?

    • Is artificial intelligence the next evolutionary direction of human beings?

    • When will the singularity come?


  • Society-related discussions

    • Worry No. 1: will artificial intelligence cause mass unemployment among the middle class?

      • However, the widespread application of artificial intelligence may disrupt the existing social structure by eliminating traditional middle-class jobs such as doctors, lawyers, teachers, and translators. The Economist wrote last year that artificial intelligence is exacerbating the risk of "occupational polarization". At one pole are high-paying, high-skill occupations that artificial intelligence cannot replace, such as architects, artists, and senior managers; at the other are low-paying, low-skill occupations that artificial intelligence has no need to replace, such as cleaners and fast-food workers. The physicist Stephen Hawking wrote in his Guardian column: "Factory automation has already made many traditional manufacturing workers unemployed, and the rise of artificial intelligence is likely to spread the unemployment wave to the middle class, leaving humans only jobs of caring, creation, and supervision." According to a widely circulated Citibank-Oxford University research report, 47% of jobs in the US, 35% in the UK, and 57% across OECD countries may be replaced by artificial intelligence in the next 10 to 20 years.

    • Worry No. 2: will artificial intelligence make the rich richer and the poor poorer?

      • The Guardian, which has long claimed to represent the interests of the working class, recently published an opinion piece on artificial intelligence arguing that in the future the wealthy could retreat to private jets and private islands, letting robots provide protection and services while keeping ordinary people at a distance. In such a world the rich could grow richer without employing anyone; the liberation of capital from labor would mean the end of labor, the end of wages, and the disappearance of the working class. Although the article is somewhat extreme, how to prevent artificial intelligence from widening the gap between rich and poor has indeed become a hot topic.

    • Worry No. 3: the artificial intelligence industry is highly competitive; will underdeveloped regions be marginalized?

      • Some technical experts say that in the future, technology companies will have to advertise "artificial intelligence +" to survive. The development of artificial intelligence has drawn the attention of governments around the world. The U.S. government mainly guides the artificial intelligence industry through public investment: in fiscal year 2013 it invested $2.2 billion in advanced manufacturing, with the "National Robotics Initiative" as a focus. Last year the United States stepped up its artificial intelligence efforts, releasing several strategic documents that raise artificial intelligence to the level of national strategy and lay out a grand plan and development blueprint. The European Union proposed the Human Brain Project in 2013, investing nearly 1.2 billion euros with the aim of simulating the brain through computer technology, combining brain-science research data with related industries, and establishing a new data-driven ICT platform for analysis, integration, and simulation, so as to maintain Europe's leading position in AI research.

      • Tech giants such as Google, Apple, Facebook, Microsoft, Amazon, and IBM have long regarded the development of artificial intelligence technology as a core strategy.

    • Worry No. 4 : Will AI escape legal supervision?

      • Judging by the current level of artificial intelligence technology, robots are still essentially "machines", but AlphaGo has made the general public aware of deep learning's capability, and it is no longer unimaginable for "machines" to evolve toward "persons". Whether artificial intelligence will affect human survival and completely change human destiny remains highly uncertain. Strengthening management of the ethical and legal risks of AI and establishing accountability mechanisms for AI decision-making have already been put on the agenda; examples include artificial intelligence's ability to quickly extract stable, private personal information from public data, and the ethical choices an autonomous vehicle must make in an accident.

