AI Overview (Review Notes)

I took a semester of AI coursework, picked up knowledge here and there, and attended some lectures; these are my summary notes.
Also, I have recently opened my own WeChat public account, called "Deep Water Cat Demon". If you are interested, feel free to follow it; it usually posts technology news, introductory learning material, and reviews of papers in recent research fields.

Textbook: Artificial Intelligence: A Modern Approach
Authors: Stuart Russell, Peter Norvig
Teacher: 侯佳利

What is AI?

1. Artificial Intelligence
- an emerging technology that developed rapidly after World War II
- officially named in 1956
- general applications:

Game playing: bridge, chess, Go (I can only play backgammon)
Proving mathematical theorems, composing poems, diagnosing diseases (none of which I seem to know how to do)
Automated systems, robot control, games, scheduling, decision support
Home-appliance control (combined here with the Internet of Things, which is also an AI trend)
Simulation testing, computer graphics
Prediction: weather, earthquakes, the stock market, etc.
……at this rate mankind will be out of work and communism is just around the corner

definition

  • Systems that think like humans –> systems that think rationally
  • Systems that act like humans –> systems that act rationally
  • Systems that think rationally + systems that act like humans –> systems that act rationally

A branch of computer science dedicated to explaining and simulating intelligent behavior through computational processes (Schalkoff, 1990).
A branch of computer science devoted to the automation of intelligent behavior (Luger and Stubblefield, 1993).

category


Application example
- HRP-4C fashion-model robot, unveiled 2009-03-16 by Japan's National Institute of Advanced Industrial Science and Technology (AIST)

It has mobile motors, one motor corresponding to one human joint.
158 cm / 43 kg (seems a bit short to me)

  • GMIC 2014, Beijing; Osaka University Intelligent Robotics Laboratory

    Hiroshi Ishiguro, the father of contemporary Japanese robotics;
    his android has 65 subtle facial expressions

  • Phone-controlled helicopter

  • Quadcopter: Parrot AR.Drone
  • Wireless remote control aircraft
  • Quadcopter + dynamically balanced gimbal + GoPro digital camera
  • DJI PHANTOM 4 PRO
  • DJI Mavic Pro
  • DJI Spark
  • DJI Inspire2
  • DJI Go APP
  • Very large UAV

2. Introduction to famous figures

  • Alan Turing

    • 1936: the Turing machine; the father of computers, laying the foundation of the theory of computation.
      (John von Neumann later implemented the first electronic computer.)
    • 1943: the father of information security;
      successfully cracked Enigma, Germany's best encryption device during World War II.
    • 1950: the father of artificial intelligence;
      proposed the Turing Test, establishing a method for verifying artificial intelligence.
  • Bletchley Park

    During World War II, Britain's Bletchley Park worked on cracking the German Enigma and Lorenz encryption systems.
    The Allies gathered codebreaking experts, including Alan Turing and Dilly Knox, who successfully cracked both systems during the war.
    Their work saved the British fleet from attacks by German U-boat wolf-pack tactics.

3. Turing Test - judging artificial intelligence.
In 1950, Alan Turing, the father of computer science, published "Computing Machinery and Intelligence",
proposing a scientific test of whether a computer's behavior counts as intelligent. The method is as follows:

There are two terminals in the room, one operated by a person and one by a program.
The subject chats with both by typing for five minutes,
then guesses which is the person and which is the program.
If the program deceives more than 30% of the human judges, it passes the test.

Application: famous chat programs ELIZA, MGONZ, ALICE,
MSN chatbots.
The first software claimed to pass the Turing Test, in 2014:
http://news-b5.zhengjian.org/node/22258

4. Challenge - easier said than done.
Humans perform many actions or behaviors effortlessly, yet find it hard to define the operation precisely.
If the definition can be made clear, robots can do it.

5. Academic basis of artificial intelligence

  • Philosophical foundation
The teacher's and the PPT's explanations were not very clear, so I will skip this for now.
I think the foundation of every subject is philosophy; it is a deep and fundamental topic, and I will add more once I understand it better.

  • Theoretical basis
    Mathematics: logic, probability theory, decision theory, computation, formal rules
    Psychology: a scientific language for observing the human mind and explaining its workings
    Linguistics: theories of language structure and meaning, e.g. "Verbal Behavior"
    Computer science: the tools to actually realize AI

6. Evolutionary history

Gestation period: 1943-1956

The earliest researchers were Warren McCulloch and Walter Pitts (1943),
who built an artificial neuron model from three starting resources:
basic physiology (knowledge of brain and nerve function),
the formal analysis of propositional logic,
and the theory of computation (Alan Turing).

Birth (1956):
John McCarthy convened scholars of automata theory, neural networks and intelligence research at the Dartmouth Conference,
where the name "artificial intelligence" was coined
and the field officially became a new area of research.

Early enthusiasm: 1952-1969

At the time computers had limited capability and programs could do very little, and the mainstream view dwelt on things computers could not do.
In 1957, Herbert Simon and Allen Newell proposed the General Problem Solver (GPS).

A dose of reality: 1966-1974

Many expected breakthroughs did not arrive as predicted.
Prophecy: a computer would be chess champion within ten years; it actually took forty.
Great results were achieved on small-scale problems, but no significant progress was seen on problems of realistic complexity.

Knowledge-based systems: the key to power, 1969-1979

AI becomes an industry: 1980-1988

Success of expert systems.
US-Japan computing competition:
in 1981 Japan announced the Fifth Generation Computer project,
and the United States responded with MCC (Microelectronics and Computer Technology Corporation).
The AI industry skyrocketed from a scale of millions of dollars to billions of dollars.

Return of neural networks: 1986-present
Recent events: 1987-present

Autonomous planning and scheduling: NASA's agent program controls the work schedule of spacecraft millions of kilometers away.
Game playing: chess grandmasters versus computer programs.
Speech understanding: automated travel booking.
Real-time expert systems replacing laboratory analysts.
Autonomous driving: the ALVINN robot vehicle.
Diagnosis: expert systems for lymph-node pathology.
Traffic monitoring: automatic surveillance systems replacing human monitors.

7. Challenges:
inability to think independently;
moral and religious controversies.

8. Game artificial intelligence:
creation of monsters, enemies, and virtual teammates;
collision detection; non-player character (NPC) control.
Deterministic and non-deterministic AI:

Deterministic:
behavior or performance is clear and predictable, e.g. chasing algorithms.

Non-deterministic:
has a degree of uncertainty and is somewhat unpredictable,
e.g. letting an NPC learn the player's battle tactics.

Common techniques:
cheating,
finite state machines,
fuzzy state machines (fuzzy logic),
NPC obstacle avoidance,
rule systems.

Intelligent agents

1. Agent:
anything that can perceive its environment through sensors and act on the environment through actuators.
Human agent:

Sensors: eyes, nose, ears and other organs
Actuators: hands, feet and other body parts

Robot Agent:

Sensors: camera, infrared rangefinder, gyroscope, electronic compass
Actuators: servo motors, stepper motors

Software agent:

Sensors: keyboard input, file reads, incoming network packets
Actuators: screen output, reports, file writes, outgoing network packets
Every agent can perceive its own actions (it knows what it has done),
but it cannot necessarily perceive their effects: it may not know what happens after an action, or what impact it has on the environment.

  • Agent
    Percept: the agent's perceptual input at any given moment.
    Percept sequence: the complete history of everything the agent has perceived.
    The agent's choice of action depends on the current percept sequence.
    Agent function: maps every percept sequence to an action; it determines the agent's behavior (an abstract mathematical description).
    Agent program: the concrete implementation of the agent function in an artificial agent.

Example 1:

  • Vacuum cleaner
    Sensors: position detection, dirt detection
    Actuators: drive motor, suction motor
    Actions: move left, move right, suck, stop
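The vacuum-cleaner agent above can be sketched as a tiny condition-action rule table; the two-square world (squares "A" and "B") and the names are assumptions for illustration:

```python
def reflex_vacuum_agent(location, dirty):
    """Condition-action rules for a two-square (A, B) vacuum world."""
    if dirty:
        return "Suck"       # rule: if the current square is dirty, suck
    if location == "A":
        return "Right"      # otherwise move to the other square
    return "Left"

print(reflex_vacuum_agent("A", True))   # Suck
print(reflex_vacuum_agent("B", False))  # Left
```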

Example 2:

  • RoCar
    A programmable robot,
    connected to the computer via USB
    and controllable from a high-level language.
    It has a variety of sensors:
    three light sensors: can be used to drive along a black track;
    a temperature sensor: fire alarm or ambient-temperature detection;
    two sound sensors: receive sound events;
    two collision sensors: detect obstacles at the left front, right front, and front.

    Actuators:
    eight DIP switches;
    two independent motor sets: control forward and backward, and together turn left or right;
    a buzzer: makes sounds;
    one set of digital LEDs: displays numeric status;
    seven light-emitting diodes: light patterns expressing binary digits.

    Actions:
    forward, backward, turn left, turn right;
    buzzer warning (pitch and duration can be specified);
    digital LED display of numeric status;
    eight light-emitting diodes for light patterns.

    PID controller:
    according to the sensor signal, the output signal sent to the actuators is adjusted, changing the device under control.
    Proportional term P: the current error.
    Integral term I: the accumulated past error.
    Derivative term D: the predicted future error.
    Suited to dynamic, uncertain control through continuous feedback; widely used in the stabilization systems of multi-rotor aircraft.
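The three PID terms can be sketched as follows; the gains and the toy one-dimensional "altitude" plant are illustrative assumptions, not RoCar specifics:

```python
class PID:
    """output = Kp*error + Ki*accumulated_error + Kd*error_trend."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0        # I: accumulated past error
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured                 # P: current error
        self.integral += error * dt
        # D: trend of the error, predicting where it is heading
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy example: steering a 1-D "altitude" toward 10 m by continuous feedback.
pid = PID(kp=1.0, ki=0.1, kd=0.05)
altitude = 0.0
for _ in range(300):
    thrust = pid.update(10.0, altitude, dt=0.1)
    altitude += thrust * 0.1    # simplistic plant model
print(round(altitude, 2))       # settles near 10
```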


















Example 3:

  • DJI PHANTOM 4 Pro
    A quadcopter
    with three sets of visual obstacle-sensing systems: front, rear, and bottom.
    It has an ultrasonic rangefinder underneath
    and provides GPS and visual stabilization control:
    vertical stabilization: GPS +/-0.5 m, vision +/-0.1 m
    horizontal stabilization: GPS +/-1.5 m, vision +/-0.3 m

2. Rational agent:

An agent that does the right thing:
every entry in the agent function is filled in correctly.
Taking the actions that make the agent most successful
requires a way to measure the agent's success.
Performance measure:
the concrete expression of the agent's success criteria.
A rational agent tries its best to satisfy the performance measure.
A wrong performance measure will bring disaster.
Rationality at any given time depends on four things:
the performance measure that defines the criterion of success;
the agent's prior knowledge of the environment;
the actions the agent can perform;
the agent's percept sequence to date.
- Definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in prior knowledge the agent has.

Omniscience, learning and autonomy

Omniscience:
an omniscient agent knows the actual outcome of its actions and can act accordingly.
Rationality:
omniscience is impossible in practice.
Rationality is concerned only with expected success given what has been perceived.
Rationality maximizes expected performance; omniscience would maximize actual performance.

Rational choice depends on the percept sequence up to the present moment. At any given time, rationality depends on:
- the performance measure that defines the degree of success
- everything the agent has perceived so far: the entire perceptual history, i.e. the percept sequence
- the agent's knowledge of the environment, given appropriate sensors
- the actions the agent can perform

Rational choice:
- an agent should never engage in activities that are definitely stupid
- observe the environment before acting: correct perception through sensors -> information gathering
- appropriately explore unknown environments
- be able to learn autonomously: rely less on prior knowledge, moving from random actions to built-in reflexive behaviors

3. Environment

Task environment:
- essentially the "problem" to which a rational agent is the "answer"

Determine the task environment:

  • Specify the task environment (PEAS):
    Performance measure
    Environment
    Actuators
    Sensors

  • The first step in designing an agent is to specify the task environment as completely as possible.

Example:
Agent type: taxi driver
Performance measure:
safe, fast, legal, comfortable trip, maximize profits
Environment:
roads, other vehicles, pedestrians, customers
Actuators:
steering wheel, accelerator, brake, signal lights, horn, display
Sensors:
cameras, sonar, speedometer, engine sensors, keyboard

Properties of the task environment:
- fully observable vs. partially observable
- deterministic vs. stochastic
- episodic vs. sequential
- static vs. dynamic
- discrete vs. continuous
- single agent vs. multi-agent

4. Agent structure

  • Architecture:
    a computing device with physical sensors and actuators.

  • Agent = architecture + program.
    Receives stimuli from the sensors.
    Processes the received stimuli.
    Drives the actuators to respond.

  • Algorithm - the table-driven agent (which fails).
    Record the percept sequence.
    Use the percept sequence as an index
    to look up the action table: the table represents the agent function the program implements.
    Respond with the action found by the lookup.

Reasons for failure:
no physical agent has space to store the table;
the designer would not have time to build the table;
no agent could learn all the correct entries from experience;
even if the environment is simple enough, filling in all the entries remains hopeless.
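Despite its impracticality, the table-driven scheme above is easy to sketch; the toy vacuum-world table below is a hypothetical illustration (unlisted percept sequences fall back to "NoOp"):

```python
def make_table_driven_agent(table):
    """The whole percept sequence indexes into a (hypothetical) action table."""
    percepts = []                 # the percept sequence, growing forever
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")   # look the sequence up
    return agent

# Toy table: keys are complete percept sequences, not single percepts.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Dirty")))   # Suck  (sequence of both percepts is in the table)
```

The table's size is what dooms the approach: it needs one entry for every possible percept sequence, which grows exponentially with the agent's lifetime.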

5. Basic agent programs

  • Simple reflex agents
    choose actions based only on the current percept,
    ignoring the rest of the percept history (percept sequence);
    condition-action rules.

  • Model-based reflex agents

    Keep internal state that records history, tracking changes in the state of the environment.

  • Goal-based agents

    Combine the current state with goal information.

  • Utility-based agents

    Use a utility function to improve the agent's effectiveness.

6. Learning agent

  • Can be divided into four conceptual components:

The learning element
is responsible for making improvements;
it accepts feedback from the critic and decides how to modify the performance element.
The performance element
is responsible for selecting external actions.
The critic
evaluates how well the agent is doing and issues rewards or punishments.
The problem generator
is responsible for suggesting exploratory actions
so that the agent can learn new and valuable experiences.

Implement RoCar

The features of RoCar were mentioned above:

Programmable robot control.
Connected to the computer via USB.
High-level language control: provides a .NET programming interface.
Multiple sensors.

// Declare the RoCar event handler
private void RC_DIPsChanged(byte DIPsReceive)
{
    // actions handling the event
    // DIPsReceive returns the DIP switch state, 0-255
}

// Stop-on-collision example
// Declare the vRobots object
vRobots.RoCar01 RC = new vRobots.RoCar01();
// After InitializeComponent(), register the TouchsChanged event handler
this.RC.TouchsChanged += new
    vRobots.RoCar01.TouchsChangedEventHandler(RC_TouchsChanged);
private void RC_TouchsChanged(byte TouchReceive)
{
    RC.MoveC('X');
    RC.WaitN(3000);
    RC.MoveC('F');
}

Disadvantages of RoCar:

  1. Currently it must be operated by wire over USB.
  2. The next-generation RoCar can add wireless modules, but the cost is prohibitive.
  3. A derived problem: when multiple cars are controlled, their signals interfere with each other.
  4. Battery consumption is high.
  5. Programs cannot be flashed onto the board, which would allow more complex control in the on-board processing center.
    (Off topic: I feel this is built for teaching, which is actually fine. I prefer software and high-level design, but the hardware's low-level details are wrapped in a fairly thorough encapsulation, which limits what a student can learn.)

Related Products:

RoArm
RoAnt
RoDog
RoBoy
..

Use search rules to solve problems

1. Problem-solving agents.
An intelligent agent should maximize its performance measure.
A simplification: adopt a goal and act to satisfy it.
For example:
the agent is on vacation in Romania, maximizing the fun of the trip; it holds a non-changeable ticket departing from Bucharest the next day. The goal is to catch the plane.
Simple way to design agents:

  • Formulation

Express the goal and the problem to be solved in a formal way.

  • Search

Call a search procedure to solve the problem.
When the agent has multiple options:
- examine the various possible action sequences
- choose the best sequence among them

  • Execution
    Use the obtained solution to guide the agent's actions.
    After execution completes, the agent formulates a new goal.

Formulation

  1. Assumed environment:
     static, observable, discrete, deterministic; the solution to the problem is a single action sequence, and unexpected events cannot be handled.
  2. Formal definition,
     in four parts:

     Initial state: e.g. In(Arad)
     Successor function: the actions available in In(Arad):
     //{<Go(Sibiu),In(Sibiu)>,<Go(Timisoara),In(Timisoara)>,<Go(Zerind),In(Zerind)>}
     // state space, path

     Goal test:
     determines whether a given state is the goal state.

     Path cost:
     the numeric cost assigned to each path.

  3. Abstraction: the process of removing details that are irrelevant to the problem

  4. Formulation methods:

     • using the successor function
     • incremental formulation: start from the initial state and apply operators that extend the state
     • complete-state formulation: start with all eight queens on the board, and move queens toward a legal solution

Eight-queens problem:
States: any arrangement of 0 to 8 queens on the board.
Initial state: no queens on the board.
Successor function: add a queen to any empty square.
Goal test: all 8 queens are on the board and none attacks another.
==> Improved formulation
States: arrangements of n queens, one per column in the leftmost n columns, with no queen attacking another.
Initial state: no queens on the board.
Successor function: add a queen to any square in the leftmost empty column such that no queen attacks another.
Goal test: all 8 queens are on the board and none attacks another.
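The improved formulation above can be sketched directly: the successor function places one queen per column, and the goal test is that all queens are placed with none attacking another (a minimal illustration, not from the textbook):

```python
def safe(cols, row):
    """Can a queen go in the next column at `row` without being attacked?"""
    k = len(cols)   # index of the column being filled
    return all(r != row and abs(r - row) != k - i for i, r in enumerate(cols))

def count_solutions(n, cols=()):
    """Successor function: place a queen in the leftmost empty column."""
    if len(cols) == n:          # goal test: all n queens placed, none attacking
        return 1
    return sum(count_solutions(n, cols + (row,))
               for row in range(n) if safe(cols, row))

print(count_solutions(8))   # 92
```

Because each new queen is checked against all previous ones, the search never visits the illegal arrangements that the naive formulation would enumerate.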

Searching for solutions

After the problem is formulated, it is solved by searching through the state space.
The basic search generates an explicit search tree from the initial state using the successor function.
If several paths lead to the same state, the result is a search graph.

Search methods: depth-first, breadth-first.
Both concepts appear in basic data structures, and the principles are simple.

Data structure for a node:
STATE: the state the node corresponds to
PARENT-NODE: the parent node (similar to inheritance)
ACTION: the action that generated this node from its parent
PATH-COST: the path cost from the initial node to this node
DEPTH: the number of steps from the initial state to this node
Please see other related blog posts for the specific code.

Measuring problem solving performance:

CompletenessCompletenessOptimalityTime
complexityTime
complexitySpace
complexity

Search strategies

Breadth-first search:
starts from the root node, expands every successor, then every successor of those,
finishing one level completely before expanding the next.
Uniform-cost search:
generalizes breadth-first search:
the cost of each step may differ (given by a step-cost function),
and it is optimal for any step-cost function.
When all step costs are equal, it reduces to breadth-first search.
Depth-first search.
Depth-limited search:
an improvement of depth-first search that imposes a depth limit.
Iterative deepening search:
an improvement of depth-limited search that gradually increases the depth limit.

Bidirectional search:
runs two simultaneous searches, one from the initial state and one from the goal;
the two searches stop when they meet in the middle.
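Breadth-first search can be sketched on a toy fragment of the Romania map from the earlier example (the road list is abridged and assumed for illustration; BFS here counts steps, not kilometers):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand nodes level by level; return a path from start to goal, or None."""
    frontier = deque([[start]])      # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for s in successors(path[-1]):
            if s not in explored:
                explored.add(s)
                frontier.append(path + [s])
    return None

# Abridged road map (successor function of each city).
roads = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Arad", "Fagaras"],
         "Fagaras": ["Sibiu", "Bucharest"],
         "Timisoara": ["Arad"], "Zerind": ["Arad"], "Bucharest": []}
print(breadth_first_search("Arad", "Bucharest", lambda c: roads[c]))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Swapping the FIFO `deque` for a priority queue ordered by path cost would turn this into uniform-cost search.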

Searching with incomplete information
Incomplete information leads to three kinds of problems:
sensorless problems,
contingency problems,
exploration problems.

Random numbers

Random object
creation: Random r = new Random();
returns a random number in a specified interval:
r.Next(-50, 50); // returns an integer between -50 and 49

Definition of random numbers

  • A sequence of values that appears unpredictable and uniformly distributed
  • The traditional C library uses a chain of some 50 equations to simulate random numbers

The random seed (rand seed) is fed into the first equation to produce the first random number;
the first random number is fed into the next equation to produce the second random number, and so on.
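A single-equation sketch of that chaining idea, using linear congruential constants from Numerical Recipes rather than the C library's actual equations:

```python
class LCG:
    """Each output is fed back into the equation to produce the next one."""
    def __init__(self, seed):
        self.state = seed          # the random seed starts the chain

    def next(self):
        # linear congruential step: state' = (a*state + c) mod 2^32
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

r = LCG(seed=42)
first, second = r.next(), r.next()
print(first, second)
# The whole sequence is determined by the seed, hence "pseudo-random":
assert LCG(seed=42).next() == first
```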

// Declare class fields
Random r = new Random();
public const int DiTimes = 1000000;
public int[] di = new int[6];

// Button click event
// Clear previous results
listBox1.Items.Clear();
for (int i = 0; i < 6; i++)
{
    di[i] = 0;
}
.....

genetic algorithm

Origin:
Darwin's theory of evolution:
offspring inherit the characteristics of their parents.

  • The genetic algorithm was proposed by John Holland in 1975.
  • The conceptual idea originates from Darwin's theory of evolution (1859).
  • It imitates biological reproduction in nature and the rule of natural selection, survival of the fittest.
  • An organism's genes come from both parents.
  • Genes determine fitness to the environment.
  • Genes with high fitness have a higher probability of surviving and reproducing.
  • Applications:
    optimization,
    business forecasting,
    investment decision-making,
    the processing mechanism of machine-learning recognition systems,
    time-series forecasting.

  • Features
    Finds optimized solutions.
    High efficiency; suitable for problems with large search spaces.
    Not guaranteed to find the global optimum; yields approximate or acceptable solutions.
    Works on a population, searching several candidate answers in parallel.
    The population size is chosen by weighing solution quality against time cost.
    The run evolves with random values; the steps involve randomness.

Basic operations
Genetic algorithms have three main operating mechanisms:

  • Selection

    Roulette-wheel method (improved versions use the expected-value method):
    chromosomes are ranked by their fitness-function values,
    which determine each chromosome's probability of being selected;
    higher scores mean a higher probability of selection.

  • Crossover (mating)
    is the process in which different chromosomes randomly mate (locally exchanging genes).
    Stop conditions:

    a fixed number of generations, or
    a convergence test.

Methods:

single-point crossover
two-point crossover
uniform crossover

Environmental control parameters:

population size
crossover rate
crossover method
selection method
mutation rate
number of generations or other stopping conditions

Population size:

the number of chromosomes in each generation,
usually fixed.
The population size determines training time, solution quality, search breadth, and the risk of being trapped in local optima.

Crossover rate

Crossover operation: the main operation driving evolution.
Crossover rate: the probability that crossover occurs.

  • Mutation is used
    to obtain new traits that neither parent possesses.

Through these three operations, new offspring are produced from the parents; better genes are more competitive and have a higher probability of producing the next generation.

Preparation:

Encoding: design the chromosome structure of the virtual organism, describing its characteristics; binary encoding is common.
Evaluation function: decides who survives, steering the virtual organisms to evolve toward higher fitness.
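A minimal sketch combining the three operations on a toy "count the 1-bits" evaluation function; the population size, rates and generation count are illustrative assumptions:

```python
import random

def fitness(chrom):                       # evaluation function: number of 1-bits
    return sum(chrom)

def select(pop):                          # roulette-wheel selection
    weights = [fitness(c) + 1e-9 for c in pop]   # higher score, higher chance
    return random.choices(pop, weights=weights)[0]

def crossover(a, b):                      # single-point crossover
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def mutate(chrom, rate=0.01):             # mutation: traits neither parent had
    return [bit ^ (random.random() < rate) for bit in chrom]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):                       # fixed number of generations as the stop condition
    pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]
best = max(pop, key=fitness)
print(fitness(best))                      # the best chromosome found
```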

Artificial neural networks

Principles of the human brain
The human brain is composed of neurons:
chemical changes between neurons generate electrical signals;
neurons receive electrical signals for signal transmission;
they process huge numbers of signals in parallel.
The brain operates with fault tolerance: damage to some neurons does not affect overall operation.
Neuron, cell body, nucleus, synapse.

Artificial neural network:
nodes/units,
links,
weights; input layer, output layer, hidden layer.

Node
Each node performs a simple calculation and transmits an activation level,
computed from the value of each input signal received from connected nodes and the weight of each input link. Three commonly used activation functions:
step function, sign function, sigmoid function.
Perceptrons:
single-layer feed-forward networks.
Output units are independent of each other; each weight affects only one output.
A perceptron can express AND, OR and NOT, but cannot express XOR.

  • Learning linearly separable functions
    The question of what a perceptron can represent matters more than how it learns.
    Given enough training examples, a perceptron can learn any linearly separable function.
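A sketch of the perceptron learning rule on AND, one of the linearly separable functions named above; the learning rate (here 1, with integer weights) and the epoch count are arbitrary choices:

```python
def train_perceptron(samples, epochs=10):
    """Perceptron learning rule with step activation and learning rate 1."""
    w0 = w1 = b = 0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0*x0 + w1*x1 + b > 0 else 0   # step activation
            err = target - out                         # 0 when the output is correct
            w0 += err * x0                             # nudge weights toward the target
            w1 += err * x1
            b  += err
    return w0, w1, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND)

def predict(x):
    return 1 if w0*x[0] + w1*x[1] + b > 0 else 0

print([predict(x) for x, _ in AND])   # [0, 0, 0, 1]
```

Running the same loop on XOR's truth table would never converge, since no single line separates XOR's outputs.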

There are many kinds of network structures;
the main distinction is feed-forward versus recurrent.
Networks divide into three layers: input, output, and hidden.
Feed-forward: each layer is linked to all units of the next layer.
Back-propagation: the output layer computes the error, corrects the weights of the previous layer, and propagates the error correction backward layer by layer.

  • Hopfield network

The easiest recurrent network to understand: bidirectional connections with symmetric weights; all units are simultaneously input and output units; there is a single activation function, with activation values of +/-1.

  • Two-layer feed-forward network

The weights of the network determine which functions it can actually express.

  • Multilayer feed-forward network

The most common learning method is back-propagation:
- links transmit signals forward
- the error is corrected backward using partial derivatives

Evaluating neural networks

  • expressiveness
  • computational efficiency
  • generality
  • sensitivity to noisy data
  • transparency
  • prior knowledge

Applications of neural networks

  • speech recognition
  • handwriting recognition
  • driving

Processing the input data:
inputs are converted to real numbers, usually normalized to the range -1 to 1;
Boolean values: 1 is true, 0 is false.
Weights:
the weighted sum of the input values determines the output value through the activation function.
Activation functions: logistic function, step function, hyperbolic tangent, linear function.

The purpose of training
is to find the weights connecting all the neurons, so that the network maps the input data to the required output values.
Training methods divide into supervised and unsupervised learning;
the most common is supervised learning with back-propagation.

The training process
minimizes the mean squared error, correcting each layer's weights through the derivative of the activation function.
Steps:
- Prepare a set of training values, both inputs and outputs; if possible, keep a set of test values to measure how well learning works.
- Initialize the weights with random numbers.
- Read the training examples in sequence, computing the output by feed-forward propagation through to the final output value.
- Compute the error between the output value and the expected value.
- Adjust the weights to minimize the error.
- Repeat steps 3-5 until a stop condition is met.
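The steps above can be sketched as a tiny 2-2-1 network trained with back-propagation; the network size, learning rate, epoch count and the toy AND task are all illustrative assumptions:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
# Step 1: training set (inputs with expected outputs) -- a toy AND task
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
# Step 2: initialize the weights with random numbers (last entry is the bias)
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden layer
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # output unit

def epoch(lr=0.5):
    total_error = 0.0
    for (x1, x2), target in data:
        # Step 3: feed-forward through to the final output value
        h = [sigmoid(w[0]*x1 + w[1]*x2 + w[2]) for w in w_h]
        out = sigmoid(w_o[0]*h[0] + w_o[1]*h[1] + w_o[2])
        # Step 4: error between output value and expected value
        total_error += (target - out) ** 2
        # Step 5: adjust weights via the derivative of the activation function
        d_out = (target - out) * out * (1 - out)
        for j in range(2):
            d_hid = d_out * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] += lr * d_hid * x1
            w_h[j][1] += lr * d_hid * x2
            w_h[j][2] += lr * d_hid
        w_o[0] += lr * d_out * h[0]
        w_o[1] += lr * d_out * h[1]
        w_o[2] += lr * d_out
    return total_error

first = epoch()
for _ in range(3000):            # Step 6: repeat until the stop condition is met
    last = epoch()
print(first, last)               # the squared error shrinks as training runs
```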

Fuzzy logic

Origin:
In 1965, Professor Lotfi Zadeh of UC Berkeley proposed that
fuzzy logic is closer to the way humans solve problems: the semantics of human behavior are usually fuzzy, with fuzzy boundaries.

Traditional computer theory
deals with crisp values and uses Boolean values as the result of judgments.

Fuzzy logic:
the answer to each question is yes to some degree and no to some degree.
Applicable to all kinds of control systems.
Example: a fuzzy washing machine.
Press the start button: the motor turns slightly to measure the weight of the clothes, adds a corresponding amount of water, then adjusts the washing schedule according to the weight and the turbidity of the water.

Fuzzy operations

  • Crisp input -> fuzzification
  • Fuzzy input -> fuzzy rules
  • Fuzzy output -> defuzzification
  • Crisp output

Types of membership functions:

  • Boolean logic
  • ramp
  • multiple fuzzy sets
  • triangle
  • reverse ramp
  • trapezoid
  • curve
  • higher order
  • other

Hedge functions
modify the degree of membership returned by a membership function; they are not strictly necessary.
Fuzzy rules
are usually stated in the form "if A then B".
Defuzzification
aggregates the output strengths into a membership function, then converts it into a crisp output value (single-value output membership function).
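The pipeline above can be sketched for the washing-machine example; the triangular membership breakpoints and wash times are invented for illustration, and the defuzzification uses a simple weighted average over single-value outputs (weights assumed to lie in 0-12 kg):

```python
def triangle(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def wash_time(weight_kg):
    # Step 1: fuzzification -- crisp weight to degrees of membership
    light  = triangle(weight_kg, -1, 0, 4)
    medium = triangle(weight_kg,  2, 4, 6)
    heavy  = triangle(weight_kg,  4, 8, 13)
    # Step 2: fuzzy rules -- if light then short wash, if heavy then long wash
    rules = [(light, 20), (medium, 40), (heavy, 60)]   # minutes (single values)
    # Step 3: defuzzification -- weighted average gives a crisp output
    total = sum(mu for mu, _ in rules)
    return sum(mu * t for mu, t in rules) / total

print(wash_time(3))   # a 3 kg load is partly "light", partly "medium"
```

A 3 kg load belongs 0.25 to "light" and 0.5 to "medium", so the crisp output lands between 20 and 40 minutes rather than snapping to one rule.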

Finite State Machine

Finite State Machine, FSM

  • A mathematical model from discrete mathematics representing a finite number of states and the transitions and actions between them.
  • Can describe system behavior as a sequence of a finite number of states.
  • State changes follow the transitions.
  • Conditioned stimuli determine the state machine's behavior.

Basic model

  • Each state is a node.
  • A link represents the transition trajectory from one state to another.
  • The number of states is finite.
  • The system's behavior changes across these states.

Applications

  • traffic lights
  • vending machines
  • computer games
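The traffic-light application above makes a minimal sketch of the basic model: states are nodes, and each stimulus (here a single "tick" event) follows a transition link; the state names and transition table are assumptions for illustration:

```python
# Transition table: (current state, stimulus) -> next state
TRANSITIONS = {
    ("green",  "tick"): "yellow",
    ("yellow", "tick"): "red",
    ("red",    "tick"): "green",
}

class TrafficLight:
    def __init__(self):
        self.state = "red"                 # initial state

    def handle(self, event):
        # unknown (state, event) pairs leave the state unchanged
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

light = TrafficLight()
print([light.handle("tick") for _ in range(4)])
# ['green', 'yellow', 'red', 'green']
```

NPC behavior in games follows the same pattern, with states like "patrol", "chase" and "flee" and stimuli coming from the player's actions.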

Genetic Programming (GP)

Genetic Algorithms (GA)

Proposed by John Holland in 1975,
based on Darwin's theory of evolution.

Comparison of GA and GP:
GP has a structural flexibility that GA lacks;
it can search a larger, more complete solution space
and can solve problems effectively.
Because GP's search space is larger, the evolutionary computation takes longer, and it is more prone to producing over-specialized solutions.

GP's problems:

  • the tree-structured search space is too large
  • prone to producing illegal solutions
  • crossover easily produces skewed or overly tall trees
  • GP generates complex solution tree structures
  • the most suitable language for GP is LISP


Source: blog.csdn.net/baidu_34418619/article/details/79056870