4.2 NEAT supervised learning


Next, let's walk through the XOR example from the neat-python documentation: we use NEAT to evolve a neural network that computes XOR (the output is False when the two inputs are the same, and True when they differ):

  • Input True, True, Output False
  • Input False, True, Output True
  • Input True, False, Output True
  • Input False, False, Output False

In the example, the inputs and outputs to be learned are encoded as floating-point numbers:

xor_inputs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
xor_outputs = [   (0.0,),     (1.0,),     (1.0,),     (0.0,)]
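As a quick sanity check (a hypothetical snippet, not part of the original example), each encoded output is simply the XOR of its two encoded inputs:

```python
# XOR training data encoded as floats (True -> 1.0, False -> 0.0)
xor_inputs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
xor_outputs = [(0.0,), (1.0,), (1.0,), (0.0,)]

# each expected output equals the XOR of its two inputs
for (a, b), (expected,) in zip(xor_inputs, xor_outputs):
    assert expected == float(bool(a) != bool(b))
```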

Next we need a way to evaluate each individual's fitness, i.e. its prediction score. Each genome starts with 4.0 points (one per XOR case) and loses the squared error of each prediction, so a genome that predicts all four cases perfectly keeps the full 4 points. The following function builds a neural network from each genome (its "DNA"), uses that network to predict, scores the genome, and writes the score into its fitness:

def eval_genomes(genomes, config):
    for genome_id, genome in genomes:   # for each individual
        genome.fitness = 4.0        # 4 xor evaluations
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        for xi, xo in zip(xor_inputs, xor_outputs):
            output = net.activate(xi)
            genome.fitness -= (output[0] - xo[0]) ** 2
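To see how this scoring behaves, here is a small hand-check (`fitness_of` and the two toy predictors are hypothetical helpers, not part of the example): a perfect predictor keeps the full 4.0 points, while a network that always outputs 0.5 loses 0.25 per case and scores 3.0:

```python
xor_inputs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
xor_outputs = [(0.0,), (1.0,), (1.0,), (0.0,)]

def fitness_of(predict):
    """Score a predictor the same way eval_genomes scores a genome."""
    fitness = 4.0  # start with one point per XOR case
    for xi, xo in zip(xor_inputs, xor_outputs):
        fitness -= (predict(xi)[0] - xo[0]) ** 2  # subtract squared error
    return fitness

perfect = lambda xi: (float(xi[0] != xi[1]),)  # true XOR
constant = lambda xi: (0.5,)                   # always guesses 0.5

print(fitness_of(perfect))   # 4.0
print(fitness_of(constant))  # 3.0
```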

Every NEAT program needs a scoring function like this. Next, we create a config file that specifies all of the run parameters. The config file is stored separately and must contain several sections of parameter presets. For the concrete values of each section, see my config-feedforward file on GitHub; the neat-python configuration documentation explains what each section means.
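To give a sense of its shape, here is an abbreviated sketch of such a config file (the section names and keys follow neat-python's config format, but the values are illustrative, and the real config-feedforward contains many more genome parameters):

```ini
[NEAT]
fitness_criterion     = max
fitness_threshold     = 3.9
pop_size              = 150
reset_on_extinction   = False

[DefaultGenome]
num_inputs            = 2
num_hidden            = 0
num_outputs           = 1
activation_default    = sigmoid
# ... many more genome parameters (bias, weight, and connection settings) ...

[DefaultSpeciesSet]
compatibility_threshold = 3.0

[DefaultStagnation]
species_fitness_func = max
max_stagnation       = 20

[DefaultReproduction]
elitism            = 2
survival_threshold = 0.2
```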

Now we can load these presets into a config object (this is the same config that eval_genomes receives above):

local_dir = os.path.dirname(__file__)
config_file = os.path.join(local_dir, 'config-feedforward')     # parameter file
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     config_file)

With this config we can generate our whole population, and use this initial p to train for up to 300 generations. Note that config-feedforward sets the parameter fitness_threshold = 3.9: as soon as any genome's fitness reaches 3.9 (the maximum is 4.0), we stop iterating the population, so learning may finish in fewer than 300 generations. When it finishes, p.run returns the best performer, winner.

p = neat.Population(config)
winner = p.run(eval_genomes, 300)   # pass the fitness function and the number of generations
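Conceptually, the run behaves like the loop below (a simplified sketch of the early-stopping idea, not neat-python's actual implementation): evolve generation by generation, but stop as soon as the best fitness crosses the threshold:

```python
FITNESS_THRESHOLD = 3.9  # matches the value in config-feedforward

def run_until_threshold(evaluate_generation, max_generations):
    """Sketch of early stopping: evaluate each generation's fitnesses,
    stop when the best one reaches FITNESS_THRESHOLD."""
    best = float('-inf')
    for generation in range(max_generations):
        best = max(evaluate_generation(generation))
        if best >= FITNESS_THRESHOLD:
            break  # a genome is good enough; no need for all the generations
    return generation, best

# toy "population" whose best fitness improves by 0.5 each generation
gen, best = run_until_threshold(lambda g: [1.0 + 0.5 * g, 0.5], 300)
print(gen, best)  # stops at generation 6 with best fitness 4.0
```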

The most important part is now done; the rest is simple. The remaining code in the example script just prints and visualizes the actual results, so you can skim it.

print('\nOutput:')
winner_net = neat.nn.FeedForwardNetwork.create(winner, config)
for xi, xo in zip(xor_inputs, xor_outputs):
    output = winner_net.activate(xi)
    print("input {!r}, expected output {!r}, got {!r}".format(xi, xo, output))

We use this to print the winner network's final predictions; unsurprisingly, they should be very accurate. Finally, the visualization functions in visualize.py generate several images you can open in a browser: speciation.svg shows how the different species rise and fall, avg_fitness.svg shows the fitness curve over generations, and Digraph.gv.svg shows what the evolved neural network looks like.


A note on the network diagram at the bottom: a solid line, such as B->1 or B->2, means the connection is enabled; a dotted line, such as B -> "A XOR B", means the connection is disabled. Red lines have weight <= 0, green lines have weight > 0, and the width of a line scales with the magnitude of the weight.
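These styling rules can be summarized in a small helper (a hypothetical restatement of what the example's visualize.py does, not its exact code):

```python
def edge_style(enabled, weight):
    """Map a connection gene's state to the drawing style described above."""
    style = 'solid' if enabled else 'dotted'  # enabled vs. disabled link
    color = 'green' if weight > 0 else 'red'  # sign of the weight
    width = 0.1 + abs(weight / 5.0)           # thicker line for larger |weight|
    return style, color, width

print(edge_style(True, 2.0))    # ('solid', 'green', 0.5)
print(edge_style(False, -2.5))  # ('dotted', 'red', 0.6)
```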


Origin blog.csdn.net/weixin_43135178/article/details/130769880