Classification Model Prediction: Genetic-Algorithm-Based Neural Network Optimization on the iris Dataset

1. Computational Intelligence

This was the last artificial intelligence course of my graduate studies, and its difficulty was genuinely headache-inducing. A general grasp of the concepts is enough: read the following passage once and move on.

Computational intelligence understands and models intelligence from the viewpoint of biological evolution. On this view, intelligence arises from genetic inheritance, variation, and growth, together with natural selection by the external environment. Through use and disuse and survival of the fittest, (brain) structures with high fitness are preserved, and the level of intelligence rises with them. In this sense, computational intelligence is intelligence based on the evolution of structure.

The main methods of computational intelligence include artificial neural networks, genetic algorithms, genetic programming, evolutionary programming, local search, and simulated annealing. These methods share the following elements: an adaptive structure; a randomly generated or specified initial state; a fitness evaluation function; operations that modify the structure; a memory of the system state; termination conditions; a way of reporting results; and parameters that control the process. They are self-learning, self-organizing, and self-adaptive; they are simple, general, robust, and well suited to parallel processing; and they have been widely applied to parallel search, associative memory, pattern recognition, and automatic knowledge acquisition.

Typical representatives such as genetic algorithms, immune algorithms, simulated annealing, ant colony optimization, and particle swarm optimization are all bio-inspired algorithms. Built on the idea of "drawing wisdom from nature", they distill our understanding of nature's distinctive regularities into a set of computational tools for acquiring knowledge. In short, through their self-adaptive learning behavior, these algorithms aim at global optimization.

The final exam of this course was exactly "genetic-algorithm-based neural network optimization on the iris dataset", so only the genetic algorithm is discussed here; for the neural network background, please refer back to my earlier article 机器学习算法原理总结系列—算法基础之(7)神经网络(Neural Network).

2. Genetic Algorithm

Again, the basic definitions can drive you mad in minutes; read them aloud once and move on. What really matters is the code that follows: try implementing it yourself, and the basic workflow of a genetic algorithm will start to make sense. Take your time.

A genetic algorithm (GA) is a computational model of biological evolution that simulates the natural selection and genetics of Darwinian evolutionary theory; it searches for optimal solutions by simulating the natural evolutionary process. A GA starts from a population representing a set of candidate solutions to the problem, where the population consists of a certain number of individuals encoded as genes. Each individual is in fact an entity whose characteristics are carried by a chromosome. The chromosome, as the main carrier of genetic material, is a collection of genes; its internal representation (the genotype) is some combination of genes, which determines the external expression of the individual's traits (the phenotype), just as black hair is determined by a particular combination of genes in the chromosome controlling that trait. Therefore, a mapping from phenotype to genotype, i.e. an encoding, must be established at the outset. Because faithfully imitating biological gene encoding is complicated, it is usually simplified, for example to binary encoding. After the initial population is created, the population evolves generation by generation, following the principle of survival of the fittest, to produce better and better approximate solutions. In each generation, individuals are selected according to their fitness in the problem domain, and genetic operators borrowed from natural genetics perform crossover and mutation, producing a population that represents a new set of solutions. This process leads, as in natural evolution, to later populations that are better adapted to the environment than their predecessors; decoding the best individual of the final population yields an approximately optimal solution to the problem.
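To make the encoding step concrete, here is a minimal sketch of my own (an illustration, not part of the exam code) that maps a real value in [-1, 1] to a 16-bit binary chromosome and back; the bit width and value range are assumptions chosen for the example:

N_BITS = 16

def encode(x, lo=-1.0, hi=1.0):
    """Map a real value (phenotype) to a list of bits (genotype)."""
    n = int((x - lo) / (hi - lo) * (2 ** N_BITS - 1))
    return [int(b) for b in format(n, '016b')]

def decode(bits, lo=-1.0, hi=1.0):
    """Map a list of bits back to a real value."""
    n = int(''.join(str(b) for b in bits), 2)
    return lo + (hi - lo) * n / (2 ** N_BITS - 1)

chromosome = encode(0.5)
print(chromosome)          # the genotype: a list of 16 bits
print(decode(chromosome))  # the phenotype: ~0.5, up to quantization error

Crossover and mutation then operate directly on the bit list, and decoding the fittest chromosome at the end recovers the real-valued solution.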

A genetic algorithm is a randomized search method that borrows the evolutionary rules of the living world (survival of the fittest as a genetic mechanism). It was first proposed by Prof. J. Holland in the United States in 1975. Its main characteristics are that it operates directly on structural objects, with no requirement of differentiability or function continuity; it has intrinsic implicit parallelism and strong global optimization ability; and it uses a probabilistic search that automatically acquires and steers the search space and adaptively adjusts the search direction, without needing predetermined rules. Thanks to these properties, genetic algorithms have been widely applied in combinatorial optimization, machine learning, signal processing, adaptive control, and artificial life, and they are a key technique in modern intelligent computing.
For an optimization problem that maximizes a function (minimization is handled analogously), the problem can generally be described by the following mathematical programming model, in which X is the decision variable, Eq. 2-1 is the objective function, and Eqs. 2-2 and 2-3 are the constraints; U is the basic space and R is a subset of U. A solution X that satisfies the constraints is called a feasible solution, and the set R of all solutions satisfying the constraints is called the feasible solution set.

    max f(X)      (2-1)
    s.t. X ∈ R    (2-2)
         R ⊆ U    (2-3)
A genetic algorithm is also a search heuristic used in the artificial-intelligence branch of computer science to solve optimization problems, and it is one kind of evolutionary algorithm. Such heuristics are commonly used to generate useful solutions to optimization and search problems. Evolutionary algorithms originally drew on phenomena from evolutionary biology, including inheritance, mutation, natural selection, and hybridization. Note that with a poorly chosen fitness function, a genetic algorithm may converge to a local optimum [1] rather than the global one.
The basic workflow of a genetic algorithm is as follows (a toy implementation of the whole loop is sketched right after this list):
a) Initialization: set the generation counter t = 0, set the maximum number of generations T, and randomly generate M individuals as the initial population P(0).
b) Individual evaluation: compute the fitness of each individual in population P(t).
c) Selection: apply the selection operator to the population. The purpose of selection is to pass optimized individuals directly to the next generation, or to produce new individuals through paired crossover and pass those on; it is built on the fitness evaluation of the individuals in the population.
d) Crossover: apply the crossover operator to the population. The crossover operator plays the central role in a genetic algorithm.
e) Mutation: apply the mutation operator to the population, i.e. perturb the gene values at certain loci of the individuals' gene strings.
After selection, crossover, and mutation, population P(t) produces the next-generation population P(t+1).
f) Termination test: if t = T, output the individual with the maximum fitness found during evolution as the (approximate) optimal solution and stop.
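The toy sketch below (my own illustration with a made-up one-dimensional objective, not the iris code that follows) implements steps a) through f) literally, using real-coded chromosomes, roulette-wheel selection, arithmetic crossover, and uniform mutation:

import random

# Toy objective: any non-negative function can serve directly as the fitness
def fitness(x):
    return 1.0 / (1.0 + (x - 0.7) ** 2)  # peaks at x = 0.7

POP_SIZE, T = 50, 100
CROSSOVER_RATE, MUTATION_RATE = 0.8, 0.1
LO, HI = -1.0, 2.0

def select(pop, fits):
    # Step c): roulette-wheel selection; fitter individuals cover a larger arc
    r = random.uniform(0, sum(fits))
    cumulative = 0.0
    for individual, fit in zip(pop, fits):
        cumulative += fit
        if cumulative >= r:
            return individual
    return pop[-1]

# Step a): t = 0, random initial population P(0)
pop = [random.uniform(LO, HI) for _ in range(POP_SIZE)]
for t in range(T):  # step f): terminate once t reaches T
    fits = [fitness(x) for x in pop]  # step b): evaluate every individual
    new_pop = []
    while len(new_pop) < POP_SIZE:
        p1, p2 = select(pop, fits), select(pop, fits)
        if random.random() < CROSSOVER_RATE:  # step d): arithmetic crossover
            a = random.random()
            p1, p2 = a * p1 + (1 - a) * p2, a * p2 + (1 - a) * p1
        if random.random() < MUTATION_RATE:  # step e): uniform mutation
            p1 = random.uniform(LO, HI)
        if random.random() < MUTATION_RATE:
            p2 = random.uniform(LO, HI)
        new_pop += [p1, p2]
    pop = new_pop[:POP_SIZE]  # P(t+1)

print('best x ~', round(max(pop, key=fitness), 3))  # should approach 0.7

Replace the scalar chromosome with a network's two weight matrices and the toy objective with the inverse training error, and this loop becomes essentially the optimizer implemented in the next section.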

3. Genetic-Algorithm-Based Neural Network Optimization on the iris Dataset

Github: https://github.com/WEIHAITONG1/genetic-algorithm-neural-network

main_gann.py

import copy
import random
import time
from operator import itemgetter

import matplotlib.pyplot as plt
import numpy as np

from iris_dataset import read_data, pre_processing


# Hyperbolic tangent activation function
def tanh(x):
    return np.tanh(x)


# Derivative of the hyperbolic tangent
def tanh_derivate(x):
    return 1.0 - np.tanh(x) * np.tanh(x)


# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))


# Derivative of the sigmoid
def sigmoid_derivate(x):
    return sigmoid(x) * (1 - sigmoid(x))


# Normalize the scores (inverse errors) so that the fitnesses sum to 1
def calculate_fit(loss):
    total = sum(loss)
    return [score / total for score in loss]


# Takes a population of NeuralNetwork objects and pairs each weight set
# with its score and fitness
def pair_pop(iris_data, pop):
    weights, loss = [], []

    # for each individual
    for individual_obj in pop:
        weights.append([individual_obj.weights_input, individual_obj.weights_output])
        # append 1/sum(MSEs) of individual to list of pop errors
        loss.append(individual_obj.sum_loss(data=iris_data))

    # fitnesses are a fraction of the total error
    fitnesses = calculate_fit(loss)
    for i in range(int(pop_size * 0.15)):
        print(str(i).zfill(2), '1/sum(MSEs)', str(loss[i]).rjust(15), str(
            int(loss[i] * graphical_error_scale) * '-').rjust(20), 'fitness'.rjust(12), str(fitnesses[i]).rjust(
            17), str(int(fitnesses[i] * 1000) * '-').rjust(20))
    del pop

    # Pair each weight set with its score and fitness as (weights, score, fitness) tuples
    return zip(weights, loss, fitnesses)


def roulette(fitness_scores):
    """Fitness scores sum to 1; fitter chromosomes occupy a bigger slice of the wheel."""
    cumulative_fitness = 0.0
    r = random.random()
    # Accumulate fitness chromosome by chromosome
    for i in range(len(fitness_scores)):
        cumulative_fitness += fitness_scores[i]
        # Return this chromosome's index once the cumulative fitness exceeds r
        if cumulative_fitness > r:
            return i
    # Guard against floating-point round-off leaving the loop without a pick
    return len(fitness_scores) - 1


def iterate_pop(ranked_pop):
    ranked_weights = [item[0] for item in ranked_pop]
    fitness_scores = [item[-1] for item in ranked_pop]
    # Elitism: carry the top 15% of weight sets over unchanged (deep-copied)
    new_pop_weight = [copy.deepcopy(x) for x in ranked_weights[:int(pop_size * 0.15)]]

    # Breed two randomly selected but different chromosomes until pop_size is reached
    while len(new_pop_weight) < pop_size:
        index1 = roulette(fitness_scores)
        index2 = roulette(fitness_scores)
        while index1 == index2:
            # Make sure two different chromosomes are used for breeding
            index2 = roulette(fitness_scores)
        ch1 = copy.deepcopy(ranked_weights[index1])
        ch2 = copy.deepcopy(ranked_weights[index2])
        if random.random() < crossover_rate:
            ch1, ch2 = crossover(ch1, ch2)
        mutate(ch1)
        mutate(ch2)
        new_pop_weight.append(ch1)
        new_pop_weight.append(ch2)
    # Appending in pairs can overshoot by one; trim to exactly pop_size
    return new_pop_weight[:pop_size]


def crossover(m1, m2):
    # ni*nh + nh*no = total number of weights; pick a single crossover point
    r = random.randint(0, (nodes_input * nodes_hidden) + (nodes_hidden * nodes_output))
    # Build fresh zero matrices; note [[0.0] * n] * m would alias the rows
    output1 = [[[0.0] * nodes_hidden for _ in range(nodes_input)],
               [[0.0] * nodes_output for _ in range(nodes_hidden)]]
    output2 = [[[0.0] * nodes_hidden for _ in range(nodes_input)],
               [[0.0] * nodes_output for _ in range(nodes_hidden)]]
    for i in range(len(m1)):
        for j in range(len(m1[i])):
            for k in range(len(m1[i][j])):
                # Weights before the crossover point keep their parent; after it, they swap
                if r >= 0:
                    output1[i][j][k] = m1[i][j][k]
                    output2[i][j][k] = m2[i][j][k]
                else:
                    output1[i][j][k] = m2[i][j][k]
                    output2[i][j][k] = m1[i][j][k]
                r -= 1
    return output1, output2


def mutate(m):
    # With probability mutation_rate, reset a weight to a fresh random value in [-2, 2];
    # a constant could be added to control how abruptly a weight changes
    for i in range(len(m)):
        for j in range(len(m[i])):
            for k in range(len(m[i][j])):
                if random.random() < mutation_rate:
                    m[i][j][k] = random.uniform(-2.0, 2.0)


def rank_pop(new_pop_weight, pop):
    # Build a fresh population and give every network the weights produced
    # by the previous generation
    pop = [NeuralNetwork(nodes_input, nodes_hidden, nodes_output) for _ in range(pop_size)]
    for i in range(pop_size):
        pop[i].assign_weights(new_pop_weight, i)
        # Sanity check: prints any mismatch between the network and its weight set
        pop[i].test_weights(new_pop_weight, i)

    # Compute the fitness of these weights, paired with the weights themselves
    paired_pop = pair_pop(iris_train_data, pop)

    # Sort by fitness in descending order (fittest first)
    ranked_pop = sorted(paired_pop, key=itemgetter(-1), reverse=True)
    loss = [x[1] for x in ranked_pop]
    return ranked_pop, ranked_pop[0][1], float(sum(loss)) / float(len(loss))


def randomize_matrix(matrix, a, b):
    for i in range(len(matrix)):
        for j in range(len(matrix[0])):
            matrix[i][j] = random.uniform(a, b)


class NeuralNetwork(object):
    def __init__(self, nodes_input, nodes_hidden, nodes_output, activation_fun='tanh'):
        # number of input, hidden, and output nodes
        self.nodes_input = nodes_input
        self.nodes_hidden = nodes_hidden
        self.nodes_output = nodes_output

        # activations for nodes
        self.activations_input = [1.0] * self.nodes_input
        self.activations_hidden = [1.0] * self.nodes_hidden
        self.activations_output = [1.0] * self.nodes_output

        # create weights
        self.weights_input = [[0.0] * self.nodes_hidden for _ in range(self.nodes_input)]
        self.weights_output = [[0.0] * self.nodes_output for _ in range(self.nodes_hidden)]
        randomize_matrix(self.weights_input, -0.1, 0.1)
        randomize_matrix(self.weights_output, -2.0, 2.0)

        # Select the activation function (compare strings with ==, not 'is')
        if activation_fun == 'tanh':
            self.activation_fun = tanh
            self.activation_fun_deriv = tanh_derivate
        elif activation_fun == 'sigmoid':
            self.activation_fun = sigmoid
            self.activation_fun_deriv = sigmoid_derivate

    def sum_loss(self, data):
        # Accumulate the squared-error loss over the dataset and return its
        # inverse, so that a lower error yields a higher score
        loss = 0.0
        for item in data:
            inputs = item[0]
            targets = item[1]
            self.feed_forward(inputs)
            loss += self.calculate_loss(targets)
        return 1.0 / loss

    def calculate_loss(self, targets):
        loss = 0.0
        for k in range(len(targets)):
            loss += 0.5 * (targets[k] - self.activations_output[k]) ** 2
        return loss

    def feed_forward(self, inputs):
        if len(inputs) != self.nodes_input:
            print('incorrect number of inputs')

        for i in range(self.nodes_input):
            self.activations_input[i] = inputs[i]

        for j in range(self.nodes_hidden):
            self.activations_hidden[j] = self.activation_fun(
                sum([self.activations_input[i] * self.weights_input[i][j] for i in range(self.nodes_input)]))
        for k in range(self.nodes_output):
            self.activations_output[k] = self.activation_fun(
                sum([self.activations_hidden[j] * self.weights_output[j][k] for j in range(self.nodes_hidden)]))
        return self.activations_output

    def assign_weights(self, weights, I):
        io = 0
        for i in range(self.nodes_input):
            for j in range(self.nodes_hidden):
                self.weights_input[i][j] = weights[I][io][i][j]
        io = 1
        for j in range(self.nodes_hidden):
            for k in range(self.nodes_output):
                self.weights_output[j][k] = weights[I][io][j][k]

    def test_weights(self, weights, I):
        # Collect any weights that differ from weight set I; right after
        # assign_weights this should find nothing and print nothing
        diffs = []
        io = 0
        for i in range(self.nodes_input):
            for j in range(self.nodes_hidden):
                if self.weights_input[i][j] != weights[I][io][i][j]:
                    diffs.append(('I', i, j, round(self.weights_input[i][j], 2), round(weights[I][io][i][j], 2),
                                  round(self.weights_input[i][j] - weights[I][io][i][j], 2)))

        io = 1
        for j in range(self.nodes_hidden):
            for k in range(self.nodes_output):
                if self.weights_output[j][k] != weights[I][io][j][k]:
                    diffs.append((('O', j, k), round(self.weights_output[j][k], 2), round(weights[I][io][j][k], 2),
                                  round(self.weights_output[j][k] - weights[I][io][j][k], 2)))
        if diffs:
            print(diffs)

    def test(self, data):
        results, targets = [], []
        for d in data:
            inputs = d[0]
            # Run the forward pass once per sample and copy the output list,
            # since feed_forward reuses and mutates the same internal list
            outputs = list(self.feed_forward(inputs))
            rounded = [round(o) for o in outputs]
            if rounded == d[1]:
                result = '√ Classification Prediction is Correct'
            else:
                result = '× Classification Prediction is Wrong'
            print('{0} {1} {2} {3} {4} {5} {6}'.format(
                'Inputs:', d[0], '-->', str(outputs).rjust(65), 'target classification', d[1], result))
            results += outputs
            targets += d[1]
        return results, targets


# time.clock() was removed in Python 3.8; perf_counter() is the replacement
start = time.perf_counter()

graphical_error_scale = 300
max_iterations = 10
pop_size = 100
mutation_rate = 0.1
crossover_rate = 0.8
nodes_input, nodes_hidden, nodes_output = 4, 6, 1
x_train, x_test, y_train, y_test = read_data()
iris_train_data, iris_test_data = pre_processing(x_train, x_test, y_train, y_test)

# Rank the initial random population
pop = [NeuralNetwork(nodes_input, nodes_hidden, nodes_output) for _ in range(pop_size)]  # fresh pop

paired_pop = pair_pop(iris_train_data, pop)

ranked_pop = sorted(paired_pop, key=itemgetter(-1), reverse=True)  # fittest first

# Evolve the population generation by generation
iters = 0
tops, avgs = [], []

while iters != max_iterations:
    print('Iteration'.rjust(150), iters)

    new_pop_weight = iterate_pop(ranked_pop)
    ranked_pop, toperr, avgerr = rank_pop(new_pop_weight, pop)

    tops.append(toperr)
    avgs.append(avgerr)
    iters += 1

end = time.perf_counter()
print("Total time consumed by the GA generations: " + str(end - start))

# test a NN with the fittest weights
tester = NeuralNetwork(nodes_input, nodes_hidden, nodes_output)
fittestWeights = [x[0] for x in ranked_pop]
tester.assign_weights(fittestWeights, 0)
results, targets = tester.test(iris_test_data)
title2 = 'Test after ' + str(iters) + ' iterations'
plt.title(title2)
plt.ylabel('Node output')
plt.xlabel('Instances')
plt.plot(results, 'xr', linewidth=0.5)
plt.plot(targets, 's', color='black', linewidth=3)
# Matplotlib 3.3+ renamed annotate's first parameter from s to text; pass it positionally
plt.annotate('Target Values', xy=(110, 0), color='black', family='sans-serif', size='small')
plt.annotate('Test Values', xy=(110, 0.5), color='red', family='sans-serif', size='small', weight='bold')
plt.figure(2)
plt.subplot(121)
plt.title('Top individual error evolution')
plt.ylabel('Inverse error')
plt.xlabel('Iterations')
plt.plot(tops, '-g', linewidth=1)
plt.subplot(122)
plt.plot(avgs, '-g', linewidth=1)
plt.title('Population average error evolution')
plt.ylabel('Inverse error')
plt.xlabel('Iterations')
plt.show()

print('max_iterations', max_iterations, 'pop_size', pop_size, 'pop_size*0.15', int(
    pop_size * 0.15), 'mutation_rate', mutation_rate, 'crossover_rate', crossover_rate,
      'nodes_input, nodes_hidden, nodes_output', nodes_input, nodes_hidden, nodes_output)

The full source code and results are on Github; a Star would be much appreciated!


Reposted from blog.csdn.net/tong_t/article/details/80327271