Intelligent Optimization Algorithm - Particle Swarm Optimization

Traditional optimization algorithms search for the optimum with a single agent. Take hill climbing as an analogy: we usually imagine one person climbing a mountain, but a single individual has very limited power. He can easily walk into a local optimum and never get out again. Suppose he walks to the Alps: even if he could eventually reach Mount Everest by other means, the horizontal distance across Eurasia is huge and a great deal of time is wasted. With a swarm algorithm, a whole group climbs: some people search Europe, some search Asia, some search Africa, and they communicate with each other, so the algorithm uses not only each individual's own experience but also the experience of the group. One question remains: if one person reaches the Alps first while everyone else is still on flat ground, does swarm intelligence mean everyone abandons their own search and heads straight for the Alps? No. The algorithm combines swarm intelligence with individual intelligence: each person weighs his current position against the group's best-known position. If the difference is large, then even if he moves toward the Alps he generates a large step size, and on the next move he may still cross from Europe into Asia. So even if the whole group drifts toward the Alps, the individuals with a large step size can still cross back toward Asia and find Mount Everest.

  • Description
    Swarm intelligence methods, such as the genetic algorithm, particle swarm optimization, and the ant colony algorithm, all make decisions with a group rather than with single individuals.

  • Algorithm principle
    In the past, our algorithms updated the current value like this: x = x + w*v,
    where x is the input value, v is the step size, and w is the step-size weight (constant by default). The update is kept only when the new value is smaller than the best value found so far, so the change of x depends only on the individual.
    For particle swarm optimization, there are two differences:

    • 1. The update condition for x is different: both an individual comparison and a group comparison are used. When each individual is traversed, its fitness is compared not only with its own best value but also with the group's best value.
    • 2. The method above is still flawed: the group may eventually converge at one point and be unable to get out again, so a weight coefficient is introduced to keep individual step sizes distinct, and even when the particles converge their step sizes (that is, their velocities) differ. The velocity update formula is V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X), and the position update is X = X + V, where w is the inertia weight; c1 and c2 are the individual step weight and the group step weight, used to balance the two terms so that no single factor has too much influence; r1 and r2 are random values in [0, 1]; and pbest and gbest are the individual best and the group best, respectively. A single update step is sketched in the code right after this item.
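
    As a minimal sketch of these two formulas, the snippet below performs one update step for a single one-dimensional particle. The concrete numbers for w, c1, c2 and the positions are illustrative assumptions (w = 0.8 and c1 = c2 = 2 match the full listing further down); here r1 and r2 are redrawn from [0, 1] as the formula describes.

    import numpy as np

    # one PSO update step for a single 1-D particle (illustrative values)
    w, c1, c2 = 0.8, 2.0, 2.0                      # inertia weight and the two step weights
    X = np.array([0.5])                            # current position
    V = np.array([0.1])                            # current velocity (step size)
    pbest = np.array([1.8])                        # this particle's best position so far
    gbest = np.array([2.1])                        # the group's best position so far

    r1, r2 = np.random.rand(), np.random.rand()    # random values in [0, 1]
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # velocity update
    X = X + V                                                    # position update
    print(X, V)
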
  • PSO process (steps c and d correspond to difference 1 above, step e to difference 2)
    a). Initialize a group of particles (the size of the group is m), including random positions and velocities;
    b). Evaluate the fitness of each particle;
    c). For each particle, compare its fitness value with the fitness of the best position pbest it has experienced; if it is better, take the current position as the new pbest;
    d). For each particle, compare its fitness value with the fitness of the best position gbest experienced by the whole group; if it is better, update gbest (reset its index);
    e). Update the velocity and position of the particle according to the two update formulas above;
    f). If the stopping condition is not met (usually a good enough fitness value or a preset maximum number of generations Gmax), go back to b).
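
The complete implementation below follows steps a) to f), using f(x) = x^2 - 4x + 3 as the objective function to minimize.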

import numpy as np
import random
import matplotlib.pyplot as plt


# ---------------------- PSO parameter setup ----------------------
class PSO():
    def __init__(self, pN, dim, max_iter):
        self.w = 0.8   # inertia weight
        self.c1 = 2    # individual step weight
        self.c2 = 2    # group step weight
        self.r1 = 0.6  # fixed here; the formula also allows redrawing r1, r2 in [0, 1] each update
        self.r2 = 0.3
        self.pN = pN  # number of particles
        self.dim = dim  # search dimensionality
        self.max_iter = max_iter  # number of iterations
        self.X = np.zeros((self.pN, self.dim))  # positions of all particles
        self.V = np.zeros((self.pN, self.dim))  # velocities of all particles
        self.pbest = np.zeros((self.pN, self.dim))  # best position visited by each particle
        self.gbest = np.zeros((1, self.dim))  # best position visited by the swarm
        self.p_fit = np.zeros(self.pN)  # best historical fitness of each particle
        self.fit = 1e10  # global best fitness

    # --------------------- objective function (Sphere function) -----------------------------
    # used here to find the minimum of the function
    def function(self, X):
        # return X**2-4*X+3+7*X**4+X**3-X**5
        return X ** 2 - 4 * X + 3


    # --------------------- initialize the swarm ----------------------------------
    def init_Population(self):
        for i in range(self.pN):
            for j in range(self.dim):
                self.X[i][j] = random.uniform(0, 1)
                self.V[i][j] = random.uniform(0, 1)
            self.pbest[i] = self.X[i]
            tmp = self.function(self.X[i])
            self.p_fit[i] = tmp
            if tmp < self.fit:
                self.fit = tmp
                self.gbest = self.X[i].copy()  # copy so gbest does not keep tracking this particle

    # ---------------------- update particle positions ----------------------------------

    def iterator(self):
        fitness = []
        for t in range(self.max_iter):

            # 1. update the individual bests and the global best
            for i in range(self.pN):  # update pbest / gbest
                temp = self.function(self.X[i])
                if temp < self.p_fit[i]:  # update the individual best
                    self.p_fit[i] = temp
                    self.pbest[i] = self.X[i]
                    if self.p_fit[i] < self.fit:  # update the global best
                        self.gbest = self.X[i].copy()  # copy, as above
                        self.fit = self.p_fit[i]

            # 2. update the step (velocity) and the position
            for i in range(self.pN):
                self.V[i] = self.w * self.V[i] + self.c1 * self.r1 * (self.pbest[i] - self.X[i]) + \
                            self.c2 * self.r2 * (self.gbest - self.X[i])
                self.X[i] = self.X[i] + self.V[i]
            fitness.append(self.fit)
            print(self.X[0], end=" ")
            print(self.fit)  # print the current best value
        return fitness

# ---------------------- run the program -----------------------


my_pso = PSO(pN=30, dim=1, max_iter=100)
my_pso.init_Population()
fitness = my_pso.iterator()
# ------------------- plot the fitness curve --------------------
plt.figure(1)
plt.title("Figure1")
plt.xlabel("iterators", size=14)
plt.ylabel("fitness", size=14)
t = np.arange(len(fitness))  # iteration indices
fitness = np.array(fitness)
plt.plot(t, fitness, color='b', linewidth=3)
plt.show()
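
With these settings the fitness curve should drop quickly toward -1, the minimum of x^2 - 4x + 3 (attained at x = 2), and the positions printed for particle 0 should settle near that point.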

