Principle of Particle Swarm Algorithm

1 Introduction to particle swarm algorithm

Particle Swarm Optimization (PSO) was proposed by Dr. Eberhart and Dr. Kennedy in 1995. It is a swarm intelligence algorithm designed by simulating the foraging behavior of a flock of birds. There are different food sources scattered across an area, and the task of the flock is to find the largest one (the global optimal solution). Throughout the search, the birds communicate their positions to one another, so every bird learns where the best food found so far is. Eventually the whole flock gathers around the richest food source; that is, the optimal solution is found and the problem converges. Inspired by nature, scholars have developed many similar intelligent algorithms, such as the ant colony algorithm, the cuckoo search algorithm, the fish school algorithm, the hunting algorithm, and so on.

2 Algorithm principle

PSO is a heuristic algorithm derived from the foraging behavior of a flock of birds. Imagine a flock that sets out to forage together; the goal is to find the richest food source in the feasible region. As if in a shared group chat, the birds constantly report the best spots they have found. The strategy is as follows:
  1. Each bird starts at a random place and flies off in a random direction.
  2. After every minute of flying, each bird shares in the group the best location (and how much food is there) it has found so far, and from these reports the best position found by the whole group is computed.
  3. Each bird reviews its own path and sets its next direction by weighing the best position it has visited against the best position found by the group.
  4. If everyone has gathered near the same place, stop searching; otherwise repeat steps 2 and 3.
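The four steps above can be sketched directly in NumPy. The parameter values below (inertia 0.7, attraction weights 1.5) and the toy objective are illustrative assumptions, not part of the original strategy:

```python
import numpy as np

rng = np.random.default_rng(42)

def f(x):
    # toy fitness: f(x) = sum(x^2); smaller is better, minimum at the origin
    return np.sum(x ** 2, axis=-1)

n, d = 10, 2                             # 10 "birds" searching a 2-D region
x = rng.uniform(-5, 5, (n, d))           # step 1: random starting places
v = rng.uniform(-1, 1, (n, d))           # ... and random directions
pbest = x.copy()                         # each bird's best place so far
gbest = pbest[f(pbest).argmin()].copy()  # best place reported by the group

for _ in range(200):
    # step 3: steer toward both the personal best and the group best
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    # step 2: share findings and recompute the personal and group bests
    better = f(x) < f(pbest)
    pbest[better] = x[better]
    gbest = pbest[f(pbest).argmin()].copy()
    # step 4: stop once everyone has gathered near the same place
    if pbest.std(axis=0).sum() < 1e-8:
        break

print(gbest)  # close to the optimum at (0, 0)
```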

The positions of the whole swarm are updated over iterations as shown below; each red dot is a particle (picture from scikit-opt).

[Figure: the swarm of particles gradually converging around the optimum]

The position update of a single bird (one particle) is shown below:

[Figure: one particle's new velocity combining its old velocity, its own best position, and the group's best position]

3 Iterative formula

The iterative formulas are simple and clear.
The velocity update formula:

v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x)

The position update formula:

x = x + v

where pbest is the best position the particle itself has visited and gbest is the best position found by the whole group, and:

- c1, c2: acceleration constants that adjust the maximum learning step;
- r1, r2: random numbers uniformly drawn from [0, 1], which add randomness to the search;
- w: the inertia weight, a non-negative number that adjusts the search range over the solution space.

How do we judge whether a position is good or bad? The objective function to be minimized is called the fitness function; a particle's position is plugged into the fitness function, and the smaller the result, the better.
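As a minimal illustration of the update formulas, here is one velocity-and-position update for a single one-dimensional particle. The values of w, c1, c2 below are common illustrative choices, not prescribed by the text:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.8, c1=2.0, c2=2.0):
    # One PSO update for a single one-dimensional particle.
    r1, r2 = random.random(), random.random()  # fresh randomness each step
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# fitness function: for f(x) = x^2, smaller is better
fitness = lambda x: x ** 2

# the particle is pulled toward both its personal best (1.0) and the group best (0.0)
x_new, v_new = pso_step(x=3.0, v=0.5, pbest=1.0, gbest=0.0)
```

When a particle already sits exactly on both bests, the attraction terms vanish and only the inertia term w·v remains, which is an easy way to sanity-check the formula.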

4 Algorithm flow

[Figure: algorithm flowchart — initialize the particles' positions and velocities, evaluate fitness, update each particle's personal best and the group best, update velocities and positions, and repeat until the termination condition is met]

5 Example calculation

Now we give a simple example: solve for the minimum point of the one-dimensional objective function y = x².
Initialize two particles at positions x = −3 and x = 2, with initial velocities v = 1 and v = −1. For convenience of calculation, the parameters w, c1, r1, c2, r2 are all set to 1.

[Figure: the worked iteration, step by step]
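The first iteration of this example can be checked with a few lines of Python; fixing r1 = r2 = 1, as in the text, makes the arithmetic fully deterministic:

```python
# minimize f(x) = x^2 with two particles; w = c1 = r1 = c2 = r2 = 1
f = lambda x: x ** 2

x = [-3.0, 2.0]            # initial positions
v = [1.0, -1.0]            # initial velocities
pbest = x[:]               # personal bests start at the initial positions
gbest = min(pbest, key=f)  # f(2) = 4 < f(-3) = 9, so gbest = 2

# one iteration of the velocity and position updates
for i in range(2):
    v[i] = 1 * v[i] + 1 * 1 * (pbest[i] - x[i]) + 1 * 1 * (gbest - x[i])
    x[i] = x[i] + v[i]

# particle 1: v = 1 + 0 + 5 = 6,  x = -3 + 6 = 3
# particle 2: v = -1 + 0 + 0 = -1, x = 2 - 1 = 1
print(v, x)  # [6.0, -1.0] [3.0, 1.0]
```

After this step, particle 2 sits at x = 1 with fitness 1, so the group best improves from 2 to 1.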

6 Code implementation

6.1 Based on numpy
import numpy as np
import random

# fitness: maximize -(x1-10)^2 - (x2-3)^2, optimum at (10, 3)
def suit(x):
    x1, x2 = x
    return -(x1 - 10) ** 2 + -(x2 - 3) ** 2

def best_p(current_p, person_best):  # update each particle's personal best
    x = np.zeros_like(current_p)
    n, d = current_p.shape
    for i in range(n):  # compare once per particle
        a = current_p[i]
        b = person_best[i]
        if suit(b) > suit(a):
            x[i] = b
        else:
            x[i] = a
    return x

def global_b(person_best):  # pick the group best from the n*d personal bests
    n, d = person_best.shape
    s = [suit(person_best[j]) for j in range(n)]
    i = np.array(s).argmax()
    return np.array(person_best[i])

# init
n = 40  # number of particles
d = 2   # number of dimensions
current_v = np.array([random.randint(1, 100) for i in range(n * d)]).reshape(-1, d)
current_p = np.array([random.randint(1, 100) for i in range(n * d)]).reshape(-1, d)
person_best = current_p.copy()
global_best = global_b(person_best)
T = 0
w = 1
while T < 100000:
    # stop once the personal bests have gathered in one place
    if person_best.std(axis=0).sum() < .1:
        break
    w = w * 0.99996  # slowly decay the inertia weight
    r1 = random.random()
    r2 = random.random()
    current_v = w * current_v + r1 * (person_best - current_p) + r2 * (global_best - current_p)
    current_p = current_p + current_v
    person_best = best_p(current_p, person_best)
    global_best = global_b(person_best)
    T += 1
print(T, person_best)
6.2 Based on sko.pso

The Python scikit-opt (sko) library contains the commonly used heuristic algorithms, including particle swarm optimization (PSO), which can be called directly; it is fast and convenient.

from sko.PSO import PSO

def demo_func(x):
    x1, x2, x3 = x
    return (x1 - 5) ** 2 + (x2 - 2) ** 2 + (x3 - 19) ** 2

pso = PSO(func=demo_func, dim=3)
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)

>>>best_x is  [ 4.99981675  2.00044853 18.99955148] best_y is [4.35931123e-07]

References:
[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in IEEE International Conference on Neural Networks, vol. 4, IEEE Press, 1995, pp. 1942–1948.
[2] scikit-opt

Origin: blog.csdn.net/weixin_43705953/article/details/111510906