Particle Swarm Optimization: Extremum Search for a Nonlinear Function

I. Introduction  

    Particle swarm optimization (PSO) is a stochastic search algorithm based on group collaboration. It grew out of studies of the foraging behavior of bird flocks and was developed by simulating how a flock cooperates while searching for food.

II. Particle Swarm Algorithm Analysis

1. Basic idea

    PSO models each bird in the flock as a particle with just two attributes: a velocity, which gives the direction and speed of movement, and a position. Each particle searches the space independently for the optimal solution and records the best position it has found so far as its personal best (pbest); the personal bests are shared across the swarm, and the best of them becomes the current global best (gbest) of the whole population. At every iteration each particle adjusts its velocity and position according to its own pbest and the current gbest, and the process repeats until a termination condition is met, at which point gbest is returned as the solution. In short, PSO finds the optimum through collaboration and information sharing among the individuals of the swarm.

2. Initialization

  First initialize all parameters (they can, of course, be changed later): the maximum number of iterations, the acceleration factors c1 and c2, the inertia weight w, the population size sizepop, the velocity limits, the range of the variables, the dimension dim of the fitness function (i.e. the number of arguments of the objective function), and the objective function to be optimized. Then randomly initialize the positions of the whole population within the search space, along with their initial velocities.
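As a concrete illustration, the initialization step can be sketched in Python (the post's own code is MATLAB and is not shown; the parameter values below mirror the table used in the tests later in the post):

```python
import random

# Illustrative parameter values (the post later varies several of these)
max_iter = 1000              # maximum number of iterations
c1, c2 = 1.49445, 1.49445    # acceleration factors
w = 0.8                      # inertia weight
sizepop = 200                # population size
v_min, v_max = -1.0, 1.0     # velocity limits
x_min, x_max = -5.0, 5.0     # variable range
dim = 10                     # fitness-function dimension

# Randomly initialize positions and velocities of the whole population
pop = [[random.uniform(x_min, x_max) for _ in range(dim)] for _ in range(sizepop)]
V = [[random.uniform(v_min, v_max) for _ in range(dim)] for _ in range(sizepop)]
```

Every particle starts at a random position inside the variable range with a random velocity inside the velocity limits.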

3. Personal best and global best

Define a fitness function. The best solution each particle has found so far is its personal best; the best among all the personal bests is the global best. The historical global best is also kept and updated whenever a better solution appears, which makes comparisons convenient.

4. Velocity and position update formulas

  Velocity update:

    V(j,:) = w * V(j,:) + c1 * rand * (pbest(j,:) - pop(j,:)) + c2 * rand * (gbest - pop(j,:))

where w is the inertia weight (its value cannot be negative), c1 and c2 are the acceleration factors (also non-negative), and rand is a random number drawn uniformly from [0,1]. pbest(j,:) is particle j's best position so far, pop(j,:) is its current position, and V(j,:) is its current velocity.

   Position update:

    pop(j,:) = pop(j,:) + V(j,:)

 Note: PSO starts from a population of random particles and then iterates toward the optimal solution. In each iteration a particle updates itself by tracking two "best" values, pbest and gbest; once both are known, the particle updates its velocity and position with the formulas above.
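A minimal sketch of one particle's update step, written in Python rather than the post's MATLAB (the names pbest, gbest, and the bounds mirror the notation above):

```python
import random

def update_particle(x, v, pbest_j, gbest, w=0.8, c1=1.49445, c2=1.49445,
                    v_bounds=(-1.0, 1.0), x_bounds=(-5.0, 5.0)):
    """One PSO step for a single particle: velocity update, then position update.
    Velocities and positions are clamped to their allowed ranges."""
    v_min, v_max = v_bounds
    x_min, x_max = x_bounds
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (pbest_j[d] - x[d])
              + c2 * random.random() * (gbest[d] - x[d]))
        vd = max(v_min, min(v_max, vd))          # clamp velocity
        xd = max(x_min, min(x_max, x[d] + vd))   # position update, clamped
        new_v.append(vd)
        new_x.append(xd)
    return new_x, new_v
```

Each dimension draws fresh random numbers for the cognitive (pbest) and social (gbest) terms, as in the standard formulation.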

5. Algorithm flow

(1) Initialize the positions and velocities of the population;
(2) Evaluate the fitness of every particle;
(3) Update each particle's personal best (pbest) and the global best (gbest);
(4) Update every particle's velocity and position with the formulas above;
(5) If the termination condition (maximum number of iterations or required accuracy) is met, stop and output gbest; otherwise return to step (2).

 III. MATLAB Test Results and Analysis

 The initial values of the parameters are given in the table below:

Acceleration factor c1: 1.49445
Acceleration factor c2: 1.49445
Inertia weight w: 0.8
Maximum iterations: 1000
Population size: 200
Velocity range: [-1, 1]
Variable range: [-5, 5]
Fitness-function dimension: 10
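The post's MATLAB listing is not reproduced; the following self-contained Python sketch runs the same kind of experiment, minimizing the Rastrigin function with the parameter values from the table above (the iteration count is reduced here so the demonstration runs quickly):

```python
import math
import random

def rastrigin(x):
    """Rastrigin function: sum(x_d^2 - 10*cos(2*pi*x_d) + 10); global minimum 0 at x = 0."""
    return sum(xd * xd - 10.0 * math.cos(2.0 * math.pi * xd) + 10.0 for xd in x)

def pso(fitness, dim=10, sizepop=200, max_iter=300, c1=1.49445, c2=1.49445,
        w=0.8, v_lim=(-1.0, 1.0), x_lim=(-5.0, 5.0), seed=0):
    rng = random.Random(seed)
    v_min, v_max = v_lim
    x_min, x_max = x_lim
    # Random initial positions and velocities
    pop = [[rng.uniform(x_min, x_max) for _ in range(dim)] for _ in range(sizepop)]
    V = [[rng.uniform(v_min, v_max) for _ in range(dim)] for _ in range(sizepop)]
    pbest = [x[:] for x in pop]
    pbest_fit = [fitness(x) for x in pop]
    g = min(range(sizepop), key=lambda j: pbest_fit[j])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(max_iter):
        for j in range(sizepop):
            for d in range(dim):
                vd = (w * V[j][d]
                      + c1 * rng.random() * (pbest[j][d] - pop[j][d])
                      + c2 * rng.random() * (gbest[d] - pop[j][d]))
                V[j][d] = max(v_min, min(v_max, vd))            # clamp velocity
                pop[j][d] = max(x_min, min(x_max, pop[j][d] + V[j][d]))
            fit = fitness(pop[j])
            if fit < pbest_fit[j]:                 # update personal best
                pbest[j], pbest_fit[j] = pop[j][:], fit
                if fit < gbest_fit:                # update global best
                    gbest, gbest_fit = pop[j][:], fit
    return gbest, gbest_fit

if __name__ == "__main__":
    best_x, best_f = pso(rastrigin)
    print("best fitness:", best_f)
```

Because Rastrigin is highly multimodal, the swarm typically reaches a small but nonzero fitness; repeated runs with different seeds give the kind of averaged results the tables below report.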

 

 

 

1.1 Varying the acceleration factors c1, c2 and the inertia weight w together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

c1 = 1.49445, c2 = 1.49445, w = 0.8

 

 

The test results are shown in the following table:

 Some of the visualization figures are shown below:

 

 

 

 

 1.2 Varying the acceleration factors c1, c2 and the inertia weight w together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

c1 = 1.49445, c2 = 1.49445, w = 0.5

 

 

The test results are shown in the following table:

 Some of the visualization figures are shown below:

 

 1.3 Varying the acceleration factors c1, c2 and the inertia weight w together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

c1 = 1.49445, c2 = 1.49445, w = 0.3

 

 

The test results are shown in the following table:

 Some of the visualization figures are shown below:

 

 

1.4 Varying the acceleration factors c1, c2 and the inertia weight w together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

c1 = 1.49445, c2 = 1.49445, w = 0.1

 

 

The test results are shown in the following table:

 Some of the visualization figures are shown below:

 

 

 Analysis: In summary, with the other parameters held fixed, the inertia weight w is positively correlated with global search ability: the larger w is, the stronger the global search and the weaker the local search; the smaller w is, the weaker the global search and the stronger the local search. In addition, a dynamic w achieves better results than any fixed value; the dynamic w can be varied linearly over the course of the PSO search.
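One common way to realize such a linearly varying w is a schedule that decreases it over the run (a sketch; the 0.9 → 0.4 range is a conventional choice from the PSO literature, not a value tested in this post):

```python
def linear_inertia(t, max_iter, w_start=0.9, w_end=0.4):
    """Linearly decrease the inertia weight from w_start to w_end over the run,
    so early iterations favor global exploration and late ones favor local refinement."""
    return w_start - (w_start - w_end) * t / max_iter
```

Inside the main loop one would compute w = linear_inertia(it, max_iter) before each velocity update instead of using a constant.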

 

2.1 Varying the acceleration factors c1, c2 and the inertia weight w together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

c1 = 0.5, c2 = 0.5, w = 0.8

 

 

The test results are shown in the following table:

 Some of the visualization figures are shown below:

 

 

 

 

2.2 Varying the acceleration factors c1, c2 and the inertia weight w together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

c1 = 1, c2 = 1, w = 0.8

 

 

The test results are shown in the following table:

 Some of the visualization figures are shown below:

 

 

2.3 Varying the acceleration factors c1, c2 and the inertia weight w together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

c1 = 1.5, c2 = 1.5, w = 0.8

 

 

The test results are shown in the following table:

 Some of the visualization figures are shown below:

 

 Analysis: With the inertia weight w held fixed, we tested the three sets of acceleration-factor values above, running 10 random trials for each set, recording the best fitness and the largest number of iterations needed to reach it, and averaging the best fitness. Comparing the averages shows that the larger the acceleration factors c1 and c2, the larger the average best fitness, so the global search ability gradually strengthens. However, if the adjustment step is too large, the swarm easily falls into a local optimum.

 

3.1 Varying the population size (sizepop) and the fitness-function dimension (dim) together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

sizepop = 200, dim = 5

 

 

The test results are shown in the following table:

Some of the visualization figures are shown below:

 

 

3.2 Varying the population size (sizepop) and the fitness-function dimension (dim) together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

sizepop = 200, dim = 15

 

 

The test results are shown in the following table:

Some of the visualization figures are shown below:

 

 

3.3 Varying the population size (sizepop) and the fitness-function dimension (dim) together (other parameters unchanged). The function to be optimized is the Rastrigin function, implemented in MATLAB, with:

sizepop = 200, dim = 25

 

 

The test results are shown in the following table:

Some of the visualization figures are shown below:

 

 

 

 Analysis: With the other parameters unchanged, we tested the three fitness-function dimensions above (5, 15 and 25), running 10 random trials for each, recording the best fitness and the largest number of iterations needed to reach it, and averaging the best fitness. Comparing the averages shows that the larger the fitness-function dimension (dim), the stronger the search ability.


Origin www.cnblogs.com/twzh123456/p/11977054.html