Implementing the particle swarm optimization algorithm in Python

  What is the PSO algorithm

  Particle swarm optimization (PSO), sometimes called the bird-foraging algorithm, was proposed by J. Kennedy and R. C. Eberhart in 1995. It belongs to the family of evolutionary algorithms: it starts from a set of random solutions and searches for the optimal solution iteratively, evaluating the quality of each solution by its fitness.

  The algorithm has attracted academic attention for its easy implementation, high precision, and rapid convergence, and it has demonstrated its strengths in solving practical problems.

  Solving process

  PSO finds the optimal solution by simulating the foraging behavior of a flock of birds.

  Imagine a flock of birds searching for food in some region. In this region there is only one piece of food (corresponding to the optimal solution). None of the birds knows where the food is, but each one can estimate roughly how far it is from the food (the distance is determined by the fitness value of the solution). The simplest and most effective strategy is therefore to search the area around the bird that is currently nearest to the food.

  In PSO, each candidate solution is a "bird" in the search space, which we call a "particle." Every particle has a fitness value determined by the function being optimized, and it updates its position using both the best position it has found itself and the best position found by the whole population.

  That is, PSO initializes a group of random particles and then searches for the optimal solution iteratively. In each iteration, every particle updates itself by tracking two "extremes":

  1. The best solution found by the particle itself, the personal best pbest.

  2. The best solution found so far by the entire population, the global best gbest.

  Each particle also has an important attribute called velocity, which determines the distance and direction of its next position update.

  Particles follow the current best particle as they search the solution space.

  The pseudocode of the particle swarm solver is as follows:

  initialize the swarm
  while not reached the maximum number of iterations or minimum loss:
      for each_p in swarm:
          calculate its fitness
          if the fitness is higher than the particle's best historical fitness (pbest):
              set this value as the new pbest
      select the best-fitness particle of the whole swarm as the gbest
      for each_p in swarm:
          calculate the particle velocity according to equation (a)
          update the particle position according to equation (b)
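  The pseudocode above can also be sketched as a minimal, vectorized 1-D loop (a hypothetical skeleton with an assumed example objective, not the post's implementation — the full class-based version appears below):

```python
import numpy as np

def fitness(x):                  # assumed example objective (maximum at x = 3)
    return -(x - 3.0) ** 2

n_particles, n_iters = 5, 50
X = np.random.uniform(0, 5, n_particles)   # positions
V = np.zeros(n_particles)                  # velocities
pbest = X.copy()                           # personal best positions
pbest_fit = fitness(X)                     # personal best fitness values
gbest = pbest[np.argmax(pbest_fit)]        # global best position

for _ in range(n_iters):
    fit = fitness(X)
    improved = fit > pbest_fit             # update each pbest
    pbest[improved] = X[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[np.argmax(pbest_fit)]    # update gbest
    # equation (a): velocity update, then equation (b): position update
    r1, r2 = np.random.rand(n_particles), np.random.rand(n_particles)
    V = 0.8 * V + 2 * r1 * (pbest - X) + 2 * r2 * (gbest - X)
    X = X + V
```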

  Equation (a) is:

  v[i] = w * v[i] + c1 * rand() * (pbest[i] - present[i]) + c2 * rand() * (gbest - present[i])

  Equation (b) is:

  present[i] = present[i] + v[i]


  In these equations, v[i] is the velocity of the i-th particle, w is the inertia weight (which helps the swarm escape local optima), present[i] is the current position of the i-th particle, pbest[i] is the historical best position of the i-th particle, gbest is the global best position, and rand() is a random number in (0, 1). c1 and c2 are learning factors; usually c1 = c2 = 2.

  In equation (a), the term

  c1 * rand() * (pbest[i] - present[i])

  represents the particle using the best solution it has found itself to update its velocity, while the term

  c2 * rand() * (gbest - present[i])

  represents the particle using the best solution currently found by the entire population to update its velocity.
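  One update step of equations (a) and (b) for a single one-dimensional particle might look like the following sketch (all numeric values are assumed examples):

```python
import random

# Assumed example values for one 1-D particle
w, c1, c2 = 0.8, 2.0, 2.0    # inertia weight and learning factors
v = 0.5                      # current velocity v[i]
present = 3.0                # current position present[i]
pbest = 4.0                  # particle's historical best position
gbest = 10.0                 # swarm's global best position

# Equation (a): velocity update
v = w * v + c1 * random.random() * (pbest - present) \
          + c2 * random.random() * (gbest - present)

# Equation (b): position update
present = present + v
```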

  Implementation code

  The example below finds the maximum of the quadratic function y = -x^2 + 20x + 10.

  import numpy as np

  class PSO:
      def __init__(self, pN, dim, max_iter, func):
          self.w = 0.8              # inertia weight
          self.c1 = 2               # cognitive (self) learning factor
          self.c2 = 2               # social learning factor
          self.r1 = 0.6             # cognitive learning rate (fixed here in place of rand())
          self.r2 = 0.3             # social learning rate (fixed here in place of rand())
          self.pN = pN              # number of particles
          self.dim = dim            # search dimensionality
          self.max_iter = max_iter  # maximum number of iterations
          self.X = np.zeros((self.pN, self.dim))  # particle positions
          self.V = np.zeros((self.pN, self.dim))  # particle velocities
          self.pbest = np.zeros((self.pN, self.dim), dtype=float)  # best historical position of each particle
          self.gbest = np.zeros((1, self.dim), dtype=float)        # global best position
          self.p_bestfit = np.zeros(self.pN)  # best historical fitness of each particle
          self.fit = -1e15          # global best fitness
          self.func = func

      def function(self, x):
          return self.func(x)

      def init_pop(self):  # initialize the population
          for i in range(self.pN):
              # initialize the position and velocity of each particle
              self.X[i] = np.random.uniform(0, 5, [1, self.dim])
              self.V[i] = np.random.uniform(0, 5, [1, self.dim])
              self.pbest[i] = self.X[i]  # initial historical best position
              self.p_bestfit[i] = self.function(self.X[i])  # corresponding fitness value
              if self.p_bestfit[i] > self.fit:
                  self.fit = self.p_bestfit[i]
                  self.gbest = self.X[i].copy()  # copy, so later moves of X[i] do not alias gbest

      def update(self):
          fitness = []
          for _ in range(self.max_iter):
              for i in range(self.pN):  # update pbest and gbest
                  temp = self.function(self.X[i])  # fitness at the current position
                  if temp > self.p_bestfit[i]:  # update the personal best
                      self.p_bestfit[i] = temp
                      self.pbest[i] = self.X[i]
                      if self.p_bestfit[i] > self.fit:  # update the global best
                          self.gbest = self.X[i].copy()
                          self.fit = self.p_bestfit[i]
              for i in range(self.pN):  # update velocities and positions
                  self.V[i] = self.w*self.V[i] + self.c1*self.r1*(self.pbest[i] - self.X[i]) + \
                              self.c2*self.r2*(self.gbest - self.X[i])
                  self.X[i] = self.X[i] + self.V[i]
              fitness.append(self.fit)  # record the best fitness per iteration
          return self.gbest, self.fit

  def count_func(x):
      y = -x**2 + 20*x + 10
      return y

  pso_example = PSO(pN=50, dim=1, max_iter=300, func=count_func)
  pso_example.init_pop()
  x_best, fit_best = pso_example.update()
  print(x_best, fit_best)
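  Since y = -x^2 + 20x + 10 is a simple downward parabola, the result can be checked analytically: dy/dx = -2x + 20 vanishes at x = 10, giving a maximum of y = 110, so a correct run should converge close to these values. A small sketch of that check (not in the original post):

```python
# Analytic check: for y = -x**2 + 20*x + 10 the derivative
# dy/dx = -2*x + 20 vanishes at x = 10, giving the maximum y = 110.
x_opt = 20 / 2                       # vertex of the parabola
y_opt = -x_opt**2 + 20 * x_opt + 10
print(x_opt, y_opt)                  # prints 10.0 110.0
```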

Origin blog.51cto.com/14503791/2435420