Multi-objective problem based on particle swarm optimization

Overview of particle swarm algorithm

In practical engineering optimization, most problems are multi-objective. Compared with single-objective optimization, the distinguishing feature of a multi-objective problem is that all objectives must be optimized so that they reach a combined optimum simultaneously. However, because the objectives of a multi-objective problem usually conflict with one another, driving one objective to its optimum often makes the others much worse. Methods suited to single-objective problems are therefore difficult to apply directly to multi-objective problems. Multi-objective optimization attracted attention early on, and many solution methods have since been developed. The two most important concepts in multi-objective optimization are the non-inferior solution and the non-inferior solution set, defined as follows.

Non-inferior solution: a feasible solution of a multi-objective optimization problem is called a non-inferior solution if there exists no other feasible solution whose objectives are all better than those of this solution, i.e. no other solution dominates it. The set of all non-inferior solutions is called the non-inferior set.
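To make the definition concrete, here is a minimal Python sketch (not from the original post; it assumes objective vectors are plain tuples and that all objectives are minimized) of a dominance test and a non-inferior filter.

```python
def dominates(a, b):
    """Return True if objective vector `a` dominates `b` (minimization):
    `a` is no worse than `b` in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def noninferior_set(solutions):
    """Keep only the objective vectors that no other vector dominates."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Example with (cost, weight) pairs: (1, 5) and (3, 2) are non-inferior, (4, 6) is dominated.
print(noninferior_set([(1, 5), (3, 2), (4, 6)]))  # [(1, 5), (3, 2)]
```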
When solving practical problems, an overly large set of non-inferior solutions cannot be used directly; the decision maker must choose the single most satisfactory non-inferior solution as the final solution. There are three main approaches to obtaining the final solution. The first is the generating approach, which includes the weighting method, the constraint method, hybrid methods combining the two, and multi-objective genetic algorithms: a large number of non-inferior solutions are computed first, forming a subset of the non-inferior set, and the final solution is then chosen according to the decision maker's preferences. The second is the interactive approach, typified by the Geoffrion method for linearly constrained multi-objective optimization: rather than generating many non-inferior solutions up front, the final solution is reached step by step through a dialogue between the analyst and the decision maker. The third asks the decision maker to specify the relative importance of the objectives in advance; using these weights, the algorithm converts the multi-objective problem into a single-objective problem and solves it, as in the sketch below.
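As a small illustration of this third approach, the sketch below (my own example with hypothetical objective values and weights, not code from the post) scalarizes a two-objective vector into a single score using decision-maker-supplied weights.

```python
def weighted_sum(objectives, weights):
    """Scalarize a vector of objective values (to be maximized) into one score
    using the relative importance supplied by the decision maker."""
    assert len(objectives) == len(weights)
    return sum(w * f for w, f in zip(weights, objectives))

# Hypothetical example: the decision maker values objective 1 twice as much as objective 2.
print(weighted_sum([120.0, 35.0], [2.0, 1.0]))  # 275.0
```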
Solving multi-objective optimization problems with evolutionary algorithms has been a research hotspot in recent years. As early as 1967, Rosenberg suggested using evolution-based search to handle multi-objective optimization, though without a concrete implementation. Holland proposed the genetic algorithm in 1975, and ten years later Schaffer proposed the vector evaluated genetic algorithm, combining genetic algorithms with multi-objective optimization for the first time. In 1989, Goldberg, in his book Genetic Algorithms in Search, Optimization, and Machine Learning, proposed the new idea of combining Pareto theory from economics with evolutionary algorithms to solve multi-objective optimization problems, which has served as an important guide for subsequent research on evolutionary multi-objective optimization. Today, multi-objective evolutionary algorithms have become a popular direction in evolutionary computation, and newer methods such as particle swarm optimization, ant colony algorithms, artificial immune systems, estimation of distribution algorithms, and co-evolutionary algorithms have all been applied to multi-objective optimization. This case study uses a multi-objective particle swarm algorithm to solve the multi-objective knapsack problem.

Problem description and analysis

(The problem description and its mathematical model, including the fitness formula (10-1) referenced below, are given only as images in the original post and are not reproduced here.)

Solutions and results

Fitness calculation
The fitness of each particle is computed according to formula (10-1). Each individual has two fitness values, namely the total value and the total volume, and every individual must also satisfy the weight (mass) constraint.
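Since formula (10-1) appears only as an image in the original post, the following is a guess at the shape of this step rather than the author's actual code: for a binary particle it accumulates the two objectives (total value and total volume) and checks the weight constraint. The names values, volumes, weights and max_weight are hypothetical.

```python
import numpy as np

def evaluate(particle, values, volumes, weights, max_weight):
    """Compute the two fitness values of a binary knapsack particle:
    total value (to be maximized) and total volume (to be minimized).
    Returns (value, volume, feasible), where `feasible` means the
    weight constraint is satisfied."""
    mask = np.asarray(particle, dtype=bool)
    total_value = float(np.sum(values[mask]))
    total_volume = float(np.sum(volumes[mask]))
    feasible = float(np.sum(weights[mask])) <= max_weight
    return total_value, total_volume, feasible

# Tiny made-up instance with three items.
values  = np.array([10.0, 7.0, 4.0])
volumes = np.array([3.0, 2.0, 5.0])
weights = np.array([4.0, 3.0, 6.0])
print(evaluate([1, 1, 0], values, volumes, weights, max_weight=8.0))  # (17.0, 5.0, True)
```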
Screening the non-inferior solution set
Screening the non-inferior solution set consists of two parts: initial screening and updating. Initial screening means that, after the particles are initialized, any particle that is not dominated by another particle (that is, no other particle is better in both fitness values) is put into the non-inferior solution set, and before each particle update one particle is randomly selected from the non-inferior solution set as the optimal (group-best) particle of the population. Updating the non-inferior solution set means that when a new particle is dominated neither by the other particles nor by the particles already in the non-inferior solution set, the new particle is put into the non-inferior solution set; again, before each particle update one particle is randomly selected from the set as the optimal particle of the group.
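One possible form of this bookkeeping, assuming each archive entry is a (position, fitness) pair with fitness = (value, volume), value maximized and volume minimized (the author's withheld code may differ):

```python
import random

def dominates(f_a, f_b):
    """f = (value, volume); value is maximized, volume is minimized.
    Return True if f_a dominates f_b."""
    no_worse = f_a[0] >= f_b[0] and f_a[1] <= f_b[1]
    better = f_a[0] > f_b[0] or f_a[1] < f_b[1]
    return no_worse and better

def update_archive(archive, new_entries):
    """Merge new (position, fitness) entries into the non-inferior archive,
    keeping only entries not dominated by any other entry."""
    merged = archive + new_entries
    return [e for e in merged
            if not any(dominates(o[1], e[1]) for o in merged if o is not e)]

def pick_global_best(archive):
    """Before each particle update, draw the group-best particle at random
    from the non-inferior archive."""
    return random.choice(archive)
```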
Particle speed and position update
The particle update formulas are as follows:

$$v_i^{k+1} = \omega v_i^k + c_1 r_1 \big(P_{i,\mathrm{best}}^k - x_i^k\big) + c_2 r_2 \big(P_{g,\mathrm{best}}^k - x_i^k\big)$$

$$x_i^{k+1} = x_i^k + v_i^{k+1}$$

where $\omega$ is the inertia weight; $r_1$ and $r_2$ are random numbers uniformly distributed in the interval [0, 1]; $k$ is the current iteration number; $P_{i,\mathrm{best}}$ is the individual optimal particle position; $P_{g,\mathrm{best}}$ is the global optimal particle position; $c_1$ and $c_2$ are constant acceleration coefficients; $v$ is the particle velocity; and $x$ is the particle position.
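In code, one step of this update might look like the sketch below; mapping the continuous velocity to a binary knapsack position through a sigmoid is an assumption on my part, not something stated in the post.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_update(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, v_max=4.0):
    """One velocity/position step of the update formulas above.
    Positions are kept binary via a sigmoid of the velocity (assumed encoding)."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    v_new = np.clip(v_new, -v_max, v_max)               # limit the velocity
    prob = 1.0 / (1.0 + np.exp(-v_new))                 # sigmoid of velocity
    x_new = (rng.random(x.shape) < prob).astype(float)  # sample a binary position
    return x_new, v_new
```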
Particle optimization
Particle optimization covers the individual optimal particle and the group optimal particle. The individual optimal particle is updated by comparing the current new particle with the stored individual optimal particle and keeping whichever dominates the other; when neither particle dominates, one of the two is selected at random as the individual optimal particle. The group optimal particle is a particle randomly selected from the non-inferior solution set.
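A sketch of the individual-best update, reusing the hypothetical dominates helper from the archive sketch above (not the author's code); the group best is simply pick_global_best(archive) as shown earlier.

```python
import random

def update_personal_best(x_new, f_new, p_best, f_best):
    """Keep the dominating particle as the individual best; if neither
    dominates the other, pick one of the two at random."""
    if dominates(f_new, f_best):
        return x_new, f_new
    if dominates(f_best, f_new):
        return p_best, f_best
    return random.choice([(x_new, f_new), (p_best, f_best)])
```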
(The result plot of the run is shown only as an image in the original post.)

Origin blog.csdn.net/wlfyok/article/details/108295769