TOPSIS method with the entropy weight method (model + MATLAB code)

TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) can be translated as the "ranking method by closeness to the ideal solution"; in Chinese literature it is commonly known as the superior-inferior solution distance method.

The TOPSIS method is a widely used comprehensive evaluation method. It makes full use of the information in the original data, and its results accurately reflect the gaps between the evaluated schemes.

1. Model introduction

Maximum-type indicators (benefit indicators): the larger, the better

Minimum-type indicators (cost indicators): the smaller, the better

Intermediate-type indicators: the closer to a particular value, the better

Interval-type indicators: best when the value falls within a certain interval

When there is only one indicator, the score can be computed directly as (x - min) / (max - min).
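For instance, with hypothetical raw values for a single indicator:

x = [89; 60; 74; 99]; % hypothetical raw values of a single maximum-type indicator
score = (x - min(x)) ./ (max(x) - min(x)) % min-max score: 0 for the worst object, 1 for the best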

Step 1: Positivize the original matrix

Unify the indicator types: converting all indicators into maximum-type indicators is called indicator positivization (this is the most commonly used approach and can be cited in papers).

The formula for converting a minimum-type indicator into a maximum-type indicator: max - x

function [posit_x] = Min2Max(x)
posit_x = max(x) - x;
%posit_x = 1 ./ x; % if every element of x is positive, this also works for positivization
end

Converting an intermediate-type indicator into a maximum-type indicator:

function [posit_x] = Mid2Max(x,best)
M = max(abs(x-best));          % largest deviation from the best value
posit_x = 1 - abs(x-best) / M; % equals 1 when x equals best, and decreases as x moves away
end

Converting an interval-type indicator into a maximum-type indicator:

function [posit_x] = Inter2Max(x,a,b)
r_x = size(x,1);               % number of rows of x
M = max([a-min(x),max(x)-b]);  % largest distance of any data point from the interval [a,b]
posit_x = zeros(r_x,1);        % zeros usage: zeros(3), zeros(3,1), ones(3)
% preallocate posit_x with zeros to save processing time
for i = 1 : r_x
    if x(i) < a
        posit_x(i) = 1-(a-x(i))/M;
    elseif x(i) > b
        posit_x(i) = 1-(x(i)-b)/M;
    else
        posit_x(i) = 1;        % values inside the interval get the maximum score
    end
end
end

Code for Step 1:

[n,m] = size(X);
disp(['There are ' num2str(n) ' evaluation objects and ' num2str(m) ' evaluation indicators'])
Judge = input(['Do these ' num2str(m) ' indicators need positivization? Enter 1 for yes, 0 for no: ']);

if Judge == 1
    Position = input('Enter the columns of the indicators that need positivization, e.g. if columns 2, 3 and 6 need processing, enter [2,3,6]: '); %[2,3,4]
    disp('Enter the indicator type of each of these columns (1: min-type, 2: intermediate, 3: interval) ')
    Type = input('e.g. if column 2 is min-type, column 3 is interval and column 6 is intermediate, enter [1,3,2]: '); %[2,1,3]
    % Note: Position and Type are row vectors of the same length
    for i = 1 : size(Position,2) % each listed column is processed separately, so we loop once per column to be processed
        X(:,Position(i)) = Positivization(X(:,Position(i)),Type(i),Position(i));
        % Positivization is a user-defined function that performs the positivization; it takes three arguments:
        % 1) the column vector to positivize, X(:,Position(i)) -- recall that X(:,n) selects every element of column n
        % 2) the indicator type of that column (1: min-type, 2: intermediate, 3: interval)
        % 3) the index of the column of the original matrix currently being processed
        % It returns the positivized indicator column, which is assigned straight back to the column it came from
    end
    disp('Positivized matrix X = ')
    disp(X)
end
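The loop above calls a helper function Positivization, which dispatches to the three conversion functions defined earlier; it is referenced but not defined in this section. A minimal sketch consistent with those functions (the prompts and messages are illustrative):

function [posit_x] = Positivization(x,type,i)
% x: the column vector to positivize; type: 1 = min-type, 2 = intermediate, 3 = interval
% i: the index of the column in the original matrix (used only in the printed messages)
if type == 1
    disp(['Column ' num2str(i) ' is a min-type indicator; positivizing...'])
    posit_x = Min2Max(x);
elseif type == 2
    disp(['Column ' num2str(i) ' is an intermediate indicator; positivizing...'])
    best = input('Enter the best (target) value for this indicator: ');
    posit_x = Mid2Max(x,best);
elseif type == 3
    disp(['Column ' num2str(i) ' is an interval indicator; positivizing...'])
    a = input('Enter the lower bound of the best interval: ');
    b = input('Enter the upper bound of the best interval: ');
    posit_x = Inter2Max(x,a,b);
else
    error('Type must be 1, 2 or 3')
end
end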

Step 2: Standardize the positivized matrix

Standardization: to eliminate the influence of the different dimensions (units) of the indicators, the positivized matrix must be standardized.
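Concretely, each element is divided by the Euclidean norm of its column, which is exactly what the code below computes:

z_ij = x_ij / sqrt( x_1j^2 + x_2j^2 + ... + x_nj^2 )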

Code for Step 2:

Z = X ./ repmat(sum(X.*X) .^ 0.5, n, 1); % divide each element by the Euclidean norm of its column
disp('Standardized matrix Z = ')
disp(Z)

Step 3: Calculate the score and normalize

With multiple indicators, the (unnormalized) score of the i-th evaluation object is S_i = D_i^- / (D_i^+ + D_i^-), where D_i^+ is the distance from z_i to the maximum (ideal) point and D_i^- is the distance from z_i to the minimum (anti-ideal) point.

Code for Step 3:

D_P = sum([(Z - repmat(max(Z),n,1)) .^ 2 ],2) .^ 0.5; % D+, the distance vector to the maximum (ideal) point
D_N = sum([(Z - repmat(min(Z),n,1)) .^ 2 ],2) .^ 0.5; % D-, the distance vector to the minimum (anti-ideal) point
S = D_N ./ (D_P+D_N); % unnormalized scores
disp('The final scores are:')
stand_S = S / sum(S)
[sorted_S,index] = sort(stand_S ,'descend')

TOPSIS with weights: use the AHP (Analytic Hierarchy Process) to determine weights for the m evaluation indicators.

However, the AHP is highly subjective, so it is recommended to use the entropy weight method for objective weighting instead.
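Whichever method supplies the weights, they enter only the distance computation of step 3. A minimal sketch, assuming w is a 1-by-m row vector of weights summing to 1 (the equal-weight initialization is just a placeholder):

w = ones(1,m) / m; % placeholder: equal weights; replace with AHP or entropy weights
D_P = sum(repmat(w,n,1) .* (Z - repmat(max(Z),n,1)) .^ 2, 2) .^ 0.5; % weighted distance to the ideal point
D_N = sum(repmat(w,n,1) .* (Z - repmat(min(Z),n,1)) .^ 2, 2) .^ 0.5; % weighted distance to the anti-ideal point
S = D_N ./ (D_P + D_N); % unnormalized weighted scores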

The entropy weight method is an objective weighting method.

Underlying principle: the smaller the degree of variation of an indicator (which can be understood as its variance), the less information it reflects, and the lower its weight should be. ("Objective" means the data themselves tell us the weights.)

How to measure the amount of information:

The more likely an event is, the less information it carries; the less likely it is, the more information it carries. Information is therefore measured through probability, using a logarithmic function.
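In formula form (a standard expression consistent with the description above), the amount of information carried by an event x with probability p(x) is

I(x) = -ln( p(x) )

which approaches 0 as p(x) approaches 1.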

Information entropy (in essence, the expected value of the amount of information):

The larger the information entropy, the smaller the corresponding amount of information
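As the expected value of the amount of information, the information entropy of a variable taking values x_1, ..., x_n with probabilities p_1, ..., p_n is

H(X) = -( p_1*ln(p_1) + p_2*ln(p_2) + ... + p_n*ln(p_n) )

It reaches its maximum value ln(n) when all outcomes are equally likely, which is exactly the case where the data are least informative.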

Calculation steps of the entropy weight method:

(1) Check whether the input matrix contains negative numbers; if so, re-standardize it to a non-negative range.

(2) Calculate the proportion of the i-th sample under the j-th indicator, and treat it as the probability used in the information entropy calculation.

(3) Calculate the information entropy of each indicator, compute its information utility value, and normalize to obtain the entropy weight of each indicator (a code sketch combining these steps follows below).

Definition of the information utility value: d_j = 1 - e_j. The larger the information utility value, the more information the indicator carries.

The entropy weight of each indicator is obtained by normalizing the information utility values: W_j = d_j / (d_1 + d_2 + ... + d_m).
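Putting steps (1)-(3) together, here is a minimal MATLAB sketch of the entropy weight method (the function name Entropy_Method is illustrative; Z is assumed to be the standardized, non-negative n-by-m matrix from step 2):

function [W] = Entropy_Method(Z)
% Entropy weight method for a standardized, non-negative matrix Z (n samples x m indicators)
[n,m] = size(Z);
D = zeros(1,m); % information utility value of each indicator
for j = 1:m
    p = Z(:,j) / sum(Z(:,j)); % proportion of sample i under indicator j, treated as a probability
    p(p == 0) = []; % convention: 0*ln(0) is taken to be 0, so zero proportions are dropped
    e = -sum(p .* log(p)) / log(n); % information entropy, scaled so that it lies in [0,1]
    D(j) = 1 - e; % information utility value d_j = 1 - e_j
end
W = D / sum(D); % normalize the utility values to obtain the entropy weights
end

The returned row vector W can be plugged directly into the weighted distance computation shown after the AHP remark above.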

