TOPSIS, short for Technique for Order Preference by Similarity to an Ideal Solution, is often rendered in Chinese as the "distance to the superior and inferior solutions" method. It evaluates and ranks alternatives using only the data at hand: a finite set of evaluation objects is sorted by how close each one is to an idealized best object, which measures the relative merit of the existing objects against one another.
Let's work through a practical example:
Example: The following table shows the physical parameters of 5 students. Use the TOPSIS method to give a comprehensive evaluation of the students' physical condition.
Note that the four indicators above do not all point in the same direction, so they must first be "positivized" (converted so that larger is always better). Three indicator types need this conversion:
1. Very-small (cost-type) indicators: smaller is better
2. Intermediate indicators: closer to some best value is better
3. Interval indicators: falling inside a given interval is better
Because different indicators have different dimensions (units), the matrix must also be standardized before distances are computed.
Determining the best and worst solution sets:
1. First check whether each decision factor has a natural limit value; if so, that value must make practical sense.
2. If no such limit exists, or it is hard to find, take the MAX and MIN of each indicator over the whole evaluation set.
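Since positivization makes every indicator larger-is-better, point 2 amounts to taking column-wise maxima and minima (a small sketch with made-up numbers):

```python
import numpy as np

# After positivization, the ideal ("best") point is the vector of column
# maxima and the anti-ideal ("worst") point the vector of column minima.
Z = np.array([[0.2, 0.5],
              [0.4, 0.1],
              [0.3, 0.9]])
z_best  = Z.max(axis=0)   # best value of each indicator
z_worst = Z.min(axis=0)   # worst value of each indicator
```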
Calculating distances:
1. Distance to the positive ideal solution: $D_i^+ = \sqrt{\sum_{j=1}^{m} (z_j^+ - z_{ij})^2}$
2. Distance to the negative ideal solution: $D_i^- = \sqrt{\sum_{j=1}^{m} (z_j^- - z_{ij})^2}$
3. Evaluation index (relative closeness): $C_i = D_i^- / (D_i^+ + D_i^-)$, where a larger $C_i$ means a better object
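The three formulas above can be sketched together as one function (unweighted, assuming a positivized and standardized matrix `Z`; `topsis_scores` is my own illustrative name):

```python
import numpy as np

def topsis_scores(Z):
    # Distances to the ideal/anti-ideal points and the closeness score
    # C_i = D_i^- / (D_i^+ + D_i^-); rows with higher C rank better.
    z_best, z_worst = Z.max(axis=0), Z.min(axis=0)
    d_plus  = np.sqrt(((Z - z_best) ** 2).sum(axis=1))
    d_minus = np.sqrt(((Z - z_worst) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

scores = topsis_scores(np.array([[1.0, 1.0], [0.0, 0.0], [1.0, 0.0]]))
```

The first row coincides with the ideal point (score 1), the second with the anti-ideal (score 0), and the third sits halfway between.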
Taking Xiaoming as an example, compute his evaluation index value:
Further extension:
The calculation above assumes that all evaluation factors are equally important, which is rarely true in practice; the factors usually differ in importance. How, then, should the weight of each factor be set?
The weights can be determined by AHP or by the entropy weight method
(the third part of this column introduced the entropy weight method in Excel).
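With weights $w_j$ in hand, a common way to use them is to scale each squared deviation before summing (a hedged sketch; some presentations instead weight the standardized matrix directly, which gives the same kind of result):

```python
import numpy as np

def weighted_topsis(Z, w):
    # Weighted TOPSIS: each squared deviation is scaled by the indicator
    # weight w_j (from AHP or the entropy method) before summing.
    z_best, z_worst = Z.max(axis=0), Z.min(axis=0)
    d_plus  = np.sqrt((w * (Z - z_best) ** 2).sum(axis=1))
    d_minus = np.sqrt((w * (Z - z_worst) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

Z = np.array([[1.0, 1.0], [0.0, 0.0], [1.0, 0.0]])
scores = weighted_topsis(Z, np.array([0.7, 0.3]))
```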
MATLAB: entropy weight method combined with TOPSIS
% Entropy-weight correction to TOPSIS
clear;clc;
load X.mat;
% Get the numbers of rows and columns
r = size(X,1);
c = size(X,2);
% First, positivize the raw indicator matrix
% Column 2: intermediate type ---> very-large type
middle = input("Enter the best intermediate value: ");
M = max(abs(X(:,2)-middle));
for i=1:r
X(i,2) = 1-abs(X(i,2)-middle)/M;
end
% Column 3: very-small type ---> very-large type
max_value = max(X(:,3));
X(:,3) = abs(X(:,3)-max_value);
% Column 4: interval type ---> very-large type
a = input("Enter the lower bound of the interval: ");
b = input("Enter the upper bound of the interval: ");
M = max(a-min(X(:,4)),max(X(:,4))-b);
for i=1:r
if (X(i,4)<a)
X(i,4) = 1-(a-X(i,4))/M;
elseif (X(i,4)<=b && X(i,4)>=a)
X(i,4) = 1;
else
X(i,4) = 1-(X(i,4)-b)/M;
end
end
disp("The positivized matrix is:");
disp(X);
% Then apply entropy weighting to the positivized matrix
tempX = X; % working copy, so that X itself is not modified
% test: tempX = [1,2,3;-1,0,-6;5,-3,2];
% Standardize: remove negative entries and rescale each column into [0,1]
% (named minX/maxX so the built-ins min/max are not shadowed)
minX = repmat(min(tempX),size(tempX,1),1);
maxX = repmat(max(tempX),size(tempX,1),1);
tempX = (tempX-minX)./(maxX-minX);
% Probability matrix: the probability of each value being taken
sumX = repmat(sum(tempX),size(tempX,1),1);
pX = tempX./sumX;
% Information entropy matrix: the larger the entropy, the less information gained
temp = pX.*mylog(pX); % mylog is a custom helper that treats 0*ln(0) as 0
n = size(tempX,1);    % number of samples
sum1 = sum(temp);
eX = sum1.*(-1/log(n));
% Information utility values
dX = 1-eX;
% Entropy weight of each indicator
wX = dX./(sum(dX));
% Print the result
disp("The entropy weight of each indicator is:");
disp(wX);
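The same entropy-weight step can be rendered in Python for comparison (a sketch mirroring the MATLAB logic above; the `np.where` guard plays the role of `mylog` by taking 0*log(0) = 0):

```python
import numpy as np

def entropy_weights(X):
    # X: positivized matrix, rows = samples, columns = indicators.
    X = np.asarray(X, dtype=float)
    # Min-max rescale each column into [0, 1], as in the MATLAB script
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    P = X / X.sum(axis=0)                       # column-wise "probabilities"
    n = X.shape[0]
    # 0 * log(0) is taken as 0 (the role of mylog in the MATLAB code)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)          # information entropy per column
    d = 1 - e                                   # information utility values
    return d / d.sum()                          # normalized entropy weights
```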
Entropy weight method as a standalone function:
function [W] = Entropy_Method(Z)
% Compute the entropy weights for a sample with n rows and m indicators
% Input
%   Z : n*m matrix (already positivized and standardized, with no negative elements)
% Output
%   W : entropy weights, a 1*m row vector
%% Compute the entropy weights
[n,m] = size(Z);
D = zeros(1,m); % row vector holding the information utility values
for i = 1:m
    x = Z(:,i); % take the i-th indicator column
    p = x / sum(x);
    % p may contain zeros, and Matlab returns NaN for p.*ln(p) there,
    % so we define our own helper function mylog below
    e = -sum(p .* mylog(p)) / log(n); % information entropy
    D(i) = 1 - e; % information utility value
end
W = D ./ sum(D); % normalize the utility values to obtain the weights
end

function lnp = mylog(p)
% Elementwise natural log with the convention 0*ln(0) = 0:
% return 0 where p == 0 so that p.*mylog(p) is well defined
lnp = log(p);
lnp(p == 0) = 0;
end