Matlab learning log (2019.7.29)

Evaluation and decision-making methods

1. Grey Relational Analysis

Usage: analyzes the degree to which various factors change together over time, and is suited to comprehensive-evaluation problems.
The steps of the analysis are as follows:
1. Preprocess the data (a table can be drawn); separate the indicators into benefit-type and cost-type indicators and standardize each kind.
2. Extract the reference sequence and the comparison sequences.
3. Subtract each comparison sequence from the reference sequence.
4. Find the maximum and minimum of the resulting differences.
5. Compute the grey relational coefficients, using a distinguishing coefficient rho = 0.5 (rho can take any value in [0,1]).
6. Compute the relational grade of each sequence.
7. Sort the relational grades.

clc, clear
a=[];   % raw data matrix (rows = indicators, columns = objects); the values are omitted in the original post and must be filled in before running
for i=[1 5:9]    % standardize the benefit-type indicators
    a(i,:)=(a(i,:)-min(a(i,:)))/(max(a(i,:))-min(a(i,:)));
end
for i=2:4  % standardize the cost-type indicators
   a(i,:)=(max(a(i,:))-a(i,:))/(max(a(i,:))-min(a(i,:))); 
end
[m,n]=size(a);
cankao=max(a')'  % reference sequence (row-wise maximum)
t=repmat(cankao,[1,n])-a;  % difference between the reference sequence and each sequence
mmin=min(min(t));   % minimum difference
mmax=max(max(t));  % maximum difference
rho=0.5; % distinguishing coefficient
xishu=(mmin+rho*mmax)./(t+rho*mmax)  % grey relational coefficients
guanliandu=mean(xishu)   % relational grades, using equal weights
[gsort,ind]=sort(guanliandu,'descend')  % sort the relational grades in descending order

Sorting the evaluation objects by their grey relational grades establishes a ranking: the larger the relational grade, the better the evaluation result.
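
Since the MATLAB listing above operates on a data matrix the post leaves empty, the same seven steps can be illustrated with a self-contained Python sketch. The 2x3 toy matrix here (one benefit-type and one cost-type indicator) is made up purely for illustration:

```python
import numpy as np

# Toy data, purely illustrative: rows = indicators, columns = evaluation objects.
raw = np.array([[0.8, 0.5, 0.9],    # row 0: benefit-type indicator (larger is better)
                [0.3, 0.7, 0.4]])   # row 1: cost-type indicator (smaller is better)

a = np.empty_like(raw)
lo = raw.min(axis=1, keepdims=True)
hi = raw.max(axis=1, keepdims=True)
a[0] = (raw[0] - lo[0]) / (hi[0] - lo[0])   # benefit-type standardization
a[1] = (hi[1] - raw[1]) / (hi[1] - lo[1])   # cost-type standardization

cankao = a.max(axis=1, keepdims=True)       # reference sequence (row-wise max)
t = cankao - a                              # difference from the reference
mmin, mmax = t.min(), t.max()               # minimum and maximum difference

rho = 0.5                                   # distinguishing coefficient in [0, 1]
xishu = (mmin + rho * mmax) / (t + rho * mmax)   # grey relational coefficients

guanliandu = xishu.mean(axis=0)             # equal-weight relational grades
order = np.argsort(-guanliandu)             # descending ranking of the objects
print(guanliandu, order)
```

On this toy input, objects 0 and 2 tie for the best relational grade and object 1 ranks last, mirroring how the MATLAB sort call orders the objects.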

2. Principal Component Analysis

Step 1: standardize the raw data.
Step 2: compute the sample correlation matrix.
Step 3: compute the eigenvalues of the correlation matrix and the corresponding eigenvectors (e.g., by the Jacobi method).
Step 4: select the important principal components and write out the principal component expressions.
The larger a component's contribution rate, the more of the original variables' information it carries. The number of principal components is chosen mainly from the cumulative contribution rate, which is generally required to exceed 85%, so that the composite variables retain the vast majority of the information in the original variables.
Step 5: compute the principal component scores.
Step 6: use the principal component scores for further statistical analysis. Common applications include principal component regression, variable subset selection, and comprehensive evaluation. The program follows:

%% Principal component analysis
clc;clear;
s2=xlsread('数模暑期培训第一次作业--附件.xlsx','附录2','B2:J14'); % load the data from Appendix 2
x2=zscore(s2);  % standardize the data
r2=corrcoef(x2); % correlation matrix
[v2,d2]=eig(r2); % eigenvalues and eigenvectors (eig returns eigenvalues in ascending order)
w2=sum(d2)/sum(sum(d2)); % contribution rate of each principal component
f=repmat(sign(sum(v2)),size(v2,1),1);
v3=v2.*f;  % sign-correct the eigenvectors
F=[x2-ones(13,1)*mean(x2)]*v3(:,[7 8 9])*w2(:,[7 8 9]).'; % scores of the three largest components, weighted by contribution rate
[F1,I1]=sort(F,'descend'); % rank from largest to smallest
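
The same six steps can be checked against a NumPy sketch. The input here is random stand-in data with the 13x9 shape of the spreadsheet range, since the original file is not available:

```python
import numpy as np

rng = np.random.default_rng(0)
s2 = rng.normal(size=(13, 9))            # stand-in for the 13x9 data from Appendix 2

# Step 1: standardize (zscore with sample standard deviation).
x2 = (s2 - s2.mean(axis=0)) / s2.std(axis=0, ddof=1)

# Step 2: correlation matrix.
r2 = np.corrcoef(x2, rowvar=False)

# Step 3: eigenvalues/eigenvectors; eigh returns them in ascending order,
# matching MATLAB's eig for symmetric matrices.
d2, v2 = np.linalg.eigh(r2)

# Contribution rate of each component and the cumulative rate in descending order.
w2 = d2 / d2.sum()
top = np.argsort(-d2)
cum = np.cumsum(w2[top])

# Step 4: sign-correct each eigenvector so its column sum is nonnegative.
f = np.sign(v2.sum(axis=0))
f[f == 0] = 1
v3 = v2 * f

# Step 5: composite scores of the three largest components, weighted by contribution rate.
k = top[:3]
F = (x2 - x2.mean(axis=0)) @ v3[:, k] @ w2[k]

# Step 6: rank the objects by composite score, largest first.
rank = np.argsort(-F)
```

The cumulative rate `cum` is what the 85% rule from Step 4 is applied to; with real data you would keep as many components as needed to push it past 0.85.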

Precautions

1. Principal component analysis is widely used in data mining, image processing, and similar fields.
2. It reduces the dimension of the data, but the data should be roughly elliptically distributed for the analysis to be meaningful.
3. In linear algebra, the eigenvalues are unique but the eigenvectors are not: multiplying an eigenvector by -1 describes the same structure in the data, so no information is lost. For consistency, however, the eigenvector matrix is usually sign-corrected so that the result is a consistent orthogonal matrix. For comparison, the source of MATLAB's built-in implementation can be opened with edit pcacov.
4. Singular value decomposition (SVD) is an orthogonal matrix decomposition. It is the most reliable decomposition method, but it takes nearly ten times as long as QR decomposition (which factors a matrix into an orthonormal matrix and a triangular matrix). [U,S,V] = svd(A) returns two orthogonal matrices U and V and a diagonal matrix S. As with QR decomposition, A does not need to be square.
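
Point 4 can be verified directly. In Python the analogous calls are numpy.linalg.svd and numpy.linalg.qr; a small sketch with an arbitrary non-square matrix:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])           # non-square: both SVD and QR still apply

# Thin SVD: U and Vt have orthonormal columns/rows, s holds the singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, A)

# QR: orthonormal Q times upper-triangular R.
Q, R = np.linalg.qr(A)
assert np.allclose(Q @ R, A)
```

Both factorizations reconstruct A exactly (up to floating-point error), which is why SVD's extra cost buys numerical reliability rather than a different result.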

Today's practice exercises

1. Build the following table array:

table(N, riqi, xianxing, Ndanhao, Nshuanghao)
Column 1: the row number.
Column 2: every date in 2019, in datetime format.
Column 3: if the date is an odd-numbered day, write the text "单号通行" (odd plates allowed); if it is an even-numbered day, write "双号通行" (even plates allowed); if it is a weekend, write "单双通行" (both allowed).
Column 4: the total number of days, up to the current date, on which odd-plate cars could drive.
Column 5: the total number of days, up to the current date, on which even-plate cars could drive.

% Solution 1: loop over the days
clc;clear;
N=[1:1:365]'; % column 1: row numbers
riqi=[datetime('2019-01-01'):datetime('2019-12-31')]'; % column 2: every date in 2019
x=0;  % running count of days on which odd plates may drive
y=0;  % running count of days on which even plates may drive
for n=1:datenum('2019-12-31')-datenum('2019-01-01')+1  % loop over the number of days in 2019
    if isweekend(riqi(n))==1  % is this day a weekend?
         xianxing(n,1:4)='单双通行'; % weekend: column 3 reads "单双通行" (both allowed)
         x=x+1;
         y=y+1;
    else % not a weekend: decide by odd or even day number
        if rem(riqi(n).Day,2)==0 
            xianxing(n,1:4)= '双号通行'; % even-numbered, non-weekend day
            y=y+1;
        else
            xianxing(n,1:4)='单号通行'; % odd-numbered, non-weekend day
            x=x+1;
        end
    end
    Ndanhao(n,1)=x; % column 4: odd-plate days up to the current date
    Nshuanghao(n,1)=y; % column 5: even-plate days up to the current date
end
T=table( N, riqi, xianxing, Ndanhao, Nshuanghao ) % build the table array
% Solution 2: vectorized
clear,clc
NDays = yeardays(2019);
N=[1:NDays]';
riqi=datetime(2019,1,1)+N-1;
xianxing=repmat('单号通行',NDays,1);
a=logical(mod(riqi.Day,2)); % is the day number odd?
xianxing(~a,:)=repmat('双号通行',sum(~a),1);
tf = isweekend(riqi);
xianxing(tf,:)=repmat('单双通行',sum(tf),1);
for i = 1:365   
    idxdanhao(i,1)=all(xianxing(i,:)=='单号通行',2)|all(xianxing(i,:)=='单双通行',2);
    idxshuanghao(i,1)=all(xianxing(i,:)=='双号通行',2)|all(xianxing(i,:)=='单双通行',2);
end
Ndanhao=cumsum(idxdanhao); % cumulative odd-plate days
Nshuanghao=cumsum(idxshuanghao); % cumulative even-plate days
XianXingData=table( N,riqi, xianxing, Ndanhao, Nshuanghao) % build the table array
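
The same table can be built with pandas; this is an illustrative translation of the vectorized solution, reusing the original Chinese column values:

```python
import pandas as pd

riqi = pd.date_range("2019-01-01", "2019-12-31", freq="D")  # every date in 2019
N = list(range(1, len(riqi) + 1))                           # row numbers

is_weekend = riqi.weekday >= 5          # Monday=0, so Saturday=5 and Sunday=6
is_odd = riqi.day % 2 == 1              # odd-numbered day of the month

# Column 3: default to even, overwrite odd days, then overwrite weekends.
xianxing = pd.Series("双号通行", index=riqi)
xianxing[is_odd] = "单号通行"
xianxing[is_weekend] = "单双通行"

# Columns 4-5: cumulative counts of days odd/even plates could drive
# (weekends count toward both).
Ndanhao = (xianxing != "双号通行").cumsum()
Nshuanghao = (xianxing != "单号通行").cumsum()

T = pd.DataFrame({"N": N, "riqi": riqi, "xianxing": xianxing.values,
                  "Ndanhao": Ndanhao.values, "Nshuanghao": Nshuanghao.values})
```

Replacing the per-row loop with boolean masks and cumsum mirrors the logical-indexing trick of the MATLAB version.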

2. Grey prediction model (GM(1,1))


clc,clear;
x0=[2.874 3.278 3.337 3.390 3.369]';
n=length(x0);
la=x0(1:n-1)./x0(2:n); % ratios of consecutive values
range=minmax(la'); % range of the ratios (minmax works row-wise, so transpose the column vector)
ss=[exp(-2/(n+1)) exp(2/(n+2))]; % admissible band, used to decide whether grey prediction applies
x1=cumsum(x0); % accumulated generating operation (AGO)
B=[-0.5*(x1(1:n-1)+x1(2:n)) ones(n-1,1)];
Y=x0(2:n);
u=B\Y; % least-squares fit: u(1) is a, u(2) is b
syms x(t)
x=dsolve(diff(x)+u(1)*x==u(2),x(0)==x0(1)); % symbolic solution of the whitened differential equation
xt=vpa(x,6); % display the solution in numeric form
y1=subs(x,t,[0:n-1]); % fitted values at the known time points
y1=double(y1); % convert the symbolic values to double precision for differencing
ye=[x0(1),diff(y1)]; % inverse AGO: restore the original series
en=x0'-ye; % residuals
da=abs(en./x0'); % relative errors
r=1-(1-0.5*u(1))/(1+0.5*u(1))*la'; % ratio deviations
y=subs(x,t,[0:n]);
y=double(y);
ye2=[x0(1),diff(y)] % predict the (n+1)-th value
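
The same GM(1,1) computation can be written without a symbolic solver, using the closed-form solution x1(t) = (x0(1) - b/a)·e^(-a·t) + b/a of the whitened equation. Note the ratio band below uses exp(±2/(n+1)), one common textbook form, rather than the mixed exponents in the MATLAB listing:

```python
import numpy as np

x0 = np.array([2.874, 3.278, 3.337, 3.390, 3.369])
n = len(x0)

# Ratio test: the series suits GM(1,1) if every ratio lies inside the band.
la = x0[:-1] / x0[1:]
band = (np.exp(-2 / (n + 1)), np.exp(2 / (n + 1)))

# Accumulated generating operation (AGO).
x1 = np.cumsum(x0)

# Least-squares fit of the grey differential equation; parameters a and b.
B = np.column_stack([-0.5 * (x1[:-1] + x1[1:]), np.ones(n - 1)])
Y = x0[1:]
a, b = np.linalg.lstsq(B, Y, rcond=None)[0]

# Closed-form time response, then inverse AGO by differencing.
t = np.arange(n + 1)
y = (x0[0] - b / a) * np.exp(-a * t) + b / a
ye = np.concatenate([[x0[0]], np.diff(y)])

print(ye[-1])   # prediction for step n+1
```

The first n entries of ye reproduce the fitted values of the original series, and the last entry is the one-step-ahead forecast that the MATLAB variable ye2 ends with.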

Origin blog.csdn.net/qq_45244489/article/details/97678194