1. Algorithm principle
Starting from some initial point X^(0), move along the negative gradient direction of f(X) at X^(0) and find the minimum point X^(1) of f(X) along that direction; then, starting from X^(1), repeat the process to obtain X^(2). Continuing in this way yields the sequence {X^(k)}.
It can be proved that, starting from any initial point X^(0), the sequence {X^(k)} produced by the steepest descent method converges to the X that minimizes f(X), that is, to the solution of AX = b. The iteration format of the steepest descent method is: given an initial value X^(0), the iterate X^(k+1) is determined by

r^(k) = b - AX^(k),
α_k = (r^(k), r^(k)) / (Ar^(k), r^(k)),
X^(k+1) = X^(k) + α_k r^(k).
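The iteration above can be sketched in Python as a rough stand-in for the MATLAB routine in the next section (the function name and the small test system are illustrative, not from the original):

```python
import numpy as np

def steepest_descent(A, b, tol=1e-6, max_iter=100_000):
    """Steepest descent for AX = b, with A symmetric positive definite."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                          # residual = negative gradient of f at x
    k = 0
    while np.linalg.norm(r) >= tol and k < max_iter:
        alpha = (r @ r) / (r @ (A @ r))    # exact line search along the residual
        x = x + alpha * r
        r = b - A @ x
        k += 1
    return x, k

# Illustrative 2x2 SPD system (an assumption, not the textbook example)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, k = steepest_descent(A, b)
```

Each step moves along the current residual with the step size α_k that exactly minimizes f along that direction, which is what the dot-product quotient computes.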
2. Source program
The MATLAB source of the steepest descent method:
% Steepest descent method
% 'A': coefficient matrix
% 'b': right-hand side
% 'e0': solution accuracy
% 'x': solution of the system
% 'k': number of iterations
% 'tol': error at each iteration
function [x,k,tol]=fastest(A,b,e0)
x0=zeros(length(b),1);
x=x0;
k=0;
r=b-A*x0;                   % initial residual
while norm(r)>=e0
    q=dot(r,r)/dot(A*r,r);  % exact line-search step along r
    x=x0+q*r;
    k=k+1;
    tol(k)=norm(x-x0);      % step length, recorded as the error
    x0=x;
    r=b-A*x0;               % update residual
end
end
3. Case analysis
Again taking the example on page 113 of the textbook, solve the linear system Ax = b by entering the following code (the source code of the build function is in the first article of this series):
clc;
clear;
n=input('Please input n:');            % order of the matrix A
e0=input('Please input the accuracy:');
[A,b]=build(n);                        % build the coefficient matrix and right-hand side of Example 3.2
[x,k,error]=fastest(A,b,e0);
q=log(error);
plot(q,'linewidth',1.5)
xlabel('Iteration number');
ylabel('log(error)');
title('Error curve of the steepest descent iteration');
When n = 100 and the accuracy is 1×10^-6, it takes 19,465 iterations to meet the accuracy requirement.
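The rapid growth of the iteration count with n can be reproduced approximately in Python. Since the build function from the earlier article is not shown here, the SPD tridiagonal matrix tridiag(-1, 2, -1) is used below as an assumed stand-in; its condition number grows with n, which is what drives the slow convergence of steepest descent:

```python
import numpy as np

def steepest_descent(A, b, tol=1e-6):
    """Return the solution and the number of iterations used."""
    x = np.zeros_like(b)
    r = b - A @ x
    k = 0
    while np.linalg.norm(r) >= tol:
        alpha = (r @ r) / (r @ (A @ r))  # optimal step along the residual
        x = x + alpha * r
        r = b - A @ x
        k += 1
    return x, k

counts = []
for n in (10, 20, 40):
    # assumed stand-in for build(n): the 1-D Poisson matrix tridiag(-1, 2, -1)
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x, k = steepest_descent(A, b)
    counts.append(k)
print(counts)  # iteration counts grow quickly as n increases
```

This sketch does not use the textbook's matrix, so the exact counts differ from the 19,465 reported above, but the trend (many thousands of iterations for modest n) is the same.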