Kernel Extreme Learning Machine (KELM) Classification Algorithm Based on Lion Pride Algorithm Optimization (with Code)


Abstract: In this paper, the lion pride algorithm is used to optimize the Kernel Extreme Learning Machine (KELM), and the optimized model is applied to a classification problem.

1. Theoretical basis of KELM

The Kernel Extreme Learning Machine (KELM) is an improved algorithm that combines the Extreme Learning Machine (ELM) with kernel functions. KELM can improve the predictive performance of the model while retaining the advantages of ELM.

ELM is a single-hidden-layer feedforward neural network, and its learning objective function $F(x)$ can be expressed in matrix form as:

$$F(x) = h(x)\beta = H\beta = L \tag{9}$$

where $x$ is the input vector, $h(x)$ (in matrix form $H$) is the hidden-layer node output, $\beta$ is the output weight, and $L$ is the desired output.

Network training is thus converted into solving a linear system, and $\beta$ is determined by $\beta = H^{*} L$, where $H^{*}$ is the generalized (Moore–Penrose) inverse of $H$. To enhance the stability of the neural network, a regularization coefficient $C$ and the identity matrix $I$ are introduced, and the least-squares solution of the output weights becomes:

$$\beta = H^{T}\left(HH^{T} + \frac{I}{C}\right)^{-1} L \tag{10}$$
Introducing the kernel function into ELM, the kernel matrix is:

$$\Omega_{ELM} = HH^{T}, \quad \Omega_{ELM}(i,j) = h(x_i) \cdot h(x_j) = K(x_i, x_j) \tag{11}$$

where $x_i$ and $x_j$ are input vectors. Formula (9) can then be expressed as:

$$F(x) = \left[K(x, x_1); \ldots; K(x, x_n)\right]\left(\frac{I}{C} + \Omega_{ELM}\right)^{-1} L \tag{12}$$

where $(x_1, x_2, \ldots, x_n)$ are the given training samples, $n$ is the sample size, and $K(\cdot,\cdot)$ is the kernel function.
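
To make formulas (10)–(12) concrete, below is a minimal MATLAB sketch of KELM training and prediction with an RBF kernel. The function name `kelm_rbf`, the argument layout, and the kernel form $K(x_i, x_j) = \exp(-\lVert x_i - x_j\rVert^2 / S)$ are illustrative assumptions, not the code released with this article.

```matlab
% Minimal KELM sketch with an RBF kernel (illustrative; not the original code).
% Xtrain: n x d training inputs, T: n x m one-hot target matrix,
% Xnew: p x d samples to predict, C: regularization coefficient, S: RBF kernel parameter.
function Scores = kelm_rbf(Xtrain, T, Xnew, C, S)
    % Squared Euclidean distances between rows of A and rows of B
    % (uses implicit expansion, MATLAB R2016b or later)
    sqd = @(A, B) max(sum(A.^2, 2) + sum(B.^2, 2)' - 2*A*B', 0);
    n = size(Xtrain, 1);
    % Kernel matrix Omega_ELM of formula (11): K(xi, xj) = exp(-||xi - xj||^2 / S)
    Omega = exp(-sqd(Xtrain, Xtrain) ./ S);
    % Output weights from formulas (10)/(12): beta = (I/C + Omega)^(-1) * L
    Beta = (eye(n) ./ C + Omega) \ T;
    % Kernel between the new samples and the training samples, then formula (12)
    OmegaNew = exp(-sqd(Xnew, Xtrain) ./ S);
    Scores = OmegaNew * Beta;
end
```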

2. Classification problem

In this paper, we classify breast tumor data. The training and test sets are generated by randomly partitioning the data: the training set contains 500 samples and the test set contains 69 samples.
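
A minimal sketch of this random split is shown below, assuming the 569 samples are stored in a matrix `data` with the class label in the last column; the file name `breast_cancer.mat` and the data layout are assumptions, not taken from the original code.

```matlab
% Randomly split the 569 breast-tumor samples into 500 training and 69 test samples.
% 'data' is assumed to be a 569 x (d+1) matrix with the class label in the last column.
load('breast_cancer.mat', 'data');       % assumed file name
idx    = randperm(size(data, 1));        % shuffle the sample indices
Xtrain = data(idx(1:500),   1:end-1);
Ytrain = data(idx(1:500),   end);
Xtest  = data(idx(501:end), 1:end-1);
Ytest  = data(idx(501:end), end);
```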

3. KELM optimized based on Lion Pride Algorithm

For the specific principle of the lion pride algorithm, refer to the blog: https://blog.csdn.net/u011835903/article/details/113418075

As described above, this paper uses the lion pride algorithm to optimize the regularization coefficient C and the kernel function parameter S. The fitness function is designed as the sum of the error rates on the training set and the test set:

$$fitness = \operatorname{argmin}(TrainErrorRate + TestErrorRate)$$
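
A minimal sketch of such a fitness function is given below, where each candidate lion encodes the pair $(C, S)$. The helper `kelm_rbf` is the illustrative function sketched in Section 1, and the assumption that the class labels are integers $1..m$ is not from the original code.

```matlab
% Fitness of one candidate lion x = [C, S]: the sum of the training-set and
% test-set error rates, which the lion swarm optimizer minimizes.
function f = kelm_fitness(x, Xtrain, Ytrain, Xtest, Ytest)
    C = x(1);  S = x(2);
    m = max([Ytrain; Ytest]);             % number of classes (labels assumed 1..m)
    Ttrain = double(Ytrain == 1:m);       % one-hot targets (implicit expansion)
    % Predict on the training and test sets with the candidate (C, S)
    ScoreTrain = kelm_rbf(Xtrain, Ttrain, Xtrain, C, S);
    ScoreTest  = kelm_rbf(Xtrain, Ttrain, Xtest,  C, S);
    [~, PredTrain] = max(ScoreTrain, [], 2);
    [~, PredTest]  = max(ScoreTest,  [], 2);
    f = mean(PredTrain ~= Ytrain) + mean(PredTest ~= Ytest);
end
```

The lion pride optimizer would evaluate this function for every lion in each iteration and keep the candidate $(C, S)$ with the smallest value.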

4. Test results


LSO-KELM correct rate on the training set: 1
LSO-KELM correct rate on the test set: 0.95652
Total number of cases: 569 (benign: 357, malignant: 212)
Training set: 500 cases (benign: 300, malignant: 200)
Test set: 69 cases (benign: 57, malignant: 12)
Benign breast tumors correctly diagnosed: 55, misdiagnosed: 2, diagnosis rate p1 = 96.4912%
Malignant breast tumors correctly diagnosed: 11, misdiagnosed: 1, diagnosis rate p2 = 91.6667%

KELM correct rate on the training set: 1
KELM correct rate on the test set: 0.89855
Total number of cases: 569 (benign: 357, malignant: 212)
Training set: 500 cases (benign: 300, malignant: 200)
Test set: 69 cases (benign: 57, malignant: 12)
Benign breast tumors correctly diagnosed: 55, misdiagnosed: 2, diagnosis rate p1 = 96.4912%
Malignant breast tumors correctly diagnosed: 7, misdiagnosed: 5, diagnosis rate p2 = 58.3333%

From these results, it can be seen that LSO-KELM is significantly better than the original KELM algorithm: the test-set accuracy rises from 0.89855 to 0.95652, and the diagnosis rate for malignant tumors improves from 58.3333% to 91.6667%.

5. Matlab code


Origin: blog.csdn.net/u011835903/article/details/130630284