Classification of Hyperspectral Datasets with Support Vector Machines under Several Data-Preprocessing Schemes

I will not introduce support vector machines or the hyperspectral image datasets here; instead, this post uses experimental code and results to analyse the data-preprocessing step and the choice of SVM kernel function.

Three preprocessing schemes are compared: PCA, LDA, and PCA+LDA. Likewise, three SVM kernel functions are compared: the linear kernel, the polynomial kernel, and the Gaussian (RBF) kernel.
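As a quick sketch (parameter values here are illustrative placeholders, not the ones tuned in the experiments below), the three kernels and three preprocessing schemes map directly onto scikit-learn objects:

```python
# Illustrative sketch: the three kernels and three reduction schemes in
# scikit-learn. C/gamma/degree values are placeholders, not tuned settings.
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

kernels = {
    'linear': SVC(kernel='linear', C=1.0),
    'poly':   SVC(kernel='poly', degree=3, gamma='scale', C=1.0),
    'rbf':    SVC(kernel='rbf', gamma='scale', C=1.0),  # Gaussian kernel
}

def make_reducer(name, n_pca=18, n_lda=10):
    # Returns the reduction steps for PCA alone, LDA alone, or PCA then LDA.
    if name == 'pca':
        return [PCA(n_components=n_pca, whiten=True)]
    if name == 'lda':
        return [LinearDiscriminantAnalysis(n_components=n_lda)]
    return [PCA(n_components=n_pca, whiten=True),
            LinearDiscriminantAnalysis(n_components=n_lda)]
```

Note that LDA is supervised (it needs the labels to fit) and its output dimensionality is capped at the number of classes minus one, which is why it is chained after PCA rather than applied to all 200+ raw bands directly.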

First, the .mat data is converted into a CSV file for the subsequent Python processing.

import numpy as np
import scipy.io as sio
from sklearn import preprocessing

input_image=sio.loadmat(r'F:\Python+AI_ML_DL全套\高光谱数据集\数据集\Salinas.mat')['salinas']
output_image=sio.loadmat(r'F:\Python+AI_ML_DL全套\高光谱数据集\数据集\Salinas_gt.mat')['salinas_gt']

# Drop class 0 (the unlabeled background) and keep every pixel that belongs to a real class
need_label = np.zeros([output_image.shape[0],output_image.shape[1]])
for i in range(output_image.shape[0]):
    for j in range(output_image.shape[1]):
        if output_image[i][j] != 0:
        #if output_image[i][j] in [1,2,3,4,5,6,7,8,9]:
            need_label[i][j] = output_image[i][j]


new_datawithlabel_list = []
for i in range(output_image.shape[0]):
    for j in range(output_image.shape[1]):
        if need_label[i][j] != 0:
            c2l = list(input_image[i][j])
            c2l.append(need_label[i][j])
            new_datawithlabel_list.append(c2l)

new_datawithlabel_array = np.array(new_datawithlabel_list)  # shape (5211, 177): 176 spectral bands per pixel, plus the class label in the last column

data_D = preprocessing.StandardScaler().fit_transform(new_datawithlabel_array[:,:-1])
#data_D = preprocessing.MinMaxScaler().fit_transform(new_datawithlabel_array[:,:-1])
data_L = new_datawithlabel_array[:,-1]

# Save the result to disk for the later steps
import pandas as pd
new = np.column_stack((data_D,data_L))
new_ = pd.DataFrame(new)
new_.to_csv(r'C:\Users\KingWH\Desktop\exp_data\salinas.csv', header=False, index=False)

Once the CSV file is generated, all later steps can operate on it directly.


The following code trains the SVM classifier, here specifically PCA(18)-LDA(10)-GS-SVM (grid search). Readers can change the dimensionality-reduction method and the number of dimensions, and can also tune the grid-search parameters until the best combination is found.

import joblib
import numpy as np
import pandas as pd
from time import time
from sklearn import metrics
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.decomposition import PCA  # RandomizedPCA was removed from sklearn; use PCA(svd_solver='randomized')
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


# Load the dataset and split it into training and test sets

data = pd.read_csv(r'C:\Users\KingWH\Desktop\exp_data\paviaU.csv', header=None)
data = data.values  # DataFrame.as_matrix() was removed from pandas
data_D = data[:, :-1]
data_L = data[:, -1]
data_train, data_test, label_train, label_test = train_test_split(data_D, data_L, test_size=0.5)


# Model training and fitting; kernel options: linear, rbf, poly
# clf_sig = SVC(kernel='sigmoid', gamma=0.125, C=16)
t0 = time()
pca = PCA(n_components=18, whiten=True, svd_solver='randomized').fit(data_train)  # fit on the training split only, to avoid test-set leakage
X_train_pca = pca.transform(data_train)
X_test_pca = pca.transform(data_test)

lda = LinearDiscriminantAnalysis(n_components=10).fit(X_train_pca, label_train)
X_train_lda = lda.transform(X_train_pca)
X_test_lda = lda.transform(X_test_pca)

# param_grid = {'C': [1e3, 5e3, 1e4, 5e4, 1e5],
#               'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1]}
# clf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid)
# param_grid = {'C': [10, 20, 100, 500, 1e3],
#               'gamma': [0.001, 0.005, 0.01, 0.05, 0.1, 0.125]}
# clf = GridSearchCV(SVC(kernel='linear', class_weight='balanced'), param_grid)

# Best parameters found per dataset (reduction dims, then gamma/C per kernel):
#   paviaU  18-10: rbf 0.1/100,  linear 0.125/20, poly 0.1/100
#   indian  80-40: rbf 0.01/100, linear 0.1/20,   poly 0.1/100
#   salinas 70-14: rbf 0.1/20,   linear 0.125/10, poly 0.1/100
clf = SVC(kernel='rbf', gamma=0.1, C=100)
clf.fit(X_train_lda, label_train)
# clf.fit(data_train, label_train)  # alternative: no dimensionality reduction
pred = clf.predict(X_test_lda)
accuracy = metrics.accuracy_score(label_test, pred) * 100
print(accuracy)
# print(clf.best_estimator_)  # when using GridSearchCV
print("done in %0.3fs" % (time() - t0))

# Save the trained model so it can be reloaded later
joblib.dump(clf, "paviaU_MODEL.m")
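The commented-out grid search in the script above targets the pre-0.20 `sklearn.grid_search` module, which no longer exists. As a sketch of the same idea against current scikit-learn (demonstrated on synthetic data rather than the hyperspectral features):

```python
# Grid search over C and gamma with modern scikit-learn: GridSearchCV now
# lives in sklearn.model_selection. Synthetic data stands in for the real
# reduced features here.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

param_grid = {'C': [10, 100, 1000],
              'gamma': [0.001, 0.01, 0.1]}
search = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)  # the C/gamma pair with the best CV accuracy
```

In the real pipeline, `X, y` would be `X_train_lda, label_train`, and `search.best_estimator_` would replace the hand-set `SVC(kernel='rbf', gamma=0.1, C=100)`.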

Once the model is built, it can be used to classify the hyperspectral image. The test data must be reduced with the same method, to the same number of dimensions, as the training data. My own classification results are shown below.
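As a sketch of that classification step (function and variable names here are illustrative, not from the original script), reusing the saved model on a whole image cube could look like:

```python
# Hypothetical sketch: classify every labelled pixel of a hyperspectral cube
# with a saved model. `pca` and `lda` must be the SAME fitted objects used at
# training time; refitting them on test data would change the projection.
import numpy as np

def classify_cube(cube, mask, pca, lda, clf):
    """cube: (H, W, bands) image; mask: (H, W) ground truth, 0 = unlabeled."""
    h, w, bands = cube.shape
    pixels = cube[mask != 0].reshape(-1, bands)    # labelled pixels only
    feats = lda.transform(pca.transform(pixels))   # same reduction as training
    pred_map = np.zeros((h, w), dtype=int)
    pred_map[mask != 0] = clf.predict(feats)       # 0 stays = unlabeled
    return pred_map

# e.g. clf = joblib.load("paviaU_MODEL.m"); pred_map = classify_cube(...)
```

The returned `pred_map` can then be displayed as a false-colour classification map alongside the ground truth.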

Only the results for one dataset are shown here; as you can see, PCA-LDA-RSVM performs best. Classification accuracy can be improved further by implementing a mixed (composite) kernel function.
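One way to build such a mixed kernel in scikit-learn is to pass `SVC` a callable that returns the Gram matrix, so a weighted sum of an RBF and a polynomial kernel becomes (the weight and kernel parameters below are illustrative guesses, not tuned values):

```python
# Mixed-kernel sketch: SVC accepts a callable kernel(X, Y) returning the
# Gram matrix, so a convex combination of RBF and polynomial kernels is a
# valid kernel. w, gamma and degree are illustrative, untuned values.
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

def mixed_kernel(X, Y, w=0.7, gamma=0.1, degree=3):
    return (w * rbf_kernel(X, Y, gamma=gamma)
            + (1 - w) * polynomial_kernel(X, Y, degree=degree))

clf_mix = SVC(kernel=mixed_kernel)  # then clf_mix.fit(X_train_lda, label_train)
```

A convex combination of two positive semi-definite kernels is itself positive semi-definite, so this is a legitimate kernel; the weight `w` can be added to the grid search alongside C.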


Reposted from blog.csdn.net/qq_28821995/article/details/81144301