The OneR Algorithm in Machine Learning

OneR: predict the class using a single feature.

Idea:
The algorithm iterates over every value of every feature. For each feature value, it counts how often that value appears in each class, finds the class in which it appears most often, and records its counts in the other classes.
After tallying every feature value's per-class counts, it computes each feature's error count by summing the errors of its individual values, and selects the feature with the lowest total error as the single classification rule ("One Rule") used for all subsequent predictions.

Steps:
1. Binarize the feature values (a value greater than or equal to that feature's mean becomes 1, otherwise 0).
2. For each feature, count how often each feature value appears in each class; samples with that value but a class other than the majority class count as errors.
3. Sum the error counts over all values of a feature to get its value-to-class mapping and its total error count.
4. Compare the total error counts across features; the feature with the smallest total error is the best predictor.
5. That feature's value-to-class mapping is the final classification rule.
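The counting in steps 2–3 can be sketched on a tiny binarized feature column (the feature values and labels below are made up purely for illustration):

```python
from collections import defaultdict

# Toy binarized feature column and class labels (made-up illustration data)
feature = [0, 0, 1, 1, 1, 0]
labels = ['a', 'a', 'b', 'b', 'a', 'b']

predictor, total_error = {}, 0
for value in set(feature):
    # Count how often this feature value co-occurs with each class
    counts = defaultdict(int)
    for f, label in zip(feature, labels):
        if f == value:
            counts[label] += 1
    # Predict the majority class; every other co-occurrence is an error
    best = max(counts, key=counts.get)
    predictor[value] = best
    total_error += sum(c for cls, c in counts.items() if cls != best)

print(predictor, total_error)  # {0: 'a', 1: 'b'} 2
```

Here value 0 appears with class 'a' twice and 'b' once, so it predicts 'a' with 1 error; value 1 predicts 'b' with 1 error, giving this feature a total error of 2. OneR runs this tally for every feature and keeps the one with the smallest total.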

Example:

import numpy as np
from sklearn.datasets import load_iris
from collections import defaultdict
from operator import itemgetter
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20

dataset = load_iris()

X=dataset.data
y=dataset.target

attribute_means = X.mean(axis=0)  # column-wise mean of each feature

# Binarize: values below the mean become 0, values at or above it become 1
X_d=np.array(X >= attribute_means,dtype='int')


def train_feature_value(X, y_true, feature_index, value):
    # Count how often this feature value co-occurs with each class
    class_counts = defaultdict(int)
    for sample, y in zip(X, y_true):
        if sample[feature_index] == value:
            class_counts[y] += 1

    # The majority class becomes the prediction for this value
    sorted_class_counts = sorted(class_counts.items(), key=itemgetter(1), reverse=True)
    most_frequent_class = sorted_class_counts[0][0]

    # Samples with this value but a different class are misclassified
    incorrect_predictions = [class_count for class_value, class_count in class_counts.items()
                             if class_value != most_frequent_class]
    error = sum(incorrect_predictions)

    return most_frequent_class, error
    
def train_on_feature(X, y_true, feature_index):
    # All distinct values this feature takes (0 and 1 after binarization)
    values = set(X[:, feature_index])
    predictors = {}
    errors = []
    for current_value in values:
        most_frequent_class, error = train_feature_value(X, y_true, feature_index, current_value)
        predictors[current_value] = most_frequent_class
        errors.append(error)

    # The feature's total error is the sum over all of its values
    total_error = sum(errors)
    return predictors, total_error
    
    
Xd_train, Xd_test, y_train, y_test = train_test_split(X_d, y, random_state=None)
# random_state seeds the shuffle; fixing it to a specific value makes every run produce the same split
    
all_predictors = {}
errors = {}
for feature_index in range(Xd_train.shape[1]):
    predictors, total_error = train_on_feature(Xd_train, y_train, feature_index)
    all_predictors[feature_index] = predictors
    errors[feature_index] = total_error

# Pick the feature with the lowest total error as the One Rule
best_feature, best_error = sorted(errors.items(), key=itemgetter(1))[0]
model = {'feature': best_feature, 'predictor': all_predictors[best_feature]}

def predict(X_test, model):
    variable = model['feature']
    predictor = model['predictor']
    # Look up each test sample's value of the chosen feature in the rule table
    y_predicted = np.array([predictor[int(sample[variable])] for sample in X_test])
    return y_predicted

y_predicted = predict(Xd_test, model)

# Compute the accuracy on the held-out test set
accuracy = np.mean(y_predicted == y_test) * 100
print("The test accuracy is {:.1f}%".format(accuracy))



Reprinted from blog.csdn.net/d1240673769/article/details/88578411