"Machine Learning and Data Mining" Lab 7

Experiment title: Implement a naive Bayes classifier with Laplace correction

Experiment objective: Master the principle and application of the naive Bayes classifier

Experiment environment (hardware and software): Anaconda / Jupyter Notebook / PyCharm

Experiment content:

Implement a naive Bayes classifier with Laplace correction and, based on the given training data, classify the test sample. (The Laplace-correction formulas are recalled after the requirements below for reference.)

Requirements:

1. Part of the code has been provided; complete it. The places where code needs to be filled in were marked in red in the handout, including:

1# fill in the code that computes the conditional probabilities (part 1)

2# fill in the code that computes the conditional probabilities (part 2)

3# fill in the prediction code;

2. Submit the completed code together with the experimental results. (You may also rewrite this part of the code yourself and submit that.)
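
For reference, the Laplace-corrected estimates that the completed code computes are the standard ones. With m training samples, K classes, and Sj distinct values of feature j,

P(y = c) = (Nc + 1) / (m + K)
P(xj = v | y = c) = (Nc,j,v + 1) / (Nc + Sj)

where Nc is the number of training samples of class c and Nc,j,v is the number of class-c samples whose j-th feature equals v.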

import numpy as np

def loaddata():
    # mixing ints and strings in np.array yields a string array, so both
    # features are handled as strings when the probability keys are built
    X = np.array([[1, 'S'], [1, 'M'], [1, 'M'], [1, 'S'],
                  [1, 'S'], [2, 'S'], [2, 'M'], [2, 'M'],
                  [2, 'L'], [2, 'L'], [3, 'L'], [3, 'M'],
                  [3, 'M'], [3, 'L'], [3, 'L']])
    y = np.array([-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1])
    return X, y

# training: estimate the prior and conditional probabilities
def Train(trainset, train_labels):
    # number of training samples
    m = trainset.shape[0]
    # number of features
    n = trainset.shape[1]
    # prior probabilities: key is the class label, value is its probability
    prior_probability = {}
    # conditional probabilities: key is "class,feature index,feature value", value is the count (later the probability)
    conditional_probability = {}

    # possible class labels
    labels = set(train_labels)
    # count the samples of each class first; the Laplace-corrected division is done at the end
    for label in labels:
        prior_probability[label] = len(train_labels[train_labels == label])
    print('class counts =', prior_probability)

    # initialise every (class, feature index, feature value) combination to 0 so that
    # combinations absent from the training data still receive the +1 Laplace correction
    for label in labels:
        for j in range(n):
            for value in set(trainset[:, j]):
                conditional_probability[str(label) + ',' + str(j) + ',' + str(value)] = 0

    # count the occurrences of each combination in the training data
    for i in range(m):
        for j in range(n):
            # key layout: class,feature index,feature value
            key = str(train_labels[i]) + ',' + str(j) + ',' + str(trainset[i][j])
            conditional_probability[key] += 1
    print('conditional_probability counts = ', conditional_probability)

    # a dictionary cannot be modified while it is being iterated over,
    # so the probabilities are collected in a new dictionary
    conditional_probability_final = {}
    for key in conditional_probability:
        # class label and feature index encoded in the key
        label = key.split(',')[0]
        j = key.split(',')[1]
        # Ni: number of distinct values taken by feature j
        Ni = len(set(trainset[:, int(j)]))
        # Laplace-corrected conditional probability: (count + 1) / (class count + Ni)
        conditional_probability_final[key] = (conditional_probability[key] + 1) / (prior_probability[int(label)] + Ni)

    # Laplace-corrected prior: (class count + 1) / (m + number of classes)
    for label in labels:
        prior_probability[label] = (prior_probability[label] + 1) / (m + len(labels))

    return prior_probability, conditional_probability_final, labels
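
# Sanity check (not in the original handout): on the training data above, 9 samples have
# y = 1 and 6 have y = -1, so Train() returns Laplace-corrected priors of
# (9 + 1) / (15 + 2) ≈ 0.588 for y = 1 and (6 + 1) / (15 + 2) ≈ 0.412 for y = -1,
# and, for example, P(X2 = 'S' | y = 1) = (1 + 1) / (9 + 3) = 1/6.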


# prediction: uses the global prior_probability, conditional_probability and
# train_labels_set assigned at the bottom of the script
def predict(data):
    result = {}
    # loop over the possible class labels
    for label in train_labels_set:
        temp = 1.0
        for j in range(len(data)):
            key = str(label) + ',' + str(j) + ',' + str(data[j])
            # multiply the conditional probabilities of the features together
            temp = temp * conditional_probability[key]
        # then multiply by the prior probability of the class
        result[label] = temp * prior_probability[label]
    print('result =', result)
    # sort by the (unnormalised) posterior and return the label with the largest value
    return sorted(result.items(), key=lambda x: x[1], reverse=True)[0][0]


X, y = loaddata()
prior_probability, conditional_probability, train_labels_set = Train(X, y)
print('conditional_probability = ', conditional_probability)
r_label = predict([2, 'S'])
print('r_label =', r_label)
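
As a quick sanity check, the posteriors for the test point (2, 'S') can also be worked out by hand from the training data: 6 samples have y = -1, 9 have y = 1, m = 15, there are 2 classes, and each feature takes 3 distinct values. The snippet below repeats this hand computation.

# hand computation of the unnormalised Laplace-corrected posteriors for (2, 'S')
p_neg = (6 + 1) / (15 + 2) * (2 + 1) / (6 + 3) * (3 + 1) / (6 + 3)   # y = -1
p_pos = (9 + 1) / (15 + 2) * (3 + 1) / (9 + 3) * (1 + 1) / (9 + 3)   # y = +1
print(p_neg, p_pos)   # about 0.0610 vs 0.0327, so predict([2, 'S']) should return -1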

Experiment screenshot

Reposted from blog.csdn.net/m0_64351669/article/details/128199883