Naive Bayes model training (wdbc dataset)

The Breast Cancer Wisconsin (Original) dataset is used; its characteristics are shown below:
(Figure: dataset attribute description)
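
For reference, a minimal inspection sketch. It assumes wdbc_discrete.data parses into eleven columns, which is the layout the training code below indexes: a sample id in column 0, nine discretized attributes in columns 1-9, and the class label in column 10 (2 = benign, 4 = malignant):

import pandas as pd

# Peek at the data; the column layout described above is an assumption
# based on how the training code indexes the frame.
df = pd.read_csv(r'wdbc_discrete.data')
print(df.shape)                        # (number of samples, number of columns)
print(df.iloc[:, 10].value_counts())   # class distribution: label 2 vs. label 4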

The training code is below. I sacrificed some performance to keep the logic easy to follow, and although it was tested several times, the test set is not a proper random hold-out: it is sampled from the same data the model is trained on (a disjoint-split sketch is given after the code). It was a homework assignment, so don't take it too seriously.

import pandas as pd
import random
import time


# Split into training and test sets.
# Note: the "training set" is the full data and the test set is a random
# sample of 10%-30% of it, so the two overlap (see the disjoint-split sketch after the code).
def randSplit(data):
    n = data.shape[0]
    m = int(n * random.uniform(0.1, 0.3))  # test set size: 10%-30% of the data
    return data, data.sample(m)


# Build the naive Bayes classifier and return its accuracy on the test set
def gnb_classify(train, test):
    # Class priors: label 2 = benign ("true"), label 4 = malignant ("false").
    truePro = 0
    for i in range(train.shape[0]):
        if train.values[i, 10] == 2:
            truePro += 1
    truePro /= train.shape[0]  # prior probability of the benign class (label 2)
    falsePro = 1 - truePro     # prior probability of the malignant class (label 4)

    # Count attribute-value frequencies, one dict per attribute.
    # Keys are the raw attribute values for benign samples (label 2) and the
    # negated values for malignant samples (label 4), so a single dict holds
    # the counts of both classes.
    numContainer = [{} for _ in range(9)]
    for i in range(train.shape[0]):
        if train.values[i, 10] == 2:
            for j in range(9):
                if train.values[i, j + 1] in numContainer[j]:
                    numContainer[j][train.values[i, j + 1]] += 1
                else:
                    numContainer[j][train.values[i, j + 1]] = 1
        else:
            for j in range(9):
                if -1 * train.values[i, j + 1] in numContainer[j]:
                    numContainer[j][-1 * train.values[i, j + 1]] += 1
                else:
                    numContainer[j][-1 * train.values[i, j + 1]] = 1

    # Normalize the counts to frequencies (each dict sums to 1 over both classes).
    for i in numContainer:
        total = sum(i.values())
        for k, v in i.items():
            i[k] = v / total

    # Classify the test set.
    res = 0  # number of correct predictions
    for i in range(test.shape[0]):
        trueP = truePro
        falseP = falsePro
        # Multiply in the per-attribute frequencies; attribute values never
        # seen for a class during training are simply skipped (no smoothing).
        for j in range(9):
            if test.values[i, j + 1] in numContainer[j]:
                trueP *= numContainer[j][test.values[i, j + 1]]
            if -1 * test.values[i, j + 1] in numContainer[j]:
                falseP *= numContainer[j][-1 * test.values[i, j + 1]]

        if (trueP > falseP and test.values[i, 10] == 2) or (trueP < falseP and test.values[i, 10] == 4):
            res += 1

    return res / test.shape[0]


# Evaluate the classifier: 10 runs, each with a freshly sampled test set
def test_classify():
    total = 0
    for i in range(10):
        singleStart = time.time()
        randomTrain, randomTest = randSplit(df)
        p = gnb_classify(randomTrain, randomTest)
        total += p
        singleEnd = time.time()
        print("Run {}: accuracy {}, training time: {}s".format(i + 1, p, singleEnd - singleStart))
    return total / 10


df = pd.read_csv(r'wdbc_discrete.data')
start = time.time()
probability = test_classify()
end = time.time()

print("\n测试集平均准确率为:{}".format(probability))
print("平均训练用时:{}s".format((end - start) / 10))

(Figure: output of the ten evaluation runs)
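
One straightforward improvement, since randSplit above draws the test rows from the full data (so they also appear in the training set): sample the test rows first and drop them from the training frame. A minimal sketch of such a disjoint split; disjointSplit is my own name and is not part of the original homework code:

import random

# Disjoint alternative to randSplit: the sampled test rows are removed from
# the training DataFrame, so no sample is used for both training and testing.
def disjointSplit(data):
    n = data.shape[0]
    m = int(n * random.uniform(0.1, 0.3))  # test set: 10%-30% of the data
    test = data.sample(m)
    train = data.drop(test.index)          # remaining rows become the training set
    return train, test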

If you are interested in learning more about it, please visit my personal website: Pupil Space

Source: blog.csdn.net/tongkongyu/article/details/128242935