Skin Lesion Detection Notes

Project Overview

In this project, you will design an algorithm that can visually diagnose melanoma, the deadliest form of skin cancer. In particular, your algorithm should be able to distinguish this malignant skin tumor from two benign lesions (nevi and seborrheic keratoses).

The data and objective come from the International Skin Imaging Collaboration (ISIC) 2017 challenge on skin lesion analysis towards melanoma detection. As part of the challenge, participants were asked to design an algorithm that diagnoses skin lesion images as one of three skin diseases (melanoma, nevus, or seborrheic keratosis). In this project, you will build a model that generates your own predictions.

Data Overview:

Training data (5.3 GB), validation data (824.5 MB), and test data (5.1 GB). The training, validation, and test images are stored in the data/train/, data/valid/, and data/test/ folders under data/, respectively. Each of these folders contains three subfolders (melanoma/, nevus/, seborrheic_keratosis/), one for each of the three image classes.
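Concretely, the data/ folder described above is laid out as follows (each leaf folder holds the images for one class):

data/
├── train/
│   ├── melanoma/
│   ├── nevus/
│   └── seborrheic_keratosis/
├── valid/
│   ├── melanoma/
│   ├── nevus/
│   └── seborrheic_keratosis/
└── test/
    ├── melanoma/
    ├── nevus/
    └── seborrheic_keratosis/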

Objectives:

Task 1: Use a convolutional neural network to distinguish malignant melanoma from benign skin lesions (nevi, seborrheic keratoses).

Task 2: All of the skin lesions we examine are caused by abnormal growth of either melanocytes or keratinocytes (two different types of epidermal skin cells). Melanomas and nevi arise from melanocytes, while seborrheic keratoses arise from keratinocytes.

Building the Model:

1. Loading and preprocessing the images:

from skimage import io, transform, img_as_ubyte
import numpy as np
import os

def picture_cut(f):
    picture = io.imread(f)                                  # read one image from disk
    convert_picture = transform.resize(picture, (32, 32))   # resize every image to the same 32x32 shape
    return convert_picture

base_path = 'G:/project/dermatologist-ai/data/'
read_path = ['train/melanoma', 'train/nevus', 'train/seborrheic_keratosis',
             'valid/melanoma', 'valid/nevus', 'valid/seborrheic_keratosis',
             'test/melanoma', 'test/nevus', 'test/seborrheic_keratosis']
save_path = ['d:/cancer_data/train/', 'd:/cancer_data/valid/', 'd:/cancer_data/test/']

for i in range(len(read_path)):
    pattern = base_path + read_path[i] + '/*.jpg'           # avoid shadowing the built-in str
    coll = io.ImageCollection(pattern, load_func=picture_cut)
    j = i // 3                                              # map the nine source folders to train/valid/test
    os.makedirs(save_path[j], exist_ok=True)
    for k in range(len(coll)):
        # keep the original file name (coll.files is in the same order as coll);
        # resize returns floats in [0, 1], so convert back to uint8 before saving
        name = os.path.basename(coll.files[k])
        io.imsave(save_path[j] + name, img_as_ubyte(coll[k]))

# Normalization: scale pixel values to [0, 1]
def normalize(x):
    return np.array(x) / 255.0

coll_1 = io.ImageCollection('d:/cancer_data/train/*.jpg')
match_1 = io.concatenate_images(coll_1)   # stack the images into a single array
train_features = normalize(match_1)

coll_2 = io.ImageCollection('d:/cancer_data/valid/*.jpg')
match_2 = io.concatenate_images(coll_2)
valid_features = normalize(match_2)

coll_3 = io.ImageCollection('d:/cancer_data/test/*.jpg')
match_3 = io.concatenate_images(coll_3)
test_features = normalize(match_3)
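
The code further down refers to train_labels and valid_labels, but the post never builds them. Below is a minimal sketch of how they could be constructed; the helper names build_label_map and labels_for_collection are ours, and the two-way encoding ([1, 0] for melanoma, [0, 1] for the benign classes) is an assumption chosen to match the two-unit output used by the network in step 3.

# Sketch only: build one-hot labels in the same order as the stacked features.
# Assumption: the class of each image is recovered from the per-class folders under data/.
def build_label_map(split):                      # split is 'train', 'valid' or 'test'
    label_map = {}
    for class_name in ['melanoma', 'nevus', 'seborrheic_keratosis']:
        folder = base_path + split + '/' + class_name
        for name in os.listdir(folder):
            # binary encoding for Task 1: melanoma vs. benign
            label_map[name] = np.array([1.0, 0.0]) if class_name == 'melanoma' else np.array([0.0, 1.0])
    return label_map

def labels_for_collection(coll, label_map):
    # ImageCollection keeps the paths it loaded in coll.files, so the labels
    # come out in the same order as the features stacked above
    return np.array([label_map[os.path.basename(f)] for f in coll.files])

train_labels = labels_for_collection(coll_1, build_label_map('train'))
valid_labels = labels_for_collection(coll_2, build_label_map('valid'))
test_labels = labels_for_collection(coll_3, build_label_map('test'))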

2. Building the convolution and max-pooling layers:

import tensorflow as tf   # TensorFlow 1.x API (use tf.compat.v1 under TensorFlow 2)

def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    # convolution weights: [filter_height, filter_width, in_channels, out_channels]
    weight = tf.Variable(tf.truncated_normal(
        [conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs],
        stddev=0.05))
    bias = tf.Variable(tf.truncated_normal([conv_num_outputs], stddev=0.1))

    # convolution + bias + ReLU, followed by max pooling
    layer = tf.nn.conv2d(x_tensor, weight, [1, conv_strides[0], conv_strides[1], 1], padding="SAME")
    layer = tf.nn.bias_add(layer, bias)
    layer = tf.nn.relu(layer)
    layer = tf.nn.max_pool(layer, [1, pool_ksize[0], pool_ksize[1], 1],
                           [1, pool_strides[0], pool_strides[1], 1], padding="SAME")
    return layer
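
conv_net below also calls flatten, fully_conn and output, and the graph-building code uses neural_net_image_input, neural_net_label_input and neural_net_keep_prob_input; none of these are defined in the post (they come from the Udacity image-classification project template). A minimal sketch in the same TensorFlow 1.x style, under that assumption:

def flatten(x_tensor):
    # collapse all dimensions except the batch dimension into one feature vector
    shape = x_tensor.get_shape().as_list()
    return tf.reshape(x_tensor, [-1, shape[1] * shape[2] * shape[3]])

def fully_conn(x_tensor, num_outputs):
    # fully connected layer with ReLU activation
    num_inputs = x_tensor.get_shape().as_list()[-1]
    weight = tf.Variable(tf.truncated_normal([num_inputs, num_outputs], stddev=0.05))
    bias = tf.Variable(tf.zeros([num_outputs]))
    return tf.nn.relu(tf.add(tf.matmul(x_tensor, weight), bias))

def output(x_tensor, num_outputs):
    # linear output layer producing logits (no activation)
    num_inputs = x_tensor.get_shape().as_list()[-1]
    weight = tf.Variable(tf.truncated_normal([num_inputs, num_outputs], stddev=0.05))
    bias = tf.Variable(tf.zeros([num_outputs]))
    return tf.add(tf.matmul(x_tensor, weight), bias)

def neural_net_image_input(image_shape):
    return tf.placeholder(tf.float32, [None, *image_shape], name='x')

def neural_net_label_input(n_classes):
    return tf.placeholder(tf.float32, [None, n_classes], name='y')

def neural_net_keep_prob_input():
    return tf.placeholder(tf.float32, name='keep_prob')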

3. Creating the convolutional model:

def conv_net(x, keep_prob):
    # four convolution + max-pooling blocks
    layer = conv2d_maxpool(x, 16, (5, 5), (2, 2), (3, 3), (2, 2))
    layer = conv2d_maxpool(layer, 32, (5, 5), (2, 2), (3, 3), (2, 2))
    layer = conv2d_maxpool(layer, 64, (5, 5), (2, 2), (3, 3), (2, 2))
    layer = conv2d_maxpool(layer, 128, (5, 5), (2, 2), (3, 3), (2, 2))

    # flatten and apply dropout
    layer = flatten(layer)
    layer = tf.nn.dropout(layer, keep_prob)

    # two fully connected layers, each followed by dropout
    layer = fully_conn(layer, 256)
    layer = tf.nn.dropout(layer, keep_prob)
    layer = fully_conn(layer, 64)
    layer = tf.nn.dropout(layer, keep_prob)

    # two-way output layer (logits)
    layer = output(layer, 2)
    return layer

tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(2)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

logits = tf.identity(logits, name='logits')

# Loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

4. Other helpers:

# Print training and validation loss/accuracy
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    train_loss, train_acc = session.run([cost, accuracy], feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_loss, valid_acc = session.run([cost, accuracy], feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Train(Loss: {:.4f} Accuracy: {:.2f}%)  Validation(Loss: {:.4f} Accuracy: {:.2f}%)'.format(
        train_loss, train_acc * 100, valid_loss, valid_acc * 100))

# Yield features and labels in matching batches
def batch_features_labels(features, labels, batch_size):
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]
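
The post ends before the actual training loop. A minimal sketch of one is given below; the hyperparameter values (epochs, batch_size, keep_probability) and the save path are assumptions, not values from the original author.

epochs = 30                    # assumed hyperparameters, not from the original post
batch_size = 64
keep_probability = 0.5
save_model_path = './skin_cancer_model'

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        # one pass over the training set in mini-batches
        for batch_features, batch_labels in batch_features_labels(train_features, train_labels, batch_size):
            sess.run(optimizer, feed_dict={x: batch_features, y: batch_labels, keep_prob: keep_probability})
        print('Epoch {:>2}: '.format(epoch + 1), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # evaluate on the held-out test set
    test_acc = sess.run(accuracy, feed_dict={x: test_features, y: test_labels, keep_prob: 1.0})
    print('Test Accuracy: {:.2f}%'.format(test_acc * 100))

    # save the trained graph so it can be restored later for prediction
    saver = tf.train.Saver()
    saver.save(sess, save_model_path)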

Reposted from blog.csdn.net/Charlotte_android/article/details/81268214