tf.keras -- Getting Started: How to Define a Standard VGG16 Network

Copyright notice: this is an original post by the author; do not reproduce without permission. https://blog.csdn.net/u010472607/article/details/82319507

Part 1

tf.keras ships with a ready-made VGG16 implementation, which you can study directly.
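
For reference, the built-in version can be instantiated in a couple of lines; a minimal sketch (weights=None builds the architecture with randomly initialized weights, while weights='imagenet' would download the pretrained ImageNet weights):

from tensorflow.keras.applications import VGG16

# Build the stock VGG16 with the standard 1000-class ImageNet head.
builtin_model = VGG16(include_top=True, weights=None, classes=1000)
builtin_model.summary()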

Building it from scratch (extracted from the official example):

# -*- coding: utf-8 -*-
"""
tf.keras layer naming

Implement a standard VGG16 network of your own, following the official example
"""
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

input_shape = (224, 224, 3)
include_top = True   # keep the fully connected classification head
input_tensor = None  # not used here; kept to mirror the official vgg16() signature
classes = 1000

# Input layer: the input shape must be specified.
# Note: TensorFlow uses the NHWC data format (batch, height, width, channels) -- keep this in mind!
img_input = Input(shape=input_shape)

# Block 1
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

# Block 2
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

# Block 3
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

# Block 4
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

# Block 5
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

if include_top:
    # Classification block
    x = Flatten(name='flatten')(x)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    x = Dense(classes, activation='softmax', name='predictions')(x)

model = Model(img_input, x, name='vgg16')

model.summary()
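
To sanity-check the graph, you can push a random NHWC batch through it; a minimal sketch assuming NumPy is installed:

import numpy as np

# A single random image in NHWC layout, matching the (224, 224, 3) input.
dummy = np.random.rand(1, 224, 224, 3).astype('float32')
preds = model.predict(dummy)
print(preds.shape)  # expected: (1, 1000)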

Part 2

In practice you often need to build a new network on top of an existing one. Here, taking VGG16 as the base network, we redefine the classification head and show two approaches based on the Model class.

Approach 1:

# -*- coding: utf-8 -*-
"""
tf.keras layer definition

How to modify an existing network and adjust the output layer for your own dataset
"""
from tensorflow.keras.models import Model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense

NUM_CLASSES = 10

# Customize the output layers of the VGG16 network
# Approach 1: rebuild the classification head with the functional API
base_model = VGG16(include_top=False, weights=None, input_shape=(224, 224, 3))
x = base_model.output
# Classification head
x = Flatten(name='flatten')(x)
x = Dense(4096, activation='relu', name='fc1-')(x)
x = Dense(4096, activation='relu', name='fc2-')(x)
x = Dense(NUM_CLASSES, activation='softmax', name='predictions')(x)
model = Model(base_model.input, x, name="My-VGG16")
model.summary()
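
When the goal is fine-tuning rather than training from scratch (i.e. the base is created with weights='imagenet' instead of weights=None), a common follow-up is to freeze the convolutional base so that only the new classification head is trained; a minimal sketch with illustrative hyperparameters:

# Freeze every layer of the convolutional base; only fc1-, fc2- and
# predictions remain trainable.
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])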

Approach 2 (subclassing Model):

# -*- coding: utf-8 -*-
"""
tf.keras layer definition

How to modify an existing network and adjust the output layer for your own dataset
"""
from tensorflow.keras.models import Model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense

NUM_CLASSES = 10

# Customize the output layers of the VGG16 network
class MyVGG16(Model):
    def __init__(self, NUM_CLASSES=1000):
        base_model = VGG16(include_top=False, weights=None, input_shape=(224, 224, 3))
        x = base_model.output
        # Classification head
        x = Flatten(name='flatten')(x)
        x = Dense(4096, activation='relu', name='fc1-')(x)
        x = Dense(4096, activation='relu', name='fc2-')(x)
        x = Dense(NUM_CLASSES, activation='softmax', name='predictions')(x)
        super(MyVGG16, self).__init__(base_model.input, x, name="My-VGG16")
        # Set custom attributes only after super().__init__() has run,
        # otherwise tf.keras raises a RuntimeError.
        self.NUM_CLASSES = NUM_CLASSES

    # A complex model would override call() to control the forward pass itself;
    # for a simple graph like this one, call() can be omitted.
    # def call(self, inputs, training=None):
    #     ...

# model = VGG16(include_top=True, weights=None)
model = MyVGG16(NUM_CLASSES=NUM_CLASSES)
model.summary()

Printed output:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1- (Dense)                 (None, 4096)              102764544 
_________________________________________________________________
fc2- (Dense)                 (None, 4096)              16781312  
_________________________________________________________________
predictions (Dense)          (None, 10)                40970     
=================================================================
Total params: 134,301,514
Trainable params: 134,301,514
Non-trainable params: 0
_________________________________________________________________
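
As a quick check on the summary, the parameter counts follow directly from the layer shapes: fc1- has 25088 × 4096 weights plus 4096 biases = 102,764,544 parameters, block1_conv1 has 3 × 3 × 3 × 64 weights plus 64 biases = 1,792, and only the final predictions layer changes with NUM_CLASSES (4096 × 10 + 10 = 40,970).

Either version of the model can then be compiled and trained as usual; a minimal sketch using random placeholder data (batch size and epoch count are arbitrary, only to demonstrate the calls):

import numpy as np
from tensorflow.keras.utils import to_categorical

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Random placeholder inputs and one-hot labels, just to exercise fit().
x_train = np.random.rand(8, 224, 224, 3).astype('float32')
y_train = to_categorical(np.random.randint(0, NUM_CLASSES, size=(8,)), NUM_CLASSES)
model.fit(x_train, y_train, batch_size=4, epochs=1)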
