100 Examples of Deep Learning: Convolutional Neural Network (ResNet-50) Bird Recognition | Day 8

1. Preliminary work

This article uses ResNet-50 to recognize and classify pictures of birds.

My environment:

  • Language environment: Python 3.6.5
  • Compiler: jupyter notebook
  • Deep learning environment: TensorFlow2

Highlights from previous issues:

From the column: [100 cases of deep learning]

For reprints, please contact me via the contact information on the left side of the page (visible on desktop), or via private message on the site.

1. Setting up the GPU

If you are using a CPU, you can comment out this part of the code.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpus[0]], "GPU")

2. Import data

import matplotlib.pyplot as plt
# Support Chinese characters in plots
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False  # display minus signs correctly

import os,PIL

# Set a random seed so results are as reproducible as possible
import numpy as np
np.random.seed(1)

# Set a random seed so results are as reproducible as possible
import tensorflow as tf
tf.random.set_seed(1)

from tensorflow import keras
from tensorflow.keras import layers,models

import pathlib
data_dir = "D:/jupyter notebook/DL-100-days/datasets/bird_photos"

data_dir = pathlib.Path(data_dir)

3. View data

image_count = len(list(data_dir.glob('*/*')))

print("Total number of images:", image_count)
Total number of images: 565

2. Data preprocessing

Folder                     Quantity
Bananaquit                 166 images
Black Throated Bushtiti    111 images
Black skimmer              122 images
Cockatoo                   166 images

1. Load data

Use the image_dataset_from_directory method to load the data from disk into a tf.data.Dataset.

batch_size = 8
img_height = 224
img_width = 224

Students whose TensorFlow version is 2.2.0 may encounter the error module 'tensorflow.keras.preprocessing' has no attribute 'image_dataset_from_directory'; upgrading TensorFlow resolves it.
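Since the loaders below depend on a sufficiently new TensorFlow, a quick version guard can fail early with a clear message. A minimal sketch using only the standard library (the helper names are my own; it assumes plain `X.Y.Z` version strings, and in the notebook you would pass `tf.__version__`):

```python
def version_tuple(v):
    """Parse a dotted version string like '2.2.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

# Per the note above, image_dataset_from_directory needs a newer TF than 2.2.0;
# 2.3.0 is assumed here as the minimum.
MIN_VERSION = "2.3.0"

def check_tf_version(current):
    """Return True if `current` is at least MIN_VERSION."""
    return version_tuple(current) >= version_tuple(MIN_VERSION)

# In the notebook you would pass tf.__version__:
print(check_tf_version("2.2.0"))  # False -> upgrade TensorFlow first
print(check_tf_version("2.4.1"))  # True
```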

"""
For a detailed introduction to image_dataset_from_directory(), see:
https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 565 files belonging to 4 classes.
Using 452 files for training.
"""
For a detailed introduction to image_dataset_from_directory(), see:
https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 565 files belonging to 4 classes.
Using 113 files for validation.

We can output the labels of the data set through class_names; the labels correspond to the directory names in alphabetical order.

class_names = train_ds.class_names
print(class_names)
['Bananaquit', 'Black Throated Bushtiti', 'Black skimmer', 'Cockatoo']
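The ordering above can be reproduced with plain Python: image_dataset_from_directory assigns label indices by sorting the directory names (a sketch, with the folder list hard-coded from the table earlier):

```python
# Folder names from the table above; image_dataset_from_directory assigns
# label indices by the sorted order of these directory names.
folders = ['Cockatoo', 'Bananaquit', 'Black skimmer', 'Black Throated Bushtiti']
class_names = sorted(folders)
label_of = {name: i for i, name in enumerate(class_names)}

print(class_names)
# ['Bananaquit', 'Black Throated Bushtiti', 'Black skimmer', 'Cockatoo']
print(label_of['Black skimmer'])  # 2
```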

2. Visualize the data

plt.figure(figsize=(10, 5))  # figure width 10, height 5
plt.suptitle("微信公众号:K同学啊")

for images, labels in train_ds.take(1):
    for i in range(8):
        
        ax = plt.subplot(2, 4, i + 1)  

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        
        plt.axis("off")
plt.imshow(images[1].numpy().astype("uint8"))

3. Check the data again

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
(8, 224, 224, 3)
(8,)
  • image_batch is a tensor of shape (8, 224, 224, 3): a batch of 8 images of shape 224x224x3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (8,); these labels correspond to the 8 images.

4. Configure the data set

  • shuffle(): shuffles the data. For a detailed introduction to this function, see: https://zhuanlan.zhihu.com/p/42417456
  • prefetch(): prefetches data to speed up training; see my previous two articles for a detailed explanation.
  • cache(): caches the data set in memory to speed up training.
AUTOTUNE = tf.data.AUTOTUNE  # on TensorFlow < 2.4, use tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

3. Introduction to Residual Network (ResNet)

1. What does the residual network solve?

The residual network solves the problem of network degradation caused by stacking too many hidden layers. The degradation problem: as the number of hidden layers increases, the network's accuracy saturates and then degrades rapidly, and this degradation is not caused by overfitting.

Expansion: "Two Dark Clouds" of Deep Neural Networks

  • Vanishing/exploding gradients

Simply put, when the network is too deep, model training becomes difficult to converge. This problem can be effectively controlled with standardized initialization and intermediate-layer normalization methods. (It is enough to know this at this stage.)

  • Network degradation

As the depth of the network increases, the performance of the network first gradually increases to saturation, and then decreases rapidly. This degradation is not caused by overfitting.
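The intuition behind the fix can be shown in a few lines of NumPy: a residual layer computes y = F(x) + x, so when its weights are near zero it already implements the identity mapping, whereas a plain layer must learn the whole mapping from scratch. An illustrative sketch (my own toy example, not the ResNet-50 code below):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

def plain_layer(x, W):
    """A plain layer must learn the whole target mapping H(x) itself."""
    return np.maximum(W @ x, 0.0)          # ReLU(Wx)

def residual_layer(x, W):
    """A residual layer only learns the correction F(x) = H(x) - x."""
    return np.maximum(W @ x, 0.0) + x      # ReLU(Wx) + x

# With weights at zero, the residual layer is already the identity mapping,
# while the plain layer collapses to an all-zero output.
W = np.zeros((8, 8))
print(np.allclose(residual_layer(x, W), x))   # True
print(np.allclose(plain_layer(x, W), 0.0))    # True
```

This is why adding residual layers cannot easily make a deeper network worse than a shallower one: the extra layers can default to the identity.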

2. Introduction to ResNet-50

ResNet-50 has two basic blocks, named Conv Block and Identity Block.
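The difference between the two blocks is the shortcut path: the Identity Block adds the input to the main path unchanged (so shapes must already match), while the Conv Block puts a strided 1x1 convolution plus BatchNormalization on the shortcut so it matches the main path after down-sampling and a channel change. A shape-only sketch in plain Python (the helper names are my own):

```python
def conv_out(size, strides):
    """Output spatial size of a stride-`strides` 1x1 convolution."""
    return (size - 1) // strides + 1

def identity_block_shape(shape):
    """Identity Block: the shortcut is the input itself, so the output
    shape must equal the input shape (nothing on the shortcut path)."""
    return shape

def conv_block_shape(shape, c_out, strides=2):
    """Conv Block: a 1x1 convolution (plus BN) sits on the shortcut so it
    can match the main path after down-sampling and a channel change."""
    h, w, _ = shape
    return (conv_out(h, strides), conv_out(w, strides), c_out)

# Stage 3 of ResNet-50: the Conv Block takes 55x55x256 to 28x28x512,
# then the Identity Blocks keep that shape.
s = conv_block_shape((55, 55, 256), 512)
print(s)                         # (28, 28, 512)
print(identity_block_shape(s))   # (28, 28, 512)
```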

Conv Block structure: [diagram]

Identity Block structure: [diagram]

ResNet-50 overall structure: [diagram]

4. Build the ResNet-50 network model

The following is the focus of this article. You can try to build ResNet-50 yourself from the three pictures above.

from tensorflow.keras import layers

from tensorflow.keras.layers import Input, Activation, BatchNormalization, Flatten
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, ZeroPadding2D, AveragePooling2D
from tensorflow.keras.models import Model

def identity_block(input_tensor, kernel_size, filters, stage, block):

    filters1, filters2, filters3 = filters

    name_base = str(stage) + block + '_identity_block_'

    x = Conv2D(filters1, (1, 1), name=name_base + 'conv1')(input_tensor)
    x = BatchNormalization(name=name_base + 'bn1')(x)
    x = Activation('relu', name=name_base + 'relu1')(x)

    x = Conv2D(filters2, kernel_size,padding='same', name=name_base + 'conv2')(x)
    x = BatchNormalization(name=name_base + 'bn2')(x)
    x = Activation('relu', name=name_base + 'relu2')(x)

    x = Conv2D(filters3, (1, 1), name=name_base + 'conv3')(x)
    x = BatchNormalization(name=name_base + 'bn3')(x)

    x = layers.add([x, input_tensor], name=name_base + 'add')
    x = Activation('relu', name=name_base + 'relu4')(x)
    return x


def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):

    filters1, filters2, filters3 = filters

    res_name_base = str(stage) + block + '_conv_block_res_'
    name_base = str(stage) + block + '_conv_block_'

    x = Conv2D(filters1, (1, 1), strides=strides, name=name_base + 'conv1')(input_tensor)
    x = BatchNormalization(name=name_base + 'bn1')(x)
    x = Activation('relu', name=name_base + 'relu1')(x)

    x = Conv2D(filters2, kernel_size, padding='same', name=name_base + 'conv2')(x)
    x = BatchNormalization(name=name_base + 'bn2')(x)
    x = Activation('relu', name=name_base + 'relu2')(x)

    x = Conv2D(filters3, (1, 1), name=name_base + 'conv3')(x)
    x = BatchNormalization(name=name_base + 'bn3')(x)

    shortcut = Conv2D(filters3, (1, 1), strides=strides, name=res_name_base + 'conv')(input_tensor)
    shortcut = BatchNormalization(name=res_name_base + 'bn')(shortcut)

    x = layers.add([x, shortcut], name=name_base+'add')
    x = Activation('relu', name=name_base+'relu4')(x)
    return x

def ResNet50(input_shape=[224,224,3],classes=1000):

    img_input = Input(shape=input_shape)
    x = ZeroPadding2D((3, 3))(img_input)

    x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(x)
    x = BatchNormalization(name='bn_conv1')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((3, 3), strides=(2, 2))(x)

    x =     conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')

    x =     conv_block(x, 3, [128, 128, 512], stage=3, block='a')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='b')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='c')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='d')

    x =     conv_block(x, 3, [256, 256, 1024], stage=4, block='a')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='c')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='d')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='e')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f')

    x =     conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')

    x = AveragePooling2D((7, 7), name='avg_pool')(x)

    x = Flatten()(x)
    x = Dense(classes, activation='softmax', name='fc1000')(x)

    model = Model(img_input, x, name='resnet50')
    
    # Load pretrained weights (assumes the file has been downloaded locally)
    model.load_weights("resnet50_weights_tf_dim_ordering_tf_kernels.h5")

    return model

model = ResNet50()
model.summary()
Model: "resnet50"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 224, 224, 3) 0                                            
__________________________________________________________________________________________________
zero_padding2d (ZeroPadding2D)  (None, 230, 230, 3)  0           input_1[0][0]                    
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 112, 112, 64) 9472        zero_padding2d[0][0]             
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 112, 112, 64) 256         conv1[0][0]                      
__________________________________________________________________________________________________
activation (Activation)         (None, 112, 112, 64) 0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 55, 55, 64)   0           activation[0][0]                 
__________________________________________________________________________________________________
2a_conv_block_conv1 (Conv2D)    (None, 55, 55, 64)   4160        max_pooling2d[0][0]              
__________________________________________________________________________________________________
2a_conv_block_bn1 (BatchNormali (None, 55, 55, 64)   256         2a_conv_block_conv1[0][0]        
__________________________________________________________________________________________________
2a_conv_block_relu1 (Activation (None, 55, 55, 64)   0           2a_conv_block_bn1[0][0]          
__________________________________________________________________________________________________
2a_conv_block_conv2 (Conv2D)    (None, 55, 55, 64)   36928       2a_conv_block_relu1[0][0]        
__________________________________________________________________________________________________
2a_conv_block_bn2 (BatchNormali (None, 55, 55, 64)   256         2a_conv_block_conv2[0][0]        
__________________________________________________________________________________________________
2a_conv_block_relu2 (Activation (None, 55, 55, 64)   0           2a_conv_block_bn2[0][0]          
__________________________________________________________________________________________________
2a_conv_block_conv3 (Conv2D)    (None, 55, 55, 256)  16640       2a_conv_block_relu2[0][0]        
__________________________________________________________________________________________________
2a_conv_block_res_conv (Conv2D) (None, 55, 55, 256)  16640       max_pooling2d[0][0]              
__________________________________________________________________________________________________
2a_conv_block_bn3 (BatchNormali (None, 55, 55, 256)  1024        2a_conv_block_conv3[0][0]        
__________________________________________________________________________________________________
2a_conv_block_res_bn (BatchNorm (None, 55, 55, 256)  1024        2a_conv_block_res_conv[0][0]     
__________________________________________________________________________________________________
2a_conv_block_add (Add)         (None, 55, 55, 256)  0           2a_conv_block_bn3[0][0]          
                                                                 2a_conv_block_res_bn[0][0]       
__________________________________________________________________________________________________
2a_conv_block_relu4 (Activation (None, 55, 55, 256)  0           2a_conv_block_add[0][0]          
__________________________________________________________________________________________________
2b_identity_block_conv1 (Conv2D (None, 55, 55, 64)   16448       2a_conv_block_relu4[0][0]        
__________________________________________________________________________________________________
2b_identity_block_bn1 (BatchNor (None, 55, 55, 64)   256         2b_identity_block_conv1[0][0]    

     =============================================================
              (several rows omitted here)
     =============================================================
__________________________________________________________________________________________________
5c_identity_block_relu2 (Activa (None, 7, 7, 512)    0           5c_identity_block_bn2[0][0]      
__________________________________________________________________________________________________
5c_identity_block_conv3 (Conv2D (None, 7, 7, 2048)   1050624     5c_identity_block_relu2[0][0]    
__________________________________________________________________________________________________
5c_identity_block_bn3 (BatchNor (None, 7, 7, 2048)   8192        5c_identity_block_conv3[0][0]    
__________________________________________________________________________________________________
5c_identity_block_add (Add)     (None, 7, 7, 2048)   0           5c_identity_block_bn3[0][0]      
                                                                 5b_identity_block_relu4[0][0]    
__________________________________________________________________________________________________
5c_identity_block_relu4 (Activa (None, 7, 7, 2048)   0           5c_identity_block_add[0][0]      
__________________________________________________________________________________________________
avg_pool (AveragePooling2D)     (None, 1, 1, 2048)   0           5c_identity_block_relu4[0][0]    
__________________________________________________________________________________________________
flatten (Flatten)               (None, 2048)         0           avg_pool[0][0]                   
__________________________________________________________________________________________________
fc1000 (Dense)                  (None, 1000)         2049000     flatten[0][0]                    
==================================================================================================
Total params: 25,636,712
Trainable params: 25,583,592
Non-trainable params: 53,120
__________________________________________________________________________________________________

5. Compile the model

Before training the model, a few more settings are needed. These are added in the model's compile step:

  • Loss function (loss): measures the model's accuracy during training.
  • Optimizer (optimizer): decides how the model is updated based on the data it sees and its loss function.
  • Metrics (metrics): monitor the training and testing steps. The example below uses accuracy, the fraction of images that are correctly classified.
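To make the loss concrete: sparse_categorical_crossentropy takes integer labels directly and, for a single sample, is simply the negative log of the probability the model assigns to the true class. A NumPy illustration (not the Keras implementation, which also handles batching and logits):

```python
import numpy as np

def sparse_categorical_crossentropy(probs, label):
    """Cross-entropy for one sample: the negative log of the probability
    the model assigns to the true (integer) class label."""
    return -np.log(probs[label])

# Hypothetical softmax output of the model for one image over the 4 bird classes:
probs = np.array([0.7, 0.1, 0.1, 0.1])
print(round(float(sparse_categorical_crossentropy(probs, 0)), 4))  # 0.3567
```

The more confidently the model predicts the correct class, the closer this value is to zero.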
# Define an optimizer with a custom learning rate.
# Note: compile() below passes the string "adam" (Adam with default settings),
# so this custom optimizer is not actually used; pass optimizer=opt to use it.
opt = tf.keras.optimizers.Adam(learning_rate=1e-7)

model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

6. Train the model

epochs = 10

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
Epoch 1/10
57/57 [==============================] - 12s 86ms/step - loss: 2.4313 - accuracy: 0.6548 - val_loss: 213.7383 - val_accuracy: 0.3186
Epoch 2/10
57/57 [==============================] - 3s 52ms/step - loss: 0.4293 - accuracy: 0.8557 - val_loss: 9.0470 - val_accuracy: 0.2566
Epoch 3/10
57/57 [==============================] - 3s 52ms/step - loss: 0.2309 - accuracy: 0.9183 - val_loss: 1.4181 - val_accuracy: 0.7080
Epoch 4/10
57/57 [==============================] - 3s 53ms/step - loss: 0.1721 - accuracy: 0.9535 - val_loss: 2.5627 - val_accuracy: 0.6726
Epoch 5/10
57/57 [==============================] - 3s 53ms/step - loss: 0.0795 - accuracy: 0.9701 - val_loss: 0.2747 - val_accuracy: 0.8938
Epoch 6/10
57/57 [==============================] - 3s 52ms/step - loss: 0.0435 - accuracy: 0.9899 - val_loss: 0.1483 - val_accuracy: 0.9381
Epoch 7/10
57/57 [==============================] - 3s 52ms/step - loss: 0.0308 - accuracy: 0.9970 - val_loss: 0.1705 - val_accuracy: 0.9381
Epoch 8/10
57/57 [==============================] - 3s 52ms/step - loss: 0.0019 - accuracy: 1.0000 - val_loss: 0.0674 - val_accuracy: 0.9735
Epoch 9/10
57/57 [==============================] - 3s 52ms/step - loss: 8.2391e-04 - accuracy: 1.0000 - val_loss: 0.0720 - val_accuracy: 0.9735
Epoch 10/10
57/57 [==============================] - 3s 52ms/step - loss: 6.0079e-04 - accuracy: 1.0000 - val_loss: 0.0762 - val_accuracy: 0.9646

7. Model evaluation

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.suptitle("微信公众号:K同学啊")

plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

8. Save and load the model

This is the easiest way to save and load models.

# Save the model
model.save('model/my_model.h5')
# Load the model
new_model = keras.models.load_model('model/my_model.h5')

9. Prediction

# Use the loaded model (new_model) to check the prediction results

plt.figure(figsize=(10, 5))  # figure width 10, height 5
plt.suptitle("微信公众号:K同学啊")

for images, labels in val_ds.take(1):
    for i in range(8):
        ax = plt.subplot(2, 4, i + 1)  
        
        # Display the image
        plt.imshow(images[i].numpy().astype("uint8"))
        
        # Add a batch dimension to the image
        img_array = tf.expand_dims(images[i], 0) 
        
        # Use the model to predict the class of the image
        predictions = new_model.predict(img_array)
        plt.title(class_names[np.argmax(predictions)])

        plt.axis("off")

Other exciting content:

"100 Cases of Deep Learning" column: [Portal]

Students who need the data set can leave an email address in the comments, find my contact information on the left side of the article (visible on desktop), or send me a private message on the site. If this article helped you, remember to follow, like, and bookmark it.