We classify images using deep learning. VGG16 is one of the canonical convolutional models for image applications. Here we use transfer learning: the pretrained convolutional base of VGG16 is kept, the classifier on top is replaced, and only the last convolutional block is fine-tuned together with the new classifier.
The task is to classify five kinds of flowers: daisy, dandelion, rose, sunflower, and tulip. We use 500 training images and 200 validation images per class. The trained model reaches better than 80% accuracy, and a trial prediction on a sunflower test image comes out correct.


Classifying images with transfer learning on VGG16
In [1]:
import os

import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Dropout, Activation, Flatten
from keras import optimizers
from keras.applications.vgg16 import VGG16
Using TensorFlow backend.
In [2]:
keras.__version__
Out[2]:
'2.2.4'

Directories for the training, validation, and test images

In [3]:
# Classification classes
classes = ['daisy', 'dandelion', 'rose', 'sunflower', 'tulip']
nb_classes = len(classes)
batch_size_for_data_generator = 20

base_dir = "."

train_dir = os.path.join(base_dir, 'train_images')
validation_dir = os.path.join(base_dir, 'validation_images')
test_dir = os.path.join(base_dir, 'test_images')

train_daisy_dir = os.path.join(train_dir, 'daisy')
train_dandelion_dir = os.path.join(train_dir, 'dandelion')
train_rose_dir = os.path.join(train_dir, 'rose')
train_sunflower_dir = os.path.join(train_dir, 'sunflower')
train_tulip_dir = os.path.join(train_dir, 'tulip')

validation_daisy_dir = os.path.join(validation_dir, 'daisy')
validation_dandelion_dir = os.path.join(validation_dir, 'dandelion')
validation_rose_dir = os.path.join(validation_dir, 'rose')
validation_sunflower_dir = os.path.join(validation_dir, 'sunflower')
validation_tulip_dir = os.path.join(validation_dir, 'tulip')

test_daisy_dir = os.path.join(test_dir, 'daisy')
test_dandelion_dir = os.path.join(test_dir, 'dandelion')
test_rose_dir = os.path.join(test_dir, 'rose')
test_sunflower_dir = os.path.join(test_dir, 'sunflower')
test_tulip_dir = os.path.join(test_dir, 'tulip')

# Image size (input resolution fed to the network)
img_rows, img_cols = 200, 200
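
The fifteen near-identical per-class directory assignments above could also be generated in a loop. A minimal equivalent sketch (sub_dirs is a hypothetical helper dictionary, not used in the rest of the notebook):

sub_dirs = {}
for split_dir in [train_dir, validation_dir, test_dir]:
    for c in classes:
        # e.g. sub_dirs[('train_images', 'rose')] == os.path.join('.', 'train_images', 'rose')
        sub_dirs[(os.path.basename(split_dir), c)] = os.path.join(split_dir, c)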

Check the number of images in each directory

In [4]:
print('total training daisy images:', len(os.listdir(train_daisy_dir)),train_daisy_dir)
print('total training dandelion images:', len(os.listdir(train_dandelion_dir)),train_dandelion_dir)
print('total training rose images:', len(os.listdir(train_rose_dir)),train_rose_dir)
print('total training sunflower images:', len(os.listdir(train_sunflower_dir)),train_sunflower_dir)
print('total training tulip images:', len(os.listdir(train_tulip_dir)),train_tulip_dir)

print('total validation daisy images:', len(os.listdir(validation_daisy_dir)),validation_daisy_dir)
print('total validation dandelion images:', len(os.listdir(validation_dandelion_dir)),validation_dandelion_dir)
print('total validation rose images:', len(os.listdir(validation_rose_dir)),validation_rose_dir)
print('total validation sunflower images:', len(os.listdir(validation_sunflower_dir)),validation_sunflower_dir)
print('total validation tulip images:', len(os.listdir(validation_tulip_dir)),validation_tulip_dir)

print('total test daisy images:', len(os.listdir(test_daisy_dir)),test_daisy_dir)
print('total test dandelion images:', len(os.listdir(test_dandelion_dir)),test_dandelion_dir)
print('total test rose images:', len(os.listdir(test_rose_dir)),test_rose_dir)
print('total test sunflower images:', len(os.listdir(test_sunflower_dir)),test_sunflower_dir)
print('total test tulip images:', len(os.listdir(test_tulip_dir)),test_tulip_dir)
total training daisy images: 500 .\train_images\daisy
total training dandelion images: 500 .\train_images\dandelion
total training rose images: 500 .\train_images\rose
total training sunflower images: 500 .\train_images\sunflower
total training tulip images: 500 .\train_images\tulip
total validation daisy images: 200 .\validation_images\daisy
total validation dandelion images: 200 .\validation_images\dandelion
total validation rose images: 200 .\validation_images\rose
total validation sunflower images: 200 .\validation_images\sunflower
total validation tulip images: 200 .\validation_images\tulip
total test daisy images: 69 .\test_images\daisy
total test dandelion images: 352 .\test_images\dandelion
total test rose images: 84 .\test_images\rose
total test sunflower images: 34 .\test_images\sunflower
total test tulip images: 284 .\test_images\tulip

Augmenting the image data with ImageDataGenerator

In [5]:
train_datagen = ImageDataGenerator(rescale=1.0 / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(directory=train_dir,
                                                    target_size=(img_rows, img_cols),
                                                    color_mode='rgb',
                                                    classes=classes,
                                                    class_mode='categorical',
                                                    batch_size=batch_size_for_data_generator,
                                                    shuffle=True)
Found 2500 images belonging to 5 classes.
In [6]:
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

validation_generator = test_datagen.flow_from_directory(directory=validation_dir,
                                                        target_size=(img_rows, img_cols),
                                                        color_mode='rgb',
                                                        classes=classes,
                                                        class_mode='categorical',
                                                        batch_size=batch_size_for_data_generator,
                                                        shuffle=True)
Found 1000 images belonging to 5 classes.
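
As a quick sanity check, a single batch can be drawn from the generator to confirm the shapes. A minimal sketch; the expected shapes follow from the settings above:

x_batch, y_batch = next(train_generator)
print(x_batch.shape)  # expected: (20, 200, 200, 3)
print(y_batch.shape)  # expected: (20, 5) -- one-hot labels over the five classes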

The VGG16 model

In [7]:
input_tensor = Input(shape=(img_rows, img_cols, 3))
vgg16 = VGG16(include_top=False, weights='imagenet', input_tensor=input_tensor)
vgg16.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 200, 200, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 200, 200, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 200, 200, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 100, 100, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 100, 100, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 100, 100, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 50, 50, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 50, 50, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 50, 50, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 50, 50, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 25, 25, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 25, 25, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 25, 25, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 25, 25, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 12, 12, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 6, 6, 512)         0         
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________

Adding a fully connected classifier on top of the VGG16 model

In [8]:
top_model = Sequential()
top_model.add(Flatten(input_shape=vgg16.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(nb_classes, activation='softmax'))

# Attach the classifier to the output of the VGG16 convolutional base
model = Model(inputs=vgg16.input, outputs=top_model(vgg16.output))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 200, 200, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 200, 200, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 200, 200, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 100, 100, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 100, 100, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 100, 100, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 50, 50, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 50, 50, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 50, 50, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 50, 50, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 25, 25, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 25, 25, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 25, 25, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 25, 25, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 12, 12, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 6, 6, 512)         0         
_________________________________________________________________
sequential_1 (Sequential)    (None, 5)                 4720133   
=================================================================
Total params: 19,434,821
Trainable params: 19,434,821
Non-trainable params: 0
_________________________________________________________________
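
The 4,720,133 parameters reported for sequential_1 come entirely from the two Dense layers: Flatten turns the 6 × 6 × 512 feature map into 18,432 values, so Dense(256) holds 18,432 × 256 + 256 = 4,718,848 parameters and Dense(5) holds 256 × 5 + 5 = 1,285, which sum to 4,720,133.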

Training only the layers from block5_conv1 onward, plus the added fully connected classifier

In [9]:
# Unfreeze the base, then re-freeze every layer before block5_conv1
vgg16.trainable = True

set_trainable = False
for layer in vgg16.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    layer.trainable = set_trainable
In [10]:
for layer in vgg16.layers:
    print(layer, layer.trainable )
<keras.engine.input_layer.InputLayer object at 0x000002BCB77C9358> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB77C9908> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB778D748> False
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB778D9B0> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB78570B8> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9140BA8> False
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB9156AC8> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9167A20> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB917C390> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9193F28> False
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB91C0B70> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB91D3AC8> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB91FDB38> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9216C88> False
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB9229C18> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9240B70> True
<keras.layers.convolutional.Conv2D object at 0x000002BCB9269BE0> True
<keras.layers.convolutional.Conv2D object at 0x000002BCB92822B0> True
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB9298CC0> True
In [11]:
for layer in model.layers:
    print(layer, layer.trainable )
<keras.engine.input_layer.InputLayer object at 0x000002BCB77C9358> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB77C9908> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB778D748> False
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB778D9B0> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB78570B8> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9140BA8> False
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB9156AC8> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9167A20> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB917C390> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9193F28> False
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB91C0B70> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB91D3AC8> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB91FDB38> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9216C88> False
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB9229C18> False
<keras.layers.convolutional.Conv2D object at 0x000002BCB9240B70> True
<keras.layers.convolutional.Conv2D object at 0x000002BCB9269BE0> True
<keras.layers.convolutional.Conv2D object at 0x000002BCB92822B0> True
<keras.layers.pooling.MaxPooling2D object at 0x000002BCB9298CC0> True
<keras.engine.sequential.Sequential object at 0x000002BCBCCD5B00> True
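
The object representations above are hard to read; printing layer.name instead gives a clearer view of which layers are frozen. A minimal sketch of the same check (the last entry is the attached classifier, shown as a single Sequential layer):

for layer in model.layers:
    print(layer.name, layer.trainable)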
In [12]:
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-5),  # small learning rate to keep fine-tuning gentle
              metrics=['acc'])

Training

In [13]:
history = model.fit_generator(train_generator,
                              steps_per_epoch=25,   # 25 batches of 20 = 500 images per epoch
                              epochs=30,
                              validation_data=validation_generator,
                              validation_steps=10,  # 10 batches of 20 = 200 validation images
                              verbose=1)
Epoch 1/30
25/25 [==============================] - 19s 742ms/step - loss: 1.7846 - acc: 0.2260 - val_loss: 1.4590 - val_acc: 0.3750
Epoch 2/30
25/25 [==============================] - 12s 465ms/step - loss: 1.5717 - acc: 0.3140 - val_loss: 1.3343 - val_acc: 0.5150
Epoch 3/30
25/25 [==============================] - 12s 471ms/step - loss: 1.3851 - acc: 0.4240 - val_loss: 1.1347 - val_acc: 0.5700
Epoch 4/30
25/25 [==============================] - 12s 467ms/step - loss: 1.2057 - acc: 0.5360 - val_loss: 1.0050 - val_acc: 0.6400
Epoch 5/30
25/25 [==============================] - 12s 465ms/step - loss: 1.0874 - acc: 0.5740 - val_loss: 0.9271 - val_acc: 0.6500
Epoch 6/30
25/25 [==============================] - 12s 469ms/step - loss: 0.9593 - acc: 0.6480 - val_loss: 0.7695 - val_acc: 0.7350
Epoch 7/30
25/25 [==============================] - 12s 466ms/step - loss: 0.9121 - acc: 0.6320 - val_loss: 0.8004 - val_acc: 0.7300
Epoch 8/30
25/25 [==============================] - 12s 468ms/step - loss: 0.8780 - acc: 0.6660 - val_loss: 0.8283 - val_acc: 0.7150
Epoch 9/30
25/25 [==============================] - 12s 474ms/step - loss: 0.8224 - acc: 0.6920 - val_loss: 0.6235 - val_acc: 0.7650
Epoch 10/30
25/25 [==============================] - 12s 471ms/step - loss: 0.7298 - acc: 0.7300 - val_loss: 0.6413 - val_acc: 0.7550
Epoch 11/30
25/25 [==============================] - 12s 468ms/step - loss: 0.7014 - acc: 0.7140 - val_loss: 0.6686 - val_acc: 0.7650
Epoch 12/30
25/25 [==============================] - 12s 470ms/step - loss: 0.5839 - acc: 0.7900 - val_loss: 0.6091 - val_acc: 0.7500
Epoch 13/30
25/25 [==============================] - 12s 470ms/step - loss: 0.6186 - acc: 0.7820 - val_loss: 0.5687 - val_acc: 0.7950
Epoch 14/30
25/25 [==============================] - 12s 470ms/step - loss: 0.6812 - acc: 0.7460 - val_loss: 0.4834 - val_acc: 0.8100
Epoch 15/30
25/25 [==============================] - 12s 471ms/step - loss: 0.7044 - acc: 0.7120 - val_loss: 0.5387 - val_acc: 0.7900
Epoch 16/30
25/25 [==============================] - 12s 468ms/step - loss: 0.5584 - acc: 0.8160 - val_loss: 0.6987 - val_acc: 0.7050
Epoch 17/30
25/25 [==============================] - 12s 470ms/step - loss: 0.5730 - acc: 0.7840 - val_loss: 0.4891 - val_acc: 0.8150
Epoch 18/30
25/25 [==============================] - 12s 466ms/step - loss: 0.5879 - acc: 0.7740 - val_loss: 0.5013 - val_acc: 0.8300
Epoch 19/30
25/25 [==============================] - 12s 468ms/step - loss: 0.5282 - acc: 0.8160 - val_loss: 0.4267 - val_acc: 0.8600
Epoch 20/30
25/25 [==============================] - 12s 471ms/step - loss: 0.5530 - acc: 0.7980 - val_loss: 0.4987 - val_acc: 0.8350
Epoch 21/30
25/25 [==============================] - 12s 468ms/step - loss: 0.4602 - acc: 0.8460 - val_loss: 0.3858 - val_acc: 0.8300
Epoch 22/30
25/25 [==============================] - 12s 472ms/step - loss: 0.5306 - acc: 0.7960 - val_loss: 0.5561 - val_acc: 0.8100
Epoch 23/30
25/25 [==============================] - 12s 467ms/step - loss: 0.4891 - acc: 0.8320 - val_loss: 0.4646 - val_acc: 0.8400
Epoch 24/30
25/25 [==============================] - 12s 468ms/step - loss: 0.4925 - acc: 0.8360 - val_loss: 0.5183 - val_acc: 0.8100
Epoch 25/30
25/25 [==============================] - 12s 472ms/step - loss: 0.4701 - acc: 0.8320 - val_loss: 0.4827 - val_acc: 0.8300
Epoch 26/30
25/25 [==============================] - 12s 467ms/step - loss: 0.4426 - acc: 0.8460 - val_loss: 0.4985 - val_acc: 0.8050
Epoch 27/30
25/25 [==============================] - 12s 467ms/step - loss: 0.3773 - acc: 0.8700 - val_loss: 0.4224 - val_acc: 0.8650
Epoch 28/30
25/25 [==============================] - 12s 469ms/step - loss: 0.4123 - acc: 0.8560 - val_loss: 0.5356 - val_acc: 0.8000
Epoch 29/30
25/25 [==============================] - 12s 470ms/step - loss: 0.4190 - acc: 0.8520 - val_loss: 0.4123 - val_acc: 0.8600
Epoch 30/30
25/25 [==============================] - 12s 471ms/step - loss: 0.4291 - acc: 0.8500 - val_loss: 0.3669 - val_acc: 0.8650

Saving the trained weights

In [14]:
hdf5_file = os.path.join(base_dir, 'flower-model.hdf5')
model.save_weights(hdf5_file)
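
Because only the weights are saved here, loading them later requires rebuilding the same architecture first. A minimal sketch, assuming the model-construction cells above have been re-run:

model.load_weights(hdf5_file)

(model.save(...) would store the architecture together with the weights, restorable via keras.models.load_model.)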

Plotting the training history

In [15]:
import matplotlib.pyplot as plt
In [16]:
%matplotlib inline
In [17]:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
[Graph: accuracy on the training and validation data]

[Graph: loss on the training and validation data]

Measuring accuracy on the test images

In [18]:
test_generator = test_datagen.flow_from_directory(directory=test_dir,
                                                  target_size=(img_rows, img_cols),
                                                  color_mode='rgb',
                                                  classes=classes,
                                                  class_mode='categorical',
                                                  batch_size=batch_size_for_data_generator)

test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
Found 823 images belonging to 5 classes.
test acc: 0.832146490335707
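
Note that steps=50 draws 1,000 samples from an 823-image test set, so the generator wraps around and some images are counted twice. Since the test classes are also imbalanced, per-class results are informative. A minimal confusion-matrix sketch, assuming scikit-learn is available (eval_generator is a hypothetical second generator with shuffle=False so predictions stay aligned with the labels):

import numpy as np
from sklearn.metrics import confusion_matrix

eval_generator = test_datagen.flow_from_directory(directory=test_dir,
                                                  target_size=(img_rows, img_cols),
                                                  color_mode='rgb',
                                                  classes=classes,
                                                  class_mode='categorical',
                                                  batch_size=batch_size_for_data_generator,
                                                  shuffle=False)  # keep file order fixed
steps = int(np.ceil(eval_generator.samples / float(batch_size_for_data_generator)))
probs = model.predict_generator(eval_generator, steps=steps)
print(confusion_matrix(eval_generator.classes, probs.argmax(axis=1)))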

Classifying an actual test image

In [19]:
import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import preprocess_input
In [20]:
filename = os.path.join(test_dir, 'sunflower')
filename = os.path.join(filename, '3681233294_4f06cd8903.jpg')
filename
Out[20]:
'.\\test_images\\sunflower\\3681233294_4f06cd8903.jpg'
In [21]:
from PIL import Image
In [22]:
img = np.array( Image.open(filename))
plt.imshow( img )
Out[22]:
<matplotlib.image.AxesImage at 0x2bcb9fcc860>
[Photo: the sunflower test image]

In [23]:
img = load_img(filename, target_size=(img_rows, img_cols))
x = img_to_array(img)
x = np.expand_dims(x, axis=0)

# Note: training rescaled pixels by 1/255, whereas VGG16's preprocess_input
# applies ImageNet mean subtraction instead; strictly, x / 255.0 would match
# the training preprocessing. The prediction is correct regardless (see below).
predict = model.predict(preprocess_input(x))
for pre in predict:
    y = pre.argmax()
    print("test result=", classes[y], pre)
test result= sunflower [0.01896952 0.         0.         0.98103046 0.        ]
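
For readability, the probability vector can be paired with the class names. A minimal sketch using the predict array from the cell above:

for name, p in zip(classes, predict[0]):
    print('{}: {:.4f}'.format(name, p))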