This material is based on the online course "BASIC AI (32 hours)" taught by instructor Park Seong-ju, part of the 2021 employer-sponsored vocational training program run by the Korea Artificial Intelligence Association (한국인공지능협회).
Loading the MNIST dataset
In [1]:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.utils import plot_model  # keras.utils.vis_utils is the old import path
# 1. Load the MNIST dataset
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
Preprocessing the data
In [2]:
# 2. Preprocess the data
train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))
train_images, test_images = train_images / 255.0, test_images / 255.0
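The reshape adds the trailing channel axis that `Conv2D` expects, and the division scales pixel values from [0, 255] down to [0, 1]. A minimal sketch of the same transformation, with a small NumPy dummy array standing in for the real dataset:

```python
import numpy as np

# Dummy stand-in for train_images: 4 grayscale 28x28 images with uint8 pixels
images = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

# Add the channel axis expected by Conv2D: (N, 28, 28) -> (N, 28, 28, 1)
images = images.reshape((-1, 28, 28, 1))

# Scale to floats in [0, 1]
images = images / 255.0

print(images.shape)  # (4, 28, 28, 1)
```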
Building the convolutional network
In [3]:
# 3. Build the convolutional network
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                    Output Shape              Param #
=================================================================
conv2d (Conv2D)                 (None, 26, 26, 32)        320
max_pooling2d (MaxPooling2D)    (None, 13, 13, 32)        0
conv2d_1 (Conv2D)               (None, 11, 11, 64)        18496
max_pooling2d_1 (MaxPooling2D)  (None, 5, 5, 64)          0
conv2d_2 (Conv2D)               (None, 3, 3, 64)          36928
max_pooling2d_2 (MaxPooling2D)  (None, 1, 1, 64)          0
=================================================================
Total params: 55,744
Trainable params: 55,744
Non-trainable params: 0
_________________________________________________________________
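The shapes and parameter counts in the summary can be reproduced by hand: a "valid" 3x3 convolution shrinks each spatial dimension by 2, a 2x2 max pool halves it (floor division), and a `Conv2D` layer has `(kh*kw*in_channels + 1)*filters` parameters (one bias per filter). A quick check in plain Python (the helper names here are mine, not Keras API):

```python
def conv_out(size, kernel=3):
    # 'valid' padding, stride 1: output = input - kernel + 1
    return size - kernel + 1

def pool_out(size, pool=2):
    # 2x2 max pooling with stride 2: floor(input / 2)
    return size // pool

def conv_params(in_ch, filters, kernel=3):
    # weights for each filter plus one bias per filter
    return (kernel * kernel * in_ch + 1) * filters

s = 28
for _ in range(3):
    s = conv_out(s)   # 26, 11, 3
    s = pool_out(s)   # 13, 5, 1
print(s)  # 1 -> the final feature map is 1x1x64

print(conv_params(1, 32))   # 320
print(conv_params(32, 64))  # 18496
print(conv_params(64, 64))  # 36928
print(conv_params(1, 32) + conv_params(32, 64) + conv_params(64, 64))  # 55744
```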
Adding Dense layers
In [4]:
# 4. Add Dense layers
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                    Output Shape              Param #
=================================================================
conv2d (Conv2D)                 (None, 26, 26, 32)        320
max_pooling2d (MaxPooling2D)    (None, 13, 13, 32)        0
conv2d_1 (Conv2D)               (None, 11, 11, 64)        18496
max_pooling2d_1 (MaxPooling2D)  (None, 5, 5, 64)          0
conv2d_2 (Conv2D)               (None, 3, 3, 64)          36928
max_pooling2d_2 (MaxPooling2D)  (None, 1, 1, 64)          0
flatten (Flatten)               (None, 64)                0
dense (Dense)                   (None, 64)                4160
dense_1 (Dense)                 (None, 10)                650
=================================================================
Total params: 60,554
Trainable params: 60,554
Non-trainable params: 0
_________________________________________________________________
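The two Dense layers account for the extra 4,810 parameters: a Dense layer has `(inputs + 1) * units` weights (the `+ 1` is the bias). Checking against the summary:

```python
def dense_params(inputs, units):
    # weight matrix plus one bias per unit
    return (inputs + 1) * units

# Flatten turns the 1x1x64 feature map into 64 inputs for the first Dense layer
print(dense_params(64, 64))  # 4160
print(dense_params(64, 10))  # 650
print(55744 + dense_params(64, 64) + dense_params(64, 10))  # 60554
```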
In [5]:
# Visualize the model
plot_model(model, show_shapes=True, dpi=80)
Out[5]:
Compiling the model
In [6]:
# 5. Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
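`sparse_categorical_crossentropy` is used because the MNIST labels are integer class ids (0–9) rather than one-hot vectors; for a single sample the loss is simply `-log` of the probability the model assigned to the true class. A minimal NumPy sketch with a made-up softmax output:

```python
import numpy as np

# A made-up softmax output over 4 classes, plus an integer label as in train_labels
probs = np.array([0.05, 0.05, 0.8, 0.1])
label = 2

# Sparse categorical cross-entropy for one sample: -log(p[true class])
loss = -np.log(probs[label])
print(round(loss, 4))  # -log(0.8) = 0.2231
```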
Training the model
In [7]:
# 6. Train the model
history = model.fit(train_images, train_labels, batch_size=16, epochs=10,
                    verbose=1, validation_data=(test_images, test_labels))
Epoch 1/10
3750/3750 [==============================] - 22s 6ms/step - loss: 0.1819 - accuracy: 0.9430 - val_loss: 0.0587 - val_accuracy: 0.9824
Epoch 2/10
3750/3750 [==============================] - 22s 6ms/step - loss: 0.0666 - accuracy: 0.9789 - val_loss: 0.0550 - val_accuracy: 0.9823
Epoch 3/10
3750/3750 [==============================] - 22s 6ms/step - loss: 0.0471 - accuracy: 0.9851 - val_loss: 0.0556 - val_accuracy: 0.9830
Epoch 4/10
3750/3750 [==============================] - 22s 6ms/step - loss: 0.0366 - accuracy: 0.9885 - val_loss: 0.0622 - val_accuracy: 0.9825
Epoch 5/10
3750/3750 [==============================] - 21s 6ms/step - loss: 0.0290 - accuracy: 0.9905 - val_loss: 0.0417 - val_accuracy: 0.9871
Epoch 6/10
3750/3750 [==============================] - 21s 6ms/step - loss: 0.0247 - accuracy: 0.9916 - val_loss: 0.0491 - val_accuracy: 0.9858
Epoch 7/10
3750/3750 [==============================] - 21s 6ms/step - loss: 0.0201 - accuracy: 0.9936 - val_loss: 0.0480 - val_accuracy: 0.9875
Epoch 8/10
3750/3750 [==============================] - 22s 6ms/step - loss: 0.0189 - accuracy: 0.9937 - val_loss: 0.0802 - val_accuracy: 0.9818
Epoch 9/10
3750/3750 [==============================] - 22s 6ms/step - loss: 0.0172 - accuracy: 0.9944 - val_loss: 0.0497 - val_accuracy: 0.9876
Epoch 10/10
3750/3750 [==============================] - 22s 6ms/step - loss: 0.0154 - accuracy: 0.9949 - val_loss: 0.0544 - val_accuracy: 0.9868
In [8]:
print(history.history.keys())
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
In [9]:
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
Evaluating the model
In [10]:
# 7. Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
313/313 - 1s - loss: 0.0544 - accuracy: 0.9868 - 913ms/epoch - 3ms/step
In [11]:
print('Test accuracy : ', acc)
Test accuracy :  0.9868000149726868
test_images: testing the model on an actual digit
In [12]:
plt.imshow(test_images[587],cmap='Greys')
plt.show()
In [13]:
print(test_labels[587])
6
Predicting results
In [14]:
result = model.predict(test_images)
In [15]:
print(result[587])
[1.1671032e-09 1.7788943e-12 2.1471147e-09 3.5404605e-12 2.2394228e-07
1.0474565e-09 9.9999976e-01 4.9988434e-12 3.5337108e-10 1.5825743e-11]
In [16]:
import numpy as np
result_num = np.argmax(result[587])
print("Predicted value = %d" % result_num)
Predicted value = 6
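`np.argmax` simply returns the index of the largest entry, and each index corresponds to a digit class. For the probability vector printed above, index 6 holds ~0.9999998, so the prediction is 6:

```python
import numpy as np

# The softmax output printed above for test image 587
probs = np.array([1.1671032e-09, 1.7788943e-12, 2.1471147e-09, 3.5404605e-12,
                  2.2394228e-07, 1.0474565e-09, 9.9999976e-01, 4.9988434e-12,
                  3.5337108e-10, 1.5825743e-11])

print(np.argmax(probs))  # 6 -- index of the largest probability
```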
Let's process a real image ourselves and predict the result
In [17]:
# Library for loading images
# Bundled with many Python distributions since 2.7; otherwise: pip install pillow
from PIL import Image, ImageOps
In [18]:
image = Image.open('./test_5.jpg')
In [19]:
# Normalization
size = (28, 28)
image = ImageOps.fit(image, size, Image.LANCZOS)  # Image.ANTIALIAS is deprecated in recent Pillow
image_array = np.asarray(image)
# Note: this scales pixels to roughly [-1, 1], unlike the [0, 1] scaling used in training
normal_image_array = (image_array.astype(np.float32) / 127.0) - 1
data = normal_image_array
real_data = data.reshape(-1, 28, 28, 1)
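The pipeline above can be sketched without Pillow, with a NumPy dummy array standing in for the loaded image. Note that `(x / 127.0) - 1` maps [0, 255] onto roughly [-1, 1.008], which differs from the [0, 1] scaling used when training the model; matching the training preprocessing (`x / 255.0`) would be the more consistent choice:

```python
import numpy as np

# Dummy 28x28 grayscale image standing in for the Pillow result
image_array = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# (x / 127.0) - 1 maps [0, 255] onto roughly [-1, 1.008]
normal = (image_array.astype(np.float32) / 127.0) - 1

# Add batch and channel axes: (28, 28) -> (1, 28, 28, 1)
real_data = normal.reshape(-1, 28, 28, 1)
print(real_data.shape)  # (1, 28, 28, 1)
```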
In [20]:
plt.imshow(data, cmap='Greys')
plt.show()
In [21]:
real_result = model.predict(real_data)
print(real_result)
[[5.5553396e-06 1.2178321e-02 2.7646674e-02 3.3765911e-05 2.2704608e-03
9.5192426e-01 4.4796267e-03 7.8509147e-05 9.0447837e-04 4.7833423e-04]]
In [22]:
# Verify against the real image
real_num = np.argmax(real_result)
print("Predicted digit for the Photoshop image = %d" % real_num)
Predicted digit for the Photoshop image = 5