Quick TensorFlow with Python (CPU vs GPU)

TensorFlow is an open-source software library developed by Google for data flow programming. It is perhaps the most popular deep learning library today, used for tasks such as image recognition. This will be a quick walk-through using the CIFAR-10 dataset. CIFAR-10 consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images in the official split. This demonstration is based on the Convolutional Neural Network (CNN) example for CIFAR-10 on GitHub (written with the Keras API, which here runs on top of TensorFlow).

from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
import numpy as np
import os
batch_size = 32
num_classes = 10
epochs = 60
data_augmentation = True
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'
# The data, shuffled and split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

The class vectors are converted to binary class matrices (one-hot encoding). The Root Mean
Square Propagation (RMSprop) optimiser utilises the magnitude of recent gradients to normalise the current gradient update.
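To make the "normalise by recent gradient magnitude" idea concrete, here is a minimal NumPy sketch of the RMSprop update rule. This is illustrative only: the `rho` and `eps` defaults are typical values I have assumed, not taken from this post, and Keras handles all of this internally.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.0001, rho=0.9, eps=1e-7):
    # Keep a running average of the squared gradient magnitude...
    cache = rho * cache + (1 - rho) * grad ** 2
    # ...and divide the update by its square root, so parameters with
    # consistently large gradients take smaller effective steps.
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

w = np.array([1.0, -2.0])
cache = np.zeros_like(w)
grad = np.array([0.5, -0.5])
w, cache = rmsprop_step(w, grad, cache)
```

Note that after one step both parameters move by the same magnitude despite different signs: the normalisation makes the step size depend on the learning rate more than on the raw gradient scale.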

# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# Initiate RMSprop optimizer
opt = keras.optimizers.RMSprop(lr=0.0001, decay=1e-6)
# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
# Scale pixel values to [0, 1]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

Data augmentation is used to synthetically generate more data on the fly in order to improve generalisation. Training runs for only 60 epochs on an Nvidia Titan X GPU (12 GB), which features the Pascal architecture.
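As a rough illustration of what the generator does under the hood, the sketch below applies two of the configured transforms by hand in NumPy: a horizontal flip and a one-pixel width shift on a toy array. This is a simplification; `ImageDataGenerator` itself samples random shift amounts and applies them per batch.

```python
import numpy as np

# Toy "image" in H x W x C layout, matching CIFAR-10's channel ordering.
img = np.arange(2 * 4 * 3, dtype='float32').reshape(2, 4, 3)

# horizontal_flip=True: mirror the image along the width axis.
flipped = img[:, ::-1, :]

# width_shift_range: crudely approximated here as a one-pixel roll
# along the width axis (the real generator shifts by a random fraction).
shifted = np.roll(img, 1, axis=1)

print(flipped.shape, shifted.shape)  # both transforms preserve the shape
```

Each augmented variant is a plausible new training example with the same label, which is why augmentation effectively enlarges the dataset without collecting new images.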

if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              validation_data=(x_test, y_test),
              shuffle=True)
else:
    print('Using real-time data augmentation.')
    # This will do preprocessing and realtime data augmentation:
    datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=0,  # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,  # randomly flip images horizontally
        vertical_flip=False)  # do not flip images vertically
    # Compute quantities required for feature-wise normalization
    # (std, mean, and principal components if ZCA whitening is applied).
    datagen.fit(x_train)
    # Fit the model on the batches generated by datagen.flow().
    model.fit_generator(datagen.flow(x_train, y_train,
                                     batch_size=batch_size),
                        steps_per_epoch=int(np.ceil(x_train.shape[0] / float(batch_size))),
                        epochs=epochs,
                        validation_data=(x_test, y_test))
# Save model and weights
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])

Each epoch completes in about 12 seconds, with test accuracy reaching almost 79% by the 60th epoch; the model can of course be trained for longer. Compare this with a sixteen-core CPU, which averages about 80 seconds per epoch for the same task.

Other optimisers, such as Adaptive Gradient (AdaGrad) and Adaptive Moment Estimation (Adam), can also be experimented with.
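For intuition on how those two differ from RMSprop, here is a hedged NumPy sketch of their update rules. The hyperparameter defaults are typical textbook values, not taken from this post; in Keras you would simply pass a different optimizer object to `model.compile` rather than writing these by hand.

```python
import numpy as np

def adagrad_step(w, grad, acc, lr=0.01, eps=1e-7):
    # AdaGrad accumulates ALL past squared gradients, so each
    # parameter's effective learning rate only ever shrinks.
    acc = acc + grad ** 2
    w = w - lr * grad / (np.sqrt(acc) + eps)
    return w, acc

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-7):
    # Adam keeps decaying averages of the gradient (first moment)
    # and the squared gradient (second moment)...
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    # ...with bias correction for the zero-initialised averages.
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0])
g = np.array([0.5])
w_ag, acc = adagrad_step(w, g, np.zeros_like(w))
w_ad, m, v = adam_step(w, g, np.zeros_like(w), np.zeros_like(w), t=1)
```

Unlike AdaGrad's ever-growing accumulator, Adam's decaying averages let the effective step size recover, which is one reason Adam tends to be the more popular default.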
