
Keras install error with Anaconda 5.0.0 for Python 3.5.4

Running “pip install keras” returns a TypeError (see the screenshot below).

The solution found on StackOverflow is to
1. install html5lib with conda install --force html5lib
2. install keras with pip install keras
[screenshot: keras_install_error]


Keras: MNIST classification

A Keras implementation of MNIST classification with batch normalization and leaky ReLU.


import numpy as np
import time
import pandas as pd
import matplotlib.pyplot as plt

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, LeakyReLU, Activation, Flatten
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import plot_model
from keras.backend import set_learning_phase, learning_phase

def build_classifier(input_shape, num_classes, num_pooling):
    """
    input_shape = (image_width, image_height, 1)
    """
    # settings
    kernel_size = (3, 3)
    pool_size = (2, 2)
    num_featuremaps = 32
    size_featurevector = 1024

    # Three steps to create a CNN
    # 1. Convolution
    # 2. Activation
    # 3. Pooling
    # Repeat Steps 1,2,3 for adding more hidden layers

    # 4. After that make a fully connected network
    # This fully connected network gives ability to the CNN
    # to classify the samples

    model = Sequential()

    # add convolution blocks num_pooling times for featuremap extraction
    for block in range(num_pooling):
        if block == 0:
            model.add(Conv2D(num_featuremaps, kernel_size=kernel_size, padding='same', activation='linear', input_shape=input_shape))
        else:
            num_featuremaps *= 2
            model.add(Conv2D(num_featuremaps, kernel_size=kernel_size, padding='same', activation='linear'))
        model.add(BatchNormalization())
        model.add(LeakyReLU(0.2))

        model.add(Conv2D(num_featuremaps, kernel_size=kernel_size, padding='same', activation='linear'))
        model.add(BatchNormalization())
        model.add(LeakyReLU(0.2))

        model.add(MaxPooling2D(pool_size=pool_size))

    model.add(Flatten())

    # add fully connected layers for classification
    for block in range(num_pooling):
        model.add(Dense(size_featurevector, activation='linear'))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))

    model.add(Dense(num_classes))

    model.add(Activation('softmax'))

    return model

 

def demo_load_model():
    num_pooling = 3
    input_shapes = [(64, 64, 1), (128, 128, 1), (256, 256, 1)]
    for input_shape in input_shapes:
        model = build_classifier(input_shape,
                                 100,
                                 num_pooling)

        filename = 'pointset_classifier_%s.png' % str(input_shape[0])
        plot_model(model,
                   to_file=filename,
                   show_shapes=True)

 

if __name__ == '__main__':
    """ settings
    """
    img_width = 28
    img_height = 28
    input_shape = (img_width, img_height, 1)
    num_classes = 10
    num_epoch = 3
    size_batch = 100

    """ load data
    """
    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    x_train = x_train.reshape(x_train.shape[0], img_width, img_height, 1)
    x_train = x_train.astype('float32')
    x_train /= 255.0
    y_train = np_utils.to_categorical(y_train, num_classes)

    x_test = x_test.reshape(x_test.shape[0], img_width, img_height, 1)
    x_test = x_test.astype('float32')
    x_test /= 255.0
    y_test = np_utils.to_categorical(y_test, num_classes)

    """ build neural network
    """
    num_pooling = 2
    set_learning_phase(1)
    model = build_classifier(input_shape,
                             num_classes,
                             num_pooling)

    filename = 'classifier_%s.png' % str(input_shape[0])
    plot_model(model,
               to_file=filename,
               show_shapes=True)

    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(),
                  metrics=['accuracy'])
    model.summary()
    """ train the model
    """
    time_begin = time.clock()
    history = model.fit(x_train, y_train,
                        validation_data=(x_test, y_test),
                        epochs=num_epoch,
                        batch_size=size_batch,
                        verbose=1)
    print('Time elapsed: %.0f' % (time.clock() - time_begin))

    """ valid the model
    """
    set_learning_phase(0)
    score = model.evaluate(x_test, y_test, verbose=0)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])
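
The History object returned by fit holds the per-epoch metrics, so the learning curves can be plotted with the matplotlib import already at the top of the script. A minimal sketch; the accuracy key name ('acc' vs. 'accuracy') depends on the Keras version:

    # plot the learning curves recorded by model.fit above
    acc_key = 'acc' if 'acc' in history.history else 'accuracy'
    plt.plot(history.history[acc_key], label='train accuracy')
    plt.plot(history.history['val_' + acc_key], label='test accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.savefig('learning_curve.png')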

Keras: error at the beginning of fit with Dropout

When a Keras model contains a Dropout layer, we must tell the model whether it is in training or test mode, because the network behaves differently in each case. The solution is mentioned in a GitHub issue and the Keras docs.

 

from keras.backend import set_learning_phase
...
set_learning_phase(1)
model = ...
model.compile(...)
model.summary()
model.fit(...)
set_learning_phase(0)
model.predict(...)
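
A minimal self-contained sketch of the same pattern, using a toy dense model and random data (all names and sizes here are illustrative only):

import numpy as np
from keras.backend import set_learning_phase
from keras.models import Sequential
from keras.layers import Dense, Dropout

x = np.random.rand(256, 20)
y = np.random.randint(0, 2, (256, 1))

set_learning_phase(1)   # training mode: Dropout is active
model = Sequential([Dense(32, activation='relu', input_shape=(20,)),
                    Dropout(0.5),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(x, y, epochs=1, batch_size=32, verbose=0)

set_learning_phase(0)   # test mode: Dropout becomes a pass-through
predictions = model.predict(x)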

Keras: plot_model returns pydot and graphviz related error on Windows

plot_model of Keras installed on Windows may return a pydot/graphviz related error such as “Graphviz’s executables are not found”. The cause is that the system cannot find the path to Graphviz’s executables, exactly as the error message says. The solution is explained here; a Python-side workaround is also sketched after the list.

  1. Install the Graphviz software (not the Python module)
  2. add Graphviz’s bin directory to the system PATH
  3. install the graphviz Python package, e.g. conda install graphviz
  4. install pydot, e.g. pip install git+https://github.com/nlhepler/pydot.git
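
If editing the system PATH is not an option, the path can also be extended from Python before calling plot_model. The install directory below is only an assumed default; adjust it to the actual Graphviz location:

import os

# assumed default install location; change to match your Graphviz version
graphviz_bin = r'C:\Program Files (x86)\Graphviz2.38\bin'
os.environ['PATH'] += os.pathsep + graphviz_bin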

 


Customize a model in Keras

Here’s a note on customizing a pre-defined model in Keras. The official documentation and a StackOverflow post show how to do it.

A Sequential model is built from a list of layers (layer instances), such as

model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])

Each layer has its input and output as tensor object and they are accessible as

for layer in model.layers:
    print(layer.input)
    print(layer.output)

1. Keep the first N layers
Feed the N-th layer’s output, model.layers[N].output, as the input of a new (N+1)-th layer.

x = Conv2D(...)(model.layers[N].output)
x = Conv2D(...)(x)
x = MaxPool2D(...)(x)
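
For example, with the Dense-based Sequential model defined above, keeping the first two layers and wrapping the result into a new Model could look like the following sketch (the layer sizes are only illustrative):

from keras.models import Model
from keras.layers import Dense

N = 1   # keep layers 0..N, i.e. Dense(32) and its 'relu' Activation
x = Dense(64, activation='relu')(model.layers[N].output)
x = Dense(10, activation='softmax')(x)
new_model = Model(inputs=model.input, outputs=x)
new_model.summary()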

2. Add a model to a middle layer of another model
Feed model1’s N-th layer’s output as the input of model2’s M-th layer.

x = model2.layers[M](model1.layers[N].output)
x = Conv2D(...)(x)
x = Conv2D(...)(x)
x = MaxPool2D(...)(x)
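
A self-contained toy sketch of the same idea; the two models, layer indices, and sizes below are purely illustrative:

from keras.models import Sequential, Model
from keras.layers import Dense, Activation

model1 = Sequential([Dense(32, input_shape=(784,)), Activation('relu'),
                     Dense(16), Activation('relu')])
model2 = Sequential([Dense(16, input_shape=(16,)), Activation('relu'),
                     Dense(10), Activation('softmax')])

# feed model1's layer N output into model2's layer M, then add a new head
N, M = 3, 0
x = model2.layers[M](model1.layers[N].output)
x = Dense(10, activation='softmax')(x)
combined = Model(inputs=model1.input, outputs=x)
combined.summary()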

Keras: multiple inputs & outputs

from keras.models import Model
from keras.layers import Input, Dense, concatenate

input_1 = Input(..., name='input_1')
x = Dense(...)(input_1)
x = Dense(...)(x)
...
output_1 = Dense(dim_output_1, ..., name='output_1')(x)

input_2 = Input(..., name='input_2')
x = concatenate([output_1, input_2])
x = Dense(...)(x)
x = Dense(...)(x)
x = Dense(...)(x)
output_2 = Dense(dim_output_2, ..., name='output_2')(x)

model = Model(inputs=[input_1, input_2],
              outputs=[output_1, output_2])
model.compile(...,
    loss={'output_1': 'binary_crossentropy',
          'output_2': 'binary_crossentropy'},
    loss_weights={'output_1': weight_1,
                  'output_2': weight_2})
model.fit({'input_1': data_input_1,
           'input_2': data_input_2},
          {'output_1': data_output_1,
           'output_2': data_output_2},
          ...)
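
A concrete, runnable sketch of the same structure with assumed toy dimensions, random data, and sigmoid heads (everything here that is not in the skeleton above is an illustrative choice):

import numpy as np
from keras.models import Model
from keras.layers import Input, Dense, concatenate

dim_input_1, dim_input_2 = 20, 5      # assumed input sizes
dim_output_1, dim_output_2 = 1, 1     # assumed output sizes

input_1 = Input(shape=(dim_input_1,), name='input_1')
x = Dense(32, activation='relu')(input_1)
output_1 = Dense(dim_output_1, activation='sigmoid', name='output_1')(x)

input_2 = Input(shape=(dim_input_2,), name='input_2')
x = concatenate([output_1, input_2])
x = Dense(16, activation='relu')(x)
output_2 = Dense(dim_output_2, activation='sigmoid', name='output_2')(x)

model = Model(inputs=[input_1, input_2], outputs=[output_1, output_2])
model.compile(optimizer='adam',
              loss={'output_1': 'binary_crossentropy',
                    'output_2': 'binary_crossentropy'},
              loss_weights={'output_1': 1.0, 'output_2': 0.5})

n = 128   # random data just to check that the graph trains
model.fit({'input_1': np.random.rand(n, dim_input_1),
           'input_2': np.random.rand(n, dim_input_2)},
          {'output_1': np.random.randint(0, 2, (n, dim_output_1)),
           'output_2': np.random.randint(0, 2, (n, dim_output_2))},
          epochs=1, batch_size=32)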