Keras, How to Get the Output of Each Layer

You can easily get the output of any layer by using: model.layers[index].output
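
For example (a sketch; model is assumed to be an already-built Keras model and the index is arbitrary):

print(model.layers[3].output)        # symbolic output tensor of the fourth layer
print(model.layers[3].output.shape)  # its shape, e.g. (None, 26, 26, 64)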

For all layers use this:

import numpy as np
from keras import backend as K

inp = model.input                                   # input placeholder
outputs = [layer.output for layer in model.layers]  # all layer outputs
functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]  # evaluation functions

# Testing
test = np.random.random(input_shape)[np.newaxis, ...]  # input_shape is the model's input shape without the batch dim
layer_outs = [func([test, 1.]) for func in functors]
print(layer_outs)

Note: to simulate training-time behavior (e.g. active Dropout), pass 1. as the learning phase when computing layer_outs; for test-time behavior, pass 0.
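
For instance, evaluating the same functor in both phases makes the difference visible (a sketch reusing test and functors from above; it assumes the model contains a Dropout layer):

out_train = functors[-1]([test, 1.])  # learning phase 1: Dropout active
out_test = functors[-1]([test, 0.])   # learning phase 0: Dropout bypassed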

Edit: (based on comments)

K.function creates Theano/TensorFlow tensor functions, which are later used to get the output from the symbolic graph given the input.

Now, K.learning_phase() is required as an input because many Keras layers, such as Dropout/BatchNormalization, depend on it to change behavior between training and test time.

So if you remove the Dropout layer from your code, you can simply use:

import numpy as np
from keras import backend as K

inp = model.input                                   # input placeholder
outputs = [layer.output for layer in model.layers]  # all layer outputs
functors = [K.function([inp], [out]) for out in outputs]  # evaluation functions

# Testing
test = np.random.random(input_shape)[np.newaxis, ...]
layer_outs = [func([test]) for func in functors]
print(layer_outs)

Edit 2: More optimized

I just realized that the previous answer is not that well optimized: for each function evaluation the data is transferred from CPU to GPU memory, and the tensor calculations for the lower layers are repeated over and over.

Instead, this is a much better way, as you don't need multiple functions; a single function gives you the list of all outputs:

import numpy as np
from keras import backend as K

inp = model.input                                   # input placeholder
outputs = [layer.output for layer in model.layers]  # all layer outputs
functor = K.function([inp, K.learning_phase()], outputs)  # single evaluation function

# Testing
test = np.random.random(input_shape)[np.newaxis, ...]
layer_outs = functor([test, 1.])
print(layer_outs)
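
On TensorFlow 2.x with eager execution, the same single-pass idea can be written without K.function at all, as a multi-output Model (a sketch; model is assumed to be an already-built Keras model):

import numpy as np
import tensorflow as tf

extractor = tf.keras.Model(inputs=model.input,
                           outputs=[layer.output for layer in model.layers])
test = np.random.random((1,) + model.input_shape[1:])
layer_outs = extractor(test, training=True)  # training=True plays the role of learning_phase=1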

Get Keras middle-layer output in a Sequential model

Based on your first comment on my first post, I'm adding a new post rather than editing my existing answer, as it's already quite long. Anyway, your concern is reasonable. I was also struggling with a similar issue with the subclassing API, here. But it seems I didn't word my query well there, as people didn't treat it as a matter of concern.

Anyway, here is a more concise and precise answer, where we build a single model with the desired outputs: a single extractor rather than the previous two separate extractors, which brought extra computation overhead. Let's say our Sequential model is:

import tensorflow as tf

seq_model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu", name='conv1'),
        tf.keras.layers.Conv2D(32, 3, activation="relu", name='conv2'),
        tf.keras.layers.Conv2D(64, 3, activation="relu", name='conv3'),
        tf.keras.layers.Conv2D(128, 3, activation="relu", name='conv4'),
        tf.keras.layers.Conv2D(256, 3, activation="relu", name='conv5'),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation='softmax')
    ]
)

for l in seq_model.layers:
    print(l.name, l.output_shape)

conv1 (None, 30, 30, 16)
conv2 (None, 28, 28, 32)
conv3 (None, 26, 26, 64)
conv4 (None, 24, 24, 128)
conv5 (None, 22, 22, 256)
global_average_pooling2d_3 (None, 256)
dense_3 (None, 10)

And we want conv3 and conv5 from a single model. We can do that easily as follows:

check_model = tf.keras.models.Model(
    inputs=[seq_model.input],
    outputs=[seq_model.get_layer('conv3').output,
             seq_model.get_layer('conv5').output]
)

# check
for i in check_model(tf.keras.Input((32, 32, 3))):
    print(i.name, i.shape)

model_13/conv3/Relu:0 (None, 26, 26, 64)
model_13/conv5/Relu:0 (None, 22, 22, 256)

Nice: two feature outputs from the expected layers. Now, let's use these two outputs (as in my first post) to build a functional API model.

encoder_input = tf.keras.Input(shape=(32, 32, 3), name="img")

last_x = check_model(encoder_input)[0]
print(last_x.shape)  # (None, 26, 26, 64) - model_13/conv3/Relu:0

mid_x = check_model(encoder_input)[1]  # model_13/conv5/Relu:0 (None, 22, 22, 256)
mid_x = tf.keras.layers.Conv2D(32, kernel_size=3, strides=1)(mid_x)
print(mid_x.shape)  # (None, 20, 20, 32)

last_x = tf.keras.layers.GlobalMaxPooling2D()(last_x)
mid_x = tf.keras.layers.GlobalMaxPooling2D()(mid_x)
print(last_x.shape, mid_x.shape)  # (None, 64) (None, 32)

encoder_output = tf.keras.layers.Concatenate()([last_x, mid_x])
print(encoder_output.shape)  # (None, 96)

encoder_output = tf.keras.layers.Dense(100, activation='softmax')(encoder_output)
print(encoder_output.shape)  # (None, 100)

encoder = tf.keras.Model(encoder_input, encoder_output, name="encoder")

tf.keras.utils.plot_model(
    encoder,
    show_shapes=True,
    show_layer_names=True
)
(None, 26, 26, 64)
(None, 20, 20, 32)
(None, 64) (None, 32)
(None, 96)
(None, 100)

[Diagram of the encoder model produced by plot_model]
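
As a quick sanity check, the finished encoder can be run on random data (a sketch; the batch size of 4 is arbitrary):

import numpy as np

dummy = np.random.random((4, 32, 32, 3)).astype('float32')
preds = encoder.predict(dummy)
print(preds.shape)  # (4, 100)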

Can I get the output of all Keras layers?

This error basically tells you that you are trying to change the graph after compiling it. When you call compile, TF statically defines all operations. You have to move the code snippet where you define the functors above the compile call. Just swap the last lines with these ones:

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

tf.compat.v1.disable_eager_execution()
inp = model.input                                   # input placeholder
outputs = [layer.output for layer in model.layers]  # all layer outputs
functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]  # evaluation functions

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=1,
                    validation_data=(test_images, test_labels))

# Testing
input_shape = [1] + list(model.input_shape[1:])
test = np.random.random(input_shape)
layer_outs = [func([test, 1.]) for func in functors]
print(layer_outs)

How to get output values as arrays from each Keras layer

To extract the output of the i-th layer of a neural network you can use Keras backend functions.
Let's assume that you have trained a model on some data df:

from tensorflow.keras import backend as K

# create a Keras function to get the i-th layer's output
get_layer_output = K.function(inputs=model.layers[0].input, outputs=model.layers[i].output)

# extract the output
layer_output = get_layer_output(df)

You can find a practical application here. Hope this helps, otherwise let me know.
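
For instance, with dummy tabular data standing in as df (the 10-feature shape is an assumption):

import numpy as np

df = np.random.random((100, 10)).astype('float32')  # dummy stand-in for your data
layer_output = get_layer_output(df)
print(layer_output.shape)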

How to get the output of all layers of a Keras model?

UPDATE: You cannot fetch outputs of all layers, because "all layers" includes Input - and the error message is self-explanatory. Use:

outputs = get_all_outputs(model, input_data, 1)


Below should work for any model, Model or Sequential:

def get_all_outputs(model, input_data, learning_phase=1):
    outputs = [layer.output for layer in model.layers[1:]]  # exclude Input
    layers_fn = K.function([model.input, K.learning_phase()], outputs)
    return layers_fn([input_data, learning_phase])

Layer-level solutions:

def get_layer_outputs(model, layer_name, input_data, learning_phase=1):
    outputs = [layer.output for layer in model.layers if layer_name in layer.name]
    layers_fn = K.function([model.input, K.learning_phase()], outputs)
    return layers_fn([input_data, learning_phase])

# or, for passing in a layer directly
def get_layer_outputs(model, layer, input_data, learning_phase=1):
    layer_fn = K.function([model.input, K.learning_phase()], layer.output)
    return layer_fn([input_data, learning_phase])
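
Usage might look like this (the 'conv' substring and input_data are assumptions):

conv_outs = get_layer_outputs(model, 'conv', input_data, learning_phase=0)
for out in conv_outs:
    print(out.shape)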

Keras get the output of the last layer during training

You can train a new model using the predictions of a previously trained model, simply by stacking new layers on the desired output and setting trainable = False on the old layers. Here is a dummy example:

from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

# after autoencoder fitting

for i, l in enumerate(autoencoder.layers):
    autoencoder.layers[i].trainable = False
    print(l.name, l.trainable)

output_autoencoder = autoencoder.layers[10].output
x_new = Dense(32, activation='relu')(output_autoencoder)  # add a new layer, for example

new_model = Model(autoencoder.input, x_new)
new_model.compile('adam', 'mse')
new_model.summary()

I use the output of the last autoencoder layer as the input of the new blocks. We can merge it all by compiling a new model whose inputs are the same as the autoencoder's; this way we can use the training data for another algorithm without calling the prediction method.
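
Training then proceeds on the combined graph, updating only the new Dense head (a sketch; X_train and the 32-dimensional y_train targets are assumptions):

# the frozen autoencoder acts as a fixed feature extractor; only Dense(32) learns
history = new_model.fit(X_train, y_train, epochs=10, batch_size=32)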


