What Is a "Python" Layer in Caffe

What is a "Python" layer in caffe?

Prune's and Bharat's answers give the overall purpose of a "Python" layer: a general-purpose layer implemented in Python rather than in C++.

I intend this answer to serve as a tutorial for using "Python" layers.


A Tutorial for the "Python" Layer

What is a "Python" layer?

Please see the excellent answers of Prune and Bharat.

Prerequisite

In order to use 'Python" layer you need to compile caffe with flag

WITH_PYTHON_LAYER := 1

set in 'Makefile.config'.

How to implement a "Python" layer?

A "Python" layer should be implemented as a python class derived from caffe.Layer base class. This class must have the following four methods:

import caffe

class my_py_layer(caffe.Layer):
    def setup(self, bottom, top):
        pass

    def reshape(self, bottom, top):
        pass

    def forward(self, bottom, top):
        pass

    def backward(self, top, propagate_down, bottom):
        pass

What are these methods?

def setup(self, bottom, top): This method is called once when caffe builds the net. This function should check that the number of inputs (len(bottom)) and the number of outputs (len(top)) are as expected.

You should also allocate the layer's internal parameters here (i.e., self.blobs.add_blob(...)); see this thread for more information.

This method has access to self.param_str - a string passed from the prototxt to the layer. See this thread for more information.
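
For example, a minimal sketch of a setup method (not from the original answer): it checks the wiring and parses param_str, assuming here that param_str holds a Python dict literal such as "{'scale': 2.0}"; the scale parameter is purely hypothetical.

import ast  # used below to parse param_str

    def setup(self, bottom, top):
        # check the layer is wired with the expected number of inputs/outputs
        if len(bottom) != 1:
            raise Exception("expecting a single bottom")
        if len(top) != 1:
            raise Exception("expecting a single top")
        # parse optional parameters passed via param_str in the prototxt
        params = ast.literal_eval(self.param_str) if self.param_str else {}
        self.scale = params.get('scale', 1.0)  # hypothetical parameter with a default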

def reshape(self, bottom, top): This method is called whenever caffe reshapes the net. This function should allocate the outputs by reshaping each of the top blobs. The outputs' shapes are usually related to the bottoms' shapes.
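
For instance, a minimal sketch for a layer whose output has the same shape as its (first) input:

    def reshape(self, bottom, top):
        # output shape follows the input shape in this simple case
        top[0].reshape(*bottom[0].data.shape)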

def forward(self, bottom, top): Implements the forward pass from bottom to top.

def backward(self, top, propagate_down, bottom): This method implements backpropagation: it propagates the gradients from top to bottom. propagate_down is a Boolean vector of length len(bottom) indicating to which of the bottoms the gradient should be propagated.
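
To make this concrete, here is a minimal sketch (not part of the original answer) of forward and backward for a hypothetical layer that multiplies its input by a constant self.scale (set, e.g., in setup):

    def forward(self, bottom, top):
        top[0].data[...] = self.scale * bottom[0].data  # y = scale * x

    def backward(self, top, propagate_down, bottom):
        if propagate_down[0]:
            # chain rule for y = scale * x: dL/dx = scale * dL/dy
            bottom[0].diff[...] = self.scale * top[0].diff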

You can find some more information about the bottom and top inputs in this post.

Examples
You can see some examples of simplified python layers here, here and here.

An example of a "moving average" output layer can be found here.

Trainable parameters
"Python" layer can have trainable parameters (like "Conv", "InnerProduct", etc.).

You can find more information on adding trainable parameters in this thread and this one. There's also a very simplified example in caffe git.

How to add a "Python" layer in a prototxt?

See Bharat's answer for details.

You need to add the following to your prototxt:

layer {
  name: 'rpn-data'
  type: 'Python'
  bottom: 'rpn_cls_score'
  bottom: 'gt_boxes'
  bottom: 'im_info'
  bottom: 'data'
  top: 'rpn_labels'
  top: 'rpn_bbox_targets'
  top: 'rpn_bbox_inside_weights'
  top: 'rpn_bbox_outside_weights'
  python_param {
    module: 'rpn.anchor_target_layer'  # python module name where your implementation is
    layer: 'AnchorTargetLayer'  # the name of the class implementation
    param_str: "'feat_stride': 16"  # optional parameters to the layer
  }
}

How to add a "Python" layer using pythonic NetSpec interface?

It's very simple:

import caffe
from caffe import layers as L

ns = caffe.NetSpec()
# define layers here...
ns.rpn_labels, ns.rpn_bbox_targets, \
    ns.rpn_bbox_inside_weights, ns.rpn_bbox_outside_weights = \
    L.Python(ns.rpn_cls_score, ns.gt_boxes, ns.im_info, ns.data,
             name='rpn-data',
             ntop=4,  # tell caffe to expect four output blobs
             python_param={'module': 'rpn.anchor_target_layer',
                           'layer': 'AnchorTargetLayer',
                           'param_str': '"\'feat_stride\': 16"'})
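
Once the net is fully defined, you would typically serialize ns to a prototxt file, for example (the file name here is arbitrary):

with open('rpn_data.prototxt', 'w') as f:
    f.write(str(ns.to_proto()))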

How to use a net with a "Python" layer?

Invoking Python code from caffe is nothing you need to worry about. Caffe uses the Boost.Python API to call Python code from compiled C++.

What do you need to do?

Make sure the Python module implementing your layer is in $PYTHONPATH so that when caffe imports it, it can be found.

For instance, if your module my_python_layer.py is in /path/to/my_python_layer.py then

PYTHONPATH=/path/to:$PYTHONPATH $CAFFE_ROOT/build/tools/caffe train -solver my_solver.prototxt

should work just fine.
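
If you are using the net from Python rather than the command line, you can achieve the same by extending sys.path before constructing the net (a sketch; the prototxt name is hypothetical):

import sys
sys.path.insert(0, '/path/to')  # directory containing my_python_layer.py

import caffe
net = caffe.Net('net_with_python_layer.prototxt', caffe.TEST)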

How to test my layer?

You should always test your layer before putting it to use.
Testing the forward function is entirely up to you, as each layer has a different functionality.

Testing the backward method is easy: since this method only implements the gradient of forward, it can be tested numerically and automatically!

Check out the test_gradient_for_python_layer testing utility:

import numpy as np
from test_gradient_for_python_layer import test_gradient_for_python_layer

# set the inputs
input_names_and_values = [('in_cont', np.random.randn(3, 4)),
                          ('in_binary', np.random.binomial(1, 0.4, (3, 1)))]
output_names = ['out1', 'out2']
py_module = 'folder.my_layer_module_name'
py_layer = 'my_layer_class_name'
param_str = 'some params'
propagate_down = [True, False]

# call the test
test_gradient_for_python_layer(input_names_and_values, output_names,
                               py_module, py_layer, param_str,
                               propagate_down)

# you are done!

Special Notice

It is worthwhile noting that Python code runs on the CPU only. Thus, if you plan to have a "Python" layer in the middle of your net, you will see a significant degradation in performance when using a GPU. This happens because caffe needs to copy blobs from the GPU to the CPU before calling the Python layer, and then copy them back to the GPU to proceed with the forward/backward pass.

This degradation is far less significant if the python layer is either an input layer or the topmost loss layer.

Update: On Sep 19th, 2017 PR #5904 was merged into master. This PR exposes GPU pointers of blobs via the python interface.
You may access blob._gpu_data_ptr and blob._gpu_diff_ptr directly from python at your own risk.

What's the difference between net.layers.blobs and net.params in Caffe

You are mixing up the net's trainable parameters (stored in net.params) and the data flowing through the net (stored in net.blobs):

Once you are done training the model, net.params are fixed and will not change. However, for each new input example you feed to the net, net.blobs will store the different layers' responses to that particular input.
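
A small sketch of the difference (the layer and file names here are hypothetical):

import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# net.params: trainable parameters, indexed by layer name; fixed after training
conv1_weights = net.params['conv1'][0].data  # weights of layer 'conv1'
conv1_bias = net.params['conv1'][1].data     # bias of layer 'conv1'

# net.blobs: layer responses, indexed by blob name; change with every input
net.blobs['data'].data[...] = np.random.randn(*net.blobs['data'].data.shape)
net.forward()
conv1_response = net.blobs['conv1'].data  # response to this particular input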

How should I use blobs in a Caffe Python layer, and when does their training take place?

You can add as many internal parameters as you wish, and these parameters (Blobs) may have whatever shape you want them to be.

To add Blobs (in your layer's class):

def setup(self, bottom, top):
    self.blobs.add_blob(2)       # add two blobs
    self.blobs[0].reshape(3, 4)  # first blob is 2D
    self.blobs[0].data[...] = 0  # init
    self.blobs[1].reshape(10)    # second blob is 1D with 10 elements
    self.blobs[1].data[...] = 1  # init to 1

What is the "meaning" of each parameter and how to organize them in self.blobs is entirely up to you.

How are trainable parameters being "trained"?

This is one of the cool things about caffe (and other DNN toolkits as well): you don't need to worry about it!

What do you need to do? All you need to do is compute the gradient of the loss w.r.t. the parameters and store it in self.blobs[i].diff. Once the gradients are computed, caffe's internals take care of updating the parameters according to the gradients/learning rate/momentum/update policy, etc.

So,

You must have a non-trivial backward method for your layer

def backward(self, top, propagate_down, bottom):
    self.blobs[0].diff[...] = ...  # gradient of the loss w.r.t. the first parameter blob
    self.blobs[1].diff[...] = ...  # gradient of the loss w.r.t. the second parameter blob

You might want to test your implementation of the layer once you complete it.

Have a look at this PR for a numerical test of the gradients.

How to create a Data layer in caffe?

It seems like a "DummyData" layer will do the job for you:

layer {
  type: "DummyData"
  name: "hidden_seed"
  top: "hidden_seed"
  dummy_data_param {
    shape { dim: 300 dim: 250 }                 # your desired data shape
    data_filler { type: "constant" value: 0 }   # fill with all zeros
  }
}
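
If you are building the net with the pythonic NetSpec interface, a rough sketch of the equivalent definition (assuming NetSpec's usual parameter nesting) is:

import caffe
from caffe import layers as L

ns = caffe.NetSpec()
ns.hidden_seed = L.DummyData(name='hidden_seed',
                             dummy_data_param=dict(
                                 shape=dict(dim=[300, 250]),                   # desired data shape
                                 data_filler=dict(type='constant', value=0)))  # all zeros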

caffe `Python` layer not found?

You can find out what layer types caffe has from Python simply by examining caffe.layer_type_list(). For example, if you actually have a "Python" layer, then

list(caffe.layer_type_list()).index('Python')

should return the index of its name in the layer types list.
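
So a quick way to verify that your caffe build actually has the "Python" layer (i.e., was compiled with WITH_PYTHON_LAYER := 1) is something like this sketch:

import caffe

if 'Python' not in list(caffe.layer_type_list()):
    raise Exception('caffe was not compiled with WITH_PYTHON_LAYER := 1')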

As for L.Python() - this caffe.NetSpec() interface is used to programmatically write a net prototxt, and at the writing stage layer types are not checked. You can actually write whatever layer you want:

L.YouDontThinkTheyNameALayerLikeThis()

is totally cool. Even converting it to prototxt:

print "{}".format(L.YouDontThinkTheyNameALayerLikeThis().to_proto())

actually results in this:

layer {
  name: "YouDontThinkTheyNameALayerLikeThis1"
  type: "YouDontThinkTheyNameALayerLikeThis"
  top: "YouDontThinkTheyNameALayerLikeThis1"
}

You'll get an error message once you try to run this "net" using caffe...

How to time each layer (including Python layers) with the pycaffe interface

Just add timing lines to ForwardFromTo() in net.cpp.
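
Alternatively, if you only need rough numbers, you can time each layer from Python by running the forward pass one layer at a time. This is only a sketch (the prototxt name is hypothetical), and since GPU calls are asynchronous the timings are approximate:

import time
import caffe

net = caffe.Net('net.prototxt', caffe.TEST)
for i, name in enumerate(net._layer_names):
    tic = time.time()
    net._forward(i, i)  # ForwardFromTo for this single layer
    print('{:<30s} {:.3f} ms'.format(name, 1000 * (time.time() - tic)))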

Caffe python layer backward pass implementation

Your function is quite simple, if you are willing to ignore the np.around:

[Image: the function, ignoring np.around, is f(x) = 255 * (x - x_min) / (x_max - x_min)]

For x = x_min and for x = x_max the derivative is zero; for all other x the derivative is 255/(x_max - x_min).

This can be implemented by

import numpy as np

def forward(self, bottom, top):
    in_ = bottom[0].data
    self.x_min = in_.min(axis=(0, 1), keepdims=True)  # cache min/max for backward
    self.x_max = in_.max(axis=(0, 1), keepdims=True)
    top[0].data[...] = 255 * ((in_ - self.x_min) / (self.x_max - self.x_min))

def backward(self, top, propagate_down, bottom):
    in_ = bottom[0].data
    b, c = in_.shape[:2]
    diff = np.tile(255 / (self.x_max - self.x_min), (b, c, 1, 1))
    diff[in_ == self.x_min] = 0
    diff[in_ == self.x_max] = 0
    bottom[0].diff[...] = diff * top[0].diff

Do not forget to test this numerically. This can be done, e.g., using test_gradient_for_python_layer.


