Sobel Derivative in OpenCV

This code snippet demonstrates how to compute the Sobel 3x3 derivatives by applying the Sobel kernels to the image. You can easily extend it to other kernel sizes by passing the kernel radius as input to my_sobel and creating the appropriate kernel (one way to obtain the larger kernels is sketched right after the code).

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;

void my_sobel(const Mat1b& src, Mat1s& dst, int direction)
{
    Mat1s kernel;
    int radius = 0;

    // Create the kernel
    if (direction == 0)
    {
        // Sobel 3x3 X kernel
        kernel = (Mat1s(3, 3) << -1, 0, +1, -2, 0, +2, -1, 0, +1);
        radius = 1;
    }
    else
    {
        // Sobel 3x3 Y kernel
        kernel = (Mat1s(3, 3) << -1, -2, -1, 0, 0, 0, +1, +2, +1);
        radius = 1;
    }

    // Handle border issues
    Mat1b _src;
    copyMakeBorder(src, _src, radius, radius, radius, radius, BORDER_REFLECT101);

    // Create output matrix
    dst.create(src.rows, src.cols);

    // Filtering loop: iterate on the image
    // (the kernel is applied by correlation, i.e. without mirroring, matching OpenCV's convention)
    for (int r = radius; r < _src.rows - radius; ++r)
    {
        for (int c = radius; c < _src.cols - radius; ++c)
        {
            short s = 0;

            // Iterate on kernel
            for (int i = -radius; i <= radius; ++i)
            {
                for (int j = -radius; j <= radius; ++j)
                {
                    s += _src(r + i, c + j) * kernel(i + radius, j + radius);
                }
            }
            dst(r - radius, c - radius) = s;
        }
    }
}

int main(void)
{
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Compute custom Sobel 3x3 derivatives
    Mat1s sx, sy;
    my_sobel(img, sx, 0);
    my_sobel(img, sy, 1);

    // Combine the two derivatives into an edge map (|sx - sy|, saturated to 8 bits)
    Mat1b edges_L1;
    absdiff(sx, sy, edges_L1);

    // Check results against OpenCV
    Mat1s cvsx, cvsy;
    Sobel(img, cvsx, CV_16S, 1, 0);
    Sobel(img, cvsy, CV_16S, 0, 1);
    Mat1b cvedges_L1;
    absdiff(cvsx, cvsy, cvedges_L1);

    Mat diff_L1;
    absdiff(edges_L1, cvedges_L1, diff_L1);

    cout << "Number of different pixels: " << countNonZero(diff_L1) << endl;

    return 0;
}
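
The introduction above mentions extending my_sobel to other kernel sizes. One hedged way to obtain the corresponding coefficients, sketched here in Python with the cv2 bindings (the values can then be hard-coded into the C++ kernel), is to ask OpenCV itself for the separable parts and take their outer product:

import cv2
import numpy as np

# Separable parts of the unnormalized 5x5 x-derivative Sobel kernel
kx, ky = cv2.getDerivKernels(dx=1, dy=0, ksize=5, normalize=False)

# Full 2-D kernel = outer product of the column (smoothing) and row (derivative) parts
kernel_5x5_x = np.outer(ky, kx).astype(np.int32)
print(kernel_5x5_x)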

What is the Sobel operator's order?

Order one estimates the first derivative in the given direction.
Order two estimates the second derivative (the rate of change of the rate of change of intensity).
And so on.

Think of position (intensity), speed (order = 1), acceleration (order = 2), and jerk (the rate of change of acceleration, order = 3).

Higher-order derivatives are usually not very useful, especially because of the discretization of the image and the limited-size stencils that image operations usually work with.

Applying the first-order Sobel twice should theoretically give you the second-order Sobel, but in practice it does not, because of the discretization of both the image and the Sobel operator.
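
As a quick illustration of that last point, here is a minimal Python sketch (the image path is a placeholder, as in the C++ example above) comparing the direct second-order Sobel with the first-order operator applied twice:

import cv2
import numpy as np

img = cv2.imread("path_to_image", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Second-order horizontal derivative computed directly (dx = 2)
sxx_direct = cv2.Sobel(img, cv2.CV_64F, 2, 0, ksize=3)

# First-order horizontal derivative applied twice
sx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sxx_twice = cv2.Sobel(sx, cv2.CV_64F, 1, 0, ksize=3)

# The two results generally differ because of the discretization
print(np.abs(sxx_direct - sxx_twice).max())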

How to normalize the OpenCV Sobel filter for a given kernel size

I think this snippet will do the trick:

import cv2

def gradient(img, dx, dy, ksize):
    # getDerivKernels returns the separable (x, y) kernel pair; normalize=True scales the coefficients
    deriv_filter = cv2.getDerivKernels(dx=dx, dy=dy, ksize=ksize, normalize=True)
    return cv2.sepFilter2D(img, -1, deriv_filter[0], deriv_filter[1])
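
To see what normalize=True actually does, here is a small check (a sketch, assuming the cv2 bindings): for the 3x3 first-order x-derivative the overall scale factor comes out as 8, which matches the /8 normalization of the 3x3 kernel discussed in the next answer.

import cv2
import numpy as np

kx_n, ky_n = cv2.getDerivKernels(dx=1, dy=0, ksize=3, normalize=True)
kx_u, ky_u = cv2.getDerivKernels(dx=1, dy=0, ksize=3, normalize=False)

full_n = np.outer(ky_n, kx_n)   # normalized 2-D Sobel x kernel
full_u = np.outer(ky_u, kx_u)   # unnormalized 2-D Sobel x kernel

# Ratio of total absolute weight between the two versions
print(np.abs(full_u).sum() / np.abs(full_n).sum())   # 8.0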

What is the Sobel operator?

The Sobel operator estimates the derivative.

The correct definition of the Sobel operator to estimate the horizontal derivative is:

| 1  0  -1 |
| 2  0  -2 |  / 8
| 1  0  -1 |

The division by 8 is important to get the right magnitude. People often leave it out because they don't care about the actual derivative; they care about comparing the gradient at different places in the same image. Multiplying everything by 8 makes no difference there, so leaving out the /8 keeps things simple.
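
One way to see why the /8 matters is a minimal Python sketch (the ramp image is just an illustrative assumption): on an image whose intensity increases by exactly 1 per pixel along x, the true derivative is 1 everywhere, but the unnormalized kernel reports 8.

import cv2
import numpy as np

# Intensity increases by exactly 1 per pixel along x, so the true d/dx is 1 everywhere
ramp = np.tile(np.arange(10, dtype=np.float64), (10, 1))

# Sobel x kernel as OpenCV applies it (by correlation), without the /8
k = np.array([[-1, 0, 1],
              [-2, 0, 2],
              [-1, 0, 1]], dtype=np.float64)

dx = cv2.filter2D(ramp, -1, k)   # filter2D applies the kernel by correlation
print(dx[5, 5])                  # 8.0 without the normalization
print(dx[5, 5] / 8)              # 1.0, the true slope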

You will see the kernel defined with the inverted signs in some places. These are cases where the kernel is applied by correlation instead of convolution (the two differ by a mirroring of the kernel), as is the case in OpenCV. They can also be cases where people copy things without understanding them, resulting in a gradient with the wrong sign.

But then again, the Sobel operator is mostly applied to obtain the gradient magnitude (the square root of the sum of the squares of the horizontal and vertical derivatives). In this case, reversing the signs doesn't matter any more.


Note that np.gradient(img) is comparable to convolving with [1,0,-1]/2. This is another way to estimate the derivative. Sobel adds a regularization (==smoothing) in the perpendicular direction.
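
A quick 1-D check of that claim (a sketch; the random signal is just for illustration):

import numpy as np

x = np.random.rand(16)

g = np.gradient(x)                                   # central differences in the interior
cd = np.convolve(x, [0.5, 0.0, -0.5], mode='same')   # convolution with [1, 0, -1]/2

# Interior samples match; only the two border samples are handled differently
print(np.allclose(g[1:-1], cd[1:-1]))                # True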


You will get a better understanding of each implementation if you use a more meaningful test image. Try for example a black image with a white square in the middle. You will be able to compare the strength of the estimated gradients, their direction (I assume some libraries use a different definition of x and y axes), and you will be able to see the effect of the regularization.
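
Here is a hedged sketch of that experiment in Python (the image size and square position are arbitrary choices):

import cv2
import numpy as np

# Black image with a white square in the middle
img = np.zeros((100, 100), dtype=np.float64)
img[40:60, 40:60] = 1.0

sx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)   # horizontal derivative
sy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)   # vertical derivative
mag = np.sqrt(sx**2 + sy**2)                      # gradient magnitude

# The left edge of the square responds with one sign in sx and the right edge with the
# opposite sign; which is which depends on the library's sign and axis conventions
print(sx[50, 38:43])
print(sx[50, 58:63])
print(mag.max())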


