Understanding Region of Interest in OpenCV 2.4

cv::Mat imageROI;
imageROI= image(cv::Rect(385,270,logo.cols,logo.rows));

The cv::Mat constructor which takes a rectangle as a parameter:

Mat::Mat(const Mat& m, const Rect& roi)

returns a matrix pointing to the ROI of the original image, located at the place specified by the rectangle. So imageROI really is the region of interest (or subimage/submatrix) of the original image "image". If you modify imageROI, you consequently modify the original, larger matrix.
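The Python bindings expose the same shared-data behaviour through numpy slicing; a minimal sketch (plain numpy, no OpenCV required):

```python
import numpy as np

# A small "image": 4x4, single channel, all zeros.
image = np.zeros((4, 4), dtype=np.uint8)

# Slicing yields a view over the same data, analogous to what
# image(cv::Rect(...)) does with a cv::Mat header in C++.
imageROI = image[1:3, 1:3]
imageROI[:] = 255   # write through the ROI

print(image[1, 1], image[0, 0])   # 255 0
```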

As for your example, the problem is that you are calling the constructor on an object which does not exist (image). You should replace:

imageROI= image(Rect(385,270,logo.cols,logo.rows));

by:

imageROI= src1(Rect(385,270,logo.cols,logo.rows));

assuming that src1 is your "big image" that you want to insert the logo into (the logo being car1.jpg). You should not forget to first read your big image, by the way!

Region of Interest using opencv

Nash Ruddin has some excellent articles on OpenCV that I've found very helpful. There's one on ROI as well that contains a decent blurb on the topic: OpenCV Region of Interest (ROI)

Essentially, you're right about what's going on. You've 'zoomed' in on the image to localise your processing to a specific area. Setting an ROI hasn't altered the image; it has only focused subsequent processing on a smaller sub-region of it. After your processing on that area is finished, you can reset the ROI:

  cvResetImageROI(img);

... and subsequent operations will once again apply to the whole image.

OpenCV C++, getting Region Of Interest (ROI) using cv::Mat

You can use the overloaded function call operator on the cv::Mat:

cv::Mat img = ...;
cv::Mat subImg = img(cv::Range(0, 100), cv::Range(0, 100));

Check the OpenCV documentation for more information and for the overloaded function that takes a cv::Rect. Note that using this form of slicing creates a new matrix header, but does not copy the data.

Exact Meaning of the parameters given to initialize MSER in OpenCV 2.4.x?

I am going to presume that you know the basics of how MSER feature detection works (if not, Wikipedia, and short recap follows).

You have two types of MSER regions, positive and negative.

The first type you get by thresholding at every intensity (for grayscale images, 0 to 255). E.g. for a threshold T = 100, all pixels with intensity < 100 are assigned to black (foreground), and all pixels with intensity >= 100 to white (background).

Now, imagine you're observing a specific pixel p. At some threshold, call it T1, it starts belonging to the foreground and stays that way until T = 255. At T1 the pixel belongs to a connected component CC_T1(p); 5 gray levels later, it belongs to the component CC_(T1+5)(p).

All of these connected components, obtained for all the thresholds, are potential candidates for MSER. (The other type of component is obtained if you reverse the black/foreground and white/background assignments for thresholding.)

Parameters help decide which potential candidates are indeed maximally stable:

  • delta

    For every region, variation is measured:

    V_T = (size(CC_T(p))-size(CC_{T-delta}(p)))/size(CC_{T-delta}(p))

for every possible threshold T. If this variation is a local minimum, that is, V_T < V_{T-1} and V_T < V_{T+1}, the region is maximally stable.

The parameter delta indicates over how many different gray levels a region needs to be stable to be considered maximally stable. For a larger delta, you will get fewer regions.

    note: In the original paper introducing MSER regions, the actual formula is:

    V_T = (size(CC_{T+delta}(p))-size(CC_{T-delta}(p)))/size(CC_T(p))

    The OpenCV implementation uses a slightly different formula to speed up the feature extraction.

  • minArea, maxArea

    If a region is maximally stable, it can still be rejected if it has less than minArea pixels or more than maxArea pixels.

  • maxVariation

Back to the variation from point 1 (the same function as for delta): if a region is maximally stable, it can still be rejected if the region's variation is bigger than maxVariation.

That is, even if the region is "relatively" stable (more stable than the neighbouring regions), it may not be "absolutely" stable enough. For a smaller maxVariation, you will get fewer regions.

  • minDiversity

    This parameter exists to prune regions that are too similar (e.g. differ for only a few pixels).

For a region CC_T1(p) that is maximally stable, find the region CC_T2(p) which is its "parent maximally stable region". That means: T2 > T1, CC_T2(p) is a maximally stable region, and there is no T2 > Tx > T1 such that CC_Tx(p) is maximally stable. Now, compare how much bigger the parent is:

    diversity = (size(CC_T2(p)) - size(CC_T1(p))) / size(CC_T1(p))

If this diversity is smaller than minDiversity, the region CC_T1(p) is removed. For a larger minDiversity, you will get fewer regions.

    (For the exact formula for this parameter I had to dig through the program code)
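The interplay of delta and maxVariation above can be sketched with invented component sizes (nothing below comes from a real image; it only mimics the OpenCV-style variation formula from the delta bullet):

```python
# Hypothetical sizes of the connected component containing some pixel p,
# indexed by threshold T (values invented purely for illustration).
sizes = [10, 11, 12, 12, 12, 13, 20, 35, 60, 100]

delta = 2
max_variation = 0.25

def variation(t):
    # OpenCV-style variation: compare against the component delta gray levels below.
    return (sizes[t] - sizes[t - delta]) / sizes[t - delta]

# A threshold is maximally stable when its variation is a local minimum...
stable = [t for t in range(delta + 1, len(sizes) - 1)
          if variation(t) < variation(t - 1) and variation(t) < variation(t + 1)]

# ...and it survives only if that variation is also small in absolute terms.
stable = [t for t in stable if variation(t) <= max_variation]

print(stable)  # [4]
```

Here only the threshold where the component size plateaus (index 4) survives both tests.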

Region of Interest opencv python

Okay, on further analysis I realized that since cv2 supports the numpy array structure, there is no longer any need for a separate ROI API; the entire image can be manipulated as an array.
eg:

img = cv2.imread('image.png')
img = img[r1:r1+25, c1:c1+25]

Here r1 is the top row of the ROI and c1 is its left column; note that numpy indexes rows first, then columns. img now holds the 25x25 region specified by those pixels as the ROI.
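One detail worth knowing: such a slice is a view sharing data with the original; call .copy() if you need independent pixels. A small sketch with plain numpy (array contents invented for illustration):

```python
import numpy as np

img = np.arange(100, dtype=np.uint8).reshape(10, 10)

r1, c1 = 2, 3   # top row and left column of the ROI
h, w = 4, 5

# NumPy indexes rows first, then columns: img[rows, cols].
roi_view = img[r1:r1 + h, c1:c1 + w]          # shares data with img
roi_copy = img[r1:r1 + h, c1:c1 + w].copy()   # independent pixels

roi_copy[:] = 0      # does not touch img
print(img[r1, c1])   # 23
```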

EDIT:
Very nicely explained here, How to copy a image region using opencv in python?

crop and Save ROI as new image in OpenCV 2.4.2 using cv::Mat

Using cv::Mat objects will make your code substantially simpler. Assuming the detected face lies in a rectangle called faceRect of type cv::Rect, all you have to type to get a cropped version is:

cv::Mat originalImage;
cv::Rect faceRect;
cv::Mat croppedFaceImage;

croppedFaceImage = originalImage(faceRect).clone();

Or alternatively:

originalImage(faceRect).copyTo(croppedFaceImage);

This creates a temporary cv::Mat object (without copying the data) from the rectangle that you provide. Then the real data is copied to your new object via clone() or copyTo().

opencv android java matrix submatrix (ROI Region of Interest)

sub = mRgba.submat(r);

Imgproc.cvtColor(sub, sub, Imgproc.COLOR_RGBA2GRAY, 1); //make it gray
Imgproc.cvtColor(sub, sub, Imgproc.COLOR_GRAY2RGBA, 4); //change to rgb

sub.copyTo(mRgba.submat(r));

ok this seems to do the trick :) it copies the changed subpicture/matrix back into the corresponding region of the source (what is normally done with setROI and copyTo).
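The explicit copy-back is needed because an operation that changes the channel count allocates a new matrix, detaching it from the source. A numpy sketch of the same situation (an assumed analogy, not the actual Java API):

```python
import numpy as np

mRgba = np.zeros((4, 4, 4), dtype=np.uint8)
sub = mRgba[1:3, 1:3]   # a view into the big image, like submat(r)

# An operation that produces a new array (as cvtColor does when it
# changes the channel count) detaches 'sub' from 'mRgba':
sub = np.full_like(sub, 255)

# So the result must be written back explicitly, like copyTo():
mRgba[1:3, 1:3] = sub
print(mRgba[1, 1, 0], mRgba[0, 0, 0])   # 255 0
```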

I'm getting an error from the implementation of my region of interest

Your error generally means that the ROI you want to crop lies outside the bounds of the source matrix - e.g. the source matrix is 480x480 and you want to crop a 300x300 ROI starting at position (200, 200), where 200 + 300 > 480.
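One way to avoid the problem is to clamp the rectangle to the image size before cropping; a sketch (the helper name safe_roi is invented for illustration):

```python
def safe_roi(img_w, img_h, x, y, w, h):
    """Clamp a rectangle so that x + w <= img_w and y + h <= img_h."""
    x = max(0, min(x, img_w))
    y = max(0, min(y, img_h))
    w = max(0, min(w, img_w - x))
    h = max(0, min(h, img_h - y))
    return x, y, w, h

# The failing example from above: a 300x300 ROI at (200, 200) in a 480x480 image.
print(safe_roi(480, 480, 200, 200, 300, 300))  # (200, 200, 280, 280)
```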


According to docs

src – Input 8-bit 3-channel image.
dst – Input 8-bit 3-channel image.
mask – Input 8-bit 1 or 3-channel image.
result – Output image with the same size and type as dst.

src, dst and result should be of type CV_8UC3 (three-channel images), while you are passing one-channel images (CV_8UC1), which most likely causes the error here.

The solution is to use 3-channel (color) images, or a different operation that accepts 1-channel images.
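To satisfy the 3-channel requirement, a grayscale image can be replicated across three channels (with OpenCV you would typically use cv2.cvtColor with COLOR_GRAY2BGR; the numpy sketch below only shows the shape change, without needing OpenCV):

```python
import numpy as np

gray = np.zeros((5, 5), dtype=np.uint8)   # the CV_8UC1 case

# Replicate the single channel three times -> a CV_8UC3-shaped array.
color = np.repeat(gray[:, :, np.newaxis], 3, axis=2)
print(color.shape)   # (5, 5, 3)
```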

With OpenCV, try to extract a region of a picture described by ArrayOfArrays

I'm guessing what you want to do is just extract the regions in the detected contours. Here is a possible solution:

using namespace cv;

int main(void)
{
    vector<Mat> subregions;
    // contours_final is as given above in your code
    for (int i = 0; i < (int)contours_final.size(); i++)
    {
        // Get the bounding box for the contour
        Rect roi = boundingRect(contours_final[i]); // an OpenCV function

        // Create a mask for each contour to mask out that region from the image.
        Mat mask = Mat::zeros(image.size(), CV_8UC1);
        drawContours(mask, contours_final, i, Scalar(255), CV_FILLED); // an OpenCV function

        // At this point, mask has value 255 for pixels within the contour and 0 for those not in it.

        // Extract the region using the mask
        Mat contourRegion;
        Mat imageROI;
        image.copyTo(imageROI, mask); // 'image' is the image you used to compute the contours.
        contourRegion = imageROI(roi);
        // Mat maskROI = mask(roi); // Keep this if you want a mask for pixels within the contour in contourRegion.

        // Store contourRegion. contourRegion is a rectangular image the size of the bounding rect for the contour,
        // BUT only pixels within the contour are visible; all other pixels are set to (0,0,0).
        subregions.push_back(contourRegion);
    }

    return 0;
}

You might also want to consider saving the individual masks to optionally use as an alpha channel, in case you want to save the subregions in a format that supports transparency (e.g. png).

NOTE: I'm NOT extracting ALL the pixels in the bounding box for each contour, just those within the contour. Pixels that are inside the bounding box but not within the contour are set to 0. The reason is that your Mat object is an array, which makes it rectangular.

Lastly, I don't see any reason to save only the pixels inside the contour in a specially created data structure, because you would then need to store the position of each pixel in order to recreate the image. If your concern is saving space, that would save you little space if any; saving the tightest bounding box would suffice. If instead you wish to analyze only the pixels in the contour region, save a copy of the mask for each contour so that you can use it to check which pixels are within the contour.
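The mask-then-crop step from the C++ code above can be mirrored in plain numpy; the sketch below uses a hard-coded rectangular mask as a stand-in for the drawContours output (values invented for illustration):

```python
import numpy as np

image = np.full((6, 6), 7, dtype=np.uint8)   # toy single-channel image
mask = np.zeros_like(image)
mask[2:5, 1:4] = 255                         # stand-in for the drawContours output

# Equivalent of image.copyTo(imageROI, mask): keep masked pixels, zero the rest.
masked = np.where(mask == 255, image, 0)

# Crop the bounding rect of the "contour", here hard-coded to match the mask.
y, x, h, w = 2, 1, 3, 3
region = masked[y:y + h, x:x + w]
print(region.shape)   # (3, 3)
```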


