Shift Image Content with OpenCV

Shift image content with OpenCV

Is there a function to perform this operation directly in OpenCV?

https://github.com/opencv/opencv/issues/4413 (previously
http://web.archive.org/web/20170615214220/http://code.opencv.org/issues/2299)

Or you can do it like this:

    cv::Mat out = cv::Mat::zeros(frame.size(), frame.type());
    frame(cv::Rect(0, 10, frame.cols, frame.rows - 10)).copyTo(out(cv::Rect(0, 0, frame.cols, frame.rows - 10)));
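For reference, the same vertical shift can be sketched with NumPy slicing (illustrative only; the array values and the 3-row offset here are arbitrary stand-ins, not taken from the question):

```python
import numpy as np

# Illustrative stand-in for the cv::Mat frame (values are arbitrary).
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
shift = 3  # number of rows to shift the content upward

# Equivalent of the cv::Rect copy above: rows [shift, H) move to
# rows [0, H - shift); the freed rows at the bottom stay zero.
out = np.zeros_like(frame)
out[:-shift, :] = frame[shift:, :]
```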

Shift image in OpenCV

You'll need to construct the warp matrix manually to achieve this, instead of using getAffineTransform. Try warpAffine with the following Mat (untested):

Mat warpMat = new Mat(2, 3, CvType.CV_64FC1);
int row = 0, col = 0;
warpMat.put(row, col, 1, 0, offsetx, 0, 1, offsety);
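To check what this 2x3 translation matrix does, it can be written out in NumPy and applied to a homogeneous point (a sketch; the concrete offset and point values are example assumptions, not from the answer):

```python
import numpy as np

offsetx, offsety = 5.0, 2.0  # example offsets (assumed values)

# The matrix the Java code builds row by row:
# [ 1  0  offsetx ]
# [ 0  1  offsety ]
warp_mat = np.array([[1.0, 0.0, offsetx],
                     [0.0, 1.0, offsety]])

# Applying it to a homogeneous point (x, y, 1) simply adds the offsets.
point = np.array([10.0, 20.0, 1.0])
shifted = warp_mat @ point  # -> [15.0, 22.0]
```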

Sources:

Shift image content with OpenCV

declare Mat in OpenCV java

shift image content in OpenCV

Both approaches work fine, even if using an affine transform is overkill here. You probably have an error in the code you didn't show in the question.

Also, you can use colRange, which will simplify your code.

Check that the results of both approaches are equivalent and that no unwanted extra columns appear:

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;

int main()
{
    Mat1b img(3, 10);
    randu(img, Scalar(0), Scalar(255));

    Mat1b img2 = img.clone();

    //imshow("img", img);
    //waitKey();

    cout << img << endl << endl;

    int offset = 1;
    Mat trans_mat = (Mat_<double>(2, 3) << 1, 0, offset, 0, 1, 0);

    for (int i = 0; i < 100; ++i)
    {
        // Random data
        Mat1b randomData(img.rows, offset);
        randu(randomData, Scalar(0), Scalar(255));

        // Copying roi
        img.colRange(0, img.cols - offset).copyTo(img.colRange(offset, img.cols));
        randomData.copyTo(img.colRange(0, offset));
        //randu(img.colRange(0, offset), Scalar(0), Scalar(255));

        // Warping
        cv::Mat warped;
        warpAffine(img2, warped, trans_mat, img2.size());

        img2 = warped.clone();
        randomData.copyTo(img2.colRange(0, offset));
        //randu(img2.colRange(0, offset), Scalar(0), Scalar(255));

        //imshow("img", img2);
        //waitKey();

        cout << img << endl << endl;
        cout << img2 << endl << endl;
    }
    return 0;
}

These are the data from the first iteration.

Original data

[ 91,   2,  79, 179,  52, 205, 236,   8, 181, 239;
  26, 248, 207, 218,  45, 183, 158, 101, 102,  18;
 118,  68, 210, 139, 198, 207, 211, 181, 162, 197]

Data shifted by copying roi

[191,  91,   2,  79, 179,  52, 205, 236,   8, 181;
 196,  26, 248, 207, 218,  45, 183, 158, 101, 102;
  40, 118,  68, 210, 139, 198, 207, 211, 181, 162]

Data shifted by warping

[191,  91,   2,  79, 179,  52, 205, 236,   8, 181;
 196,  26, 248, 207, 218,  45, 183, 158, 101, 102;
  40, 118,  68, 210, 139, 198, 207, 211, 181, 162]

OpenCV: Translate Image, Wrap Pixels Around Edges (C++)

From my point of view, the most "efficient" way would be to set up the four corresponding ROIs using cv::Rect, and copy the contents manually with cv::Mat::copyTo. Maybe there's also a possibility without copying the actual content, just pointing to the data in the input cv::Mat - but unfortunately, at least I couldn't find one.

Nevertheless, here's my code:

// Shift input image by sx pixels to the left, and sy pixels to the top.
cv::Mat transWrap(cv::Mat& input, const int sx, const int sy)
{
    // Get image dimensions.
    const int w = input.size().width;
    const int h = input.size().height;

    // Initialize output with same dimensions and type.
    cv::Mat output = cv::Mat(h, w, input.type());

    // Copy proper contents manually.
    input(cv::Rect(sx, sy, w - sx, h - sy)).copyTo(output(cv::Rect(0, 0, w - sx, h - sy)));
    input(cv::Rect(0, sy, sx, h - sy)).copyTo(output(cv::Rect(w - sx, 0, sx, h - sy)));
    input(cv::Rect(sx, 0, w - sx, sy)).copyTo(output(cv::Rect(0, h - sy, w - sx, sy)));
    input(cv::Rect(0, 0, sx, sy)).copyTo(output(cv::Rect(w - sx, h - sy, sx, sy)));

    return output;
}

int main()
{
    cv::Mat input = cv::imread("images/tcLUa.jpg", cv::IMREAD_COLOR);
    cv::resize(input, input, cv::Size(), 0.25, 0.25);
    cv::Mat output = transWrap(input, 300, 150);

    return 0;
}

Of course, the code looks repetitive, but wrapped into its own function, it won't bother you in your main code. ;-)
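The four-ROI copy is exactly a circular (wrap-around) shift, so the same result can be sketched in NumPy with np.roll (an illustration of the idea, not the answer's code; the array and offsets are made-up values):

```python
import numpy as np

img = np.arange(48, dtype=np.uint8).reshape(6, 8)  # stand-in image
sx, sy = 3, 2  # shift left by sx and up by sy, wrapping around the edges

# Equivalent of the four cv::Rect copies in transWrap: negative shifts
# move content toward the top-left, with the overflow wrapping around.
wrapped = np.roll(img, shift=(-sy, -sx), axis=(0, 1))
```

Here `wrapped[i, j]` equals `img[(i + sy) % h, (j + sx) % w]`, which is the same mapping the four copyTo calls implement.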

The output should be what you want to achieve:

Output

Hope that helps!

Fastest way to move image in OpenCV

cv::Rect and cv::Mat::copyTo

cv::Mat img = cv::imread("image.jpg");
cv::Mat imgTranslated(img.size(), img.type(), cv::Scalar::all(0));
img(cv::Rect(50, 30, img.cols - 50, img.rows - 30)).copyTo(imgTranslated(cv::Rect(0, 0, img.cols - 50, img.rows - 30)));

Why do OpenCV 'cv2.morphologyEx' operations shift images in one direction during iterations?

The main issue is the (3,11) argument passed to cv2.morphologyEx.

According to the documentation of morphologyEx, kernel is a Structuring element, and not the size of the kernel.

Passing (3,11) is probably like passing np.array([1, 1]) (or just undefined behavior).

The correct syntax is passing a 3x11 NumPy array of ones (of type uint8):

img_adj = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, np.ones((3, 11), np.uint8), iterations=25)

Using a large kernel with 25 iterations is too much, so I reduced it to 3x5 and 5 iterations.
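To see why the structuring element must be an array of ones rather than a size tuple, here is a minimal pure-NumPy sketch of binary opening with a centered kernel (illustrative only; cv2's implementation is far more general and faster, and the blob/kernel sizes here are made up):

```python
import numpy as np

def erode(img, kernel):
    """Minimum over the kernel's footprint, centered on each pixel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            region = padded[y:y + kh, x:x + kw]
            out[y, x] = region[kernel == 1].min()
    return out

def dilate(img, kernel):
    """Maximum over the kernel's footprint, centered on each pixel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            region = padded[y:y + kh, x:x + kw]
            out[y, x] = region[kernel == 1].max()
    return out

# Opening = erosion followed by dilation. With a centered, symmetric
# kernel, a blob larger than the kernel survives unshifted.
kernel = np.ones((3, 5), np.uint8)
img = np.zeros((9, 11), np.uint8)
img[2:7, 2:9] = 255  # a blob larger than the kernel
opened = dilate(erode(img, kernel), kernel)
```

Because the kernel's anchor is at its center, the erosion and dilation offsets cancel and the opened blob lands exactly where it started, which is the behavior the corrected morphologyEx call restores.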

The following code sample shows that the image is not shifted:

import cv2
import numpy as np

test_image = "test.bmp"
#image = cv2.imread(test_image, cv2.COLOR_BAYER_BG2RGB) # cv2.COLOR_BAYER_BG2RGB is not in place
image = cv2.imread(test_image, cv2.IMREAD_GRAYSCALE) # Read image as grayscale
blurred = cv2.medianBlur(image, 3)
ret, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY)
#img_adj = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, (3, 11), iterations=25)
img_adj = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, np.ones((3, 5), np.uint8), iterations=5)

montage_img = np.dstack((255-image, 0*image, 255-img_adj)) # Place image in the blue channel and img_adj in the red channel

# Show original and output images using OpenCV imshow method (instead of using matplotlib)
cv2.imshow('image', image)
cv2.imshow('img_adj', img_adj)
cv2.imshow('montage_img', montage_img)
cv2.waitKey()
cv2.destroyAllWindows()

image:

Sample Image

img_adj:

Sample Image

montage_img:

Sample Image


A better solution would be to find the largest connected component (that is not the background):

import cv2
import numpy as np

test_image = "test.bmp"
image = cv2.imread(test_image, cv2.IMREAD_GRAYSCALE) # Read image as grayscale
ret, binary = cv2.threshold(image, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)

nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(binary, 8) # Finding connected components

# Find the largest non background component.
# Note: range() starts from 1 since 0 is the background label.
max_label, max_size = max([(i, stats[i, cv2.CC_STAT_AREA]) for i in range(1, nb_components)], key=lambda x: x[1])

res = np.zeros_like(binary) + 255
res[output == max_label] = 0

cv2.imshow('res', res)
cv2.waitKey()
cv2.destroyAllWindows()

Result:

Sample Image
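The largest-component step can also be sketched without OpenCV, using a small BFS labeling (illustrative only; cv2.connectedComponentsWithStats, as used above, is the practical choice, and the two test blobs here are invented):

```python
import numpy as np
from collections import deque

def largest_component_mask(binary):
    """Return a boolean mask of the largest 8-connected foreground component."""
    h, w = binary.shape
    labels = np.zeros((h, w), np.int32)
    sizes = {}
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                size = 0
                while q:
                    y, x = q.popleft()
                    size += 1
                    # Visit all 8 neighbors of (y, x).
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and binary[ny, nx] and labels[ny, nx] == 0:
                                labels[ny, nx] = current
                                q.append((ny, nx))
                sizes[current] = size
    best = max(sizes, key=sizes.get)  # label with the largest pixel count
    return labels == best

# Two blobs: the mask keeps only the larger one.
binary = np.zeros((8, 8), bool)
binary[1:3, 1:3] = True   # 4-pixel blob
binary[4:8, 4:8] = True   # 16-pixel blob
mask = largest_component_mask(binary)
```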


