How to crop the internal area of a contour?
It is unclear in your question whether you want to actually crop out the information that is defined within the contour or mask out the information that isn't relevant to the contour chosen. I'll explore what to do in both situations.
Masking out the information
Assuming you ran cv2.findContours on your image, you will have received a structure that lists all of the contours found in your image. I'm also assuming that you know the index of the contour that surrounds the object you want. Assuming this index is stored in idx, first use cv2.drawContours to draw a filled version of this contour onto a blank image, then use this image to index into your original image and extract the object. This masks out any irrelevant information and retains only what is important, which is what lies within the contour you selected. The code to do this would look something like the following, assuming your image is a grayscale image stored in img:
import numpy as np
import cv2
img = cv2.imread('...', 0) # Read in your image
# contours, _ = cv2.findContours(...) # Your call in OpenCV 2.4.x and 4.x (two outputs)
_, contours, _ = cv2.findContours(...) # Your call in OpenCV 3.x (three outputs)
idx = ... # The index of the contour that surrounds your object
mask = np.zeros_like(img) # Create mask where white is what we want, black otherwise
cv2.drawContours(mask, contours, idx, 255, -1) # Draw filled contour in mask
out = np.zeros_like(img) # Extract out the object and place into output image
out[mask == 255] = img[mask == 255]
# Show the output image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
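As an aside, the two-line extraction above (allocate a zero array, then copy under the mask) can be collapsed into a single np.where call. A minimal sketch on a toy array, independent of OpenCV:

```python
import numpy as np

# Toy 4x4 "image" and a mask selecting its centre 2x2 block
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.zeros_like(img)
mask[1:3, 1:3] = 255

# Keep pixels where the mask is white, zero elsewhere --
# equivalent to out = np.zeros_like(img); out[mask == 255] = img[mask == 255]
out = np.where(mask == 255, img, 0)
```

Both forms produce the same result; np.where simply states the intent in one line.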
If you actually want to crop...
If you want to crop the image, you need to find the bounding box of the area defined by the contour. Find the top-left and bottom-right corners of this box, then use array indexing to crop out what you need. The code is the same as before, with an additional cropping step:
import numpy as np
import cv2
img = cv2.imread('...', 0) # Read in your image
# contours, _ = cv2.findContours(...) # Your call in OpenCV 2.4.x and 4.x (two outputs)
_, contours, _ = cv2.findContours(...) # Your call in OpenCV 3.x (three outputs)
idx = ... # The index of the contour that surrounds your object
mask = np.zeros_like(img) # Create mask where white is what we want, black otherwise
cv2.drawContours(mask, contours, idx, 255, -1) # Draw filled contour in mask
out = np.zeros_like(img) # Extract out the object and place into output image
out[mask == 255] = img[mask == 255]
# Now crop
(y, x) = np.where(mask == 255)
(topy, topx) = (np.min(y), np.min(x))
(bottomy, bottomx) = (np.max(y), np.max(x))
out = out[topy:bottomy+1, topx:bottomx+1]
# Show the output image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
The cropping code works as follows: once the mask extracting the area defined by the contour is built, we find the smallest horizontal and vertical coordinates of the white pixels, which define the top-left corner of the crop, and the largest horizontal and vertical coordinates, which define the bottom-right corner. We then index with these coordinates to crop what we actually need. Note that this crops the masked image, i.e. the image with everything removed except the information contained within the selected contour.
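The min/max bookkeeping above can be wrapped into a small NumPy-only helper; a sketch on a toy mask (mask_bbox is a hypothetical name, not an OpenCV function):

```python
import numpy as np

def mask_bbox(mask):
    """Return (top_y, top_x, bottom_y, bottom_x) of the white region in mask."""
    ys, xs = np.where(mask == 255)
    return ys.min(), xs.min(), ys.max(), xs.max()

# Toy mask: white rectangle covering rows 2-4 and columns 1-3
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:5, 1:4] = 255

topy, topx, bottomy, bottomx = mask_bbox(mask)
crop = mask[topy:bottomy + 1, topx:bottomx + 1]
```

Alternatively, cv2.boundingRect accepts a binary image directly and returns the same box as (x, y, w, h).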
Note on OpenCV versions
The code above assumes you are using OpenCV 2.4.x. In OpenCV 3.x, the definition of cv2.findContours changed: it returns a three-element tuple whose first element is the source image, while the other two elements are the same as in OpenCV 2.4.x. Therefore, simply change the cv2.findContours statement in the above code to ignore the first output:
_, contours, _ = cv2.findContours(...) # Your call to find contours
OpenCV 4.x reverted to the two-element (contours, hierarchy) tuple of 2.4.x, so there the original two-output form applies again.
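Because the return signature differs between major versions, a small version-agnostic wrapper avoids hard-coding either form. This sketch follows the same idea as imutils.grab_contours; the function name here is just illustrative:

```python
def grab_contours(ret):
    """Pick the contour list out of whatever tuple cv2.findContours returned."""
    if len(ret) == 2:   # OpenCV 2.4.x / 4.x: (contours, hierarchy)
        return ret[0]
    if len(ret) == 3:   # OpenCV 3.x: (image, contours, hierarchy)
        return ret[1]
    raise ValueError("Unexpected cv2.findContours return value")

# Usage: contours = grab_contours(cv2.findContours(img, mode, method))
```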
Crop black edges with OpenCV
I am not sure whether all your images are like this, but for this image, below is a simple Python/OpenCV routine to crop it.
First, import the libraries:
import cv2
import numpy as np
Read the image, convert it to grayscale, and threshold it into a binary image using a threshold value of 1, so that every non-black pixel becomes white.
img = cv2.imread('sofwin.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
_,thresh = cv2.threshold(gray,1,255,cv2.THRESH_BINARY)
Now find the contours in it. There will be only one object, so find the bounding rectangle for it.
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = contours[0] # only one object is expected, so take the first contour
x, y, w, h = cv2.boundingRect(cnt)
Now crop the image, and save it into another file.
crop = img[y:y+h,x:x+w]
cv2.imwrite('sofwinres.png',crop)
Below is the result:
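For this particular task of trimming a uniform black border, contours are not strictly required; here is a NumPy-only sketch that crops to the bounding box of all non-black pixels, shown on a toy array (trim_black_border is a hypothetical helper name):

```python
import numpy as np

def trim_black_border(gray):
    """Crop a grayscale image to the bounding box of its non-zero pixels."""
    coords = np.argwhere(gray > 0)          # (row, col) of every non-black pixel
    if coords.size == 0:
        return gray                         # fully black: nothing to trim
    (y0, x0), (y1, x1) = coords.min(0), coords.max(0) + 1
    return gray[y0:y1, x0:x1]

# Toy image: 8x8 black canvas with a bright 3x4 patch
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 3:7] = 200
crop = trim_black_border(img)
```

For a colour image you would threshold or convert to grayscale first, as in the answer above, and then apply the same indices to the original image.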
Robustly crop rotated bounding box on photos
After some research, this is the approach I arrived at:
- pad the original image on each side (500 pixels in my case)
- find the four corner points of the shoe (the four points should form a polygon enclosing the shoe, but they do not need to form an exact rectangle)
- use the code below to crop the shoe:
img = cv2.imread("padded_shoe.jpg")
# four corner points for padded shoe
cnt = np.array([
    [[313, 794]],
    [[727, 384]],
    [[1604, 1022]],
    [[1304, 1444]]
])
print("shape of cnt: {}".format(cnt.shape))
rect = cv2.minAreaRect(cnt)
print("rect: {}".format(rect))
box = cv2.boxPoints(rect)
box = np.intp(box) # np.int0 was removed in NumPy 2.0; np.intp is the replacement
width = int(rect[1][0])
height = int(rect[1][1])
src_pts = box.astype("float32")
dst_pts = np.array([[0, height-1],
                    [0, 0],
                    [width-1, 0],
                    [width-1, height-1]], dtype="float32")
M = cv2.getPerspectiveTransform(src_pts, dst_pts)
warped = cv2.warpPerspective(img, M, (width, height))
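As a sanity check on what cv2.getPerspectiveTransform computes: it solves for the 3x3 homography that maps the four source corners onto the four destination corners. A NumPy-only sketch of that solve, using made-up corner values rather than the shoe coordinates above:

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve for the 3x3 homography H (with H[2, 2] = 1) that maps each
    src point onto the corresponding dst point, as getPerspectiveTransform does."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# Made-up rotated-box corners mapped onto an upright 100x50 rectangle,
# in the same corner order as the dst_pts above (bl, tl, tr, br)
src = [(10, 60), (50, 20), (90, 60), (50, 100)]
dst = [(0, 49), (0, 0), (99, 0), (99, 49)]
M = perspective_transform(src, dst)
```

Applying M to each source corner (and dividing by the third homogeneous coordinate) lands exactly on the corresponding destination corner, which is what cv2.warpPerspective relies on when resampling the output image.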
Cheers, hope it helps.
Cropping square area around digit which may lie anywhere in a rectangle area
This can be achieved by using the boundingRect function from OpenCV, which works on the non-zero pixels of an image. You therefore just have to invert your input image, so that you have a black background with white digits.
Let's have a look at the following code snippet:
import cv2
import numpy as np
# Set up test image, white background, black letter with anti-aliasing
img = 255 * np.ones((50, 50), np.uint8)
cv2.putText(img, 't', (20, 30), cv2.FONT_HERSHEY_COMPLEX, 1.0, 0, 3, cv2.LINE_AA)
# Generate inverse image (black background, white letter)
inv = 255 - img
# Detect bounding rectangle for any non-zero pixels
x, y, w, h = cv2.boundingRect(inv)
# Generate cropped image from obtained parameters
crop = img[y:y+h, x:x+w]
# Output
cv2.imshow('img', img)
cv2.imshow('crop', crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
The test image img looks like this:
And the cropped image crop looks like this:
Now, of course, the image is not square, as you requested. So, further work needs to be done: take the maximum of w and h and crop a correspondingly sized square sub-image. Furthermore, you have to check that the crop does not violate the image borders. That's some effort I will leave to you. :-)
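That squaring-and-clamping step can be sketched as a plain-Python helper (square_crop_box is a hypothetical name; it takes the boundingRect output plus the image size):

```python
def square_crop_box(x, y, w, h, img_w, img_h):
    """Expand (x, y, w, h) to a square of side max(w, h), clamped to the image."""
    side = min(max(w, h), img_w, img_h)     # square cannot exceed the image itself
    # Centre the square on the original box...
    cx, cy = x + w // 2, y + h // 2
    x0 = cx - side // 2
    y0 = cy - side // 2
    # ...then shift it back inside the image borders if it sticks out
    x0 = max(0, min(x0, img_w - side))
    y0 = max(0, min(y0, img_h - side))
    return x0, y0, side

# Example: a 10x20 box near the left edge of a 50x50 image
x0, y0, side = square_crop_box(2, 10, 10, 20, 50, 50)
```

The returned triple can then be used as crop = img[y0:y0+side, x0:x0+side].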
Hope that helps!