iOS: Retrieve rectangle shaped image from the background image

Here is a full answer using a small wrapper class to separate the C++ from the Objective-C code.

I had to raise another question on Stack Overflow to deal with my poor C++ knowledge, but I have worked out everything we need to interface C++ cleanly with Objective-C code, using the squares.cpp sample code as an example. The aim is to keep the original C++ code as pristine as possible, and to keep the bulk of the OpenCV work in pure C++ files for (im)portability.

I have left my original answer in place, as this seems to go beyond an edit. The complete demo project is on GitHub.

CVViewController.h / CVViewController.m

  • pure Objective-C

  • communicates with the OpenCV C++ code via a WRAPPER... it neither knows nor cares that C++ is processing these method calls behind the wrapper.

CVWrapper.h / CVWrapper.mm

  • Objective-C++

does as little as possible, really only two things...

  • calls UIImage Objective-C++ categories to convert to and from UIImage <> cv::Mat
  • mediates between CVViewController's Objective-C method calls and CVSquares' C++ (class) function calls

CVSquares.h / CVSquares.cpp

  • pure C++
  • CVSquares.h declares public functions inside a class definition (in this case, one static function).

    This replaces the work of main{} in the original file.
  • We try to keep CVSquares.cpp as close as possible to the C++ original for portability.

CVViewController.m

//remove 'magic numbers' from the original C++ source so we can manipulate them from Objective-C
#define TOLERANCE 0.01
#define THRESHOLD 50
#define LEVELS 9

UIImage* image =
    [CVSquaresWrapper detectedSquaresInImage:self.image
                                   tolerance:TOLERANCE
                                   threshold:THRESHOLD
                                      levels:LEVELS];

CVSquaresWrapper.h

//  CVSquaresWrapper.h

#import <Foundation/Foundation.h>

@interface CVSquaresWrapper : NSObject

+ (UIImage*) detectedSquaresInImage:(UIImage*)image
                          tolerance:(CGFloat)tolerance
                          threshold:(NSInteger)threshold
                             levels:(NSInteger)levels;

@end

CVSquaresWrapper.mm

//  CVSquaresWrapper.mm
// wrapper that talks to c++ and to obj-c classes

#import "CVSquaresWrapper.h"
#import "CVSquares.h"
#import "UIImage+OpenCV.h"

@implementation CVSquaresWrapper

+ (UIImage*) detectedSquaresInImage:(UIImage*)image
                          tolerance:(CGFloat)tolerance
                          threshold:(NSInteger)threshold
                             levels:(NSInteger)levels
{
    UIImage* result = nil;

    //convert from UIImage to cv::Mat openCV image format
    //this is a category on UIImage
    cv::Mat matImage = [image CVMat];

    //call the C++ class static member function
    //we want this function signature to exactly
    //mirror the form of the calling method
    matImage = CVSquares::detectedSquaresInImage(matImage, tolerance, threshold, levels);

    //convert back from cv::Mat openCV image format
    //to UIImage image format (category on UIImage)
    result = [UIImage imageFromCVMat:matImage];

    return result;
}

@end

CVSquares.h

//  CVSquares.h

#ifndef __OpenCVClient__CVSquares__
#define __OpenCVClient__CVSquares__

#include <opencv2/core/core.hpp> //for cv::Mat (skip if it already comes in via the project's prefix header)

//class definition
//in this example we do not need a class
//as we have no instance variables and just one static function.
//We could instead just declare the function, but this form seems clearer

class CVSquares
{
public:
    static cv::Mat detectedSquaresInImage (cv::Mat image, float tol, int threshold, int levels);
};

#endif /* defined(__OpenCVClient__CVSquares__) */

CVSquares.cpp

//  CVSquares.cpp

#include "CVSquares.h"

using namespace std;
using namespace cv;

static int thresh = 50, N = 11;
static float tolerance = 0.01;

//declarations added so that we can move our
//public function to the top of the file
static void findSquares( const Mat& image, vector<vector<Point> >& squares );
static void drawSquares( Mat& image, vector<vector<Point> >& squares );

//this public function performs the role of
//main{} in the original file (main{} is deleted)
cv::Mat CVSquares::detectedSquaresInImage (cv::Mat image, float tol, int threshold, int levels)
{
vector<vector<Point> > squares;

if( image.empty() )
{
cout << "Couldn't load " << endl;
}

tolerance = tol;
thresh = threshold;
N = levels;
findSquares(image, squares);
drawSquares(image, squares);

return image;
}

// the rest of this file is identical to the original squares.cpp except:
// main{} is removed
// this line is removed from drawSquares:
// imshow(wndname, image);
// (obj-c will do the drawing)
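
For reference, drawSquares then ends up looking something like this (a minimal sketch adapted from the OpenCV squares.cpp sample with the imshow line removed; double-check it against the sample that ships with your OpenCV version):

//draws each detected square onto the image in place
static void drawSquares( Mat& image, vector<vector<Point> >& squares )
{
    for( size_t i = 0; i < squares.size(); i++ )
    {
        const Point* p = &squares[i][0];
        int n = (int)squares[i].size();

        //draw the square as a closed green polyline
        polylines(image, &p, &n, 1, true, Scalar(0,255,0), 3);
    }
    //imshow(wndname, image); //removed - the obj-c side does the drawing
}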

UIImage+OpenCV.h

The UIImage category is an Objective-C++ file containing the code to convert between the UIImage and cv::Mat image formats. This is where you move your two methods, -(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat and -(cv::Mat)cvMatWithImage:(UIImage *)image.

//UIImage+OpenCV.h

#import <opencv2/core/core.hpp> //for cv::Mat - OpenCV headers should come before Apple ones (skip if it's in the prefix header)
#import <UIKit/UIKit.h>

@interface UIImage (UIImage_OpenCV)

//cv::Mat to UIImage
+ (UIImage *)imageFromCVMat:(cv::Mat&)cvMat;

//UIImage to cv::Mat
- (cv::Mat)CVMat;

@end

The method implementations here are unchanged from your code (although we don't pass a UIImage in to convert; instead we refer to self).
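
If you don't have those methods to hand, here is a minimal sketch of UIImage+OpenCV.mm, adapted from the standard conversion code in the OpenCV iOS tutorials (it assumes RGBA or greyscale Mats; treat it as a starting point rather than the demo project's exact file):

//UIImage+OpenCV.mm

#import "UIImage+OpenCV.h"

@implementation UIImage (UIImage_OpenCV)

+ (UIImage *)imageFromCVMat:(cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];

    //greyscale Mats get a grey colour space, everything else RGB
    CGColorSpaceRef colorSpace = (cvMat.elemSize() == 1)
        ? CGColorSpaceCreateDeviceGray()
        : CGColorSpaceCreateDeviceRGB();

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows,
                                        8, 8 * cvMat.elemSize(), cvMat.step[0],
                                        colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}

- (cv::Mat)CVMat
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); //8 bits per component, 4 channels

    //draw the UIImage straight into the Mat's pixel buffer
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, cols, rows,
                                                    8, cvMat.step[0], colorSpace,
                                                    kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

@end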

Detect cropping rectangle for UIImage with transparent background

Have you had a chance to see https://gist.github.com/spinogrizz/3549921 ?

It looks like it's exactly what you need.

Just so it's not lost, here's a copy and paste from that page:

- (UIImage *) imageByTrimmingTransparentPixels {
    int rows = self.size.height;
    int cols = self.size.width;
    int bytesPerRow = cols * sizeof(uint8_t);

    if ( rows < 2 || cols < 2 ) {
        return self;
    }

    //allocate array to hold alpha channel
    uint8_t *bitmapData = calloc(rows * cols, sizeof(uint8_t));

    //create alpha-only bitmap context
    CGContextRef contextRef = CGBitmapContextCreate(bitmapData, cols, rows, 8, bytesPerRow, NULL, kCGImageAlphaOnly);

    //draw our image on that context
    CGImageRef cgImage = self.CGImage;
    CGRect rect = CGRectMake(0, 0, cols, rows);
    CGContextDrawImage(contextRef, rect, cgImage);
    CGContextRelease(contextRef); //the alpha values now live in bitmapData

    //sum all non-transparent pixels in every row and every column
    uint16_t *rowSum = calloc(rows, sizeof(uint16_t));
    uint16_t *colSum = calloc(cols, sizeof(uint16_t));

    //enumerate through all pixels
    for ( int row = 0; row < rows; row++ ) {
        for ( int col = 0; col < cols; col++ )
        {
            if ( bitmapData[row * bytesPerRow + col] ) { //found non-transparent pixel
                rowSum[row]++;
                colSum[col]++;
            }
        }
    }

    //initialize crop insets and enumerate the rows/cols arrays until we find non-empty rows or columns
    UIEdgeInsets crop = UIEdgeInsetsMake(0, 0, 0, 0);

    for ( int i = 0; i < rows; i++ ) { //top
        if ( rowSum[i] > 0 ) {
            crop.top = i; break;
        }
    }

    for ( int i = rows - 1; i >= 0; i-- ) { //bottom (start at rows - 1 to stay in bounds)
        if ( rowSum[i] > 0 ) {
            crop.bottom = MAX(0, rows - i - 1); break;
        }
    }

    for ( int i = 0; i < cols; i++ ) { //left
        if ( colSum[i] > 0 ) {
            crop.left = i; break;
        }
    }

    for ( int i = cols - 1; i >= 0; i-- ) { //right (start at cols - 1 to stay in bounds)
        if ( colSum[i] > 0 ) {
            crop.right = MAX(0, cols - i - 1); break;
        }
    }

    free(bitmapData);
    free(colSum);
    free(rowSum);

    if ( crop.top == 0 && crop.bottom == 0 && crop.left == 0 && crop.right == 0 ) {
        //no cropping needed
        return self;
    }
    else {
        //calculate new crop bounds
        rect.origin.x += crop.left;
        rect.origin.y += crop.top;
        rect.size.width -= crop.left + crop.right;
        rect.size.height -= crop.top + crop.bottom;

        //crop it
        CGImageRef newImage = CGImageCreateWithImageInRect(cgImage, rect);

        //convert back to UIImage
        UIImage *trimmed = [UIImage imageWithCGImage:newImage];
        CGImageRelease(newImage);
        return trimmed;
    }
}
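
Assuming you declare the method in a UIImage category, as the gist does, usage is then a one-liner (sprite here is a stand-in for any UIImage with transparent padding):

UIImage *trimmed = [sprite imageByTrimmingTransparentPixels];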

iOS detect rectangles from camera with openCV

So the solution was actually pretty simple...

Instead of trying to use matImage to set imageView.image, matImage just needed to be modified in place, since the CvVideoCamera was already initialized with (and linked to) the imageView:

self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];

Finally, the function looked like this:

#ifdef __cplusplus
- (void)processImage:(cv::Mat &)matImage
{
    matImage = CVSquares::detectedSquaresInImage(matImage, self.angleTolerance, self.threshold, self.levels, self.accuracy);
}
#endif
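
For completeness, the camera wiring around that looks roughly like this (a minimal sketch using the CvVideoCamera API from OpenCV's cap_ios.h; check the property names against your OpenCV version):

//in viewDidLoad: hook the camera to the image view and to self
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];
self.videoCamera.delegate = self; //so processImage: is called for every frame
self.videoCamera.defaultFPS = 30;
[self.videoCamera start];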

UINavigationBar with shaped background

Override

- (CGSize)sizeThatFits:(CGSize)size {
    return CGSizeMake(custom_width, custom_height);
}

in order to return the size for your custom navigation bar.

Note that if you use a height that is not a multiple of 4, it will cause trouble if you hide and then show the navigation bar at any point (it gets shifted by 1 pixel from the top)
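Put together, a minimal sketch might look like this (ShapedNavigationBar is a hypothetical name, and 44 points is just UIKit's standard bar height; substitute your own shape's dimensions):

@interface ShapedNavigationBar : UINavigationBar
@end

@implementation ShapedNavigationBar

- (CGSize)sizeThatFits:(CGSize)size {
    //use a height that is a multiple of 4 to avoid the 1-pixel shift noted above
    return CGSizeMake([UIScreen mainScreen].bounds.size.width, 44.0);
}

@end

You can then install the subclass with UINavigationController's initWithNavigationBarClass:toolbarClass: initializer.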

How to mask a square image into an image with round corners in iOS?

You can use Core Graphics to create a path for a rounded rectangle with this code snippet:

static void addRoundedRectToPath(CGContextRef context, CGRect rect, float ovalWidth, float ovalHeight)
{
    float fw, fh;
    if (ovalWidth == 0 || ovalHeight == 0) {
        CGContextAddRect(context, rect);
        return;
    }
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, CGRectGetMinX(rect), CGRectGetMinY(rect));
    CGContextScaleCTM(context, ovalWidth, ovalHeight);
    fw = CGRectGetWidth(rect) / ovalWidth;
    fh = CGRectGetHeight(rect) / ovalHeight;
    CGContextMoveToPoint(context, fw, fh/2);
    CGContextAddArcToPoint(context, fw, fh, fw/2, fh, 1);
    CGContextAddArcToPoint(context, 0, fh, 0, fh/2, 1);
    CGContextAddArcToPoint(context, 0, 0, fw/2, 0, 1);
    CGContextAddArcToPoint(context, fw, 0, fw, fh/2, 1);
    CGContextClosePath(context);
    CGContextRestoreGState(context);
}

And then call CGContextClip(context); to clip it to the rectangle path. Now any drawing done, including drawing an image, will be clipped to the round rectangle shape.

As an example, assuming "image" is a UIImage, and this is in a drawRect: method:

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
//use bounds rather than frame: drawRect: draws in the view's own coordinate space
addRoundedRectToPath(context, self.bounds, 10, 10);
CGContextClip(context);
[image drawInRect:self.bounds];
CGContextRestoreGState(context);

How to cut the shape of a logo (png image) from another image programmatically in Swift?

You can do it programmatically using a CAShapeLayer mask with an even-odd fill rule:

// let's load an image for testing
let picture = UIImage(data: try! Data(contentsOf: URL(string: "http://i.stack.imgur.com/Xs4RX.jpg")!))!

// and create an image view to display it
let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: picture.size.width, height: picture.size.height))
imageView.image = picture

// now a layer for the mask
let maskLayer = CAShapeLayer()

// a path for the logo
let maskPath = CGMutablePath()

// create your logo path (I've added this circle to represent your logo path)
maskPath.addEllipse(in: CGRect(x: imageView.frame.midX - 150, y: imageView.frame.midY - 150, width: 300, height: 300))

// you will need a rectangle that covers the whole image area to intersect with your logo path
maskPath.addRect(CGRect(x: 0, y: 0, width: picture.size.width, height: picture.size.height))

// set the path on your mask layer
maskLayer.path = maskPath

// choose the even-odd fill rule so the logo shape is punched out of the covering rectangle
maskLayer.fillRule = .evenOdd   // kCAFillRuleEvenOdd in older Swift versions

// add your mask layer to the image view
imageView.layer.mask = maskLayer
imageView   // playground preview

If you need to use an image and invert the alpha of your logo programmatically, you can do it as follows using the .destinationOut blend mode:

import UIKit

extension UIImage {
    func masked(with image: UIImage, position: CGPoint? = nil, inverted: Bool = false) -> UIImage? {
        // default to centring the mask image
        let position = position ??
            CGPoint(x: (size.width - image.size.width) / 2,
                    y: (size.height - image.size.height) / 2)
        defer { UIGraphicsEndImageContext() }
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        draw(at: .zero)
        image.draw(at: position, blendMode: inverted ? .destinationOut : .destinationIn, alpha: 1)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}

let picture = UIImage(data: try! Data(contentsOf: URL(string: "http://i.stack.imgur.com/Xs4RX.jpg")!))!
let logo = UIImage(data: try! Data(contentsOf: URL(string: "https://www.dropbox.com/s/k7vk3xvcvcly1ik/chat_bubble.png?dl=1")!))!

let view = UIView(frame: UIScreen.main.bounds)
view.backgroundColor = .blue
let iv = UIImageView(frame: UIScreen.main.bounds)
iv.contentMode = .scaleAspectFill
iv.image = picture.masked(with: logo, inverted: true)
view.addSubview(iv)

Clip image to square in SwiftUI

A ZStack will help solve this by allowing us to layer views without one affecting the layout of the other.

For the text:

.frame(minWidth: 0, maxWidth: .infinity) to expand the text horizontally to its parent's size

.frame(minHeight: 0, maxHeight: .infinity) is useful in other situations

As for the image:

.aspectRatio(contentMode: .fill) to make the image maintain its aspect ratio rather than squashing to the size of its frame.

.layoutPriority(-1) to de-prioritize laying out the image to prevent it from expanding its parent (the ZStack within the ForEach in our case).

The value for layoutPriority just needs to be lower than that of the parent views, which default to 0. We have to do this because SwiftUI lays out a child before its parent, and the parent then has to deal with the child's size unless we manually prioritize differently.

The .clipped() modifier uses the bounding frame to mask the view so you'll need to set it to clip any images that aren't already 1:1 aspect ratio.

var body: some View {
    HStack {
        ForEach(0..<3, id: \.self) { index in
            ZStack {
                Image(systemName: "doc.plaintext")
                    .resizable()
                    .aspectRatio(contentMode: .fill)
                    .layoutPriority(-1)
                VStack {
                    Spacer()
                    Text("yes")
                        .frame(minWidth: 0, maxWidth: .infinity)
                        .background(Color.white)
                }
            }
            .clipped()
            .aspectRatio(1, contentMode: .fit)
            .border(Color.red)
        }
    }
}

Edit: While geometry readers are super useful, I think they should be avoided whenever possible; it's cleaner to let SwiftUI do the work. Below is my initial solution using a GeometryReader, which works just as well.

HStack {
    ForEach(0..<3, id: \.self) { index in
        ZStack {
            GeometryReader { proxy in
                Image(systemName: "pencil")
                    .resizable()
                    .scaledToFill()
                    .frame(width: proxy.size.width)
                VStack {
                    Spacer()
                    Text("yes")
                        .frame(width: proxy.size.width)
                        .background(Color.white)
                }
            }
        }
        .clipped()
        .aspectRatio(1, contentMode: .fit)
        .border(Color.red)
    }
}


