CIImage Memory Not Released

CIImage memory not released

You should wrap each iteration in an autoreleasepool block to force the release of the objects created inside its scope. This lets you manage the memory footprint more precisely.

static func blendImages(blendFrames: Int, blendMode: CIImage.BlendMode, imagePaths: [URL], completion: @escaping (_ progress: Double) -> Void) {
    var currentIndex = 0

    while currentIndex + blendFrames < imagePaths.count {
        autoreleasepool {
            // your code
        }

        currentIndex += 1
    }
}
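
For example, if each iteration loads and composites the next blendFrames images, the body of the pool could look something like this (the source-over composite and the progress math are placeholders for whatever your blend actually does):

autoreleasepool {
    // Everything created in here is released when the pool drains,
    // instead of piling up until the whole loop finishes.
    guard blendFrames > 0,
          var blended = CIImage(contentsOf: imagePaths[currentIndex]) else { return }
    for offset in 1...blendFrames {
        guard let next = CIImage(contentsOf: imagePaths[currentIndex + offset]) else { continue }
        blended = next.composited(over: blended)  // placeholder composite
    }
    completion(Double(currentIndex) / Double(imagePaths.count))
}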

You can check this post for more information.

CIImage not getting freed from memory in Swift

The solution to this problem was rather easy: there was no memory leak, and the CIImages were not being retained for longer than they should have been.

macOS "Activity Monitor" as well as Xcode's basic views for resource usage show the amount of memory that macOS thinks a process will need and thus allocates to it.

When you run a memory intensive app it'll retain that allocation, without actually using any memory, even after the memory intensive tasks have been completed and the objects removed from RAM.

Loading that many CIImages caused macOS to allocate a lot of memory to my app, in order to accommodate its reserver intensive task. Since it can't predict / recognize when that task is done, the app retains that memory allocation without actually using it. This is the case until macOS needs memory for another more current task or your app quits.
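
If you want to see how much of that allocation the process is actually using, you can ask the kernel for its physical footprint rather than reading Activity Monitor. A minimal sketch (the task_vm_info query is a standard Mach call; the helper name is mine):

import Foundation

// Sketch: returns the process's physical memory footprint in bytes,
// i.e. the memory actually in use, rather than the allocation figure
// described above. Helper name is illustrative.
func physicalFootprint() -> UInt64? {
    var info = task_vm_info_data_t()
    var count = mach_msg_type_number_t(MemoryLayout<task_vm_info_data_t>.size
                                       / MemoryLayout<integer_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { rawPtr in
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), rawPtr, &count)
        }
    }
    return kr == KERN_SUCCESS ? info.phys_footprint : nil
}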

CIImage memory leak

I see the same leak you're seeing when profiling the code. Try this instead, which seems to avoid the leak and gives you the same results (it wraps the output CIImage in a UIImage directly, instead of rendering a CGImage through a CIContext):

- (UIImage *)blurImage:(UIImage *)image withStrength:(float)strength
{
    @autoreleasepool {
        CIImage *inputImage = [[CIImage alloc] initWithCGImage:image.CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
        [filter setValue:inputImage forKey:@"inputImage"];
        [filter setValue:[NSNumber numberWithFloat:strength] forKey:@"inputRadius"];

        CIImage *result = [filter valueForKey:kCIOutputImageKey];
        float scale = [[UIScreen mainScreen] scale];
        // Crop away the blur's edge expansion so the output matches the input size.
        CIImage *cropped = [result imageByCroppingToRect:CGRectMake(0, 0, image.size.width * scale, image.size.height * scale)];

        // Wrap the CIImage directly; no CIContext or createCGImage involved.
        return [[UIImage alloc] initWithCIImage:cropped];
    }
}

CIContext createCGImage memory leak?

So the problem was actually in the self.textDetector?.detect(in: visionImage, ...) call: it kept a strong reference to the visionImage.

I wasn't able to fix that directly, but I was able to work around the issue by letting VisionImage take the rotation into account instead of rotating the image myself.

I ended up with this working code:

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Only run when the previous frame has finished processing
    guard self.currentPixelBuffer == nil else { return } // , case .normal = frame.camera.trackingState
    self.currentPixelBuffer = frame.capturedImage

    guard let currentPixelBuffer = self.currentPixelBuffer else { return }
    let visionImage = VisionImage(buffer: self.getCMSampleBuffer(pixelBuffer: currentPixelBuffer))

    // Let VisionImage handle the rotation instead of rotating the buffer ourselves
    let metadata = VisionImageMetadata()
    switch UIApplication.shared.statusBarOrientation {
    case .landscapeLeft:
        metadata.orientation = .bottomRight
    case .landscapeRight:
        metadata.orientation = .topLeft
    case .portrait:
        metadata.orientation = .rightTop
    case .portraitUpsideDown:
        metadata.orientation = .leftBottom
    default:
        metadata.orientation = .topLeft
    }

    visionImage.metadata = metadata

    self.backgroundQueue.async {
        self.textDetector?.detect(in: visionImage, completion: { [weak self] (features, error) in
            // The original answer is truncated here; presumably the results are
            // handled and currentPixelBuffer is cleared so the guard above lets
            // the next frame through.
            self?.currentPixelBuffer = nil
        })
    }
}
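
The getCMSampleBuffer(pixelBuffer:) helper isn't shown in the answer. A minimal sketch of such a wrapper, using CoreMedia's format-description and sample-buffer calls (the timing values are placeholder assumptions, which is usually fine for detection):

import CoreMedia

// Sketch of the helper the snippet assumes: wraps a CVPixelBuffer in a
// CMSampleBuffer. Force-unwraps are kept for brevity.
func getCMSampleBuffer(pixelBuffer: CVPixelBuffer) -> CMSampleBuffer {
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    // Placeholder timing; detection doesn't depend on real timestamps.
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: .zero,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                       imageBuffer: pixelBuffer,
                                       dataReady: true,
                                       makeDataReadyCallback: nil,
                                       refcon: nil,
                                       formatDescription: formatDescription!,
                                       sampleTiming: &timing,
                                       sampleBufferOut: &sampleBuffer)
    return sampleBuffer!
}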

CIDetector isn't releasing memory

I came across the same issue and it seems to be a bug (or maybe by design, for caching purposes) with reusing a CIDetector.

I was able to get around it by not reusing the CIDetector, instead instantiating one as needed and then releasing it (or, in ARC terms, just not keeping a reference around) when the detection is completed. There is some cost to doing this, but if you are doing the detection on a background thread as you said, that cost is probably worth it when compared to unbounded memory growth.

Perhaps a better solution, if you are detecting multiple images in a row, would be to create one detector and use it for all of them (or, if the growth is too large, release it and create a new one every N images; you'll have to experiment to see what N should be).
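
As a rough sketch of that batching idea (the detector type, options, function name, and default interval are illustrative, not from the original answer):

import CoreImage

// Sketch: recreate the detector every `recycleInterval` images so whatever
// it caches internally is released periodically. Profile to pick N.
func detectFeatures(in images: [CIImage], recycleInterval: Int = 50) -> [[CIFeature]] {
    var detector: CIDetector?
    var results: [[CIFeature]] = []
    for (index, image) in images.enumerated() {
        if index % recycleInterval == 0 {
            // Dropping the old reference lets ARC release the previous detector.
            detector = CIDetector(ofType: CIDetectorTypeFace,
                                  context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        }
        autoreleasepool {
            results.append(detector?.features(in: image) ?? [])
        }
    }
    return results
}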

I've filed a Radar bug about this issue with Apple: http://openradar.appspot.com/radar?id=6645353252126720


