iOS UIImage Storage Formats, Memory Usage and Encoding/Decoding

I created a test app on an iPad 2 that loaded 200 JPEG 2000 image files of 384x384 pixels each (117,964,800 bytes worth of raw pixels) using the following three methods: [UIImage imageNamed:], [UIImage imageWithContentsOfFile:] and [UIImage imageWithData:]. The JPEG 2000 file set was 100 textures, which I then copied into an extra 100 files with a "copy" suffix to see whether iOS does any duplicate file checking, which it does. More on that below.

The test was done in two steps:

  1. Simply load the images and store them in an array.
  2. A separate button creates UIImageViews for each image and displays them.
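
For reference, a minimal sketch of that harness in modern Swift (names like textureNames are hypothetical stand-ins, not the original test code):

import UIKit

// Hypothetical harness sketch; `textureNames` stands in for the
// 100 bundle textures plus their 100 "copy" duplicates.
let textureNames: [String] = (0..<100).flatMap { ["tex\($0)", "tex\($0) copy"] }
var images: [UIImage] = []

// Step 1: load the images and hold them in an array.
// Swap in one of the three loading methods per test run.
func loadImages() {
    for name in textureNames {
        if let image = UIImage(named: name) {                 // [UIImage imageNamed:]
            images.append(image)
        }
        // [UIImage imageWithContentsOfFile:] variant:
        //   Bundle.main.path(forResource: name, ofType: "jp2")
        //     .flatMap(UIImage.init(contentsOfFile:))
        // [UIImage imageWithData:] variant:
        //   Bundle.main.url(forResource: name, withExtension: "jp2")
        //     .flatMap { try? Data(contentsOf: $0) }
        //     .flatMap(UIImage.init(data:))
    }
}

// Step 2: create a UIImageView per image; this is what forces decoding.
func displayImages(in container: UIView) {
    for image in images {
        container.addSubview(UIImageView(image: image))
    }
}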

Here are the results:

[UIImage imageNamed:]

Step 1: Memory only increased by about the sum total of all the JPEG 2000 files (which were about 50K each, so memory went up by about 5 MB). I assume the duplicate files were consolidated by iOS rather than loaded separately, as memory would have gone up by about 10 MB if there were no duplicate checking.

Step 2: Memory went up significantly (to about 200 MB), presumably because the images were decoded into BGRA format in preparation to display in UIImageView. It looks like there was no duplicate filtering at this stage and that separate raw memory was allocated for every image. I'm not sure why, but this was about 80 MB more than the actual raw pixel memory usage should have been.

[UIImage imageWithContentsOfFile:]

Step 1: Memory usage was identical to [UIImage imageNamed:], so there was duplicate filtering at this stage.

Step 2: Memory usage went up to 130 MB. For some reason this is 70 MB lower than [UIImage imageNamed:]. This number is much closer to the expected amount of raw pixel memory for the 200 images.

[UIImage imageWithData:]

([NSData dataWithContentsOfFile:] was used first to load the data.)

Step 1: Memory usage was 15 MB. I assume there was no duplicate filtering here as this is close to the total file size for all the jpeg2000 data.

Step 2: Memory usage went up to 139 MB. This is more than [UIImage imageWithContentsOfFile:], but not by much.

Summary

iOS appears to hold only a reference to the compressed data for a UIImage loaded with any of the above three methods until the raw pixels are actually needed, at which point the image is decoded.

[UIImage imageNamed:] never deallocated the memory because of all my image views referencing the images. It would have deallocated memory of non-referenced images had I staggered the loading and allowed the run loop to execute. One advantage is that repeated [UIImage imageNamed:] calls to the same image were essentially free. Do not use this method for anything other than GUI images, or you may run out of memory.

[UIImage imageWithContentsOfFile:] behaves like [UIImage imageNamed:] in memory usage until the raw pixels are needed, at which point it is much more efficient in memory usage for some reason. This method also causes the memory to be freed immediately when the UIImage is deallocated. Repeated calls to [UIImage imageWithContentsOfFile:] with the same file appear to use a cached copy until all the UIImages referencing the file are deallocated.

[UIImage imageWithData:] does no caching or duplicate checking and always creates a new image.

I tested the same set as PNG files, and the step 1 results for imageNamed and imageWithContentsOfFile showed even less memory being used (about 0.5 MB), while imageWithData showed the sum total of all the PNG files compressed. My guess is that iOS simply stores a reference to the file and doesn't do anything else with it until decoding time. The step 2 results for PNG were identical.
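
One practical consequence of this deferred decoding: if you would rather pay the decode cost at load time than at display time, you can force it by drawing the image into an offscreen context. A minimal sketch (this helper is my addition, not part of the original test):

import UIKit

// Force-decode a UIImage by drawing it offscreen, so the decompressed
// bitmap exists before the image is handed to a UIImageView.
func predecoded(_ image: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(at: .zero)
    }
}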

Reducing peak memory consumption

I think your problem is that you are doing all the snapshots in the same context (the scope that the for loop is in). I believe the memory is not released until that context ends, which is when the graph goes down.

I would suggest you reduce the scope of the context: instead of using a for loop to draw all the frames, keep track of progress with some ivars and draw just one frame; whenever you finish rendering a frame, call the function again with dispatch_after and update the variables (see the sketch below). Even if the delay is 0, it will allow the scope to end and the memory that is no longer being used to be cleaned up.

PS: By "context" I don't mean a graphics context; I mean a certain scope in your code.
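
A minimal sketch of that pattern in Swift (the class and renderFrame(_:) helper are hypothetical names; the autoreleasepool is my addition, in the same spirit of releasing each frame's temporaries early):

import UIKit

// Render one frame per pass through the run loop, instead of all
// frames inside a single for loop.
final class SnapshotRenderer {
    private var frameIndex = 0
    private let frameCount: Int

    init(frameCount: Int) { self.frameCount = frameCount }

    func renderNextFrame() {
        guard frameIndex < frameCount else { return }

        // Release this frame's temporary objects before the next frame starts.
        autoreleasepool {
            renderFrame(frameIndex)
        }
        frameIndex += 1

        // Even with a zero delay, returning to the run loop lets the
        // enclosing scope end and its memory be reclaimed.
        DispatchQueue.main.asyncAfter(deadline: .now()) { [weak self] in
            self?.renderNextFrame()
        }
    }

    private func renderFrame(_ index: Int) {
        // ... draw/snapshot the frame for `index` here ...
    }
}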

How to conform UIImage to Codable?

A solution: roll your own wrapper type conforming to Codable.

One solution, since extensions to UIImage are out, is to wrap the image in a new type you own. Otherwise, your attempt is basically on the right track. I saw this done beautifully in a caching framework by Hyper Interactive called, well, Cache.

Though you'll need to visit the library to drill down into the dependencies, you can get the idea from looking at their ImageWrapper class, which is built to be used like so:

let wrapper = ImageWrapper(image: starIconImage)
try? theCache.setObject(wrapper, forKey: "star")

let iconWrapper = try? theCache.object(ofType: ImageWrapper.self, forKey: "star")
let icon = iconWrapper?.image   // optional chaining, since the lookup can fail

Here is their wrapper class:

// Swift 4.0
public struct ImageWrapper: Codable {
    public let image: Image

    public enum CodingKeys: String, CodingKey {
        case image
    }

    // Image is a standard UI/NSImage conditional typealias
    public init(image: Image) {
        self.image = image
    }

    public init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        let data = try container.decode(Data.self, forKey: CodingKeys.image)
        guard let image = Image(data: data) else {
            throw StorageError.decodingFailed
        }

        self.image = image
    }

    // cache_toData() wraps UIImagePNG/JPEGRepresentation around some
    // conditional logic with some whipped cream and sprinkles.
    public func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        guard let data = image.cache_toData() else {
            throw StorageError.encodingFailed
        }

        try container.encode(data, forKey: CodingKeys.image)
    }
}

I'd love to hear what you end up using.

UPDATE: It turns out the OP wrote the code that I referenced (the Swift 4.0 update to Cache) to solve the problem. The code deserves to be up here, of course, but I'll also leave my words unedited for the dramatic irony of it all. :)

Memory performance of storing image as base64 in UserDefaults

Step 1: Don't convert back and forth to base64. As Matt says, there's no reason for it. Your various storage options support binary data, so store it directly as binary data. (Data, even, since there are methods for writing Data objects to files in various formats.)

Step 2: Don't store large objects in UserDefaults. UserDefaults is intended to store small things like switch settings. Instead use a file, either in the Documents or the Caches directory.
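
A minimal sketch of that approach, assuming PNG data and an arbitrary file name (both my choices):

import UIKit

// Write the image as binary data to the Caches directory instead of UserDefaults.
func cacheImageData(_ image: UIImage) throws -> URL {
    guard let data = image.pngData() else {
        throw CocoaError(.fileWriteUnknown)   // any error type works here
    }
    let cachesURL = try FileManager.default.url(for: .cachesDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    let fileURL = cachesURL.appendingPathComponent("snapshot.png")
    try data.write(to: fileURL, options: .atomic)
    return fileURL
}

// Reading it back:
func loadCachedImage(from url: URL) -> UIImage? {
    guard let data = try? Data(contentsOf: url) else { return nil }
    return UIImage(data: data)
}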

Swift 2: retrieving images from Firebase

Instead of

let decodedData = NSData(base64EncodedString: self.base64String as String,
                         options: NSDataBase64DecodingOptions())

try adding IgnoreUnknownCharacters:

let decodedData = NSData(base64EncodedString: self.base64String as String,
                         options: NSDataBase64DecodingOptions.IgnoreUnknownCharacters)

Usage example: encode a JPEG, store it in Firebase, and read it back.

Encode and write our favorite starship:

if let image = NSImage(named: "Enterprise.jpeg") {
    let imageData = image.TIFFRepresentation
    let base64String = imageData!.base64EncodedStringWithOptions(.Encoding64CharacterLineLength)
    let imageRef = myRootRef.childByAppendingPath("image_path")
    imageRef.setValue(base64String)
}

Read and decode:

imageRef.observeEventType(.Value, withBlock: { snapshot in
    let base64EncodedString = snapshot.value
    let imageData = NSData(base64EncodedString: base64EncodedString as! String,
                           options: NSDataBase64DecodingOptions.IgnoreUnknownCharacters)
    let decodedImage = NSImage(data: imageData!)
    self.myImageView.image = decodedImage
}, withCancelBlock: { error in
    print(error.description)
})

EDIT 2019_05_17

Update to Swift 5 and Firebase 6

func writeImage() {
    if let image = NSImage(named: "Enterprise.jpg") {
        let imageData = image.tiffRepresentation
        if let base64String = imageData?.base64EncodedString() {
            let imageRef = self.ref.child("image_path")
            imageRef.setValue(base64String)
        }
    }
}

func readImage() {
    let imageRef = self.ref.child("image_path")
    imageRef.observeSingleEvent(of: .value, with: { snapshot in
        let base64EncodedString = snapshot.value as! String
        let imageData = Data(base64Encoded: base64EncodedString,
                             options: .ignoreUnknownCharacters)!
        let decodedImage = NSImage(data: imageData)
        self.myImageView.image = decodedImage
    })
}

