Creating Gif Image Color Maps in iOS 11

This is a bug in iOS 11.0 and 11.1; Apple fixed it in iOS 11.2 and later.

iOS Colors Incorrect When Saving Animated Gif

It seems that turning off the global color map fixes the problem:

let loopingProperty: [String: AnyObject] = [
    kCGImagePropertyGIFLoopCount as String: 0 as NSNumber,
    kCGImagePropertyGIFHasGlobalColorMap as String: false as NSNumber
]
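
For context, here is a minimal sketch of how these properties get applied: they are wrapped in a kCGImagePropertyGIFDictionary and set on the image destination. The frame count and output URL here are placeholders for illustration, not from the original code.

import ImageIO
import MobileCoreServices

// Placeholders: substitute your own frame count and output location.
let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("animation.gif")
let frameCount = 10
let destination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeGIF, frameCount, nil)!

let fileProperties: [String: Any] = [
    kCGImagePropertyGIFDictionary as String: loopingProperty
]
CGImageDestinationSetProperties(destination, fileProperties as CFDictionary)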

Note that unlike PNG, a GIF is limited to a 256-color palette with at most single-bit transparency (no alpha channel). An animated GIF can have either one global color map or a separate color map per frame.

Unfortunately, Core Graphics does not let us work with color maps directly, so some automatic color conversion happens when the GIF is encoded.

It seems that turning off the global color map is all that is needed. Setting up a color map explicitly for every frame using kCGImagePropertyGIFImageColorMap would probably work too.

Since that does not seem to work reliably, let's create our own color map for every frame:

struct Color: Hashable {
    let red: UInt8
    let green: UInt8
    let blue: UInt8

    // A simple hash; it collides often (e.g. for permuted channels),
    // but is valid as long as equal colors hash equally.
    var hashValue: Int {
        return Int(red) + Int(green) + Int(blue)
    }

    static func ==(lhs: Color, rhs: Color) -> Bool {
        return lhs.red == rhs.red && lhs.green == rhs.green && lhs.blue == rhs.blue
    }
}

struct ColorMap {
    var colors = Set<Color>()

    // Pack the palette as consecutive RGB triples, the layout
    // kCGImagePropertyGIFImageColorMap expects.
    var exported: Data {
        let bytes = colors
            .map { [$0.red, $0.green, $0.blue] }
            .joined()

        return Data(bytes: Array(bytes))
    }
}
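
A quick usage example with hypothetical values, to show the exported layout (three bytes per distinct color, in RGB order):

var map = ColorMap()
map.colors.insert(Color(red: 255, green: 0, blue: 0))   // red
map.colors.insert(Color(red: 0, green: 0, blue: 255))   // blue
let palette = map.exported
// palette.count == 6: one RGB triple per distinct color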

Now let's update our methods:

func getScaledImages(_ scale: Int) -> [(CGImage, ColorMap)] {
    var sourceImages = [UIImage]()
    var result: [(CGImage, ColorMap)] = []

    ...

    var colorMap = ColorMap()
    let pixelData = imageRef.dataProvider!.data
    let rawData: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

    for y in 0 ..< imageRef.height {
        // repeat each row `scale` times
        for _ in 0 ..< scale {
            for x in 0 ..< imageRef.width {
                let offset = y * imageRef.width * 4 + x * 4

                let color = Color(red: rawData[offset], green: rawData[offset + 1], blue: rawData[offset + 2])
                colorMap.colors.insert(color)

                // repeat each pixel `scale` times
                for _ in 0 ..< scale {
                    pixelPointer[byteIndex] = rawData[offset]
                    pixelPointer[byteIndex + 1] = rawData[offset + 1]
                    pixelPointer[byteIndex + 2] = rawData[offset + 2]
                    pixelPointer[byteIndex + 3] = rawData[offset + 3]

                    byteIndex += 4
                }
            }
        }
    }

    let cgImage = context.makeImage()!
    result.append((cgImage, colorMap))

and

func createAnimatedGifFromImages(_ images: [(CGImage, ColorMap)]) -> URL {

    ...

    for (image, colorMap) in images {
        let frameProperties: [String: AnyObject] = [
            String(kCGImagePropertyGIFDelayTime): 0.2 as NSNumber,
            String(kCGImagePropertyGIFImageColorMap): colorMap.exported as NSData
        ]

        let properties: [String: AnyObject] = [
            String(kCGImagePropertyGIFDictionary): frameProperties as AnyObject
        ]

        CGImageDestinationAddImage(destination, image, properties as CFDictionary)
    }

Of course, this will only work if each frame contains at most 256 colors. Beyond that, I would recommend a dedicated GIF library that handles the color quantization correctly.
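
As a stopgap, one option is to attach the explicit palette only when the frame fits within the limit, and let ImageIO quantize otherwise. A sketch using the names from the code above (the helper itself is hypothetical):

// Hypothetical helper: fall back to ImageIO's automatic quantization
// when a frame has more than 256 distinct colors.
func frameProperties(for colorMap: ColorMap) -> CFDictionary {
    var gifProperties: [String: Any] = [
        kCGImagePropertyGIFDelayTime as String: 0.2
    ]
    if colorMap.colors.count <= 256 {
        gifProperties[kCGImagePropertyGIFImageColorMap as String] = colorMap.exported
    }
    return [kCGImagePropertyGIFDictionary as String: gifProperties] as CFDictionary
}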

Creating a large GIF with CGImageDestinationFinalize - running out of memory

You can use AVFoundation to write a video with your images. I've uploaded a complete working test project to this GitHub repository. When you run the test project in the simulator, it will print a file path to the debug console. Open that path in your video player to check the output.

I'll walk through the important parts of the code in this answer.

Start by creating an AVAssetWriter. I'd give it the AVFileTypeAppleM4V file type so that the video works on iOS devices.

AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:self.url fileType:AVFileTypeAppleM4V error:&error];

Set up an output settings dictionary with the video parameters:

- (NSDictionary *)videoOutputSettings {
    return @{
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: @((size_t)size.width),
        AVVideoHeightKey: @((size_t)size.height),
        AVVideoCompressionPropertiesKey: @{
            AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline31,
            AVVideoAverageBitRateKey: @(1200000) }};
}

You can adjust the bit rate to control the size of your video file. I've chosen the codec profile pretty conservatively here (it supports some pretty old devices). You might want to choose a later profile.
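
For example, a later profile and higher bit rate might look like this in Swift (a hedged sketch; `size` is assumed to hold your video dimensions, and AVVideoCodecType.h264 requires iOS 11):

import AVFoundation

let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: Int(size.width),
    AVVideoHeightKey: Int(size.height),
    AVVideoCompressionPropertiesKey: [
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High41,
        AVVideoAverageBitRateKey: 2_000_000  // raise or lower to trade file size for quality
    ]
]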

Then create an AVAssetWriterInput with media type AVMediaTypeVideo and your output settings.

NSDictionary *outputSettings = [self videoOutputSettings];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];

Set up a pixel buffer attribute dictionary:

// fromCF is assumed to be a bridging macro along the lines of:
// #define fromCF (__bridge id)
- (NSDictionary *)pixelBufferAttributes {
    return @{
        fromCF kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
        fromCF kCVPixelBufferCGBitmapContextCompatibilityKey: @YES };
}

You don't have to specify the pixel buffer dimensions here; AVFoundation will get them from the input's output settings. The attributes I've used here are (I believe) optimal for drawing with Core Graphics.

Next, create an AVAssetWriterInputPixelBufferAdaptor for your input using the pixel buffer settings.

AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
sourcePixelBufferAttributes:[self pixelBufferAttributes]];

Add the input to the writer and tell the writer to get going:

[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
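
If you're following along in Swift, the setup so far condenses to something like this sketch (assuming `url` and the `outputSettings` dictionary above):

import AVFoundation

let writer = try AVAssetWriter(outputURL: url, fileType: .m4v)
let input = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: input,
    sourcePixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
    ])

writer.add(input)
writer.startWriting()
writer.startSession(atSourceTime: .zero)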

Next we'll tell the input how to get video frames. Yes, we can do this after we've told the writer to start writing:

    [input requestMediaDataWhenReadyOnQueue:adaptorQueue usingBlock:^{

This block is going to do everything else we need to do with AVFoundation. The input calls it each time it's ready to accept more data. It might be able to accept multiple frames in a single call, so we'll loop as long as it's ready:

        while (input.readyForMoreMediaData && self.frameGenerator.hasNextFrame) {

I'm using self.frameGenerator to actually draw the frames. I'll show that code later. The frameGenerator decides when the video is over (by returning NO from hasNextFrame). It also knows when each frame should appear on screen:

            CMTime time = self.frameGenerator.nextFramePresentationTime;

To actually draw the frame, we need to get a pixel buffer from the adaptor:

            CVPixelBufferRef buffer = 0;
            CVPixelBufferPoolRef pool = adaptor.pixelBufferPool;
            CVReturn code = CVPixelBufferPoolCreatePixelBuffer(0, pool, &buffer);
            if (code != kCVReturnSuccess) {
                errorBlock([self errorWithFormat:@"could not create pixel buffer; CoreVideo error code %ld", (long)code]);
                [input markAsFinished];
                [writer cancelWriting];
                return;
            } else {

If we couldn't get a pixel buffer, we signal an error and abort everything. If we did get a pixel buffer, we need to wrap a bitmap context around it and ask frameGenerator to draw the next frame in the context:

                CVPixelBufferLockBaseAddress(buffer, 0); {
                    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB(); {
                        CGContextRef gc = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer), CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer), 8, CVPixelBufferGetBytesPerRow(buffer), rgb, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); {
                            [self.frameGenerator drawNextFrameInContext:gc];
                        } CGContextRelease(gc);
                    } CGColorSpaceRelease(rgb);

Now we can append the buffer to the video. The adaptor does that:

                    [adaptor appendPixelBuffer:buffer withPresentationTime:time];
                } CVPixelBufferUnlockBaseAddress(buffer, 0);
            } CVPixelBufferRelease(buffer);
        }

The loop above pushes frames through the adaptor until either the input says it's had enough, or until frameGenerator says it's out of frames. If the frameGenerator has more frames, we just return, and the input will call us again when it's ready for more frames:

        if (self.frameGenerator.hasNextFrame) {
            return;
        }

If the frameGenerator is out of frames, we shut down the input:

        [input markAsFinished];

And then we tell the writer to finish. It'll call a completion handler when it's done:

        [writer finishWritingWithCompletionHandler:^{
            if (writer.status == AVAssetWriterStatusFailed) {
                errorBlock(writer.error);
            } else {
                dispatch_async(dispatch_get_main_queue(), doneBlock);
            }
        }];
    }];
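
For reference, the same frame-pumping logic condenses in Swift to roughly the following (a hedged sketch; `writer`, `input`, `adaptor`, `adaptorQueue`, `frameGenerator`, `errorBlock`, and `doneBlock` are assumed counterparts of the Objective-C code above):

import AVFoundation

input.requestMediaDataWhenReady(on: adaptorQueue) {
    while input.isReadyForMoreMediaData && frameGenerator.hasNextFrame {
        let time = frameGenerator.nextFramePresentationTime

        // Get a pixel buffer from the adaptor's pool; abort on failure.
        var buffer: CVPixelBuffer?
        guard let pool = adaptor.pixelBufferPool,
              CVPixelBufferPoolCreatePixelBuffer(nil, pool, &buffer) == kCVReturnSuccess,
              let pixelBuffer = buffer else {
            input.markAsFinished()
            writer.cancelWriting()
            return
        }

        // Wrap a bitmap context around the buffer and draw the frame.
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        if let gc = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                              width: CVPixelBufferGetWidth(pixelBuffer),
                              height: CVPixelBufferGetHeight(pixelBuffer),
                              bitsPerComponent: 8,
                              bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue
                                  | CGImageAlphaInfo.premultipliedFirst.rawValue) {
            frameGenerator.drawNextFrame(in: gc)
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        _ = adaptor.append(pixelBuffer, withPresentationTime: time)
    }

    if frameGenerator.hasNextFrame { return }

    input.markAsFinished()
    writer.finishWriting {
        if writer.status == .failed, let error = writer.error {
            errorBlock(error)
        } else {
            DispatchQueue.main.async(execute: doneBlock)
        }
    }
}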

By comparison, generating the frames is pretty straightforward. Here's the protocol the generator adopts:

@protocol DqdFrameGenerator <NSObject>

@required

// You should return the same size every time I ask for it.
@property (nonatomic, readonly) CGSize frameSize;

// I'll ask for frames in a loop. On each pass through the loop, I'll start by asking if you have any more frames:
@property (nonatomic, readonly) BOOL hasNextFrame;

// If you say NO, I'll stop asking and end the video.

// If you say YES, I'll ask for the presentation time of the next frame:
@property (nonatomic, readonly) CMTime nextFramePresentationTime;

// Then I'll ask you to draw the next frame into a bitmap graphics context:
- (void)drawNextFrameInContext:(CGContextRef)gc;

// Then I'll go back to the top of the loop.

@end
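
The same contract in Swift, matching the sketch earlier (the names here are my own, not from the original project):

import CoreGraphics
import CoreMedia

protocol FrameGenerator {
    // Return the same size every time it is asked for.
    var frameSize: CGSize { get }

    // Polled in a loop; returning false ends the video.
    var hasNextFrame: Bool { get }

    // Presentation time of the next frame.
    var nextFramePresentationTime: CMTime { get }

    // Draw the next frame into a bitmap graphics context.
    func drawNextFrame(in context: CGContext)
}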

For my test, I draw a background image, and slowly cover it up with solid red as the video progresses.

@implementation TestFrameGenerator {
    UIImage *baseImage;
    CMTime nextTime;
}

- (instancetype)init {
    if (self = [super init]) {
        baseImage = [UIImage imageNamed:@"baseImage.jpg"];
        _totalFramesCount = 100;
        nextTime = CMTimeMake(0, 30);
    }
    return self;
}

- (CGSize)frameSize {
    return baseImage.size;
}

- (BOOL)hasNextFrame {
    return self.framesEmittedCount < self.totalFramesCount;
}

- (CMTime)nextFramePresentationTime {
    return nextTime;
}

Core Graphics puts the origin in the lower left corner of the bitmap context, but I'm using a UIImage, and UIKit likes to have the origin in the upper left.

- (void)drawNextFrameInContext:(CGContextRef)gc {
    CGContextTranslateCTM(gc, 0, baseImage.size.height);
    CGContextScaleCTM(gc, 1, -1);
    UIGraphicsPushContext(gc); {
        [baseImage drawAtPoint:CGPointZero];

        [[UIColor redColor] setFill];
        UIRectFill(CGRectMake(0, 0, baseImage.size.width, baseImage.size.height * self.framesEmittedCount / self.totalFramesCount));
    } UIGraphicsPopContext();

    ++_framesEmittedCount;

I call a callback that my test program uses to update a progress indicator:

    if (self.frameGeneratedCallback != nil) {
        dispatch_async(dispatch_get_main_queue(), ^{
            self.frameGeneratedCallback();
        });
    }

Finally, to demonstrate variable frame rate, I emit the first half of the frames at 30 frames per second, and the second half at 15 frames per second. With the timescale of 30 set in init, incrementing the time's value by 1 advances the clock 1/30 second, and by 2 advances it 2/30 second:

    if (self.framesEmittedCount < self.totalFramesCount / 2) {
        nextTime.value += 1;
    } else {
        nextTime.value += 2;
    }
}

@end

How to set custom annotation markers (animated rings around a point) on GMSMapView

In the map view's layerForAnnotation: delegate method:

- (RMMapLayer *)mapView:(RMMapView *)mapView layerForAnnotation:(RMAnnotation *)annotation
{
    // `marker` is assumed to be an RMMapLayer created earlier in this method.
    UIImageView *pulseRingImg = [[UIImageView alloc] initWithFrame:CGRectMake(-30, -30, 78, 78)];
    pulseRingImg.image = [UIImage imageNamed:@"PulseRing.png"];
    pulseRingImg.userInteractionEnabled = NO;

    CABasicAnimation *theAnimation;
    theAnimation = [CABasicAnimation animationWithKeyPath:@"transform.scale.xy"];
    theAnimation.duration = 2.0;
    theAnimation.repeatCount = HUGE_VALF;
    theAnimation.autoreverses = NO;
    theAnimation.fromValue = [NSNumber numberWithFloat:0.0];
    theAnimation.toValue = [NSNumber numberWithFloat:1.0];
    [pulseRingImg.layer addAnimation:theAnimation forKey:@"pulse"];

    [mapView addSubview:pulseRingImg];
    [marker addSublayer:pulseRingImg.layer];

    return marker;
}

The PulseRing.png used in [UIImage imageNamed:@"PulseRing.png"] is a ring image (shown in the original post).

Reference:

ios - how to do a native "Pulse effect" animation on a UIButton

CABasicAnimation *theAnimation;

theAnimation=[CABasicAnimation animationWithKeyPath:@"opacity"];
theAnimation.duration=1.0;
theAnimation.repeatCount=HUGE_VALF;
theAnimation.autoreverses=YES;
theAnimation.fromValue=[NSNumber numberWithFloat:1.0];
theAnimation.toValue=[NSNumber numberWithFloat:0.0];
[myButton.layer addAnimation:theAnimation forKey:@"animateOpacity"];
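
The same opacity pulse in Swift, for reference (`myButton` is assumed):

let pulse = CABasicAnimation(keyPath: "opacity")
pulse.duration = 1.0
pulse.repeatCount = .infinity
pulse.autoreverses = true
pulse.fromValue = 1.0
pulse.toValue = 0.0
myButton.layer.add(pulse, forKey: "animateOpacity")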

Create a UIImage programmatically to show a marker on Google Maps?

Thanks to @knshn for giving me the link in a comment. Here is my solution:

- (UIImage *)getImage:(UIImage *)icon stop:(NSString *)stopNumber color:(UIColor *)color
{
    // create label
    UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, icon.size.width, icon.size.height)];
    [label setText:stopNumber];
    [label setTextColor:color];
    [label setFont:[UIFont boldSystemFontOfSize:11]];
    label.textAlignment = NSTextAlignmentCenter;

    // use UIGraphicsBeginImageContext() to draw them on top of each other

    // start drawing
    UIGraphicsBeginImageContext(icon.size);

    // draw image
    [icon drawInRect:CGRectMake(0, 0, icon.size.width, icon.size.height)];

    // draw label
    [label drawTextInRect:CGRectMake((icon.size.width - label.frame.size.width) / 2, -5, label.frame.size.width, label.frame.size.height)];

    // get the final image
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();
    return resultImage;
}

Swift version:

func getImage(_ icon: UIImage?, stop stopNumber: String?, color: UIColor?) -> UIImage? {
    // create label
    // (FontFamily.Metropolis is a project-specific generated font; substitute your own.)
    let label = UILabel(frame: CGRect(x: 0, y: 0, width: icon?.size.width ?? 0.0, height: icon?.size.height ?? 0.0))
    label.text = stopNumber
    label.textColor = color
    label.font = FontFamily.Metropolis.semiBold.font(size: 15)
    label.textAlignment = .center

    // start drawing
    UIGraphicsBeginImageContext(icon?.size ?? CGSize.zero)

    // draw image
    icon?.draw(in: CGRect(x: 0, y: 0, width: icon?.size.width ?? 0.0, height: icon?.size.height ?? 0.0))

    // draw label
    label.drawText(in: CGRect(x: ((icon?.size.width ?? 0.0) - label.frame.size.width) / 2, y: -3, width: label.frame.size.width, height: label.frame.size.height))

    // get the final image
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()

    UIGraphicsEndImageContext()
    return resultImage
}
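
Hypothetical usage with the Google Maps SDK (`mapView`, `coordinate`, and the "stopPin" asset are assumptions for illustration):

let marker = GMSMarker(position: coordinate)
marker.icon = getImage(UIImage(named: "stopPin"), stop: "12", color: .white)
marker.map = mapView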

Pulse ring animation around a Google Maps marker iOS

It is working now: I created a custom view, set it as the GMSMarker's iconView, and then added the animation to the view's layer.

UIView *view = [[UIView alloc] initWithFrame:CGRectMake(200, 200, 100, 100)];
view.backgroundColor = [UIColor redColor];
view.layer.cornerRadius = 50;

GMSMarker *m = [GMSMarker markerWithPosition:mapView_.myLocation.coordinate];
m.iconView = view;
m.map = mapView_;

CABasicAnimation *scaleAnimation = [CABasicAnimation animationWithKeyPath:@"transform.scale"];
scaleAnimation.duration = 1.5;
scaleAnimation.repeatCount = HUGE_VAL;
scaleAnimation.autoreverses = YES;
scaleAnimation.fromValue = [NSNumber numberWithFloat:0.1];
scaleAnimation.toValue = [NSNumber numberWithFloat:1.2];

[view.layer addAnimation:scaleAnimation forKey:@"scale"];

Another method:


GMSMarker *m = [GMSMarker markerWithPosition:mapView_.myLocation.coordinate];

//custom marker image
UIImageView *pulseRingImg = [[UIImageView alloc] initWithFrame: CGRectMake(-30, -30, 78, 78)];
pulseRingImg.image = [UIImage imageNamed:@"Pulse"];
pulseRingImg.userInteractionEnabled = NO;

//transform scale animation
CABasicAnimation *theAnimation;
theAnimation = [CABasicAnimation animationWithKeyPath:@"transform.scale.xy"];
theAnimation.duration = 3.5;
theAnimation.repeatCount = HUGE_VALF;
theAnimation.autoreverses = NO;
theAnimation.fromValue = [NSNumber numberWithFloat:0.0];
theAnimation.toValue = [NSNumber numberWithFloat:2.0];

//alpha Animation for the image
CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"opacity"];
animation.duration = 3.5;
animation.repeatCount = HUGE_VALF;
animation.values = @[ @1.0f, @0.5f, @0.0f ];
// keyTimes must be fractions of the duration in 0.0 - 1.0, not seconds;
// the original 1.2 and 3.5 were seconds, so normalize by the 3.5 s duration.
animation.keyTimes = @[ @0.0f, @(1.2f / 3.5f), @1.0f ];
[pulseRingImg.layer addAnimation:animation forKey:@"opacity"];

[pulseRingImg.layer addAnimation:theAnimation forKey:@"pulse"];

m.iconView = pulseRingImg;
[m.layer addSublayer:pulseRingImg.layer];
m.map = mapView_;
m.groundAnchor = CGPointMake(0.5, 0.5);

Another one:

m = [GMSMarker markerWithPosition:mapView_.myLocation.coordinate];

//custom marker image
UIImageView *pulseRingImg = [[UIImageView alloc] initWithFrame: CGRectMake(-30, -30, 78, 78)];
pulseRingImg.image = [UIImage imageNamed:@"Pulse"];
pulseRingImg.userInteractionEnabled = NO;

float duration = 3.5f;

[CATransaction begin];
[CATransaction setAnimationDuration: duration];

//transform scale animation
CABasicAnimation *theAnimation;
theAnimation = [CABasicAnimation animationWithKeyPath:@"transform.scale.xy"];
theAnimation.repeatCount = HUGE_VALF;
theAnimation.autoreverses = NO;
theAnimation.fromValue = [NSNumber numberWithFloat:0.0];
theAnimation.toValue = [NSNumber numberWithFloat:2.0];

[pulseRingImg.layer addAnimation:theAnimation forKey:@"pulse"];

[CATransaction setCompletionBlock:^{
//alpha Animation for the image
CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"opacity"];
animation.duration = duration;
animation.repeatCount = HUGE_VALF;
animation.values = @[ @1.0f, @0.0f ];
[m.iconView.layer addAnimation:animation forKey:@"opacity"];
}];

[CATransaction commit];

m.iconView = pulseRingImg;
[m.layer addSublayer:pulseRingImg.layer];
m.map = mapView_;
m.groundAnchor = CGPointMake(0.5, 0.5);

The Swift 3.0 code is below.
NOTE: Change the duration based on your requirements.

let m = GMSMarker(position: camera.target)

// custom marker image
let pulseRingImg = UIImageView(frame: CGRect(x: -30, y: -30, width: 78, height: 78))
pulseRingImg.image = UIImage(named: "Pulse")
pulseRingImg.isUserInteractionEnabled = false

CATransaction.begin()
CATransaction.setAnimationDuration(3.5)

// transform scale animation
let theAnimation = CABasicAnimation(keyPath: "transform.scale.xy")
theAnimation.repeatCount = .infinity
theAnimation.autoreverses = false
theAnimation.fromValue = Float(0.0)
theAnimation.toValue = Float(2.0)
theAnimation.isRemovedOnCompletion = false

pulseRingImg.layer.add(theAnimation, forKey: "pulse")

CATransaction.setCompletionBlock({() -> Void in
    // alpha animation for the image; opacity runs from 1.0 down to 0.0
    // (values above 1.0 are clamped, so 1.0 is the sensible starting point)
    let animation = CAKeyframeAnimation(keyPath: "opacity")
    animation.duration = 3.5
    animation.repeatCount = .infinity
    animation.values = [Float(1.0), Float(0.0)]
    m.iconView?.layer.add(animation, forKey: "opacity")
})

CATransaction.commit()

m.iconView = pulseRingImg
m.layer.addSublayer(pulseRingImg.layer)
m.map = gmapView
m.groundAnchor = CGPoint(x: 0.5, y: 0.5)

Pulse image: a ring graphic like the PulseRing.png shown earlier (image in the original post).


