Confusion Due to Swift Lacking Implicit Conversion of CGFloat

This is a problem with Double-to-Float conversion.

On a 64-bit machine, CGFloat is defined as Double, so this compiles without problems because M_PI and x are both Doubles.

On a 32-bit machine, CGFloat is a Float but M_PI is still a Double. Unfortunately, there are no implicit casts in Swift, so you have to cast explicitly:

return (CGFloat(M_PI) * (x) / 180.0)

The type of the 180.0 literal is inferred from context (here it becomes a CGFloat).

In Swift 3

M_PI is deprecated; use CGFloat.pi instead:

return (x * .pi / 180.0)
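
For completeness, here is a minimal sketch of the Swift 3 version wrapped in a helper function (the name degreesToRadians and the sample call are illustrative, not from the original question):

import CoreGraphics

// Converts an angle in degrees to radians; .pi and the 180.0 literal
// are both inferred as CGFloat from the parameter type.
func degreesToRadians(_ degrees: CGFloat) -> CGFloat {
    return degrees * .pi / 180.0
}

let rightAngle = degreesToRadians(90)   // ≈ 1.5708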

Casting CGFloat to Float in Swift

You can use the Float() initializer:

let cgFloat: CGFloat = 3.14159
let someFloat = Float(cgFloat)

Weird casting needed in Swift

Edit: NB: CGFloat has changed in beta 4, specifically to make handling this 32/64-bit difference easier. Read the release notes and don't take the below as gospel now: it was written for beta 2.

After a clue from this answer I've worked it out: it depends on the selected project architecture. If I leave the Project architecture at the default of (armv7, arm64), then I get the same error as you with this code:

// Error with armv7 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)

...and need to cast to a Float (well, CGFloat underneath, I'm sure) to make it work:

// Works with explicit cast on armv7 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, Float(-M_PI/2), 0, 0, 1)

However, if I change the target architecture to arm64 only, then the code works as written in the Apple example from the video:

// Works fine with arm64 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)

So to answer your question, I believe this is because CGFloat is defined as Double on 64-bit architectures, so it's okay to use values derived from M_PI (which is also a Double) as a CGFloat parameter. However, when armv7 is the target, CGFloat is a single-precision Float, not a Double, so passing M_PI-derived expressions (still Doubles) directly as a CGFloat parameter would lose precision.

Note that Xcode by default will only build for the "active" architecture for Debug builds—I found it was possible to toggle this error by switching between iPhone 4S and iPhone 5S schemes in the standard drop-down in the menu bar of Xcode, as they have different architectures. I'd guess that in the demo video, there's a 64-bit architecture target selected, but in your project you've got a 32-bit architecture selected?

Given that a CGFloat is double-precision on 64-bit architectures, the simplest way of dealing with this specific problem would be to always cast to CGFloat.

But as a demonstration of dealing with this type of issue when you need to do different things on different architectures, Swift does support conditional compilation:

#if arch(x86_64) || arch(arm64)
    ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI / 2, 0, 0, 1)
#else
    ringLayer.transform = CATransform3DRotate(ringLayer.transform, CGFloat(-M_PI / 2), 0, 0, 1)
#endif

However, that's just an example. You really don't want to be doing this sort of thing all over the place, so I'd certainly stick to simply using CGFloat(<whatever POSIX double value you need>) to get either a 32- or 64-bit value depending on the target architecture.
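
For illustration, a sketch of that simpler approach applied to the same call (the quarterTurn constant is just a name chosen for this example):

// The explicit CGFloat(...) conversion compiles on both 32-bit
// (CGFloat == Float) and 64-bit (CGFloat == Double) targets.
let quarterTurn = CGFloat(-M_PI / 2)
ringLayer.transform = CATransform3DRotate(ringLayer.transform, quarterTurn, 0, 0, 1)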

Apple have added much more help for dealing with different floats in later compiler releases—for example, in early betas you couldn't even take floor() of a single-precision float easily, whereas now (currently Xcode 6.1) there are overrides for floor(), ceil(), etc. for both float and double, so you don't need to be fiddling with conditional compilation.
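
As a small sketch of what those overloads look like in practice (assuming the Xcode 6.1-era overloads for both Float and Double described above):

import Foundation

let d: Double = 3.7
let f: Float = 3.7

// Each call resolves to the overload matching its argument type,
// with no conditional compilation or manual widening required.
let flooredDouble = floor(d)   // 3.0 as Double
let flooredFloat = floor(f)    // 3.0 as Float
let ceiledFloat = ceil(f)      // 4.0 as Float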

Convert CGFloat to CFloat in Swift

For some reason I needed to make an explicit typecast to CDouble to make it work.

let roundedWidth = round(CDouble(CGRectGetWidth(self.bounds)))

I find this pretty strange, since a CGFloat is a CDouble by definition (on 64-bit platforms, at least). It seems like the compiler got a bit confused here for some reason.
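
In later Swift releases this round trip shouldn't be needed at all; a hedged sketch of the more direct form, assuming the CoreGraphics overlay's CGFloat overload of round() and the bounds.width accessor are available:

// round() accepts a CGFloat directly in later SDKs, so no CDouble
// conversion is required.
let roundedWidth = round(self.bounds.width)

// Alternatively, using the FloatingPoint API on CGFloat (Swift 3+):
let roundedWidth2 = self.bounds.width.rounded()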

Does Swift support implicit conversion?

There is no implicit casting in Swift.

The easy way to convert in Swift is to use the initializer of the target type.

For example, if you want to get a Float from a Double, you can use Float(doubleValue); likewise, if you want to convert a Float to an integer, you can use Int(floatValue).

In your case:

let intValue = UInt8(doubleValue)

Beware that you will lose everything after the decimal point, so choose the approach that fits your needs; the conversion above is just to illustrate the idea.
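
If truncation isn't what you want, a small sketch of rounding first (assuming the rounded value fits in UInt8, which would otherwise trap at runtime):

import Foundation

let doubleValue = 3.7

let truncated = UInt8(doubleValue)        // 3: the fractional part is dropped
let rounded = UInt8(round(doubleValue))   // 4: rounds to nearest before converting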

Note that Swift always chooses Double (rather than Float) when inferring the type of floating-point numbers.

Could not find an overload for '/' that accepts the supplied arguments

There are no such implicit conversions in Swift, so you'll have to explicitly convert that yourself:

average = Double(total) / Double(numbers.count)

From The Swift Programming Language: “Values are never implicitly converted to another type.” (Section: A Swift Tour)

But you're now using Swift, not Objective-C, so try to think in a more functionally oriented way. Your function can be written like this:

func getAverage(numbers: Int...) -> Double {
    let total = numbers.reduce(0, combine: { $0 + $1 })
    return Double(total) / Double(numbers.count)
}

reduce takes a first parameter as an initial value for an accumulator variable, then applies the combine function to the accumulator variable and each element in the array. Here, we pass an anonymous function that uses $0 and $1 to denote the first and second parameters it gets passed and adds them up.

Even more concisely, you can write this: numbers.reduce(0, +).

Note how type inference does a nice job of still finding out that total is an Int.
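
Putting it together, a minimal usage sketch (written with Swift 3-style syntax and illustrative sample values):

// Swift 3 spelling of the same function, using the concise reduce form.
func getAverage(numbers: Int...) -> Double {
    let total = numbers.reduce(0, +)
    return Double(total) / Double(numbers.count)
}

let average = getAverage(numbers: 2, 4, 9)   // 5.0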

Why does the Swift compiler throw: Could not find an overload for '<' that accepts the supplied arguments

The declaration

var schemaVersion: Int = (settings["schemaVersion"] as String).toInt()!

doesn't have anything to do with your error since you are redefining schemaVersion locally here:

... withSchemaBuilder: { database, schemaVersion in 

The method is defined as

+ (void)openDatabaseAtPath:(NSString *)path
withSchemaBuilder:(void (^)(FMDatabase *db, int *schemaVersion))schemaBuilder;

so I assume the type of schemaVersion in the block will be something like CMutablePointer<CInt>.

Logically, you can't compare that to an Int (1) directly. If it's really a CMutablePointer<Int>, you should use something like:

schemaVersion.withUnsafePointer { unsafeSchemaVersion in
    if unsafeSchemaVersion.memory < 1 {
        // ...
        unsafeSchemaVersion.memory = 1
    }
}

That's equivalent to the following in Obj-C:

if (*schemaVersion < 1) {
    // ...
    *schemaVersion = 1;
}
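
Note that CMutablePointer was a beta-era type; in later Swift versions the int * block parameter bridges as UnsafeMutablePointer<Int32>, so a hedged sketch of the same check (the closure variable name is illustrative) would be:

// Assuming the bridged signature (FMDatabase, UnsafeMutablePointer<Int32>) -> Void;
// .pointee replaces the old .memory property.
let schemaBuilder: (FMDatabase, UnsafeMutablePointer<Int32>) -> Void = { database, schemaVersion in
    if schemaVersion.pointee < 1 {
        // ... perform the version-1 schema setup here ...
        schemaVersion.pointee = 1
    }
}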

If you want to reference the variable you have defined outside the closure, you should rename it or rename the closure parameter.

Convert Float to Int in Swift

You can convert Float to Int in Swift like this:

var myIntValue: Int = Int(myFloatValue)
println("My value is \(myIntValue)")

You can also achieve this result with @paulm's comment:

var myIntValue = Int(myFloatValue)
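
Keep in mind that Int(_:) truncates toward zero; a brief sketch of the difference (rounded() assumes Swift 3 or later):

let myFloatValue: Float = 3.7

let truncated = Int(myFloatValue)           // 3: truncates toward zero
let negative = Int(Float(-3.7))             // -3, not -4: still truncates toward zero
let nearest = Int(myFloatValue.rounded())   // 4: rounds to nearest first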

