Swift: Casting a FloatingPoint conforming value to Double
The problem here is that you need to extend the BinaryFloatingPoint protocol instead of FloatingPoint:
extension BinaryFloatingPoint {
    // Creates a new instance from the given value if possible, otherwise returns nil.
    var double: Double? { Double(exactly: self) }
    // Creates a new instance from the given value, rounded to the closest possible representation.
    var doubleValue: Double { .init(self) }
}
Playground testing
let cgFloat: CGFloat = .pi // 3.141592653589793
let exactlyDouble = cgFloat.double // 3.141592653589793
let closestDoubleValue = cgFloat.doubleValue // 3.141592653589793
You can also create generic methods to return any floating-point type that conforms to the BinaryFloatingPoint protocol:
extension BinaryFloatingPoint {
    func floatingPoint<F: BinaryFloatingPoint>() -> F? { F(exactly: self) }
    func floatingPointValue<F: BinaryFloatingPoint>() -> F { .init(self) }
}
Playground testing
let float80pi: Float80 = .pi // 3.1415926535897932385
let closestCGFloatPiValue: CGFloat = float80pi.floatingPointValue() // 3.141592653589793
let closestDoublePiValue: Double = float80pi.floatingPointValue() // 3.141592653589793
let closestFloatPiValue: Float = float80pi.floatingPointValue() // 3.141593
let exactlyCGFloat: CGFloat? = float80pi.floatingPoint() // nil
let exactlyDouble: Double? = float80pi.floatingPoint() // nil
let exactlyFloat: Float? = float80pi.floatingPoint() // nil
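The nil results above follow from Float80 carrying more significand bits than Double (or Float) can hold. A minimal sketch of the same failable conversion, reusing the floatingPoint() extension from above but converting Double to Float so it runs on any platform:

```swift
extension BinaryFloatingPoint {
    // Failable conversion: nil when the value is not exactly representable.
    func floatingPoint<F: BinaryFloatingPoint>() -> F? { F(exactly: self) }
}

// 1.5 is exactly representable in both Float and Double,
// so the conversion succeeds:
let exactFloat: Float? = 1.5.floatingPoint()         // Optional(1.5)

// Double.pi needs more significand bits than Float provides,
// so the conversion returns nil:
let inexactFloat: Float? = Double.pi.floatingPoint() // nil
```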
Swift: Casting a generic, FloatingPoint value to Int
Simple C++ templates are little more than a macro-like, typeless source-code replacement mechanism: the calling code is replaced with the applied template, and the compiler checks whether the resulting code makes any sense. You are right, an unbounded roundedInt<T> function would work fine in C++ land.
Swift generics, instead, are type-safe by design, meaning the generic code must be sound on its own, independently of any particulars of a given call site. In your example, the FloatingPoint protocol is the type guiding the compilation (not the actual T type used by the calling code).
(By the way, Java/C# generics resemble Swift's style as well.)
Regarding your actual problem, you could simply provide two overloads of the roundedInt function:
func roundedInt(_ f: Float) -> Int { ... }
func roundedInt(_ f: Double) -> Int { ... }
which should cover most use cases. Of course, assuming you are only writing small helper functions with this... and not full blown libraries/frameworks!
Otherwise, try @Sweeper's string-based solution. But please keep in mind that besides the potential loss of precision he correctly noted, there are also some nasty performance problems lurking in there.
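If you still prefer a single generic entry point, constraining it to BinaryFloatingPoint (rather than FloatingPoint) compiles fine. The sketch below assumes rounding to the nearest integer is the desired behavior:

```swift
// A generic alternative to the overloads, constrained to
// BinaryFloatingPoint so Int(_:) is available for the conversion:
func roundedInt<T: BinaryFloatingPoint>(_ f: T) -> Int {
    Int(f.rounded())  // .toNearestOrAwayFromZero by default
}

let a = roundedInt(2.6)          // 3
let b = roundedInt(Float(-1.5))  // -2
```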
Why do conversions of Float to Double fail at runtime?
As Alexander already said, you cannot forcefully cast from Double to Float. The literal constant 0.01 has type Double in your function, and the cast as! T fails if T is Float.
A simple solution is to define the function for BinaryFloatingPoint instead, which describes a "binary floating point type" and extends the FloatingPoint protocol. All the usual floating point types (Float, Double and CGFloat) conform to BinaryFloatingPoint.
Since BinaryFloatingPoint also inherits from ExpressibleByFloatLiteral, you can now compare against the 0.01 literal:
func roughlyEq<T: BinaryFloatingPoint>(_ a: T, _ b: T) -> Bool {
    return abs(a - b) < 0.01
}
Here the type of 0.01 is inferred from the context as T.
Or as a protocol extension method:
extension BinaryFloatingPoint {
    func roughlyEq(_ other: Self) -> Bool {
        return abs(self - other) < 0.01
    }
}
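Used from a playground, the extension method infers the literal's type from Self; a quick self-contained sketch:

```swift
extension BinaryFloatingPoint {
    func roughlyEq(_ other: Self) -> Bool {
        return abs(self - other) < 0.01
    }
}

// The 0.01 literal inside the method is inferred as Self,
// so the same extension works for Double, Float, CGFloat, ...
let closeEnough = 1.0.roughlyEq(1.005)       // Double: true
let tooFarApart = Float(1.0).roughlyEq(1.2)  // Float: false
```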
swift: issue in converting string to double
There are two different issues here. First – as already mentioned in the comments – a binary floating point number cannot represent the number 8.7 precisely. Swift uses the IEEE 754 standard for representing single- and double-precision floating point numbers, and if you assign
let x = 8.7
then the closest representable number is stored in x, and that is
8.699999999999999289457264239899814128875732421875
Much more information about this can be found in the excellent Q&A Is floating point math broken?
The second issue is: why is the number sometimes printed as "8.7" and sometimes as "8.6999999999999993"?
let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)
let x = 8.7
print(x) // 8.7
Is Double("8.7") different from 8.7? Is one more precise than the other?
To answer these questions, we need to know how the print() function works:
- If an argument conforms to CustomStringConvertible, the print function calls its description property and prints the result to the standard output.
- Otherwise, if an argument conforms to CustomDebugStringConvertible, the print function calls its debugDescription property and prints the result to the standard output.
- Otherwise, some other mechanism is used. (Not important for our purpose here.)
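This dispatch can be seen with a small made-up type that conforms to both protocols; print() prefers description, while String(reflecting:) uses debugDescription:

```swift
// A made-up type conforming to both protocols:
struct Wrapper: CustomStringConvertible, CustomDebugStringConvertible {
    var description: String { "plain" }
    var debugDescription: String { "debug" }
}

print(Wrapper())                      // plain
print(String(reflecting: Wrapper()))  // debug
```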
The Double type conforms to CustomStringConvertible, therefore
let x = 8.7
print(x) // 8.7
produces the same output as
let x = 8.7
print(x.description) // 8.7
But what happens in
let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)
Double(str) is an optional, and struct Optional does not conform to CustomStringConvertible, but to CustomDebugStringConvertible. Therefore the print function calls the debugDescription property of Optional, which in turn calls the debugDescription of the underlying Double.
Therefore – apart from being an optional – the number output is the same as in
let x = 8.7
print(x.debugDescription) // 8.6999999999999993
But what is the difference between description and debugDescription for floating point values? From the Swift source code one can see that both ultimately call the swift_floatingPointToString function in Stubs.cpp, with the Debug parameter set to false and true, respectively. This controls the precision of the number-to-string conversion:
int Precision = std::numeric_limits<T>::digits10;
if (Debug) {
    Precision = std::numeric_limits<T>::max_digits10;
}
For the meaning of those constants, see http://en.cppreference.com/w/cpp/types/numeric_limits:
- digits10 – number of decimal digits that can be represented without change,
- max_digits10 – number of decimal digits necessary to differentiate all values of this type.
So description creates a string with fewer decimal digits; that string can be converted to a Double and back to a string giving the same result. debugDescription creates a string with more decimal digits, so that any two different floating point values will produce a different output.
Summary:
- Most decimal numbers cannot be represented exactly as a binary floating point value.
- The description and debugDescription methods of the floating point types use a different precision for the conversion to a string.
- As a consequence, printing an optional floating point value uses a different precision for the conversion than printing a non-optional value.
Therefore in your case, you probably want to unwrap the optional before printing it:
let str = "8.7"
if let d = Double(str) {
    print(d) // 8.7
}
For better control, use NSNumberFormatter or formatted printing with the %.<precision>f format.
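A short sketch of both options; the locale is pinned here only to make the output predictable, and the optional is force-unwrapped just for this example:

```swift
import Foundation

let str = "8.7"
let d = Double(str)!   // force-unwrap just for this sketch

// Fixed number of fraction digits via a format string:
let fixed = String(format: "%.2f", d)   // "8.70"
print(fixed)

// Or a NumberFormatter with explicit fraction digits:
let formatter = NumberFormatter()
formatter.locale = Locale(identifier: "en_US_POSIX")
formatter.minimumFractionDigits = 1
formatter.maximumFractionDigits = 3
let pretty = formatter.string(from: NSNumber(value: d)) ?? ""
print(pretty)                           // "8.7"
```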
Another option can be to use (NS)DecimalNumber instead of Double (e.g. for currency amounts), see e.g. Round Issue in swift.
Requiring, for a protocol, that an instance variable conform to a protocol, rather than have a specific type
What you're looking for is an associated type. This means exactly what you've described (the required type conforms to a protocol rather than being the existential of that protocol):
protocol CompatibleFloatingPoint {
    associatedtype BitPattern: FixedWidthInteger
    var bitPattern: BitPattern { get }
}
extension Float: CompatibleFloatingPoint {}
extension Double: CompatibleFloatingPoint {}
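A sketch of how the associated type is then used generically; bitWidth(of:) is a made-up helper, and the conformances rely on Float.bitPattern being UInt32 and Double.bitPattern being UInt64:

```swift
protocol CompatibleFloatingPoint {
    associatedtype BitPattern: FixedWidthInteger
    var bitPattern: BitPattern { get }
}

// The compiler infers BitPattern == UInt32 / UInt64 from the
// existing bitPattern properties:
extension Float: CompatibleFloatingPoint {}
extension Double: CompatibleFloatingPoint {}

// A made-up generic helper using the associated type:
func bitWidth<T: CompatibleFloatingPoint>(of value: T) -> Int {
    T.BitPattern.bitWidth
}

let floatBits = bitWidth(of: Float(1))    // 32
let doubleBits = bitWidth(of: Double(1))  // 64
```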
For more details, see Associated Types in the Swift Programming Language.
Swift 4: Array for all types of Ints and Floating point numbers
If you want two different types in one array, why don't you use a union, e.g. using an enum:
enum MyNumber {
    case integer(Int)
    case double(Double)
}
let numbers: [MyNumber] = [.integer(10), .double(2.0)]
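Reading the values back out is done by pattern matching; a small sketch summing such an array by converting each case to Double:

```swift
enum MyNumber {
    case integer(Int)
    case double(Double)
}

let mixed: [MyNumber] = [.integer(10), .double(2.5)]

// Pattern-match each case to recover a uniform Double:
let total = mixed.reduce(0.0) { sum, number in
    switch number {
    case .integer(let i): return sum + Double(i)
    case .double(let d):  return sum + d
    }
}
// total == 12.5
```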
However, in most situations it would probably be better to just convert all Int into Double and have a [Double] array.
Literal numbers in FloatingPoint protocol
The FloatingPoint protocol inherits from ExpressibleByIntegerLiteral via the inheritance chain
FloatingPoint – SignedNumeric – Numeric – ExpressibleByIntegerLiteral
and that is why the second function uprightAngle2 compiles: values of the type T are created from the integer literals 2 and 3.
The first function uprightAngle1 does not compile because the FloatingPoint protocol does not inherit from ExpressibleByFloatLiteral, i.e. values of type T cannot be created from a floating point literal like 1.5.
Possible solutions:
- Create rational values as let half: T = 1/2. (Not let half = T(1/2), that would truncate the division result before creating the T value.)
- Replace FloatingPoint by BinaryFloatingPoint (which inherits from ExpressibleByFloatLiteral).
For more information about the design of the floating point protocols see SE-0067 Enhanced Floating Point Protocols.
The floating point types in the Swift Standard Library (Double, Float, Float80) as well as CGFloat from the Core Graphics framework all conform to the BinaryFloatingPoint protocol, so this protocol is "sufficiently generic" for many applications.
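The first workaround can be sketched as follows (halfOf is a made-up example function): with let half: T = 1/2 the integer literals become T values first, so the division happens in floating point, whereas T(1/2) would perform integer division and yield zero:

```swift
// Sticking with the FloatingPoint constraint:
func halfOf<T: FloatingPoint>(_ x: T) -> T {
    let half: T = 1/2   // 1 and 2 become T values, then T division → 0.5
    return x * half
}

let h1 = halfOf(3.0)        // 1.5
let h2 = halfOf(Float(10))  // 5.0
```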
Convert String type array to Float type array in Swift: Cannot assign value of type 'String' to subscript of type 'Double'
You can use a NumberFormatter
to handle strings containing floats that use commas as their decimal separators. I generally wrap custom formatters in a class. That would look something like this:
import Foundation

class FloatFormatter {
    let formatter: NumberFormatter
    init() {
        formatter = NumberFormatter()
        formatter.decimalSeparator = ","
    }
    func float(from string: String) -> Float? {
        formatter.number(from: string)?.floatValue
    }
}
Substituting this into your example code (with a fix to the type of your float array) you get:
var numbersString = [["564,00", "577,00", "13,00"], ["563,00", "577,00", "14,00"]]
var numbersFloat: [[Float]] = [[564.00, 577.00, 13.00], [563.00, 577.00, 14.00]]
let floatFormatter = FloatFormatter()
for row in 0...numbersString.count-1 {
    for col in 0...numbersString[0].count-1 {
        numbersFloat[row][col] = floatFormatter.float(from: numbersString[row][col])!
    }
}
This works, but it's not very Swifty. Using map would be better (that way you do not need to worry about matching the sizes of your arrays and pre-allocating the float array).
let floatFormatter = FloatFormatter()
let numbersString = [["564,00", "577,00", "13,00"], ["563,00", "577,00", "14,00"]]
let numbersFloat = numbersString.map { (row: [String]) -> [Float] in
    return row.map { stringValue in
        guard let floatValue = floatFormatter.float(from: stringValue) else {
            fatalError("Failed to convert \(stringValue) to float.")
        }
        return floatValue
    }
}
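If silently dropping unparseable entries is acceptable for your data, compactMap is another option; note this changes the behavior from crashing to skipping, which is a design choice rather than a fix:

```swift
import Foundation

class FloatFormatter {
    let formatter: NumberFormatter
    init() {
        formatter = NumberFormatter()
        formatter.decimalSeparator = ","
    }
    func float(from string: String) -> Float? {
        formatter.number(from: string)?.floatValue
    }
}

let floatFormatter = FloatFormatter()
let numbersString = [["564,00", "oops", "13,00"]]

// compactMap drops values that fail to parse instead of trapping:
let numbersFloat = numbersString.map { row in
    row.compactMap { floatFormatter.float(from: $0) }
}
// numbersFloat == [[564.0, 13.0]]
```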
Is there a way to convert any generic Numeric into a Double?
Just for purposes of syntactical illustration, here's an example of making this a generic and arriving at a Double for all three types:
func f<T: Numeric>(_ i: T) {
    var d = 0.0
    switch i {
    case let ii as Int:
        d = Double(ii)
    case let ii as Int32:
        d = Double(ii)
    case let ii as Double:
        d = ii
    default:
        fatalError("oops")
    }
    print(d)
}
But whether this is better than overloading is a matter of opinion. In my view, overloading is far better, because with the generic we are letting a bunch of unwanted types in the door. The Numeric contract is a lie. A triple set of overloads for Double, Int, and Int32 would turn the compiler into a source of truth.
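For comparison, the overload set the answer favors might look like this (asDouble is a made-up name); unsupported types now fail at compile time rather than hitting fatalError at runtime:

```swift
// One overload per supported type; the compiler enforces the contract:
func asDouble(_ i: Int)    -> Double { Double(i) }
func asDouble(_ i: Int32)  -> Double { Double(i) }
func asDouble(_ d: Double) -> Double { d }

let x = asDouble(42)         // 42.0
let y = asDouble(Int32(7))   // 7.0
let z = asDouble(3.25)       // 3.25
```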