How to Convert Between Related Types Through a Common Initializer

How can I convert between related types through a common initializer?

The problem is that in your init(_ x: FloatConvertible), Swift cannot infer the concrete type of x; it only knows that it's a FloatConvertible. Therefore when you try to do Self(x), while it can infer the concrete type of Self, it doesn't know which initialiser you want to call, so it defaults to your init(_ x: FloatConvertible) initialiser, creating an infinite loop.

If you give your custom initialiser an argument name, you'll see that Swift complains that it can't find the correct initialiser:

protocol FloatConvertible {
    init(c x: FloatConvertible)
}

extension FloatConvertible {
    init(c x: FloatConvertible) {
        // error: missing argument name 'c:' in call
        // (i.e. it can't find the concrete type's initialiser)
        self.init(Self(x))
    }
}

A potential solution therefore is to resolve this at runtime by switching over the concrete types that x could be. However, this isn't nearly as good as resolving it statically, which gives you increased safety and, in some cases, increased performance.
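For illustration, such a runtime switch might look like the following sketch (the convert(_:to:) helper is hypothetical, and uses the standard BinaryFloatingPoint protocol rather than your FloatConvertible):

```swift
// Hypothetical runtime approach: switch over the concrete types.
// Every conforming type needs its own case here, and an unhandled
// type can only be caught at runtime, never at compile time.
func convert<T: BinaryFloatingPoint>(_ x: Any, to type: T.Type) -> T? {
    switch x {
    case let f as Float:  return T(f)
    case let d as Double: return T(d)
    default:              return nil // unhandled concrete type
    }
}

let f = convert(2.5, to: Float.self)
print(f as Any) // Optional(2.5)
```

The default case is exactly the weakness: forget to add a case for a new conforming type and you only find out when the conversion fails at runtime.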

In order to do this statically, you could add a generic _asOther 'shadow' function to your protocol that can convert a given floating point type to another, as well as adding the concrete types' initialisers to your protocol requirements.

This will save you from having to list out all the possible combinations of conversions – you can now just invoke _asOther from your initialiser.

protocol FloatConvertible {
    init(_ other: Float)
    init(_ other: Double)
    init(_ other: CGFloat)
    init(fromOther x: FloatConvertible)

    func _asOther<T: FloatConvertible>() -> T
}

extension FloatConvertible {
    init(fromOther x: FloatConvertible) { self = x._asOther() }
}

// note that we have to implement these for each extension,
// so that Swift uses the concrete types of self, preventing an infinite loop
extension Float: FloatConvertible {
    func _asOther<T: FloatConvertible>() -> T { return T(self) }
}

extension Double: FloatConvertible {
    func _asOther<T: FloatConvertible>() -> T { return T(self) }
}

extension CGFloat: FloatConvertible {
    func _asOther<T: FloatConvertible>() -> T { return T(self) }

    // note that CGFloat doesn't implement its own initialiser for this,
    // so we have to implement it ourselves
    init(_ other: CGFloat) { self = other }
}

func transmute<T: FloatConvertible, U: FloatConvertible>(value: T, to: U.Type) -> U {
    return U(fromOther: value)
}

let f = transmute(value: CGFloat(2.6), to: Float.self)
print(type(of: f), f) // prints: Float 2.6

In the initialiser, _asOther will be called on the input value, with the type of self being inferred for the generic parameter T (in this context self is guaranteed to be a concrete type). The _asOther function will then get called on x, which will return the value as the given destination type.

Note that you don't have to use the fromOther: argument label for your custom initialiser; this will still work without any label. However, I would strongly advocate using it to catch problems with your code at compile time (otherwise Swift would accept code that causes infinite loops at runtime).


Also, as a side note, you may want to rethink the design of your * overload. It would make more sense for it to return the more precise of the types you put into it (e.g. Float * Double = Double); otherwise you're needlessly losing precision.
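For instance, a mixed-type overload along these lines (a sketch; it assumes Double is the more precise of the two operand types) promotes rather than truncates:

```swift
// Hypothetical '*' overload that returns the more precise type:
// converting the Float up to Double loses nothing, whereas converting
// the Double down to Float would silently drop precision.
func * (lhs: Float, rhs: Double) -> Double {
    return Double(lhs) * rhs
}

let product = Float(2.0) * Double(3.5)
print(type(of: product), product) // Double 7.0
```

You would presumably want the mirrored (Double, Float) overload as well, so the promotion works regardless of operand order.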

How to cast generic number type 'T' to CGFloat

As an extension to my answer here, you could achieve this statically through using a 'shadow method' in order to allow Numeric types to coerce themselves to any other Numeric type (given that the initialiser for the destination type is listed as a protocol requirement).

For example, you could define your Numeric protocol like so:

protocol Numeric: Comparable, Equatable {

    init(_ v: Float)
    init(_ v: Double)
    init(_ v: Int)
    init(_ v: UInt)
    init(_ v: Int8)
    init(_ v: UInt8)
    init(_ v: Int16)
    init(_ v: UInt16)
    init(_ v: Int32)
    init(_ v: UInt32)
    init(_ v: Int64)
    init(_ v: UInt64)
    init(_ v: CGFloat)

    // 'shadow method' that allows instances of Numeric
    // to coerce themselves to another Numeric type
    func _asOther<T: Numeric>() -> T
}

extension Numeric {

    // Default implementation of init(fromNumeric:) simply gets the inputted value
    // to coerce itself to the same type as the initialiser is called on
    // (the generic parameter T in _asOther() is inferred to be the same type as self)
    init<T: Numeric>(fromNumeric numeric: T) { self = numeric._asOther() }
}

And then conform types to Numeric like so:

// Implementations of _asOther() – they simply call the given initialisers listed
// in the protocol requirement (it's required for them to be repeated like this,
// as the compiler won't know which initialiser you're referring to otherwise)
extension Float : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension Double : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension CGFloat : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension Int : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension Int8 : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension Int16 : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension Int32 : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension Int64 : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension UInt : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension UInt8 : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension UInt16 : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension UInt32 : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}
extension UInt64 : Numeric {func _asOther<T:Numeric>() -> T { return T(self) }}

Example usage:

class MyClass<T: Numeric> {

    var top: T

    init(_ top: T) {
        self.top = top
    }

    func topAsCGFloat() -> CGFloat {
        return CGFloat(fromNumeric: top)
    }
}

let m = MyClass(Int32(42))
let converted = m.topAsCGFloat()

print(type(of:converted), converted) // prints: CGFloat 42.0

This solution is probably no shorter than implementing a method that switches through every type that conforms to Numeric – however as this solution doesn't rely on runtime type-casting, the compiler will likely have more opportunities for optimisation.

It also benefits from static type-checking, meaning that you cannot conform a new type to Numeric without also implementing the logic to convert that type to another type of Numeric (in your case, you would crash at runtime if the type wasn't handled in the switch).

Furthermore, because of encapsulation, it's more flexible to expand, as the logic to convert types is done in each individual concrete type that conforms to Numeric, rather than a single method that handles the possible cases.

What is the best way of converting NON-String type to String type in Swift, using initializer vs \() ?

No, it is not the only way to convert it. You can add another constraint to your generic type, requiring it to conform to LosslessStringConvertible as well. Note that all BinaryFloatingPoint types conform to CustomStringConvertible, but not all of them conform to LosslessStringConvertible (e.g. CGFloat).

If you don't care about your method supporting CGFloat, you can constrain it to LosslessStringConvertible; otherwise you need to use the String(describing:) initializer, which only requires CustomStringConvertible.


This will not support CGFloat

func test<T: BinaryFloatingPoint & LosslessStringConvertible>(value: T) {
    let stringValue = String(value)
    print(stringValue)
}

This will support all BinaryFloatingPoint types. Note that you don't actually need the CustomStringConvertible constraint; it is included only for demonstration purposes.

func test<T: BinaryFloatingPoint & CustomStringConvertible>(value: T) {
    let stringValue = String(describing: value)
    print(stringValue)
}

You can also make CGFloat conform to LosslessStringConvertible as well:

extension CGFloat: LosslessStringConvertible {
    private static let formatter = NumberFormatter()
    public init?(_ description: String) {
        guard let number = CGFloat.formatter.number(from: description) as? CGFloat else { return nil }
        self = number
    }
}

This will allow you to support all floating point types with your generic method and use the String initializer as well.
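Putting it together, a quick sketch (Apple-platform code, since CGFloat and NumberFormatter need CoreGraphics/Foundation; the conformance is repeated here so the snippet compiles standalone, and losslessString is a hypothetical helper name):

```swift
import CoreGraphics
import Foundation

// Same conformance as above, repeated so this snippet stands alone.
extension CGFloat: LosslessStringConvertible {
    private static let formatter = NumberFormatter()
    public init?(_ description: String) {
        guard let number = CGFloat.formatter.number(from: description) as? CGFloat else { return nil }
        self = number
    }
}

// The stricter generic constraint now accepts CGFloat as well.
func losslessString<T: BinaryFloatingPoint & LosslessStringConvertible>(_ value: T) -> String {
    return String(value)
}

print(losslessString(CGFloat(2.5))) // 2.5
print(losslessString(3.25))         // 3.25
```

Note that CGFloat already provides the description requirement via CustomStringConvertible, so the failable initializer is the only missing piece of the conformance.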

Ambiguous overload resolution with initializer_list

The code is ill-formed. §8.5.4/(3.6) applies:

Otherwise, if T is a class type, constructors are considered. The
applicable constructors are enumerated and the best one is chosen
through overload resolution (13.3, 13.3.1.7).

Now, §13.3.3.1.5 goes

When an argument is an initializer list (8.5.4), it is not an expression and special rules apply for converting it to a parameter type. […] if the parameter type is std::initializer_list<X> and all the elements of the initializer list can be implicitly converted to X, the implicit conversion sequence is the worst conversion necessary to convert an element of the list to X, or if the initializer list has no elements, the identity conversion.

Converting 1.1, which is of type double (!), to int is a Floating-integral conversion with Conversion rank, while the conversion from 1.1 to float is a Floating point conversion - also having Conversion rank.


Thus both conversions are equally good, and since §13.3.3.2/(3.1) cannot distinguish them either, the call is ambiguous. Note that narrowing doesn't play a role until after overload resolution is done and hence cannot affect the candidate set or the selection process. More precisely, a candidate must meet the requirement set in 13.3.2/3:

Second, for F to be a viable function, there shall exist for each
argument an implicit conversion sequence (13.3.3.1) that converts
that argument to the corresponding parameter of F.

However, as shown in the second quote, the implicit conversion sequence that converts {1.1} to std::initializer_list<int> is the worst conversion from 1.1 to int, which is a Floating-integral conversion - and a valid (and existing!) one at that.


If instead you pass {1.1f}, or alter the initializer_list<float> parameter to initializer_list<double>, the code is well-formed, as converting 1.1f to float is an identity conversion. The standard gives a corresponding example in (3.6):

[ Example:

struct S {
    S(std::initializer_list<double>); // #1
    S(std::initializer_list<int>);    // #2
};

S s1 = { 1.0, 2.0, 3.0 }; // invoke #1

end example ]

Even more interestingly,

struct S {
    S(std::initializer_list<double>); // #1
    S(std::initializer_list<int>);    // #2
};

S s1 = { 1.f }; // invoke #1

is also valid, because the conversion from 1.f to double is a Floating point promotion, having Promotion rank, which is better than Conversion rank.

How to implement two inits with same content without code duplication in Swift?

I just had the same problem.

As GoZoner said, marking your variables as optional will work. It's not a very elegant way because you then have to unwrap the value each time you want to access it.

I will file an enhancement request with Apple, maybe we could get something like a "beforeInit" method that is called before every init where we can assign the variables so we don't have to use optional vars.

Until then, I will just put all assignments into a commonInit method which is called from the designated initialisers. E.g.:

class GradientView: UIView {
    var gradientLayer: CAGradientLayer? // marked as optional, so it does not have to be assigned before super.init

    func commonInit() {
        gradientLayer = CAGradientLayer()
        gradientLayer!.frame = self.bounds
        // more setup
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        commonInit()
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        gradientLayer!.frame = self.bounds // unwrap explicitly because the var is marked optional
    }
}

Thanks to David I had a look at the book again and I found something which might be helpful for our deduplication efforts without having to use the optional variable hack. One can use a closure to initialize a variable.

Setting a Default Property Value with a Closure or Function

If a stored property’s default value requires some customization or setup, you can use a closure or global function to provide a customized default value for that property. Whenever a new instance of the type that the property belongs to is initialized, the closure or function is called, and its return value is assigned as the property’s default value. These kinds of closures or functions typically create a temporary value of the same type as the property, tailor that value to represent the desired initial state, and then return that temporary value to be used as the property’s default value.

Here’s a skeleton outline of how a closure can be used to provide a default property value:

class SomeClass {
    let someProperty: SomeType = {
        // create a default value for someProperty inside this closure
        // someValue must be of the same type as SomeType
        return someValue
    }()
}

Note that the closure’s end curly brace is followed by an empty pair of parentheses. This tells Swift to execute the closure immediately. If you omit these parentheses, you are trying to assign the closure itself to the property, and not the return value of the closure.

NOTE

If you use a closure to initialize a property, remember that the rest of the instance has not yet been initialized at the point that the closure is executed. This means that you cannot access any other property values from within your closure, even if those properties have default values. You also cannot use the implicit self property, or call any of the instance’s methods.

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. https://itun.es/de/jEUH0.l

This is the way I will use from now on, because it does not circumvent the useful feature of not allowing nil on variables. For my example it'll look like this:

class GradientView: UIView {
    var gradientLayer: CAGradientLayer = {
        return CAGradientLayer()
    }()

    func commonInit() {
        gradientLayer.frame = self.bounds
        // more setup
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        commonInit()
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }
}

TypeScript and field initializers

Update

Since writing this answer, better ways have come up. Please see the other answers below that have more votes and a better answer. I cannot remove this answer since it's marked as accepted.



Old answer

There is an issue on the TypeScript codeplex that describes this: Support for object initializers.

As stated, you can already do this by using interfaces in TypeScript instead of classes:

interface Name {
    givenName: string;
    surname: string;
}

class Person {
    name: Name;
    age: number;
}

var bob: Person = {
    name: {
        givenName: "Bob",
        surname: "Smith",
    },
    age: 35,
};

Invalid Initializer in C String

From the C Standard (6.7.9 Initialization)

14 An array of character type may be initialized by a character string literal or UTF−8 string literal, optionally enclosed in braces. Successive bytes of the string literal (including the terminating null character if there is room or if the array is of unknown size) initialize the elements of the array.

That is you could declare a character array and initialize it with a string literal like

char lower[] = SAMPLESTRING;

However you declared an array of pointers of the type char * instead of a character array. In this case you need to enclose the initializer in braces like

char* lower[] = { SAMPLESTRING };

In the line above there is declared an array with one element of the pointer type char * that points to the first character of the string literal SAMPLESTRING. However using the pointer (the single element of the array) you may not change the string literal. Any attempt to change a string literal results in undefined behavior.

Taking into account this call of printf

printf("Lower: %s\nUpper: %s\n", lower, upper);

where the format string "%s" is used, it seems you intend to deal exactly with character arrays. Otherwise you would have to write

printf("Lower: %s\nUpper: %s\n", lower[0], upper[0]);

So you need to write

char lower[] = SAMPLESTRING;
char upper[] = SAMPLESTRING;

or

char lower[] = { SAMPLESTRING };
char upper[] = { SAMPLESTRING };

instead of

char* lower[] = SAMPLESTRING;
char* upper[] = SAMPLESTRING;

In this case calls of strcpy will look like

strcpy( lower, SAMPLESTRING);
strcpy( upper, SAMPLESTRING);

though the arrays already contain copies of the string thanks to their initializations, so the calls are redundant.


