Matching an Overloaded Function to Its Polymorphic Argument

Matching an overloaded function to its polymorphic argument

Overloads in C++ are resolved at compile time, based on the static type of the argument.
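
For instance, here's a minimal runnable sketch of the problem, reusing the names from the answer below (the bodies and print strings are made up for illustration):

#include <iostream>

class Model { public: virtual ~Model() = default; };
class Cube : public Model {};

class SimpleOpenGLRenderer {
public:
    void renderModel(const Model &) { std::cout << "generic model\n"; }
    void renderModel(const Cube &)  { std::cout << "cube\n"; }
};

int main() {
    SimpleOpenGLRenderer r;
    Cube cube;
    Model &m = cube;
    r.renderModel(m);    // prints "generic model": the overload is chosen from the
                         // static type of m (Model&), not the dynamic type (Cube)
    r.renderModel(cube); // prints "cube": the static type here is Cube
}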

There's a technique known as "double-dispatch" that might be of use:

class Cube; // forward declaration so Renderer can overload on it

class Renderer {
public:
    virtual void renderModel(const Cube &c) = 0;
};

class Model {
public:
    virtual void dispatchRender(Renderer &r) const = 0;
};

class Cube : public Model {
public:
    void dispatchRender(Renderer &r) const override {
        r.renderModel(*this); // static type of *this is const Cube&
    }
};

class SimpleOpenGLRenderer : public Renderer {
public:
    void renderModel(const Cube &) override { /* draw the cube */ }
};

int main() {
    Cube cube;
    Model &model = cube;
    SimpleOpenGLRenderer renderer;
    model.dispatchRender(renderer); // virtual call lands in Cube::dispatchRender
}

Note that the Renderer base class needs to contain all the overloads that SimpleOpenGLRenderer currently does. If you want the set of overloads to be specific to SimpleOpenGLRenderer, you could put a Simple-specific dispatch function in Model, or you could abandon this technique and instead use dynamic_cast repeatedly in SimpleOpenGLRenderer::renderModel to test the type.
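
The dynamic_cast variant might look roughly like this. It's a sketch only; Sphere, as a second model type, and the print strings are hypothetical:

#include <iostream>

class Model { public: virtual ~Model() = default; };
class Cube   : public Model {};
class Sphere : public Model {}; // hypothetical second model type

class SimpleOpenGLRenderer {
public:
    void renderModel(const Model &m) {
        if (dynamic_cast<const Cube *>(&m)) {
            std::cout << "render cube\n";
        } else if (dynamic_cast<const Sphere *>(&m)) {
            std::cout << "render sphere\n";
        }
        // every new Model subtype needs another branch here,
        // which is exactly what double dispatch avoids
    }
};

int main() {
    Cube cube;
    Sphere sphere;
    SimpleOpenGLRenderer r;
    r.renderModel(cube);   // prints "render cube"
    r.renderModel(sphere); // prints "render sphere"
}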

Isn't Java supposed to match overloaded functions to the most specific type?

You've created an infinite recursion in the multiply(Operand) implementation.

When the line return other.multiply(this); executes, multiply(Operand) calls itself endlessly because it is the only method accessible through the type Operand. This happens because the variable other has the static type Operand, which doesn't know the method multiply(Value), hence the multiply call is mapped by the compiler to multiply(Operand).

Here's how you might fix the problem:

@Override
public Operand multiply(Operand other) { // type of other not known
    System.out.println("dispatch of " + getClass().getSimpleName() +
            ".multiply(" + other.getClass().getSimpleName() + ")");

    if (other instanceof Value otherValue) {
        return otherValue.multiply(this); // note that's not a polymorphic call
    }

    throw new RuntimeException(); // todo
}

In this case, the compiler can be sure that the multiply() call should be mapped to multiply(Value): it knows the type of the variable otherValue and resolves the method call to the most specific overloaded version, which is multiply(Value) because this is of type Value. That method is more specific than multiply(Operand), which would require performing a widening conversion.

The part of the Java Language Specification that describes how method resolution works is §15.12, Method Invocation Expressions.

In short, the compiler first finds the potentially applicable methods based on the receiver type and method name. It then analyzes the signatures: if a method is applicable by so-called strict invocation (i.e. the provided arguments match the parameter types exactly), no further action is needed (which is the case in all the situations described above). Otherwise, the compiler tries to apply a widening reference conversion (when the argument is of a reference type).

Also note that multiply(Operand) and multiply(Value) are overloaded methods, i.e. they are completely independent and may have different return types, access modifiers, and sets of parameters (none of which is acceptable for overriding methods).

C++ polymorphism function taking void * and other pointer type as argument: is it considered ambiguous?

According to overload resolution rules (section Ranking of implicit conversion sequences), as the argument can be converted to either function's parameter type, the best viable function in this case would be the one whose implicit conversion is better.

For:

class Foo {
public:
    void bar(void*);
    void bar(int*);
};

// ...

Foo foo;
int* p2 = nullptr;
foo.bar(p2);

The first is rank 3 (Conversion), while the second is rank 1 (Exact Match). As an exact match, which requires no conversion, is better than a conversion, it will call void bar(int*).
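
A runnable version of the first case, with hypothetical print statements standing in for the declarations above:

#include <iostream>

class Foo {
public:
    void bar(void*) { std::cout << "bar(void*)\n"; }
    void bar(int*)  { std::cout << "bar(int*)\n"; }
};

int main() {
    Foo foo;
    int* p2 = nullptr;
    foo.bar(p2); // prints "bar(int*)": exact match beats conversion to void*
}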

It gets more complex in your second case, however:

class Foo {
public:
    virtual void bar(void*);
    virtual void bar(Foo*);
    virtual ~Foo() = default;
};

class FooTwo : public Foo {};

// ...

Foo foo;
FooTwo footwo;

foo.bar(&footwo);

As both are rank 3 (Conversion), resolution falls to the conversion ranking rules. And as both conversions have the same rank, it then goes to the extended conversion ranking rules. Extended rule 2 states:

Conversion that converts pointer-to-derived to pointer-to-base is better than the conversion of pointer-to-derived to pointer-to-void, and conversion of pointer-to-base to void is better than pointer-to-derived to void.

Considering this, void bar(Foo*) is considered a better match than void bar(void*), meaning that it will be selected by foo.bar(&footwo);.

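As a quick check, here's a minimal, self-contained sketch of the latter case, with hypothetical print statements added to the declarations above:

#include <iostream>

class Foo {
public:
    virtual void bar(void *) { std::cout << "bar(void*)\n"; }
    virtual void bar(Foo *)  { std::cout << "bar(Foo*)\n"; }
    virtual ~Foo() = default;
};

class FooTwo : public Foo {};

int main() {
    Foo foo;
    FooTwo footwo;
    foo.bar(&footwo); // prints "bar(Foo*)": pointer-to-derived -> pointer-to-base
                      // beats pointer-to-derived -> pointer-to-void
}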

How to make the program use function overload for derived class objects

It doesn't work because overloads in C++ are resolved at compile time.

You should consider using a virtual print function in Card.

Something like this:

#include <iostream>
#include <memory>

class Card {
public:
    virtual void print() { std::cout << "basic" << std::endl; }
    virtual ~Card() = default; // needed when deleting through unique_ptr<Card>
};

class Minion : public Card {
public:
    void print() override { std::cout << "minion" << std::endl; }
};

class Spell : public Card {
public:
    void print() override { std::cout << "spell" << std::endl; }
};

Then, to use this print function, you'd do it this way:

void Deck::addCard(const std::unique_ptr<Card>& card)
{
    card->print(); // virtual dispatch calls the override for the dynamic type
}

Otherwise, there's always the double-dispatch pattern, or perhaps the visitor pattern.
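
A minimal double-dispatch sketch for this hierarchy might look like the following. CardHandler and Printer are hypothetical, and this is a standalone variant of the Card classes above:

#include <iostream>

class Minion;
class Spell;

class CardHandler { // hypothetical visitor-style interface
public:
    virtual void handle(Minion &) = 0;
    virtual void handle(Spell &) = 0;
    virtual ~CardHandler() = default;
};

class Card {
public:
    virtual void accept(CardHandler &h) = 0;
    virtual ~Card() = default;
};

class Minion : public Card {
public:
    void accept(CardHandler &h) override { h.handle(*this); } // *this is Minion&
};

class Spell : public Card {
public:
    void accept(CardHandler &h) override { h.handle(*this); } // *this is Spell&
};

class Printer : public CardHandler { // hypothetical concrete handler
public:
    void handle(Minion &) override { std::cout << "minion\n"; }
    void handle(Spell &) override  { std::cout << "spell\n"; }
};

int main() {
    Printer printer;
    Minion minion;
    Card &card = minion;
    card.accept(printer); // prints "minion": virtual accept + overloaded handle
}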


C++ polymorphic method defined in derived class matches type that should match the parent

Polymorphism, in the sense of calling virtual methods, is orthogonal to symbol lookup and overload resolution. The former happens at run time; the rest happens at compile time.

object->foo() always resolves the symbol at compile time, whether that symbol is a member variable with an overloaded operator() or a method. virtual only delays selecting the body of the method. The signature is always fixed, including the return type, of course; otherwise the type system would break. This is one of the reasons why there cannot be virtual function templates.

What you are actually experiencing is name hiding and overload resolution.

For Base* p; p->foo(2.1), the list of possible symbol candidates is only Base::foo(int). The compiler cannot know that p points to a Derived (in general) because the choice must be made at compile time. Since double is implicitly convertible to int, foo(int) is chosen. Because the method is virtual, if Derived were to provide its own foo(int) override, that would be called instead of Base::foo(int) at run time.

For Derived* d; d->foo(2), the compiler first looks for the symbol foo in Derived. Since there is foo(double), which is viable for the call foo(2), it is chosen. Only if there were no viable candidate would the compiler look into the base classes. If there were a Derived::foo(int), possibly virtual, the compiler would choose it instead because it is a better match.

You can disable the name hiding by writing

class Derived : public Base {
    using Base::foo;
};

It injects all Base::foo methods into Derived scope. After this, Base::foo(int) (now really Derived::foo(int)) is chosen as the better match.
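
Putting it together, here's a minimal sketch (with hypothetical bodies for foo) that demonstrates both the hiding and the fix:

#include <iostream>

class Base {
public:
    virtual void foo(int) { std::cout << "Base::foo(int)\n"; }
    virtual ~Base() = default;
};

class Derived : public Base {
public:
    using Base::foo;              // un-hides Base::foo(int)
    void foo(double) { std::cout << "Derived::foo(double)\n"; }
};

int main() {
    Derived d;
    d.foo(2);    // with the using-declaration, Base::foo(int) is the better match;
                 // without it, name hiding would select Derived::foo(double)
    Base *p = &d;
    p->foo(2.1); // only Base::foo(int) is visible through Base*; 2.1 converts to int
}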

Overloaded / Polymorphic Functions with different types

You should investigate these sorts of questions in ghci. It's an invaluable learning resource:

$ ghci
GHCi, version 9.0.1: https://www.haskell.org/ghc/ :? for help
ghci> :t 1
1 :: Num p => p
ghci> :t 3.0
3.0 :: Fractional p => p
ghci> :t 1 + 3.0
1 + 3.0 :: Fractional a => a

First lesson: numeric literals are polymorphic. 1 isn't an Int; it's polymorphic, and can be whatever instance of Num is necessary for the code to compile. 3.0 isn't a Float; it's whatever instance of Fractional is necessary for the code to compile. (The difference is the decimal point in the literal, which restricts the allowed types.)

Second lesson: when you combine things into an expression, types get unified. When you unify Num and Fractional constraints, you get a Fractional constraint, because Fractional is defined to require that all its instances also be instances of Num.

For a bit more, let's turn on warnings and see what additional information they provide.

ghci> :set -Wall
ghci> 1

<interactive>:5:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Show a0) arising from a use of ‘print’ at <interactive>:5:1
(Num a0) arising from a use of ‘it’ at <interactive>:5:1
• In a stmt of an interactive GHCi command: print it
1
ghci> 1 + 3.0

<interactive>:6:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Double’
(Show a0) arising from a use of ‘print’ at <interactive>:6:1-7
(Fractional a0) arising from a use of ‘it’ at <interactive>:6:1-7
• In a stmt of an interactive GHCi command: print it
4.0

When printing a value, ghci requires the type have a Show instance. Fortunately, that isn't too important of a detail here, but it's why the defaulting warnings refer to Show.

The lessons to observe here: when inference doesn't require something more specific, the default type for a value with a Num constraint is Integer, not Int; likewise, the default type for a value with a Fractional constraint is Double, not Float. (Float is basically never used. Forget it exists.)

So when inference runs, the expression 1 + 3.0 is inferred to have the type Fractional a => a. In the absence of further requirements on the type, defaulting kicks in and says "a is Double". Then that information flows back through the (+) to its arguments and requires each of them to also be Double. Fortunately, each argument is a polymorphic literal that can take the type Double. Type checking succeeds, instances are chosen, addition happens, the result is printed.

It's very important to this process that numeric literals are polymorphic. Haskell has no implicit conversions between any pair of types. Especially not numeric types. If you want to actually convert values from one type to another, you must call a function that does the conversion you desire. (fromIntegral, round, floor, ceiling, and realToFrac are the most common numeric conversion functions.) But when the values are polymorphic, it means inference can pick a matching type without needing a conversion.

Dynamic polymorphism in C++ and function overloading

As the other answers say, given these overloads, the calls resolve like this:

void setInput(BaseClass* object);     // F0
void setInput(Derived1Class* object); // F1
void setInput(Derived2Class* object); // F2
void setInput(Derived3Class* object); // F3

// assumed declarations: the static pointer types are implied by the calls below
BaseClass* base = new BaseClass();
Derived1Class* derived1 = new Derived1Class();
BaseClass* derived2 = new Derived2Class();     // static type BaseClass*
Derived2Class* derived3 = new Derived3Class(); // static type Derived2Class*
                                               // (assuming Derived3Class derives
                                               // from Derived2Class)

task.setInput(base);     // calls F0
task.setInput(derived1); // calls F1
task.setInput(derived2); // calls F0 (static type is BaseClass*)
task.setInput(derived3); // calls F2 (static type is Derived2Class*)

So the function called depends on the static type of the pointer and not the type of the object it points at (no overriding polymorphism on function arguments).

There is a kind of polymorphism available here that we can't see in this example, because there is a definition of setInput() for each pointer type: overload resolution will allow a derived pointer to be matched with a base-pointer parameter. E.g. if the only function defined were

void setInput(BaseClass* object); // F0

then all the following calls would still compile and call F0:

task.setInput(base);     // would call F0
task.setInput(derived1); // would call F0
task.setInput(derived2); // would call F0
task.setInput(derived3); // would call F0

The compiler will favor matching exact types over converting from derived to base type when working out which function to call.

How does the compiler know which function to pick for a polymorphic type?

In general, the candidate function whose parameters match the arguments most closely is the one that is called.

It picks the closest match: the ofstream overload is closer, but the ostream overload is viable if the other isn't present. If there is an exact match, the compiler will go for it.

Before overload resolution begins, the functions selected by name lookup and template argument deduction are combined to form the set of candidate functions (the exact criteria depend on the context in which overload resolution takes place)

The argument-parameter implicit conversion sequences considered by overload resolution correspond to implicit conversions used in copy initialization (for non-reference parameters), except that when considering conversion to the implicit object parameter or to the left-hand side of assignment operator, conversions that create temporary objects are not considered.
Each type of standard conversion sequence is assigned one of three ranks:

  1. Exact match: no conversion required, lvalue-to-rvalue conversion,
    qualification conversion, function pointer conversion (since C++17),
    user-defined conversion of class type to the same class

  2. Promotion: integral promotion, floating-point promotion

  3. Conversion: integral conversion, floating-point conversion,
    floating-integral conversion, pointer conversion, pointer-to-member
    conversion, boolean conversion, user-defined conversion of a derived
    class to its base

The rank of the standard conversion sequence is the worst of the ranks of the standard conversions it holds (there may be up to three conversions)
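
As a small illustration of the three ranks, here's a sketch with hypothetical f overloads:

#include <iostream>

void f(int)    { std::cout << "f(int)\n"; }
void f(long)   { std::cout << "f(long)\n"; }
void f(double) { std::cout << "f(double)\n"; }

int main() {
    f(42);   // exact match (rank 1): f(int)
    short s = 1;
    f(s);    // promotion short -> int (rank 2) beats the
             // conversions to long or double (rank 3): f(int)
    f(1.0f); // promotion float -> double (rank 2) beats the
             // conversions to int or long (rank 3): f(double)
}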


