C++ Templates Turing-Complete

Are C++ templates Turing-complete?

Example

#include <iostream>

template <int N>
struct Factorial
{
    enum { val = Factorial<N - 1>::val * N };
};

template <>
struct Factorial<0>
{
    enum { val = 1 };
};

int main()
{
    // Note: this value is generated at compile time.
    // Also note that most compilers have a limit on the depth of recursion available.
    std::cout << Factorial<4>::val << "\n";
}

That was a little fun but not very practical.

To answer the second part of the question:

Is this fact useful in practice?

Short Answer: Sort of.

Long Answer: Yes, but only if you are a template daemon.

Turning out good template meta-programming that is really useful for others to use (i.e. a library) is really, really tough (though doable). To help, Boost even has MPL (the Meta-Programming Library). But try debugging a compiler error in your template code and you will be in for a long, hard ride.

But a good practical example of it being used for something useful:

Scott Meyers has been working on extensions to the C++ language (I use the term loosely) using the templating facilities. You can read about his work in 'Enforcing Code Features'.

Is C++ preprocessor metaprogramming Turing-complete?

No. The C++ preprocessor does not allow for unlimited state. You only have a finite number of on/off states, plus an include stack. This makes it a push-down automaton, not a Turing machine (this also ignores the fact that preprocessor recursion is limited, but then so is template recursion).

However, if you bend your definitions a bit, this is possible by invoking the preprocessor multiple times: by allowing the preprocessor to generate a program which re-invokes the preprocessor, and looping externally, it is indeed possible to make a Turing machine with the preprocessor. The linked example uses C, but it should be adaptable to C++ easily enough.

Turing machine: But why use template metaprogramming?

The primary reason why one would implement Turing machines using template metaprogramming is not because it's easier than in "ordinary" C++ (it isn't), but to demonstrate that C++ templates are Turing complete.

Is constexpr-based computation Turing complete?

tl;dr: constexpr in C++11 was not Turing-complete, due to a bug in the specification of the language, but that bug has been addressed in later drafts of the standard, and clang already implements the fix.

constexpr, as specified in the ISO C++11 international standard, is not Turing-complete. Sketch proof:

  • Every constexpr function f's result (or non-termination) on a particular sequence of arguments, a..., is determined only by the values of a...
  • Every argument value that can be constructed inside a constant expression must be of a literal type, which by [basic.types]p10 is either:

    • a scalar type,
    • a reference,
    • an array of literal type, or
    • a class type
  • Each of the above cases has a finite set of values.

    • For a scalar, non-pointer type, this follows trivially.
    • For a pointer or reference to be used in a constant expression, it must be initialized by an address or reference constant expression, so must refer to an object with static storage duration, of which there are only a finite quantity in any program.
    • For an array, the bound must be a constant, and each member must be initialized by a constant expression, from which the result follows.
    • For a class type, there are a finite number of members, and each member must be of literal type and initialized by a constant expression, from which the result follows.
  • Therefore, the set of possible inputs a... which f can receive is finite, so any finitely-described constexpr system is a finite state machine, and thus is not Turing-complete.

However, since the publication of the C++11 standard, the situation has changed.

The problem described in Johannes Schaub's answer to std::max() and std::min() not constexpr was reported to the C++ standardization committee as core issue 1454. At the February 2012 WG21 meeting, we decided that this was a defect in the standard, and the chosen resolution included the ability to create values of class types with pointer or reference members that designate temporaries. This allows an unbounded quantity of information to be accumulated and processed by a constexpr function, and is sufficient to make constexpr evaluation Turing-complete (assuming that the implementation supports recursion to an unbounded depth).

In order to demonstrate the Turing-completeness of constexpr for a compiler that implements the proposed resolution of core issue 1454, I wrote a Turing-machine simulator for clang's test suite:

http://llvm.org/svn/llvm-project/cfe/trunk/test/SemaCXX/constexpr-turing.cpp

Trunk versions of both g++ and clang implement the resolution of this core issue, but g++'s implementation is currently unable to process this code.

What Turing complete macro languages are available for C/C++

C macros are in fact Turing complete, if processed more than once. Check out this related question and in particular the Turing machine implementation linked to in the accepted answer.

But yes, this is arguably cheating: the fact that an external driver has to re-run the preprocessor strongly implies that C macros, on their own, really aren't Turing complete.

Turing complete template engines

Perl's Template::Toolkit allows for inlining of Perl if the EVAL_PERL flag is set. Within the template, PERL and RAWPERL blocks allow inlining, to the extent (in the case of RAWPERL) that the internals are exposed, and inlined code is evaluated through eval() (the quoted eval). This provides full access to the Perl interpreter.

Perl itself is considered to have a Turing-complete grammar. So given that Template::Toolkit provides access to Perl itself, the templating system inherits that characteristic.

Though setting EVAL_PERL to allow for inlined Perl within a template is considered an advanced (and presumably seldom-used) feature, it is available for the strong-hearted (and questionably-sane).

Is C# 4.0 compile-time Turing complete?

Unlike templates in C++, generics in C# (and other .NET languages) are a runtime feature. The compiler does some checking to verify the types used, but the actual substitution happens at runtime. The same goes, if I'm not mistaken, for covariance and contravariance, and even for the preprocessor directives. Lots of CLR magic.

(At the implementation level, the primary difference is that C# generic type substitutions are performed at runtime and generic type information is thereby preserved for instantiated objects)

See MSDN:

http://msdn.microsoft.com/en-us/library/c6cyy67b(v=vs.110).aspx

Update:
Update:
The CLR does perform type checking, via information stored in the metadata associated with compiled assemblies, at JIT compilation time. It does this as one of its many services (ShuggyCoUk's answer on this question explains it in detail); others include memory management and exception handling. From that I would infer that the compiler has an understanding both of state as progression and of internal machine state (Turing completeness, in part, means being able to examine data (symbols) with reference to previous data (symbols), conditionally, and evaluate). (I hesitate to state the exact definition of Turing completeness, as I am not sure I have fully grasped it myself, so feel free to fill in the blanks and correct me where applicable.) So with that I would say, with a bit of trepidation: yes, yes it can be.


