Turing completeness bullshit
I'm writing this article because I've heard stupid things
about Turing completeness too many times, in particular when
talking about programming languages.
– A: Can I do * in the
Foo programming language?
– B: Foo is Turing complete, so
you can do everything.
Wait, what?
Clearly, B doesn't understand what Turing
completeness is about. Turing completeness is
about computability, that's all.
One could easily create a programming language that can simulate a Turing
machine but which can't even print anything on the screen, and I'm not even
talking about coding a GUI (that was one of the * that I heard in A's
question). Imagine Brainfuck without the .
and ,
instructions (which respectively print the byte at the data pointer and
read one byte of input, storing its value in the byte at the data
pointer). Still Turing complete, but not very useful.
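To make this concrete, here is a minimal sketch of an interpreter for that crippled dialect, written in Python. It supports only the six instructions > < + - [ ] (the instruction semantics follow standard Brainfuck; the step limit and tape size are arbitrary choices of this sketch). The language can still compute, but a program has no way to show its result: the only way to observe it is to inspect the tape from outside.

```python
# Interpreter for Brainfuck stripped of its I/O instructions ('.' and ',').
# The six remaining instructions are enough for Turing completeness
# (ignoring the finite tape and step limit used here for safety),
# yet no program can ever print anything.

def run(program, steps=100_000, tape_size=30_000):
    """Run a no-I/O Brainfuck program and return the final tape."""
    tape = [0] * tape_size
    ptr = 0   # data pointer
    pc = 0    # program counter
    # Pre-match brackets so jumps are O(1).
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program) and steps > 0:
        c = program[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]   # jump past the matching ']'
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]   # jump back to the matching '['
        pc += 1
        steps -= 1
    return tape

# 3 * 5: cell 0 holds 3; each loop iteration adds 5 to cell 1.
tape = run('+++[>+++++<-]')
print(tape[1])  # 15 -- computed, but the program itself couldn't print it
```

The multiplication works fine; it's only the ability to communicate the answer that's gone, which is exactly the gap between "computable" and "useful".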
The other side of the question is: does a programming language have to be Turing complete to do anything we could want it to do? For instance, SQL (at least in its classic form, without recursive queries) is not Turing complete, and it doesn't need to be. SQL is a powerful database querying language, and as long as you use it this way, you will be able to do anything you want with it. Just remember to use the right tool for the right job.
Turing completeness is of course an important notion in computability theory, and I'm not saying otherwise. What I wanted here was to clarify what being Turing complete means for a programming language: almost nothing. Turing completeness is not an ultimate goal, and it isn't even always necessary. Turing completeness doesn't mean that everything is feasible.