Precise description of how to encode inputs to a UTM



Postby speeze » Tue Mar 08, 2016 2:10 am UTC

We all know about Turing machines.
We all know that there are universal Turing machines, which in some sense "simulate" other TMs.
We all were given a hand-wavey description of how to encode (TM, input) pairs for consumption by these UTMs.
I can find lots of descriptions of UTMs;
I can find lots of proofs that they're universal;
but I can't find a precise description of the encoding scheme for any of them.
Could any of you smart folks help with this?
As far as I can tell, nobody has asked this question in the history of the Internet.

You could totally stop reading this post at this point, but here's some background:

I've recently had the burning passion to actually run a UTM. I know it's trivial to write e.g. a Python program that simulates a TM, and you could call that Python program a UTM; but that's kinda cheating, because that program isn't really a description of a Turing machine. It would be really hard to translate that program into a data structure that could be fed to the program as input. The UTM isn't written in its own language, so to speak. What I really want is a UTM that I could run on itself.

Here are the conventions I'm familiar with:
- a Turing machine is a set of 5-tuples: (state, symbol, new_symbol, new_state, direction)
- a Turing machine [imath]U[/imath] is "universal" iff there exists some function [imath]enc_U[/imath] that, given a TM [imath]M[/imath] and an input [imath]x[/imath] to that TM, returns some string [imath]x'[/imath] in [imath]U[/imath]'s alphabet, such that [imath]U[/imath] accepts [imath]x'[/imath] iff [imath]M[/imath] accepts [imath]x[/imath].
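To pin down the first convention, here's the "cheating" Python simulator I mentioned above, written as a minimal sketch. All the names (run_tm, the qa accept state, "_" blank) are my own choices, not standard:

```python
# A minimal sketch of the 5-tuple convention above: a TM given as
# {(state, symbol): (new_symbol, new_state, direction)}, run on a tape.
# Names and conventions here are my own, not standard.

def run_tm(rules, tape, start="q0", accept="qa", blank="_", max_steps=10_000):
    """Returns True if the TM accepts, False if it halts without
    accepting, None if it doesn't halt within max_steps."""
    tape = dict(enumerate(tape))      # sparse tape; absent cells are blank
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, blank))
        if key not in rules:          # no applicable rule: halt and reject
            return False
        new_symbol, new_state, direction = rules[key]
        tape[head] = new_symbol
        head += 1 if direction == "R" else -1
        state = new_state
    return None

# Example: a TM that accepts iff the input starts with '1'.
rules = {("q0", "1"): ("1", "qa", "R")}
print(run_tm(rules, "10"))   # True
print(run_tm(rules, "01"))   # False
```

This is exactly the kind of program that isn't itself a UTM in the interesting sense, since it's not written as a set of 5-tuples.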

In my dream world, somebody would reply to this thread with two things:
1. A set of 5-tuples describing a UTM [imath]U[/imath].
2. A description of the corresponding function [imath]enc_U[/imath].
But I'd be grateful for any help at all.


Re: Precise description of how to encode inputs to a UTM

Postby Xanthir » Wed Mar 09, 2016 4:41 pm UTC

There is no generic "encoding scheme" for UTMs. How you decode the symbols into something meaningful is entirely dependent on the machine itself.
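For concreteness, here is one common textbook-style scheme (not the actual scheme of any particular published UTM): number the states and symbols, write each index in unary, and use '1's as field separators and '11' as a tuple separator. Everything here is illustrative, not authoritative:

```python
# One textbook-style encoding of a 5-tuple TM as a binary string.
# Each field index is written in unary ("0" * index); fields are
# separated by "1" and tuples by "11". Purely illustrative.

def encode_tm(tuples):
    """tuples: iterable of (state, symbol, new_symbol, new_state, direction)."""
    states, symbols = {}, {}
    def idx(table, key):
        # Assign indices 1, 2, 3, ... in order of first appearance.
        return table.setdefault(key, len(table) + 1)
    parts = []
    for (q, s, s2, q2, d) in tuples:
        fields = [idx(states, q), idx(symbols, s),
                  idx(symbols, s2), idx(states, q2),
                  1 if d == "L" else 2]
        parts.append("1".join("0" * f for f in fields))
    return "11".join(parts)

print(encode_tm([("q0", "1", "1", "qa", "R")]))  # "01010100100"
```

The point stands, though: a UTM only works with the one encoding its transition table was built to decode, so a scheme like this is only meaningful paired with a machine designed for it.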

If you're looking for concrete examples, the Wikipedia article on universal Turing machines references a few papers with explicit source code. I haven't read those papers myself, though.


Re: Precise description of how to encode inputs to a UTM

Postby korona » Fri Mar 11, 2016 8:42 pm UTC

There are brainfuck interpreters written in brainfuck. That is pretty close to a UTM. Writing a brainfuck-to-TM-5-tuple converter should not be that hard.

The reason no one constructs a UTM as an explicit set of 5-tuples is that it is extremely technical and you gain little insight from doing it.


Re: Precise description of how to encode inputs to a UTM

Postby radams » Wed Aug 10, 2016 8:11 am UTC

Turing's original 1936 paper explicitly gives both the encoding scheme and the full list of 4-tuples for the universal machine (Turing called this the table of the machine).

The encoding function is on pages 239-241.
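Roughly, Turing's "standard description" works like this (this is from memory -- check the cited pages for the authoritative details): m-configuration q_i becomes "D" followed by i copies of "A", symbol S_j becomes "D" followed by j copies of "C", and each instruction becomes one word prefixed with ";". A sketch:

```python
# A sketch of Turing's 1936 "standard description" encoding, from
# memory of the paper: q_i -> "D" + "A"*i, S_j -> "D" + "C"*j,
# move is one of "L", "R", "N", instructions are ";"-prefixed.

def standard_description(tuples):
    """tuples: (i, j, k, move, l) meaning: in q_i reading S_j,
    print S_k, move, go to q_l. States are 1-based, symbols 0-based."""
    def q(i): return "D" + "A" * i
    def s(j): return "D" + "C" * j
    return "".join(";" + q(i) + s(j) + s(k) + move + q(l)
                   for (i, j, k, move, l) in tuples)

# First instruction of Turing's example machine that prints 0 1 0 1 ...:
# in state q1 on blank (S0), print S1 ("0"), move right, go to q2.
print(standard_description([(1, 0, 1, "R", 2)]))  # ";DADDCRDAA"
```

So the encoding really is fully explicit in the paper, alphabet and all, which makes it the closest thing to the dream-world answer asked for above.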
