We all know about Turing machines.

We all know that there are universal Turing machines, which in some sense "simulate" other TMs.

We were all given a hand-wavy description of how to encode (TM, input) pairs for consumption by these UTMs.

I can find lots of descriptions of UTMs;

I can find lots of proofs that they're universal;

but I can't find a precise description of the encoding scheme for any of them.

Could any of you smart folks help with this?

As far as I can tell, nobody has asked this question in the history of the Internet.

You could totally stop reading this post at this point, but here's some background:

I've recently had the burning passion to actually run a UTM. I know it's trivial to write e.g. a Python program that simulates a TM, and you could call that Python program a UTM; but that's kinda cheating, because that program isn't really a description of a Turing machine. It would be really hard to translate that program into a data structure that could be fed to the program as input. The UTM isn't written in its own language, so to speak. What I really want is a UTM that I could run on itself.

Here are the conventions I'm familiar with:

- a Turing machine is a set of 5-tuples: (state, symbol, new_symbol, new_state, direction)

- a Turing machine [imath]U[/imath] is "universal" iff there exists some computable function [imath]enc_U[/imath] that, given a TM [imath]M[/imath] and an input [imath]x[/imath] to that TM, returns some string [imath]x'[/imath] in [imath]U[/imath]'s alphabet, such that [imath]U[/imath] accepts [imath]x'[/imath] iff [imath]M[/imath] accepts [imath]x[/imath].
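
To pin those conventions down, here's a quick sketch of a direct simulator in Python (the state names `q0`/`qA`/`qR` and the blank symbol are arbitrary choices of mine). This is exactly the "cheating" kind of simulator, *not* a UTM, since it is not itself expressed as a set of 5-tuples:

```python
# Minimal sketch of the (state, symbol, new_symbol, new_state, direction)
# convention. State/blank names below are arbitrary illustrative choices.

def run_tm(tuples, tape, state="q0", accept="qA", reject="qR",
           blank="_", max_steps=10_000):
    """tuples: iterable of (state, symbol, new_symbol, new_state, direction)."""
    table = {(s, sym): (new_sym, new_st, d)
             for (s, sym, new_sym, new_st, d) in tuples}
    cells = dict(enumerate(tape))  # sparse tape; blank everywhere else
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return True
        if state == reject:
            return False
        sym = cells.get(head, blank)
        if (state, sym) not in table:
            return False  # no applicable tuple: halt without accepting
        new_sym, state, direction = table[(state, sym)]
        cells[head] = new_sym
        head += 1 if direction == "R" else -1
    raise RuntimeError("step limit exceeded")

# Toy machine: accept iff the input starts with "1".
toy = [("q0", "1", "1", "qA", "R"),
       ("q0", "0", "0", "qR", "R")]
print(run_tm(toy, "10"))  # True
print(run_tm(toy, "01"))  # False
```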

In my dream world, somebody would reply to this thread with two things:

1. A set of 5-tuples describing a UTM [imath]U[/imath].

2. A description of the corresponding function [imath]enc_U[/imath].

But I'd be grateful for any help at all.

## Precise description of how to encode inputs to a UTM


**Xanthir**

### Re: Precise description of how to encode inputs to a UTM

There is no generic "encoding scheme" for UTMs. How you decode the symbols into something meaningful is entirely dependent on the machine itself.

If you're looking for concrete examples, the Wikipedia article references a few papers with explicit source code. I have not read these papers myself, though.



### Re: Precise description of how to encode inputs to a UTM

There are brainfuck interpreters written in brainfuck, which is pretty close to a UTM. Writing a converter from brainfuck programs to TM 5-tuples shouldn't be that hard.

The reason that no one constructs a UTM as an explicit set of 5-tuples is that it is extremely technical and you gain little insight from doing it.
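
To make the interpreters-in-interpreters point concrete, here is a minimal brainfuck interpreter sketched in Python (Python purely for illustration; the relevant fact is that interpreters with this same structure have been written in brainfuck itself, giving the self-hosting property a UTM needs):

```python
# Minimal brainfuck interpreter: 8 commands, a zero-initialized byte tape,
# and precomputed bracket matching for the [ ] loops.

def bf(program, inp=""):
    jumps, stack = {}, []
    for i, c in enumerate(program):          # match each [ with its ]
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out, inp = [0] * 30000, 0, 0, [], list(inp)
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(inp.pop(0)) if inp else 0
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

print(bf("++++++++[>++++++++<-]>+."))  # prints "A" (8*8 + 1 = 65)
```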


### Re: Precise description of how to encode inputs to a UTM

Turing's original 1936 paper explicitly gives both the encoding scheme and the complete table for the universal machine (what Turing calls the table of the machine).

http://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf

The encoding function is on pages 239-241.
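
For reference, the encoding on those pages (Turing's "standard description") can be sketched as below. Note that Turing's quintuple order (state, scanned symbol, printed symbol, move, next state) differs slightly from the 5-tuple order in the original post, and Python here is just for illustration:

```python
# Sketch of Turing's standard description (pp. 239-241 of the cited paper):
# state q_i is written "D" + "A"*i, symbol S_j is written "D" + "C"*j,
# and each quintuple (q_i, S_j, S_k, move, q_l) is written out, then ";".

def standard_description(tuples):
    def q(i): return "D" + "A" * i
    def s(j): return "D" + "C" * j
    return "".join(q(i) + s(j) + s(k) + m + q(l) + ";"
                   for (i, j, k, m, l) in tuples)

def description_number(sd):
    # Turing then maps A,C,D,L,R,N,; to the digits 1..7.
    digits = {"A": "1", "C": "2", "D": "3", "L": "4", "R": "5",
              "N": "6", ";": "7"}
    return int("".join(digits[c] for c in sd))

# Turing's first example machine (prints 0 1 0 1 ...): states q1..q4,
# symbols S0 = blank, S1 = "0", S2 = "1".
tuples = [(1, 0, 1, "R", 2), (2, 0, 0, "R", 3),
          (3, 0, 2, "R", 4), (4, 0, 0, "R", 1)]
sd = standard_description(tuples)
print(sd)  # DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA;
print(description_number(sd))
```

That output string and number match the S.D. and description number Turing gives for this machine in the paper.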

