lambda calculus is a formal theory of functions as computations
it can be seen as providing the “missing story” of how functions (and relations) in logic are actually realized
it speaks to the tension between functions as:
functions proper, abstract (set-theoretic) relations between inputs and outputs
algorithms, i.e., rules-for-computation which have running time (and space) behavior
programs, i.e., specific pieces of texts which describe an algorithm or function
lambda calculus is one of several computationally complete paradigms for describing algorithmic computation
lambda calculus is a language-based approach using abstraction and application
recursion theory is built on arithmetic with open-ended recursive definitions
Turing machines are an abstract machine account which is (somewhat) closer to operational hardware
Turing Machines and Computability/Decidability
basic ideas
Turing machines are an abstract version of computing machines used to study what computers can do in theory
in this sense they provide a more imperative, low-level alternative to (e.g.) lambda calculus (which is functional and high-level)
the idea is to model the most basic steps of computation, so that it is inarguably simple, rote, and “mechanical”
definition and operation of a Turing Machine
a Turing machine (TM) consists of:
a “control” component (a finite function which specifies actions to be taken)
a “state”: like a DFA, the machine has a memory component which helps determine actions
a tape: this serves both as input (like with a DFA) and as memory and output
the tape is infinite (or at least potentially infinite) in one direction (blank after input)
the machine can decide at each step to move left or right, and can re-write symbols on the tape (a tape “head” does this)
there are finite sets associated with the possible states and the tape alphabet (which includes a blank)
one state is the initial one, and several may be designated as “final” or accepting states
when the machine runs, it just repeatedly reads the current tape symbol and performs simple actions in response
the tape “head” sits above a tape position
formally, a machine is a 7-tuple: (state set, tape alphabet, blank, input symbols, delta function, initial state, final states)
see the Wikipedia article or your notes for the details … but they won’t be tested on the final
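the definition above can be sketched in code — a minimal simulator, assuming the control is given as a table mapping (state, symbol) pairs to actions (the machine shown here, an increment-in-unary machine, is an illustrative example, not one from the notes):

```python
# a minimal sketch of a Turing machine simulator; the delta table plays
# the role of the "control" component from the 7-tuple definition
BLANK = "_"

def run_tm(delta, start, accept, tape):
    """delta maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right)."""
    tape = dict(enumerate(tape))       # sparse tape, blank elsewhere
    state, head = start, 0
    while state not in accept:         # halt on reaching a final state
        symbol = tape.get(head, BLANK)
        state, tape[head], move = delta[(state, symbol)]
        head += move
    # read the final tape contents back as a string
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip(BLANK)

# example machine: scan right past the 1s, append one more 1, halt
delta = {
    ("scan", "1"):   ("scan", "1", +1),
    ("scan", BLANK): ("done", "1", +1),
}

print(run_tm(delta, "scan", {"done"}, "111"))   # prints 1111
```

note that the simulator treats the delta table as data — this is the germ of the “universal machine” idea discussed below.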
the power of Turing Machines
we can do simple things like arithmetic on tallies or radix-based numerals
we can store and retrieve data
we can describe data structures in terms of numbers or in terms of symbols
we can (roughly) define and call methods or functions
importantly, we can implement general recursion (i.e., not bounded in advance)
TMs have the same power as Lambda Calculus, or combinators, or several other, similar formalisms
we believe this allows us to compute anything computable (Church’s Thesis), i.e., any computable function
not all functions (say from numbers to numbers) can be computed, because there are more functions than (descriptions of) computations to be done
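the counting argument behind this claim can be illustrated with a toy, finite version of Cantor’s diagonalization (the example is illustrative, not from the notes): given any list of functions on {0, …, n−1}, presented as rows of a value table, we can build a function that differs from the i-th listed one at input i, so no list can contain every function:

```python
# toy diagonalization: escape any finite list of functions-as-tables
def diagonal_escape(table):
    """table[i][j] is the value of the i-th listed function at input j;
    returns a new function (as a list of values) differing from each
    row along the diagonal."""
    return [table[i][i] + 1 for i in range(len(table))]

table = [
    [0, 1, 2],
    [5, 5, 5],
    [9, 0, 9],
]
new_fn = diagonal_escape(table)        # [1, 6, 10]
assert all(new_fn[i] != table[i][i] for i in range(3))
```

since TM descriptions can be listed (they are finite strings) but functions from numbers to numbers cannot, some function escapes every listing.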
variations on Turing Machines & “robustness”
many variations on TMs are possible, but none adds to the power of what they compute
(i.e., we can always simulate the new features with a simpler TM as above)
basic differences (when & how movement or writing is incurred) do not help
more tapes do not help (simulate many tapes on one)
non-determinism does not help (simulate a back-tracking exploration)
one reason we believe we have the “right” idea is that it doesn’t change when pushed or tweaked in modest ways, in various directions
TM simulation and universal machines
we can describe Turing Machines in any number of ways
notation: <M> is the description or representation of Turing Machine M
we can write such a description on another TM’s tape
without too much trouble (because TMs are so simple in operation), we can simulate the described machine
a TM which can read such a description for any other machine <M> and simulate it is called universal
such machines / simulations are a bit tedious to define, but not especially difficult
undecidable problems (incl. the Halting Problem)
some problems are called decidable: we can write a TM that always terminates and answers correctly
other problems are only “half decidable”: we can halt and say yes, but if the answer is “No”, we may loop forever (also called semi-decidable or recognizable)
there are even problems which are not recognizable: the machine may loop even if the answer is “Yes”
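the “halt on Yes, loop on No” behavior can be illustrated with a toy recognizer (the example problem here, the Collatz iteration, is illustrative and not from the notes): the function below halts with “yes” whenever the iteration from n reaches 1, but would loop forever on any n that never does — and, by design, it can never answer “no”:

```python
# toy illustration of semi-decidable ("recognizable") behavior:
# halts with True if the Collatz iteration from n reaches 1,
# loops forever otherwise -- it can only ever say "yes"
def reaches_one(n):
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return True

print(reaches_one(27))   # True
```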
reducibility and decidability of problems
if solving problem A would give us a solution to problem B, we say that B reduces to A
if a known undecidable problem reduces to another one, that second problem must also be undecidable
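the shape of a reduction can be sketched with a toy, fully decidable pair of problems (the problems and names here are illustrative, not from the notes): problem B (“is s a palindrome?”) reduces to problem A (“are two strings equal?”), because any solver for A yields a solver for B via a transformation of the input:

```python
# toy reduction: B ("is s a palindrome?") reduces to A ("are s, t equal?")
def solve_A(pair):                 # a decider for problem A
    s, t = pair
    return s == t

def transform(s):                  # the reduction: B-instance -> A-instance
    return (s, s[::-1])

def solve_B(s):                    # a decider for B, built only from A
    return solve_A(transform(s))

print(solve_B("racecar"), solve_B("turing"))   # True False
```

contrapositively, if B were undecidable, A would have to be too — which is how the Halting Problem is used to show other problems undecidable.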
the “domino-matching” problem (really: Post Correspondence Problem)
an easy-to-understand problem that is nevertheless undecidable
we have a cache or inventory of dominos, treated as a set (no duplicates here)
each domino is split into top and bottom halves, with a string in each
we lay down a sequence of dominos from the inventory, including as many duplicates as we need
we try to make the top and bottom strings equal, as we read across the sequence in order
the Post Correspondence Problem asks: given a set of dominos, is this possible?
it is undecidable, though recognizable: if the answer is “Yes”, a search will eventually say so … but if “No”, we may loop forever
intuitively, it’s too hard to say, in general, when we should stop looking
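that search can be sketched as a bounded brute-force breadth-first exploration (a sketch only: the real problem is semi-decidable, so the unbounded version of this loop may run forever on “No” instances; the example dominos are illustrative):

```python
from collections import deque

def pcp_search(dominos, max_len):
    """dominos: list of (top, bottom) string pairs; returns a list of
    indices whose tops and bottoms concatenate to equal strings, or
    None if no match exists up to length max_len."""
    queue = deque([i] for i in range(len(dominos)))
    while queue:
        seq = queue.popleft()
        top = "".join(dominos[i][0] for i in seq)
        bot = "".join(dominos[i][1] for i in seq)
        if top == bot:
            return seq                 # a matching sequence
        # extend only while one string is still a prefix of the other
        if len(seq) < max_len and (top.startswith(bot) or bot.startswith(top)):
            for i in range(len(dominos)):
                queue.append(seq + [i])
    return None

dominos = [("ab", "a"), ("b", "bb")]
print(pcp_search(dominos, 4))          # [0, 1]: tops "abb" == bottoms "abb"
```

the bound max_len is exactly what the real problem denies us: there is no computable bound on how long a shortest match can be, which is why the unbounded search only semi-decides.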
Rice’s Theorem
given a problem P about Turing Machines, we require the following qualifications:
for any M1, M2 where L(M1) = L(M2): <M1> ∈ P if and only if <M2> ∈ P
(i.e., only the language matters)
there exist machines M1 and M2 such that <M1> ∈ P and <M2> ∉ P
(i.e., P is non-trivial)
Rice’s Theorem: any such problem P is undecidable