Final Exam Study Guide—CS 465: Language, Logic and Computation

Exam Logistics
- the exam will be held Saturday, May 9, 2015, from 2-5 pm (as decreed by the College) in the usual place (Ford 204)
- the exam will be closed book, closed notes, closed computer/internet
- the exam will consist of several kinds of questions; typically:
  - 10-15 True-or-False questions at 1-2 points each
  - 8-12 short-answer questions at 4-6 points each
  - 4-5 longer-answer questions at 8-15 points each
- the emphasis will be on conceptual issues, in contrast to the labs, which emphasize programming practice
- you may also want to review the course home page, the homework problems, the lab write-ups, and your solutions to them
- a significant portion of the exam will be based on material which was also tested on the midterm exam
  - see the Midterm Exam Review and your own graded midterm

Monadic Parsing Combinators
- parsing is the problem of converting a string representation of a language into a term or tree-like representation
- in Haskell, we usually represent terms as values of an algebraic data type, built from constructors
- we can also parse directly from a string to a “meaning” value: e.g., from an arithmetic expression to a number
- some languages have “meanings” which are in some sense pending extra information (e.g., a function, which takes an input)
- in Haskell, we can use a parser monad to make parsing easier and more natural to express
- for the exam, you don’t need to know any of the details of how a monad works
- … but you should know that: “A parser for things is a function from Strings to lists of pairs of things and Strings.”
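
The slogan above translates almost literally into Haskell. A minimal sketch (the names `Parser`, `satisfy` and `digit` are illustrative, not necessarily those used in class):

```haskell
import Data.Char (isDigit)

-- "A parser for things is a function from Strings
--  to lists of pairs of things and Strings."
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

-- consume one character satisfying a predicate
satisfy :: (Char -> Bool) -> Parser Char
satisfy p = Parser $ \s -> case s of
  (c:cs) | p c -> [(c, cs)]
  _            -> []

-- parse a single decimal digit as its numeric value
digit :: Parser Int
digit = Parser $ \s -> [ (fromEnum c - fromEnum '0', rest)
                       | (c, rest) <- runParser (satisfy isDigit) s ]

main :: IO ()
main = print (runParser digit "7+2")   -- [(7,"+2")]
```

The empty list represents parse failure, and the leftover `String` in each pair is what a subsequent parser would consume next.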

Boolean Logic
- like arithmetic algebra, but over a finite (2-element) domain, and with a different sort of interpretation
- the 2-element domain of Boolean values (True, False) is interpreted as representing factual determination: whether a statement holds or not
- there are a total of 4 different distinguishable (total, strict) functions from a Boolean argument to a Boolean result:
  - identity, negation, constant True and constant False
- there are a total of 16 different distinguishable (total, strict) functions from pairs of Booleans to a Boolean result
- these combinatoric results generalize to yield 2^(2^n) functions of n arguments
  - (an inspection of the structure of truth tables (see below) yields this result: a table for n arguments has 2^n rows, and each row can independently output either of 2 values)
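
The count can be made concrete by enumerating the truth tables themselves; a sketch (not from the course materials) for the binary case n = 2:

```haskell
-- Every n-ary Boolean function is determined by its truth table:
-- 2^n rows, each with one of 2 possible outputs, so 2^(2^n) functions.
inputs :: [(Bool, Bool)]
inputs = [ (x, y) | x <- [False, True], y <- [False, True] ]  -- 2^2 = 4 rows

-- all possible output columns: one independent Bool choice per row
tables :: [[Bool]]
tables = sequence (replicate (length inputs) [False, True])

main :: IO ()
main = print (length tables)   -- 16, i.e. 2^(2^2)
```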
- we usually choose a small subset of operators to work with and define any others in terms of these
- a basis for Boolean Algebra is a set of operators from which all the others are definable
- typical choices include disjunction/OR (a sum of sorts), conjunction/AND (a product of sorts), implication and negation
- meaning can be ascribed to terms in the algebra in the usual recursive way (in Haskell, with a fold)
- we can exploit Haskell's Eq, Ord, Enum and Bounded classes to provide implementations for many binary operators in terms of orderings
  - e.g., conjunction is the min operation, disjunction the max operation
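
This works because Bool's derived Ord instance puts False < True. A quick sketch checking the min/max definitions against the Prelude operators (`and'` and `or'` are illustrative names):

```haskell
-- Under the ordering False < True, conjunction is min and
-- disjunction is max.
and' :: Bool -> Bool -> Bool
and' = min

or' :: Bool -> Bool -> Bool
or' = max

main :: IO ()
main = do
  -- agreement with (&&) and (||) on all four input pairs
  print (and [ and' x y == (x && y) | x <- [False, True], y <- [False, True] ])
  print (and [ or'  x y == (x || y) | x <- [False, True], y <- [False, True] ])
```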

Propositional and Predicate Logic
- logic is about the formal representation of informal arguments, about truth, consequence and the validity of arguments
- in some cases the “logical encoding” of natural-language terms as logical operators is problematic (e.g., exclusive or and implication)
- starting with Boolean algebra, we add (propositional) variables
  - each variable represents a statement or sentence (i.e., in informal, natural language) which is true or false
  - variables are taken as abstract (with no specific meaning), so terms are interpreted as functions relative to an interpretation of their variables
- in a Haskell implementation, we can give the meaning of a term as a function which takes a function parameter
  - the function parameter itself takes variables to a Boolean result (i.e., this is the interpretation function)
- a natural representation of functions-over-interpretations is in terms of truth tables:
  - we write one column for each variable (and one for the formula itself), with one row for each possible assignment of values to the variables
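
The meaning-relative-to-an-interpretation idea can be sketched as follows (the constructor and function names here are illustrative):

```haskell
-- propositional formulas as an algebraic data type
data Prop = Var String
          | Not Prop
          | And Prop Prop
          | Or  Prop Prop
  deriving Show

-- the interpretation function maps variable names to Booleans
type Interp = String -> Bool

-- meaning of a term, by recursion on the syntax (a fold)
eval :: Interp -> Prop -> Bool
eval i (Var x)   = i x
eval i (Not p)   = not (eval i p)
eval i (And p q) = eval i p && eval i q
eval i (Or  p q) = eval i p || eval i q

main :: IO ()
main = do
  let i x = x == "p"                        -- "p" is True, all else False
  print (eval i (Or  (Var "p") (Var "q")))  -- True
  print (eval i (And (Var "p") (Var "q")))  -- False
```

Running `eval` against every interpretation of the formula's variables produces exactly the rows of its truth table.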
- an alternative approach to semantics is based on formal proofs or derivations
  - these are tree structures built according to inference rules which are claimed to preserve truth in fundamental ways
- for suitable operator semantics and inference rules, we can show that the two forms of meaning coincide:
  - every formula provable from no assumptions is true in every interpretation (and vice versa)
  - every formula whose truth follows from some others can be proved from those assumptions (and vice versa)
- in predicate logic, we extend the language of logic to include predicates/relations and functions
  - sets of relations and functions are taken abstractly, i.e., no specific set, but rather some arbitrary set, as a parameter of the framework of logic
- first-order logic adds quantifiers (‘for all’ and ‘there exists’) which range over some domain of individual entities
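
When the domain of individuals is finite, the quantifiers can be realized directly as iteration over it; a sketch assuming a small illustrative domain:

```haskell
-- an assumed finite domain of individuals
domain :: [Int]
domain = [1 .. 5]

forAll, thereExists :: (Int -> Bool) -> Bool
forAll      p = all p domain   -- 'for all x in the domain, p x'
thereExists p = any p domain   -- 'there exists x with p x'

main :: IO ()
main = do
  print (forAll (> 0))       -- True: every element is positive
  print (thereExists even)   -- True: e.g., 2 is in the domain
  print (forAll even)        -- False: 1 is a counterexample
```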

Lambda Calculus and Combinators
- lambda calculus is a formal theory of functions as computations
- it can be seen as providing the “missing story” of how functions (and relations) in logic are actually realized
- it speaks to the tension between functions as:
  - functions proper, i.e., abstract (set-theoretic) relations between inputs and outputs
  - algorithms, i.e., rules-for-computation which have running-time (and space) behavior
  - programs, i.e., specific pieces of text which describe an algorithm or function
- lambda calculus is one of several computationally complete paradigms for describing algorithmic computation:
  - lambda calculus is a language-based approach using abstraction and application
  - recursion theory is built on arithmetic with open-ended recursive definitions
  - Turing machines are an abstract machine account which is (somewhat) closer to operational hardware
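
The whole language of the lambda calculus is just variables, abstraction and application; a sketch of its syntax and one step of beta reduction (names and the capture-naive substitution are my own simplifications):

```haskell
data Term = Var String          -- x
          | Lam String Term     -- \x -> body   (abstraction)
          | App Term Term       -- f a          (application)
  deriving (Eq, Show)

-- substitution b[x := s], naive about variable capture
-- (safe here because we only substitute closed terms)
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- one beta step at the root: (\x. b) a  ~>  b[x := a]
betaStep :: Term -> Maybe Term
betaStep (App (Lam x b) a) = Just (subst x a b)
betaStep _                 = Nothing

main :: IO ()
main =
  -- apply the identity \x. x to \y. y
  print (betaStep (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))))
```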

Type Systems
- type systems arise historically from the need to avoid paradoxes associated with self-reference (e.g., the set of all sets)
- in modern programming languages and systems, types serve both as a check against errors and as a high-level sketch of system structure
- formally, types are a second syntactic domain, in addition to the usual terms of a language
- simple type systems involve an algebra of operators (sums and products) and constants (unit and base types)
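
Haskell's standard types realize this algebra directly: `Either a b` is a sum, `(a, b)` is a product, and `()` is the unit. Counting inhabitants shows the arithmetic at work (a sketch; the synonym names are mine):

```haskell
type Sum a b     = Either a b   -- "a + b"
type Product a b = (a, b)       -- "a * b"
type Unit        = ()           -- "1": exactly one value

-- |Either Bool ()| = 2 + 1 = 3 values
sumValues :: [Either Bool ()]
sumValues = [Left False, Left True, Right ()]

-- |(Bool, Bool)| = 2 * 2 = 4 values
productValues :: [(Bool, Bool)]
productValues = [ (x, y) | x <- [False, True], y <- [False, True] ]

main :: IO ()
main = print (length sumValues, length productValues)   -- (3,4)
```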
- more complex type systems may involve variables, binding constructs (quantifiers and lambda abstraction) and recursion
- rules are given which relate terms to a type: terms are not considered well-formed unless they ‘have’ a type by the rules
- in most basic systems (without overt recursion), the reduction of well-typed terms always terminates
- there is a direct connection between typing rules and those of various (propositional/first-order) logics

Turing Machines and Computability/Decidability
- Turing machines are an abstract version of computing machines used to study what computers can do in theory
- in this sense they provide a more imperative, low-level alternative to (e.g.) lambda calculus (which is functional and high-level)
- the idea is to model the most basic steps of computation, so that it is inarguably simple, rote, and “mechanical”
- a Turing machine (TM) consists of:
  - a “control” component (a finite function which specifies actions to be taken)
  - a “state”: like a DFA, the machine has a memory component which helps determine actions
  - a tape: this serves both as input (as with a DFA) and as memory and output
    - the tape is infinite (or at least potentially infinite) in one direction (blank after the input)
- the machine can decide at each step to move left or right, and can re-write symbols on the tape (a tape “head” does this)
- there are finite sets associated with the possible states and the tape alphabet (which includes a blank)
- one state is the initial one, and several may be designated as “final” or accepting states
- when the machine runs, it just repeatedly reads the current tape symbol and performs simple actions in response
  - the tape “head” sits above a tape position
- formally, a machine is a 7-tuple: (state set, tape alphabet, blank, input symbols, delta function, initial state, final states)
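
A minimal simulator can make the 7-tuple concrete. A sketch only (the representation and the example machine are illustrative, not from class): the tape is split into the symbols left of the head (reversed) and the head symbol plus the rest, and the machine halts when the delta function has no entry for the current (state, symbol) pair.

```haskell
type State  = Int
type Symbol = Char
data Move   = L | R deriving (Eq, Show)

-- delta: (state, symbol read) -> (new state, symbol written, head move)
type Delta = [((State, Symbol), (State, Symbol, Move))]

-- one step; Nothing means no applicable transition (halt)
step :: Delta -> Symbol -> (State, String, String) -> Maybe (State, String, String)
step delta blank (q, ls, rs) = do
  let (s, rest) = case rs of { (c:cs) -> (c, cs); [] -> (blank, []) }
  (q', s', m) <- lookup (q, s) delta
  pure $ case m of
    R -> (q', s' : ls, rest)                         -- written symbol moves left of head
    L -> case ls of
           (l:ls') -> (q', ls', l : s' : rest)
           []      -> (q', [],  blank : s' : rest)   -- hit the left end: pad with blank

-- run to halting; accept iff halted in a final state
run :: Delta -> Symbol -> [State] -> (State, String, String) -> Bool
run delta blank finals cfg@(q, _, _) =
  case step delta blank cfg of
    Nothing   -> q `elem` finals
    Just cfg' -> run delta blank finals cfg'

-- example machine: flip every bit, accept on reaching the blank
flipper :: Delta
flipper = [ ((0, '0'), (0, '1', R))
          , ((0, '1'), (0, '0', R))
          , ((0, '_'), (1, '_', R)) ]

main :: IO ()
main = print (run flipper '_' [1] (0, [], "0110"))   -- True
```

Note that on an input containing a symbol with no transition (e.g. `"01a"`), the machine halts in a non-final state and rejects.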
- see the Wikipedia article or your notes for the details … but they won’t be tested on the final