CSE596, Fall 2014
Problem Set 7
Due Wed. Oct. 29
Lectures and Reading. Friday through Monday will be in section 5.4. Savitch’s Theorem
(5.13 in the text) will be postponed until after we cover Chapter 6 as stated before, and the
Immerman-Szelepcsényi Theorem (which is most of section 5.7) will be skipped for now. It
is worth looking ahead to sections 6.1–6.2 and even 6.3 for some more context. Then tackle
Section 5.5 and the corresponding section 2.6 of the ALR notes; note that ALR does the
hierarchy for DSPACE and NTIME while the text is the only source for DTIME (and I will
skip the ALR NTIME proof—it is also likely I will skim/skip section 5.6 and the parallel ALR
section 2.7 in order to have the full week of Nov. 4 for NP-completeness).
Assignment 7 — due Wednesday, Oct. 29
(A) Just one “study guide” problem this time, but a long one that is really a study guide
to appreciate diagonalization in complexity classes before we hit section 5.5 (ALR 2.6). Filling
in something the text elides on page 78, let us formally define a linear bounded automaton
(LBA) to be a single-tape Turing machine M that gets inputs x in the form ∧x$, and whose
instructions involving the ∧ and $ characters have (∧/∧, R) and ($/$, L), respectively, as their actions.
This syntactically enforces that M never moves outside the UNIX-style endmarkers and so
never uses an initially-blank cell. M can be nondeterministic (an NLBA) not just a DLBA.
Every DFA—including two-way DFAs which can go left as well as right—is a DLBA, and every
NFA is an NLBA. But LBAs are much more powerful than FAs. Combining the k-tapes-to-1
theorem (2.1/5.1 in the text) with the space-“compression” theorem (5.1) basically proves
that every language in DSPACE[O(n)] can be accepted by a DLBA, and every language
in NSPACE[O(n)] by an NLBA, so the names DLBA and NLBA (perhaps in fancy script
DLBA, NLBA or sans-serif DLBA, NLBA) are used for those classes too.
To get rolling, let us observe that it is easy to tell whether a single-tape Turing machine
Me is syntactically an LBA: just scan the instructions. Indeed, a DLBA can do this scan,
given a suitable textual format for the code e. Ditto whether Me is deterministic, so that it
is syntactically a DLBA. Now define the following “diagonal language for DLBAs”:
DDLBA = { e : Me is syntactically a DLBA but does not accept e }.
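For concreteness (an illustration only, not part of the problem), here is what such a syntactic
scan could look like, sketched in Python under the assumption, not taken from the text, that the
instructions of Me are given as 5-tuples (state, read, write, move, new_state), with '^' standing
in for the ∧ endmarker and move in {'L', 'R'}:

    # Sketch only: checks the syntactic LBA/DLBA conditions for a hypothetical
    # encoding of a single-tape TM as a list of 5-tuples
    # (state, read_symbol, write_symbol, move, new_state), with move in {'L', 'R'}.
    def is_syntactic_lba(instructions):
        # Reading the left endmarker '^' must rewrite it and move right;
        # reading the right endmarker '$' must rewrite it and move left.
        for (q, read, write, move, r) in instructions:
            if read == '^' and (write != '^' or move != 'R'):
                return False
            if read == '$' and (write != '$' or move != 'L'):
                return False
        return True

    def is_syntactic_dlba(instructions):
        # Deterministic: at most one instruction per (state, read_symbol) pair.
        seen = set()
        for (q, read, write, move, r) in instructions:
            if (q, read) in seen:
                return False
            seen.add((q, read))
        return is_syntactic_lba(instructions)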
Satisfy yourself that DDLBA is not in the class DLBA by the logic of diagonalization. But
then the problem is to find the flaw in the following “poof” of the “thro’em” that DDLBA is
accepted by the following DLBA M after all:
On input e, M first treats its tape as having two “tracks”—in the manner of study-guide problem (B) on Assgt. 6—and copies e to each track. The textual format of e on the
first track enables navigating the code of Me in manners we have seen before with TMs or
RAM/assembly programs—for instance, we can use a ‘!’ sign to keep track of the current
state and use a third track as scratch-space to match up state labels in instructions. (Such
details will be referenced in section 5.5/ALR 2.6.) The computation itself is done on the
second track. Since Me is syntactically a DLBA which M itself can check beforehand, the
computation on the second track never wanders outside the ∧, $ endmarkers either. Finally,
there is the problem that Me(e) might go into an infinite loop, and we don’t have space to
write down all of the configurations so as to detect that one has repeated. However, we can use
a fact that will be noted in the proof of Theorem 5.11 (via Corollary 5.8) in the text: even if
we have a nondeterministic s(n) space-bounded machine, if it has any accepting computation
at all on an input x of length n, it has one that accepts within 2^{O(s(n))} steps. So M can use a
4th track as a binary counter up to 2^{O(n)} and halt and reject if Me(e) goes on for more steps
than the counter allows—it must be in an infinite loop.
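(Aside, not part of the problem: the fact cited above can be seen via the standard configuration-counting
argument. A single-tape machine with state set Q and tape alphabet Γ that stays within s(n) tape cells
has at most |Q| · (s(n) + 2) · |Γ|^{s(n)} = 2^{O(s(n))} distinct configurations, counting the state, the
head position, and the tape contents. Any computation that runs longer must repeat a configuration, so a
shortest accepting computation takes at most 2^{O(s(n))} steps, and a deterministic machine that exceeds
the bound is looping forever.)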
If this reasoning looks airtight to you, then we have a proof by diagonalization that
DLBA ≠ DLBA. Happily (for our universe) there is an important flaw. Can you find it?
A few good sentences will suffice. The graded problem (1) continues the theme.
(1) A certification system for programs in some high-level language such as Java aims to
guarantee that certified programs P run safely—without uncaught exceptions, seg-faults, and
the like—for all possible inputs. The certification C of a program P must consist of extractable comments
in the code of P itself. (Java and other modern languages are evolving to use annotations for
similar purposes.) At bare minimum, such a system must have two properties:
(i) Whether a program carries a certification that is valid is decidable.
(ii) Every validly-certified program is total.
One of the “ominous” facts in the theory of computation is that even this minimalist approach
to guaranteeing program safety actually limits one’s potential for solving problems—and you
can demonstrate this from ideas already given in Chapters 3 and 5:
(a) Define a language D such that D is decidable, but there are no validly-certified programs
that decide D. This means that no given set of certified programs can decide all decidable
languages.
(b) Note that attaching a kn^k + k “clock” that demonstrably shuts off the Turing machine M
it is attached to (when the clock rings) is a way of creating a program that is certified to
halt within polynomial time. Every language in P can be recognized by such a “clocked”
program. Use this idea to define a language D that is decidable but does not belong
to P. (If your language D belonged to NP you’d be instantly famous and $1,000,000
richer, but it won’t—for reasons less subtle than the study-guide problem (A): “n^k” is
not polynomially bounded if k can vary with n. 15 + 6 = 21 pts.)
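For illustration only, not part of the graded work: a minimal sketch of the clock idea in Python,
assuming a hypothetical one-step simulator step(M, cfg) that returns (new_cfg, halted, accepted) and a
constructor initial_config(M, x); neither helper comes from the text.

    # Sketch only: a kn^k + k "clock" wrapped around a simulated machine M.
    # step and initial_config are hypothetical helpers supplied by the caller.
    def clocked_run(M, x, k, step, initial_config):
        n = len(x)
        budget = k * n**k + k        # the polynomial clock
        cfg = initial_config(M, x)
        for _ in range(budget):
            cfg, halted, accepted = step(M, cfg)
            if halted:
                return accepted      # M halted on its own within the budget
        return False                 # the clock rang: shut M off and reject

Whatever M does, the wrapper makes at most k*n^k + k calls to step, so it demonstrably halts; this is the
sense in which every language in P is recognized by some such clocked program.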
(2) For each of the following stated relationships between complexity classes, say
whether it is known to be true or not. In all cases where it is “known,” you can prove it
by applying theorems in Homer-Selman, section 5.4. Where you say “not known,” explain
why the theorem comparing the two complexity measures involved fails to yield the stated
relationship. Note that E = DTIME[2^{O(n)}] and EXP = DTIME[2^{n^{O(1)}}]. (5 × 6 = 30 pts., for
54 total on the set)
(a) DSPACE[(log n)^2] ⊆ P.
(b) PSPACE ⊆ EXP.
(c) NP ⊆ E.
(d) NP ⊆ DSPACE[n^2].
(e) NTIME[O(n)] ⊆ E.