
Differential Analytic Turing Automata


Author: Jon Awbrey

The task ahead is to chart a course from general ideas about transformational equivalence classes of graphs to a notion of differential analytic turing automata (DATA). It may be a while before we get within sight of that goal, but it will provide a better measure of motivation to name the thread after the envisioned end rather than the more homely starting place.

The basic idea is as follows. One has a set of graphs and a set of transformation rules, and each rule has the effect of transforming graphs into graphs. In the cases that we shall be studying, this set of transformation rules partitions the set of graphs into transformational equivalence classes (TECs).
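By way of a concrete toy model, here is a minimal sketch of how such a partition can be computed, assuming graphs coded as parenthesized strings and a pair of illustrative rewrite rules patterned on the arithmetic initials cited below; none of the names or rules in this sketch come from the original text.

def rewrites(s):
    """One-step rewrites of a parenthesized string (illustrative rules only)."""
    out = set()
    for i in range(len(s)):
        # Rule 1: an occurrence of "()()" may be condensed to "()".
        if s[i:i+4] == "()()":
            out.add(s[:i] + "()" + s[i+4:])
        # Rule 2: an occurrence of "(())" may be cancelled outright.
        if s[i:i+4] == "(())":
            out.add(s[:i] + s[i+4:])
    return out

def classes(exprs):
    """Partition exprs into transformational equivalence classes (TECs)."""
    parent = {e: e for e in exprs}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for e in exprs:
        for r in rewrites(e):
            if r in parent:
                parent[find(e)] = find(r)
    groups = {}
    for e in exprs:
        groups.setdefault(find(e), []).append(e)
    return list(groups.values())

print(classes(["()()", "()", "()(())", "(())", ""]))
# Two classes emerge: one containing "()()", "()", "()(())" and one containing "(())", "".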

There are many interesting excursions to be had here, but I will focus mainly on logical applications, and so the TECs I talk about will almost always have the character of logical equivalence classes (LECs).

An example that will figure heavily in the sequel is given by rooted trees as the species of graphs and a pair of equational transformation rules that derive from the graphical calculi of C.S. Peirce, as revived and extended by George Spencer Brown.

Here are the fundamental transformation rules, also referred to as the arithmetic axioms or, more precisely, the arithmetic initials.

[PERS Figure 01.jpg]   (1)
[PERS Figure 02.jpg]   (2)
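In text form, and as a reconstruction rather than a transcription of the figures above, the two arithmetic initials of the primary arithmetic are usually written as follows, the right side of the crossing rule being the blank or empty expression:

  ( ) ( )  =  ( )      (law of calling)
  ( ( ) )  =           (law of crossing)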

That should be enough to get started.

Cactus Language

I will be making use of the cactus language extension of Peirce's Alpha Graphs, so called because it uses a species of graphs that are usually called "cacti" in graph theory. The last exposition of the cactus syntax that I've written can be found here:

The representational and computational efficiency of the cactus language for the tasks that are usually associated with boolean algebra and propositional calculus makes it possible to entertain a further extension, to what we may call differential logic, because it develops this basic level of logic in the same way that differential calculus augments analytic geometry to handle change and diversity. There are several different introductions to differential logic that I have written and distributed across the Internet. You might start with the following couple of treatments:

I will draw on those previously advertised resources of notation and theory as needed, but right now I sense the need for some concrete examples.

Example 1

Let's say we have a system that is known by the name of its state space and we have a boolean state variable where

We observe for a while, relative to a discrete time frame, and we write down the following sequence of values for

“Aha!” we say, and think we see the way of things, writing down the rule where is the next state after and is the negation of in boolean logic.

Another way to detect patterns is to write out a table of finite differences. For this example, we would get:

And of course, all the higher order differences are zero.
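Such a table is easy to generate mechanically. Here is a minimal sketch, assuming an alternating sequence of the sort described above, that computes the first few finite differences over GF(2), each difference being the exclusive disjunction of consecutive values:

def differences(xs, order=3):
    """Return a boolean sequence together with its first few finite differences."""
    rows = [list(xs)]
    for _ in range(order):
        prev = rows[-1]
        rows.append([a ^ b for a, b in zip(prev, prev[1:])])
    return rows

# An alternating observation such as 0, 1, 0, 1, ... has first difference
# constantly 1 and all higher order differences 0.
for row in differences([0, 1, 0, 1, 0, 1]):
    print(row)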

This leads to thinking of as having an extended state and this additional language gives us the facility of describing state transitions in terms of the various orders of differences. For example, the rule can now be expressed by the rule
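For instance, since the first difference is the boolean sum of successive states, saying that each state is the negation of the state before it comes to the same thing as saying that the first difference is equal to 1 at every moment, which is just what the difference table above records.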

There is a more detailed account of differential logic in the following paper:

For future reference, here are a couple of handy rosetta stones for translating back and forth between different notations for the boolean functions where

Example 2

For a slightly more interesting example, let's suppose that we have a dynamic system that is known by its state space and we have a boolean state variable In addition, we are given an initial condition and a law

The initial condition has two cases:

Here is a table of the two trajectories or orbits that we get by starting from each of the two permissible initial states and staying within the constraints of the dynamic law


Note that the state that is, is a stable attractor for both orbits.

Further discussion of this example, complete with charts and graphs, can be found at this location:

Example 3

One more example may serve to suggest just how much dynamic complexity can be built on a universe of discourse that has but a single logical feature at its base. But first, there are a few more elements of general notation that we'll need to describe finite dimensional universes of discourse and the qualitative dynamics that we envision occurring in them.

Let be the alphabet of logical features or variables that are used to describe the -dimensional universe of discourse One may picture a venn diagram whose overlapping “circles” are labeled with the feature names in Staying with this picture, one visualizes the universe of discourse as having two layers:

  1. The set of points or cells — the latter used in another sense of the word than when we speak of cellular automata.
  2. The set of propositions, boolean-valued functions, or maps from to

Thus we picture the universe of discourse as an ordered pair having points in the underlying space and propositions in the function space
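To fix the counts involved: an alphabet of n features gives 2^n cells in the underlying space and 2^(2^n) propositions in the function space, so a 1-feature universe has 2 cells and 4 propositions, while a 2-feature universe has 4 cells and 16 propositions.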

A more complete discussion of these notations can be found here:

Now, to the Example.

Once again, let us begin with a 1-feature alphabet In the discussion that follows I will consider a class of trajectories that are ruled by the constraint that for all greater than some fixed and I will indulge in the use of some picturesque language to describe salient classes of such curves. Given the finite order condition, there is a highest order non-zero difference that is exhibited at each point in the course of any determinate trajectory. Relative to any point of the corresponding orbit or curve, let us call this highest order differential feature the drive at that point. Curves of constant drive are then referred to as gear curves.

One additional piece of notation will be needed here. Starting from the base alphabet we define and notate as the order extended alphabet over

Let us now consider the family of 4th gear curves through the extended space These are the trajectories that are generated subject to the law where it is understood in making such a statement that all higher order differences are equal to

Since and all higher order are fixed, the entire dynamics can be plotted in the extended space Thus, there is just enough room in a planar venn diagram to plot both orbits and to show how they partition the points of As it turns out, there are exactly two possible orbits, of eight points each, as illustrated in Figures 16-a and 16-b. See here:

Here are the 4th gear curves over the 1-feature universe arranged in the form of tabular arrays, listing the extended state vectors as they occur in one cyclic period of each orbit.


In this arrangement, the temporal ordering of states can be reckoned by a kind of parallel round-up rule. Specifically, if is any pair of adjacent digits in a state vector then the value of in the next state is the addition being taken mod 2, of course.
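Here is a minimal simulation of the fourth gear curves, written as a sketch under the assumption that the extended state is read as the vector (x, dx, d2x, d3x) with the fourth difference held at 1; the round-up rule then says that each entry is bumped by the entry of next higher order, mod 2.

def step(state):
    """One time step of a fourth gear curve: each difference is incremented
    by the next higher difference, mod 2, with d4x held constant at 1."""
    x, dx, d2x, d3x = state
    d4x = 1
    return (x ^ dx, dx ^ d2x, d2x ^ d3x, d3x ^ d4x)

def orbit(state):
    """Collect the states visited before the trajectory first repeats."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return seen

print(orbit((0, 0, 0, 0)))   # one orbit of eight extended states
print(orbit((0, 0, 1, 1)))   # the other orbit of eight extended states

Together the two printed orbits cover all sixteen points of the extended space, in agreement with the two orbits of eight points each mentioned above.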

A more complete discussion of this arrangement is given here:

Example 4

I am going to tip-toe in silence/consilience past many questions of a philosophical nature/nurture that might be asked at this juncture, no doubt to revisit them at some future opportunity/importunity, however the cases happen to align in the course of their inevitable fall.

Instead, let's follow the adage to “keep it concrete and simple”, taking up the consideration of an incrementally more complex example, but having a slightly more general character than the orders of sequential transformations that we've been discussing up to this point.

The types of logical transformations that I have in mind can be thought of as transformations of discourse because they map a universe of discourse into a universe of discourse by way of logical equations between the qualitative features or logical variables in the source and target universes.

The sequential transformations or state transitions that we have been considering so far are actually special cases of these more general logical transformations, specifically, they are the ones that have a single universe of discourse, as it happens to exist at different moments in time, in the role of both the source and the target universes of the transformation in question.

Onward and upward to Flatland, the differential analysis of transformations between 2-dimensional universes of discourse.

Consider the transformation from the universe to the universe that is defined by this system of equations:

The parenthetical expressions on the right are the cactus forms for the boolean functions that correspond to inclusive disjunction and logical equivalence, respectively. Table 1 summarizes the basic elements of the cactus notation for propositional logic.


[Table 1 is not reproduced here. Its rows pair each cactus graph with its text expression, for the forms: A, (A), ABC, ((A)(B)(C)), (A(B)), ((A,B)), (A,B,C), ((A),(B),(C)), (A,(B,C)), (X,(A),(B),(C)).]
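For readers who want to experiment, here is a small evaluator, a sketch of my own, for the fragment of cactus syntax used in this article, under the usual reading: juxtaposition is conjunction and a bracketed list is true exactly when exactly one of its arguments is false. On that reading ((u)(v)) comes out as inclusive disjunction and ((u, v)) as logical equivalence, in agreement with the remark above.

def parse(text):
    """Parse a cactus expression: variables are alphanumeric strings,
    juxtaposition is conjunction, and (e1, ..., ek) is a bracketed list."""
    pos = 0
    def expr():
        nonlocal pos
        items = []
        while pos < len(text) and text[pos] not in "),":
            if text[pos] == "(":
                pos += 1
                args = [expr()]
                while text[pos] == ",":
                    pos += 1
                    args.append(expr())
                pos += 1                      # consume the closing ")"
                items.append(("bracket", args))
            elif text[pos].isspace():
                pos += 1
            else:
                start = pos
                while pos < len(text) and text[pos].isalnum():
                    pos += 1
                items.append(("var", text[start:pos]))
        return ("conj", items)
    return expr()

def value(node, env):
    """Evaluate a parsed cactus expression in an environment of 0/1 values."""
    kind, body = node
    if kind == "var":
        return env[body]
    if kind == "conj":
        return all(value(b, env) for b in body)
    # A bracketed list is true iff exactly one of its arguments is false.
    return sum(1 for b in body if not value(b, env)) == 1

for u in (0, 1):
    for v in (0, 1):
        env = {"u": u, "v": v}
        print(u, v,
              int(value(parse("((u)(v))"), env)),   # inclusive disjunction of u, v
              int(value(parse("((u, v))"), env)))   # logical equivalence of u, v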


The component notation allows us to give a name and a type to this transformation, and permits us to define it by means of the compact description that follows:

The information that defines the logical transformation can be represented in the form of a truth table, as shown below.
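On the reading of the two cactus forms just given, the truth table can be regenerated with a few lines of code; this is a sketch, not a transcription of the original table.

print(" u v | f g")
for u in (0, 1):
    for v in (0, 1):
        f = u | v              # ((u)(v)) : inclusive disjunction
        g = int(u == v)        # ((u, v)) : logical equivalence
        print(f" {u} {v} | {f} {g}")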



A more complete framework of discussion and a fuller development of this example can be found in the neighborhood of the following site:

Consider the transformation of textual elements (TOTE) in progress:

Taken as a transformation from the universe to the universe this is a particular type of formal object, and it can be studied at that level of abstraction until the chickens come home to roost, as they say, but when the time comes to count those chickens, if you will, the terms of artifice that we use to talk about abstract objects, almost as if we actually knew what we were talking about, need to be fully fledged or fleshed out with extra bits of interpretive data (BOIDs).

And so, to decompress the story, the TOTE that we use to convey the FOMA has to be interpreted before it can be applied to anything that actually puts supper on the table, so to speak.

What are some of the ways that an abstract logical transformation like gets interpreted in the setting of a concrete application?

Mathematical parlance comes part way to the rescue here and tosses us the line that a transformation of syntactic signs can be interpreted in either one of two ways, as an alias or as an alibi.

When we consider a transformation in the alias interpretation, we are merely changing the terms that we use to describe what may very well be, to some approximation, the very same things.

For example, in some applications the discursive universes and are best understood as diverse frames, instruments, reticules, scopes, or templates, that we adopt for the sake of viewing from variant perspectives what we conceive to be roughly the same underlying objects.

When we consider a transformation in the alibi interpretation, we are thinking of the objective things as objectively moving around in space or changing their qualitative characteristics. There are times when we think of this alibi transformation as taking place in a dimension of time, and then there are times when time is not an object.

For example, in some applications the discursive universes and are actually the same universe, and what we have is a frame where is the next state of and is the next state of notated as and This permits us to rewrite the transformation as follows:

All in all, then, we have three different ways in general of applying or interpreting a transformation of discourse, that we might sum up as one brand of alias and two brands of alibi, all together, the Elseword, the Elsewhere, and the Elsewhen.

No more angels on pinheads, the brass tacks next time.

Differential Analysis

It is time to formulate the differential analysis of a logical transformation, or a mapping of discourse. It is wise to begin with the first order differentials.

We are considering an abstract logical transformation that can be interpreted in a number of different ways. Let's fix on a couple of major variants that might be indicated as follows:

is just one example among — well, now that I think of it — how many other logical transformations from the same source to the same target universe? In the light of that question, maybe it would be advisable to contemplate the character of within the fold of its most closely akin transformations.

Given the alphabets and along with the corresponding universes of discourse and how many logical transformations of the general form are there?

Since and can be any propositions of the type there are choices for each of the maps and and thus there are different mappings altogether of the form

The set of all functions of a given type is customarily denoted by placing its type indicator in parentheses, in the present instance writing and so the cardinality of this function space can most conveniently be summed up by writing:
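Worked out numerically: each component map sends the 4 cells of the source universe to a boolean value, so there are 2^4 = 16 choices for each component, and 16 x 16 = 2^8 = 256 transformations of the given form altogether.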

Given any transformation of this type, the (first order) differential analysis of is based on the definition of a couple of further transformations, derived by way of operators on that ply between the (first order) extended universes, and of own source and target universes.

First, the enlargement map (or the secant transformation) is defined by the following pair of component equations:

Second, the difference map (or the chordal transformation) is defined in a component-wise fashion as the boolean sum of the initial proposition and the enlarged or shifted proposition for in accord with the following pair of equations:

Maintaining a strict analogy with ordinary difference calculus would perhaps have us write but the sum and difference operations are the same thing in boolean arithmetic. It is more often natural in the logical context to consider an initial proposition then to compute the enlargement and finally to determine the difference so we let the variant order of terms reflect this sequence of considerations.
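Concretely, and taking the figure captions below as the guide (Ef = ((u + du)(v + dv)) and Df = f + Ef), the two derived maps can be computed pointwise over the extended universe as in the following sketch, where the sum is exclusive disjunction:

from itertools import product

def f(u, v): return u | v              # f = ((u)(v)) : inclusive disjunction
def g(u, v): return int(u == v)        # g = ((u, v)) : logical equivalence

def E(h):
    """Enlargement map: evaluate h at the shifted arguments (u + du, v + dv)."""
    return lambda u, v, du, dv: h(u ^ du, v ^ dv)

def D(h):
    """Difference map: boolean sum (exclusive disjunction) of h and Eh."""
    return lambda u, v, du, dv: h(u, v) ^ E(h)(u, v, du, dv)

# Tabulate Ef and Df over the extended universe; Eg and Dg go the same way.
for u, v, du, dv in product((0, 1), repeat=4):
    print(u, v, du, dv, "Ef =", E(f)(u, v, du, dv), "Df =", D(f)(u, v, du, dv))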

Given these general considerations about the operators and let's return to particular cases, and carry out the first order analysis of the transformation

By way of getting our feet back on solid ground, let's crank up our current case of a transformation of discourse, with concrete type or abstract type and let it spin through a sufficient number of turns to see how it goes, as viewed under the scope of what is probably its most straightforward view, as an elsewhen map

In the upshot there are two basins of attraction, the state and the state with the orbit making up an isolated basin and the orbit leading to the basin

On first examination of our present example we made a likely guess at a form of rule that would account for the finite protocol of states that we observed the system passing through, as spied in the light of its boolean state variable and that rule is well-formulated in any of these styles of notation:

In the current example, we already know in advance the program that generates the state transitions, and it is a rule of the following equivalent and easily derivable forms:

Well, the last one is not such a fall off a log, but that is exactly the purpose for which we have been developing all of the foregoing machinations.

Here is what I got when I just went ahead and calculated the finite differences willy-nilly:

To be honest, I have never thought of trying to hack the problem in such a brute-force way until just now, and so I know enough to expect a not inappreciable probability of error about all that I've taken the risk to write out here, but let me forge ahead and see what I can see.

What we are looking for is — one rule to rule them all, a rule that applies to every state and works every time.

What we see at first sight in the tables above are patterns of differential features that attach to the states in each orbit of the dynamics. Looked at locally to these orbits, the isolated fixed point at is no problem, as the rule describes it pithily enough. When it comes to the other orbit, the first thing that comes to mind is to write out the law

Symbolic Method

It ought to be clear at this point that we need a more systematic symbolic method for computing the differentials of logical transformations, using the term differential in a loose way at present for all sorts of finite differences and derivatives, leaving it to another discussion to sharpen up its more exact technical senses.

For convenience of reference, let's recast our current example in the following form:

In their application to this logical transformation the operators and respectively produce the enlarged map and the difference map whose components can be given as follows.

But these initial formulas are purely definitional, and help us little to understand either the purpose of the operators or the significance of the results. Working symbolically, let's apply a more systematic method to the separate components of the mapping

A sketch of this work is presented in the following series of Figures, where each logical proposition is expanded over the basic cells of the 2-dimensional universe of discourse

Computation Summary for Logical Disjunction

The venn diagram in Figure 1.1 shows how the proposition can be expanded over the universe of discourse to produce a logically equivalent exclusive disjunction, namely,

o---------------------------------------o
|                                       |
|                   o                   |
|                  /%\                  |
|                 /%%%\                 |
|                /%%%%%\                |
|               o%%%%%%%o               |
|              /%\%%%%%/%\              |
|             /%%%\%%%/%%%\             |
|            /%%%%%\%/%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          /%\%%%%%/%\%%%%%/%\          |
|         /%%%\%%%/%%%\%%%/%%%\         |
|        /%%%%%\%/%%%%%\%/%%%%%\        |
|       o%%%%%%%o%%%%%%%o%%%%%%%o       |
|      /%\%%%%%/%\%%%%%/%\%%%%%/%\      |
|     /%%%\%%%/%%%\%%%/%%%\%%%/%%%\     |
|    /%%%%%\%/%%%%%\%/%%%%%\%/%%%%%\    |
|   o%%%%%%%o%%%%%%%o%%%%%%%o%%%%%%%o   |
|   |\%%%%%/%\%%%%%/ \%%%%%/%\%%%%%/|   |
|   | \%%%/%%%\%%%/   \%%%/%%%\%%%/ |   |
|   |  \%/%%%%%\%/     \%/%%%%%\%/  |   |
|   |   o%%%%%%%o       o%%%%%%%o   |   |
|   |   |\%%%%%/ \     / \%%%%%/|   |   |
|   |   | \%%%/   \   /   \%%%/ |   |   |
|   | u |  \%/     \ /     \%/  | v |   |
|   o---+---o       o       o---+---o   |
|       |    \     / \     /    |       |
|       |     \   /   \   /     |       |
|       | du   \ /     \ /   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.1.  f = ((u)(v))

Figure 1.2 expands over to give:

o---------------------------------------o
|                                       |
|                   o                   |
|                  /%\                  |
|                 /%%%\                 |
|                /%%%%%\                |
|               o%%%%%%%o               |
|              /%\%%%%%/%\              |
|             /%%%\%%%/%%%\             |
|            /%%%%%\%/%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          /%\%%%%%/ \%%%%%/%\          |
|         /%%%\%%%/   \%%%/%%%\         |
|        /%%%%%\%/     \%/%%%%%\        |
|       o%%%%%%%o       o%%%%%%%o       |
|      /%\%%%%%/%\     /%\%%%%%/%\      |
|     /%%%\%%%/%%%\   /%%%\%%%/%%%\     |
|    /%%%%%\%/%%%%%\ /%%%%%\%/%%%%%\    |
|   o%%%%%%%o%%%%%%%o%%%%%%%o%%%%%%%o   |
|   |\%%%%%/ \%%%%%/%\%%%%%/ \%%%%%/|   |
|   | \%%%/   \%%%/%%%\%%%/   \%%%/ |   |
|   |  \%/     \%/%%%%%\%/     \%/  |   |
|   |   o       o%%%%%%%o       o   |   |
|   |   |\     /%\%%%%%/%\     /|   |   |
|   |   | \   /%%%\%%%/%%%\   / |   |   |
|   | u |  \ /%%%%%\%/%%%%%\ /  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.2.  Ef = ((u + du)(v + dv))

Figure 1.3 expands over to produce:

o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              / \     / \              |
|             /   \   /   \             |
|            /     \ /     \            |
|           o       o       o           |
|          / \     /%\     / \          |
|         /   \   /%%%\   /   \         |
|        /     \ /%%%%%\ /     \        |
|       o       o%%%%%%%o       o       |
|      / \     / \%%%%%/ \     / \      |
|     /   \   /   \%%%/   \   /   \     |
|    /     \ /     \%/     \ /     \    |
|   o       o       o       o       o   |
|   |\     /%\     /%\     /%\     /|   |
|   | \   /%%%\   /%%%\   /%%%\   / |   |
|   |  \ /%%%%%\ /%%%%%\ /%%%%%\ /  |   |
|   |   o%%%%%%%o%%%%%%%o%%%%%%%o   |   |
|   |   |\%%%%%/%\%%%%%/%\%%%%%/|   |   |
|   |   | \%%%/%%%\%%%/%%%\%%%/ |   |   |
|   | u |  \%/%%%%%\%/%%%%%\%/  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.3.  Df = f + Ef

I'll break off here in case anyone wants to try doing the work for on their own.

Computation Summary for Logical Equality

The venn diagram in Figure 2.1 shows how the proposition can be expanded over the universe of discourse to produce a logically equivalent exclusive disjunction, namely,

o---------------------------------------o
|                                       |
|                   o                   |
|                  /%\                  |
|                 /%%%\                 |
|                /%%%%%\                |
|               o%%%%%%%o               |
|              /%\%%%%%/%\              |
|             /%%%\%%%/%%%\             |
|            /%%%%%\%/%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          / \%%%%%/%\%%%%%/ \          |
|         /   \%%%/%%%\%%%/   \         |
|        /     \%/%%%%%\%/     \        |
|       o       o%%%%%%%o       o       |
|      / \     / \%%%%%/ \     / \      |
|     /   \   /   \%%%/   \   /   \     |
|    /     \ /     \%/     \ /     \    |
|   o       o       o       o       o   |
|   |\     / \     /%\     / \     /|   |
|   | \   /   \   /%%%\   /   \   / |   |
|   |  \ /     \ /%%%%%\ /     \ /  |   |
|   |   o       o%%%%%%%o       o   |   |
|   |   |\     /%\%%%%%/%\     /|   |   |
|   |   | \   /%%%\%%%/%%%\   / |   |   |
|   | u |  \ /%%%%%\%/%%%%%\ /  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/%\%%%%%/    |       |
|       |     \%%%/%%%\%%%/     |       |
|       | du   \%/%%%%%\%/   dv |       |
|       o-------o%%%%%%%o-------o       |
|                \%%%%%/                |
|                 \%%%/                 |
|                  \%/                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.1.  g = ((u, v))

Figure 2.2 expands over to give:

o---------------------------------------o
|                                       |
|                   o                   |
|                  /%\                  |
|                 /%%%\                 |
|                /%%%%%\                |
|               o%%%%%%%o               |
|              / \%%%%%/ \              |
|             /   \%%%/   \             |
|            /     \%/     \            |
|           o       o       o           |
|          /%\     /%\     /%\          |
|         /%%%\   /%%%\   /%%%\         |
|        /%%%%%\ /%%%%%\ /%%%%%\        |
|       o%%%%%%%o%%%%%%%o%%%%%%%o       |
|      / \%%%%%/ \%%%%%/ \%%%%%/ \      |
|     /   \%%%/   \%%%/   \%%%/   \     |
|    /     \%/     \%/     \%/     \    |
|   o       o       o       o       o   |
|   |\     /%\     /%\     /%\     /|   |
|   | \   /%%%\   /%%%\   /%%%\   / |   |
|   |  \ /%%%%%\ /%%%%%\ /%%%%%\ /  |   |
|   |   o%%%%%%%o%%%%%%%o%%%%%%%o   |   |
|   |   |\%%%%%/ \%%%%%/ \%%%%%/|   |   |
|   |   | \%%%/   \%%%/   \%%%/ |   |   |
|   | u |  \%/     \%/     \%/  | v |   |
|   o---+---o       o       o---+---o   |
|       |    \     /%\     /    |       |
|       |     \   /%%%\   /     |       |
|       | du   \ /%%%%%\ /   dv |       |
|       o-------o%%%%%%%o-------o       |
|                \%%%%%/                |
|                 \%%%/                 |
|                  \%/                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.2.  Eg = ((u + du, v + dv))

Figure 2.3 expands over to yield the form:

o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              /%\     /%\              |
|             /%%%\   /%%%\             |
|            /%%%%%\ /%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          /%\%%%%%/ \%%%%%/%\          |
|         /%%%\%%%/   \%%%/%%%\         |
|        /%%%%%\%/     \%/%%%%%\        |
|       o%%%%%%%o       o%%%%%%%o       |
|      / \%%%%%/ \     / \%%%%%/ \      |
|     /   \%%%/   \   /   \%%%/   \     |
|    /     \%/     \ /     \%/     \    |
|   o       o       o       o       o   |
|   |\     /%\     / \     /%\     /|   |
|   | \   /%%%\   /   \   /%%%\   / |   |
|   |  \ /%%%%%\ /     \ /%%%%%\ /  |   |
|   |   o%%%%%%%o       o%%%%%%%o   |   |
|   |   |\%%%%%/%\     /%\%%%%%/|   |   |
|   |   | \%%%/%%%\   /%%%\%%%/ |   |   |
|   | u |  \%/%%%%%\ /%%%%%\%/  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.3.  Dg = g + Eg

Differential : Locally Linear Approximation

 

'Tis a derivative from me to mine,
And only that I stand for.

  Winter's Tale, 3.2.43–44

We've talked about differentials long enough that I think it's way past time we met with some.

When the term is used in its more exact sense, a differential is a locally linear approximation to a function; in the context of this logical discussion, then, it is a locally linear approximation to a proposition.

Recall the form of the current example:

To speed things along, I will skip a mass of motivating discussion and just exhibit the simplest form of a differential for the current example of a logical transformation after which the majority of the easiest questions will have been answered in visually intuitive terms.

For we have and so we can proceed componentwise, patching the pieces back together at the end.

We have prepared the ground already by computing these terms:

As a matter of fact, computing the symmetric differences and has already taken care of the localizing part of the task by subtracting out the forms of and from the forms of and respectively. Thus all we have left to do is to decide what linear propositions best approximate the difference maps and respectively.

This raises the question: What is a linear proposition?

The answer that makes the most sense in this context is this: A proposition is just a boolean-valued function, so a linear proposition is a linear function into the boolean space

In particular, the linear functions that we want will be linear functions in the differential variables and

As it turns out, there are just four linear propositions in the associated differential universe These are the propositions that are commonly denoted: in other words:
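Spelled out, the linear functions of the differential pair, in the sense of sums over GF(2) with no constant term, are exactly the four propositions 0, du, dv, and du + dv.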

Notions of Approximation

 

for equalities are so weighed
that curiosity in neither can
make choice of either's moiety.

  King Lear, Sc.1.5–7 (Quarto)
 

for qualities are so weighed
that curiosity in neither can
make choice of either's moiety.

  King Lear, 1.1.5–6 (Folio)

Justifying a notion of approximation is a little more involved in general, and especially in these discrete logical spaces, than it would be expedient for people in a hurry to tangle with right now. I will just say that there are naive or obvious notions and there are sophisticated or subtle notions that we might choose among. The latter would engage us in trying to construct proper logical analogues of Lie derivatives, and so let's save that for when we have become subtle or sophisticated or both. Against or toward that day, as you wish, let's begin with an option in plain view.

Figure 1.4 illustrates one way of ranging over the cells of the underlying universe and selecting at each cell the linear proposition in that best approximates the patch of the difference map that is located there, yielding the following formula for the differential

o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              / \     / \              |
|             /   \   /   \             |
|            /     \ /     \            |
|           o       o       o           |
|          / \     / \     / \          |
|         /   \   /   \   /   \         |
|        /     \ /     \ /     \        |
|       o       o       o       o       |
|      / \     /%\     /%\     / \      |
|     /   \   /%%%\   /%%%\   /   \     |
|    /     \ /%%%%%\ /%%%%%\ /     \    |
|   o       o%%%%%%%o%%%%%%%o       o   |
|   |\     /%\%%%%%/ \%%%%%/%\     /|   |
|   | \   /%%%\%%%/   \%%%/%%%\   / |   |
|   |  \ /%%%%%\%/     \%/%%%%%\ /  |   |
|   |   o%%%%%%%o       o%%%%%%%o   |   |
|   |   |\%%%%%/%\     /%\%%%%%/|   |   |
|   |   | \%%%/%%%\   /%%%\%%%/ |   |   |
|   | u |  \%/%%%%%\ /%%%%%\%/  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.4.  df = linear approx to Df

Figure 2.4 illustrates one way of ranging over the cells of the underlying universe and selecting at each cell the linear proposition in that best approximates the patch of the difference map that is located there, yielding the following formula for the differential

o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              /%\     /%\              |
|             /%%%\   /%%%\             |
|            /%%%%%\ /%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          /%\%%%%%/ \%%%%%/%\          |
|         /%%%\%%%/   \%%%/%%%\         |
|        /%%%%%\%/     \%/%%%%%\        |
|       o%%%%%%%o       o%%%%%%%o       |
|      / \%%%%%/ \     / \%%%%%/ \      |
|     /   \%%%/   \   /   \%%%/   \     |
|    /     \%/     \ /     \%/     \    |
|   o       o       o       o       o   |
|   |\     /%\     / \     /%\     /|   |
|   | \   /%%%\   /   \   /%%%\   / |   |
|   |  \ /%%%%%\ /     \ /%%%%%\ /  |   |
|   |   o%%%%%%%o       o%%%%%%%o   |   |
|   |   |\%%%%%/%\     /%\%%%%%/|   |   |
|   |   | \%%%/%%%\   /%%%\%%%/ |   |   |
|   | u |  \%/%%%%%\ /%%%%%\%/  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.4.  dg = linear approx to Dg

Well, that was easy, seeing as how is already linear at each locus.
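By way of a cross-check on Figures 1.4 and 2.4, here is a sketch that computes one natural candidate for the locally linear proxy at each cell, namely the one built from boolean partial derivatives; whether this coincides exactly with the selection pictured in the figures is an assumption on my part.

from itertools import product

def f(u, v): return u | v             # f = ((u)(v)) : inclusive disjunction
def g(u, v): return int(u == v)       # g = ((u, v)) : logical equivalence

def partial(h, i, u, v):
    """Boolean partial derivative of h in its i-th argument at the cell (u, v)."""
    args = [u, v]
    flipped = list(args)
    flipped[i] ^= 1
    return h(*args) ^ h(*flipped)

# Locally linear proxy at each cell, on the partial-derivative reading:
# dh at (u, v) is (dh/du) du + (dh/dv) dv, with + as exclusive disjunction.
for u, v in product((0, 1), repeat=2):
    print((u, v),
          "df =", f"{partial(f, 0, u, v)}*du + {partial(f, 1, u, v)}*dv",
          "dg =", f"{partial(g, 0, u, v)}*du + {partial(g, 1, u, v)}*dv")

On this reading dg comes out as du + dv at every cell, which squares with the remark that the difference map for g is already linear at each locus.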

Analytic Series

We have been conducting the differential analysis of the logical transformation defined as and this means starting with the extended transformation and breaking it into an analytic series, and so on until there is nothing left to analyze any further.

As a general rule, one proceeds by way of the following stages:

In our analysis of the transformation we carried out Step 1 in the more familiar form and we have just reached Step 2 in the form where is the residual term that remains for us to examine next.
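Reading the elided relations off the figure captions, where Df = f + Ef and rf = Df + df, and likewise for g, the first couple of stages of the series come to this, with the sum taken as exclusive disjunction throughout:

  Ef  =  f + Df          (Step 1)
  Df  =  df + rf         (Step 2)

so that Ef = f + df + rf, and similarly Eg = g + dg + rg.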

Note. I am trying to give a quick overview here, and this forces me to omit many picky details. The picky reader may wish to consult the more detailed presentation of this material at the following locations:

Let's push on with the analysis of the transformation:

For ease of comparison and computation, I will collect the Figures that we need for the remainder of the work together on one page.

Computation Summary for Logical Disjunction

Figure 1.1 shows the expansion of over to produce the expression:

Figure 1.2 shows the expansion of over to produce the expression:

In general, tells you what you would have to do, from wherever you are in the universe if you want to end up in a place where is true. In this case, where the prevailing proposition is the indication of tells you this: If and are both true where you are, then just don't change both and at once, and you will end up in a place where is true.

Figure 1.3 shows the expansion of over to produce the expression:

In general, tells you what you would have to do, from wherever you are in the universe if you want to bring about a change in the value of that is, if you want to get to a place where the value of is different from what it is where you are. In the present case, where the reigning proposition is the term of tells you this: If and are both true where you are, then you would have to change both and in order to reach a place where the value of is different from what it is where you are.

Figure 1.4 approximates by the linear form that expands over as follows:

Figure 1.5 shows what remains of the difference map when the first order linear contribution is removed, namely:

o---------------------------------------o
|                                       |
|                   o                   |
|                  /%\                  |
|                 /%%%\                 |
|                /%%%%%\                |
|               o%%%%%%%o               |
|              /%\%%%%%/%\              |
|             /%%%\%%%/%%%\             |
|            /%%%%%\%/%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          /%\%%%%%/%\%%%%%/%\          |
|         /%%%\%%%/%%%\%%%/%%%\         |
|        /%%%%%\%/%%%%%\%/%%%%%\        |
|       o%%%%%%%o%%%%%%%o%%%%%%%o       |
|      /%\%%%%%/%\%%%%%/%\%%%%%/%\      |
|     /%%%\%%%/%%%\%%%/%%%\%%%/%%%\     |
|    /%%%%%\%/%%%%%\%/%%%%%\%/%%%%%\    |
|   o%%%%%%%o%%%%%%%o%%%%%%%o%%%%%%%o   |
|   |\%%%%%/%\%%%%%/ \%%%%%/%\%%%%%/|   |
|   | \%%%/%%%\%%%/   \%%%/%%%\%%%/ |   |
|   |  \%/%%%%%\%/     \%/%%%%%\%/  |   |
|   |   o%%%%%%%o       o%%%%%%%o   |   |
|   |   |\%%%%%/ \     / \%%%%%/|   |   |
|   |   | \%%%/   \   /   \%%%/ |   |   |
|   | u |  \%/     \ /     \%/  | v |   |
|   o---+---o       o       o---+---o   |
|       |    \     / \     /    |       |
|       |     \   /   \   /     |       |
|       | du   \ /     \ /   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.1.  f = ((u)(v))
o---------------------------------------o
|                                       |
|                   o                   |
|                  /%\                  |
|                 /%%%\                 |
|                /%%%%%\                |
|               o%%%%%%%o               |
|              /%\%%%%%/%\              |
|             /%%%\%%%/%%%\             |
|            /%%%%%\%/%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          /%\%%%%%/ \%%%%%/%\          |
|         /%%%\%%%/   \%%%/%%%\         |
|        /%%%%%\%/     \%/%%%%%\        |
|       o%%%%%%%o       o%%%%%%%o       |
|      /%\%%%%%/%\     /%\%%%%%/%\      |
|     /%%%\%%%/%%%\   /%%%\%%%/%%%\     |
|    /%%%%%\%/%%%%%\ /%%%%%\%/%%%%%\    |
|   o%%%%%%%o%%%%%%%o%%%%%%%o%%%%%%%o   |
|   |\%%%%%/ \%%%%%/%\%%%%%/ \%%%%%/|   |
|   | \%%%/   \%%%/%%%\%%%/   \%%%/ |   |
|   |  \%/     \%/%%%%%\%/     \%/  |   |
|   |   o       o%%%%%%%o       o   |   |
|   |   |\     /%\%%%%%/%\     /|   |   |
|   |   | \   /%%%\%%%/%%%\   / |   |   |
|   | u |  \ /%%%%%\%/%%%%%\ /  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.2.  Ef = ((u + du)(v + dv))
o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              / \     / \              |
|             /   \   /   \             |
|            /     \ /     \            |
|           o       o       o           |
|          / \     /%\     / \          |
|         /   \   /%%%\   /   \         |
|        /     \ /%%%%%\ /     \        |
|       o       o%%%%%%%o       o       |
|      / \     / \%%%%%/ \     / \      |
|     /   \   /   \%%%/   \   /   \     |
|    /     \ /     \%/     \ /     \    |
|   o       o       o       o       o   |
|   |\     /%\     /%\     /%\     /|   |
|   | \   /%%%\   /%%%\   /%%%\   / |   |
|   |  \ /%%%%%\ /%%%%%\ /%%%%%\ /  |   |
|   |   o%%%%%%%o%%%%%%%o%%%%%%%o   |   |
|   |   |\%%%%%/%\%%%%%/%\%%%%%/|   |   |
|   |   | \%%%/%%%\%%%/%%%\%%%/ |   |   |
|   | u |  \%/%%%%%\%/%%%%%\%/  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.3.  Difference Map Df = f + Ef
o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              / \     / \              |
|             /   \   /   \             |
|            /     \ /     \            |
|           o       o       o           |
|          / \     / \     / \          |
|         /   \   /   \   /   \         |
|        /     \ /     \ /     \        |
|       o       o       o       o       |
|      / \     /%\     /%\     / \      |
|     /   \   /%%%\   /%%%\   /   \     |
|    /     \ /%%%%%\ /%%%%%\ /     \    |
|   o       o%%%%%%%o%%%%%%%o       o   |
|   |\     /%\%%%%%/ \%%%%%/%\     /|   |
|   | \   /%%%\%%%/   \%%%/%%%\   / |   |
|   |  \ /%%%%%\%/     \%/%%%%%\ /  |   |
|   |   o%%%%%%%o       o%%%%%%%o   |   |
|   |   |\%%%%%/%\     /%\%%%%%/|   |   |
|   |   | \%%%/%%%\   /%%%\%%%/ |   |   |
|   | u |  \%/%%%%%\ /%%%%%\%/  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.4.  Linear Proxy df for Df
o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              / \     / \              |
|             /   \   /   \             |
|            /     \ /     \            |
|           o       o       o           |
|          / \     /%\     / \          |
|         /   \   /%%%\   /   \         |
|        /     \ /%%%%%\ /     \        |
|       o       o%%%%%%%o       o       |
|      / \     /%\%%%%%/%\     / \      |
|     /   \   /%%%\%%%/%%%\   /   \     |
|    /     \ /%%%%%\%/%%%%%\ /     \    |
|   o       o%%%%%%%o%%%%%%%o       o   |
|   |\     / \%%%%%/%\%%%%%/ \     /|   |
|   | \   /   \%%%/%%%\%%%/   \   / |   |
|   |  \ /     \%/%%%%%\%/     \ /  |   |
|   |   o       o%%%%%%%o       o   |   |
|   |   |\     / \%%%%%/ \     /|   |   |
|   |   | \   /   \%%%/   \   / |   |   |
|   | u |  \ /     \%/     \ /  | v |   |
|   o---+---o       o       o---+---o   |
|       |    \     / \     /    |       |
|       |     \   /   \   /     |       |
|       | du   \ /     \ /   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 1.5.  Remainder rf = Df + df

Computation Summary for Logical Equality

Figure 2.1 shows the expansion of over to produce the expression:

Figure 2.2 shows the expansion of over to produce the expression:

In general, tells you what you would have to do, from wherever you are in the universe if you want to end up in a place where is true. In this case, where the prevailing proposition is the component of tells you this: If and are both true where you are, then change either both or neither of and at the same time, and you will attain a place where is true.

Figure 2.3 shows the expansion of over to produce the expression:

In general, tells you what you would have to do, from wherever you are in the universe if you want to bring about a change in the value of that is, if you want to get to a place where the value of is different from what it is where you are. In the present case, where the ruling proposition is the term of tells you this: If and are both true where you are, then you would have to change one or the other but not both and in order to reach a place where the value of is different from what it is where you are.

Figure 2.4 approximates by the linear form that expands over as follows:

Figure 2.5 shows what remains of the difference map when the first order linear contribution is removed, namely:

o---------------------------------------o
|                                       |
|                   o                   |
|                  /%\                  |
|                 /%%%\                 |
|                /%%%%%\                |
|               o%%%%%%%o               |
|              /%\%%%%%/%\              |
|             /%%%\%%%/%%%\             |
|            /%%%%%\%/%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          / \%%%%%/%\%%%%%/ \          |
|         /   \%%%/%%%\%%%/   \         |
|        /     \%/%%%%%\%/     \        |
|       o       o%%%%%%%o       o       |
|      / \     / \%%%%%/ \     / \      |
|     /   \   /   \%%%/   \   /   \     |
|    /     \ /     \%/     \ /     \    |
|   o       o       o       o       o   |
|   |\     / \     /%\     / \     /|   |
|   | \   /   \   /%%%\   /   \   / |   |
|   |  \ /     \ /%%%%%\ /     \ /  |   |
|   |   o       o%%%%%%%o       o   |   |
|   |   |\     /%\%%%%%/%\     /|   |   |
|   |   | \   /%%%\%%%/%%%\   / |   |   |
|   | u |  \ /%%%%%\%/%%%%%\ /  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/%\%%%%%/    |       |
|       |     \%%%/%%%\%%%/     |       |
|       | du   \%/%%%%%\%/   dv |       |
|       o-------o%%%%%%%o-------o       |
|                \%%%%%/                |
|                 \%%%/                 |
|                  \%/                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.1.  g = ((u, v))
o---------------------------------------o
|                                       |
|                   o                   |
|                  /%\                  |
|                 /%%%\                 |
|                /%%%%%\                |
|               o%%%%%%%o               |
|              / \%%%%%/ \              |
|             /   \%%%/   \             |
|            /     \%/     \            |
|           o       o       o           |
|          /%\     /%\     /%\          |
|         /%%%\   /%%%\   /%%%\         |
|        /%%%%%\ /%%%%%\ /%%%%%\        |
|       o%%%%%%%o%%%%%%%o%%%%%%%o       |
|      / \%%%%%/ \%%%%%/ \%%%%%/ \      |
|     /   \%%%/   \%%%/   \%%%/   \     |
|    /     \%/     \%/     \%/     \    |
|   o       o       o       o       o   |
|   |\     /%\     /%\     /%\     /|   |
|   | \   /%%%\   /%%%\   /%%%\   / |   |
|   |  \ /%%%%%\ /%%%%%\ /%%%%%\ /  |   |
|   |   o%%%%%%%o%%%%%%%o%%%%%%%o   |   |
|   |   |\%%%%%/ \%%%%%/ \%%%%%/|   |   |
|   |   | \%%%/   \%%%/   \%%%/ |   |   |
|   | u |  \%/     \%/     \%/  | v |   |
|   o---+---o       o       o---+---o   |
|       |    \     /%\     /    |       |
|       |     \   /%%%\   /     |       |
|       | du   \ /%%%%%\ /   dv |       |
|       o-------o%%%%%%%o-------o       |
|                \%%%%%/                |
|                 \%%%/                 |
|                  \%/                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.2.  Eg = ((u + du, v + dv))
o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              /%\     /%\              |
|             /%%%\   /%%%\             |
|            /%%%%%\ /%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          /%\%%%%%/ \%%%%%/%\          |
|         /%%%\%%%/   \%%%/%%%\         |
|        /%%%%%\%/     \%/%%%%%\        |
|       o%%%%%%%o       o%%%%%%%o       |
|      / \%%%%%/ \     / \%%%%%/ \      |
|     /   \%%%/   \   /   \%%%/   \     |
|    /     \%/     \ /     \%/     \    |
|   o       o       o       o       o   |
|   |\     /%\     / \     /%\     /|   |
|   | \   /%%%\   /   \   /%%%\   / |   |
|   |  \ /%%%%%\ /     \ /%%%%%\ /  |   |
|   |   o%%%%%%%o       o%%%%%%%o   |   |
|   |   |\%%%%%/%\     /%\%%%%%/|   |   |
|   |   | \%%%/%%%\   /%%%\%%%/ |   |   |
|   | u |  \%/%%%%%\ /%%%%%\%/  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.3.  Difference Map Dg = g + Eg
o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              /%\     /%\              |
|             /%%%\   /%%%\             |
|            /%%%%%\ /%%%%%\            |
|           o%%%%%%%o%%%%%%%o           |
|          /%\%%%%%/ \%%%%%/%\          |
|         /%%%\%%%/   \%%%/%%%\         |
|        /%%%%%\%/     \%/%%%%%\        |
|       o%%%%%%%o       o%%%%%%%o       |
|      / \%%%%%/ \     / \%%%%%/ \      |
|     /   \%%%/   \   /   \%%%/   \     |
|    /     \%/     \ /     \%/     \    |
|   o       o       o       o       o   |
|   |\     /%\     / \     /%\     /|   |
|   | \   /%%%\   /   \   /%%%\   / |   |
|   |  \ /%%%%%\ /     \ /%%%%%\ /  |   |
|   |   o%%%%%%%o       o%%%%%%%o   |   |
|   |   |\%%%%%/%\     /%\%%%%%/|   |   |
|   |   | \%%%/%%%\   /%%%\%%%/ |   |   |
|   | u |  \%/%%%%%\ /%%%%%\%/  | v |   |
|   o---+---o%%%%%%%o%%%%%%%o---+---o   |
|       |    \%%%%%/ \%%%%%/    |       |
|       |     \%%%/   \%%%/     |       |
|       | du   \%/     \%/   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.4.  Linear Proxy dg for Dg
o---------------------------------------o
|                                       |
|                   o                   |
|                  / \                  |
|                 /   \                 |
|                /     \                |
|               o       o               |
|              / \     / \              |
|             /   \   /   \             |
|            /     \ /     \            |
|           o       o       o           |
|          / \     / \     / \          |
|         /   \   /   \   /   \         |
|        /     \ /     \ /     \        |
|       o       o       o       o       |
|      / \     / \     / \     / \      |
|     /   \   /   \   /   \   /   \     |
|    /     \ /     \ /     \ /     \    |
|   o       o       o       o       o   |
|   |\     / \     / \     / \     /|   |
|   | \   /   \   /   \   /   \   / |   |
|   |  \ /     \ /     \ /     \ /  |   |
|   |   o       o       o       o   |   |
|   |   |\     / \     / \     /|   |   |
|   |   | \   /   \   /   \   / |   |   |
|   | u |  \ /     \ /     \ /  | v |   |
|   o---+---o       o       o---+---o   |
|       |    \     / \     /    |       |
|       |     \   /   \   /     |       |
|       | du   \ /     \ /   dv |       |
|       o-------o       o-------o       |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   o                   |
|                                       |
o---------------------------------------o
Figure 2.5.  Remainder rg = Dg + dg


| Have I carved enough, my lord --
| Child, you are a bone.
|
| Leonard Cohen, "Teachers" (1967)

Visualization

In my work on Differential Logic and Dynamic Systems, I found it useful to develop several different ways of visualizing logical transformations, indeed, I devised four distinct styles of picture for the job. Thus far in our work on the mapping we've been making use of what I call the areal view of the extended universe of discourse, but as the number of dimensions climbs beyond four, it's time to bid this genre adieu and look for a style that can scale a little better. At any rate, before we proceed any further, let's first assemble the information that we have gathered about from several different angles, and see if it can be fitted into a coherent picture of the transformation

In our first crack at the transformation we simply plotted the state transitions and applied the utterly stock technique of calculating the finite differences.

A quick inspection of the first Table suggests a rule to cover the case when namely, To put it another way, the Table characterizes Orbit 1 by means of the data: Another way to convey the same information is by means of the extended proposition:

A more fine combing of the second Table brings to mind a rule that partly covers the remaining cases, that is, This much information about Orbit 2 is also encapsulated by the extended proposition which says that and are not both true at the same time, while is equal in value to and is opposite in value to

Turing Machine Example

See Theme One Program for documentation of the cactus graph syntax and the propositional modeling program used below.

By way of providing a simple illustration of Cook's Theorem, namely, that “Propositional Satisfiability is NP-Complete”, I will describe one way to translate finite approximations of turing machines into propositional expressions, using the cactus language syntax for propositional calculus that I will describe in more detail as we proceed.

Two kinds of finite approximation will be used:

  1. A space and time limited turing machine, with units of space and units of time.
  2. A space and time limited turing machine for computing the parity of a bit string, with the number of tape cells of input equal to

I will follow the pattern of discussion in Herbert Wilf (1986), Algorithms and Complexity, pp. 188–201, but translate his logical formalism into cactus language, which is more efficient in regard to the number of propositional clauses that are required.

A turing machine for computing the parity of a bit string is described by means of the following Figure and Table.

[Figure: parity turing machine (image not reproduced here)]


Table 4.  Parity Machine
o-------o--------o-------------o---------o------------o
| State | Symbol | Next Symbol | Ratchet | Next State |
|   Q   |   S    |     S'      |   dR    |     Q'     |
o-------o--------o-------------o---------o------------o
|   0   |   0    |     0       |   +1    |     0      |
|   0   |   1    |     1       |   +1    |     1      |
|   0   |   #    |     #       |   -1    |     #      |
|   1   |   0    |     0       |   +1    |     1      |
|   1   |   1    |     1       |   +1    |     0      |
|   1   |   #    |     #       |   -1    |     *      |
o-------o--------o-------------o---------o------------o
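Table 4 is complete enough to run. Here is a minimal simulation of the parity machine, a sketch of my own, in which the halting markers # and * are read, as an assumption, as the two possible answers, and the head is assumed to start on the first input cell:

# Transition table from Table 4: (state, symbol) -> (write, move, next state).
RULES = {
    ('0', '0'): ('0', +1, '0'),
    ('0', '1'): ('1', +1, '1'),
    ('0', '#'): ('#', -1, '#'),
    ('1', '0'): ('0', +1, '1'),
    ('1', '1'): ('1', +1, '0'),
    ('1', '#'): ('#', -1, '*'),
}

def run(bits):
    """Run the parity machine on a bit string laid out between '#' end markers."""
    tape = ['#'] + list(bits) + ['#']
    state, head = '0', 1                  # start in state 0, scanning the first input cell
    while state in ('0', '1'):
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return state                          # the halting marker reached: '#' or '*'

print(run("0"))      # even parity
print(run("1"))      # odd parity
print(run("1101"))   # parity of 1 + 1 + 0 + 1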


The TM has a finite automaton (FA) as one component. Let us refer to this particular FA by the name of

The tape head (that is, the read unit) will be called The registers are also called tape cells or tape squares.

Finite Approximations

To see how each finite approximation to a given turing machine can be given a purely propositional description, one fixes the parameter and limits the rest of the discussion to describing which is not really a full-fledged TM anymore but just a finite automaton in disguise.

In this example, for the sake of a minimal illustration, we choose and discuss Since the zeroth tape cell and the last tape cell are both occupied by the character that is used for both the beginning of file and the end of file markers, this allows for only one digit of significant computation.

To translate into propositional form we use the following collection of basic propositions, boolean variables, or logical features, depending on what one prefers to call them:

The basic propositions for describing the present state function are these:

The proposition of the form says:

At the point-in-time the finite state machine is in the state

The basic propositions for describing the present register function are these:

The proposition of the form says:

At the point-in-time the tape head is on the tape cell

The basic propositions for describing the present symbol function are these:

The proposition of the form says:

At the point-in-time the tape cell bears the mark
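
By way of a rough sketch, in the same Python vein as above, here are the three families of variables for a finite approximation with three points in time and three tape cells, the counts that match the three State Partition clauses, three Register Partition clauses, and nine Symbol Partition clauses described below. The naming pattern p<i>_r<j> and p<i>_r<j>_s<k> follows the labels that appear in the cactus graphs further below; the pattern p<i>_q<j> for the state propositions, and the particular state set {0, 1, #, *}, are my own assumed extensions of the same scheme.

# A sketch of the three families of basic propositions, using assumed
# variable-name conventions modeled on the cactus graph labels below.
TIMES   = [0, 1, 2]             # points in time
CELLS   = [0, 1, 2]             # tape registers
STATES  = ['0', '1', '#', '*']  # machine states, as read off Table 4
SYMBOLS = ['0', '1', '#']       # marks a cell may bear

state_vars  = [f"p{i}_q{q}"      for i in TIMES for q in STATES]
head_vars   = [f"p{i}_r{j}"      for i in TIMES for j in CELLS]
symbol_vars = [f"p{i}_r{j}_s{k}" for i in TIMES for j in CELLS for k in SYMBOLS]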

Initial Conditions

Given but a single free square on the tape, there are just two different sets of initial conditions for the finite approximation to the parity turing machine that we are presently considering.

Initial Conditions for Tape Input "0"

The following conjunction of 5 basic propositions describes the initial conditions when is started with an input of "0" in its free square:

This conjunction of basic propositions may be read as follows:

At time machine is in the state

At time scanner is reading cell

At time cell contains the symbol

At time cell contains the symbol

At time cell contains the symbol

Initial Conditions for Tape Input "1"

The following conjunction of 5 basic propositions describes the initial conditions when is started with an input of "1" in its free square:

This conjunction of basic propositions may be read as follows:

At time machine is in the state

At time scanner is reading cell

At time cell contains the symbol

At time cell contains the symbol

At time cell contains the symbol
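
Since the explicit conjunctions are not reproduced above, here is a hedged reconstruction of what the five basic propositions for the input "0" case would look like under the variable conventions sketched earlier. The specific values (start state 0, head on the middle cell, boundary cells bearing '#') are assumptions pieced together from the surrounding description, not quotations from the source.

# Hedged reconstruction: the initial conditions for tape input "0" as a
# list of five basic propositions, whose conjunction forms the clause.
INITIAL_INPUT_0 = [
    "p0_q0",     # at time 0 the machine is in state 0 (assumed start state)
    "p0_r1",     # at time 0 the head is on cell 1 (the single free square)
    "p0_r0_s#",  # at time 0 cell 0 bears the boundary mark '#'
    "p0_r1_s0",  # at time 0 cell 1 bears the input symbol '0'
    "p0_r2_s#",  # at time 0 cell 2 bears the boundary mark '#'
]
# The input "1" case would differ only in the fourth item, "p0_r1_s1".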

Propositional Program

A complete description of in propositional form is obtained by conjoining one of the above choices for initial conditions with all of the following sets of propositions, which serve in effect as a simple type of declarative program, telling us all that we need to know about the anatomy and behavior of the truncated TM in question.

Mediate Conditions

Terminal Conditions

State Partition

Register Partition

Symbol Partition

Interaction Conditions

Transition Relations

Interpretation of the Propositional Program

Let us now run through the propositional specification of our truncated TM, and paraphrase what it says in ordinary language.

Mediate Conditions

In the interpretation of the cactus language for propositional logic that we are using here, an expression of the form expresses a conditional, an implication, or an if-then proposition, commonly read in one of the following ways:

A text string expression of the form corresponds to a graph-theoretic data-structure of the following form:


o---------------------------------------o
|                                       |
|                 p   q                 |
|                 o---o                 |
|                 |                     |
|                 @                     |
|                                       |
o---------------------------------------o
|               ( p ( q ))              |
o---------------------------------------o
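
Stated in ordinary notation, under the usual existential interpretation in which juxtaposition is conjunction and a single enclosure is negation, the identity being used here is the following (my restatement, not a formula from the source):

$( p ( q )) \;=\; \lnot ( p \land \lnot q ) \;=\; p \Rightarrow q$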


Taken together, the Mediate Conditions state the following:

If at is in state then at is in state and

If at is in state then at is in state and

If at is in state then at is in state and

If at is in state then at is in state

Terminal Conditions

In cactus syntax, an expression of the form expresses the disjunction The corresponding cactus graph, here just a tree, has the following shape:


o---------------------------------------o
|                                       |
|                 p   q                 |
|                 o   o                 |
|                  \ /                  |
|                   o                   |
|                   |                   |
|                   @                   |
|                                       |
o---------------------------------------o
|               ((p) (q))               |
o---------------------------------------o
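
In ordinary notation, the corresponding identity is, again as a restatement under the same existential interpretation:

$(( p )( q )) \;=\; \lnot ( \lnot p \land \lnot q ) \;=\; p \lor q$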


In effect, the Terminal Conditions state the following:

At time machine is in state or

At time machine is in state

State Partition

In cactus syntax, an expression of the form expresses a statement to the effect that exactly one of the expressions is true, for Expressions of this form are called universal partition expressions, and the corresponding painted and rooted cactus (PARC) has the following shape:


o---------------------------------------o
|                                       |
|         e_1   e_2   ...   e_k         |
|          o     o           o          |
|          |     |           |          |
|          o-----o--- ... ---o          |
|           \               /           |
|            \             /            |
|             \           /             |
|              \         /              |
|               \       /               |
|                \     /                |
|                 \   /                 |
|                  \ /                  |
|                   @                   |
|                                       |
o---------------------------------------o
|       ((e_1),(e_2),(...),(e_k))       |
o---------------------------------------o
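
Since partition expressions recur throughout the rest of the program, here is a small helper in the same illustrative Python vein as before, building the text of such an expression from a list of variable names. The state set shown in the example is the assumed one from the earlier sketches.

# Build the cactus text ((e_1),(e_2),...,(e_k)) asserting that exactly
# one of the given variables is true.
def partition_expression(variables):
    return "(" + ",".join(f"({v})" for v in variables) + ")"

# For example, the State Partition clause for time 0 comes out as:
#   ((p0_q0),(p0_q1),(p0_q#),(p0_q*))
print(partition_expression([f"p0_q{q}" for q in ['0', '1', '#', '*']]))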


The State Partition segment of the propositional program consists of three universal partition expressions, taken in conjunction, expressing the condition that the machine has to be in one and only one of its states at each point in time under consideration. In short, we have the constraint:

At each of the points in time for in the set

can be in exactly one state for in the set

Register Partition

The Register Partition segment of the propositional program consists of three universal partition expressions, taken in conjunction, saying that the read head must be reading one and only one of the registers or tape cells available to it at each of the points in time under consideration. In sum:

At each of the points in time for

is reading exactly one cell for

Symbol Partition

The Symbol Partition segment of the propositional program for consists of nine universal partition expressions, taken in conjunction, stipulating that there has to be one and only one symbol in each of the registers at each point in time under consideration. In short, we have:

At each of the points in time for in

in each of the tape registers for in

there can be exactly one sign for in
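
As a check on the count, the nine clauses can be generated with the partition_expression helper sketched above, one clause per (time, cell) pair, each ranging over the three symbols (illustrative code, reusing the assumed naming conventions):

# The nine Symbol Partition clauses: one per (time, cell) pair.
symbol_partition = [
    partition_expression([f"p{i}_r{j}_s{k}" for k in ['0', '1', '#']])
    for i in [0, 1, 2] for j in [0, 1, 2]
]
assert len(symbol_partition) == 9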

Interaction Conditions

In briefest terms, the Interaction Conditions simply express the circumstance that the mark on a tape cell cannot change between two points in time unless the tape head is over the cell in question at the initial one of those points in time. All that we have to do is to see how they manage to say this.

Consider a cactus expression of the following form:

This expression has the corresponding cactus graph:

o---------------------------------------o
|                                       |
|         p<i>_r<j>   p<i+1>_r<j>_s<k>  |
|                 o   o                 |
|                  \ /                  |
|    p<i>_r<j>_s<k> o                   |
|                   |                   |
|                   @                   |
|                                       |
o---------------------------------------o

A propositional expression of this form can be read as follows:

At the time the tape cell bears the mark
it is not the case that:
At the time the tape head is on the tape cell
At the time the tape cell bears the mark

The eighteen clauses of the Interaction Conditions simply impose one such constraint on symbol changes for each combination of the times, registers, and symbols involved.
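
Matching the form of the cactus graph above, one way to spell out all eighteen clauses is sketched below. Each clause says that if a cell bears a mark at one time, then either the head is on that cell at that time or the cell still bears the same mark at the next time, which is just another way of phrasing the constraint stated at the start of this subsection. The variable names are the assumed ones used in the earlier sketches.

# Generate the eighteen Interaction Condition clauses: one for each
# combination of a time i in {0, 1}, a cell j, and a symbol k.
def interaction_conditions(times=(0, 1), cells=(0, 1, 2), symbols=('0', '1', '#')):
    return [
        f"( p{i}_r{j}_s{k} ( p{i}_r{j} )( p{i+1}_r{j}_s{k} ))"
        for i in times for j in cells for k in symbols
    ]

assert len(interaction_conditions()) == 18  # matches the count in the text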

Transition Relations

The Transition Relation segment of the propositional program for consists of sixteen implication statements with complex antecedents and consequents. Taken together, these give propositional expression to the TM Figure and Table that were given at the outset.
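
Although the sixteen clauses themselves are not reproduced here, their general shape can be sketched from the Table: each row, instantiated at a given time and head position, yields an implication whose antecedent conjoins the current state, head position, and scanned symbol, and whose consequent conjoins the next state, the new head position, and the symbol written on the old cell. The following illustrative code, using the assumed conventions from the earlier sketches, turns one such instantiation into a cactus clause.

# Build one Transition Relation clause for time i, cell j, state q, and
# scanned symbol s, looking up (next symbol, head move, next state) in a
# transition table like the PARITY_TABLE dictionary sketched earlier.
def transition_clause(i, j, q, s, table):
    s_next, move, q_next = table[(q, s)]
    antecedent = f"p{i}_q{q} p{i}_r{j} p{i}_r{j}_s{s}"
    consequent = f"p{i+1}_q{q_next} p{i+1}_r{j + move} p{i+1}_r{j}_s{s_next}"
    return f"( {antecedent} ( {consequent} ))"

# Example, using the first row of Table 4: (state 0, symbol 0) -> (0, +1, 0).
ROW = {('0', '0'): ('0', +1, '0')}
print(transition_clause(0, 1, '0', '0', ROW))
# -> ( p0_q0 p0_r1 p0_r1_s0 ( p1_q0 p1_r2 p1_r1_s0 ))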

Just by way of a single example, consider the clause: