
# Differential Logic and Dynamic Systems • Part 2

Author: Jon Awbrey

## Back to the Beginning : Exemplary Universes

 I would have preferred to be enveloped in words, borne way beyond all possible beginnings. — Michel Foucault, The Discourse on Language, [Fou, 215]

To anchor our understanding of differential logic, let us look at how the various concepts apply in the simplest possible concrete cases, where the initial dimension is only 1 or 2. In spite of the obvious simplicity of these cases, it is possible to observe how central difficulties of the subject begin to arise already at this stage.

### A One-Dimensional Universe

 There was never any more inception than there is now, Nor any more youth or age than there is now; And will never be any more perfection than there is now, Nor any more heaven or hell than there is now. — Walt Whitman, Leaves of Grass, [Whi, 28]

Let ${\displaystyle {\mathcal {X}}=\{x_{1}\}=\{A\}}$ be an alphabet representing one boolean variable or a single logical feature.  In this example the capital letter ${\displaystyle {\text{“}}A{\text{”}}}$ is used informally, to name a logical feature instead of the corresponding space.  At any rate, the basis element ${\displaystyle A=x_{1}}$ may be interpreted as a simple proposition or a coordinate projection ${\displaystyle A=x_{1}:\mathbb {B} {\xrightarrow {i}}\mathbb {B} .}$  The space ${\displaystyle X=\langle A\rangle =\{{\texttt {(}}A{\texttt {)}},A\}}$ of points (cells, vectors, interpretations) has cardinality ${\displaystyle 2^{n}=2^{1}=2}$ and is isomorphic to ${\displaystyle \mathbb {B} =\{0,1\}.}$  Moreover, ${\displaystyle X}$ may be identified with the set of singular propositions ${\displaystyle \{x:\mathbb {B} {\xrightarrow {s}}\mathbb {B} \}.}$  The space of linear propositions ${\displaystyle X^{*}=\{\mathrm {hom} :\mathbb {B} {\xrightarrow {\ell }}\mathbb {B} \}=\{0,A\}}$ is algebraically dual to ${\displaystyle X}$ and also has cardinality ${\displaystyle 2.}$  Here, ${\displaystyle {}^{\backprime \backprime }0{}^{\prime \prime }}$ is interpreted as denoting the constant function ${\displaystyle 0:\mathbb {B} \to \mathbb {B} ,}$ amounting to the linear proposition of rank ${\displaystyle 0,}$ while ${\displaystyle A}$ is the linear proposition of rank ${\displaystyle 1.}$  Last but not least we have the positive propositions ${\displaystyle \{\mathrm {pos} :\mathbb {B} {\xrightarrow {p}}\mathbb {B} \}=\{A,1\},}$ of rank ${\displaystyle 1}$ and ${\displaystyle 0,}$ respectively, where ${\displaystyle {}^{\backprime \backprime }1{}^{\prime \prime }}$ is understood as denoting the constant function ${\displaystyle 1:\mathbb {B} \to \mathbb {B} .}$  In sum, there are ${\displaystyle 2^{2^{n}}=2^{2^{1}}=4}$ propositions altogether in the universe of discourse, comprising the set ${\displaystyle X^{\uparrow }=\{f:X\to \mathbb {B} \}=\{0,{\texttt {(}}A{\texttt {)}},A,1\}\cong (\mathbb {B} \to \mathbb {B} ).}$
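As a quick computational check, the count ${\displaystyle 2^{2^{1}}=4}$ can be exhibited by listing the boolean functions of the single feature. The following minimal Python sketch (an illustration of ours, not part of the text) tabulates the four propositions by their truth values on the two points of ${\displaystyle X:}$

```python
# Enumerate all propositions f : X -> B over the one-feature universe X = <A>.
points = [0, 1]  # the two interpretations of the feature A

propositions = {
    "0":   lambda a: 0,      # constant false : linear proposition of rank 0
    "(A)": lambda a: 1 - a,  # negation of A : the other singular proposition
    "A":   lambda a: a,      # the coordinate projection itself : rank 1
    "1":   lambda a: 1,      # constant true : positive proposition of rank 0
}

for name, f in propositions.items():
    print(name, "->", [f(a) for a in points])
```

The four truth tables exhaust ${\displaystyle (\mathbb {B} \to \mathbb {B} ),}$ in agreement with the count ${\displaystyle 2^{2^{n}}}$ for ${\displaystyle n=1.}$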

The first order differential extension of ${\displaystyle {\mathcal {X}}}$ is ${\displaystyle \mathrm {E} {\mathcal {X}}=\{x_{1},\mathrm {d} x_{1}\}=\{A,\mathrm {d} A\}.}$  If the feature ${\displaystyle A}$ is understood as applying to some object or state, then the feature ${\displaystyle \mathrm {d} A}$ may be interpreted as an attribute of the same object or state that says that it is changing significantly with respect to the property ${\displaystyle A,}$ or that it has an escape velocity with respect to the state ${\displaystyle A.}$  In practice, differential features acquire their logical meaning through a class of temporal inference rules.

For example, relative to a frame of observation that is left implicit for now, one is permitted to make the following sorts of inference:  From the fact that ${\displaystyle A}$ and ${\displaystyle \mathrm {d} A}$ are true at a given moment one may infer that ${\displaystyle {\texttt {(}}A{\texttt {)}}}$ will be true in the next moment of observation.  Altogether in the present instance, there is the fourfold scheme of inference that is shown below:

 ${\displaystyle {\begin{matrix}{\text{From}}&{\texttt {(}}A{\texttt {)}}&{\text{and}}&{\texttt {(}}\mathrm {d} A{\texttt {)}}&{\text{infer}}&{\texttt {(}}A{\texttt {)}}&{\text{next.}}\\[8pt]{\text{From}}&{\texttt {(}}A{\texttt {)}}&{\text{and}}&\mathrm {d} A&{\text{infer}}&A&{\text{next.}}\\[8pt]{\text{From}}&A&{\text{and}}&{\texttt {(}}\mathrm {d} A{\texttt {)}}&{\text{infer}}&A&{\text{next.}}\\[8pt]{\text{From}}&A&{\text{and}}&\mathrm {d} A&{\text{infer}}&{\texttt {(}}A{\texttt {)}}&{\text{next.}}\end{matrix}}}$
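The fourfold scheme collapses into a single update rule: the next value of ${\displaystyle A}$ is the exclusive disjunction of the current ${\displaystyle A}$ and ${\displaystyle \mathrm {d} A.}$ A minimal Python sketch (the function name is our own) makes the point:

```python
# The four temporal inference rules amount to one update:
# the next value of A is the exclusive-or of A and dA.
def next_A(a: int, da: int) -> int:
    """Infer the next value of A from its present value and its momentum dA."""
    return a ^ da

# Reproduce the fourfold scheme of inference:
for a in (0, 1):
    for da in (0, 1):
        print(f"From A={a} and dA={da} infer A={next_A(a, da)} next.")
```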

It might be thought that an independent time variable needs to be brought in at this point, but it is an insight of fundamental importance that the idea of process is logically prior to the notion of time. A time variable is a reference to a clock — a canonical, conventional process that is accepted or established as a standard of measurement, but in essence no different than any other process. This raises the question of how different subsystems in a more global process can be brought into comparison, and what it means for one process to serve the function of a local standard for others. But these inquiries only wrap up puzzles in further riddles, and are obviously too involved to be handled at our current level of approximation.

 The clock indicates the moment . . . . but what does eternity indicate? — Walt Whitman, Leaves of Grass, [Whi, 79]

Observe that the secular inference rules, used by themselves, involve a loss of information, since nothing in them can tell us whether the momenta ${\displaystyle \{{\texttt {(}}\mathrm {d} A{\texttt {)}},\mathrm {d} A\}}$ are changed or unchanged in the next instance. In order to know this, one would have to determine ${\displaystyle \mathrm {d} ^{2}A,}$ and so on, pursuing an infinite regress. Ultimately, in order to rest with a finitely determinate system, it is necessary to make an infinite assumption, for example, that ${\displaystyle \mathrm {d} ^{k}A=0}$ for all ${\displaystyle k}$ greater than some fixed value ${\displaystyle M.}$ Another way to escape the regress is through the provision of a dynamic law, in typical form making higher order differentials dependent on lower degrees and estates.

### Example 1. A Square Rigging

 Urge and urge and urge, Always the procreant urge of the world. — Walt Whitman, Leaves of Grass, [Whi, 28]

By way of example, suppose that we are given the initial condition ${\displaystyle A=\mathrm {d} A}$ and the law ${\displaystyle \mathrm {d} ^{2}A={\texttt {(}}A{\texttt {)}}.}$ Since the equation ${\displaystyle A=\mathrm {d} A}$ is logically equivalent to the disjunction ${\displaystyle A~\mathrm {d} A~{\text{or}}~{\texttt {(}}A{\texttt {)(}}\mathrm {d} A{\texttt {)}},}$ we may infer two possible trajectories, as displayed in Table 11. In either case the state ${\displaystyle A~{\texttt {(}}\mathrm {d} A{\texttt {)(}}\mathrm {d} ^{2}A{\texttt {)}}}$ is a stable attractor or a terminal condition for both starting points.

${\displaystyle {\text{Table 11.}}~~{\text{A Couple of Trajectories}}}$

| ${\displaystyle {\text{Time}}}$ | ${\displaystyle {\text{Trajectory 1}}}$ | ${\displaystyle {\text{Trajectory 2}}}$ |
|---|---|---|
| ${\displaystyle 0}$ | ${\displaystyle A~~\mathrm {d} A~~{\texttt {(}}\mathrm {d} ^{2}A{\texttt {)}}}$ | ${\displaystyle {\texttt {(}}A{\texttt {)(}}\mathrm {d} A{\texttt {)}}~~\mathrm {d} ^{2}A}$ |
| ${\displaystyle 1}$ | ${\displaystyle {\texttt {(}}A{\texttt {)}}~~\mathrm {d} A~~\mathrm {d} ^{2}A}$ | ${\displaystyle {\texttt {(}}A{\texttt {)}}~~\mathrm {d} A~~\mathrm {d} ^{2}A}$ |
| ${\displaystyle 2}$ | ${\displaystyle A~~{\texttt {(}}\mathrm {d} A{\texttt {)(}}\mathrm {d} ^{2}A{\texttt {)}}}$ | ${\displaystyle A~~{\texttt {(}}\mathrm {d} A{\texttt {)(}}\mathrm {d} ^{2}A{\texttt {)}}}$ |
| ${\displaystyle 3}$ | ${\displaystyle A~~{\texttt {(}}\mathrm {d} A{\texttt {)(}}\mathrm {d} ^{2}A{\texttt {)}}}$ | ${\displaystyle A~~{\texttt {(}}\mathrm {d} A{\texttt {)(}}\mathrm {d} ^{2}A{\texttt {)}}}$ |
| ${\displaystyle 4}$ | ″ | ″ |
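The two trajectories can be replayed mechanically. The sketch below is our own encoding, with states as bit triples ${\displaystyle (A,\mathrm {d} A,\mathrm {d} ^{2}A);}$ it applies the secular update to ${\displaystyle A,}$ applies the same temporal rule one order up to get the next ${\displaystyle \mathrm {d} A,}$ and then re-imposes the law ${\displaystyle \mathrm {d} ^{2}A={\texttt {(}}A{\texttt {)}}:}$

```python
# Simulate Example 1: the law d2A = (A), i.e. d2A is the negation of A.
def step(state):
    a, da, d2a = state
    a_next = a ^ da          # secular inference: next A = A xor dA
    da_next = da ^ d2a       # same rule one order up: next dA = dA xor d2A
    d2a_next = 1 - a_next    # re-impose the dynamic law d2A = (A)
    return (a_next, da_next, d2a_next)

def trajectory(state, steps=4):
    path = [state]
    for _ in range(steps):
        path.append(step(path[-1]))
    return path

# The initial condition A = dA allows exactly two starting points:
t1 = trajectory((1, 1, 0))  # A dA (d2A)
t2 = trajectory((0, 0, 1))  # (A)(dA) d2A
```

Both runs converge on the fixed point ${\displaystyle A~{\texttt {(}}\mathrm {d} A{\texttt {)(}}\mathrm {d} ^{2}A{\texttt {)}},}$ the stable attractor noted above.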

Because the initial space ${\displaystyle X=\langle A\rangle }$ is one-dimensional, we can easily fit the second order extension ${\displaystyle \mathrm {E} ^{2}X=\langle A,\mathrm {d} A,\mathrm {d} ^{2}A\rangle }$ within the compass of a single venn diagram, charting the couple of converging trajectories as shown in Figure 12.

 ${\displaystyle {\text{Figure 12.}}~~{\text{The Anchor}}}$

If we eliminate from view the regions of ${\displaystyle \mathrm {E} ^{2}X}$ that are ruled out by the dynamic law ${\displaystyle \mathrm {d} ^{2}A={\texttt {(}}A{\texttt {)}},}$ then what remains is the quotient structure that is shown in Figure 13. This picture makes it easy to see that the dynamically allowable portion of the universe is partitioned between the properties ${\displaystyle A}$ and ${\displaystyle \mathrm {d} ^{2}A.}$ As it happens, this fact might have been expressed “right off the bat” by an equivalent formulation of the differential law, one that uses the exclusive disjunction to state the law as ${\displaystyle {\texttt {(}}A{\texttt {,}}\mathrm {d} ^{2}A{\texttt {)}}.}$
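The equivalence of the two formulations of the law is a four-case truth-table check. In Python (our own encoding of the cactus forms):

```python
# Verify that the law d2A = (A) is the same constraint as the
# exclusive disjunction (A, d2A): exactly one of A, d2A holds.
for a in (0, 1):
    for d2a in (0, 1):
        law = (d2a == 1 - a)        # d2A equals the negation of A
        xor = ((a ^ d2a) == 1)      # exactly one of A, d2A is true
        assert law == xor
print("d2A = (A) is equivalent to (A, d2A) on all four cases.")
```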

 ${\displaystyle {\text{Figure 13.}}~~{\text{The Tiller}}}$

What we have achieved in this example is to give a differential description of a simple dynamic process. In effect, we did this by embedding a directed graph, which can be taken to represent the state transitions of a finite automaton, in a dynamically allotted quotient structure that is created from a boolean lattice or an ${\displaystyle n}$-cube by nullifying all of the regions that the dynamics outlaws. With growth in the dimensions of our contemplated universes, it becomes essential, both for human comprehension and for computer implementation, that the dynamic structures of interest to us be represented not actually, by acquaintance, but virtually, by description. In our present study, we are using the language of propositional calculus to express the relevant descriptions, and to comprehend the structure that is implicit in the subsets of an ${\displaystyle n}$-cube without necessarily being forced to actualize all of its points.

One of the reasons for engaging in this kind of extremely reduced, but explicitly controlled case study is to throw light on the general study of languages, formal and natural, in their full array of syntactic, semantic, and pragmatic aspects. Propositional calculus is one of the last points of departure where we can view these three aspects interacting in a non-trivial way without being immediately and totally overwhelmed by the complexity they generate. Often this complexity causes investigators of formal and natural languages to adopt the strategy of focusing on a single aspect and to abandon all hope of understanding the whole, whether it's the still living natural language or the dynamics of inquiry that lies crystallized in formal logic.

From the perspective that I find most useful here, a language is a syntactic system that is designed or evolved in part to express a set of descriptions. When the explicit symbols of a language have extensions in its object world that are actually infinite, or when the implicit categories and generative devices of a linguistic theory have extensions in its subject matter that are potentially infinite, then the finite characters of terms, statements, arguments, grammars, logics, and rhetorics force an excess of intension to reside in all these symbols and functions, across the spectrum from the object language to the metalinguistic uses. In the aphorism from W. von Humboldt that Chomsky often cites, for example, in [Cho86, 30] and [Cho93, 49], language requires “the infinite use of finite means”. This is necessarily true when the extensions are infinite, when the referential symbols and grammatical categories of a language possess infinite sets of models and instances. But it also voices a practical truth when the extensions, though finite at every stage, tend to grow at exponential rates.

This consequence of dealing with extensions that are “practically infinite” becomes crucial when one tries to build neural network systems that learn, since the learning competence of any intelligent system is limited to the objects and domains that it is able to represent. If we want to design systems that operate intelligently with the full deck of propositions dealt by intact universes of discourse, then we must supply them with succinct representations and efficient transformations in this domain. Furthermore, in the project of constructing inquiry driven systems, we find ourselves forced to contemplate the level of generality that is embodied in propositions, because the dynamic evolution of these systems is driven by the measurable discrepancies that occur among their expectations, intentions, and observations, and because each of these subsystems or components of knowledge constitutes a propositional modality that can take on the fully generic character of an empirical summary or an axiomatic theory.

A compression scheme by any other name is a symbolic representation, and this is what the differential extension of propositional calculus, through all of its many universes of discourse, is intended to supply. Why is this particular program of mental calisthenics worth carrying out in general? By providing a uniform logical medium for describing dynamic systems we can make the task of understanding complex systems much easier, both in looking for invariant representations of individual cases and in finding points of comparison among diverse structures that would otherwise appear as isolated systems. All of this goes to facilitate the search for compact knowledge and to adapt what is learned from individual cases to the general realm.

### Back to the Feature

 I guess it must be the flag of my disposition, out of hopeful green stuff woven. — Walt Whitman, Leaves of Grass, [Whi, 31]

Let us assume that the sense intended for differential features is well enough established in the intuition, for now, that we may continue with outlining the structure of the differential extension ${\displaystyle [\mathrm {E} {\mathcal {X}}]=[A,\mathrm {d} A].}$ Over the extended alphabet ${\displaystyle \mathrm {E} {\mathcal {X}}=\{x_{1},\mathrm {d} x_{1}\}=\{A,\mathrm {d} A\}}$ of cardinality ${\displaystyle 2^{n}=2}$ we generate the set of points ${\displaystyle \mathrm {E} X}$ of cardinality ${\displaystyle 2^{2n}=4}$ that bears the following chain of equivalent descriptions:

 ${\displaystyle {\begin{array}{lll}\mathrm {E} X&=&\langle A,\mathrm {d} A\rangle \\[4pt]&=&\{{\texttt {(}}A{\texttt {)}},A\}~\times ~\{{\texttt {(}}\mathrm {d} A{\texttt {)}},\mathrm {d} A\}\\[4pt]&=&\{{\texttt {(}}A{\texttt {)(}}\mathrm {d} A{\texttt {)}},~{\texttt {(}}A{\texttt {)}}\mathrm {d} A,~A{\texttt {(}}\mathrm {d} A{\texttt {)}},~A~\mathrm {d} A\}.\end{array}}}$
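The same chain of descriptions can be generated directly as a cartesian product. A small Python sketch (the labels are our own transcription of the cactus forms):

```python
from itertools import product

# E X = <A, dA> arises as the product of the value sets of A and dA.
EX = list(product((0, 1), repeat=2))  # pairs (A, dA)

# Cactus-style labels for the four dispositions:
labels = {(0, 0): "(A)(dA)", (0, 1): "(A) dA",
          (1, 0): "A (dA)",  (1, 1): "A dA"}

for point in EX:
    print(point, "=", labels[point])
```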

The space ${\displaystyle \mathrm {E} X}$ may be assigned the mnemonic type ${\displaystyle \mathbb {B} \times \mathbb {D} ,}$ which is really no different than ${\displaystyle \mathbb {B} \times \mathbb {B} =\mathbb {B} ^{2}.}$ An individual element of ${\displaystyle \mathrm {E} X}$ may be regarded as a disposition at a point or a situated direction, in effect, a singular mode of change occurring at a single point in the universe of discourse. In applications, the modality of this change can be interpreted in various ways, for example, as an expectation, an intention, or an observation with respect to the behavior of a system.

To complete the construction of the extended universe of discourse ${\displaystyle \mathrm {E} X^{\bullet }=[x_{1},\mathrm {d} x_{1}]=[A,\mathrm {d} A]}$ one must add the set of differential propositions ${\displaystyle \mathrm {E} X^{\uparrow }=\{g:\mathrm {E} X\to \mathbb {B} \}\cong (\mathbb {B} \times \mathbb {D} \to \mathbb {B} )}$ to the set of dispositions in ${\displaystyle \mathrm {E} X.}$ There are ${\displaystyle 2^{2^{2n}}=16}$ propositions in ${\displaystyle \mathrm {E} X^{\uparrow },}$ as detailed in Table 14.

${\displaystyle {\text{Table 14.}}~~{\text{Differential Propositions}}}$

| ${\displaystyle f_{i}}$ | ${\displaystyle g_{j}}$ | ${\displaystyle A\colon 1~1~0~0\quad \mathrm {d} A\colon 1~0~1~0}$ | Expression | English | Symbol |
|---|---|---|---|---|---|
| ${\displaystyle f_{0}}$ | ${\displaystyle g_{0}}$ | ${\displaystyle 0~0~0~0}$ | ${\displaystyle {\texttt {(}}~{\texttt {)}}}$ | ${\displaystyle {\text{false}}}$ | ${\displaystyle 0}$ |
| | ${\displaystyle g_{1}}$ | ${\displaystyle 0~0~0~1}$ | ${\displaystyle {\texttt {(}}A{\texttt {)(}}\mathrm {d} A{\texttt {)}}}$ | ${\displaystyle {\text{neither}}~A~{\text{nor}}~\mathrm {d} A}$ | ${\displaystyle \lnot A\land \lnot \mathrm {d} A}$ |
| | ${\displaystyle g_{2}}$ | ${\displaystyle 0~0~1~0}$ | ${\displaystyle {\texttt {(}}A{\texttt {)}}\,\mathrm {d} A}$ | ${\displaystyle \mathrm {d} A~{\text{and not}}~A}$ | ${\displaystyle \lnot A\land \mathrm {d} A}$ |
| | ${\displaystyle g_{4}}$ | ${\displaystyle 0~1~0~0}$ | ${\displaystyle A\,{\texttt {(}}\mathrm {d} A{\texttt {)}}}$ | ${\displaystyle A~{\text{and not}}~\mathrm {d} A}$ | ${\displaystyle A\land \lnot \mathrm {d} A}$ |
| | ${\displaystyle g_{8}}$ | ${\displaystyle 1~0~0~0}$ | ${\displaystyle A~\mathrm {d} A}$ | ${\displaystyle A~{\text{and}}~\mathrm {d} A}$ | ${\displaystyle A\land \mathrm {d} A}$ |
| ${\displaystyle f_{1}}$ | ${\displaystyle g_{3}}$ | ${\displaystyle 0~0~1~1}$ | ${\displaystyle {\texttt {(}}A{\texttt {)}}}$ | ${\displaystyle {\text{not}}~A}$ | ${\displaystyle \lnot A}$ |
| ${\displaystyle f_{2}}$ | ${\displaystyle g_{12}}$ | ${\displaystyle 1~1~0~0}$ | ${\displaystyle A}$ | ${\displaystyle A}$ | ${\displaystyle A}$ |
| | ${\displaystyle g_{6}}$ | ${\displaystyle 0~1~1~0}$ | ${\displaystyle {\texttt {(}}A{\texttt {,}}~\mathrm {d} A{\texttt {)}}}$ | ${\displaystyle A~{\text{not equal to}}~\mathrm {d} A}$ | ${\displaystyle A\neq \mathrm {d} A}$ |
| | ${\displaystyle g_{9}}$ | ${\displaystyle 1~0~0~1}$ | ${\displaystyle {\texttt {((}}A{\texttt {,}}~\mathrm {d} A{\texttt {))}}}$ | ${\displaystyle A~{\text{equal to}}~\mathrm {d} A}$ | ${\displaystyle A=\mathrm {d} A}$ |
| | ${\displaystyle g_{5}}$ | ${\displaystyle 0~1~0~1}$ | ${\displaystyle {\texttt {(}}\mathrm {d} A{\texttt {)}}}$ | ${\displaystyle {\text{not}}~\mathrm {d} A}$ | ${\displaystyle \lnot \mathrm {d} A}$ |
| | ${\displaystyle g_{10}}$ | ${\displaystyle 1~0~1~0}$ | ${\displaystyle \mathrm {d} A}$ | ${\displaystyle \mathrm {d} A}$ | ${\displaystyle \mathrm {d} A}$ |
| | ${\displaystyle g_{7}}$ | ${\displaystyle 0~1~1~1}$ | ${\displaystyle {\texttt {(}}A~\mathrm {d} A{\texttt {)}}}$ | ${\displaystyle {\text{not both}}~A~{\text{and}}~\mathrm {d} A}$ | ${\displaystyle \lnot A\lor \lnot \mathrm {d} A}$ |
| | ${\displaystyle g_{11}}$ | ${\displaystyle 1~0~1~1}$ | ${\displaystyle {\texttt {(}}A\,{\texttt {(}}\mathrm {d} A{\texttt {))}}}$ | ${\displaystyle {\text{not}}~A~{\text{without}}~\mathrm {d} A}$ | ${\displaystyle A\Rightarrow \mathrm {d} A}$ |
| | ${\displaystyle g_{13}}$ | ${\displaystyle 1~1~0~1}$ | ${\displaystyle {\texttt {((}}A{\texttt {)}}\,\mathrm {d} A{\texttt {)}}}$ | ${\displaystyle {\text{not}}~\mathrm {d} A~{\text{without}}~A}$ | ${\displaystyle A\Leftarrow \mathrm {d} A}$ |
| | ${\displaystyle g_{14}}$ | ${\displaystyle 1~1~1~0}$ | ${\displaystyle {\texttt {((}}A{\texttt {)(}}\mathrm {d} A{\texttt {))}}}$ | ${\displaystyle A~{\text{or}}~\mathrm {d} A}$ | ${\displaystyle A\lor \mathrm {d} A}$ |
| ${\displaystyle f_{3}}$ | ${\displaystyle g_{15}}$ | ${\displaystyle 1~1~1~1}$ | ${\displaystyle {\texttt {((}}~{\texttt {))}}}$ | ${\displaystyle {\text{true}}}$ | ${\displaystyle 1}$ |

Aside from changing the names of variables and shuffling the order of rows, this Table follows the format that was used previously for boolean functions of two variables. The rows are grouped to reflect natural similarity classes among the propositions. In a future discussion, these classes will be given additional explanation and motivation as the orbits of a certain transformation group acting on the set of 16 propositions. Notice that four of the propositions, in their logical expressions, resemble those given in the table for ${\displaystyle X^{\uparrow }.}$ Thus the first set of propositions ${\displaystyle \{f_{i}\}}$ is automatically embedded in the present set ${\displaystyle \{g_{j}\}}$ and the corresponding inclusions are indicated at the far left margin of the Table.
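The indexing convention of the Table can be captured in code. Reading the columns under the header values ${\displaystyle A\colon 1~1~0~0}$ and ${\displaystyle \mathrm {d} A\colon 1~0~1~0,}$ the binary digit of ${\displaystyle j}$ with weight ${\displaystyle 2^{2A+\mathrm {d} A}}$ gives the value ${\displaystyle g_{j}(A,\mathrm {d} A).}$ A Python sketch of this reading (our own encoding, not the text's):

```python
# Recover each proposition g_j on EX from the binary digits of j:
# the bit of weight 2^(2A + dA) is the value g_j(A, dA), matching
# the column order A: 1 1 0 0, dA: 1 0 1 0 in Table 14.
def g(j):
    return lambda a, da: (j >> (2 * a + da)) & 1

# The four propositions on X reappear among the g_j, ignoring dA entirely:
# g_0 = 0, g_3 = (A), g_12 = A, g_15 = 1.
for j in (0, 3, 12, 15):
    table = [g(j)(a, da) for a in (1, 0) for da in (1, 0)]
    print(f"g_{j}:", table)
```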

### Tacit Extensions

 I would really like to have slipped imperceptibly into this lecture, as into all the others I shall be delivering, perhaps over the years ahead. — Michel Foucault, The Discourse on Language, [Fou, 215]

Strictly speaking, however, there is a subtle distinction in type between the function ${\displaystyle f_{i}:X\to \mathbb {B} }$ and the corresponding function ${\displaystyle g_{j}:\mathrm {E} X\to \mathbb {B} ,}$ even though they share the same logical expression. Naturally, we want to maintain the logical equivalence of expressions that represent the same proposition while appreciating the full diversity of that proposition's functional and typical representatives. Both perspectives, and all the levels of abstraction extending through them, have their reasons, as will develop in time.

Because this special circumstance points up an important general theme, it is a good idea to discuss it more carefully. Whenever there arises a situation like this, where one alphabet ${\displaystyle {\mathcal {X}}}$ is a subset of another alphabet ${\displaystyle {\mathcal {Y}},}$ then we say that any proposition ${\displaystyle f:\langle {\mathcal {X}}\rangle \to \mathbb {B} }$ has a tacit extension to a proposition ${\displaystyle {\boldsymbol {\varepsilon }}f:\langle {\mathcal {Y}}\rangle \to \mathbb {B} ,}$ and that the space ${\displaystyle (\langle {\mathcal {X}}\rangle \to \mathbb {B} )}$ has an automatic embedding within the space ${\displaystyle (\langle {\mathcal {Y}}\rangle \to \mathbb {B} ).}$ The extension is defined in such a way that ${\displaystyle {\boldsymbol {\varepsilon }}f}$ puts the same constraint on the variables of ${\displaystyle {\mathcal {X}}}$ that are contained in ${\displaystyle {\mathcal {Y}}}$ as the proposition ${\displaystyle f}$ initially did, while it puts no constraint on the variables of ${\displaystyle {\mathcal {Y}}}$ outside of ${\displaystyle {\mathcal {X}},}$ in effect, conjoining the two constraints.

If the variables in question are indexed as ${\displaystyle {\mathcal {X}}=\{x_{1},\ldots ,x_{n}\}}$ and ${\displaystyle {\mathcal {Y}}=\{x_{1},\ldots ,x_{n},\ldots ,x_{n+k}\},}$ then the definition of the tacit extension from ${\displaystyle {\mathcal {X}}}$ to ${\displaystyle {\mathcal {Y}}}$ may be expressed in the form of an equation:

 ${\displaystyle {\boldsymbol {\varepsilon }}f(x_{1},\ldots ,x_{n},\ldots ,x_{n+k})~=~f(x_{1},\ldots ,x_{n}).}$

On formal occasions, such as the present context of definition, the tacit extension from ${\displaystyle {\mathcal {X}}}$ to ${\displaystyle {\mathcal {Y}}}$ is explicitly symbolized by the operator ${\displaystyle {\boldsymbol {\varepsilon }}:(\langle {\mathcal {X}}\rangle \to \mathbb {B} )\to (\langle {\mathcal {Y}}\rangle \to \mathbb {B} ),}$ where the appropriate alphabets ${\displaystyle {\mathcal {X}}}$ and ${\displaystyle {\mathcal {Y}}}$ are understood from context, but normally one may leave the "${\displaystyle {\boldsymbol {\varepsilon }}}$" silent.
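Under the stated definition, a sketch of the operator ${\displaystyle {\boldsymbol {\varepsilon }}}$ is nearly a one-liner in Python (the function names are our own):

```python
# Tacit extension: lift f on variables (x1, ..., xn) to a proposition ef
# on (x1, ..., xn, ..., xn+k) that simply ignores the extra variables.
def tacit_extension(f, n):
    """Return ef with ef(x1, ..., xn+k) = f(x1, ..., xn)."""
    def ef(*xs):
        return f(*xs[:n])
    return ef

# Example: extend the proposition A over X = {A} to EX = {A, dA}.
f_A = lambda a: a
ef_A = tacit_extension(f_A, 1)
assert all(ef_A(a, da) == a for a in (0, 1) for da in (0, 1))
```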

Let's explore what this means for the present Example. Here, ${\displaystyle {\mathcal {X}}=\{A\}}$ and ${\displaystyle {\mathcal {Y}}=\mathrm {E} {\mathcal {X}}=\{A,\mathrm {d} A\}.}$ For each of the propositions ${\displaystyle f_{i}}$ over ${\displaystyle X,}$ specifically, those whose expression ${\displaystyle e_{i}}$ lies in the collection ${\displaystyle \{0,{\texttt {(}}A{\texttt {)}},A,1\},}$ the tacit extension ${\displaystyle {\boldsymbol {\varepsilon }}f_{i}}$ of ${\displaystyle f_{i}}$ to ${\displaystyle \mathrm {E} X}$ can be phrased as a logical conjunction of two factors, ${\displaystyle {\boldsymbol {\varepsilon }}f_{i}=e_{i}\cdot \tau ~,}$ where ${\displaystyle \tau }$ is a logical tautology that uses all the variables of ${\displaystyle {\mathcal {Y}}-{\mathcal {X}}.}$ Working in these terms, the tacit extensions in question may be explicated as shown in Table 15.

 ${\displaystyle {\begin{matrix}0&=&0&\cdot &{\texttt {(}}\mathrm {d} A{\texttt {,(}}\mathrm {d} A{\texttt {))}}&=&&0\\[8pt]{\texttt {(}}A{\texttt {)}}&=&{\texttt {(}}A{\texttt {)}}&\cdot &{\texttt {(}}\mathrm {d} A{\texttt {,(}}\mathrm {d} A{\texttt {))}}&=&{\texttt {(}}A{\texttt {)}}\,\mathrm {d} A~&+&{\texttt {(}}A{\texttt {)(}}\mathrm {d} A{\texttt {)}}\\[8pt]A&=&~A~&\cdot &{\texttt {(}}\mathrm {d} A{\texttt {,(}}\mathrm {d} A{\texttt {))}}&=&~A~~\mathrm {d} A~&+&~A~{\texttt {(}}\mathrm {d} A{\texttt {)}}\\[8pt]1&=&1&\cdot &{\texttt {(}}\mathrm {d} A{\texttt {,(}}\mathrm {d} A{\texttt {))}}&=&&1\end{matrix}}}$

In its effect on the singular propositions over ${\displaystyle X,}$ this analysis has an interesting interpretation. The tacit extension takes us from thinking about a particular state, like ${\displaystyle A}$ or ${\displaystyle {\texttt {(}}A{\texttt {)}},}$ to considering the collection of outcomes, the outgoing changes or the singular dispositions, that spring from that state.

### Example 2. Drives and Their Vicissitudes

 I open my scuttle at night and see the far-sprinkled systems, And all I see, multiplied as high as I can cipher, edge but the rim of the farther systems. — Walt Whitman, Leaves of Grass, [Whi, 81]

Before we leave the one-feature case let's look at a more substantial example, one that illustrates a general class of curves that can be charted through the extended feature spaces and that provides an opportunity to discuss a number of important themes concerning their structure and dynamics.

Again, let ${\displaystyle {\mathcal {X}}=\{x_{1}\}=\{A\}.}$ In the discussion that follows we will consider a class of trajectories having the property that ${\displaystyle \mathrm {d} ^{k}A=0}$ for all ${\displaystyle k}$ greater than some fixed ${\displaystyle m}$ and we may indulge in the use of some picturesque terms that describe salient classes of such curves. Given the finite order condition, there is a highest order non-zero difference ${\displaystyle \mathrm {d} ^{m}A}$ exhibited at each point in the course of any determinate trajectory that one may wish to consider. With respect to any point of the corresponding orbit or curve let us call this highest order differential feature ${\displaystyle \mathrm {d} ^{m}A}$ the drive at that point. Curves of constant drive ${\displaystyle \mathrm {d} ^{m}A}$ are then referred to as ${\displaystyle m^{\text{th}}}$-gear curves.

• Scholium. The fact that a difference calculus can be developed for boolean functions is well known [Fuji], [Koh, § 8-4] and was probably familiar to Boole, who was an expert in difference equations before he turned to logic. And of course there is the strange but true story of how the Turin machines of the 1840s prefigured the Turing machines of the 1940s [Men, 225-297]. At the very outset of general purpose, mechanized computing we find that the motive power driving the Analytical Engine of Babbage, the kernel of an idea behind all of his wheels, was exactly his notion that difference operations, suitably trained, can serve as universal joints for any conceivable computation [M&M], [Mel, ch. 4].

Given this language, the Example we take up here can be described as the family of ${\displaystyle 4^{\text{th}}}$-gear curves through ${\displaystyle \mathrm {E} ^{4}X=\langle A,~\mathrm {d} A,~\mathrm {d} ^{2}\!A,~\mathrm {d} ^{3}\!A,~\mathrm {d} ^{4}\!A\rangle .}$ These are the trajectories generated subject to the dynamic law ${\displaystyle \mathrm {d} ^{4}A=1,}$ where it is understood in such a statement that all higher order differences are equal to ${\displaystyle 0.}$ Since ${\displaystyle \mathrm {d} ^{4}A}$ and all higher ${\displaystyle \mathrm {d} ^{k}A}$ are fixed, the temporal or transitional conditions (initial, mediate, terminal — transient or stable states) vary only with respect to their projections as points of ${\displaystyle \mathrm {E} ^{3}X=\langle A,~\mathrm {d} A,~\mathrm {d} ^{2}\!A,~\mathrm {d} ^{3}\!A\rangle .}$ Thus, there is just enough space in a planar venn diagram to plot all of these orbits and to show how they partition the points of ${\displaystyle \mathrm {E} ^{3}X.}$ It turns out that there are exactly two possible orbits, of eight points each, as illustrated in Figure 16.

 ${\displaystyle {\text{Figure 16.}}~~{\text{A Couple of Fourth Gear Orbits}}}$

With a little thought it is possible to devise an indexing scheme for the general run of dynamic states that allows for comparing universes of discourse that weigh in on different scales of observation. With this end in sight, let us index the states ${\displaystyle q\in \mathrm {E} ^{m}X}$ with the dyadic rationals (or the binary fractions) in the half-open interval ${\displaystyle [0,2).}$ Formally and canonically, a state ${\displaystyle q_{r}}$ is indexed by a fraction ${\displaystyle r={\tfrac {s}{t}}}$ whose denominator is the power of two ${\displaystyle t=2^{m}}$ and whose numerator is a binary numeral formed from the coefficients of the state in a manner to be described next. The differential coefficients of the state ${\displaystyle q}$ are just the values ${\displaystyle \mathrm {d} ^{k}\!A(q)}$ for ${\displaystyle k=0~{\text{to}}~m,}$ where ${\displaystyle \mathrm {d} ^{0}\!A}$ is defined as being identical to ${\displaystyle A.}$ To form the binary index ${\displaystyle d_{0}.d_{1}\ldots d_{m}}$ of the state ${\displaystyle q}$ the coefficient ${\displaystyle \mathrm {d} ^{k}\!A(q)}$ is read off as the binary digit ${\displaystyle d_{k}}$ associated with the place value ${\displaystyle 2^{-k}.}$ Expressed by way of algebraic formulas, the rational index ${\displaystyle r}$ of the state ${\displaystyle q}$ can be given by the following equivalent formulations:

 ${\displaystyle {\begin{array}{lllll}r(q)&=&\displaystyle \sum _{k}d_{k}\cdot 2^{-k}&=&\displaystyle \sum _{k}\mathrm {d} ^{k}\!A(q)\cdot 2^{-k}\\[8pt]&=&\displaystyle {\frac {s(q)}{t}}&=&\displaystyle {\frac {\sum _{k}d_{k}\cdot 2^{(m-k)}}{2^{m}}}~=~{\frac {\sum _{k}\mathrm {d} ^{k}\!A(q)\cdot 2^{(m-k)}}{2^{m}}}\end{array}}}$
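A direct transcription of the indexing formula, using exact dyadic arithmetic (the function name is our own):

```python
from fractions import Fraction

# Index a state q by its differential coefficients (d^0A(q), ..., d^mA(q)):
# r(q) = sum over k of d_k * 2^(-k), a dyadic rational in [0, 2).
def state_index(digits):
    return sum(Fraction(d, 2 ** k) for k, d in enumerate(digits))

# The state with coefficients (0, 0, 0, 0, 1) gets the index 1/16,
# written q_01 over the constant denominator t = 2^4 = 16.
print(state_index((0, 0, 0, 0, 1)))
```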

Applied to the example of ${\displaystyle 4^{\text{th}}}$-gear curves, this scheme results in the data of Tables 17-a and 17-b, which exhibit one period for each orbit. The states in each orbit are listed as ordered pairs ${\displaystyle (p_{i},q_{j}),}$ where ${\displaystyle p_{i}}$ may be read as a temporal parameter that indicates the present time of the state and where ${\displaystyle j}$ is the decimal equivalent of the binary numeral ${\displaystyle s.}$ Informally and more casually, the Tables exhibit the states ${\displaystyle q_{s}}$ as subscripted with the numerators of their rational indices, taking for granted the constant denominators of ${\displaystyle 2^{m}=2^{4}=16.}$ In this set-up the temporal successions of states can be reckoned as given by a kind of parallel round-up rule. That is, if ${\displaystyle (d_{k},d_{k+1})}$ is any pair of adjacent digits in the state index ${\displaystyle r,}$ then the value of ${\displaystyle d_{k}}$ in the next state is ${\displaystyle {d_{k}}'=d_{k}+d_{k+1}}$ (addition mod 2, understanding ${\displaystyle d_{m+1}=0}$ since all higher differences vanish).
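The round-up rule, and the claim that it yields exactly two eight-point orbits, can be checked by iterating the rule from the two tabulated starting states. The sketch below uses our own encoding, with states as coefficient tuples ${\displaystyle (d_{0},\ldots ,d_{4}):}$

```python
# Fourth-gear dynamics: states are coefficient tuples (d0, d1, d2, d3, d4)
# with d4 = 1 held fixed by the law d^4A = 1.  The parallel round-up rule
# updates each digit by d_k' = d_k + d_{k+1} (mod 2), with d5 = 0.
def step(q):
    d = list(q) + [0]
    return tuple((d[k] + d[k + 1]) % 2 for k in range(5))

def orbit(q):
    """Iterate the round-up rule until the starting state recurs."""
    path = [q]
    while True:
        q = step(q)
        if q == path[0]:
            return path
        path.append(q)

def numerator(q):
    """The numerator s in the rational index r = s/16 of the state q."""
    return sum(d << (4 - k) for k, d in enumerate(q))

orbit1 = orbit((0, 0, 0, 0, 1))   # starts at q_01
orbit2 = orbit((1, 1, 0, 0, 1))   # starts at q_25
print([numerator(q) for q in orbit1])
print([numerator(q) for q in orbit2])
```

Each orbit closes after eight steps, reproducing the successions of numerators listed in Tables 17-a and 17-b.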

${\displaystyle {\text{Table 17-a.}}~~{\text{A Couple of Orbits in Fourth Gear : Orbit 1}}}$

| ${\displaystyle {\text{Time}}~p_{i}}$ | ${\displaystyle {\text{State}}~q_{j}}$ | ${\displaystyle \mathrm {d} ^{0}\!A}$ | ${\displaystyle \mathrm {d} ^{1}\!A}$ | ${\displaystyle \mathrm {d} ^{2}\!A}$ | ${\displaystyle \mathrm {d} ^{3}\!A}$ | ${\displaystyle \mathrm {d} ^{4}\!A}$ |
|---|---|---|---|---|---|---|
| ${\displaystyle p_{0}}$ | ${\displaystyle q_{01}}$ | 0. | 0 | 0 | 0 | 1 |
| ${\displaystyle p_{1}}$ | ${\displaystyle q_{03}}$ | 0. | 0 | 0 | 1 | 1 |
| ${\displaystyle p_{2}}$ | ${\displaystyle q_{05}}$ | 0. | 0 | 1 | 0 | 1 |
| ${\displaystyle p_{3}}$ | ${\displaystyle q_{15}}$ | 0. | 1 | 1 | 1 | 1 |
| ${\displaystyle p_{4}}$ | ${\displaystyle q_{17}}$ | 1. | 0 | 0 | 0 | 1 |
| ${\displaystyle p_{5}}$ | ${\displaystyle q_{19}}$ | 1. | 0 | 0 | 1 | 1 |
| ${\displaystyle p_{6}}$ | ${\displaystyle q_{21}}$ | 1. | 0 | 1 | 0 | 1 |
| ${\displaystyle p_{7}}$ | ${\displaystyle q_{31}}$ | 1. | 1 | 1 | 1 | 1 |

${\displaystyle {\text{Table 17-b.}}~~{\text{A Couple of Orbits in Fourth Gear : Orbit 2}}}$

| ${\displaystyle {\text{Time}}~p_{i}}$ | ${\displaystyle {\text{State}}~q_{j}}$ | ${\displaystyle \mathrm {d} ^{0}\!A}$ | ${\displaystyle \mathrm {d} ^{1}\!A}$ | ${\displaystyle \mathrm {d} ^{2}\!A}$ | ${\displaystyle \mathrm {d} ^{3}\!A}$ | ${\displaystyle \mathrm {d} ^{4}\!A}$ |
|---|---|---|---|---|---|---|
| ${\displaystyle p_{0}}$ | ${\displaystyle q_{25}}$ | 1. | 1 | 0 | 0 | 1 |
| ${\displaystyle p_{1}}$ | ${\displaystyle q_{11}}$ | 0. | 1 | 0 | 1 | 1 |
| ${\displaystyle p_{2}}$ | ${\displaystyle q_{29}}$ | 1. | 1 | 1 | 0 | 1 |
| ${\displaystyle p_{3}}$ | ${\displaystyle q_{07}}$ | 0. | 0 | 1 | 1 | 1 |
| ${\displaystyle p_{4}}$ | ${\displaystyle q_{09}}$ | 0. | 1 | 0 | 0 | 1 |
| ${\displaystyle p_{5}}$ | ${\displaystyle q_{27}}$ | 1. | 1 | 0 | 1 | 1 |
| ${\displaystyle p_{6}}$ | ${\displaystyle q_{13}}$ | 0. | 1 | 1 | 0 | 1 |
| ${\displaystyle p_{7}}$ | ${\displaystyle q_{23}}$ | 1. | 0 | 1 | 1 | 1 |