Inquiry Driven Systems • Appendix A2
Author: Jon Awbrey
Syntactic Transformations
We have been examining several distinct but closely related notions of indication. To discuss the import of these ideas in greater depth, it serves to establish a number of logical relations and set-theoretic identities that can be found to hold among their roughly parallel arrays of conceptions and constructions. Facilitating this task requires in turn a number of auxiliary concepts and notations. The notions of indication in question are expressed in a variety of different notations, enumerated as follows:
1. The functional language of propositions
2. The logical language of sentences
3. The geometric language of sets
Thus, one way to explain the relationships that hold among these concepts is to describe the translations that are induced among their allied families of notation.
Syntactic Transformation Rules
A good way to summarize these translations and to organize their use in practice is by means of the syntactic transformation rules (STRs) that partially formalize them. Rudimentary examples of STRs are readily mined from the raw materials that are already available in this area of discussion. To begin, let the definition of an indicator function be recorded in the following form:
 
 

In practice, a definition like this is commonly used to substitute one of two logically equivalent expressions or sentences for the other in a context where the conditions of using the definition in this way are satisfied and where the change is perceived as potentially advancing a proof. The employment of a definition in this way can be expressed in the form of an STR that allows one to exchange two expressions of logically equivalent forms for one another in every context where their logical values are the only consideration. To be specific, the logical value of an expression is the value in the boolean domain that the expression stands for in its context or represents to its interpreter.
In the case of Definition 1, the corresponding STR permits one to exchange a sentence of the form with an expression of the form in any context that satisfies the conditions of its use, namely, the conditions of the definition that lead up to the stated equivalence. The relevant STR is recorded in Rule 1. By way of convention, I list the items that fall under a rule roughly in order of their ascending conceptual subtlety or their increasing syntactic complexity, without regard for their normal or typical orders of exchange, since this can vary widely from case to case.
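The exchange that Rule 1 licenses can be illustrated with a minimal sketch in Python, assuming the standard reading of an indicator function, namely, that the sentence "x is in Q" and the expression "f_Q(x) = 1" always have the same logical value. The names `indicator`, `X`, and `Q` are illustrative, not the text's own notation.

```python
# A minimal sketch of the equivalence behind Rule 1, assuming the
# standard definition of an indicator function on a universe X.

def indicator(Q):
    """Return the indicator function f_Q of a subset Q."""
    return lambda x: 1 if x in Q else 0

X = {1, 2, 3, 4, 5}   # a universe of discourse
Q = {2, 4}            # a subset of X
f_Q = indicator(Q)

# The two logically equivalent forms that the rule allows us to
# exchange: the sentence "x in Q" and the expression "f_Q(x) == 1".
for x in X:
    assert (x in Q) == (f_Q(x) == 1)
```

Either form may be substituted for the other in any context where only the logical value matters, which is exactly what the assertion checks pointwise over the universe.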
Conversely, any rule of this sort, properly qualified by the conditions under which it applies, can be turned back into a summary statement of the logical equivalence that is involved in its application. This mode of conversion between a static principle and a transformational rule, in other words, between a statement of equivalence and an equivalence of statements, is so automatic that it is usually not necessary to make a separate note of the "horizontal" versus the "vertical" versions of what amounts to the same abstract principle.
As another example of an STR, consider the following logical equivalence, which holds for any and for all
In practice, this logical equivalence is used to exchange an expression of the form with a sentence of the form in any context where one has a relatively fixed in mind and where one is conceiving to vary over its whole domain, namely, the universe This leads to the STR that is given in Rule 2.
Rules like these can be chained together to establish extended rules, just so long as their antecedent conditions are compatible. For example, Rules 1 and 2 combine to give the equivalents that are listed in Rule 3. This follows from a recognition that the function that is introduced in Rule 1 is an instance of the function that is mentioned in Rule 2. By the time one arrives in the "consequence box" of either Rule, then, one has in mind a comparatively fixed proposition about things in the universe and a variable argument
 
 

A large stock of rules can be derived in this way, by chaining together segments selected from a stock of previous rules, with the whole process of derivation ultimately leading back to a core stock of rules resting on an axiomatic basis. In order to keep track of their derivations, since their pedigrees help one remember the reasons for trusting their use in the first place, derived rules can be annotated by citing the rules from which they are derived.
In the present discussion, I am using a particular style of annotation for rule derivations, one that is called proof by grammatical paradigm or proof by syntactic analogy. The annotations in the right hand margin of the Rule Box interweave the numerators and the denominators of the paradigm being employed, in other words, the alternating terms of comparison in a sequence of analogies. Taking the syntactic transformations marked in the Rule Box one at a time, each step is licensed by its formal analogy to a previously established rule.
For example, the annotation may be read to say that is to as is to where the step from to is permitted by a previously accepted rule.
This can be illustrated by considering the derivation of Rule 3 in the augmented form that follows:

Notice how the sequence of analogies pivots on the term viewing it first under the aegis of as the second term of the first analogy, and then turning to view it again under the guise of as the first term of the second analogy.
By way of convention, rules that are tailored to a particular application, case, or subject, and rules that are adapted to a particular goal, object, or purpose, I frequently refer to as Facts.
Besides linking rules together into extended sequences of equivalents, there is one other way that is commonly used to get new rules from old. Novel starting points for rules can be obtained by extracting pairs of equivalent expressions from a sequence that falls under an established rule and then stating their equality in the appropriate form of equation.
For example, extracting the expressions and that are given as equivalents in Rule 3 and explicitly stating their equivalence produces the equation recorded in Corollary 1.
 


There are a number of issues that arise especially in establishing the proper use of STRs and that are appropriate to discuss at this juncture. The notation is intended to represent the proposition denoted by the sentence. There is only one problem with this form of usage. There is, in general, no such thing as "the" proposition denoted by a given sentence. Generally speaking, if a sentence is taken out of context and considered across a variety of different contexts, there is no unique proposition that it can be said to denote. But one seldom speaks at the maximum level of generality, or is even found to be thinking of it, and so this notation is usually meaningful and readily understandable whenever it is read in the proper frame of mind. Still, once the issue is raised, the question of how these meanings and understandings are possible has to be addressed, especially if one desires to express the regulation of their syntax in a partially computational form. This requires a closer examination of the very notion of context, and it involves enough reflection on the contextual evaluation of sentences that the relevant principles of its successful operation can be discerned and rationalized in explicit terms.
A sentence that is written in a context where it represents a value of or as a function of things in the universe where it stands for a value of or depending on how the signs that constitute its proper syntactic arguments are interpreted as denoting objects in in other words, where it is bound to lead its interpreter to view its own truth or falsity as determined by a choice of objects in is a sentence that might as well be written in the context whether this frame is explicitly marked around it or not.
More often than not, the context of interpretation fixes the denotations of most of the signs that make up a sentence, and so it is safe to adopt the convention that only those signs whose objects are not already fixed are free to vary in their denotations. Thus, only the signs that remain in default of prior specification are subject to treatment as variables, with a decree of functional abstraction hanging over all of their heads.
Going back to Rule 1, we see that it lists a pair of concrete sentences and authorizes exchanges in either direction between the syntactic structures that have these two forms. But a sentence is any sign that denotes a proposition, and so there are any number of less obvious sentences that can be added to this list, extending the number of items that are licensed to be exchanged. For example, a larger collection of equivalent sentences is recorded in Rule 4.
The first and last items on this list, namely, the sentence stating and the sentence stating are just the pair of sentences from Rule 3 whose equivalence for all is usually taken to define the idea of an indicator function At first sight, the inclusion of the other items appears to involve a category confusion, in other words, to mix the modes of interpretation and to create an array of mismatches between their ostensible types and the ruling type of a sentence. On reflection, and taken in context, these problems are not as serious as they initially seem. For example, the expression ostensibly denotes a proposition, but if it does, then it evidently can be recognized, by virtue of this very fact, to be a genuine sentence. As a general rule, if one can see it on the page, then it cannot be a proposition but can at most be a sign of one.
The use of the basic logical connectives can be expressed in the form of an STR as follows:
 
 

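The basic connectives can be sketched as operations on the boolean domain {0, 1}; the function names below are illustrative conventions, not the text's own notation, and the De Morgan check stands in for the kind of equivalence such an STR would license.

```python
# A sketch of the basic logical connectives as operations on the
# boolean domain {0, 1}. All names here are illustrative.

def neg(p):      return 1 - p
def conj(p, q):  return p * q             # logical "and"
def disj(p, q):  return p + q - p * q     # logical "or"
def impl(p, q):  return disj(neg(p), q)   # material implication
def equiv(p, q): return 1 if p == q else 0

# An STR of this kind licenses exchanging logically equivalent forms,
# e.g. De Morgan's law: not (p and q)  =  (not p) or (not q).
for p in (0, 1):
    for q in (0, 1):
        assert neg(conj(p, q)) == disj(neg(p), neg(q))
```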
As a general rule, the application of an STR involves the recognition of an antecedent condition and the facilitation of a consequent condition. The antecedent condition is a state whose initial expression presents a match, in a formal sense, to one of the sentences that are listed in the STR, and the consequent condition is achieved by taking its suggestions seriously, in other words, by following its sequence of equivalents and implicants to some other link in its chain.
Generally speaking, the application of a rule involves the recognition of an antecedent condition as a case that falls under a clause of the rule. This means that the antecedent condition is able to be captured in the form, conceived in the guise, expressed in the manner, grasped in the pattern, or recognized in the shape of one of the sentences in a list of equivalents or a chain of implicants.
A condition is amenable to a rule if any of its conceivable expressions formally matches any of the expressions that are enumerated by the rule. Further, applying the rule requires the relegation of the other expressions to the production of a result. Thus, there is the choice of an initial expression that needs to be checked on input for whether it fits the antecedent condition, and there are several types of output that are generated as a consequence, only a few of which are usually needed at any given time.
Editing Note. Need a transition here. Give a brief description of the Tables of Translation Rules that have now been moved to the Appendices, and then move on to the rest of the Definitions and Proof Schemata.
A rule that allows one to turn equivalent sentences into identical propositions:
Compare:
Editing Note. The last draft I can find has 5 variants for the next box, "Value Rule 1", and I can't tell right off which I meant to use. Until I can get back to this, here's a link to the collection of variants:
 
 

 
 

 
 

 
 

 
 

Given an indexed set of sentences, it is possible to consider the logical conjunction of the corresponding propositions. Various notations for this concept are useful in various contexts, a sufficient sample of which are recorded in Definition 6.
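The idea can be sketched as follows, modeling each proposition as a predicate on the universe and the conjunction as the proposition that holds exactly where every member of the family holds. The index set, the sample predicates, and all names are illustrative assumptions.

```python
# A sketch of the conjunction of an indexed family of propositions,
# each modeled as a predicate on the universe X. Names illustrative.

X = range(10)

# An indexed set of propositions q_j, for j in an index set J.
q = {
    1: lambda x: x % 2 == 0,   # "x is even"
    2: lambda x: x < 8,        # "x is less than 8"
}

def conjunction(props):
    """The conjunction of a family of propositions, as a proposition."""
    return lambda x: all(p(x) for p in props.values())

q_J = conjunction(q)

# q_J holds exactly where every q_j holds.
assert [x for x in X if q_J(x)] == [0, 2, 4, 6]
```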
 
 

 
 

 
 

 
 

 
 

Editing Note. Check earlier and later drafts to see where came from. Are these just placeholders for the Value or Evaluation Rules?
 
 

For instance, the observation that expresses the equality of sets in terms of their indicator functions can be formalized according to the pattern in Rule 9, namely, at lines R9a, R9b, and R9c, and these components of Rule 9 can be cited in future uses by their indices in this list. Using Rule 7, annotated as R7, to adduce a few properties of indicator functions to the account, it is possible to extend Rule 9 by another few steps, referenced as R9d, R9e, R9f, and R9g.
 
 

 
 

 
 

An application of Rule 11 involves the recognition of an antecedent condition as a case under the Rule, that is, as a condition that matches one of the sentences in the Rule's chain of equivalents, and it requires the relegation of the other expressions to the production of a result. Thus, there is the choice of an initial expression that has to be checked on input for whether it fits the antecedent condition, and there is the choice of three types of output that are generated as a consequence, only one of which is generally needed at any given time. More often than not, though, a rule is applied in only a few of its possible ways. The usual antecedent and the usual consequents for Rule 11 can be distinguished in form and specialized in practice as follows:
marks the usual starting place for an application of the Rule, that is, the standard form of antecedent condition that is likely to lead to an invocation of the Rule. 
records the trivial consequence of applying the upspar operator to both sides of the initial equation. 
gives a version of the indicator function with called the extensional or relational form of the indicator function. 
gives a version of the indicator function with called its functional form. 
Applying Rule 9, Rule 8, and the Logical Rules to the special case where one obtains the following general Fact:
 
 

Derived Equivalence Relations
One seeks a method of general application for approaching the individual sign relation, a way to select an aspect of its form, to analyze it with regard to its intrinsic structure, and to classify it in comparison with other sign relations. With respect to a particular sign relation, one approach that presents itself is to examine the relation between signs and interpretants that is given directly by its connotative component and to compare it with the various forms of derived, indirect, mediate, or peripheral relationships that can be found to exist among signs and interpretants by way of secondary considerations or subsequent studies. Of especial interest are the relationships among signs and interpretants that can be obtained by working through the collections of objects that they commonly or severally denote.
A classic way of showing that two sets are equal is to show that every element of the first belongs to the second and that every element of the second belongs to the first. The problem with this strategy is that one can exhaust a considerable amount of time trying to prove that two sets are equal before it occurs to one to look for a counterexample, that is, an element of the first that does not belong to the second or an element of the second that does not belong to the first, in cases where that is precisely what one ought to be seeking. It would be nice if there were a more balanced, impartial, or neutral way to go about this task, one that did not require such an undue commitment to either side, a technique that helps to pinpoint the counterexamples when they exist, and a method that keeps in mind the original relation of proving that and showing that to probing, testing, and seeing whether.
A different way of seeing that two sets are equal, or of seeing whether two sets are equal, is based on the following observation:
Two sets are equal as sets 
The indicator functions of the two sets are equal as functions 
The values of the two indicator functions are equal to each other on all domain elements. 
It is important to notice the hidden quantifier, of a universal kind, that lurks in all three equivalent statements but is only revealed in the last.
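The observation, with its hidden universal quantifier made explicit, can be sketched in a few lines; the sample universe and subsets are illustrative, and the quantifier appears as the `all(...)` call over the domain.

```python
# A sketch of the observation above: two subsets of a universe X are
# equal exactly when their indicator functions agree at every element
# of X. The universal quantifier is explicit in the all(...) call.

def indicator(Q):
    return lambda x: 1 if x in Q else 0

X = {1, 2, 3, 4, 5}
A = {x for x in X if x % 2 == 1}   # the odd elements of X
B = {1, 3, 5}

f_A, f_B = indicator(A), indicator(B)

# Set equality restated as pointwise equality of indicator functions;
# a counterexample, if one exists, is any x where the values differ.
assert (A == B) == all(f_A(x) == f_B(x) for x in X)
```

This restatement also serves the more balanced strategy asked for above: scanning the domain for a point of disagreement pinpoints a counterexample whenever one exists.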
In making the next set of definitions and in using the corresponding terminology it is taken for granted that all of the references of signs are relative to a particular sign relation that either remains to be specified or is already understood. Further, I continue to assume that in which case this set is called the syntactic domain of
In the following definitions, let let and let
Recall the definition of the connotative component of a sign relation in the following form:
Equivalent expressions for this concept are recorded in Definition 8.
 
 

Editing Note. Need a discussion of converse relations here. Perhaps it would work to introduce the operators that Peirce used for the converse of a dyadic relative, namely,
The dyadic relation that is the converse of the connotative relation can be defined directly in the following fashion:
A few of the many different expressions for this concept are recorded in Definition 9.
 
 

Recall the definition of the denotative component of in the following form:
Equivalent expressions for this concept are recorded in Definition 10.
 
 

The dyadic relation that is the converse of the denotative relation can be defined directly in the following fashion:
A few of the many different expressions for this concept are recorded in Definition 11.
 
 

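The four components just defined can be sketched by modeling a sign relation as a set of (object, sign, interpretant) triples: the connotative component is the projection on the sign and interpretant domains, the denotative component is the projection on the object and sign domains, and each converse simply reverses the pairs. The sample relation and all names are illustrative assumptions, not drawn from the text.

```python
# A sketch of the connotative and denotative components of a sign
# relation L, and their converses. Sample data is illustrative.

L = {
    ("A", "a", "i"),
    ("A", "i", "a"),
    ("B", "b", "u"),
    ("B", "u", "b"),
}

# Connotative component: projection of L on signs and interpretants.
Con_L      = {(s, i) for (o, s, i) in L}
Con_L_conv = {(i, s) for (s, i) in Con_L}   # its converse

# Denotative component: projection of L on objects and signs.
Den_L      = {(o, s) for (o, s, i) in L}
Den_L_conv = {(s, o) for (o, s) in Den_L}   # its converse

assert ("a", "i") in Con_L and ("i", "a") in Con_L_conv
assert ("A", "a") in Den_L and ("a", "A") in Den_L_conv
```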
The denotation of in written is defined as follows:
In other words:
Equivalent expressions for this concept are recorded in Definition 12.
 
 

Signs are equiferent if they refer to all and only the same objects, that is, if they have exactly the same denotations. In other language for the same relation, signs are said to be denotatively equivalent or referentially equivalent, but it is probably best to check whether the extension of this concept over the syntactic domain is really a genuine equivalence relation before jumping to the conclusions that are implied by these latter terms.
To define the equiference of signs in terms of their denotations, one says that is equiferent to under and writes to mean that Taken in extension, this notion of a relation between signs induces an equiference relation on the syntactic domain.
For each sign relation this yields a binary relation that is defined as follows:
These definitions and notations are recorded in the following display.
 
 

The relation is defined and the notation is meaningful in every situation where the corresponding denotation operator makes sense, but it remains to check whether this relation enjoys the properties of an equivalence relation.
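A minimal sketch of equiference, assuming the definition above: the denotation of a sign is the set of objects it denotes under the sign relation, and two signs are equiferent when those sets coincide. The sample relation and all names are illustrative.

```python
# A sketch of equiference: two signs are equiferent under a sign
# relation L when they denote exactly the same set of objects.

L = {
    ("A", "a", "i"),
    ("A", "i", "a"),
    ("B", "b", "u"),
    ("B", "u", "b"),
}

def denotation(L, s):
    """The set of objects that the sign s denotes under L."""
    return {o for (o, t, i) in L if t == s}

def equiferent(L, s1, s2):
    """True when s1 and s2 have exactly the same denotations under L."""
    return denotation(L, s1) == denotation(L, s2)

assert equiferent(L, "a", "i")        # both denote {"A"}
assert not equiferent(L, "a", "b")    # {"A"} versus {"B"}
```

Because `equiferent` bottoms out in an equality of sets, the equivalence-relation properties checked in the next paragraphs reduce to the reflexivity, symmetry, and transitivity of set equality.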

Reflexive property.
Is it true that for every ?
By definition, if and only if
Thus, the reflexive property holds in any setting where the denotations are defined for all signs in the syntactic domain of

Symmetric property.
Does imply for all ?
In effect, does imply for all signs and in the syntactic domain ?
Yes, so long as the sets and are well-defined, a fact which is already being assumed.

Transitive property.
Does and imply for all ?
To belabor the point, does and imply for all ?
Yes, once again, under the stated conditions.
It should be clear at this point that any question about the equiference of signs reduces to a question about the equality of sets, specifically, the sets that are indexed by these signs. As a result, so long as these sets are well-defined, the issue of whether equiference relations induce equivalence relations on their syntactic domains is almost as trivial as it initially appears.
Taken in its set-theoretic extension, a relation of equiference induces a denotative equivalence relation (DER) on its syntactic domain. This leads to the formation of denotative equivalence classes (DECs), denotative partitions (DEPs), and denotative equations (DEQs) on the syntactic domain. But what does it mean for signs to be equiferent?
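The formation of DECs and DEPs can be sketched directly: grouping the signs of a syntactic domain by their denotation sets partitions the domain, since equiference reduces to equality of those sets. The sample sign relation and all names are illustrative.

```python
# A sketch of denotative equivalence classes (DECs): grouping signs
# by their denotations partitions the syntactic domain into a
# denotative partition (DEP). Sample data is illustrative.

from collections import defaultdict

L = {
    ("A", "a", "i"),
    ("A", "i", "a"),
    ("B", "b", "u"),
    ("B", "u", "b"),
}

syntactic_domain = {s for (o, s, i) in L} | {i for (o, s, i) in L}

classes = defaultdict(set)
for s in syntactic_domain:
    denotation = frozenset(o for (o, t, i) in L if t == s)
    classes[denotation].add(s)

# The DECs form a denotative partition of the syntactic domain.
partition = set(map(frozenset, classes.values()))
assert partition == {frozenset({"a", "i"}), frozenset({"b", "u"})}
```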
Notice that this is not the same thing as being semiotically equivalent, in the sense of belonging to a single semiotic equivalence class (SEC), falling into the same part of a semiotic partition (SEP), or having a semiotic equation (SEQ) between them. It is only when very felicitous conditions obtain, establishing a concord between the denotative and the connotative components of a sign relation, that these two ideas coalesce.
In general, there is no necessity that the equiference of signs, that is, their denotational equivalence or their referential equivalence, induces the same equivalence relation on the syntactic domain as that defined by their semiotic equivalence, even though this state of accord seems like an especially desirable situation. This makes it necessary to find a distinctive nomenclature for these structures, for which I adopt the term denotative equivalence relations (DERs). In their train they bring the allied structures of denotative equivalence classes (DECs) and denotative partitions (DEPs), while the corresponding statements of denotative equations (DEQs) are expressible in the form
The uses of the equal sign for denoting equations or equivalences are recalled and extended in the following ways:
 If is an arbitrary equivalence relation, then the equation means that
 If is a sign relation such that is a SER on then the semiotic equation means that
 If is a sign relation such that is its DER on then the denotative equation means that in other words, that
The use of square brackets for denoting equivalence classes is recalled and extended in the following ways:
 If is an arbitrary equivalence relation, then is the equivalence class of under
 If is a sign relation such that is a SER on then is the SEC of under
 If is a sign relation such that is a DER on then is the DEC of under
By applying the form of Fact 1 to the special case where and one obtains the following facts.
 
 

 
 

 
 

Digression on Derived Relations
A better understanding of derived equivalence relations (DERs) can be achieved by placing their constructions within a more general context and thus comparing the associated type of derivation operation, namely, the one that takes a triadic relation into a dyadic relation, with other types of operations on triadic relations. The proper setting would permit a comparative study of all their constructions from a basic set of projections and a full array of compositions on dyadic relations.
To that end, let the derivation be expressed in the following way:
From this may be abstracted a way of composing two dyadic relations that have a domain in common. For example, let and be dyadic relations that have the middle domain in common. Then we may define a form of composition, notated and defined as follows:
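One reading of this composition, assuming the standard relational form, can be sketched as follows: a pair (x, z) belongs to the composite exactly when some element y of the shared middle domain links x to z through both relations. The sample relations and names are illustrative.

```python
# A sketch of composing two dyadic relations through a shared middle
# domain, assuming the standard relational composition. Names are
# illustrative.

def compose(P, Q):
    """Relational composition of P (from X to Y) with Q (from Y to Z)."""
    return {(x, z) for (x, y1) in P for (y2, z) in Q if y1 == y2}

P = {(1, "a"), (2, "b")}        # a relation into the middle domain
Q = {("a", "u"), ("b", "v")}    # a relation out of the middle domain

assert compose(P, Q) == {(1, "u"), (2, "v")}
```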