
User:Jon Awbrey/Mathematical Notes

Mathematical Notes
CAT. Category Theory
DARM. Differential And Riemannian Manifolds
DIF. Differential Geometry For Engineers
GRAPH. Graph Theory
HOC. Higher Order Categorical Logic
INF. Information Flow
MOD. Model Theory
PAS. Probability And Statistics
SEM. Program Semantics
SET. Set Theory
TOP. Topology
MAT. Meta Links

CAT. Category Theory

CAT. Note 1

| Excerpts from 'Categories for the Working Mathematician' by Saunders Mac Lane
|
| Introduction
|
| Category theory starts with the observation that many properties of
| mathematical systems can be unified and simplified by a presentation
| with diagrams of arrows.  Each arrow f : X -> Y represents a function;
| that is, a set X, a set Y, and a rule x ~> f x which assigns to each
| element x in X an element f x in Y;  whenever possible we write f x
| and not f(x), omitting unnecessary parentheses.  A typical diagram
| of sets and functions is:
|
|         Y
|         o
|        ^ \
|       /   \
|    f /     \ g
|     /       \
|    /         v
|   o---------->o
| X       h       Z
|
| It is commutative when h = g o f, where g o f is the usual composite
| function g o f : X -> Z, defined by x ~> g(f x).  The same diagrams apply
| in other mathematical contexts;  thus in the "category" of all topological
| spaces, the letters X, Y, and Z represent topological spaces while f, g,
| and h stand for continuous maps.  Again, in the "category" of all groups,
| X, Y, and Z stand for groups, f, g, and h for homomorphisms.
|
| Mac Lane, 'Cat Work Math', p. 1.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
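
A minimal sketch in Haskell (my own choice of notation, not Mac Lane's) of the
commuting triangle above:  the sets X, Y, Z become concrete types, and the
statement that the diagram commutes is just the equation h = g o f, written
with Haskell's composition operator.

    -- Hypothetical arrows f, g, h standing in for those in the diagram.
    f :: Int -> String           -- f : X -> Y
    f x = replicate x '*'

    g :: String -> Int           -- g : Y -> Z
    g = length

    h :: Int -> Int              -- h : X -> Z
    h = g . f                    -- the triangle commutes:  h = g o f

    -- e.g. h 4 == 4, since g (f 4) = length "****" = 4.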

CAT. Note 2

| Introduction (cont.)
|
| Many properties of mathematical constructions may
| be represented by universal properties of diagrams.
| Consider the cartesian product X x Y of two sets,
| consisting as usual of all ordered pairs <x, y>
| of elements x in X and y in Y.  The projections
| <x, y> ~> x, <x, y> ~> y of the product on its
| "axes" X and Y are functions p : X x Y -> X,
| q : X x Y -> Y.  Any function h : W -> X x Y
| from a third set W is uniquely determined by
| its composites p o h and q o h.  Conversely,
| given W and two functions f and g as in the
| diagram below, there is a unique function h
| which makes the diagram commute;  namely,
| h w = <f w, g w> for each w in W.
|
|            W
|            o
|           /|\
|          / | \
|         /  |  \
|        /   |   \
|     f /    |    \ g
|      /     |     \
|     /      |      \
|    /       |       \
|   v        v        v
|  o<--------o-------->o
| X    p    XxY    q    Y
|
| Thus, given X and Y, <p, q> is "universal" among pairs of
| functions from some set to X and Y, because any other such
| pair <f, g> factors uniquely (via h) through the pair <p, q>.
| This property describes the cartesian product X x Y uniquely
| (up to a bijection);  the same diagram, read in the category
| of topological spaces or of groups, describes uniquely the
| cartesian product of spaces or the direct product of groups.
|
| Mac Lane, 'Cat Work Math', p. 1.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
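
A minimal Haskell sketch of the universal property just quoted, under the
assumption that the pair type (x, y) plays the role of X x Y, with fst and
snd as the projections p and q.  The names below are mine, purely for
illustration.

    -- The unique mediating map h determined by f : W -> X and g : W -> Y.
    pairing :: (w -> x) -> (w -> y) -> (w -> (x, y))
    pairing f g = \w -> (f w, g w)

    -- Commutativity of the diagram:
    --   fst . pairing f g == f
    --   snd . pairing f g == g
    -- and pairing f g is the only function with these two properties.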

CAT. Note 3

| Introduction (cont.)
|
|            W
|            o
|           /|\
|          / | \
|         /  |  \
|        /   |   \
|     f /    |    \ g
|      /     |     \
|     /      |      \
|    /       |       \
|   v        v        v
|  o<--------o-------->o
| X    p    XxY    q    Y
|
| Adjointness is another expression for these universal properties.  If we write
| hom(W, X) for the set of all functions f : W -> X and hom(<U, V>, <X, Y>) for
| the set of all pairs of functions f : U -> X, g : V -> Y, the correspondence
| h ~> <p h, q h> = <f, g> indicated in the diagram above is a bijection:
|
| hom(W, X x Y)  ~=~  hom(<W, W>, <X, Y>).
|
| This bijection is "natural" in the sense (to be made more precise later)
| that it is defined in "the same way" for all sets W and for all pairs of
| sets <X, Y> (and it is likewise "natural" when interpreted for topological
| spaces or for groups).  This natural bijection involves two constructions
| on sets:  The construction W ~> <W, W> which sends each set W to the diagonal
| pair !D!W = <W, W>, and the construction <X, Y> ~> X x Y which sends each
| pair of sets to its cartesian product.  Given the bijection above, we say
| that the construction X x Y is a 'right adjoint' to the construction !D!,
| and that !D! is left adjoint to the product.  Adjoints, as we shall see,
| occur throughout mathematics.
|
| Mac Lane, 'Cat Work Math', pp. 1-2.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
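
A hedged Haskell rendering of the bijection hom(W, X x Y) ~=~ hom(<W, W>, <X, Y>):
a map into a product is the same data as a pair of maps.  The names split and
unsplit are mine.

    split :: (w -> (x, y)) -> (w -> x, w -> y)
    split h = (fst . h, snd . h)

    unsplit :: (w -> x, w -> y) -> (w -> (x, y))
    unsplit (f, g) = \w -> (f w, g w)

    -- split and unsplit are mutually inverse (extensionally), and the
    -- correspondence is "natural":  it is defined the same way for
    -- every choice of w, x, y.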

CAT. Note 4

| Introduction (cont.)
|
| The construction "cartesian product" is called a "functor"
| because it applies suitably to sets 'and' to the functions
| between them;  two functions k : X -> X' and h : Y -> Y'
| have a function k x h as their cartesian product:
|
| k x h : X x Y -> X' x Y',  <x, y> ~> <k x, h y>.
|
| Observe also that the one-point set 1 = {0} serves as an identity
| under the operation "cartesian product", in view of the bijections:
|
|              !q!       !r!
| (1).  1 x X -----> X <----- X x 1
|
| given by !q!<0, x> = x, !r!<x, 0> = x.
|
| Mac Lane, 'Cat Work Math', p. 2.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

Nota Bene.  I am having to change the names of some of Mac Lane's maps,
due to his use of letters like "l" and the Greek lambda !l! that do not
asciify without risk of confusion with the number 1 and the identity !1!.
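
A small Haskell sketch of the functorial action of the cartesian product and
of the bijections (1), with () standing in for the one-point set 1.  All the
names here are mine.

    -- k x h in Mac Lane's notation:
    cross :: (x -> x') -> (y -> y') -> ((x, y) -> (x', y'))
    cross k h = \(x, y) -> (k x, h y)

    lunit :: ((), x) -> x          -- !q! : 1 x X -> X
    lunit ((), x) = x

    runit :: (x, ()) -> x          -- !r! : X x 1 -> X
    runit (x, ()) = x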

CAT. Note 5

| Introduction (cont.)
|
| The notion of a monoid (a semigroup with identity)
| plays a central role in category theory.  A monoid M
| may be described as a set M together with two functions:
|
| (2).  !m! : M x M -> M
|
|       !h! : 1 -> M
|
| such that the following two diagrams in !m! and !h! commute:
|
| (3).
|                   1 x !m!
| M x M x M o------------------>o M x M
|           |                   |
|           |                   |
|           |                   |
|   !m! x 1 |                   | !m!
|           |                   |
|           |                   |
|           v                   v
|     M x M o------------------>o M
|                    !m!
|
|                 !h! x 1     M x M     1 x !h!
|     1 x M o------------------>o<------------------o M x 1
|           |                   |                   |
|           |                   |                   |
|           |                   |                   |
|       !q! |                   | !m!               | !r!
|           |                   |                   |
|           |                   |                   |
|           v                   v                   v
|         M o===================o===================o M
|                               M
|
| Here 1 in 1 x !m! is the identity function M -> M, and 1 in 1 x M
| is the one-point set 1 = {0}, while !q! and !r! are the bijections
| of (1) above.  To say that these diagrams commute means that the
| following composites are equal:
|
| !m! o (1 x !m!)  =  !m! o (!m! x 1)
|
| !m! o (!h! x 1)  =  !q!
|
| !m! o (1 x !h!)  =  !r!
|
| These diagrams may be rewritten with elements, writing the function !m! (say)
| as a product !m!(x, y) = x y for x, y in M and replacing the function !h! on
| the one-point set 1 = {0} by its (only) value, an element !h!(0) = u in M.
| The diagrams above then become:
|
| <x, y, z> o|----------------->o <x, yz>
|           -                   -
|           |                   |
|           |                   |
|           |                   |
|           |                   |
|           |                   |
|           v                   v
|   <xy, z> o|----------------->o (xy)z = x(yz) 
|
|    <0, x> o|----------------->o <u, x>
|           -                   -
|           |                   |
|           |                   |
|           |                   |
|           |                   |
|           |                   |
|           v                   v
|         x o===================o u x
|
|    <x, u> o<-----------------|o <x, 0>
|           -                   -
|           |                   |
|           |                   |
|           |                   |
|           |                   |
|           |                   |
|           v                   v
|       x u o===================o x
|
| They are exactly the familiar axioms on a monoid, that the
| multiplication be associative and have an element u as
| left and right identity.
|
| This indicates, conversely, how algebraic identities
| may be expressed by commutative diagrams.
|
| Mac Lane, 'Cat Work Math', pp. 2-3.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
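
A hedged Haskell sketch of a monoid presented, as above, by its two structure
maps, with the commuting diagrams (3) recorded as equational laws in comments.
The record and field names are mine.

    data DiagMonoid m = DiagMonoid
      { mu  :: (m, m) -> m         -- !m! : M x M -> M
      , eta :: () -> m             -- !h! : 1 -> M
      }

    -- Laws, i.e. the diagrams (3):
    --   mu (x, mu (y, z)) == mu (mu (x, y), z)    -- associativity
    --   mu (eta (), x)    == x                    -- left unit
    --   mu (x, eta ())    == x                    -- right unit

    -- Example:  the additive monoid of integers.
    intAdd :: DiagMonoid Integer
    intAdd = DiagMonoid { mu = uncurry (+), eta = const 0 }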

CAT. Note 6

| Introduction (cont.)
|
| The same process applies to other identities;  for example, one may describe
| a group as a monoid M equipped with a function !z! : M -> M (of course, the
| function x ~> x^(-1)) such that the following diagram commutes:
|
| (4).
|            !d!      M x M     1 x !z!
| M o------------------>o------------------>o M x M
|   |                                       |
|   |                                       |
|   |                                       |
|   |                                       | !m!
|   |                                       |
|   |                                       |
|   v                                       v
| 1 o-------------------------------------->o M
|
|                    <x, x>
| x o|----------------->o------------------>o <x, x^(-1)>
|   -                                       -
|   |                                       |
|   |                                       |
|   |                                       |
|   |                                       |
|   |                                       |
|   v                                       v
| 0 o|------------------------------------->o u = x x^(-1)
|
| Here !d! : M -> M x M is the diagonal function x ~> <x, x> for x in M, while
| the unnamed vertical arrow M -> 1 = {0} is the evident (and unique) function
| from M to the one-point set.  As indicated [in the element-mapping diagram],
| this diagram does state that !z! assigns to each element x in M an element
| x^(-1) which is a right inverse to x.
|
| This definition of a group by arrows !m!, !h!, and !z!
| in such commutative diagrams makes no explicit mention
| of group elements, so applies to other circumstances:
|
| If the letter 'M' stands for a topological space (not just a set) and the arrows
| are continuous maps (not just functions), then the conditions (3) and (4) define
| a topological group -- for they specify that M is a topological space with a
| binary operation !m! of multiplication which is continuous (simultaneously
| in its arguments) and which has a continuous right inverse, all satisfying
| the usual group axioms.
|
| Again, if the letter 'M' stands for a differentiable manifold (of class C^oo)
| while 1 is the one-point manifold and the arrows !m!, !h!, and !z! are smooth
| mappings of manifolds, then the diagrams (3) and (4) become the definition of
| a Lie group.
|
| Thus groups, topological groups, and Lie groups can
| all be described as "diagrammatic" groups in the
| respective categories of sets, of topological
| spaces, and of differentiable manifolds.
|
| Mac Lane, 'Cat Work Math', pp. 3-4.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
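
Continuing in the same hedged Haskell style (and restating the monoid fields
so the sketch stands alone), a group adds the one extra structure map !z!,
and diagram (4) becomes one more equation.

    data DiagGroup m = DiagGroup
      { mult :: (m, m) -> m        -- !m! : M x M -> M
      , unit :: () -> m            -- !h! : 1 -> M
      , inv  :: m -> m             -- !z! : M -> M, x ~> x^(-1)
      }

    -- Diagram (4) as an equation:  mult (x, inv x) == unit ()
    -- i.e. inv x is a right inverse of x.

    addGroup :: DiagGroup Integer
    addGroup = DiagGroup { mult = uncurry (+), unit = const 0, inv = negate }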

CAT. Note 7

| Introduction (concl.)
|
| Finally, an 'action' of a monoid <M, !m!, !h!> on a set S
| is defined to be a function !n! : M x S -> S such that the
| following two diagrams commute:
|
|                   1 x !n!
| M x M x S o------------------>o M x S
|           |                   |
|           |                   |
|           |                   |
|   !m! x 1 |                   | !n!
|           |                   |
|           |                   |
|           v                   v
|     M x S o------------------>o S
|                    !n!
|
|                 !h! x 1
|     1 x S o------------------>o M x S
|             \                 |
|               \               |
|                 \             |
|                   \           |
|                 !q! \         | !n!
|                       \       |
|                         \     |
|                           \   |
|                             \ |
|                               o
|                               S
|
| If we write !n!(x, s) = x * s to denote the
| result of the action of the monoid element x
| on the element s in S, these diagrams state
| just that:
|
| x * (y * s)  =  (x y) * s
|
| u * s  =  s
|
| for all x, y in M and all s in S.  These are the
| usual conditions for the action of a monoid on a
| set, familiar especially in the case of a group
| acting on a set as a group of transformations.
| If we shift from the category of sets to the
| category of topological spaces, we get the
| usual continuous action of a topological
| monoid M on a topological space S.  ...
|
| Mac Lane, 'Cat Work Math', p. 5.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
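
A minimal Haskell sketch of an action !n! : M x S -> S, taking as the example
the monoid of strings under concatenation acting on strings by prefixing.
The name act is mine.

    act :: (String, String) -> String
    act (x, s) = x ++ s

    -- The two diagrams, written as equations (here u = "" and x y = x ++ y):
    --   act (x, act (y, s)) == act (x ++ y, s)    -- x * (y * s) = (x y) * s
    --   act ("", s)         == s                  -- u * s = s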

CAT. Note 8

| Excerpts from 'Categories for the Working Mathematician' by Saunders Mac Lane
|
| 1.  Categories, Functors, and Natural Transformations
|
| 1.1.  Axioms for Categories
|
| First we describe categories directly by means of axioms,
| without using any set theory, and call them "metacategories".
| Actually, we begin with a simpler notion, a (meta)graph.
|
| A 'metagraph' consists of:
|
|   'objects' a, b, c, ...,
|
|   'arrows' f, g, h, ...,
|
| and two operations, as follows:
|
|   'Domain', which assigns to each arrow f an object a = dom f.
|
|   'Codomain', which assigns to each arrow f an object b = cod f.
|
| These operations on f are best indicated by displaying f
| as an actual arrow starting at its domain (or "source")
| and ending at its codomain (or "target").
|
|                         f
|    f : a -> b   or   a ---> b.
|
| A finite graph may be readily exhibited:
|
|                               --->
|    Thus   o--->o--->o   or   o    o
|                               --->
|
| A 'metacategory' is a metagraph with two additional operations:
|
|   'Identity',
|
|    which assigns to each object 'a'
|    an arrow id_a = 1_a : a -> a.
|
|   'Composition',
|
|    which assigns to each pair <g, f> of arrows with
|    dom g = cod f an arrow g o f, called their 'composite',
|    with g o f : dom f -> cod g.  This operation may be
|    pictured by the diagram:
|
|            b
|            o
|           ^ \
|          /   \
|       f /     \ g
|        /       \
|       /         v
|    a o---------->o c
|          g o f
|
|    which exhibits all domains and codomains involved.
|
| These operations in a metacategory are subject to the two following axioms:
|
|   'Associativity'.
|
|    For given objects and arrows in the configuration:
|
|       f      g      k
|    a ---> b ---> c ---> d
|
|    one always has the equality:
|
|    k o (g o f)  =  (k o g) o f.                (1)
|
|    This axiom asserts that the associative law holds for
|    the operation of composition whenever it makes sense (i.e.,
|    whenever the composites on either side of (1) are defined).
|    This equation is represented pictorially by the statement
|    that the following diagram is commutative:
|
|        k o (g o f) = (k o g) o f
|    a o-------------------------->o d
|      | .                       ^ |
|      |   .  g o f     k o g  .   |
|      |     .               .     |
|      |       .           .       |
|      |         .       .         |
|      |           .   .           |
|    f |             .             | k
|      |           .   .           |
|      |         .       .         |
|      |       .           .       |
|      |     .               .     |
|      |   .                   .   |
|      v .                       v |
|    b o-------------------------->o c
|                    g
|
|   'Unit law'.
|
|    For all arrows f : a -> b and g : b -> c
|    composition with the identity arrow 1_b gives:
|
|    1_b o f  =  f   and   g o 1_b  =  g.        (2)
|
|    This axiom asserts that the identity arrow 1_b of each object b
|    acts as an identity for the operation of composition, whenever
|    this makes sense.  The Eqs. (2) may be represented pictorially
|    by the statement that the following diagram is commutative:
|
|           f
|    a o-------->o b
|       \        |\
|        \       | \
|         \      |  \
|          \     |   \
|         f \   1_b   \ g
|            \   |     \
|             \  |      \
|              \ |       \
|               vv        v
|              b o-------->o c
|                     g
|
|    We use many such diagrams consisting of vertices (labelled by objects
|    of a category) and edges (labelled by arrows of the same category).
|    Such a diagram is 'commutative' when, for each pair of vertices
|    c and c', any two paths formed from directed edges leading from
|    c to c' yield, by composition of labels, equal arrows from
|    c to c'.  A considerable part of the effectiveness of
|    categorical methods rests on the fact that such
|    diagrams in each situation vividly represent
|    the actions of the arrows at hand.
|
|    If b is any object of a metacategory C, the corresponding identity arrow
|    1_b is uniquely determined by the properties (2).  For this reason, it is
|    sometimes convenient to identify the identity arrow 1_b with the object b
|    itself, writing b : b -> b.  Thus 1_b = b = id_b, as may be convenient.
|
| Mac Lane, 'Cat Work Math', pp. 7-8.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
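
The axioms just quoted transcribe almost verbatim into a Haskell type class,
with objects as types and arrows as values of a two-parameter type; this
mirrors the standard Control.Category class, though the names below are mine
and the laws live only in comments.

    class Cat arr where
      idA  :: arr a a                          -- Identity:  1_a : a -> a
      comp :: arr b c -> arr a b -> arr a c    -- Composition:  g o f

    -- Laws:
    --   comp k (comp g f) == comp (comp k g) f        -- associativity (1)
    --   comp idA f == f,   comp g idA == g            -- unit law (2)

    -- Ordinary functions form such a category:
    newtype Fn a b = Fn (a -> b)

    instance Cat Fn where
      idA = Fn id
      comp (Fn g) (Fn f) = Fn (g . f)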

CAT. Note 9

| 1.1.  Axioms for Categories (cont.)
|
| A metacategory is to be any interpretation which satisfies all these axioms.
| An example is the 'metacategory of sets', which has objects all sets and
| arrows all functions, with the usual identity functions and the usual
| composition of functions.  Here "function" means a function with
| specified domain and specified codomain.  Thus a function
| f : X -> Y consists of a set X, its domain, a set Y,
| its codomain, and a rule x ~> fx (i.e., a suitable
| set of ordered pairs <x, fx>) which assigns, to
| each element x in X, an element fx in Y.  These
| values will be written as fx, f_x, or f(x), as
| may be convenient.  For example, for any set S,
| the assignment s ~> s for all s in S describes
| the 'identity function' 1_S : S -> S;  if S is a
| subset of Y, the assignment s ~> s also describes
| the 'inclusion' or 'insertion function' S -> Y;
| these functions are 'different' unless S = Y.
| Given functions f : X -> Y and g : Y -> Z,
| the 'composite' function g o f : X -> Z is
| defined by (g o f)x = g(fx) for all x in X.
| Observe that g o f will mean first apply f,
| then g -- in keeping with the practice of
| writing each function f to the left of its
| argument.  Note, however, that many authors
| use the opposite convention.
|
| To summarize, the metacategory of all sets has as
| objects, all sets, as arrows, all functions with the
| usual composition.  The metacategory of all groups is
| described similarly:  Objects are all groups G, H, K;
| arrows are all those functions f from the set G to
| the set H for which f : G -> H is a homomorphism
| of groups.  There are many other metacategories:
| All topological spaces with continuous functions
| as arrows;  all compact Hausdorff spaces with the
| same arrows;  all ringed spaces with their morphisms,
| etc.  The arrows of any metacategory are often called
| its 'morphisms'.
|
| Mac Lane, 'Cat Work Math', pp. 8-9.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Note 10

| 1.1.  Axioms for Categories (concl.)
|
| Since the objects of a metacategory correspond exactly to
| its identity arrows, it is technically possible to dispense
| altogether with the objects and deal only with arrows.  The
| data for an 'arrows-only metacategory' C consist of arrows,
| certain ordered pairs <g, f>, called the composable pairs of
| arrows, and an operation assigning to each composable pair
| <g, f> an arrow g o f, called their composite.  We say
| "g o f" is defined" for "<g, f> is a composable pair".
|
| With these data one 'defines' an identity of C to be an arrow u
| such that f o u = f whenever the composite f o u is defined and
| u o g = g whenever u o g is defined.  The data are then required
| to satisfy the following three axioms:
|
| 1.  The composite (k o g) o f is defined if and only if
|     the composite k o (g o f) is defined.  When either is
|     defined, they are equal (and this 'triple composite' is
|     written as k o g o f).
|
| 2.  The triple composite k o g o f is defined
|     whenever both composites k o g and g o f
|     are defined.
|
| 3.  For each arrow g of C there exist identity arrows
|     u and u' of C such that u' o g and g o u are defined.
|
| In view of the explicit definition given above for
| identity arrows, the last axiom is a quite powerful
| one;  it implies that u' and u are unique in (3), and
| it gives for each arrow g a codomain u' and a domain u.
| These axioms are equivalent to the preceding ones.  More
| explicitly, given a metacategory of objects and arrows,
| its arrows, with the given composition, satisfy the
| "arrows-only" axioms;  conversely, an arrows-only
| metacategory satisfies the objects-and-arrows
| axioms when the identity arrows, defined as
| above, are taken as the objects (Proof as
| exercise).
| 
| Mac Lane, 'Cat Work Math', p. 9.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
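
A hedged Haskell sketch of the arrows-only point of view:  composition is a
partial operation, here made total by returning Nothing on non-composable
pairs.  The arrows below are just labelled source-target pairs, and all the
names are mine.

    data Arr = Arr { src :: Int, tgt :: Int } deriving (Eq, Show)

    -- composeA g f is defined exactly when <g, f> is a composable pair.
    composeA :: Arr -> Arr -> Maybe Arr
    composeA g f
      | src g == tgt f = Just (Arr (src f) (tgt g))
      | otherwise      = Nothing

    -- The identities, in the sense defined above, are exactly the
    -- arrows u with src u == tgt u.
    isIdentity :: Arr -> Bool
    isIdentity u = src u == tgt u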

CAT. Note 11

| 1.2.  Categories
|
| A category (as distinguished from a metacategory) will
| mean any interpretation of the category axioms within
| set theory.  Here are the details.  A 'directed graph'
| (also called a "diagram scheme") is a set O of objects,
| a set A of arrows, and two functions:
|
|        dom
|      ------->
|    A          O                                       (1)
|      ------->
|        cod
|
| In this graph, the set of composable pairs of arrows is the set:
|
|    A x_O A  =  {<g, f>  :  g, f in A  and  dom g = cod f},
|
| called the "product over O".
|
| A 'category' is a graph with two additional functions:
|
|          id                         o
| 1.  O ------> A,      2.  A x_O A -----> A,
|                                                       (2)
|     c ~~~~~~> id_c,        <g, f> ~~~~~> g o f,
|
| called identity and composition,
| [the latter] also written as g f,
| such that:
|
|    dom(id_a)  =  a  =  cod(id_a),
|
|    dom(g o f) =  dom f,
|
|    cod(g o f) =  cod g,                               (3)
|
| for all objects a in O and all composable pairs
| of arrows <g, f> in A x_O A, and such that the
| associativity and unit axioms (1.1) and (1.2)
| hold.  In treating a category C, we usually
| drop the letters A and O, and write:
|
|    c in C,    f in C                                  (4)
|
| for "c is an object of C" and "f is an arrow of C",
| respectively.  We also write:
|
|    hom(b, c)  =  {f : f in C, dom f = b, cod f = c}   (5)
|
| for the set of arrows from b to c.  Categories can
| be defined directly in terms of composition acting
| on these "hom-sets" (Section 8 below);  we do not
| follow this custom because we put the emphasis
| not on sets (a rather special category), but
| on axioms, arrows, and diagrams of arrows.
| We will later observe that our definition
| of a category amounts to saying that a
| category is a monoid for the product
| x_O, in the general sense described
| in the introduction.
|
| Mac Lane, 'Cat Work Math', p. 10.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
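
Here is a hedged Haskell sketch of the data of Section 1.2 for a small
category -- object and arrow lists with dom, cod, identity, and composition --
together with a check of the equations (3).  The record and names are mine,
and composition is assumed to be applied only to composable pairs.

    data SmallCat o a = SmallCat
      { objects :: [o]
      , arrows  :: [a]
      , dom     :: a -> o
      , cod     :: a -> o
      , ident   :: o -> a             -- id : O -> A
      , comp    :: a -> a -> a        -- meaningful only on composable pairs
      }

    -- The equations (3), checked over all objects and composable pairs:
    wellFormed :: Eq o => SmallCat o a -> Bool
    wellFormed c =
      and [ dom c (ident c x) == x && cod c (ident c x) == x
          | x <- objects c ]
      && and [ dom c (comp c g f) == dom c f && cod c (comp c g f) == cod c g
             | g <- arrows c, f <- arrows c, dom c g == cod c f ]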

CAT. Note 12

NB.  Mac Lane uses a symbol for the one object and one (identity) arrow
     category that looks like a dot with a sling out of it and an arrow
     back into it.  I will use a "Greek amphora" or "emphatic at" sign
     for this, like so "!@!".

| 1.2.  Categories (cont.)
|
| For the moment, we consider examples.
|
|    $0$  is the empty category (no objects, no arrows).
|
|    $1$  is the category !@! with one object and one (identity) arrow.
|
|    $2$  is the category !@! -> !@! with two objects a, b,
|         and just one arrow a -> b not the identity.
|
|    $3$  is the category with three objects whose non-identity arrows
|         are arranged as in the triangle [in the "transitive" manner]:
|
|            o
|           ^ \
|          /   v
|         o---->o
|
|    $||$ is the category with two objects a, b and just two
|         arrows a -> b not the identity arrows.  We call two
|         such arrows 'parallel arrows'.
|
| In each of the cases above there is only
| one possible definition of composition.
|
| Mac Lane, 'Cat Work Math', pp. 10-11.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Note 13

| 1.2.  Categories (cont.)
|
| Discrete Categories.  A category is 'discrete' when every arrow
| is an identity.  Every set X is the set of objects of a discrete
| category (just add one identity arrow x -> x for each x in X),
| and every discrete category is so determined by its set of
| objects.  Thus, discrete categories are sets.
|
| Monoids.  A monoid is a category with one object.  Each monoid is thus
| determined by the set of all its arrows, by the identity arrow, and
| by the rule for the composition of arrows.  Since any two arrows
| have a composite, a monoid may then be described as a set M with
| a binary operation M x M -> M which is associative and has an
| identity (= unit).  Thus a monoid is exactly a semigroup with
| identity element.  For any category C and any object a in C,
| the set hom(a, a) of all arrows a -> a is a monoid.
|
| Groups.  A group is a category with one object in which
| every arrow has a (two-sided) inverse under composition.
|
| Matrices.  For each commutative ring K, the set Matr_K of
| all rectangular matrices with entries in K is a category;
| the objects are all positive integers m, n, ..., and each
| m x n matrix A is regarded as an arrow A : n -> m, with
| composition the usual matrix product.
|
| Sets.  If V is any set of sets, we take Ens_V to be the category
| with objects all sets X in V, arrows 'all' functions f : X -> Y,
| with the usual composition of functions.  By Ens we mean any one
| of these categories.
|
| Mac Lane, 'Cat Work Math', p. 11.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
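
The matrix example is easy to make concrete.  Below is a hedged Haskell sketch
of Matr_K with K = Integer:  an arrow A : n -> m is an m x n matrix (m rows of
length n), composition is the matrix product, and the identity arrow on the
object n is the n x n identity matrix.  The names are mine.

    import Data.List (transpose)

    type Matrix = [[Integer]]

    -- Composite of A : n -> m with B : p -> n is the product A B : p -> m.
    composeM :: Matrix -> Matrix -> Matrix
    composeM a b =
      [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

    identityM :: Int -> Matrix
    identityM n = [ [ if i == j then 1 else 0 | j <- [1 .. n] ]
                  | i <- [1 .. n] ]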

CAT. Note 14

| 1.2.  Categories (cont.)
|
| Preorders.  By a preorder we mean a category P in which, given objects
| p and p', there is at most one arrow p -> p'.  In any preorder P, define
| a binary relation =< on the objects of P with p =< p' if and only if there
| is an arrow p -> p' in P.  This binary relation is reflexive (because there
| is an identity arrow p -> p for each p) and transitive (because arrows can be
| composed).  Hence a preorder is a set (of objects) equipped with a reflexive
| and transitive binary relation.  Conversely, any set P with such a relation
| determines a preorder, in which the arrows p -> p' are exactly those ordered
| pairs <p, p'> for which p =< p'.  Since the relation is transitive, there is
| a unique way of composing these arrows;  since it is reflexive, there are the
| necessary identity arrows.
|
| Preorders include 'partial orders' (preorders with the added axiom that
| p =< p' and p' =< p imply p = p') and 'linear orders' (partial orders
| such that, given p and p', either p =< p' or p' =< p).
|
| Mac Lane, 'Cat Work Math', p. 11.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
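
A hedged Haskell sketch of the passage from a reflexive, transitive relation
back to a category:  the arrows are exactly the related pairs, and composition
and identities are forced.  All names are mine.

    type Arrow p = (p, p)              -- an arrow p -> p' is the pair (p, p')

    arrowsOf :: (p -> p -> Bool) -> [p] -> [Arrow p]
    arrowsOf rel ps = [ (p, p') | p <- ps, p' <- ps, rel p p' ]

    -- Composition is forced;  transitivity guarantees the composite
    -- is again an arrow.
    composeP :: Arrow p -> Arrow p -> Arrow p
    composeP (_, p'') (p, _) = (p, p'')

    -- Reflexivity supplies the identities.
    identityP :: p -> Arrow p
    identityP p = (p, p)

    -- e.g. arrowsOf (<=) [1, 2, 3 :: Int] lists the six arrows of the
    -- linear order $3$:  three identities and three non-identity arrows.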

CAT. Note 15

| 1.2.  Categories (cont.)
|
| Ordinal Numbers.  We regard each ordinal number n as the linearly ordered
| set of all the preceding ordinals n = {0, 1, ..., n-1};  in particular, 0
| is the empty set, while the first infinite ordinal is !w! = {0, 1, 2, ...}.
| Each ordinal n is linearly ordered, and hence is a category (a preorder).
| For example, the categories $1$, $2$, and $3$ listed above are the preorders
| belonging to the (linearly ordered) ordinal numbers 1, 2, and 3.  Another
| example is the linear order !w! [omega].  As a category, it consists of
| the arrows:
|
|    0 -> 1 -> 2 -> 3 -> ...,
|
| all their composites, and the identity arrows for each object.
|
| !D! is the category with objects all finite ordinals and arrows
| f : m -> n all order-preserving functions (i =< j in m implies
| f_i =< f_j in n).  This category !D! [Delta], sometimes called
| the 'simplicial category', plays a central role (Chapter 7).
|
| Finord = Set_!w! is the category with objects all finite ordinals n
| and arrows f : m -> n all functions from m to n.  This is essentially
| the category of all finite sets, using just one finite set n for each
| finite cardinal number n.
|
| Mac Lane, 'Cat Work Math', pp. 11-12.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Note 16

| 1.2.  Categories (concl.)
|
| Large Categories.  In addition to the metacategory of all sets --
| which is not a set -- we want an actual category Set, the category
| of all 'small' sets.  We shall assume that there is a big enough set
| U, the "universe", then describe a set x as "small" if it is a member
| of the universe, and take Set to be the category whose set U of objects
| is the set of all small sets, with arrows all functions from one small set
| to another.  With this device (details in Section 7 below) we construct other
| familiar large categories, as follows:
|
| Set.    Objects, all small sets;
|         arrows, all functions between them.
|
| Set_*.  Pointed sets:  Objects, small sets each with a selected base point;
|         arrows, base-point-preserving functions.
|
| Ens.    Category of all sets and functions within a (variable) set V.
|
| Cat.    Objects, all small categories;
|         arrows, all functors (Section 3).
|
| Mon.    Objects, all small monoids;
|         arrows, all morphisms of monoids.
|
| Grp.    Objects, all small groups;
|         arrows, all morphisms of groups.
|
| Ab.     Objects, all small (additive) abelian groups,
|         with morphisms of such.
|
| Rng.    All small rings, with the ring homomorphisms
|         (preserving units) between them.
|
| CRng.   All small commutative rings and their morphisms.
|
| R-Mod.  All small left modules over the ring R, with linear maps.
|
| Mod-R.  Small right R-modules.
|
| K-Mod.  Small modules over the commutative ring K.
|
| Top.    Small topological spaces and continuous maps.
|
| Toph.   Topological spaces, with arrows homotopy classes of maps.
|
| Top_*.  Spaces with selected base point,
|         base-point-preserving maps.
|
| Particular categories (like these) will always appear
| in bold-face type [not shown here].  Script capitals
| are used by many authors to denote categories.
|
| Mac Lane, 'Cat Work Math', p. 12.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Note 17

| 1.3.  Functors
|
| A 'functor' is a morphism of categories.  In detail, for
| categories C and B a functor T : C -> B with domain C and
| codomain B consists of two suitably related functions:  The
| 'object function' T, which assigns to each object c of C an
| object Tc of B and the 'arrow function' (also written T) which
| assigns to each arrow f : c -> c' of C an arrow Tf : Tc -> Tc'
| of B, in such a way that:
|
|    T(1_c)  =  1_Tc,    T(g o f)  =  Tg o Tf,                 (1)
|
| the latter whenever the composite g o f is defined in C.  A functor,
| like a category, can be described in the "arrows-only" fashion:  It
| is a function T from arrows f of C to arrows Tf of B, carrying each
| identity of C to an identity of B and each composable pair <g, f>
| in C to a composable pair <Tg, Tf> in B, with Tg o Tf = T(g o f).
|
| A simple example is the power set functor $P$ : Set -> Set.  Its object
| function assigns to each set X the usual power set $P$X, with elements
| all subsets S c X;  its arrow function assigns to each f : X -> Y that
| map $P$f : $P$X -> $P$Y which sends each S c X to its image fS c Y.
| Since both $P$(1_X) = 1_$P$X and $P$(g o f) = $P$g o $P$f, this
| clearly defines a functor $P$ : Set -> Set.
|
| Mac Lane, 'Cat Work Math', p. 13.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
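
The arrow function of the power-set functor has a close Haskell analogue,
sketched below with Data.Set standing in for finite subsets (only an analogy,
since Haskell types are not sets).  The name imageMap is mine.

    import qualified Data.Set as Set

    -- $P$f : $P$X -> $P$Y sends each subset S of X to its image fS in Y.
    imageMap :: Ord y => (x -> y) -> (Set.Set x -> Set.Set y)
    imageMap = Set.map

    -- Functor laws, as in (1):
    --   imageMap id      == id                       (on finite subsets)
    --   imageMap (g . f) == imageMap g . imageMap f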

CAT. Note 18

NB.  When necessary to embolden characters,
     I will use percent brackets, for example:
     %R% = the real numbers, %Z% = the integers.

| 1.3.  Functors (cont.)
|
| Functors were first explicitly recognized in algebraic topology,
| where they arise naturally when geometric properties are described
| by means of algebraic invariants.
|
| For example, singular homology in a given dimension n (n a natural number)
| assigns to each topological space X an abelian group H_n (X), the n^th
| homology group of X, and also to each continuous map f : X -> Y of
| spaces a corresponding homomorphism H_n (f) : H_n (X) -> H_n (Y)
| of groups, and this in such a way that H_n becomes a functor
| Top -> Ab.
|
| For example, if X = Y = S^1 is the circle, H_1 (S^1) = %Z%, so
| the group homomorphism H_1 (f) : %Z% -> %Z% is determined by
| an integer d (the image of 1);  this integer is the usual
| "degree" of the continuous map f : S^1 -> S^1.  In this
| case and in general, homotopic maps f, g : X -> Y yield
| the same homomorphism H_n (X) -> H_n (Y), so H_n can
| actually be regarded as a functor Toph -> Grp,
| defined on the homotopy category.
|
| The Eilenberg-Steenrod axioms for homology start with the axioms
| that H_n, for each natural number n, is a functor on Toph, and
| continue with certain additional properties of these functors.
| The more recently developed extraordinary homology and
| cohomology theories are also functors on Toph.
|
| The homotopy groups !p!_n (X) of a space X can also be regarded as
| functors;  since they depend on the choice of a base point in X,
| they are functors Top_* -> Grp.
|
| The leading idea in the use of functors in topology is that H_n or !p!_n
| gives an algebraic picture or image not just of the topological spaces,
| but also of all the continuous maps between them.
|
| Mac Lane, 'Cat Work Math', p. 13.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Note 19

| 1.3.  Functors (cont.)
|
| Functors arise naturally in algebra.
|
| To any commutative ring K the set of all non-singular
| n x n matrices with entries in K is the usual general
| linear group GL_n (K);  moreover, each homomorphism
| f : K -> K' of rings produces in the evident way a
| homomorphism GL_n f : GL_n (K) -> GL_n (K') of groups.
| These data define for each natural number n a functor
| GL_n : CRng -> Grp.
|
| For any group G the set of all products of commutators x y x^(-1) y^(-1),
| (x, y in G), is a normal subgroup [G, G] of G, called the 'commutator'
| subgroup.  Since any homomorphism G -> H of groups carries commutators
| to commutators, the assignment G ~> [G, G] defines an evident functor
| Grp -> Grp, while G ~> G/[G, G] defines a functor Grp -> Ab, the
| factor-commutator functor.  Observe, however, that the center Z(G)
| of G (all a in G with ax = xa for all x) does not naturally define
| a functor Grp -> Grp, because a homomorphism G -> H may carry an
| element in the center of G to one not in the center of H.
|
| A functor which simply "forgets" some or all of the structure of an
| algebraic object is commonly called a 'forgetful' functor (or, an
| 'underlying' functor).  Thus the forgetful functor U : Grp -> Set
| assigns to each group G the set UG of its elements ("forgetting"
| the multiplication and hence the group structure), and assigns
| to each morphism f : G -> G' of groups the same function f,
| regarded just as a function between sets.  The forgetful
| functor U : Rng -> Ab assigns to each ring R the additive
| abelian group of R and to each morphism f : R -> R' of
| rings the same function, regarded just as a morphism
| of addition.
|
| Mac Lane, 'Cat Work Math', p. 14.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Note 20

| 1.3.  Functors (cont.)
|
| Functors may be composed.  Explicitly, given functors:
|
|       T      S
|    C ---> B ---> A
|
| between categories A, B, and C, the composite functions:
|
|    c ~> S(Tc),    f ~> S(Tf)
|
| on objects c and arrows f of C define a functor S o T : C -> A, called the
| 'composite' (in that order) of S with T.  This composition is associative.
| For each category B there is an identity functor I_B : B -> B, which acts as
| an identity for this composition.  Thus we may consider the metacategory of
| all categories:  its objects are all categories, its arrows are all functors
| with the composition above.  Similarly, we may form the category Cat of all
| small categories -- but not the category of all categories.
|
| Mac Lane, 'Cat Work Math', p. 14.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
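
In Haskell terms the composite S o T of two functors is again a functor;  the
sketch below is essentially the standard Data.Functor.Compose construction,
written out with my own names.

    newtype Comp s t a = Comp (s (t a))

    instance (Functor s, Functor t) => Functor (Comp s t) where
      fmap f (Comp x) = Comp (fmap (fmap f) x)

    -- e.g. Comp (Just [1, 2, 3]) lives in the composite of Maybe and [],
    -- and fmap (+ 1) acts through both layers at once.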

CAT. Note 21

| 1.3.  Functors (cont.)
|
| An 'isomorphism' T : C -> B of categories is a functor
| T from C to B which is a bijection, both on objects and
| on arrows.  Alternatively, but equivalently, a functor
| T : C -> B is an isomorphism if and only if there is a
| functor S : B -> C for which both composites S o T and
| T o S are identity functors;  then S is the 'two-sided
| inverse' S = T^(-1).
|
| Certain properties much weaker than isomorphism will be useful.
|
| A functor T : C -> B is 'full' when to every pair c, c' of objects of C
| and to every arrow g : Tc -> Tc' of B, there is an arrow f : c -> c' of C
| with g = Tf.  Clearly the composite of two full functors is a full functor.
|
| A functor T : C -> B is 'faithful' (or an embedding) when to every pair
| c, c' of objects of C and to every pair f_1, f_2 : c -> c' of parallel
| arrows of C the equality Tf_1 = Tf_2 : Tc -> Tc' implies f_1 = f_2.
| Again, composites of faithful functors are faithful.  For example,
| the forgetful functor Grp -> Set is faithful but not full and
| not a bijection on objects.
|
| These two properties may be visualized in terms of hom-sets (see (2.5)).
| Given a pair of objects c, c' in C, the arrow function of T : C -> B
| assigns to each f : c -> c' an arrow Tf : Tc -> Tc' and so defines
| a function:
|
|    T_c,c' : hom(c, c') -> hom(Tc, Tc'),    f ~> Tf.
|
| Then T is full when every such function is surjective, and faithful
| when every such function is injective.  For a functor which is both
| full and faithful (i.e., "fully faithful"), every such function is
| a bijection, but this need not mean that the functor itself is an
| isomorphism of categories, for there may be objects of B not in
| the image of T.
| 
| Mac Lane, 'Cat Work Math', pp. 14-15.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Note 22

| 1.3.  Functors (concl.)
|
| A 'subcategory' S of a category C is a collection of
| some of the objects and some of the arrows of C, which
| includes with each arrow f both the object dom f and the
| object cod f, with each object s its identity arrow 1_s,
| and with each pair of composable arrows s -> s' -> s"
| their composite.  These conditions ensure that these
| collections of objects and arrows themselves constitute
| a category S.  Moreover, the injection (inclusion) map
| S -> C which sends each object and each arrow of S to
| itself (in C) is a functor, the 'inclusion functor'.
| This inclusion functor is automatically faithful.
|
| We say that S is a 'full subcategory' of C when the inclusion functor
| S -> C is full.  A full subcategory, given C, is thus determined by
| giving just the set of its objects, since the arrows between any two
| of these objects s, s' are all morphisms s -> s' in C.  For example,
| the category Set_f of all finite sets is a full subcategory of the
| category Set.
| 
| Mac Lane, 'Cat Work Math', p. 15.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Note 23

| 1.4.  Natural Transformations
|
| Given two functors S, T : C -> B, a 'natural transformation'
| !t! : S -> T is a function which assigns to each object c of C
| an arrow !t!_c = !t!c : Sc -> Tc of B in such a way that every
| arrow f : c -> c' in C yields a diagram:
|
|                       !t!c
|    c  o      Sc  o------------>o  Tc
|       |          |             |
|       |          |             |
|    f  |      Sf  |             |  Tf                        (1)
|       |          |             |
|       v          v             v
|    c' o      Sc' o------------>o  Tc'
|                       !t!c'
|
| which is commutative.  When this holds, we also say that
| !t!_c : Sc -> Tc is 'natural' in c.  If we think of the
| functor S as giving a picture in B of (all the objects
| and arrows of) C, then a natural transformation !t! is
| a set of arrows mapping (or, translating) the picture S
| to the picture T, with all squares (and parallelograms!)
| like that above commutative:
|
|       a               Sa         !t!a          Ta
|       o                o---------------------->o
|       |\               |\                      |\
|       | \ f            | \ Sf                  | \ Tf
|       |  \             |  \                    |  \
|       |   v            |   v Sb             Th |   v
|     h |    o b      Sh |    o------------------|--->o Tb
|       |   /            |   /        !t!b       |   /
|       |  /             |  /                    |  /
|       | / g            | / Sg                  | / Tg
|       vv               vv                      vv
|       o                o---------------------->o
|       c               Sc         !t!c          Tc
|
| We call !t!a, !t!b, !t!c, ..., the 'components'
| of the natural transformation !t!.
|
| A natural transformation is often called a 'morphism of functors';
| a natural transformation !t! with every component !t!c invertible in
| B is called a 'natural equivalence' or better a 'natural isomorphism';
| in symbols, !t! : S ~=~ T.  In this case, the inverses (!t!c)^(-1) in B
| are the components of a natural isomorphism !t!^(-1) : T -> S.
|
| Mac Lane, 'Cat Work Math', p. 16.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.
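
Between Haskell Functors, a natural transformation is simply a polymorphic
function, giving one component for every type;  here is a hedged sketch using
the familiar safe head function as the example.

    -- One component !t!_c : [c] -> Maybe c for every type c.
    safeHead :: [c] -> Maybe c
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    -- Naturality, i.e. the commuting square (1), for every f : c -> c':
    --   fmap f . safeHead  ==  safeHead . fmap f
    -- (In Haskell this holds automatically, by parametricity.)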

CAT. Note 24

| 1.4.  Natural Transformations (cont.)
|
|


| Mac Lane, 'Cat Work Math', p. 16.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Category Theory • Discussion

CAT. Discussion Note 1


I have tried to get through the IFF proposal several times now,
and even though I have a basic acquaintance with category theory
as used in math and computer science, it's been pretty rough going.
I thought of trying to start an online review of this material, but
second thought tells me that it won't be possible for us to evaluate
this work unless we sort it out by layers of complexity and usability.
Third thought tells me that we probably can't do a good job of this
without doing a bit of spadework in category theory first.  So, in
support of all of these aims, I will start running an introduction
to category theory on the generic ontology list.  In order to stick
with an authoritative standard text, I will use selected excerpts from
Mac Lane's "Categories for the Working Mathematician", which is a book
that everybody should have, mathematically employed or not.  We will be
accomplishing a lot if we can even get through the first 30 pages or so.
I will also run a separate discussion thread, in or out of the main SUO
group, as the need arises, in which I will try to explain in plainer
language the use and the significance of these basic formal tools.
I am hoping that the IFF proposers will try to relate their ideas
to this basic groundwork whenever it seems appropriate.

CAT. Discussion Note 2


Links to the first three installments from Mac Lane are given below.
I am chunking this into small pieces specifically to facilitate the
group's discussion of content, formalism, motivation, or whatever.
In the 5 pages of his Introduction, Mac Lane is giving merely a
quick overview of some leading ideas and typical constructions,
so don't worry about the speed of it, as all of these things
will be gone back over in full detail, as time goes by.

For anybody who's up to a comparative study, all of the same basic notions
of category theory are covered from the standpoint of logical applications
in Lambek & Scott's 'Higher Order Categorical Logic', and there are links
to a sampling of that work below.

Category Theory, 01-03.

Higher Order Categorical Logic, 01-30.

CAT. Discussion Note 3


JS = John Sowa

JS: When a reader with the expected level of prerequisites
    finds it difficult to read a text, even after making
    several diligent attempts, the fault is the author's.

JS: Although I have been sympathetic to the IFF efforts, I have repeatedly
    pointed out that the current document is not suitable as a standard.
    It should be considered the developers' first attempt at writing a
    technical report that could be used (by them) as the basis for a
    standard.  Readers are expected to meet an author halfway, but
    the author is also expected to meet the readers halfway.

JS: As an example of the level of formality that is appropriate
    for the IFF, I recommend any decent textbook of computer science
    for first-year computer science graduate students.  Anything that
    is unreadable by students who have been accepted for a CS graduate
    program at a good university is inappropriate as a standards document.

JS: Those excerpts that you extracted from textbooks on
    category theory are intended to be read by advanced
    undergraduates and beginning graduate students --
    i.e., by the kind of people who might be expected
    to read the IFF document.  Yet they are much more
    readable than the IFF document.

JS: I recognize that the IFF developers have tackled a very large complex task,
    and they are trying to state the standard at a very high level of generality
    and abstraction.  I commend them for their ambition.  But I believe that it
    is now time to scale back the project to something that is more easily (1)
    readable by people who have a BS degree in computer science, (2) salable to
    people who see the need for ontology but not the need for an opaque formalism,
    and last but not least, (3) implementable.

JS: For several years now, I have been arguing for a lattice of theories
    as a framework for relating various ontologies.  The IFF developers
    have assured me that their framework is general enough to accommodate
    the lattice I would like to see as a special case.  I believe that is
    probably true.  I also believe that category theory is the proper
    formalism to use for what the IFF is supposed to do.

JS: But if the IFF developers cannot write a readable document that
    presents their ideas, I suggest that they scale back version 1.0
    of the proposed standard to something that isn't much more than
    the simple lattice I have been proposing.  Then at some point
    in the future, after people have started using and implementing
    version 1.0, they can develop version 2.0 with all the power
    and glory that they are now trying to document.

JS: As an example of the level of readability and formality that I
    believe would be appropriate for the IFF standard, I recommend
    my tutorial on math and logic:

    http://www.jfsowa.com/logic/math.htm

Everything you say is quite apt and I join in recommending all the goodies
on your web pages.  I am operating on the principle that the IFF proposal
is our only current starter document, and so I am looking for ways that
I can add some value to it.  I will have my criticisms down the road,
but I think that one of the big futilities that we've had over the
past few years is wrangling about the finer facets before we've
roughed out or even mined the stone.  So I will stick for the
time being with the basic category theory, which along with
naive set theory needs no apology, as it is used on a routine
basis to carry on the everyday business of mathematics --
the original "upper ontology" if anything ever was one --
not to mention computer science, and even more and more
engineering these days via the systems theory connection.
And it's the main way that I ever learned for talking about
lattices and all kinds of other orders.  So this common core
of category theory will probably have to be a central part of
the scientific components of any standard upper ontology, if not
necessarily the commonsense classification, naive ontology modules.

CAT. Discussion Note 4


Yesterday's discussion of the "lattice of theories" brings back
to mind a number of recurring issues, and these in turn lead me
to think of a generic criticism that I would have about most of
the projects, formal or informal, advanced in this group so far.
Let us call it the "Forgetfulness Of Semiotic Computability".

The lattice is a fine and proper place to store our ideas.
But lattices and their elements are mathematical objects.
This means that we are not given these objects directly,
but only the signs of them.  When I was a mathematician
I spake as a mathematician, which means that I remained
blissfully oblivious most of the time as to what it might
take to connect a concrete sign with its abstract object.
But when I put aside the larger part of that heavenly
paradise and started to focus on what was computable,
the world began to look a whole lot different, and
a lot of what I'd taken for granted as solid ground
became pure blue sky to my new computational and
sign-theoretic eyes.  I still see a whole lot
of blue sky being kited in this group.  And
one of the reasons for this is that nobody,
but me, of course, is even thinking to ask
what the computational properties of all
their high-flown expressiveness might
actually turn out to be in the end.
I already know what happens to
projects when people do this.
You can read about it in
the Bible, under Babel.

In case it isn't clear, I'm talking about something over and above parsing syntax --
I'm talking about computing denotations and interpretants of signs, and what it
actually takes in practical computational terms to get from a theory, which is
just a set of syntactic elements called sentences, to its proper place in the
object lattice.  This is the really big gap that will have to be crossed in
order to realize any of the envisioned plans for a lattice of theories.

So, don't get me wrong -- I'm all on board with this lattice of theories stuff.
Have been for a long time.  But the next questions that I have to ask all have
to do with what it would take to make it real, and I'm still getting a lot of
the kind of hand-waving that I already know won't cut it.

To Contemplate for Next Time -- The Great Mandala or The Web Of Maya?

o---------------------------------------------------------------------o
|                                                                     |
|      Language 1          Object Domain         Language 2           |
|                                                                     |
|     o-----------o                             o-----------o         |
|    /| s s s ... |\~~~~~~~~~~~~~o~~~~~~~~~~~~~/| s s s ... |\        |
|   / o-----------o \           / \           / o-----------o \       |
|  /                 \         /   \         /                 \      |
| o-----------o       \       /     \       o-----------o       \     |
| | s s s ... |~~~~~~~~\~~~~~o~~~~~~~\~~~~~~| s s s ... |        \    |
| o-----------o         \     \       \     o-----------o         \   |
|  \                     \     \       \     \                     \  |
|   \         o-----------o     \       \     \         o-----------o |
|    \        | s s s ... |~~~~~~\~~~~~~~o~~~~~\~~~~~~~~| s s s ... | |
|     \       o-----------o       \     /       \       o-----------o |
|      \                 /         \   /         \                 /  |
|       \ o-----------o /           \ /           \ o-----------o /   |
|        \| s s s ... |/~~~~~~~~~~~~~o~~~~~~~~~~~~~\| s s s ... |/    |
|         o-----------o                             o-----------o     |
|                                                                     |
o---------------------------------------------------------------------o
Figure 1.  Lattice of Objects Inducing a Diversity of Sign Partitions

CAT. Discussion Note 5


In connection with the ontological use of category theory --
and while we're waiting for Robert Kent's opus to ope --
I would like to bring to the group's attention the
following work of Robert Marty, who I think is
saying something extremely right about the
way that we can use categorical notions
to construct or to discover invariant
objects in the b(l)ooming, buzzing
manifolds of phenomenal data.

Robert Marty, "Foliated Semantic Networks:  Concepts, Facts, Qualities"

http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/
http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/abstract11.htm
http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/introduction11.htm
http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/concept1.htm
http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/formal1.htm
http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/composit1.htm
http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/extending1.htm
http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/application1.htm
http://www.univ-perp.fr/see/rch/lts/marty/semantic-ns/reference1.htm

CAT. Discussion Note 6


JS = John Sowa
MW = Matthew West

Well, if the Ontology Archive ever gets working again,
I am hoping to lead some kind of an e-seminar there,
starting out with Mac Lane's book, which is the one
that most math folks read if they read only one.

In practice, though, most of this gets learned informally,
in the process of working on some other subject of interest,
with category theory being one of the main tools that gets
used when the going gets tough.

As a person who uses a lot of mappings between different domains
in his work, I think that you would especially benefit from some
of the work that's been done over the years to address precisely
these kinds of tasks.

The official "decade of birth" of category theory is usually given as the 1940's --
though of course you can trace all the main ideas back into the mists, to Riemann
and maybe even to Kant -- but what happened in the 40's was that mathematicians
were getting really bogged down trying to formalize what they do, so long as
they tried to do it in axiomatic set theory and formalized logic, and there
was a sense that most of the real work was falling between the cracks of
what these official doctrines were able to cover.

One of the important things to understand here is that these folks were practical people,
with work to do, and if it could not be done effectively and efficiently in the way
that "philosophers of math" said they should be doing it, then they had no choice
but to reflect on the conduct of their own practice and to hammer out their own
tools to the task.

It appears to be a popular misconception, promulgated by the
sort of "philosopher of math" who never does much actual math --
how could they, if they stick to the methods they try to sell
others? -- that your average working theorist spends his days
proving stuff in axiomatic set theory with first order logic.
This is just not how it is -- unless an individual takes up
a special interest in working with both hands tied behind
his or her back.

Of course, some folks will go on to develop baroque and rococo variations
on just about any subject you give them, if they have a lot of spare time
on their hands, but most of these tools were forged and hammered out to
do solid work on mathematical objects of definite interest.

The fact is that many of the problems that category theory was carved out to solve
are closely analogous to the problems of interrelation between different views and
takes on the world, of the sort that we have before us in the standard ontology job.
It would be a crying shame just to go out and wipe the slate clean and to waste all
of the knowledge that has already been mined, just for the lack of a little effort
to learn the practical methods that were used to mine it.

So I think that there has just got to be a way of explaining
all of this stuff in applicable, practical, sensible terms.

MW: I have been reading (in fits and starts) the book John recommends below.
> 
> It is intended as an undergraduate text.  When John says it has
> been used in High School I am surprised rather than incredulous.
> 
> The book comes in sections each with an exposition of some theory
> and then a number of tutorials with worked examples (and some for
> you to do if you wish) tackling real questions which real students
> asked.
> 
> The main thing I find lacking is any sense of what I would use
> this for -- but this is not so unusual with pure maths.

JS: Category theory is usually considered an esoteric subject
> > because (1) it is not taught in high school and (2) most
> > textbooks are written at an advanced level.
> >
> > But there is a textbook of category theory that has been
> > used for teaching high-school students, and it contains
> > large numbers of examples to illustrate the concepts:
> >
> >    'Conceptual Mathematics:  A First Introduction to Categories',
> >     by F. William Lawvere and Stephen Hoel Schanuel,
> >     Cambridge University Press.
> >
> > The first 100 pages could be read by a high-school student
> > with a little help from a teacher or tutor.  The material
> > gets deeper and proceeds faster later in the book, but the
> > presentation is completely self-contained, and it could be
> > used for self-study by people who have forgotten most of the
> > mathematics they learned in college.
> >
> > I recommend it to anyone who might be interested in learning
> > the IFF system in order to (1) use it or (2) evaluate it.

CAT. Discussion Note 7


One way to get the motivation for category theory
is to look at some of the types of problems that
it was developed to solve.  I will try to give
just my personal intuitions about the kinds of
settings where it becomes indispensable, and
why I feel like these are closely analogous
to the very sorts of problems that face us
in designing conceptual systems that are
capable of supporting communication and
co-operation among different views of
common realities, whether these are
embodied in people or in software.

I think that it all starts with the gap between
realities and representations, or, in other ways of
stating it, between terrains and maps, or maybe
the contrast between the role of an object and
the role of a sign.  The big problem is that we
tend to think that the objective reality is one,
at least until there is good evidence otherwise,
whereas the big headache about the appearances,
datasets, maps, representations, sign systems,
or variant views of the terrain is that there
are just so darn many of them.

This is a problem that went critical quite some time
ago in mathematics, shortly after Descartes invented
analytic geometry, because instead of thinking about
geometric figures as unitary objects like most folks
intuit them to be in the synthesis of the mind's eye,
all of a sudden there is an embarrassing richness of
different coordinate systems, reference frames, and
points of view, assigning different coordinates to
objects, and all of these differing accounts have
to be reconciled among themselves if we want to
reconstruct the unity of the original figure.

So the first picture I get of the subject looks like this:

       reality
          ?
         / \
        /   \
       /     \
      /       \
     /         \
    v           v
   o<-----T----->o
   representations

The objective reality in question, marked by a question mark "?",
is not a total unknown to us, but it is known to us only in terms
of many different representations.  These are typically expressed
in different coordinate systems, amounting to or being analogous
to data that's gathered from different points of view on things.
So the problem becomes a lot like that of stereoscopic vision,
to recover a more solid sense of the object from the mosaic
of different facets of data that is canvassed by a welter
of diverse reference frames.  Reconstructing the object
depends on finding the proper correspondences between
the elements of the many splintered representations,
which involves us in contemplating the families of
transformations T that exist between each pair of
perspectives.

Anyway, that is how I see it getting started.

CAT. Category Theory • Work Area


"John kicks the cart."

could be translated to

(exists (?EV ?OBJ)
   (and
     (instance ?EV Impelling)
     (instance ?OBJ Device)
     (instance John-1 Human)
     (agent ?EV John-1)
     (patient ?EV ?OBJ)))

Zeke buys the farm.
Zeke bites the dust.
Zeke kicks the bucket.

(exists (?EV ?OBJ)
   (and (instance ?EV Impelling)
        (instance ?OBJ Device)
        (instance Zeke Human)
        (agent ?EV Zeke)
        (patient ?EV ?OBJ)))

DARM. Differential And Riemannian Manifolds

DARM. Note 1

| Excerpts from 'Differential And Riemannian Manifolds' by Serge Lang
|
| Chapt 2.  Manifolds
|
| Starting with open subsets of Banach spaces [take R^n as a typical example],
| one can glue them together with C^p-isomorphisms [bijective mappings that
| are continuously differentiable up to order at least p].  The result is
| called a manifold.  We begin by giving the formal definition.  We then
| make manifolds into a category, and discuss special types of morphisms.
| We define the tangent space at each point, and apply the criteria
| following the inverse function theorem to get a local splitting
| of a manifold when the tangent space splits at a point.
| 
| We shall wait until the next chapter to give
| a manifold structure to the union of all the
| tangent spaces.
|
| 2.1.  Atlases, Charts, Morphisms
|
| Let X be a set.  An "atlas" of class C^p (p >= 0) on X
| is a collection of pairs (U_i, q_i) (i ranging in some
| indexing set), satisfying the following conditions:
|
| AT 1.  Each U_i is a subset of X and the U_i cover X.
|
| AT 2.  Each q_i is a bijection of U_i onto an open subset q_i (U_i) of
|        some Banach space E_i, and for any i, j, [we have the fact that]
|        q_i (U_i |^| U_j) is open in E_i.
|
| AT 3.  The map
|
|        (q_j) o (q_i)^(-1)  :  q_i (U_i |^| U_j)  ->  q_j (U_i |^| U_j)
|
|        is a C^p-isomorphism for each pair of indices i, j.
|
| It is a trivial exercise in point set topology to prove that one
| can give X a topology in a unique way such that each U_i is open,
| and the q_i are topological isomorphisms.
|
| Lang, DARM, pp. 20-21.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.
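
Before going on, here is a minimal sketch, in Python and purely by way of
illustration (it is not part of Lang's text), of how conditions AT 1-3 play
out for the circle S^1 in R^2, covered by two hypothetical "angle" charts
q_1 and q_2.  The names below are made up for the example.

import math

def q_1(p):
    """Chart on U_1 = S^1 minus the point (1, 0):  angle in (0, 2*pi)."""
    t = math.atan2(p[1], p[0])          # t in (-pi, pi]
    return t if t > 0 else t + 2 * math.pi

def q_1_inv(t):
    return (math.cos(t), math.sin(t))

def q_2(p):
    """Chart on U_2 = S^1 minus the point (-1, 0):  angle in (-pi, pi)."""
    return math.atan2(p[1], p[0])

def q_12(t):
    """Transition map (q_2) o (q_1)^(-1) on q_1 (U_1 |^| U_2)."""
    return q_2(q_1_inv(t))              # equals t on (0, pi), t - 2*pi on (pi, 2*pi)

# U_1 and U_2 cover S^1 (AT 1); each chart image is an open subset of
# R = E_i (AT 2); and on each piece of the overlap the transition map is
# an affine shift, hence a C^p-isomorphism for every p (AT 3).
print(q_12(0.5 * math.pi), q_12(1.5 * math.pi))   # 1.5707...  -1.5707...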

DARM. Note 2

For ease of reference, I repeat here the definition of
an "atlas" of class C^p, and then I pick up a few more
definitions from the text.

| Let X be a set.  An "atlas" of class C^p (p >= 0) on X
| is a collection of pairs (U_i, q_i) (i ranging in some
| indexing set), satisfying the following conditions:
|
| AT 1.  Each U_i is a subset of X and the U_i cover X.
|
| AT 2.  Each q_i is a bijection of U_i onto an open subset q_i (U_i) of
|        some Banach space E_i, and for any i, j, [we have the fact that]
|        q_i (U_i |^| U_j) is open in E_i.
|
| AT 3.  The map
|
|        (q_j) o (q_i)^(-1)  :  q_i (U_i |^| U_j)  ->  q_j (U_i |^| U_j)
|
|        is a C^p-isomorphism for each pair of indices i, j.
|
| Lang, DARM, p. 20.

An atlas, of course, is a collection of charts:

| Each pair (U_i, q_i) will be called a "chart" of the atlas.
| If a point x of X lies in U_i, then we say that (U_i, q_i)
| is a "chart at x".
|
| Lang, DARM, p. 21.

Below is a paradigmatic picture of the manifold situation with
respect to a typical pair of charts, (U_i, q_i) and (U_j, q_j).
In this Figure and elsewhere, I will make use of the notations
U_ij = U_i |^| U_j  and  q_ij = (q_j) o (q_i)^(-1).

o-----------------------------------------------------------o
| X                                                         |
|                                                           |
|             o-------------o   o-------------o             |
|            /               \ /               \            |
|           /                 o                 \           |
|          /                 / \                 \          |
|         /                 /   \                 \         |
|        /                 /     \                 \        |
|       o                 o       o                 o       |
|       |                 |  U_i  |                 |       |
|       |                 |       |                 |       |
|       |       U_i       |  |^|  |       U_j       |       |
|       |                 |       |                 |       |
|       |                 |  U_j  |                 |       |
|       o                 o       o                 o       |
|        \                 \     /                 /        |
|         \                 \   /                 /         |
|          \                 \ /                 /          |
|           \                 o                 /           |
|            \       |       / \       |       /            |
|             o------|------o   o------|------o             |
|                    |                 |                    |
|                    |                 |                    |
o--------------------|-----------------|--------------------o
                     |                 |
                 q_i |                 | q_j
                     |                 |
o--------------------|-----o     o-----|--------------------o
| E_i                v     |     |     v                E_j |
|                          |     |                          |
|       o----------o       |     |       o----------o       |
|      /            \      |     |      /            \      |
|     /              o     |     |     o              \     |
|    /              / \    |     |    / \              \    |
|   /              /   \   |     |   /   \              \   |
|  o              o     o  |     |  o     o              o  |
|  |              |     |  | q_ij|  |     |              |  |
|  |              |  ------------------>  |              |  |
|  |              |     |  |     |  |     |              |  |
|  | q_i (U_ij) -----   |  |     |  |   ----- q_j (U_ij) |  |
|  |              |     |  |     |  |     |              |  |
|  o              o     o  |     |  o     o              o  |
|   \              \   /   |     |   \   /              /   |
|    \              \ /    |     |    \ /              /    |
|     \              o     |     |     o              /     |
|      \            /      |     |      \            /      |
|       o----------o       |     |       o----------o       |
|                          |     |                          |
|                          |     |                          |
o--------------------------o     o--------------------------o
[Figure 1.  Manifold X with Charts (U_i, q_i) and (U_j, q_j)]

We find next the need for a notion of "compatibility"
among and between different atlases and their charts:

| Suppose that we are given an open subset U of X and a topological isomorphism
| q : U -> U' onto an open subset of some Banach space E.  We shall say that
| (U, q) is "compatible" with the atlas {(U_i, q_i)} if each map (q_i)o(q^-1)
| (defined on a suitable intersection as in AT 3) is a C^p-isomorphism.
|
| Two atlases are said to be "compatible" if each chart of one is compatible with
| the other atlas.  One verifies immediately that the relation of compatibility
| between atlases is an equivalence relation.  An equivalence class of atlases
| of class C^p on X is said to define a structure of "C^p-manifold" on X.
|
| If all the vector spaces E_i in some atlas are toplinearly isomorphic,
| then we can always find an equivalent atlas for which they are all equal,
| say to the vector space E.  We then say that X is an "E-manifold" or that
| X is "modeled" on E.
|
| Lang, DARM, p. 21.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 3

It is time to introduce the concept of "coordinates".

| If E = R^n for some fixed n, then we say
| that the manifold is "n-dimensional".
| In this case, a chart:
|
| q : U -> R^n
|
| is given by n coordinate functions q_1, ..., q_n.
| If 'P' denotes a point of U, these functions are
| often written:
|
| x_1 (P), ..., x_n (P),
|
| or simply x_1, ..., x_n.  They are
| called "local coordinates" on the
| manifold.
|
| Lang, DARM, p. 21.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 4

| 2.1.  Atlases, Charts, Morphisms (cont.)
|
| The collection of C^p-manifolds will be denoted by "Man^p".
| If we look only at those modeled on spaces in a category $A$
| then we write "Man^p ($A$)".  Those modeled on a fixed E will
| be denoted by "Man^p (E)".  We shall make these into categories
| by defining morphisms below.
|
| Let X be a manifold, and U an open subset of X.  Then it is possible,
| in the obvious way, to induce a manifold structure on U, by taking
| as charts the intersections:
|
| (U_i |^| U,  q_i | (U_i |^| U)).
|
| [NB.  "f | S" indicates the function f as restricted to the set S.]
|
| If X is a topological space, covered by open subsets V_j, and if we are
| given on each V_j a manifold structure such that for each pair j, j' the
| induced structure on V_j |^| V_j' coincides, then it is clear that we can
| give to X a unique manifold structure inducing the given ones on each V_j.
|
| Lang, DARM, pp. 21-22.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 5

| 2.1.  Atlases, Charts, Morphisms (cont.)
|
| If X, Y are two manifolds, then one can give the
| product X x Y a manifold structure in the obvious way.
| If {(U_i, q_i)} and {(V_j, r_j)} are atlases for X, Y
| respectively, then:
|
| {(U_i x V_j, q_i x r_j)}
|
| is an atlas for the product, and the product of compatible
| atlases gives rise to compatible atlases, so that we do get
| a well-defined product structure.
|
| Let X, Y be two manifolds.  Let f : X -> Y be a map.
| We shall say that f is a "C^p-morphism" if, given x in X,
| there exists a chart (U, q) at x and a chart (V, r) at f(x)
| such that f(U) c V, and the map:
|
| r o f o q^-1 : qU -> rV
|
| is a C^p-morphism in the sense of Chapter 1, Section 3.
| One sees then immediately that this same condition holds
| for any choice of charts (U, q) at x and (V, r) at f(x)
| such that f(U) c V.
|
| It is clear that the composite of two C^p-morphisms is itself
| a C^p-morphism (because it is true for open subsets of vector
| spaces).  The C^p-manifolds and C^p-morphisms form a category.
| The notion of isomorphism is therefore defined ...
|
| If f : X -> Y is a morphism, and (U, q) is a chart
| at a point x in X, while (V, r) is a chart at f(x),
| then we shall also denote by:
|
| f_V,U : qU -> rV
|
| the map rfq^-1 [that is, r o f o q^-1].
| 
| Lang, DARM, p. 22.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.
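
To make the definition of f_V,U concrete, here is a small follow-on sketch
(again just my illustration, not Lang's):  the angle-doubling map on the
circle, read through a pair of hypothetical angle charts, where it becomes
the visibly smooth map t |-> 2t.

import math

def q_1_inv(t):
    """Inverse of the (0, 2*pi) angle chart on the circle."""
    return (math.cos(t), math.sin(t))

def q_2(p):
    """The (-pi, pi) angle chart on the circle."""
    return math.atan2(p[1], p[0])

def f(p):
    """Angle-doubling on S^1:  (cos t, sin t) |-> (cos 2t, sin 2t)."""
    x, y = p
    return (x * x - y * y, 2 * x * y)   # double-angle formulas, no trig needed

def f_in_charts(t):
    """f read through charts, f_V,U = (q_2) o f o (q_1)^(-1), where defined."""
    return q_2(f(q_1_inv(t)))

# Near t = pi/3 the chart expression works out to t |-> 2t, which is
# C^infinity, so f is a C^p-morphism there for every p.
print(f_in_charts(math.pi / 3), 2 * math.pi / 3)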

DARM. Note 6

| 2.1.  Atlases, Charts, Morphisms (concl.)
|
| It is also convenient to have a local terminology.
| Let U be an open set (of a manifold or a Banach space)
| containing a point x_0.  By a "local isomorphism" at x_0
| we mean an isomorphism:
|
| f : U_1 -> V
|
| from some open set U_1 containing x_0 (and contained in U)
| to an open set V (in some manifold or some Banach space).
| Thus a local isomorphism is essentially a change of chart,
| locally near a given point.
|
| Lang, DARM, p. 23.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 7

| 2.2.  Submanifolds, Immersions, Submersions
|
| Let X be a topological space, and Y a subset of X.
| We say that Y is "locally closed" in X if every point
| y in Y has an open neighborhood U in X such that Y |^| U
| is closed in U.  One verifies easily that a locally closed
| subset is the intersection of an open set and a closed set.
| For instance, any open subset of X is locally closed, and
| any open interval is locally closed in the plane.
|
| Let X be a manifold (of class C^p with p >= 0).  Let Y be a subset of X
| and assume that for each point y in Y there exists a chart (V, r) at y
| such that r gives an isomorphism of V with a product V_1 x V_2 where
| V_1 is open in some space E_1 and V_2 is open in some space E_2,
| and such that:
|
| r(Y |^| V) = V_1 x a_2
|
| for some point a_2 in V_2 (which we could take to be 0).  Then it is clear
| that Y is locally closed in X.  Furthermore, the map r induces a bijection:
|
| r_1 : Y |^| V -> V_1.
|
| The collection of pairs (Y |^| V, r_1) obtained in the above manner constitutes
| an atlas for Y, of class C^p.  The verification of this assertion, whose formal
| details we leave to the reader, depends on the following obvious fact.
|
| Lemma 2.1.  Let U_1, U_2, V_1, V_2 be open subsets of Banach spaces,
|
|             and g : U_1 x U_2 -> V_1 x V_2 a C^p-morphism.
|
|             Let a_2 be in U_2 and b_2 be in V_2
|
|             and assume that g maps U_1 x a_2 into V_1 x b_2.
|
|             Then the induced map:
|
|             g_1 : U_1 -> V_1
|
|             is also a morphism.
|
| Indeed, it is obtained as a composite map:
|
| U  ->  U_1 x U_2  ->  V_1 x V_2  ->  V_1,
|
| the first map being an inclusion and the third a projection.
|
| We have therefore defined a C^p-structure on Y which will be called
| a "submanifold" of X.  This structure satisfies a universal mapping
| property, which characterizes it, namely:
|
| | Given any map f : Z -> X from a manifold Z into X such that
| | f(Z) is contained in Y.  Let f_Y : Z -> Y be the induced map.
| | Then f is a morphism if and only if f_Y is a morphism.
|
| The proof of this assertion depends on Lemma 2.1, and is trivial.
|
| Finally, we note that the inclusion of Y into X is a morphism.
| 
| If Y is also a closed subspace of X, then
| we say that it is a "closed submanifold".
|
| Lang, DARM, pp. 23-24.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 8

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| Suppose that X is finite dimensional of dimension n, and that Y
| is a submanifold of dimension m.  Then from the definition we see
| that the local product structure in the neighborhood of a point of
| Y can be expressed in terms of local coordinates as follows.  Each
| point P of Y has an open neighborhood U in X with local coordinates
| (x_1, ..., x_n) such that the points of Y in U are precisely those
| whose last n - m coordinates are 0, that is, those points having
| coordinates of type:
|
| (x_1, ..., x_m, 0, ..., 0).
|
| Let f : Z -> X be a morphism, and let z be in Z.  We shall say that f is
| an "immersion" at z if there exists an open neighborhood Z_1 of z in Z
| such that the restriction of f to Z_1 induces an isomorphism of Z_1
| onto a submanifold of X.  We say that f is an "immersion" if it is
| an immersion at every point.
|
| Notice that there exist injective immersions
| which are not isomorphisms onto submanifolds,
| as given by the following example:
|      ________
|     /        \
|    /          \
|   |           |
|   |           |
|    \          V
|     \__________________________________________
|
| (The arrow means that the line approaches itself without touching.)
|
| An immersion which does give an isomorphism onto a submanifold is
| called an "embedding", and it is called a "closed embedding" if
| this submanifold is closed.
|
| Lang, DARM, pp. 24-25.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 9

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| A morphism f : X -> Y will be called a "submersion" at a point x in X
| if there exists a chart (U, q) at x and a chart (V, r) at f(x) such that
| q gives an isomorphism of U on a product U_1 x U_2 (U_1 and U_2 open in
| some Banach spaces), and such that the map:
|
| rfq^-1  =  f_V,U  :  U_1 x U_2  ->  V
|
| is a projection.  One sees then that the image of a submersion is
| an open subset (a submersion is in fact an open mapping).  We say
| that f is a "submersion" if it is a submersion at every point.
|
| For manifolds modeled on Banach spaces,
| we have the usual criterion for immersions
| and submersions in terms of the derivative.
| 
| Proposition 2.2.  Let X, Y be manifolds of class C^p (p >= 1)
|
|                   modeled on Banach spaces.
|
|                   Let f : X -> Y be a C^p-morphism.
|
|                   Let x be in X.
|
|                   Then:
|
|                   1.  f is an immersion at x if and only if
|
|                       there exists a chart (U, q) at x and (V, r) at f(x)
|
|                       such that f'_V,U (qx) is injective and splits.
|
|                   2.  f is a submersion at x if and only if
|
|                       there exists a chart (U, q) at x and (V, r) at f(x)
|
|                       such that f'_V,U (qx) is surjective and its kernel splits.
|
| Proof.  This is an immediate consequence
|         of Corollaries 5.4 and 5.6 of
|         the inverse mapping theorem.
| 
| The conditions expressed in [Propositions 2.2.1 and 2.2.2] depend only on the
| derivative [f'], and if they hold for one choice of charts (U, q) and (V, r),
| respectively, then they hold for every choice of such charts.  It is therefore
| convenient to introduce a terminology in order to deal with such properties.
|
| Lang, DARM, p. 25.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.
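
In the finite dimensional case the criteria of Proposition 2.2 reduce to
rank conditions on the Jacobian matrix, since every subspace of a finite
dimensional space splits.  Here is a rough numerical sketch of that
specialization (my illustration, not Lang's; numpy is assumed).

import numpy as np

def numerical_jacobian(f, x, h=1e-6):
    """Forward-difference Jacobian of f : R^n -> R^m at the point x."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = h
        J[:, k] = (np.asarray(f(x + dx)) - fx) / h
    return J

# f : R -> R^2, t |-> (cos t, sin t):  an immersion, since the Jacobian
# has rank 1 = dimension of the source, i.e. f'(t) is injective.
immersion = lambda t: np.array([np.cos(t[0]), np.sin(t[0])])

# g : R^2 -> R, (x, y) |-> x:  a submersion, since the Jacobian has
# rank 1 = dimension of the target, i.e. g'(x, y) is surjective.
submersion = lambda p: np.array([p[0]])

print(np.linalg.matrix_rank(numerical_jacobian(immersion, [0.3])))        # 1
print(np.linalg.matrix_rank(numerical_jacobian(submersion, [1.0, 2.0])))  # 1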

DARM. Note 10

And now the real fun begins.  It is time to give yet another,
and very intriguing, pair of definitions:  tangent vectors and
tangent spaces, attached to each point x of a manifold X.

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| Let X be a manifold of class C^p (p >= 1).  Let x be a point of X.
| We consider triples (U, q, v) where (U, q) is a chart at x and v is
| an element of the vector space in which qU lies.  We say that two such
| triples (U, q, v) and (V, r, w) are "equivalent" if the derivative of
| rq^-1 at qx maps v on w.  The formula reads:
|
| (rq^-1)'(qx)v  =  w
|
| (obviously an equivalence relation by the chain rule).
|
| An equivalence class of such triples is called a "tangent vector" of X at x.
| The set of such tangent vectors is called the "tangent space" of X at x and
| is denoted by "T_x (X)".  Each chart (U, q) determines a bijection of T_x (X)
| on a Banach space, namely the equivalence class of (U, q, v) corresponds to
| the vector v.  By means of such a bijection it is possible to transport to
| T_x (X) the structure of topological vector space given by the chart, and
| it is immediate that this structure is independent of the chart selected.
|
| Lang, DARM, pp. 25-26.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.
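
Here is a concrete finite dimensional instance of the equivalence of
triples (U, q, v) ~ (V, r, w), again only my illustration and not Lang's:
on an open subset of R^2 we take q to be the identity (Cartesian) chart
and r to be a polar coordinate chart, so that two triples represent the
same tangent vector exactly when w is the image of v under the Jacobian
of the transition map r o q^(-1).

import numpy as np

def r_of_q_inv(p):
    """Transition map r o q^(-1):  Cartesian coordinates to polar (rho, theta)."""
    x, y = p
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

def transition_derivative(p):
    """(r o q^(-1))'(p), the Jacobian of the polar transition map at p."""
    x, y = p
    rho2 = x * x + y * y
    rho = np.sqrt(rho2)
    return np.array([[ x / rho,   y / rho ],
                     [-y / rho2,  x / rho2]])

x0 = np.array([1.0, 1.0])   # a point of the manifold, in the Cartesian chart
v  = np.array([0.0, 1.0])   # a tangent vector at x0, represented in that chart

w = transition_derivative(x0) @ v
# The triples (U, q, v) and (V, r, w) are equivalent:  w = (1/sqrt(2), 1/2)
# represents the *same* tangent vector in the polar chart at r(x0).
print(r_of_q_inv(x0), w)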

DARM. Note 11

We now define the "derivative" or the "differential" of a map.
The "derivative" is Lang's name for what most mathematicians
call the "differential", and vice versa, so the reader may
take it as an object lesson in differential translation.

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| If U, V are open in Banach spaces, then to every morphism f
| of class C^p (p >= 1) we can associate its derivative Df(x).
| If now f : X -> Y is a morphism of one manifold into another,
| and x a point of X, then by means of charts we can interpret
| the derivative of f on each chart at x as a mapping:
|
| df(x)  =  T_x f  :  T_x (X)  ->  T_f(x) (Y).
|
| Indeed, this map T_x f is the unique linear map having the following property.
| If (U, q) is a chart at x and (V, r) is a chart at f(x) such that f(U) c V
| and ^v^ is a tangent vector at x represented by v in the chart (U, q),
| then:
|
| T_x f(^v^)
|
| is the tangent vector at f(x) represented by (Df_V,U (x))v.
| The representation of T_x f on the spaces of charts can be
| given in the form of a diagram:
|
|       T_x (X)  o-------->o  E
|                |         |
|         T_x f  |         |  f'_V,U (x)
|                v         v
|    T_f(x) (Y)  o-------->o  F
|
| [NB.  f'_V,U (x) = Df_V,U (x), as an alternate notation.]
|
| The map T_x f is obviously continuous and linear
| for the structure of topological vector space
| which we have placed on T_x (X) and T_f(x) (Y).
|
| As a matter of notation, we shall sometimes write f_*,x instead of T_x f.
|
| The operation T satisfies an obvious functorial property,
| namely, if f : X -> Y and g : Y -> Z are morphisms, then:
|
| T_x (g o f)  =  (T_f(x) g) o (T_x f).
|
| T_x (id)     =  id.
|
| We may reformulate Proposition 2.2:
|
| Proposition 2.3.  Let X, Y be manifolds of class C^p (p >= 1)
|
|                   modeled on Banach spaces.
|
|                   Let f : X -> Y be a C^p-morphism.
|
|                   Let x be in X.
|
|                   Then:
|
|                   1.  f is an immersion at x if and only if
|
|                       the map T_x f is injective and splits.
|
|                   2.  f is a submersion at x if and only if
|
|                       the map T_x f is surjective and its kernel splits.
|
| Nota.  If X, Y are finite dimensional, then the condition that T_x f splits
| is superfluous.  Every subspace of a finite dimensional vector space splits.
|
| Lang, DARM, pp. 26-27.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.
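
The functorial property T_x (g o f) = (T_f(x) g) o (T_x f) is, read on
chart representatives, just the chain rule for Jacobians.  A quick
numerical check of that reading (my sketch, not Lang's; numpy assumed):

import numpy as np

def jac(f, x, h=1e-6):
    """Forward-difference Jacobian of f at x."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = h
        J[:, k] = (np.asarray(f(x + dx)) - fx) / h
    return J

f = lambda t: np.array([np.cos(t[0]), np.sin(t[0])])    # f : R -> R^2
g = lambda p: np.array([p[0] * p[1], p[0] - p[1]])       # g : R^2 -> R^2

x = np.array([0.7])
lhs = jac(lambda t: g(f(t)), x)       # representative of T_x (g o f)
rhs = jac(g, f(x)) @ jac(f, x)        # representative of (T_f(x) g) o (T_x f)
print(np.allclose(lhs, rhs, atol=1e-4))   # True, up to finite difference error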

DARM. Note 12

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| If W is a submanifold of a manifold Y of
| class C^p (p >= 1), then the inclusion:
|
| i : W -> Y
|
| induces a map:
|
| T_w i  :  T_w (W) -> T_w (Y)
|
| which is in fact an injection.
|
| From the definition of a submanifold, one sees immediately
| that the image of T_w i splits.  It will be convenient to
| identify T_w (W) in T_w (Y) if no confusion can result.
|
| A morphism f : X -> Y will be said to be "transversal"
| over the submanifold W of Y if the following condition
| is satisfied.
|
| Let x in X be such that f(x) is in W.
| Let (V, r) be a chart at f(x) such that
| r : V -> V_1 x V_2 is an isomorphism on
| a product, with:
|
| r(f(x)) = (0, 0)  and  r(W |^| V) = V_1 x 0.
|
| Then there exists an open neighborhood U of x
| such that the composite map:
|
|      f         r               proj
| U ------> V ------> V_1 x V_2 ------> V_2
|
| is a submersion.
|
| [Here, "proj" denotes the "projection" proj : V_1 x V_2 -> V_2.]
|
| In particular, if f is transversal over W, then
| f^(-1) (W) is a submanifold of X, because the
| inverse image of 0 by our local composite map:
|
| proj o r o f
|
| is equal to the inverse image of W |^| V by r.
|
| Lang, DARM, p. 27.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 13

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| As with immersions and submersions,
| we have a characterization of
| transversal maps in terms
| of tangent spaces.
|
| Proposition 2.4.  Let X, Y be manifolds of class C^p (p >= 1)
|
|                   modeled on Banach spaces.
|
|                   Let f : X -> Y be a C^p-morphism,
|
|                   and W a submanifold of Y.
|
|                   The map f is transversal over W
|
|                   if and only if
|
|                   for each x in X such that f(x) lies in W,
|
|                   the composite map:
|
|                            T_x (f)
|                   T_x (X) ---------> T_w (Y) ---------> T_w (Y) / T_w (W),
|
|                   with w = f(x), is surjective and its kernel splits.
|
| Proof.  If f is transversal over W, then for each point x in X such
|         that f(x) lies in W, we choose charts as in the definition,
|         and reduce the question to one of maps of open subsets of
|         Banach spaces.  In that case, the conclusion concerning
|         the tangent spaces follows at once from the assumed
|         direct product decompositions.
|
|         Conversely, assume our condition on the tangent map.  The
|         question being local, we can assume that Y = V_1 x V_2 is a
|         product of open sets in Banach spaces such that W = V_1 x 0,
|         and we can also assume that X = U is open in some Banach space,
|         x = 0.  Then we let g : U -> V_2 be the map !p! o f, where !p!
|         is the projection, and note that our assumption means that
|         g'(0) is surjective and its kernel splits.  Furthermore,
|         g^(-1)(0) = f^(-1)(W).  We can then use Corollary 5.7
|         of the inverse mapping theorem to conclude the proof.
|
| Lang, DARM, pp. 27-28.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 14

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| If E is a Banach space, then the diagonal !D! in E x E
| is a closed subspace and splits:  Either factor E x 0
| or 0 x E is a closed complement.  Consequently, the
| diagonal is a closed submanifold of E x E.  If X
| is any manifold of class C^p, p >= 1, then the
| diagonal is therefore also a submanifold.
| (It is closed of course if and only if
| X is Hausdorff.)
|
| Let f : X -> Z and g : Y -> Z be two C^p-morphisms, p >= 1.
| We say that they are "transversal" if the morphism:
|
| f x g  :  X x Y  ->  Z x Z
|
| is transversal over the diagonal.  We remark right away
| that the surjectivity of the map in Proposition 2.4 can
| be expressed in two ways.  Given two points x in X and
| y in Y such that f(x) = g(y) = z, the condition:
|
| Im (T_x f) + Im (T_y g)  =  T_z (Z)
|
| is equivalent to the condition:
|
| Im (T_(x,y) (f x g)) + T_(z,z) (!D!)  =  T_(z,z) (Z x Z).
|
| Thus in the finite dimensional case, we could
| take it as the definition of transversality.
|
| Lang, DARM, pp. 28-29.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.
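
In the finite dimensional case the condition Im (T_x f) + Im (T_y g) = T_z (Z)
can be checked by asking whether the tangent images jointly span the tangent
space of Z.  A small numerical sketch for two curves in the plane meeting at
the origin (my illustration, not Lang's; numpy assumed):

import numpy as np

f = lambda t: np.array([t, t * t])    # a parabola through the origin
g = lambda s: np.array([s, -s])       # a line through the origin

def velocity(curve, t, h=1e-6):
    return (curve(t + h) - curve(t)) / h

# Both curves pass through z = (0, 0) at parameter value 0.
vf = velocity(f, 0.0)                 # approx (1, 0), spans Im (T_x f)
vg = velocity(g, 0.0)                 # approx (1, -1), spans Im (T_y g)

span = np.column_stack([vf, vg])
print(np.linalg.matrix_rank(span) == 2)   # True:  f and g are transversal at z
# Replacing g by s |-> (s, 2*s*s) would make the two curves tangent at z,
# the rank would drop to 1, and transversality would fail there.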

DARM. Note 15

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| We use transversality as a sufficient condition under which the fiber product
| of two morphisms exists.  We recall that in any category, the "fiber product"
| of two morphisms f : X -> Z and g : Y -> Z over Z consists of an object P
| and two morphisms:
|
| g_1 : P -> X   and   g_2 : P -> Y
|
| such that f o g_1  =  g o g_2, and satisfying the universal mapping property:
|
| Given an object S and two morphisms:
|
| u_1 : S -> X   and   u_2 : S -> Y
|
| such that f o u_1  =  g o u_2, there exists a unique morphism u : S -> P
| making the following diagram commutative:
|
|              S
|              o
|             /|\
|            / | \
|           /  |  \
|      u_1 /   u   \ u_2
|         /    |    \
|        /     |     \
|       v      v      v
|    X o<------P------>o Y
|       \  g_1   g_2  /
|        \           /
|         \         /
|       f  \       /  g
|           \     /
|            \   /
|             v v
|              o
|              Z
|
| The triple (P, g_1, g_2) is uniquely determined,
| up to a unique isomorphism (in the obvious sense),
| and P is also denoted by X x_Z Y.
|
| Lang, DARM, p. 29.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.
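
At the level of bare sets the fiber product and its universal property can
be spelled out in a few lines.  Here is a toy sketch with finite sets (my
illustration, not Lang's), computing P = X x_Z Y as the set of pairs (x, y)
with f(x) = g(y), along with the mediating map u : S -> P.

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}

f = {1: 0, 2: 0, 3: 1, 4: 1}          # f : X -> Z,  with Z = {0, 1}
g = {'a': 0, 'b': 1, 'c': 1}          # g : Y -> Z

P = {(x, y) for x in X for y in Y if f[x] == g[y]}   # the fiber product X x_Z Y
g_1 = lambda p: p[0]                  # projection g_1 : P -> X
g_2 = lambda p: p[1]                  # projection g_2 : P -> Y

# Universal property:  given u_1 : S -> X and u_2 : S -> Y with
# f o u_1 = g o u_2, the unique mediating map is u(s) = (u_1(s), u_2(s)).
S = {'s0', 's1'}
u_1 = {'s0': 1, 's1': 3}
u_2 = {'s0': 'a', 's1': 'b'}
assert all(f[u_1[s]] == g[u_2[s]] for s in S)        # f o u_1 = g o u_2
u = {s: (u_1[s], u_2[s]) for s in S}
assert all(u[s] in P and g_1(u[s]) == u_1[s] and g_2(u[s]) == u_2[s] for s in S)
print(sorted(P))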

DARM. Note 16

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| One can view the fiber product unsymmetrically.
| Given two morphisms f, g as in the following
| diagram:
|
|                             o Y
|                             |
|                             |
|                             |
|                             | g
|                             |
|                             |
|                             v
|       X o------------------>o Z
|                   f
|
| assume that their fiber product exists,
| so that we can fill in the diagram:
|
| X x_Z Y o------------------>o Y
|         |                   |
|         |                   |
|         |                   |
|     g_1 |                   | g
|         |                   |
|         |                   |
|         v                   v
|       X o------------------>o Z
|                   f
|
| We say that g_1 is the "pull back" of g by f, and also
| write it as f*(g).  Similarly, we write X x_Z Y as f*(Y).
|
| [The "x_Z" symbol is a cross-product "x" with a subscript "Z".]
|
| Lang, DARM, pp. 29-30.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 17

| 2.2.  Submanifolds, Immersions, Submersions (cont.)
|
| In our category of manifolds, we shall deal only with cases
| when the fiber product can be taken to be the set-theoretic
| fiber product on which a manifold structure has been defined.
| (The set-theoretic fiber product is the set of pairs of points
| projecting on the same point.)  This determines the fiber product
| uniquely, and not only up to a unique isomorphism.
|
| Proposition 2.5.  Let f : X -> Z and g : Y -> Z
|
|                   be two C^p-morphisms with p >= 1.
|
|                   If they are transversal, then:
|
|                   (f x g)^(-1) (!D!_Z),
|
|                   together with the natural morphisms into X and Y
|
|                   (obtained from the projections),
|
|                   is a fiber product of f and g over Z.
|
| Proof.  Obvious.
|
| To construct a fiber product, it suffices to do it locally.
| Indeed, let f : X -> Z and g : Y -> Z be two morphisms.
| Let {V_i} be an open covering of Z, and let:
|
| f_i  :  f^(-1) (V_i)  ->  V_i
|
| and
|
| g_i  :  g^(-1) (V_i)  ->  V_i
|
| be the restrictions of f and g to the respective inverse images of V_i.
| Let P = (f x g)^(-1) (!D!_Z).  Then P consists of the points (x, y) with
| x in X and y in Y such that f(x) = g(y).  We view P as a subspace of X x Y
| (i.e. with the topology induced by that of X x Y).  Similarly, we construct
| P_i with f_i and g_i.  Then P_i is open in P.  The projections on the first
| and second factors give natural maps of P_i into f^(-1)(V_i) and g^(-1)(V_i),
| and of P into X and Y.
|
| Lang, DARM, p. 30.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Note 18

| 2.2.  Submanifolds, Immersions, Submersions (concl.)
|
| Proposition 2.6.  Assume that each P_i admits a manifold structure (compatible
|
|                   with its topology) such that these maps are morphisms,
|
|                   making P_i into a fiber product of f_i and g_i.
|
|                   Then P, with its natural projections,
|
|                   is a fiber product of f and g.
|
| To prove the above assertion, we observe that the P_i form a covering of P.
| Furthermore, the manifold structure on P_i |^| P_j induced by that of P_i or P_j
| must be the same, because it is the unique fiber product structure over V_i |^| V_j,
| for the maps f_ij and g_ij (defined on f^(-1)(V_i |^| V_j) and g^(-1)(V_i |^| V_j),
| respectively).  Thus we can give P a manifold structure, in such a way that the
| two projections into X and Y are morphisms, and make P into a fiber product
| of f and g.
|
| We shall apply the preceding discussion
| to vector bundles in the next chapter, and
| the following local criterion will be useful.
|
| Proposition 2.7.  Let f : X -> Z be a morphism,
|
|                   and g : Z x W -> Z be the
|
|                   projection on the first factor.
|
|                   Then f, g have a fiber product,
|
|                   namely the product X x W
|
|                   together with the morphisms
|
|                   of the following diagram:
|
|                 f x id
|   X x W o------------------>o Z x W
|         |                   |
|         |                   |
|         |                   |
|  proj_1 |                   | proj_2
|         |                   |
|         |                   |
|         v                   v
|       X o------------------>o Z
|                   f
|
| Lang, DARM, pp. 30-31.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

DARM. Commentary Note 1

I will now try to say, in a very tentative way, what I think that
the themes and variations of manifold theory, if suitably adapted,
might have to do with the business of inquiry, modeling, semantics,
semiotics, and sign relations in general, especially in the light of
many compelling questions about change and diversity in our conceptual
and symbolic systems, including the problems of designing interoperable
perspectives and mutually intelligible codes for the worlds we construe
to exist and the worlds we have come to inhabit.

Let's view our archetype of a manifold, the Figure of a space X
and a couple of charts (U_i, q_i) and (U_j, q_j) from its atlas:

o-----------------------------------------------------------o
| X                                                         |
|                                                           |
|             o-------------o   o-------------o             |
|            /               \ /               \            |
|           /                 o                 \           |
|          /                 / \                 \          |
|         /                 /   \                 \         |
|        /                 /     \                 \        |
|       o                 o       o                 o       |
|       |                 |       |                 |       |
|       |                 |       |                 |       |
|       |       U_i       |  U_ij |       U_j       |       |
|       |                 |       |                 |       |
|       |                 |       |                 |       |
|       o                 o       o                 o       |
|        \                 \     /                 /        |
|         \                 \   /                 /         |
|          \                 \ /                 /          |
|           \                 o                 /           |
|            \       |       / \       |       /            |
|             o------|------o   o------|------o             |
|                    |                 |                    |
|                    |                 |                    |
o--------------------|-----------------|--------------------o
                     |                 |
                 q_i |                 | q_j
                     |                 |
o--------------------|-----o     o-----|--------------------o
| E_i                v     |     |     v                E_j |
|                          |     |                          |
|       o----------o       |     |       o----------o       |
|      /            \      |     |      /            \      |
|     /              o     |     |     o              \     |
|    /              / \    |     |    / \              \    |
|   /              /   \   |     |   /   \              \   |
|  o              o     o  |     |  o     o              o  |
|  |              |     |  | q_ij|  |     |              |  |
|  |              |  ------------------>  |              |  |
|  |              |     |  |     |  |     |              |  |
|  |   q_i U_ij -----   |  |     |  |   ----- q_j U_ij   |  |
|  |              |     |  |     |  |     |              |  |
|  o              o     o  |     |  o     o              o  |
|   \              \   /   |     |   \   /              /   |
|    \              \ /    |     |    \ /              /    |
|     \              o     |     |     o              /     |
|      \            /      |     |      \            /      |
|       o----------o       |     |       o----------o       |
|                          |     |                          |
|                          |     |                          |
o--------------------------o     o--------------------------o
Figure 1.  Manifold X with Charts (U_i, q_i) and (U_j, q_j)

| Let X be a set.  An "atlas" of class C^p (p >= 0) on X
| is a collection of pairs (U_i, q_i) (i ranging in some
| indexing set), satisfying the following conditions:
|
| AT 1.  Each U_i is a subset of X, and the U_i cover X.
|
| AT 2.  Each q_i is a bijection of U_i onto an open subset q_i U_i
|        of some Banach space E_i, and for each i, j it is the case
|        that q_i (U_i |^| U_j) is open in E_i.
|
| AT 3.  The map:
|
|        (q_j) o (q_i)^(-1)  :  q_i (U_i |^| U_j)  ->  q_j (U_i |^| U_j)
|
|        is a C^p-isomorphism for each pair of indices i, j.
|
| Each pair (U_i, q_i) will be called a "chart" of the atlas.
| If a point x of X lies in U_i, then we say that (U_i, q_i)
| is a "chart at x".
|
| Lang, DARM, pp. 20-21.
|
| Serge Lang,
|'Differential & Riemannian Manifolds',
| Springer-Verlag, New York, NY, 1995.

Let us now back away from the picture and view it more impressionistically.
We may view X as being the "object space" or the "real" space in which all
of us are really the most interested, at least, if we know what's good for
us, and regard E_i and E_j as the spaces of, let us say, my impressions,
lexicon, measurements, nomenclature, senses, signs, symbology, terminology,
utterances, vocabulary, whatever it happens to be, and yours, respectively.

Focus on the subsets of X, E_i, E_j that are defined and marked as follows:

   U_ij  =  U_i |^| U_j  c  X

   E_ij  =  q_i U_ij     c  E_i

   E_ji  =  q_j U_ij     c  E_j
 
The mapping of the form (q_j) o (q_i)^(-1) is what does the work
of partially translating my code into yours, to the extent that
it is possible to do so by flipping charts.  This is easier to
see if one lays out the maps in a straight-line presentation:

         (q_i)^(-1)             q_j
   E_ij ------------> U_ij ------------> E_ji

Naturally enough, maps of the form (q_j) o (q_i)^(-1),
that change coordinates from chart to chart within the
same atlas, are known as "transition" or "translation"
maps.  As a short form, let q_ij = (q_j) o (q_i)^(-1).

Here are a couple of helpful hints about
reading these brands of translation maps:

Reading 1.

If E_i is my code space and E_j is your code space,
then I may read the application of the translation
q_ji (w) = ((q_i) o (q_j)^(-1))(w) in this fashion:

| q_ji (w)  =  ((q_i) o (q_j)^(-1))(w)
|
|           =  my name for what you call w.

Reading 2.

If E_i is a new code space and E_j is an old code space,
then we may interpret the application of the translation
q_ji (w) = ((q_i) o (q_j)^(-1))(w) in the following way:

| q_ji (w)  =  ((q_i) o (q_j)^(-1))(w)
|
|           =  our new name for what we used to call w.

In other words, as one says, we are talking about an
objective interpretive situation, where the sign w and
the interpretant sign w' = ((q_i) o (q_j)^(-1))(w) both
denote the shared object x = (q_j)^(-1)(w) in U_ij.
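
To put a toy example behind Readings 1 and 2, here is a short sketch (my
own illustration, with made-up vocabularies) of two code spaces E_i and E_j
as charts on a shared object space X, and of the translation map
q_ji = (q_i) o (q_j)^(-1) that renders your sign in my code.

X = {'rock', 'paper', 'scissors'}                                    # shared objects

q_i = {'rock': 'stein', 'paper': 'papier', 'scissors': 'schere'}     # my code space
q_j = {'rock': 'pierre', 'paper': 'papier', 'scissors': 'ciseaux'}   # your code space

q_j_inv = {w: x for x, w in q_j.items()}    # (q_j)^(-1):  your sign -> shared object

def q_ji(w):
    """My name for what you call w."""
    return q_i[q_j_inv[w]]

print(q_ji('ciseaux'))   # 'schere'
# Both signs denote the shared object x = (q_j)^(-1)('ciseaux') = 'scissors',
# which is the objective interpretive situation described above.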

Next question:  Does this manifold picture capture the
most generic brand of objective interpretive situation?

DARM. Incidental Note 1

| This paper is based upon the theory already established,
| that the function of conceptions is to reduce the manifold
| of sensuous impressions to unity, and that the validity of
| a conception consists in the impossibility of reducing the
| content of consciousness to unity without the introduction
| of it.
|
| C.S. Peirce, CP 1.545, CE 2.49

DARM. Incidental Note 2

| Now the discovery of ideas as general as these is chiefly
| the willingness to make a brash or speculative abstraction,
| in this case supported by the pleasure of purloining words
| from the philosophers:  "Category" from Aristotle and Kant,
| "Functor" from Carnap ('Logische Syntax der Sprache'), and
| "natural transformation" from then current informal parlance.
|
| Mac Lane, 'Cat.Work.Math.', pp. 29-30.
|
| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| Springer-Verlag, New York, NY, 1971.

DARM. Incidental Note 3

| And let us also, to escape entanglement with
| difficulties about the physical or psychical
| nature of its "object", not call it a feeling
| of fragrance or of any other determinate sort,
| but limit ourselves to assuming that it is a
| feeling of 'q'.
|
| William James, 'The Meaning Of Truth',
| Longmans, Green, & Co., London, 1909,
| p. 3.

DARM. Incidental Note 4

| Now, if this feeling of 'q' be the only creation of
| the god, it will of course form the entire universe.
|
| William James, 'The Meaning of Truth',
| Longmans, Green, & Co., London, 1909,
| pp. 3-4.

DARM. Incidental Note 5

| Well now, can our little feeling, thus left alone in the universe, --
| for the god and we psychological critics may be supposed left out
| of the account, -- can the feeling, I say, be said to have any sort
| of a cognitive function?  For it to 'know', there must be something
| to be known.  What is there, on the present supposition?
| One may reply, "the feeling's content 'q'."
|
| William James, 'The Meaning of Truth',
| Longmans, Green, & Co., London, 1909,
| p. 5.

DARM. Incidental Note 6

| But does it not seem more proper to call this the
| feeling's 'quality' than its content?  Does not the
| word "content" suggest that the feeling has already
| dirempted itself as an act from its content as an
| object?  And would it be quite safe to assume so
| promptly that the quality 'q' of a feeling is
| one and the same thing with a feeling of the
| quality 'q'?
|
| William James, 'The Meaning of Truth',
| Longmans, Green, & Co., London, 1909,
| p. 5.

DARM. Incidental Note 7

| The quality 'q', so far, is an entirely subjective fact
| which the feeling carries so to speak endogenously, or
| in its pocket.  If any one pleases to dignify so simple
| a fact as this by the name of knowledge, of course
| nothing can prevent him.  But let us keep closer
| to the path of common usage, and reserve the name
| knowledge for the cognition of "realities", meaning
| by realities things that exist independently of the
| feeling through which their cognition occurs.  If the
| content of the feeling occur nowhere in the universe
| outside of the feeling itself, and perish with the
| feeling, common usage refuses to call it a reality,
| and brands it as a subjective feature of the feeling's
| constitution, or at the most as the feeling's 'dream'.
|
| William James, 'The Meaning of Truth',
| Longmans, Green, & Co., London, 1909,
| pp. 5-6.

DARM. Incidental Note 8

| For the feeling to be cognitive in the specific sense, then,
| it must be self-transcendent;  and we must prevail upon the
| god to 'create a reality outside of it' to correspond to its
| intrinsic quality 'q'.  Thus only can it be redeemed from the
| condition of being a solipsism.  If now the new-created reality
| 'resemble' the feeling's quality 'q', I say that the feeling may
| be held by us 'to be cognizant of that reality'.
|
| William James, 'The Meaning of Truth',
| Longmans, Green, & Co., London, 1909,
| p. 6.

DARM. Incidental Note 9

| Some persons will immediately cry out, "How 'can'
| a reality resemble a feeling?"  Here we find how
| wise we were to name the quality of the feeling
| by an algebraic letter 'q'.  We flank the whole
| difficulty of resemblance between an inner state
| and an outward reality, by leaving it free to any
| one to postulate as the reality whatever sort of
| thing he thinks 'can' resemble a feeling, -- if
| not an outward thing, then another feeling like
| the first one, -- the mere feeling 'q' in the
| critic's mind for example.
|
| William James, 'The Meaning of Truth',
| Longmans, Green, & Co., London, 1909,
| p. 8.

DARM. Incidental Note 10

| Our little supposed feeling, whatever it may be,
| from the cognitive point of view, whether a bit of
| knowledge or a dream, is certainly no psychical zero.
| It is a most positively and definitely qualified inner
| fact, with a complexion all its own.  Of course there
| are many mental facts which it is 'not'.  It knows 'q',
| if 'q' be a reality, with a very minimum of knowledge.
| It neither dates nor locates it.  It neither classes nor
| names it.  And it neither knows itself as a feeling, nor
| contrasts itself with other feelings, nor estimates its
| own duration or intensity.  It is, in short, if there
| is no more of it than this, a most dumb and helpless
| and useless kind of thing.
|
| William James, 'The Meaning of Truth',
| Longmans, Green, & Co., London, 1909,
| p. 10.

DARM. Incidental Note 11

| Now obviously if our supposed feeling of 'q'
| is (if knowledge at all) only knowledge of the
| mere acquaintance-type, it is milking a he-goat,
| as the ancients would have said, to try to extract
| from it any deliverance 'about' anything under the sun,
| even about itself.  And it is as unjust, after our failure,
| to turn upon it and call it a psychical nothing, as it would be,
| after our fruitless attack upon the billy-goat, to proclaim the
| non-lactiferous character of the whole goat-tribe.
|
| William James, 'The Meaning of Truth',
| Longmans, Green, & Co., London, 1909,
| p. 12.

DARM. Incidental Note 12

Apology to 'q'

| It is always the "speechlessness" of sensation, its inability
| to make any "statement", that is held to make the very notion
| of it meaningless, and to justify the student of knowledge in
| scouting it out of existence.  "Significance", in the sense
| of standing as the sign of other mental states, is taken
| to be the sole function of what mental states we have;
| and from the perception that our little primitive
| sensation has as yet no significance in this
| literal sense, it is an easy step to call it
| first meaningless, next senseless, then
| vacuous, and finally to brand it as
| absurd and inadmissible.  But in
| this universal liquidation, this
| everlasting slip, slip, slip,
| of direct acquaintance into
| knowledge-'about', until at
| last nothing is left about
| which the knowledge can be
| supposed to obtain, does
| not all "significance"
| depart from the
| situation?
| And when our knowledge about things has reached its never so complicated perfection,
| must there not needs abide alongside of it and inextricably mixed in with it
| some acquaintance with 'what' things all this knowledge is about?
|
| James, "Func of Cog", pp. 13-14.
|
| William James, "The Function Of Cognition",
| Read before the Aristotelian Society, 1 Dec 1884.
| First published in 'Mind', 10 (1885).  Reprinted in
|'The Meaning Of Truth:  A Sequel To "Pragmatism"',
| Longmans, Green, & Company, London, UK, 1909.

DARM. Incidental Note 13

| Now, our supposed little feeling gives a 'what';
| and if other feelings should succeed which remember the first,
| its 'what' may stand as subject or predicate of some piece of knowledge-about,
| of some judgment, perceiving relations between it and other 'whats' which the other
| feelings may know.  The hitherto dumb 'q' will then receive a name and be no longer speechless.
|
| James, "Func of Cog", p. 14.
|
| William James, "The Function Of Cognition",
| Read before the Aristotelian Society, 1 Dec 1884.
| First published in 'Mind', 10 (1885).  Reprinted in
|'The Meaning Of Truth:  A Sequel To "Pragmatism"',
| Longmans, Green, & Company, London, UK, 1909.

DARM. Incidental Note 14

| But every name, as students of logic know, has its "denotation";  and the
| denotation always means some reality or content, relationless 'ab extra'
| or with its internal relations unanalyzed, like the 'q' which our
| primitive sensation is supposed to know.  No relation-expressing
| proposition is possible except on the basis of a preliminary
| acquaintance with such "facts", with such contents, as this.
| Let the 'q' be fragrance, let it be toothache, or let it be
| a more complex kind of feeling, like that of the full-moon
| swimming in her blue abyss, it must first come in that
| simple shape, and be held fast in that first intention,
| before any knowledge 'about' it can be attained.
| The knowledge 'about' it is 'it' with a context
| added.  Undo 'it', and what is added cannot
| be 'con'-text.
|
| James, "Func of Cog", pp. 14-15.
|
| William James, "The Function Of Cognition",
| Read before the Aristotelian Society, 1 Dec 1884.
| First published in 'Mind', 10 (1885).  Reprinted in
|'The Meaning Of Truth:  A Sequel To "Pragmatism"',
| Longmans, Green, & Company, London, UK, 1909.

DARM. Incidental Note 15

| Let us say no more then about this objection, but enlarge our thesis, thus:
| If there be in the universe a 'q' other than the 'q' in the feeling,
| the latter may have acquaintance with an entity ejective to itself;
| an acquaintance moreover, which, as mere acquaintance, it would be
| hard to imagine susceptible either of improvement or increase,
| being in its way complete;  and which would oblige us (so long
| as we refuse not to call acquaintance knowledge) to say not
| only that the feeling is cognitive, but that all qualities
| of feeling, 'so long as there is anything outside of them
| which they resemble', are feelings 'of' qualities of
| existence, and perceptions of outward fact.
|
| James, "Func of Cog", pp. 15-16.
|
| William James, "The Function Of Cognition",
| Read before the Aristotelian Society, 1 Dec 1884.
| First published in 'Mind', 10 (1885).  Reprinted in
|'The Meaning Of Truth:  A Sequel To "Pragmatism"',
| Longmans, Green, & Company, London, UK, 1909.

DARM. Incidental Note 16

Apostrophe to 'q'

| The point of this vindication of the cognitive function
| of the first feeling lies, it will be noticed, in the
| discovery that 'q' does exist elsewhere than in it.
| In case this discovery were not made, we could not
| be sure the feeling was cognitive;  and in case
| there were nothing outside to be discovered,
| we should have to call the feeling a dream.
| But the feeling itself cannot make the
| discovery.  Its own 'q' is the only
| 'q' it grasps;  and its own nature
| is not a particle altered by
| having the self-transcendent
| function of cognition either
| added to it or taken away.
| The function is accidental;
| synthetic, not analytic;
| and falls outside and
| not inside its being.
|
| James, "Func of Cog", p. 16.
|
| William James, "The Function Of Cognition",
| Read before the Aristotelian Society, 1 Dec 1884.
| First published in 'Mind', 10 (1885).  Reprinted in
|'The Meaning Of Truth:  A Sequel To "Pragmatism"',
| Longmans, Green, & Company, London, UK, 1909.

DARM. Omitted Material

Here is the typical picture of their subject to which manifold theorists
have become accustomed, which, were it to be drawn in a more fluid medium,
and not so badly quartered in this e-current style, would be e-mediately
recognizable as the "Planarian", more popularly, the "Flatworm Diagram".

Once the preambulatory props and supports have been set out in such a way
as to establish our subject on the appropriate numbers and aerity of feet,
my aim for the creature's tentaclive course is to chart a beeline for the
tripod or trivet where our subject might well have been able to read that
destiny from the outset, if our subject had but taken the trouble to read.

I am planning to parade before you only a few more squibs of
Lang's snappy presentation of DARM's before I lay into it with
my own hamlet-fausted soliloquy on what it all means to me, but
I think that I can now safely vouchsafe to your long-suffering
souls one key of importance to its imports.  I hope that this
will serve to suggest at least a hint of a connection to the
business of inquiry, modeling, semiotics, and sign relations,
especially with regard to many pressing questions about change
and diversity in our conceptual and symbolic systems, including
the problems of designing interoperable perspectives and mutually
intelligible codes for the worlds we vorpally construe.

Backing away from all of the pointy little pointillistic details a bit,
let us now take in a grandly more impressionistic view of this picture.
Regard all of that busy-ness about Banach this and C^p that as nothing
more than somebody or another's personal aesthetic with regard to what
they think it might be that makes a space "pretty" or a mapping "nice".

References And Incidental Nuances (RAIN)

http://www-groups.dcs.st-andrews.ac.uk/~history/Mathematicians/Riemann.html
http://members.door.net/arisbe/menu/library/bycsp/newlist/nl-frame.htm
http://www.philosophy.ru/library/kant/01/cr_pure_reason.html
http://www.physics.brocku.ca/etc/cargo_cult_science.html
http://ez2www.com/go.php3?site=book&go=0387943382
http://hallmathematics.com/mathematics/1433.shtml
http://hallmathematics.com/mathematics/630.shtml
http://www.peirce.org/writings/p32.html

MOSI. Manifolds Of Sensuous Impressions (2001)

00.  http://suo.ieee.org/ontology/thrd48.html#03045
01.  http://suo.ieee.org/ontology/msg03045.html
02.  http://suo.ieee.org/ontology/msg03046.html
03.  http://suo.ieee.org/ontology/msg03049.html
04.  http://suo.ieee.org/ontology/msg03065.html
05.  http://suo.ieee.org/ontology/msg03066.html
06.  http://suo.ieee.org/ontology/msg03074.html
07.  http://suo.ieee.org/ontology/msg03075.html
08.  http://suo.ieee.org/ontology/msg03079.html
09.  http://suo.ieee.org/ontology/msg03083.html
10.  http://suo.ieee.org/ontology/msg03131.html
11.  http://suo.ieee.org/ontology/msg03144.html
12.  http://suo.ieee.org/ontology/msg03147.html
13.  http://suo.ieee.org/ontology/msg03169.html
14.  http://suo.ieee.org/ontology/msg03205.html
15.  http://suo.ieee.org/ontology/msg03208.html
16.  http://suo.ieee.org/ontology/msg03233.html
17.  http://suo.ieee.org/ontology/msg03260.html
18.  http://suo.ieee.org/ontology/msg03331.html
19.  http://suo.ieee.org/ontology/msg03333.html
20.  http://suo.ieee.org/ontology/msg03839.html
21.  http://suo.ieee.org/ontology/msg03841.html

Document History

DARM. Ontology List

00.  http://suo.ieee.org/ontology/thrd12.html#04770
01.  http://suo.ieee.org/ontology/msg04770.html
02.  http://suo.ieee.org/ontology/msg04771.html
03.  http://suo.ieee.org/ontology/msg04772.html
04.  http://suo.ieee.org/ontology/msg04773.html
05.  http://suo.ieee.org/ontology/msg04774.html
06.  http://suo.ieee.org/ontology/msg04775.html
07.  http://suo.ieee.org/ontology/msg04776.html
08.  http://suo.ieee.org/ontology/msg04777.html
09.  http://suo.ieee.org/ontology/msg04778.html
10.  http://suo.ieee.org/ontology/msg04779.html
11.  http://suo.ieee.org/ontology/msg04780.html
12.  http://suo.ieee.org/ontology/msg04781.html
13.  http://suo.ieee.org/ontology/msg04783.html
14.  http://suo.ieee.org/ontology/msg04784.html
15.  http://suo.ieee.org/ontology/msg04785.html
16.  http://suo.ieee.org/ontology/msg04786.html
17.  http://suo.ieee.org/ontology/msg04787.html
18.  http://suo.ieee.org/ontology/msg04788.html

DARM. Inquiry List

00.  http://stderr.org/pipermail/inquiry/2003-April/thread.html#442
00.  http://stderr.org/pipermail/inquiry/2003-May/thread.html#448

01.  http://stderr.org/pipermail/inquiry/2003-April/000442.html
02.  http://stderr.org/pipermail/inquiry/2003-April/000443.html
03.  http://stderr.org/pipermail/inquiry/2003-April/000444.html
04.  http://stderr.org/pipermail/inquiry/2003-April/000445.html
05.  http://stderr.org/pipermail/inquiry/2003-April/000446.html
06.  http://stderr.org/pipermail/inquiry/2003-April/000447.html
07.  http://stderr.org/pipermail/inquiry/2003-May/000448.html
08.  http://stderr.org/pipermail/inquiry/2003-May/000449.html
09.  http://stderr.org/pipermail/inquiry/2003-May/000450.html
10.  http://stderr.org/pipermail/inquiry/2003-May/000451.html
11.  http://stderr.org/pipermail/inquiry/2003-May/000452.html
12.  http://stderr.org/pipermail/inquiry/2003-May/000453.html
13.  http://stderr.org/pipermail/inquiry/2003-May/000455.html
14.  http://stderr.org/pipermail/inquiry/2003-May/000456.html
15.  http://stderr.org/pipermail/inquiry/2003-May/000457.html
16.  http://stderr.org/pipermail/inquiry/2003-May/000458.html
17.  http://stderr.org/pipermail/inquiry/2003-May/000459.html
18.  http://stderr.org/pipermail/inquiry/2003-May/000460.html

DARM. Ontology List Commentary

00.  http://suo.ieee.org/ontology/thrd13.html#04782
01.  http://suo.ieee.org/ontology/msg04782.html

DARM. Inquiry List Commentary

00.  http://stderr.org/pipermail/inquiry/2003-May/thread.html#454
01.  http://stderr.org/pipermail/inquiry/2003-May/000454.html

DARM. Incidental Notes

DIF. Differential Geometry For Engineers

DIF. Note 1

Collateral with my exposition of differential logic,
it will be useful to pursue a few standard accounts
of differential geometry.  I'll begin with excerpts
from the following text, adopted with a weather eye
out for applications to control systems engineering:

| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.

From the Preface:

| This book has been written to acquaint engineers, especially control
| engineers, with the basic concepts and terminology of modern global
| differential geometry.  The ideas discussed are applied here mainly
| as an introduction to the Lie theory of differential equations and
| to the role of Grassmannians in control systems analysis.  To reach
| these topics, the fundamental notions of manifolds, tangent spaces,
| vector fields, and Lie algebras are discussed and exemplified.

Biographical Data for Marius Sophus Lie (1842-1899):

http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Lie.html

DIF. Note 2

| 1.  Introduction
|
| This book presents some basic concepts, facts of global
| differential geometry, and some of its uses to a control
| engineer.  It is not a mathematical treatise;  the subject
| matter is well developed in many excellent books, for example,
| in references [1], [2], and [3], which, however, are intended
| for the reader with an extensive mathematical background.  Here,
| only some basic ideas and a minimum of theorems and proofs are
| presented.  Indeed, a proof occurs only if its presence strongly
| aids understanding.  Even among basic ideas of the subject, many
| directions and results have been neglected.  Only those needed
| for viewing control systems from the standpoint of vector fields
| are discussed.
|
| Differential geometry treats of curves and surfaces,
| the functions that define them, and transformations
| between the coordinates that can be used to specify
| them.  It also treats the differential relations
| that stitch pieces of curves or surfaces together
| or that tell one where to go next.
|
| In thinking of functions that can define surfaces in space, one is
| likely to think of real functions (functions assigning a real number
| to a given point of their argument) of three-space variables such as
| the kinetic energy of a particle, or the distribution of temperature
| in a room.  Differential geometry examines properties inherent in the
| surfaces these functions define that, of course, are due to the sources
| of energy or temperature in the surroundings.  Or, given enough of these
| functions, one might use them as proper coordinates of a problem.  Then
| the generalities of differential geometry show how to operate with them
| when they are used, for example, to describe a dynamic evolution.
|
| Differential geometry, in sum, derives general properties from the study of
| functions and mappings so that methods of characterization or operation can
| be carried over from one situation to another.  Global differential geometry
| refers to the description of properties and operations that are over "large"
| portions of space.
|
| Doolin & Martin, DGFE, pages 1-2.
|
| Bibliography, page 155.
|
| 1.  R. Hermann,
|    'Differential Geometry and the Calculus of Variations',
|     Academic Press, New York, NY, 1968.
|
| 2.  W.M. Boothby,
|    'An Introduction to Differentiable Manifolds and Riemannian Geometry',
|     Academic Press, New York, NY, 1977.
|
| 3.  L. Auslander & R.E. MacKenzie,
|    'Introduction to Differentiable Manifolds',
|     Dover Publications, New York, NY, 1977.
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.

DIF. Note 3

| 1.  Introduction (cont.)
|
| Though the studies of differential geometry began in geodesy and in dynamics
| where intuition can be a faithful guide, the spaces now in this geometry's
| concern are far more general.  Instead of considering a set of three or
| six real functions on a space of vectors of three or six dimensions,
| spaces can be described by longer ordered strings of numbers, by
| sets of numbers ordered in various ways, or by ordered sets of
| products of numbers.  Examples are n-dimensional vector spaces,
| matrices, or multilinear objects like tensors.  It is not just
| these sets of numbers, but also the rules one has of passing
| from one set to another that form the proper subject matter
| of differential geometry, linking it to matters of interest
| in control.
|
| All analytic considerations of geometry begin with a space filled with stacks
| of numbers.  Before one can proceed to discuss the relations that associate
| one point with another or dictate what point follows another, one has to
| establish certain ground rules.  The ground rules that say if one point
| can be distinguished from another, or that there is a point close enough
| to wherever you want to go, are referred to as topological considerations.
| The basic description of the topological spaces underlying all the geometry
| of this paper is given in an appendix on fundamentals of vector calculus.
| This appendix discusses such desired topological characteristics
| as compactness and continuity, which is needed to preserve these
| characteristics in passing from one space to another.  The appendix
| concludes by recalling two theorems from vector calculus that provide
| the basic glue by which manifolds, the word for the fundamental spaces
| of global differential geometry, are assembled.  Since this discussion is
| fundamental to differential geometry, we briefly review it.  The review is
| relegated to an appendix, however, because it is not the topic of this book,
| nor should one dwell on it.
|
| The first two chapters of the body of the book describe manifolds, the spaces
| of our geometry.  Some simple manifolds are mentioned.  Several definitions are
| given, starting with one closest to intuition then passing to one perhaps more
| abstract, but actually less demanding to verify in cases of interest in control
| engineering.  Then mappings between manifolds are considered.  A special space,
| the tangent space, is discussed in chapter 3.  A tangent space is attached to
| every point in the manifold.  Since this is where the calculus is done, it and
| its relations to neighboring tangent spaces and to the manifold that supports it
| must be carefully described.
|
| Computation in these spaces is the topic of the next few chapters.
| Calculus on manifolds is given in chapter 4 on vector fields and
| their algebra, where the connection between global differential
| geometry and linear and nonlinear control begins to become clear.
| Chapters 5 and 6, which treat some algebraic rules, conclude our
| exposition of the fundamentals of the geometry.
|
| The examples given as the development unfolds should not only help the reader
| understand the topic under discussion but should also provide a basic set for
| testing ideas presented in the current literature.  Chapter 7 is intended to
| give the reader a glimpse of the structure supporting the spaces in which
| linear control operates.  More comprehensive applications of differential
| geometry to control are given in the final major chapter of the paper.
|
| Doolin & Martin, DGFE, pages 2-4.
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.

DIF. Note 4

| 2.  Manifolds And Their Maps
|
| The first part of this chapter is devoted to the concept of a manifold.
| It is defined first by a projection then by a more useful though less
| intuitive definition.  Finally, it is seen how implicitly defined
| functions give manifolds.  Examples are considered both to enhance
| intuition and to bring out conceptual details.  The idea of a
| manifold is brought out more clearly by considering mappings
| between manifolds.  The properties of these mappings occupy
| the last part of this chapter.
|
| 2.1.  Differentiable Manifolds
|
| Although the detailed global description of a manifold
| can be quite complicated, basically a differentiable
| manifold is just a topological space (X, !W!) that
| in the neighborhood of each point looks like an
| open subset of R^k.  (In the notation (X, !W!),
| X is some set and !W! consists of all the sets
| defined as open in X and that characterize its
| topology.  As to the notation R^k, each point
| in R^k is specified as an ordered set of k
| real numbers.  These and other notions
| arising below are discussed in the
| appendix.)  This description can
| be formalized into a definition:
|
| 2.1.  Definition.  A subset M of R^n is a k-dimensional manifold
|       if for each x in M there are:  open subsets U and V of R^n
|       with x in U, and a diffeomorphism f from U to V such that:
|
|       f(U |^| M)  =  {y in V : y_(k+1) = ... = y_n = 0}.
|
|       Thus, a point y in the image of f has a representation like:
|
|       y  =  (y_1 (x), y_2 (x), ..., y_k (x), 0, ..., 0).
|
| A straight line is a simple example of a one-dimensional manifold,
| a manifold in R^1.  It is a manifold in R^1 even if it is given, for
| example, in R^2.  There it might represent the surface of solutions of
| the equation of a particle of unit mass under no forces:  x`` = 0 and
| with given initial momentum:  x`(t=0) = a.  [The (`) is a fluxion dot.]
| In the coordinate system y_1 = x, y_2 = x` - a, the manifold is given by
| the points (y_1, 0).  To the particle, its whole world looks like part of
| R^1 though we see its tracks clearly as part of R^2.  Any open subset of
| the straight line is also a one-dimensional manifold, but a closed subset
| of it is not.
|
| The sphere in R^3 is an example of a two-dimensional manifold.
| It is an example of a closed manifold and is often denoted as
| S^2.  Thus, for a point P in R^3:  P = (x_1, x_2, x_3), the
| manifold is given as the set:
|
|       S^2  =  {P in R^3 : (x_1)^2 + (x_2)^2 + (x_3)^2 - 1 = 0}.
|
| Its two-dimensional character is clear when a point in S^2 is
| given in terms of two variables, say, latitude and longitude.
|
| Doolin & Martin, DGFE, pages 5-7.
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.
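
Here is a quick numerical sketch, mine and not the book's, of the straight-line
example above.  Along any trajectory of x`` = 0 with x`(0) = a, the coordinate
change y_1 = x, y_2 = x` - a sends the whole trajectory to points of the form
(y_1, 0), so the particle's world is a copy of R^1 sitting inside R^2.
In Python, with the initial data chosen arbitrarily:

    import numpy as np

    a, x0 = 2.0, 1.0                      # initial momentum and position, arbitrary
    t = np.linspace(0.0, 5.0, 101)        # sample times

    x     = x0 + a * t                    # solution of x`` = 0 with x(0) = x0, x`(0) = a
    x_dot = np.full_like(t, a)            # its velocity, constant at a

    y1 = x                                # new coordinates:  y_1 = x,  y_2 = x` - a
    y2 = x_dot - a

    assert np.allclose(y2, 0.0)           # the trajectory is the slice (y_1, 0) in R^2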

DIF. Note 5

| 2.  Manifolds And Their Maps
|
| 2.1.  Differentiable Manifolds (cont.)
|
| The examples of one- and two-dimensional manifolds so far have been sets
| given in some R^n and mapped into R^1 or R^2.  Sets forming manifolds are
| not always described naturally in some R^n.  To embed them in an R^n before
| showing that the definition is satisfied may be an undesirably awkward task.
| In fact, it is not necessary, and we will extend our previous definition so
| as to avoid it.  That labor, however, will be avoided only at the expense
| of introducing more formalism now.
|
| Let M be a second countable, Hausdorff topological space.  A 'chart' in M is
| a pair (V, !a!) with V an open set and !a! a C^oo function onto an open set
| in R^n and having a C^oo inverse.  A C^oo 'atlas' is a set of such charts,
| {(V_i, !a!_i)} = !A!, with the following properties:
|
|     1.  M  =  |_| V_i
|
|     2.  If (V_1, !a!_1) and (V_2, !a!_2) are in !A!
|
|         and V_1 |^| V_2 =/= Ø, then
|
|         (!a!_2) o (!a!_1)^(-1) : !a!_1 (V_1 |^| V_2) -> !a!_2 (V_1 |^| V_2)
|
|         is a C^oo diffeomorphism.
|
| [In the Figure below, let W = V_1 |^| V_2 and !t! = (!a!_2) o (!a!_1)^(-1).]
|
| o-----------------------------------------------------------o
| | M                                                         |
| |                                                           |
| |             o-------------o   o-------------o             |
| |            /               \ /               \            |
| |           /                 o                 \           |
| |          /                 / \                 \          |
| |         /                 / W \                 \         |
| |        /                 /     \                 \        |
| |       o                 o       o                 o       |
| |       |                 |  V_1  |                 |       |
| |       |                 |       |                 |       |
| |       |       V_1       |  |^|  |       V_2       |       |
| |       |                 |       |                 |       |
| |       |                 |  V_2  |                 |       |
| |       o                 o       o                 o       |
| |        \                 \     /                 /        |
| |         \                 \   /                 /         |
| |          \                 \ /                 /          |
| |           \                 o                 /           |
| |            \       |       / \       |       /            |
| |             o------|------o   o------|------o             |
| |                    |                 |                    |
| |                    |                 |                    |
| o--------------------|-----------------|--------------------o
|                      |                 |
|               !a!_1  |                 |  !a!_2
|                      |                 |
| o--------------------|-----o     o-----|--------------------o
| | R^n                v     |     |     v                R^n |
| |                          |     |                          |
| |       o----------o       |     |       o----------o       |
| |      /            \      |     |      /            \      |
| |     /              o     |     |     o              \     |
| |    /              / \    |     |    / \              \    |
| |   /              /   \   |     |   /   \              \   |
| |  o              o     o  |     |  o     o              o  |
| |  |              |     |  | !t! |  |     |              |  |
| |  |              |  ------------------>  |              |  |
| |  |              |     |  |     |  |     |              |  |
| |  | !a!_1 (W) ------   |  |     |  |   ------ !a!_2 (W) |  |
| |  |              |     |  |     |  |     |              |  |
| |  o              o     o  |     |  o     o              o  |
| |   \              \   /   |     |   \   /              /   |
| |    \              \ /    |     |    \ /              /    |
| |     \              o     |     |     o              /     |
| |      \            /      |     |      \            /      |
| |       o----------o       |     |       o----------o       |
| |                          |     |                          |
| |                          |     |                          |
| o--------------------------o     o--------------------------o
|
| Figure 2.2.  Sketch (b)
|
| Sketch (b), which illustrates the subsets V_1 and V_2 and
| the maps !a!_1 and !a!_2 may aid in picturing the content
| of condition (2).  With this formalism established, our
| second definition of a manifold can now be given:
|
| 2.2.  Definition.  A C^oo 'manifold' is a pair (M, !A!) where M
|       is a second countable Hausdorff topological space, and !A!
|       is a maximal C^oo atlas.
|
| The conditions on the topology guarantee that the number of charts required to
| cover M is countable.  The word "maximal" gives a technical condition.  It makes
| the atlas the class of collections of just enough charts to form a countable basis
| of charts.  By referring to the class, one is not tied to a representation given by
| a particular set of charts.
|
| Although the definition seems unduly complicated, it turns out to be just what
| is necessary to meet our intuition.  Every m-dimensional manifold determined by
| the definition can in fact be considered as a subset of R^n for some n such that
| m =< n =< 2m + 1.  Any weakening of the definition can allow objects which cannot
| be embedded in some R^n.
|
| Doolin & Martin, DGFE, pages 9-10.
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.
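
To make condition (2) concrete, here is a sketch, again mine and not the book's,
of a two-chart atlas on the circle S^1 in R^2, using stereographic projection
from the north and south poles.  On the overlap, the circle minus both poles,
the transition map !a!_2 o !a!_1^(-1) works out to u ~> 1/u, which is C^oo with
a C^oo inverse on R minus the origin, just as the definition requires.

    import numpy as np

    def alpha1(p):                      # stereographic projection from the north pole (0, 1)
        x, y = p
        return x / (1.0 - y)

    def alpha1_inv(u):                  # its inverse, landing back on the circle
        return np.array([2.0 * u, u**2 - 1.0]) / (u**2 + 1.0)

    def alpha2(p):                      # stereographic projection from the south pole (0, -1)
        x, y = p
        return x / (1.0 + y)

    # On the overlap the transition map alpha2 o alpha1^(-1) should be u |-> 1/u.
    for u in [0.3, -1.7, 2.5, 10.0]:
        p = alpha1_inv(u)
        assert np.isclose(np.dot(p, p), 1.0)          # alpha1_inv(u) really lies on S^1
        assert np.isclose(alpha2(p), 1.0 / u)         # transition map agrees with 1/u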

DIF. Note 6

| 2.  Manifolds And Their Maps
|
| 2.1.  Differentiable Manifolds (cont.)
|
| We opened this discussion of differentiable manifolds with the remark
| that basically a differentiable manifold is a topological space that
| in the neighborhood of each point looks like an open subset of R^k.
| The first definition said that each neighborhood, even though
| expressed as a subset of R^n, was equivalent to R^k.  That is,
| the space expressed in R^n really only had k, not n, degrees
| of freedom.  Another way of saying this is by saying that a
| k-dimensional manifold can be expressed using n variables
| with n - k conditions imposed on them.
|
| These remarks are made because, in practice, manifolds are often given
| as the set of points where a certain function vanishes.  The implicit
| function theorem gives conditions under which the vanishing of the
| function gives k constraints (exchanging the k and the n - k of the
| previous paragraph), so that only n - k of the variables are free,
| and the space is a manifold with dimension n - k if the theorem
| is satisfied everywhere.  Then the manifold is said to be given
| implicitly, or by the implicit function theorem.
|
| Formalizing the above remarks, we consider a C^oo function F with
| domain A c R^n and range in R^k.  That is, for every choice of n
| real numbers (x_1, ..., x_n) in A, the function F has the k real
| numbers F = (f_1, ..., f_k).  Let M be the set:
|
|     M  =  {x : F(x) = 0 = (0, 0, ..., 0)}.
|
| If the rank of the Jacobian matrix F' is equal to k for all x in M,
| then M is an (n-k)-dimensional manifold.
|
| Under the conditions stated, the implicit function theorem says that k of
| the variables can be expressed in terms of the other n - k, and the latter
| can be given values arbitrarily.  Another statement of the implicit function
| theorem (see [4, p. 43]) shows that a coordinate transformation can be found
| that assigns the value zero to the k explicit functions.  In other words, the
| conditions of the first definition of a manifold are satisfied.
|
| Doolin & Martin, DGFE, pages 10-12.
|
| References:
|
| 4.  M. Spivak,
|    'Calculus on Manifolds',
|     W.A. Benjamin, New York, NY, 1965. 
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.
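
By way of a sanity check, not from the book, the sphere of Note 4 fits this
implicit description with n = 3 and k = 1:  F(x) = (x_1)^2 + (x_2)^2 + (x_3)^2 - 1
has the 1 x 3 Jacobian [2 x_1, 2 x_2, 2 x_3], which has rank 1 at every point
of M, so M is a manifold of dimension n - k = 2.  A numpy spot check:

    import numpy as np

    def F(x):                                   # F : R^3 -> R^1, with M = F^(-1)(0) the unit sphere
        return np.array([x @ x - 1.0])

    def jacobian(x):                            # F'(x) = [2 x_1, 2 x_2, 2 x_3], a 1 x 3 matrix
        return 2.0 * x.reshape(1, 3)

    rng = np.random.default_rng(0)
    for _ in range(5):
        x = rng.normal(size=3)
        x /= np.linalg.norm(x)                  # a random point of M
        assert np.allclose(F(x), 0.0)
        assert np.linalg.matrix_rank(jacobian(x)) == 1   # k = 1 everywhere on M

    # Hence M has dimension n - k = 3 - 1 = 2, as claimed for S^2 in Note 4.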

DIF. Note 7

| 2.  Manifolds And Their Maps
|
| 2.2.  Examples
|
| Another trivial but important example of a class of manifolds is
| afforded by any open subset of R^n.  There the atlas may consist
| of the set itself, together with the identity map.  Thus, the
| notion that manifolds are spaces that locally look like open
| subsets of R^n is at least self-consistent.  This example
| is important because the whole idea of the definition
| of manifolds is to be able to see how calculations
| valid in R^n carry over into any other manifold.
|
| Another example of a manifold, which is
| an open set of Euclidean space and which
| is important in systems theory, follows.
| Let:
|
|     x`  =  Ax + bu
|
| be a single-input controllable system.
| Recall that controllability is equivalent
| to having the rank of the matrix:
|
|     [b, A b, A^2 b, ..., A^(n-1) b]
|
| equal to n, where A is an n x n matrix.
| Now let M be the set of pairs (A, b)
| such that a system is controllable:
|
|     M  =  {(A, b) : x` = Ax + bu  is controllable}.
|
| The complement of this set is the
| set that satisfies the condition:
|
|     det [b, A b, A^2 b, ..., A^(n-1) b] = 0.
|
| Since this is a closed set in R^(n^2 + n), the set M
| is open in R^(n^2 + n), and therefore is a manifold.
|
| The system, being of single input, is a special case.
| In general, when the control distribution function B
| is an n x m matrix, M* is also a manifold where M* is
| the set:
|
|     M*  =  {(A, B) : x` = Ax + Bu  is controllable}.
|
| Although the conditions are more involved and less
| easy to describe than the determinant condition above,
| a similar argument shows that the controllable pairs are
| an open subset of R^(n(n+m)).
|
| A more general example along these same lines is the set
| of triples of matrices (A, B, C) representing the system:
|
|     x`  =  Ax + Bu
|
|     y   =  Cx
|
| If the system is controllable and observable, it can be shown
| that this set of triples is also an open subset of a suitable
| Euclidean space.
|
| Related to this manifold is a set of matrix transfer functions
| T(s).  These are matrices of rational functions that arise as the
| Laplace transforms of the above systems.  Whether this set {T(s)} is
| a manifold is a deep question in systems theory.  It has been answered
| affirmatively by Martin Clark [5], Roger Brockett [6], and independently
| by Michiel Hazewinkel [7], and by Christopher Byrnes and N.E. Hurt [8].
| Much of the study in linear systems is involved with various properties
| of this manifold.
|
| Doolin & Martin, DGFE, pages 17-18.
|
| 5.  J.M.C. Clark,
|    "The Consistent Selection of Parameterizations in System Identification",
|    'Proceedings of the Joint Automatic Control Conference', July 27-30, 1976,
|     Purdue University, West Lafayette, IN, pages 576-585.
|     American Society of Mechanical Engineers, New York, NY.
|
| 6.  R.W. Brockett,
|    "Some Geometric Questions in the Theory of Linear Systems",
|    'IEEE Transactions on Automatic Control', vol. AC-21:4,
|     August 1976, pages 449-455.
|
| 7.  M. Hazewinkel & R.E. Kalman,
|    "On Invariance, Canonical Forms, and Moduli for Linear
|     Constant, Finite-Dimensional, Dynamical Systems", in:
|    'Lecture Notes on Economics & Mathematical System Theory',
|     vol. 131, pages 440-454, Springer-Verlag, Berlin, 1976.
|
| 8.  C.I. Byrnes & N.E. Hurt,
|    "On the Moduli of Linear Dynamical Systems", 'Advances in Mathematics',
|     Suppl. Series, vol. 4, 1978, pages 83-122.  Academic Press, New York, NY.
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.
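
Here is a small numpy sketch, not from the book, of the single-input criterion
and of why M is open.  The determinant of the controllability matrix is a
polynomial, hence continuous, function of the entries of (A, b), so a pair with
nonzero determinant keeps a nonzero determinant under every sufficiently small
perturbation.  The double integrator serves as the test case.

    import numpy as np

    def controllability_matrix(A, b):
        n = A.shape[0]
        cols = [np.linalg.matrix_power(A, k) @ b for k in range(n)]
        return np.column_stack(cols)            # [b, A b, ..., A^(n-1) b]

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])                  # double integrator
    b = np.array([0.0, 1.0])

    C = controllability_matrix(A, b)
    assert np.linalg.matrix_rank(C) == 2        # the pair (A, b) lies in M
    assert abs(np.linalg.det(C)) > 0.0

    # Openness of M:  small perturbations of a controllable pair stay controllable.
    rng = np.random.default_rng(1)
    for _ in range(5):
        dA = 1e-6 * rng.normal(size=A.shape)
        db = 1e-6 * rng.normal(size=b.shape)
        Cp = controllability_matrix(A + dA, b + db)
        assert np.linalg.matrix_rank(Cp) == 2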

DIF. Note 8

| 2.  Manifolds And Their Maps
|
| 2.3.  Manifold Maps
|
| We have described manifolds and seen a few examples of them.
| Now we can describe the requirements on functions that allow
| them to be maps between manifolds, say, the manifolds M and N.
|
| A function f,
|
|     f : M -> N,
|
| is a 'manifold map' if for every x in M and chart (V, !a!) with x in V,
| there is a chart (U, !b!) for N with f(V) c U such that the composite
| function !b! o f o !a!^-1,
|
|     !b! o f o !a!^-1  :  !a!(V)  ->  !b!(U),
|
| is a C^oo diffeomorphism.
|
| [Example omitted.]
|
| A final consideration for this section is that of forming manifolds
| from the cartesian products of manifolds.  If we have a manifold M
| with atlas !A!, we can construct a new manifold:
|
|     M x M  =  {(X, Y) : X, Y in M}
|
| from M and !A!.  The charts are constructed from
| the charts of !A! in the natural way as products,
| that is, if (V_1, !a!_1) and (V_2, !a!_2) are
| charts in !A!, then a chart for M x M is
| given by (V_1 x V_2, !a!_1 x !a!_2) where:
|
|     (!a!_1 x !a!_2)(X, Y)  =  (!a!_1 (X), !a!_2 (Y)).
|
| Doolin & Martin, DGFE, pages 18-22.
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.
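
The product-chart construction is short enough to spell out directly.  The
sketch below, not from the book, builds (!a!_1 x !a!_2) from two given chart
maps and tries it on M = S^1 with an angle chart, so that M x M is the torus;
the angle chart used here is only illustrative and covers the circle minus
a single point.

    import math

    def product_chart(alpha_1, alpha_2):
        # chart map for V_1 x V_2, built from the charts (V_1, !a!_1) and (V_2, !a!_2)
        def alpha(X, Y):
            return (alpha_1(X), alpha_2(Y))      # (!a!_1 x !a!_2)(X, Y) = (!a!_1(X), !a!_2(Y))
        return alpha

    angle = lambda p: math.atan2(p[1], p[0])     # a chart on the circle minus one point
    torus_chart = product_chart(angle, angle)    # a chart on S^1 x S^1

    print(torus_chart((1.0, 0.0), (0.0, 1.0)))   # (0.0, 1.5707963267948966)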

DIF. Note 9

| 3.  Tangent Spaces
|
| The previous chapter defines manifolds and gives several examples of them.
| This chapter considers a basic construction of one manifold from another.
| While the method of construction itself is of interest insofar as it
| illustrates general procedures of modern differential geometry,
| the particular result, the tangent space, is an object of
| great importance.  It is by way of the tangent space
| that calculus can be done in general situations.
|
| To gain familiarity with the idea of a tangent space, it is worthwhile to
| spend some time with an example, that of the tangent space to the sphere.
| The information in the previous section concerning charts for the sphere
| allows charts to be constructed for this new space.  The atlas resulting
| from the construction is examined in the light of the earlier definitions
| to see that this tangent space forms a manifold.  The example is useful, too,
| for giving insight into such things as the dimensionality of a tangent space
| and the fact that its maps preserve its linear and differentiable structure.
| Part of the problem of constructing an atlas is that a map must be inverted
| and that its composition with another map be a diffeomorphism.  Reducing our
| example from a sphere to a circle simplifies this calculation considerably.
|
| Next, preparatory to considering the general construction of a tangent space,
| the notion of equivalence classes of curves on a manifold, and their addition
| and scalar multiplication is explored.  This study provides the guide to the
| constructions that follow, and to the confirmation that the tangent space is
| a manifold.
|
| The rest of the chapter is devoted to the tangent space in general.
| It is seen to be a manifold whose charts and chart maps are derived
| from those of the underlying manifold.  It is seen to have vector
| space properties.  Similar properties of maps between tangent
| manifolds are examined.  The differentiating properties of
| these induced maps are noted.
|
| Doolin & Martin, DGFE, pages 23-24.
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.

DIF. Note 10

| 3.  Tangent Spaces
|
| 3.1.  The Tangent Space of the Sphere
|
| 


| Doolin & Martin, DGFE, pages 24-
|
| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.

GRAPH. Graph Theory

Ontology & SUO Groups

  1. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg00221.html
  2. http://web.archive.org/web/20091122134752/http://suo.ieee.org/email/msg02653.html
  3. http://web.archive.org/web/20041220131023/http://suo.ieee.org/email/msg02706.html
  4. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg07595.html
  5. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg07597.html
  6. http://web.archive.org/web/20060504095418/http://suo.ieee.org/email/msg07622.html
  7. http://web.archive.org/web/20070718110457/http://suo.ieee.org/email/msg07975.html

Other Resources

  1. http://www.utm.edu/departments/math/graph/
  2. http://www.utm.edu/departments/math/graph/index.html
  3. http://www.utm.edu/departments/math/graph/glossary.html
  4. http://www.utm.edu/cgi-bin/caldwell/tutor/departments/math/graph/intro
  5. http://www.math.lsa.umich.edu/~mathsch/summ97/graph/graph1/
  6. http://www.math.lsa.umich.edu/~mathsch/summ97/graph/index.html
  7. http://www.dmoz.org/Science/Math/Combinatorics/Graph_Theory/
  8. http://web.archive.org/web/20011203081944/http://graphs.memes.net/index.php3?request=displaypage&NodeID=3

HOC. Higher Order Categorical Logic

HOC. Note 1


| Part 0.  Introduction to Category Theory
|
| 1.  Categories and Functors
|
| In this section we present what our reader is expected
| to know about category theory.  We begin with a rather
| informal definition.
|
| Definition 1.1.  A 'concrete category' is a collection of two kinds
| of entities, called 'objects' and 'morphisms'.  The former are sets
| which are endowed with some kind of structure, and the latter are
| mappings, that is, functions from one object to another, in some
| sense preserving that structure.  Among the morphisms, there is
| attached to each object A the 'identity mapping' 1_A : A -> A
| such that 1_A(a) = a for all a in A.  Moreover, morphisms
| f : A -> B and g : B -> C may be 'composed' to produce
| a morphism gf : A -> C such that (gf)(a) = g(f(a))
| for all a in A.
|
| Examples of concrete categories abound in mathematics;
| here are just three:
|
| Example C1.  The category of 'sets'.  Its objects are
| arbitrary sets and its morphisms are arbitrary mappings.
| We call this category "Sets".
|
| Example C2.  The category of 'monoids'.  Its objects are
| monoids, that is, semigroups with unity element, and its
| morphisms are homomorphisms, that is, mappings which
| preserve multiplication (the semigroup operation)
| and the unity element.
|
| Example C3.  The category of 'preordered sets'.
| Its objects are preordered sets, that is, sets
| with a transitive and reflexive relation on them,
| and its morphisms are monotone mappings, that is,
| mappings which preserve this relation.
|
| The reader will be able to think of many other examples:
| the categories of rings, topological spaces, and Banach
| algebras, to name just a few.  In fact, one is tempted
| to make a generalization, which may be summed up as
| follows, provided we understand "object" to mean
| "structured set".
|
| Slogan 1.  Many objects of interest in mathematics
| congregate in concrete categories.
|
| L&S, page 4.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
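
As a toy illustration of Definition 1.1 and Example C2, not from L&S, here is
a Python check that a particular mapping between monoids preserves the unit and
the operation, that the identity mapping does too, and that a composite of such
morphisms is again one.  The sample monoids and maps are chosen only for the test.

    # Morphisms in the concrete category of monoids preserve the unit and the operation.
    def is_monoid_hom(h, unit_a, op_a, unit_b, op_b, samples):
        return (h(unit_a) == unit_b and
                all(h(op_a(x, y)) == op_b(h(x), h(y)) for x in samples for y in samples))

    add  = lambda x, y: x + y            # the monoid (N, +, 0)
    add3 = lambda x, y: (x + y) % 3      # the monoid (Z/3Z, +, 0)
    h    = lambda n: n % 3               # candidate morphism from the first to the second
    g    = lambda x: (2 * x) % 3         # an automorphism of (Z/3Z, +, 0)

    samples = range(20)
    assert is_monoid_hom(h, 0, add, 0, add3, samples)                  # h is a morphism
    assert is_monoid_hom(lambda a: a, 0, add, 0, add, samples)         # the identity mapping 1_A
    assert is_monoid_hom(g, 0, add3, 0, add3, range(3))                # g is a morphism
    assert is_monoid_hom(lambda n: g(h(n)), 0, add, 0, add3, samples)  # the composite gf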

HOC. Note 2


| We shall now progress from concrete categories
| to abstract ones, in three easy stages.
|
| Definition 1.2.  A 'graph' (usually called a 'directed graph') consists
| of two classes:  the class of 'arrows' (or 'oriented edges') and the class
| of 'objects' (usually called 'nodes' or 'vertices') and two mappings from
| the class of arrows to the class of objects, called 'source' and 'target'
| (often also 'domain' and 'codomain').
|
| o--------------o      source       o--------------o
| |              | ----------------> |              |
| |   Arrows     |                   |   Objects    |
| |              | ----------------> |              |
| o--------------o      target       o--------------o
|
| One writes "f : A -> B" for "source f = A and target f = B".
| A graph is said to be 'small' if the classes of objects and
| arrows are sets.
|
| Example C4.  The category of small 'graphs' is another concrete category.
| Its objects are small graphs and its morphisms are functions F which send
| arrows to arrows and vertices to vertices so that, whenever f : A -> B,
| then F(f) : F(A) -> F(B).
|
| A 'deductive system' is a graph in which to each object A there
| is associated an arrow 1_A : A -> A, the 'identity' arrow, and to
| each pair of arrows f : A -> B and g : B -> C there is associated
| an arrow gf : A -> C, the 'composition' of f with g.  A logician
| may think of the objects as 'formulas' and of the arrows as
| 'deductions' or 'proofs', hence of
|
|  f : A -> B     g : B -> C
| ---------------------------
|         gf : A -> C
|
| as a 'rule of inference'.
|
| (Deductive systems will be discussed further in Part 1.)
|
| A 'category' is a deductive system in which the following equations hold,
| for all f : A -> B, g : B -> C, and h : C -> D.
|
| f 1_A  =  f  =  1_B f,
|
| (hg)f  =  h(gf).
|
| Of course, all concrete categories are categories.  A category is
| said to be 'small' if the classes of arrows and objects are sets.
| While the concrete categories described in examples 1 to 4 are not
| small, a somewhat surprising observation is summarized as follows:
|
| Slogan 2.  Many objects of interest to mathematicians
| are themselves small categories.
|
| L&S, page 5.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
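
The two category equations are easy to watch in Sets, where composition is
(gf)(a) = g(f(a)).  The following lines, mine and not L&S's, run the check
pointwise on a handful of sample arguments; a spot check, of course, not a proof.

    compose  = lambda g, f: (lambda a: g(f(a)))       # (gf)(a) = g(f(a))
    identity = lambda a: a                            # 1_A; one formula serves every object here

    f = lambda a: a + 1          # f : A -> B
    g = lambda b: 2 * b          # g : B -> C
    h = lambda c: c - 7          # h : C -> D

    for a in range(-5, 6):
        assert compose(f, identity)(a) == f(a) == compose(identity, f)(a)    # f 1_A = f = 1_B f
        assert compose(compose(h, g), f)(a) == compose(h, compose(g, f))(a)  # (hg)f = h(gf)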

HOC. Note 3


| Example C1'.  Any set can be viewed as a category:  a small 'discrete'
| category.  The objects are its elements and there are no arrows except
| the obligatory identity arrows.
|
| Example C2'.  Any monoid can be viewed as a category.  There is only
| one object, which may remain nameless, and the arrows of the monoid
| are its elements.  In particular, the identity arrow is the unity
| element.  Composition is the binary operation of the monoid.
|
| Example C3'.  Any preordered set can be viewed as a category.
| The objects are its elements and, for any pair of objects (a, b),
| there is at most one arrow a -> b, exactly one when a =< b.
|
| L&S, pages 5-6.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
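
For Example C3', here is a minute Python check, not in L&S, that a preorder
really supplies what a category needs:  reflexivity provides the identity arrow
at each object and transitivity provides the composite of any two composable
arrows.  The preorder used is divisibility on {1, ..., 12}.

    # A preordered set as a category:  at most one arrow a -> b, present exactly when a =< b.
    objects = range(1, 13)
    leq = lambda a, b: b % a == 0                      # a =< b  iff  a divides b

    assert all(leq(a, a) for a in objects)             # identity arrows exist (reflexivity)
    assert all(leq(a, c)                               # composites exist (transitivity)
               for a in objects for b in objects for c in objects
               if leq(a, b) and leq(b, c))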

HOC. Note 4


| It follows from slogans 1 and 2 that small categories
| themselves should be the objects of a category worthy
| of study.
|
| Example C5.  The category Cat has as objects small categories
| and as morphisms functors, which we shall now define.
|
| Definition 1.3.  A 'functor' F : $A$ -> $B$ is
| first of all a morphism of graphs (see Example C4),
| that is, it sends objects of $A$ to objects of $B$
| and arrows of $A$ to arrows of $B$ such that, if
| f : A -> A', then F(f) : F(A) -> F(A').  Moreover,
| a functor preserves identities and composition;
| thus:
|
| F(1_A)  =  1_F(A),
|
| F(gf)   =  F(g)F(f).
|
| In particular, the identity functor 1_$A$ : $A$ -> $A$ leaves
| objects and arrows unchanged and the composition of functors
| F : $A$ -> $B$ and G : $B$ -> $C$ is given by:
|
| (GF)(A)  =  G(F(A)),
|
| (GF)(f)  =  G(F(f)),
|
| for all objects A of $A$ and all arrows f : A -> A' in $A$.
|
| The reader will now easily check the following assertion.
|
| Proposition 1.4.  When sets, monoids, and preordered sets
| are regarded as small categories, the morphisms between
| them are the same as the functors between them.
|
| The above definition of a functor F : $A$ -> $B$ applies equally well
| when $A$ and $B$ are not necessarily small, provided we allow mappings
| between classes.  Of special interest is the situation when $B$ = Sets
| and $A$ is small.
|
| Slogan 3.  Many objects of interest to mathematicians
| may be viewed as functors from small categories to Sets.
|
| Example F1.  A set may be viewed as a functor
| from a discrete one-object category to Sets.
|
| Example F2.  A small graph may be viewed
| as a functor from the small category
|
| . -> .
|   ->
|
|(with identity arrows not shown) to Sets.
|
| Example F3.  If $M$ = (M, 1, .) is a monoid
| viewed as a one-object category, an $M$-set
| may be regarded as a functor from $M$ to Sets.
|(An $M$-set is a set A together with a mapping
| M x A -> A, usually denoted by (m, a) ~> ma,
| such that 1a = a and (m.m')a = m(m'a) for all
| a in A, m and m' in M.) 
|
| L&S, pages 6-7.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
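
Here is Example F3 in miniature, a sketch of my own rather than L&S's.  The
monoid $M$ = (Z/4Z, +, 0), viewed as a one-object category, acts on the set
{0, 1, 2, 3} by rotation, and the functor equations F(1) = 1_F(A) and
F(m.m') = F(m)F(m') reduce to checkable facts about addition mod 4.

    M = range(4)                                # the monoid (Z/4Z, +, 0), one object, four arrows
    A = range(4)                                # the underlying set of the $M$-set
    F = lambda m: (lambda a: (a + m) % 4)       # F sends the arrow m to rotation by m

    for a in A:
        assert F(0)(a) == a                                   # F(1)     = 1_F(A)
    for m1 in M:
        for m2 in M:
            for a in A:
                assert F((m1 + m2) % 4)(a) == F(m1)(F(m2)(a)) # F(m.m')  = F(m)F(m')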

HOC. Note 5


| Once we admit that functors $A$ -> $B$ are interesting objects to study,
| we should see in them the objects of yet another category.  We shall
| study such functor categories in the next section.  For the present,
| let us mention two other ways of forming new categories from old.
|
| Example C6.  From any category (or graph) $A$ one forms
| a new category (respectively graph) $A$^op with the same
| objects but with arrows reversed, that is, with the two
| mappings "source" and "target" interchanged.  $A$^op is
| called the 'opposite' or 'dual' of $A$.  A functor from
| $A$^op to $B$ is often called a 'contravariant' functor
| from $A$ to $B$, but we shall avoid this terminology
| except for occasional emphasis.
|
| Example C7.  Given two categories $A$ and $B$, one forms a new category
| $A$ x $B$ whose objects are pairs (A, B), A in $A$ and B in $B$, and whose
| arrows are pairs (f, g) : (A, B) -> (A', B'), where f : A -> A' in $A$ and
| g : B -> B' in $B$.  Composition of arrows is defined componentwise.
|
| L&S, page 7.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 6


| Definition 1.5.  An arrow f : A -> B in a category is called an 'isomorphism'
| if there is an arrow g : B -> A such that gf = 1_A and fg = 1_B.  One writes
| A ~=~ B to mean that such an isomorphism exists and says that A is 'isomorphic'
| with B.
|
| In particular, a functor F : $A$ -> $B$ between two categories is an isomorphism
| if there is a functor G : $B$ -> $A$ such that GF = 1_$A$ and FG = 1_$B$.  We also
| remark that a group is a one-object category in which all arrows are isomorphisms.
|
| To end this section, we shall record three basic isomorphisms.
| Here $1$ is the category with one object and one arrow.
|
| Proposition 1.6.  For any categories $A$, $B$, $C$,
|
| $A$ x $1$  ~=~  $A$,
|
| $A$ x $B$  ~=~  $B$ x $A$,
|
| $A$ x ($B$ x $C$)  ~=~  ($A$ x $B$) x $C$.
|
| L&S, page 7.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 7


| 2.  Natural Transformations
|
| In this section we shall investigate morphisms between functors.
|
| Definition 2.1.  Given functors F, G : $A$ -> $B$,
| a 'natural transformation' t : F -> G is a family
| of arrows t(A) : F(A) -> G(A) in $B$, one arrow for
| each object A of $A$, such that the following square
| commutes for all arrows f : A -> B in $A$:
|
|              t(A)
| F(A) o------------------>o G(A)
|      |                   |
|      |                   |
| F(f) |                   | G(f)
|      |                   |
|      v                   v
| F(B) o------------------>o G(B)
|              t(B)
|
| that is to say, such that
|
| G(f)t(A)  =  t(B)F(f).
|
| It is this concept about which it has been said that it
| necessitated the invention of category theory.  We shall
| give examples of natural transformations later.  For the
| moment, we are interested in another example of a category.
|
| Example C8.  Given categories $A$ and $B$, the 'functor category' $B$^$A$ has
| as objects functors F : $A$ -> $B$ and as arrows natural transformations.
| The 'identity' natural transformation 1_F : F -> F is of course given
| by stipulating that (1_F)(A) = 1_(F(A)) for each object A of $A$.
| If t : F -> G and u : G -> H are natural transformations,
| their 'composition' u o t is given by stipulating that
|
| (u o t)(A)  =  u(A)t(A)
|
| for each object A of $A$.
|
| L&S, page 8.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

Remark On Notation.  An expression like "1_F(A)"
should be interpreted as equivalent to "1_(F(A))".
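
A familiar example of the commuting square, borrowed from everyday functional
programming rather than from L&S:  take F = G = the list functor on Sets, where
F(A) is the set of finite lists over A and F(f) maps f over the entries, and
let t(A) reverse a list.  Naturality then says that reversing a list and then
mapping f gives the same result as mapping f and then reversing.

    fmap = lambda f: (lambda xs: [f(x) for x in xs])    # F(f) = G(f), "map f over the list"
    t    = lambda xs: list(reversed(xs))                # t(A), the same formula for every A

    f  = lambda n: n * n                                # some arrow f : A -> B
    xs = [1, 2, 3, 4, 5]
    assert fmap(f)(t(xs)) == t(fmap(f)(xs))             # G(f)(t(A)(xs)) = t(B)(F(f)(xs))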

HOC. Note 8


| To appreciate the usefulness of natural transformations,
| the reader should prove for himself the following, which
| supports Slogan 3.
|
| Proposition 2.2.  When objects such as sets, small graphs, and
| $M$-sets are viewed as functors into Sets (see Examples F1 to F3
| in Section 1), the morphisms between two objects are precisely the
| natural transformations.  Thus, the categories of sets, small graphs,
| and $M$-sets may be identified with the functor categories Sets^$1$,
| Sets^(o>>o), and Sets^$M$, respectively.
|
| Of course, morphisms between sets are mappings, morphisms between graphs
| were described in Example C4, and morphisms between $M$-sets are
| $M$-homomorphisms.  (An $M$-homomorphism f : A -> B between $M$-sets
| is a mapping such that f(ma) = mf(a) for all m in M and a in A.
|
| We record three more basic isomorphisms in the spirit of Proposition 1.6.
|
| Proposition 2.3.  For any categories $A$, $B$, $C$,
|
| $A$^$1$  ~=~  $A$,
|
| $C$^($A$ x $B$)  ~=~  ($C$^$B$)^$A$,
|
| ($A$ x $B$)^$C$  ~=~  $A$^$C$ x $B$^$C$.
|
| We shall leave the lengthy proof of this to the reader.  We only mention here
| the functor $C$^($A$ x $B$) -> ($C$^$B$)^$A$, which will be used later.
| We describe its action on objects by stipulating that it assigns to
| a functor F : $A$ x $B$ -> $C$ the functor F* : $A$ -> $C$^$B$
| which is defined as follows:
|
| For any object A of $A$,
|
| the functor F*(A) : $B$ -> $C$
| is given by
|
| F*(A)(B)  =  F(A, B),
| F*(A)(g)  =  F(1_A, g),
|
| for any object B of $B$
| and any arrow g : B -> B' in $B$.
|
| For any arrow f : A -> A',
|
| the natural transformation F*(A) -> F*(A')
| is given by
|
| F*(f)(B)  =  F(f, 1_B),
|
| for all objects B of $B$.
|
| Finally, to any natural transformation t : F -> G
| between functors F, G : $A$ x $B$ -> $C$ we assign
| the natural transformation t* : F* -> G* which is
| given by
|
| t*(A)(B)  =  t(A, B)
|
| for all objects A of $A$ and B of $B$.
|
| L&S, pages 8-9.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

Remark on Notation.  In this transcription the symbol "o>>o"
indicates the small category that is otherwise represented,
minus identity arrows, by either of the following diagrams:

. -> .
  ->

or

o--------------o                   o--------------o
|              | ----------------> |              |
|              |                   |              |
|              | ----------------> |              |
o--------------o                   o--------------o
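
The set-level shadow of the isomorphism $C$^($A$ x $B$) ~=~ ($C$^$B$)^$A$ is
ordinary currying, and the recipe F ~> F* given above is exactly the currying
recipe.  A short Python sketch, not from L&S, makes the round trip explicit
on a sample function.

    curry   = lambda F: (lambda a: (lambda b: F(a, b)))   # F |-> F*
    uncurry = lambda G: (lambda a, b: G(a)(b))            # its inverse

    F = lambda a, b: a * 10 + b                            # a sample function A x B -> C
    for a in range(3):
        for b in range(3):
            assert curry(F)(a)(b) == F(a, b)               # F*(a)(b) = F(a, b)
            assert uncurry(curry(F))(a, b) == F(a, b)      # the round trip is the identity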

HOC. Note 9


| This may be as good a place as any to mention that
| natural transformations may also be composed with
| functors.
|
| Definition 2.4.  In the situation
|
|               F
|      L       --->      K
| $D$ ---> $A$      $B$ ---> $C$,
|              --->
|               G
|
| if t : F -> G is a natural transformation, one obtains natural
| transformations Kt : KF -> KG between functors from $A$ to $C$
| and tL : FL -> GL between functors from $D$ to $B$ defined as
| follows:
|
| (Kt)(A)  =  K(t(A)),
|
| (tL)(D)  =  t(L(D)),
|
| for all objects A of $A$ and D of $D$.
|
| If H : $A$ -> $B$ is another functor and
| u : G -> H another natural transformation,
| then the reader will easily check the
| following "distributive laws":
|
| K(u o t)  =  (Ku) o (Kt),
|
| (u o t)L  =  (uL) o (tL).
|
| L&S, pages 9-10.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 10


| If we compare Slogans 1 and 3, we are led to ask:
| which categories may be viewed as categories of
| functors into Sets?  In preparation for an answer
| to that question we need another definition.
|
| Definition 2.5.  If A and B are objects of a category $A$,
| we denote by Hom_$A$ (A, B) the class of arrows A -> B.
| (Later, the subscript $A$ will often be omitted.)
| If it so happens that Hom_$A$ (A, B) is a set
| for all objects A and B, $A$ is said
| to be 'locally small'.
|
| One purpose of this definition is
| to describe the following functor.
|
| Example F4.  If $A$ is a locally small category,
| then there is a functor
|
| Hom_$A$ : $A$^op x $A$ -> Sets.
|
| For an object (A, B) of $A$^op x $A$, the value of this
| functor is Hom_$A$ (A, B), as suggested by the notation.
| For an arrow (g, h) : (A, B) -> (A', B') of $A$^op x $A$,
| where g : A' -> A and h : B -> B' in $A$, Hom_$A$ (g, h)
| sends f in Hom_$A$ (A, B) to hfg in Hom_$A$ (A', B').
|
| Applying the isomorphism
|
| Sets^($A$^op x $A$) -> (Sets^$A$)^($A$^op)
|
| of Proposition 2.3, we obtain a functor
|
| (Hom_$A$)* : $A$^op -> Sets^$A$
|
| and, dually, a functor
|
| (Hom_$A$^op)* : $A$ -> Sets^($A$^op).
|
| We shall see that the latter functor allows us to assert that
| $A$ is isomorphic to a "full" subcategory of Sets^($A$^op).
|
| L&S, page 10.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 11


| Definition 2.6.  A 'subcategory' $C$ of a category $B$ is any category whose
| class of objects and arrows is contained in the class of objects and arrows
| of $B$ respectively and which is closed under the "operations" source, target,
| identity, and composition.  By saying that a subcategory $C$ of $B$ is 'full'
| we mean that, for any objects C, C' of $C$, Hom_$C$ (C, C') = Hom_$B$ (C, C').
|
| For example, a proper subgroup of a group is a subcategory
| which is not full, but the category of Abelian groups is
| a full subcategory of the category of all groups.
|
| The arrows F -> G in Sets^($A$^op) are natural transformations.
| We therefore write Nat(F, G) in place of Hom(F, G) in Sets^($A$^op).
|
| Objects of the latter category are sometimes called "contravariant" functors
| from $A$ to Sets.  Among them is the functor h_A = Hom_$A$ (-, A) which sends
| the object A' of $A$ onto the set Hom_$A$ (A', A) and the arrow f : A' -> A"
| onto the mapping Hom_$A$ (f, 1_A) : Hom_$A$ (A", A) -> Hom_$A$ (A', A).
|
| The following is known as Yoneda's Lemma.
|
| Proposition 2.7.  If $A$ is locally small and F : $A$^op -> Sets,
| then Nat(h_A, F) is in one-to-one correspondence with F(A).
|
| Proof.  If a is in F(A), we obtain a natural transformation ?a? : h_A -> F
| by stipulating that ?a?(B) : Hom_$A$ (B, A) -> F(B) sends g : B -> A onto
| F(g)(a).  (Notice that F is contravariant, so F(g) : F(A) -> F(B).)  Conversely,
| if t : h_A -> F is a natural transformation, we obtain the element t(A)(1_A)
| in F(A).  It is a routine exercise to check that the mappings a ~> ?a? and
| t ~> t(A)(1_A) are inverse to one another.
|
| L&S, pages 10-11.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
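
To see Yoneda's Lemma do something, here is a brute-force check of
Proposition 2.7, entirely my own and on the smallest example that is not
completely trivial:  the category with two objects 0 and 1 and a single
non-identity arrow a : 0 -> 1, with A = 1 and with a presheaf F chosen
arbitrarily.  Every natural transformation t : h_A -> F is pinned down by
t(1)(1_1), and there are exactly as many of them as elements of F(1).

    from itertools import product

    # The category:  objects 0, 1;  non-identity arrow a : 0 -> 1.
    # With A = 1:  h_A(0) = Hom(0, 1) = {a},  h_A(1) = Hom(1, 1) = {1_1}.
    F1 = ['x', 'y', 'z']                    # F(1), chosen arbitrarily
    F0 = ['p', 'q']                         # F(0), chosen arbitrarily
    Fa = {'x': 'p', 'y': 'q', 'z': 'p'}     # F(a) : F(1) -> F(0), chosen arbitrarily

    # A candidate t is the pair (t(1)(1_1), t(0)(a)).  The one non-trivial
    # naturality square demands F(a)(t(1)(1_1)) = t(0)(1_1 o a) = t(0)(a).
    nat = [(v1, v0) for v1, v0 in product(F1, F0) if Fa[v1] == v0]

    assert len(nat) == len(F1)                          # Nat(h_A, F) has |F(A)| elements
    assert sorted(v1 for v1, v0 in nat) == sorted(F1)   # and t ~> t(1)(1_1) is the bijection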

HOC. Note 12


| Definition 2.8.  A functor H : $A$ -> $B$ is
| said to be 'faithful' if the induced mappings
|
| Hom_$A$ (A, A') -> Hom_$B$ (H(A), H(A'))
|
| sending f : A -> A' onto H(f) : H(A) -> H(A')
| for all A, A' in $A$ are injective and 'full'
| if they are surjective.  A 'full embedding'
| is a full and faithful functor which is also
| injective on objects, that is, for which
| H(A) = H(A') implies A = A'.
|
| Corollary 2.9.  If $A$ is locally small, the Yoneda functor
|
| (Hom_$A$^op)* : $A$ -> Sets^($A$^op)
|
| is a full embedding.
|
| Proof.  Writing H = (Hom_$A$^op)*,
| we see that the induced mapping
|
| Hom(A, A') -> Nat(H(A), H(A'))
|
| sends f : A -> A' onto the natural transformation
| H(f) : H(A) -> H(A') which, for all objects B of $A$,
| gives rise to the mapping
|
| H(f)(B) = Hom(1_B, f) : Hom(B, A) -> Hom(B, A').
|
| Now f is in H(A')(A), hence ?f? : H(A) -> H(A'),
| as defined in the proof of Proposition 2.7,
| is given by
|
| ?f?(B)(g)  =  H(A')(g)(f)
|
|            =  Hom_$A$ (g, 1_A')(f)
|
|            =  fg
|
|            =  Hom_$A$ (1_B, f)(g)
|
|            =  H(f)(B)(g),
|
| hence ?f?  =  H(f).
|
| Thus the mapping f ~> H(f) is a bijection
| and so H is full and faithful.
|
| Finally, to show that H is injective on objects,
| assume H(A) = H(A'), then Hom(A, A) = Hom (A, A'),
| so A' must be the target of the identity arrow 1_A,
| thus A' = A.
|
| L&S, page 11.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 13


I am skipping ahead to Part 1 of L&S -- we have been reading
from Part 0 (Introduction to Category Theory) all this time --
in order to pick up a quantum of motivation for the endeavor.

| Part 1.  Cartesian Closed Categories & Lambda Calculus
|
| Introduction to Part 1
|
| Lambda calculus or combinatory logic is a topic that logicians have studied
| since 1924.  Cartesian closed categories are more recent in origin, having
| been invented by Lawvere (1964, see also Eilenberg & Kelly, 1966).  Both are
| attempts to describe axiomatically the process of substitution, so it is not
| surprising to find that these two subjects are essentially the same.  More
| precisely, there is an equivalence of categories between the category of
| cartesian closed categories and the category of typed lambda calculi with
| surjective pairing.  This remains true if cartesian closed categories are
| provided with a weak natural numbers object and if typed lambda calculi
| are assumed to have a natural numbers type with iterator.
|
| This result depends crucially on the 'functional completeness' of cartesian
| closed categories, which goes back to the functional completeness of combinatory
| logic due to Schoenfinkel and Curry.  It asserts, in particular, that every arrow
| !f!(x) : 1 -> B expressible as a polynomial in an indeterminate arrow x : 1 -> A
| over a cartesian closed category $A$ (with given objects A and B) is uniquely
| of the form
|
|    x      f
| 1 ---> A ---> B,
|
| where f is an arrow in $A$ not depending on x.
|
| Functional completeness is closely related to the 'deduction theorem' for
| positive intuitionistic propositional calculi presented as deductive systems.
| In our version, it associates with each proof T |- B on the assumption T |- A
| a proof of A |- B without assumptions.  However, functional completeness goes
| beyond this;  it asserts that the proof of T |- B on the assumption T |- A is,
| in some sense, 'equivalent' to the proof by transitivity:
|
|  T |- A      A |- B
| --------------------.
|        T |- B
|
| Deductive systems are also used to construct free cartesian closed categories
| generated by graphs, whose arrows A -> B are equivalence classes of proofs.
|
| L&S, page 41.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 14


| Historical Perspective on Part 1
|
| For the purpose of this discussion, it will suffice
| to define a 'cartesian closed category' as a category
| with an object 1 and operations (-) x (-) and (-)^(-)
| on objects satisfying conditions which assure that:
|
| 1.  Hom(A, 1)      ~=~  {*},
|
| 2.  Hom(C, A x B)  ~=~  Hom(C, A) x Hom(C, B),
|
| 3.  Hom(A, C^B)    ~=~  Hom(A x B, C).
|
| Here {*} is supposed a typical one-element set,
| chosen once and for all.
|
| It will be instructive to reverse the historical process
| and see how combinatory logic could have been discovered
| by rigorous application of Occam's razor.
|
| Condition 1 says that, for each object A, there is only
| one arrow A -> 1, hence we might as well forget about
| the object 1 and the arrow leading to it.  However,
| the arrows 1 -> A must be preserved, let us call
| them 'entities of type A'.
|
| Condition 2 says that the arrows C -> A x B are in one-to-one
| correspondence with pairs of arrows C -> A and C -> B, hence
| we might as well forget about the arrows going into A x B.
|
| Condition 3 says that the arrows A x B -> C are in one-to-one
| correspondence with the arrows A -> C^B, hence we might as well
| forget about the arrows coming out of A x B too.  Consequently,
| we might as well forget about A x B altogether.
|
| We end up with a category with a binary operation "exponentiation"
| on objects.  Of course, this will have to satisfy some conditions,
| but these may be a little difficult to state.  It is interesting
| to note that Eilenberg and Kelly went on a similar 'tour de force'
| and ended up with a category with exponentiation in which some
| monstrous diagrams had to commute.
|
| We may go a little further and forget about the category structure
| as well, since arrows A -> B are in one-to-one correspondence with
| entities of type B^A, which we shall write B <= A for typographical
| reasons.  Composition of arrows is then represented by a single
| entity of type ((C <= A) <= (C <= B)) <= (B <= A).  However, we
| do need a binary operation on entities called "application":
| given entities f of type B^A and a of type A, there is
| an entity f`a (read "f of a") of type B.
|
| We have now arrived at typed combinatory logic.  But even this
| came rather late in the thinking of logicians, although type
| theory had already been introduced by Russell and Whitehead.
| Let us continue on our journey backwards in time and apply
| Occam's razor still further.
|
| An arrow A -> B in a category has a source A and a target B.
| But what if there is only one object?  Such a category is called
| a monoid and, indeed, the original presentation of combinatory logic
| by Curry does describe a monoid with additional structure.  (The binary
| operation of multiplication is defined in terms of the primitive operation
| of application.)  Underlying untyped combinatory logic there is a tacit
| ontological assumption, namely that all entities are functions and
| that each function can be applied to any entity.
|
| To present the work of Schoenfinkel and Curry in the modern language of
| universal algebra, one should think of an algebra A = (|A|, `, I, K, S),
| where |A| is a set, (`) is a binary operation, and I, K, S are elements
| of |A|, or nullary operations.  According to Schoenfinkel, these had to
| satisfy the following identities:
|
|         I`a  =  a,
|
|     (K`a)`b  =  a,
|
| ((S`f)`g)`c  =  (f`c)`(g`c),
|
| for all elements a, b, c, f, g in |A|.  (Actually, he defined I in terms of
| K and S, but this is beside the point here.)  The reader may think of I as the
| identity function and of K as the function which assigns to every entity a the
| function with constant value a.  It is a bit more difficult to put S into words
| and we shall refrain from doing so.
|
| Schoenfinkel (1924) discovered a remarkable result, usually called
| "functional completeness".  In modern terms this may be expressed
| as follows:  Every polynomial !f!(x) in an indeterminate x over
| a Schoenfinkel algebra A can be written in the form f`x, where
| f is in |A|.
|
| From now on in our exposition,
| the arrow of time will point
| in its customary direction.
|
| L&S, pages 42-44.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
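
Here is a rough Haskell sketch of my own (not anything from L&S) of the
Schoenfinkel algebra just described:  terms built from S, K, I, and
application, one pass of the three identities, and the bracket-abstraction
algorithm that witnesses functional completeness.  The names Term, reduce,
and abstract are mine.

  -- Untyped combinatory terms over S, K, I;  (:@) plays the role
  -- of application (`) in the text.
  data Term = S | K | I | V String | Term :@ Term
    deriving (Eq, Show)

  infixl 9 :@

  -- One pass of the Schoenfinkel identities over a term.
  reduce :: Term -> Term
  reduce (I :@ a)               = a
  reduce ((K :@ a) :@ _)        = a
  reduce (((S :@ f) :@ g) :@ c) = (f :@ c) :@ (g :@ c)
  reduce (a :@ b)               = reduce a :@ reduce b
  reduce t                      = t

  -- Functional completeness via bracket abstraction:  abstract x t
  -- builds a term free of the variable x which, applied to V x and
  -- reduced enough times, gives back t.
  abstract :: String -> Term -> Term
  abstract x (V y) | x == y = I
  abstract x (a :@ b)       = (S :@ abstract x a) :@ abstract x b
  abstract _ t              = K :@ t

For instance, the "polynomial" (V "f" :@ V "x") :@ V "x" abstracts to
S (S (K f) I) I, in effect, and applying that to V "x" and reducing
gives back the original term.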

HOC. Note 15


Notation.  The Greek letters phi and lambda
are represented as !f! and lam, respectively.

| Curry (1930) rediscovered Schoenfinkel's results, but went further in his
| thinking.  He discovered that a finite set of additional identities would
| assure that the element f representing the polynomial !f!(x) was uniquely
| determined.  We shall not reproduce these identities here, but reserve the
| name "Curry algebra" for a Schoenfinkel algebra which satisfies them.
|
| Using the terminology of Church (1941), one writes f as lam_x !f!(x),
| which must then satisfy two equations:
|
| beta.  (lam_x !f!(x))`a  =  !f!(a),
|
|  eta.   lam_x (f`x)      =   f.
|
| (Many mathematicians write x ~> !f!(x) in place of lam_x !f!(x).)
|
| A lambda calculus is a formal language built up from variables x, y, z, ...
| by means of term forming operations (-)`(-) and lam_x (-), the latter
| being assumed to bind all free occurrences of the variable x occurring
| in (-), such that the two given identities hold.  The basic entities
| I, K, S may then be defined formally by:
|
| I  =  lam_x x,
|
| K  =  lam_x lam_y x,
|
| S  =  lam_u lam_v lam_z ((u`z)`(v`z)).
|
| (Actually, Church would have called such a language
|  a lambda-K-calculus and Curry might have called it
|  a lambda-beta-eta-calculus, but never mind.)
|
| L&S, page 44.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
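
Since Haskell is itself a lambda calculus of sorts, the three formal
definitions above transcribe almost verbatim (a sketch of mine, with
backquote application written as ordinary juxtaposition):

  i :: a -> a
  i = \x -> x                        -- I  =  lam_x x

  k :: a -> b -> a
  k = \x -> \y -> x                  -- K  =  lam_x lam_y x

  s :: (a -> b -> c) -> (a -> b) -> a -> c
  s = \u -> \v -> \z -> (u z) (v z)  -- S  =  lam_u lam_v lam_z ((u`z)`(v`z))

  -- beta is just how application computes, (\x -> t) a = t[a/x],
  -- and eta is the fact that \x -> f x denotes the same function as f.

The types that GHC infers here are exactly the typed versions listed in
the next note.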

HOC. Note 16


| Both Curry and Church realized the importance of introducing types into
| combinatory logic or lambda calculus.  To do this one just has to observe
| that, if f has type B <= A and a has type A, then f`a has type B, as already
| pointed out.  In particular, the basic entities I, K, and S, suitably equipped
| with subscripts, should have prescribed types.  Thus I_A, K_A,B, and S_A,B,C
| have types:
|
|     I_A   :     A <= A,
|
|   K_A,B   :    (A <= B) <=  A,
|
| S_A,B,C   :   ((A <= C) <= (B <= C)) <= ((A <= B) <= C),
|
| respectively.
|
| As pointed out in the book by Curry and Feys, these three types are precisely
| the axioms of intuitionistic implicational logic.  Moreover, the rule which
| computes the type of f`a from those of f and a corresponds to modus ponens:
| from B <= A and A one may infer B.  In fact, Schoenfinkel's definition of
| I in terms of K and S is exactly the same as the known proof that A <= A
| may be derived from the other two axioms.
|
| Incidentally, several early texts on propositional logic
| used only implication and negation as primitive connectives,
| having eliminated conjunction and other connectives by suitable
| definitions, again inspired by Occam's razor.  The observation that
| it is more natural to retain conjunction and other connectives as
| primitive is probably due to Gentzen and was made again by Lawvere
| in a categorical context.
|
| Curry and Feys also realized that the proof of Schoenfinkel's version
| of functional completeness was really the same as the proof of the usual
| deduction theorem:  if one can prove B on the assumption A then one can
| prove B <= A without any assumption.  In fact, it asserts that the proof
| of B on the assumption A is "equivalent" to the proof by modus ponens:
|
|  B <= A        A
| -----------------.
|         B
|
| From our viewpoint, Curry's version of functional completeness,
| which insists on the uniqueness of f such that !f!(x) equals f`x,
| then presupposes that entities are not proofs but equivalence
| classes of proofs.
|
| L&S, pages 44-45.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
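
To see the correspondence concretely (a sketch of mine):  reading the
note's "B <= A" as the Haskell function type a -> b, the displayed types
are just the types of the combinators, application is modus ponens, and
Schoenfinkel's definition of I from K and S appears as a derivation of
a -> a from the other two.  The name i' is mine.

  k :: a -> b -> a                           -- K_A,B   :  (A <= B) <= A
  k x _ = x

  s :: (c -> b -> a) -> (c -> b) -> c -> a   -- S_A,B,C, as displayed above
  s f g x = f x (g x)

  -- Modus ponens is application:  from f :: a -> b and x :: a, get f x :: b.

  -- I = S K K;  the type a -> a is the derived axiom A <= A.
  i' :: a -> a
  i' x = s k k x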

HOC. Note 17


| In connection with cartesian closed categories,
| the analogy with propositional logic requires that
| 1, A x B, and B^A be written as T, A & B, and B <= A,
| respectively.  (For other structured categories, the
| senior author had pointed out and exploited a similar
| analogy with certain deductive systems, beginning with
| the so-called "syntactic calculus" (see Lambek, 1961b,
| Appendix 2), which traces the idea back to joint work
| with George D. Findlay in 1956.)  The relation between
| lambda calculi with product types and cartesian closed
| categories then suggests the observation:
|
| types  =  formulas,
|
| terms  =  proofs,
|
| or rather equivalence classes of proofs.  Independently,
| W. Howard in 1969 privately circulated an influential
| manuscript on the equivalence of typed lambda terms
| (there called "constructions") and derivations in
| various calculi, which finally appeared in the
| 1980 Curry Festschrift (see also Stenlund 1972).
|
| L&S, page 45.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 18


| Up to this point we have avoided discussing natural numbers.
| In an untyped lambda calculus natural numbers are easily
| defined (Church 1941).  Writing
|
| f o g  =  lam_x (f`(g`x)),
|
| one regards 2 as the process which assigns to every function f
| its iterate f o f, so 2`f = f o f.  Formally, one defines
|
| 0  =  lam_x I,
|
| 1  =  lam_x x  =  I,
|
| 2  =  lam_x (x o x),
|
| ... .
|
| The successor function and the usual operations
| on natural numbers are defined by
|
| s`n  =  lam_y (y o (n`y)),
|
| m+n  =  lam_y ((m`y) o (n`y)),
|
| m n  =  m o n,
|
| m^n  =  n`m.
|
| Unfortunately, there are difficulties with this as soon as one introduces
| types.  For, if a has type A, then f and g in (f o g)`a both have types
| A^A = B say.  For n`f to make sense, n will have to be of type B^B, and
| for n`m to make sense, m will have to be of type B.  If m and n are to
| have the same type, we are thus led to require that B^B = B, which is
| certainly not true in general, although Dana Scott (1972) showed that
| one may have B^B ~=~ B.
|
| One way to get around this difficulty is to postulate a type N
| of natural numbers, a term 0 of type N, and term forming operations
| s(-) (successor) and i(-, -, -) (iterator) such that s(n) has type N and
| i(a, h, n) has type A for all n of type N, a of type A, and h of type A^A.
| These must satisfy suitable equations to assure that i(a, h, n) means (h^n)`a.
|
| The analogous concept for cartesian closed categories is
| a 'weak natural numbers object':  an object N with arrows
| 0 : 1 -> N and s : N -> N and a process which assigns to all
| arrows a : 1 -> A and h : A -> A an arrow g : N -> A such that
| the following diagram commutes:
|
|             0                   s
|   1 ----------------> N ----------------> N
|   |                   |                   |
|   |                   |                   |
|   |                   | g                 | g
|   |                   |                   |
|   v                   v                   v
|   1 ----------------> A ----------------> A
|             a                   h
|
| Lawvere had defined a (strong) natural numbers object
| to be such that the arrow g : N -> A with the above
| property is unique.
|
| For us, a typed lambda calculus contains by definition the
| structure given by N, 0, s, and i.  In stating Theorem 11.3
| on the equivalence between typed lambda calculus and cartesian
| closed categories, we stipulate that the latter be equipped with
| a weak natural numbers object.  Such categories were first studied
| formally by Marie-France Thibault (1977, 1982), who called them
| "prerecursive categories", although they are implicit in the
| work of logicians, e.g. in Goedel's functionals of finite
| type (1958).
|
| We would have preferred to state Theorem 11.3 for
| strong natural numbers objects in Lawvere's sense.
| Unfortunately, we do not yet know how to handle
| the corresponding notion in typed lambda calculus
| equationally.  As far as we can see, the iterators
| appearing in the literature (e.g. Troelstra 1973)
| mostly correspond to weak natural numbers objects.
| See however Sanchis (1967).
|
| For further historical comments
| the reader is referred to
| the end of Part 1.
|
| L&S, pages 45-47.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
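
Here is a Haskell transcription of the numerals above, together with the
typed iterator that the note reaches for (a sketch of mine;  I use Integer
for the type N, and the names Church, suc, and iter are my own):

  type Church a = (a -> a) -> (a -> a)

  zero, one, two :: Church a
  zero _ = id                        -- 0  =  lam_x I
  one  f = f                         -- 1  =  lam_x x
  two  f = f . f                     -- 2  =  lam_x (x o x)

  suc :: Church a -> Church a
  suc n = \y -> y . n y              -- s`n  =  lam_y (y o (n`y))

  add, mul :: Church a -> Church a -> Church a
  add m n = \y -> m y . n y          -- m+n  =  lam_y ((m`y) o (n`y))
  mul m n = m . n                    -- m n  =  m o n

  -- m^n = n`m is exactly the clause that fails to type here, as the
  -- note explains.  The typed escape route:  a natural numbers type
  -- with zero, successor, and an iterator meaning (h^n)`a.
  iter :: a -> (a -> a) -> Integer -> a
  iter a h n
    | n <= 0    = a
    | otherwise = h (iter a h (n - 1))

For example, two (+1) 0 evaluates to 2, and iter 0 (+1) 5 to 5.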

HOC. Note 19


| 1.  Propositional Calculus as a Deductive System
|
| We recall (Part 0, Definition 1.2) that, for categories,
| a 'graph' consists of two classes and two mappings
| between them:
|
| o--------------o      source       o--------------o
| |              | ----------------> |              |
| |   Arrows     |                   |   Objects    |
| |              | ----------------> |              |
| o--------------o      target       o--------------o
|
| In graph theory the arrows are usually called "oriented edges"
| and the objects "nodes" or "vertices", but in various branches
| of mathematics other words may be used.  Instead of writing
|
| source(f)  =  A,
|
| target(f)  =  B,
|                                   f
| one often writes f : A -> B or A ---> B.  We shall
| look at graphs with additional structure which are
| of interest in logic.
|
| A 'deductive system' is a graph with a specified arrow
|
|          1_A
| R1a.  A -----> A,
|
| and a binary operation on arrows ('composition')
|
|           f           g
|        A ---> B    B ---> C
| R1b.  ----------------------
|                 gf
|              A ----> C
|
| Logicians will think of the objects of a deductive system
| as 'formulas', of the arrows as 'proofs' (or 'deductions'),
| and of an operation on arrows as a 'rule of inference'.
|
| Logicians should note that a deductive system is concerned
| not just with unlabelled entailments or sequents A -> B
| (as in Gentzen's proof theory), but with deductions or
| proofs of such entailments.  In writing f : A -> B
| we think of f as the "reason" why A entails B.
|
| L&S, page 47.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
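
For what it's worth, "proofs as arrows labelled by source and target"
can be rendered directly in Haskell with a GADT (a small sketch of mine,
nothing official);  R1a and R1b become the two constructors:

  {-# LANGUAGE GADTs #-}

  -- A value of type Proof a b stands for a proof f : A -> B.
  data Proof a b where
    Id  :: Proof a a                            -- R1a:  1_A : A -> A
    Cut :: Proof b c -> Proof a b -> Proof a c  -- R1b:  g, f |-> gf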

HOC. Note 20


| A 'conjunction calculus' is a deductive system dealing with truth and
| conjunction.  Thus we assume that there is given a formula 'T' (= true)
| and a binary operation '&' (= and) for forming the conjunction A & B of
| two given formulas A and B.  Moreover, we specify the following additional
| arrows and rules of inference:
|
|           O_A
| R2.    A -----> T,
|
|               p1_A,B
| R3a.   A & B --------> A,
|
|               p2_A,B
| R3b.   A & B --------> B,
|
|           f           g
|        C ---> A    C ---> B
| R3c.  ----------------------.
|           <f, g>
|        C --------> A & B
|
| Here is a sample proof of the so-called commutative law for conjunction:
|
|         p2_A,B               p1_A,B
|  A & B --------> B    A & B --------> A
| ----------------------------------------.
|         <p2_A,B, p1_A,B>
|  A & B ------------------> B & A
|
| The presentation of this proof in tree-form, while instructive,
| is superfluous.  It suffices to denote it by <p2_A,B, p1_A,B>
| or even by <p2, p1> when the subscripts are understood.
|
| Another example is the proof of the associative law
|
| !a!_A,B,C : (A & B) & C -> A & (B & C).
|
| It is given by:
|
| 1.1.  !a!_A,B,C  =  <p1_A,B o p1_A&B,C, <p2_A,B o p1_A&B,C, p2_A&B,C>>
|
| or just by  !a!  =  <p1 p1, <p2 p1, p2>>.
|
| If we compose operations on proofs, we obtain "derived" rules of inference.
| For example, consider the derived rule:
|
|         p1_A,C           f                 p2_A,C           g
|  A & C --------> A    A ---> B      A & C --------> C    C ---> D
| -------------------------------    -------------------------------
|  A & C -> B                         A & C -> D
| ------------------------------------------------------------------.
|         f & g
|  A & C -------> B & D
|
| It asserts that from proofs f and g one can construct the proof
|
| f & g  =  <f p1_A,C, g p2_A,C>.
|
| Thus we may write simply
|
|     f           g
|  A ---> B    C ---> D
| ----------------------.
|         f & g
|  A & C -------> B & D
|
| L&S, pages 47-48.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
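
Continuing the sketch from Note 19 (the Id and Cut constructors are
repeated so this block stands on its own), the conjunction-calculus
rules and the sample proofs become ordinary Haskell values;  the unit
type () plays T and the pair type plays A & B.  All names are mine.

  {-# LANGUAGE GADTs #-}

  data Proof a b where
    Id   :: Proof a a                                -- R1a
    Cut  :: Proof b c -> Proof a b -> Proof a c      -- R1b:  composition
    Bang :: Proof a ()                               -- R2:   O_A : A -> T
    P1   :: Proof (a, b) a                           -- R3a:  p1_A,B
    P2   :: Proof (a, b) b                           -- R3b:  p2_A,B
    Pair :: Proof c a -> Proof c b -> Proof c (a, b) -- R3c:  <f, g>

  -- The commutative law, <p2, p1>:
  swap :: Proof (a, b) (b, a)
  swap = Pair P2 P1

  -- The associative law 1.1, <p1 p1, <p2 p1, p2>>:
  assoc :: Proof ((a, b), c) (a, (b, c))
  assoc = Pair (Cut P1 P1) (Pair (Cut P2 P1) P2)

  -- The derived rule f & g = <f p1, g p2>:
  conj :: Proof a b -> Proof c d -> Proof (a, c) (b, d)
  conj f g = Pair (Cut f P1) (Cut g P2)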

HOC. Note 21


| A 'positive intuitionistic propositional calculus' is a conjunction calculus
| with an additional binary operation '<=' (= if).  Thus, if A and B are formulas,
| so are T, A & B, and A <= B.  (Yes, most people write B => A instead.)  We also
| specify the following new arrow and rule of inference:
|
|                     !e!_A,B
| R4a.  (A <= B) & B ---------> A,
|
|               h
|        C & B ---> A
| R4b.  ----------------.
|           h*
|        C ----> A <= B
|
| Actually we should have written h* = !L!^C_A,B (h),
| but the subscripts are usually understood from context.
|
| We note that from R4b, with the help of R4a, one may derive
|
|            !h!_C,B
| R'4b.   C ---------> (C & B) <= B,
|
|             g
|          D ---> A
| R'4c.  -------------------------------.
|                   g <= 1_B
|         (D <= B) ----------> (A <= B)
|
| To derive these, we put
|
| !h!_C,B     =  (1_C&B)*,
|
| (g <= 1_B)  =  (g !e!_D,B)*.
|
| Conversely, one may derive R4b from R'4b and R'4c by putting
|
| h*  =  (h <= 1_B) !h!_C,B.
|
| For future reference, we also note the following two
| derived rules of inference:
|
|      f
|  A -----> B
| -----------------,
|     #f
|  T -----> B <= A
|
|      g
|  T -----> B <= A
| -----------------,
|      g`
|  A -----> B
|
| where
|
| #f  =  (f p2_1,A)*,
|
| g`  =  !e!_B,A <g O_A, 1_A>.
|
| L&S, pages 48-49.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
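
Read in the category of sets and functions, the new structure R4 is
evaluation and currying, and the two derived rules at the end are the
"name" of a function and its inverse (a sketch;  the names eval, star,
name, and unname are mine):

  -- R4a:  !e!_A,B : (A <= B) & B -> A  is evaluation.
  eval :: (b -> a, b) -> a
  eval (f, y) = f y

  -- R4b:  h |-> h*  is currying.
  star :: ((c, b) -> a) -> c -> (b -> a)
  star h c = \b -> h (c, b)

  -- #f = (f p2_1,A)* : T -> B <= A,  the "name" of f.
  name :: (a -> b) -> () -> (a -> b)
  name f = star (f . snd)

  -- g` = !e!_B,A <g O_A, 1_A> : A -> B,  recovering f from its name.
  unname :: (() -> (a -> b)) -> a -> b
  unname g a = eval (g (), a)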

HOC. Note 22


| An 'intuitionistic propositional calculus' is more than a
| positive one;  it requires also falsehood and disjunction,
| that is, a formula 'F' (= false) and an operation 'v' (= or)
| on formulas, together with the following additional arrows:
|
|           []_A
| R5.    F ------> A,
|
|           k1_A,B
| R6a.   A --------> A v B,
|
|           k2_A,B
| R6b.   B --------> A v B,
|
|                            !z!^C_A,B
| R6c.  (C <= A) & (C <= B) -----------> C <= (A v B).
|
| The last mentioned arrow gives rise to and may be derived from the rule:
|
|            f           g
|         A ---> C    B ---> C
| R'6c.  ----------------------.
|                [f, g]
|         A v B --------> C
|
| Indeed, we may put
|
| [f, g]  =  (!z!^C_A,B <#f, #g>)`.
|
| If we want 'classical' propositional logic, we must also require:
|
| R7.  F <= (F <= A) -> A.
|
| L&S, pages 49-50.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
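
Again reading in sets and functions (a sketch;  the names are mine):
F is the empty type, disjunction is Either, R5 is the unique map out
of the empty type, and the derived rule R'6c is case analysis.

  import Data.Void (Void, absurd)

  -- R5:  []_A : F -> A
  emptyCase :: Void -> a
  emptyCase = absurd

  -- R6a, R6b:  the injections into A v B.
  k1 :: a -> Either a b
  k1 = Left

  k2 :: b -> Either a b
  k2 = Right

  -- R'6c:  [f, g] : A v B -> C  from  f : A -> C  and  g : B -> C.
  copair :: (a -> c) -> (b -> c) -> Either a b -> c
  copair = either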

HOC. Note 23


| 2.  The Deduction Theorem
|
| The usual deduction theorem asserts:
|
| if  A & B |- C  then  A |- C <= B.
|
| This result is here incorporated into R4,
| with the deduction symbol '|-' replaced
| by actual arrows in the appropriate
| deductive system $L$:
|
|  h  : A & B -> C
| ------------------.
|  h* : A -> C <= B
|
| However, at a higher level, the horizontal bar
| functions as a deduction symbol, and we obtain
| a new form of the deduction theorem.  It deals
| with proofs from an 'assumption' x : T -> A.
|
| In other words, we form a new deductive system $L$(x) by adjoining a new
| arrow x : T -> A and talk about proofs !f!(x) : B -> C in this new system.
| More precisely, $L$(x) has the same formulas (= objects) as $L$ and its
| proofs (= arrows) !f!(x) are freely generated from those of $L$ and the
| new arrow x by the appropriate rules of inference (= operations).
| Clearly, if $L$ is a conjunction calculus (positive calculus,
| intuitionistic calculus, classical calculus, respectively),
| so is the new deductive system $L$(x).
|
| Proposition 2.1.  (Deduction Theorem).  In a conjunction,
| positive, intuitionistic, or classical calculus, with every
| proof !f!(x) : B -> C from the assumption x : T -> A there is
| associated a proof f : A & B -> C in $L$ not depending on x.
|
| We write
|
| f  =  !k!_x:A !f!(x),
|
| where the subscript "x : A"
| indicates that x is of type A.
|
| Proof.  [L&S, pages 51-52].
|
| Remark 2.1.  Logicians don't usually talk of an assumption x : T -> A
| if there is a known proof a : T -> A or another assumption y : T -> A,
| but from our algebraic viewpoint this does not matter.
|
| The reader is warned that we do not distinguish notationally
| between composition of proofs g o f in $L$ and in $L$(x).
|
| In $L$
|
| !k!_x:A (gf)  =  g f p2_A,B,
|
| and in $L$(x) it is
|
| !k!_x:A (gf)  =  g p2_A,B <p1_A,B, f p2_A,B>.
|
| L&S, pages 50-52.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 24


| 3.  Cartesian Closed Categories Equationally Presented
|
| A 'category' is a deductive system in which
| the following equations hold between proofs:
|
| E1.  f 1_A  =  f,
|
|      1_B f  =  f,
|
|      (hg)f  =  h(gf),
|
| for all f : A -> B, g : B -> C, h : C -> D.
|
| Thus, from any deductive system one may obtain a category
| by imposing a suitable equivalence relation between proofs.
|
| A 'cartesian category' is both a category
| and a conjunction calculus satisfying the
| additional equations:
|
| E2.   f  =  O_A,  for all f : A -> T.
|
| E3a.  p1_A,B <f, g>  =  f,
|
| E3b.  p2_A,B <f, g>  =  g,
|
| E3c.  <p1_A,B h, p2_A,B h>  =  h,
|
| for all f : C -> A, g : C -> B, h : C -> A & B.
|
| E2 asserts T is a 'terminal object'.
| One usually writes T = 1, and
| we shall do so from now on.
| An equivalent formulation
| of E2 is:
|
| E'2.  1_1    =  O_1,
|
|       O_B f  =  O_A,
|
| for all f : A -> B.
|
| E3 asserts that A & B is a product of A and B
| with projections p1_A,B and p2_A,B.  We shall
| adopt the usual notation A & B = A x B.
|
| As a consequence of E3, let us record the 'distributive law':
|
| <f, g> h  =  <fh, gh>
|
| for all f : C -> A, g : C -> B, h : D -> C.
|
| Proof.  We show this as follows, omitting subscripts:
|
| <f, g> h  =  <p1(<f, g> h), p2(<f, g> h)>
|
|           =  <(p1<f, g>) h, (p2<f, g>) h>
|
|           =  <fh, gh>.
|
| We shall also write
|
| f x g  =  f & g  =  <f p1_A,C, g p2_A,C>,
|
| whenever f : A -> B and g : C -> D, and note
| that x : $A$ x $A$ -> $A$ is a functor (see
| Part 0, Definition 1.3).  Indeed, we have:
|
| 1_A x 1_C  =  <1_A p1_A,C, 1_C p2_A,C>
|
|            =  <p1_A,C, p2_A,C>
|
|            =  <p1_A,C 1_AxC, p2_A,C 1_AxC>
|
|            =  1_AxC,
|
| and, omitting subscripts, by the distributive law,
|
| (f x g)(f' x g')  =  <f p1, g p2> <f' p1, g' p2>
|
|                   =  <f p1 <f' p1, g' p2>, g p2 <f' p1, g' p2>>
|
|                   =  <f f' p1, g g' p2>
|
|                   =  f f' x g g'.
|
| L&S, pages 52-53.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
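
In the category of sets the equations E2 and E3 hold with the one-element
set, the projections fst and snd, and ordinary pairing, and the arrow
f x g is the usual product of functions (a sketch;  the names bang,
pairing, and cross are mine):

  -- E2:  O_A is the unique map to the one-element set.
  bang :: a -> ()
  bang _ = ()

  -- E3:  pairing against the projections fst and snd.
  pairing :: (c -> a) -> (c -> b) -> c -> (a, b)     -- <f, g>
  pairing f g c = (f c, g c)

  -- f x g  =  <f p1, g p2>,  the arrow part of the product functor.
  cross :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
  cross f g = pairing (f . fst) (g . snd)

  -- The distributive law <f, g> h = <fh, gh> then holds pointwise:
  -- pairing f g . h  and  pairing (f . h) (g . h)  agree on every input.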

HOC. Note 25


| A 'cartesian closed category' is a cartesian category $A$ with
| additional structure R4 satisfying the additional equations:
|
| E4a.   !e!_A,B <h* p1_C,B, p2_C,B>    =  h,
|
| E4b.  (!e!_A,B <k  p1_C,B, p2_C,B>)*  =  k,
|
| for all h : C & B -> A,  k : C -> (A <= B).
|
| Thus, a cartesian closed category is
| a positive intuitionistic propositional
| calculus satisfying the equations E1 to E4.
| This illustrates the general principle that
| one may obtain interesting categories from
| deductive systems by imposing an appropriate
| equivalence relation on proofs.
|
| Inasmuch as we have decided to write C & B = C x B,
| we shall also write A <= B = A^B.  The equations E4
| assure that the mapping
|
|                *
| Hom(C x B, A) ---> Hom(C, A^B)
|
| is a one-to-one correspondence.  In fact,
| one has the following universal property
| of the arrow
|
| !e!_A,B : A^B x B -> A:
|
| given any arrow h : C x B -> A, there is
| a unique arrow h* : C -> A^B such that
|
| !e!_A,B (h* x 1_B)  =  h.
|
| The reader who recalls the notion of
| adjoint functor [Part 0, Section 3]
| will recognize that therefore
|
| U_B  =  (-)^B
|
| is right adjoint to the functor
|
| F_B  =  (-) x B : $A$ -> $A$
|
| with coadjunction
|
| !e!_B : F_B U_B -> 1_$A$
|
| defined by
|
| !e!_B (A)  =  !e!_A,B.
|
| Thus, an equivalent description
| of cartesian closed categories
| makes use of the adjunction
|
| !h!_B : 1_$A$ -> U_B F_B
|
| in place of *, where
|
| !h!_B (C)  =  !h!_C,B : C -> (C x B)^B,
|
| and stipulates equations expressing the
| functoriality of U_B and the naturality
| of !e!_B and !h!_B as well as the two
| adjunction equations.  Here:
|
| U_B (f)  =  f^B  =  (f <= 1_B)  =  (f !e!_A,B)*,
|
| for all f : A -> A'.  (For !h!_B see R'4b in Section 1.)
|
| L&S, pages 53-54.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 26


| We shall state another useful equation,
| which may also be regarded as a kind of
| distributive law.
|
| h*k  =  (h <k p1_D,B, p2_D,B>)*,
|
| where h : A x B -> C and k : D -> A.
|
| Proof.  We show this as follows,
| omitting subscripts:
|
| h*k  =  (!e! <h*k p1, p2>)*
|
|      =  (!e! <h*p1, p2> <k p1, p2>)*
|
|      =  (h <k p1, p2>)*.
|
| Quite important is the following bijection,
| which holds in any cartesian closed category.
|
| Hom(A, B)  ~=~  Hom(1, B^A).
|
| Proof.  As in Section 1, with any f : A -> B
| we associate #f : 1 -> B^A, called the 'name'
| of f by Lawvere, given by
|
| #f  =  (f p2_1,A)*,
|
| and with any g : 1 -> B^A we associate
| g` : A -> B, read "g of", given by
|
| g`  =  !e!_B,A <g O_A, 1_A>.
|
| We then calculate
|
| (#f)`  =  f,
|
| #(g`)  =  g.
|
| L&S, pages 54-55.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 27


I return to Part 0 to pick up one or two bits
of indispensable material that I skipped over.

| 3.  Adjoint Functors
|
| Perhaps the most important concept which category theory has helped
| to formulate is that of adjoint functors.  Aspects of this idea were
| known even before the advent of category theory and we shall begin by
| looking at one such.
|
| We recall from Proposition 1.4 that a functor $A$ -> $B$ between two
| preordered sets $A$ = (A, =<) and $B$ = (B, =<) regarded as categories is
| an order preserving mapping F : A -> B, that is, such that, for all elements
| a, a' of A, if a =< a' then F(a) =< F(a').  A functor G : $B$ -> $A$ in the
| opposite direction is said to be 'right adjoint' to F provided, for all
| a in A and b in B,
|
| F(a) =< b  if and only if  a =< G(b).
|
| Classically, a pair of order preserving mappings (F, G) is called
| a covariant 'Galois correspondence' if it satisfies this condition.
|
| Once we have such a Galois correspondence, we see immediately that
| GF : $A$ -> $A$ is a 'closure operation', that is, for all a, a' in A,
|
| a =< GF(a),
|
| GFGF(a) =< GF(a),
|
| if a =< a' then GF(a) =< GF(a').
|
| Similarly, FG : $B$ -> $B$ may be called an 'interior operation':
| it satisfies the conditions dual to the above.
|
| In a preordered set an isomorphism a ~=~ a' just means that
| a =< a' and a' =< a.  (In a 'poset', or 'partially ordered set',
| one has the antisymmetry law:  if a ~=~ a' then a = a'.)  We note
| that it follows from the above that GFGF(a) ~=~ GF(a) and, dually,
| FGFG(b) ~=~ FG(b), for all a in A and b in B.
|
| The most interesting consequence of a Galois correspondence is
| this:  the functors F and G set up a one-to-one correspondence between
| isomorphism classes of "closed" elements a of A such that GF(a) ~=~ a
| and isomorphism classes of "open" elements b of B such that FG(b) ~=~ b.
| We also say that F and G determine an 'equivalence' between the preordered
| set $A$_0 of closed elements of $A$ and the preordered set $B$_0 of open
| elements of $B$.  The following picture illustrates this principle of
| "unity of opposites", which will be generalized later in this section.
|
|                          F
|              ------------------------> 
|          $A$ <------------------------ $B$
|           ^              G              ^
|           |                             |
|           |                             |
|           |                             |
| inclusion |                             | inclusion
|           |                             |
|           |                             |
|           |                             |
|           |                             |
|         $A$_0 <---------------------> $B$_0
|                     equivalence
|
| L&S, page 12.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
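
A small Haskell rendering of the Galois-correspondence data (a sketch of
mine;  Ord instances stand in for the two preorders, and the names fwd,
bwd, closure, and interior are my own):

  -- A pair of monotone maps F : A -> B and G : B -> A.
  data Galois a b = Galois
    { fwd :: a -> b    -- F
    , bwd :: b -> a }  -- G

  -- The defining condition  F(a) =< b  iff  a =< G(b),  at a given a, b.
  adjointAt :: (Ord a, Ord b) => Galois a b -> a -> b -> Bool
  adjointAt gc a b = (fwd gc a <= b) == (a <= bwd gc b)

  -- GF is a closure operation and FG an interior operation.
  closure :: Galois a b -> a -> a
  closure gc = bwd gc . fwd gc

  interior :: Galois a b -> b -> b
  interior gc = fwd gc . bwd gc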

HOC. Note 28


| Before carrying out the promised generalization, let us
| look at a couple of examples of Galois correspondence;
| others will be found in the exercises.
|
| Example G1.  Take both $A$ and $B$ to be (N, =<),
| the set of natural numbers with the usual ordering,
| and let:
|
| F(0)  =  0,
|
| F(a)  =  p_a     =  the a^th prime number, when a > 0,
|
| G(b)  =  !p!(b)  =  the number of primes =< b.
|
| Then F and G form a pair of adjoint functors and
| the "unity of opposites" describes the bi-unique
| correspondence between positive integers and
| prime numbers.
|
| Many examples arise from a binary relation L c X x Y between two sets X and Y.
| Take $A$ = (Pow(X), <c), the set of subsets of X ordered by inclusion ['<c'],
| and  $B$ = (Pow(Y), >c), ordered by inverse inclusion ['>c'], and put
|
| F(A)  =  {y in Y : for all x in A, (x, y) is in L},
|
| G(B)  =  {x in X : for all y in B, (x, y) is in L},
|
| for all A c X and B c Y.
|
| This situation is called a 'polarity';  it gives rise to an isomorphism between
| the lattice $A$_0 of "closed" subsets of X and the lattice $B$_0 of "closed"
| subsets of Y.  (Notice that the open elements of $B$ are closed subsets of Y.)
|
| Example G2.  Take X to be the set of points of a plane, Y the set of half-planes,
| and write "(x, y) in L" for "x in y".  Then, for any set A of points, GF(A) is the
| intersection of all half-planes containing A, in other words, the 'convex hull' of A.
| The "unity of opposites" here asserts that there are two equivalent ways of describing
| a convex set:  by the points in it or by the half-planes containing it.
|
| L&S, page 13.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
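
Example G1 can be spot-checked with a naive prime generator (a sketch of
mine;  the names fPrime, gCount, and checkG1 are made up for the occasion):

  primes :: [Integer]
  primes = sieve [2 ..]
    where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

  -- F:  0 |-> 0,  a |-> the a^th prime for a > 0.
  fPrime :: Integer -> Integer
  fPrime 0 = 0
  fPrime a = primes !! fromIntegral (a - 1)

  -- G:  b |-> the number of primes =< b.
  gCount :: Integer -> Integer
  gCount b = fromIntegral (length (takeWhile (<= b) primes))

  -- The Galois condition  F(a) =< b  iff  a =< G(b),  on a small window.
  checkG1 :: Bool
  checkG1 = and [ (fPrime a <= b) == (a <= gCount b)
                | a <- [0 .. 20], b <- [0 .. 60] ]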

HOC. Note 29


| We shall now generalize the notion of adjoint functor from
| preordered sets to arbitrary categories.  In so doing, we
| shall bow to a notational prejudice of many categorists
| and replace the letter "G" by the letter "U".
| ("U" is for "underlying", "F" for "free".)
|
| Definition 3.1.  An 'adjointness' between categories
| $A$ and $B$ is given by a quadruple (F, U, !h!, !e!),
| where F : $A$ -> $B$ and U : $B$ -> $A$ are functors
| and !h! : 1_$A$ -> UF and !e! : FU -> 1_$B$ are
| natural transformations such that
|
| (U !e!) o (!h! U)  =  1_U,
|
| (!e! F) o (F !h!)  =  1_F.
|
| One says that U is 'right adjoint' to F
| or that F is 'left adjoint' to U and one
| calls !h! and !e! the two 'adjunctions'.
|
| Before going into examples, let us give another formulation of what
| will turn out to be an equivalent concept (in Proposition 3.3 below).
|
| Definition 3.2.  A solution to the 'universal mapping problem'
| for a functor U : $B$ -> $A$ is given by the following data:
| for each object A of $A$ an object F(A) of $B$ and an arrow
| !h!(A) : A -> UF(A) such that, for each object B of $B$ and
| each arrow f : A -> U(B) in $A$, there exists a unique arrow
| f* : F(A) -> B in $B$ such that U(f*)!h!(A) = f.
|
|           o F(A)
|            \
|             \
|  UF(A) o     \
|        ^\     \ f*
|        | \     \
|        |  \     \
|        | U(f*)   v
|        |    \     o  B
|        |     \
|        |      v
| !h!(A) |       o U(B)
|        |      ^
|        |     /
|        |    /
|        |   /  f
|        |  /
|        | /
|        |/
|     A  o
|
| Example U1.  Let $B$ be the category of monoids, $A$ the category of sets,
| U : $B$ -> $A$ the forgetful (= underlying) functor, F(A) the free monoid
| generated by the set A, and !h!(A) the obvious mapping of A into the
| underlying set of the monoid F(A).
|
| Definition 3.2'.  Of special interest is the case of
| Definition 3.2 in which $B$ is a full subcategory of $A$
| and U : $B$ -> $A$ is the inclusion.  Then !h!(A) : A -> F(A)
| may be called the 'best approximation' of A by an object of $B$
| in the sense that, for each arrow f : A -> B with B in $B$, there is
| a unique arrow f* : F(A) -> B such that f*!h!(A) = f.  One then says
| that $B$ is a full 'reflective' subcategory of $A$ with 'reflector' F
| and 'reflection' !h!.
|
| Example U2.  Let $A$ be the category of Abelian groups,
| $B$ the full subcategory of torsion free Abelian groups,
| and F(A) = A/T(A), where T(A) is the torsion subgroup of A.
|
| L&S, pages 13-14.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/
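
Example U1 has a direct Haskell counterpart (a sketch of mine):  the free
monoid on a set is the list type, the unit !h!(A) is the singleton map,
and the unique extension f* is foldMap.

  import Data.Monoid (Sum(..))

  -- !h!(A) : A -> UF(A)
  unit :: a -> [a]
  unit a = [a]

  -- For a monoid B and f : A -> U(B), the unique monoid homomorphism
  -- f* : F(A) -> B with U(f*) . unit = f.
  extend :: Monoid b => (a -> b) -> [a] -> b
  extend = foldMap

  -- e.g. summing a list by mapping into the additive monoid of Integer:
  example :: Sum Integer
  example = extend Sum [1, 2, 3]     -- Sum {getSum = 6}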

HOC. Note 30


| Proposition 3.3.  Given two categories $A$ and $B$, there is a
| one-to-one correspondence between adjointnesses (F, U, !h!, !e!)
| and solutions (F, !h!, *) of the universal mapping problem for
| U : $B$ -> $A$.
|
| Proof.  If (F, U, !h!, !e!) is given, put f* = !e!(B)F(f).
| Conversely, if U and (F, !h!, *) are given, for each f : A -> A',
| put F(f) = (!h!(A')f)* and check that this makes F a functor and
| !h! a natural transformation;  moreover, define !e!(B) = (1_U(B))*.
|
| It follows from symmetry considerations that an adjointness is also
| equivalent to a "co-universal mapping problem", obtained by dualizing
| Definition 3.2.  (A left adjoint to $B$ -> $A$ is a right adjoint to
| $B$^op -> $A$^op.)
|
| There is yet another way of looking at adjoint functors,
| at least when $A$ and $B$ are locally small.
|
| Proposition 3.4.  An adjointness (F, U, !h!, !e!)
| between locally small categories $A$ and $B$ gives
| rise to and is determined by a natural isomorphism:
|
| Hom_$B$ (F(-), -)  ~=~  Hom_$A$ (-, U(-))
|
| between functors $A$^op x $B$ -> Sets.
|
| We leave the proof of this to the reader.
|
| Even if $A$ is not locally small, there is a natural bijection between
| arrows FA -> B in $B$ and arrows A -> UB in $A$.  Logicians may think
| of such a bijection as comprising two rules of inference;  and this
| point of view has been quite influential in the development of
| categorical logic.  An analogous situation in the propositional
| calculus would be the bijection between proofs of the entailments
| C & A |- B and A |- C => B (see Exercise 4 below).  Inasmuch as
| implication is a more sophisticated notion than conjunction,
| the adjointness here explains the emergence of one concept
| from another.  This point of view, due to Lawvere, may be
| summarized by yet another slogan, illustrations of which
| will be found throughout this book (see, for instance,
| Exercise 6 below).
|
| Slogan 4.  Many important concepts in mathematics arise
| as adjoints, right or left, to previously known functors.
|
| We summarize two important properties of adjoint functors,
| which will be useful later.
|
| Proposition 3.5.
|
| 1.  Adjoint functors determine each other uniquely
|     up to natural isomorphisms.
|
| 2.  If (U, F) and (U', F') are pairs of adjoint functors,
|     as in the diagram:
|
|             U'            U
|         -------->     -------->
|     $C$           $B$           $A$,
|         <--------     <--------
|             F'            F
|
| then (UU', F'F) is also an adjoint pair.
|
| L&S, pages 14-15.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Note 31


| 4.  Equivalence of Categories
|
| ...
|
| L&S, pages 16-17.
|
| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

HOC. Higher Order Categorical Logic • Discussion

HOC. Discussion Note 1


MA  = Murray Altheim
L&S = Lambek & Scott

MA: You recommended I read Lambek and Scott's "Introduction to
    Higher Order Categorical Logic", so I ordered a copy from
    Cambridge University Press.  It came over the holidays.
    I suppose I should have heeded the series title:
    "Cambridge Studies in *Advanced* Mathematics",
    because I feel completely stupid.  I can't claim
    to understand more than the first page or two,
    which I wouldn't want to be quizzed on either.
    The stuff I can understand leads me to believe
    it's a very interesting domain, but it also makes
    me think that sometimes I'm just not cut out
    for certain types of thinking, or have received
    literally no training necessary to grok this:

L&S: | Let [squiggle] be the category of Abelian groups and [another_squiggle]
     | the opposite of the category of topological Abelian groups.  Let K be the
     | compact group of the reals modulo the integers: K [equal sign with three
     | lines] ['R' in an outline font]/['Z' in an outline font].  For any abstract
     | Abelian group A, define F(A) as the group of all homomorphisms of A into K,
     | with the topology induced by K.  For any topological Abelian group B, define
     | U(B) as the group of all continuous homomorphisms of B into K.  Then U and F
     | can easily (!!!) be seen to be the object parts of a pair of adjoint functors.
     | Here [squirrelly-A sub 0] is [squirrely-A], while [squirrelly-B sub 0] is the
     | opposite of the category of compact Abelian groups.  The 'unity of opposites'
     | asserts the well-known (!!!) Pontrjagin (no, I did not misspell that) duality
     | between abstract and compact Abelian groups.  The last statement of Proposition
     | 4.2 tells us that the compact Abelian groups form a reflective subcategory of
     | the category of all topological Abelian groups.

MA: This is on page 18.  By page 137 it's pretty much all formulae composed
    of symbols I've never even seen before. *sigh*   This is pretty much
    completely opaque to me, and I can't imagine being able to spend
    the time in the next five years to understand it well enough
    to make any use of it.

Actually, if you could get as far as understanding the definition
of a natural transformation on page 8, that would be a lot.  But
there's no reason to expect that you could do that on your own.
I spent a lot of time on the SUO list trying to get across basic
category-theoretic ways of thinking in concrete contexts without
ever mentioning the legion of officious titles, ideas which folks
would need to grasp before they could understand 1/10 of what RK
is talking about, but they seem to prefer the razzle-dazzle to
the nitty-gritty.

What you ran into is the sort of place where the authors try
to impress people who have had a couple of years of graduate
courses in algebra and topology with the fact that they can
sum up those two years of study in a single paragraph.  But
that is just a side-show bit, and you can ignore all of it.
In my notes to the Ontology List I skipped from page 11 to
page 41 just by way of getting to the logical motivations
a little quicker.

Category theory is really just a study in metaphors.
And, well, metaphors between metaphors (= functors).
And, well, metaphors between functors (= nat.trans).
In one of my first courses in this stuff we got to
do a "creative" final paper, and I wrote an intro
to the main ideas in the form of a science fiction
story.  Probably still have it buried in a basement
box somewhere, but don't know if I could find it now.

The stuff that I append here could provide us with a good couple
of months of study, but then you'd have the most essential bits.

[HOC.  Higher Order Categorical Logic.  Notes 01-07]

HOC. Discussion Note 2


JA  = Jon Awbrey
L&S = Lambek & Scott
MA  = Murray Altheim

JA: Actually, if you could get as far as understanding the definition
    of a natural transformation on page 8, that would be a lot.  But
    there's no reason to expect that you could do that on your own.
    I spent a lot of time on the SUO list trying to get across basic
    category-theoretic ways of thinking in concrete contexts without
    ever mentioning the legion of officious titles, ideas which folks
    would need to grasp before they could understand 1/10 of what RK
    is talking about, but they seem to prefer the razzle-dazzle to
    the nitty-gritty.

MA: Both the razzle-dazzle and the nitty-gritty are snowing me right now.

JA: What you ran into is the sort of place where the authors try
    to impress people who have had a couple of years of graduate
    courses in algebra and topology with the fact that they can
    sum up those two years of study in a single paragraph.  But
    that is just a side-show bit, and you can ignore all of it.
    In my notes to the Ontology List I skipped from page 11 to
    page 41 just by way of getting to the logical motivations
    a little quicker.

MA: So basically, you're saying to ignore the "Introduction to Category Theory"
    section and jump straight to "Cartesian closed categories and Lambda Calculus"?
    After understanding through page 8?

MA: Part of the difficulty is language:  I don't have any experience
    in this particular use of English.  Even simple stuff is not so
    simple without the background, and I tend to not like to guess.
    E.g., the idea that a morphism "sends objects of $A$ to objects
    of $B$ and arrows of $A$ to arrows of $B$" might sound ostensibly
    like some kind of "morphing", but I have no idea what it really
    means in practice.  "sends"?  How is that different from "maps"?

"maps" and "sends" are just synonyms here.

in the beginning one starts with concrete categories:

HOC.    http://suo.ieee.org/ontology/thrd36.html#03373
HOC 1.  http://suo.ieee.org/ontology/msg03373.html

L&S: | Part 0.  Introduction to Category Theory
     |
     | 1.  Categories and Functors
     |
     | In this section we present what our reader is expected
     | to know about category theory.  We begin with a rather
     | informal definition.
     |
     | Definition 1.1.  A 'concrete category' is a collection of two kinds
     | of entities, called 'objects' and 'morphisms'.  The former are sets
     | which are endowed with some kind of structure, and the latter are
     | mappings, that is, functions from one object to another, in some
     | sense preserving that structure.  Among the morphisms, there is
     | attached to each object A the 'identity mapping' 1_A : A -> A
     | such that 1_A(a) = a for all a in A.  Moreover, morphisms
     | f : A -> B and g : B -> C may be 'composed' to produce
     | a morphism gf : A -> C such that (gf)(a) = g(f(a))
     | for all a in A.

this is the checklist for any category:

1.  what are the objects?
2.  what are the arrows?
3.  is there an identity arrow for each object?
4.  is there a composition operation on arrows?

for a concrete category, the objects are sets.
for a concrete category, the arrows are mappings between sets.
(in concrete categories, the arrows are usually called morphisms).

L&S: | Example C1.  The category of 'sets'.  Its objects are
     | arbitrary sets and its morphisms are arbitrary mappings.
     | We call this category "Sets".

Category C1 = Sets.
C1 takes any set to be an object of the category.
C1 takes any mapping between sets to be an arrow.
(this is a case of trivial structure to preserve.)

the next examples are sets plus "structure",
in these cases, something like a "sums table",
a "times table", or a "less than" relation is
defined on the sets of the category and also
preserved by the arrows of the category.

L&S: | Example C2.  The category of 'monoids'.  Its objects are
     | monoids, that is, semigroups with unity element, and its
     | morphisms are homomorphisms, that is, mappings which
     | preserve multiplication (the semigroup operation)
     | and the unity element.

a classic example would be the logarithm mapping from
a domain (D, *) of real numbers under multiplication (*)
to a domain (E, +) of real numbers under addition (+).

1.  log : (D, *) -> (E, +).

    the log function maps the object D to the object E,
    mapping the structure of (*) to the structure of (+).

2.  log(1) = 0.

    the log function maps the multiplicative identity 1
    to the additive identity 0.

3.  log(x * y) = log(x) + log(y)

    one says:  "the image of the product is the sum of the images".
    this describes a form of analogy or metaphor between (*) and (+).
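
(a quick sanity check of the same thing in Haskell, just to see the
morphism condition compute;  with floating point the two sides agree
only up to rounding:)

  homCheck :: Double -> Double -> (Double, Double)
  homCheck x y = (log (x * y), log x + log y)   -- the two components agree
                                                -- (up to rounding)
  -- and log 1 = 0 is the identity-preservation clause.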

L&S: | Example C3.  The category of 'preordered sets'.
     | Its objects are preordered sets, that is, sets
     | with a reflexive and transitive relation on them,
     | and its morphisms are monotone mappings, that is,
     | mappings that preserve this relation.

say that (D, -<) is a reflexive and transitive order relation on D.
say that (E, =<) is a reflexive and transitive order relation on E.

a "monotone" (order-preserving) mapping f : D -> E is one such that:

x -< y  implies  f(x) =< f(y),

again, we can think of f as describing or establishing an analogy or
a metaphor between the ordering (-<) on D and the ordering (=<) on E.

JA: Category theory is really just a study in metaphors.
    And, well, metaphors between metaphors (= functors).
    And, well, metaphors between functors (= nat.trans).
    In one of my first courses in this stuff we got to
    do a "creative" final paper, and I wrote an intro
    to the main ideas in the form of a science fiction
    story.  Probably still have it buried in a basement
    box somewhere, but don't know if I could find it now.

MA: While stories like that sometimes help, they also sometimes
    just mask the actual content.  What would be great would be
    a discussion of this in plain English, if that were possible.

JA: The stuff that I append here could provide us with a good couple
    of months of study, but then you'd have the most essential bits.

MA: ???  You have appended the literal contents of pages 4-8 of the book.

yes, just that much.

MA: And I must say that while the symbols in the book are difficult,
    their transformations into ASCII make it quite a bit harder to
    deal with.

you get used to it.  and it's quick.

MA: Maybe if we just honed in on something at the beginning?
    Say, Example C4, where we see definitions for deductive
    systems and categories.

okay, tomorrow ...

HOC. Discussion Note 3


JP = Jack Park

JP: My sentiments, precisely.

JP: I must say, however, that the book 'Conceptual Mathematics:
    A First Introduction to Categories' by F. William Lawvere
    and Stephen H. Schanuel really does start out with simple diagrams,
    spreadsheet tables, and real-world examples worked out to
    introduce the concepts.  I'm getting a lot from it.

yes, that's a good book.  the reason for tackling the lambek and scott,
though, was because of the connection they make to logic and computation.

JP: What I have asked for is something akin to some real-world problem.
    One that's, at once, simple, and potentially hairy, one that can start
    simple and grow like mad.  Rosen introduced a "metabolism-repair" object
    as the canonical living organism that his teacher Raschevsky was looking
    for.  When he drew it as a commutative diagram, he noticed that reproduction
    fell out for free.  I'd like to understand how that can come to pass.  Then,
    I'd really like to imagine or learn how to take the nodes in that commutative
    diagram and expand on them, turning them into some higher-order organism with
    real, functional, relational components.  In the end, I see that as a prototype
    for a lot of real-world things, like social systems, diseases, and everything
    that's not driven by pure newtonian mechanics.

can you draw me a copy of this here, or supply a link?
i only looked into rosen once many years ago, and have
hysterical amnesia for my time on the complexity list.

HOC. Discussion Note 4


JP = Jack Park

JP: You know, I got to thinking about arrows and identity arrows.
    It occurred to me that a fully fleshed-out topic is one of
    those.  It has identity, and it has arrows that point to
    other topics.

what is the composition?

HOC. Discussion Note 5


JA  = Jon Awbrey
L&S = Lambek & Scott
MA  = Murray Altheim

JA: "maps" and "sends" are just synonyms here.

MA: Okay. (he says, thinking he might be on firm ground but never
    sure if the mud will suddenly slide out from under him ...)

a function f is a set of ordered pairs f = {<x1, y1>, <x2, y2>, ...}.
we say that f associates, maps, sends, etc. x1 to y1, x2 to y2, ... .
and all of those are just conventional idioms for the ordered pairs.
in short, functions have a purely formal existence, but we can use
their forms to describe more concrete things like associations of
ideas, maps in geography, processes that take place in time, etc.

the set of all ordered pairs that you can form by taking
the first element from X and the second element from Y
is called the "cartesian product" X x Y.

we write S c T for "S is a subset of T".

a "2-adic relation" L between X and Y is an arbitrary set of ordered pairs
with the first from X and the second from Y, that is, any subset of X x Y,
so we can say that L c X x Y.

a "function" f : X -> Y is a special case of a 2-adic relation f c X x Y
that has just this one additional property:  every element x in X appears
in one and only one ordered pair of f.

so functions all look like this:

1   2   3   4   5   6
o   o   o   o   o   o   X
 \  |  /    |  /   /
  \ | /     | /   /     f
   \|/      |/   /
o   o   o   o   o       Y
1   2   3   4   5
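
(here's a little Haskell sketch of just the definitions above:  a 2-adic
relation is a set of pairs, and a function is a relation in which every
x in X is the first element of exactly one pair.)

  import qualified Data.Set as Set
  import           Data.Set (Set)

  type Rel a b = Set (a, b)

  -- does each x in xs occur as the first element of exactly one pair of f?
  isFunction :: (Ord a, Ord b) => Set a -> Rel a b -> Bool
  isFunction xs f =
    all (\x -> Set.size (Set.filter ((== x) . fst) f) == 1) (Set.toList xs)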

JA: in the beginning one starts with concrete categories:

HOC.    http://suo.ieee.org/ontology/thrd36.html#03373
HOC 1.  http://suo.ieee.org/ontology/msg03373.html

L&S: | Part 0.  Introduction to Category Theory
     |
     | 1.  Categories and Functors
     |
     | In this section we present what our reader is expected
     | to know about category theory.  We begin with a rather
     | informal definition.
     |
     | Definition 1.1.  A 'concrete category' is a collection of two kinds
     | of entities, called 'objects' and 'morphisms'.  The former are sets
     | which are endowed with some kind of structure, and the latter are
     | mappings, that is, functions from one object to another, in some
     | sense preserving that structure.  Among the morphisms, there is
     | attached to each object A the 'identity mapping' 1_A : A -> A
     | such that 1_A(a) = a for all a in A.  Moreover, morphisms
     | f : A -> B and g : B -> C may be 'composed' to produce
     | a morphism gf : A -> C such that (gf)(a) = g(f(a))
     | for all a in A.

MA: Okay.  Baby steps.  I'm not going to
    pretend to understand something I'm
    not sure I actually do understand.

JA: this is the checklist for any category:

    1.  what are the objects?
    2.  what are the arrows?
    3.  is there an identity arrow for each object?

MA: What does this mean?  That there is an arrow connecting
    the object to some other object establishing its identity?
    What constitutes "identity" in this context?  If we're not
    connecting these objects to things in the real world  (I'm
    assuming given my hand was recently slapped that we're solely
    in the realm of abstract mathematics), that "identity" has some
    mathematical definition.

a concrete category C consists of a set of objects, called Obj(C),
and a set of arrows, or (homo)morphisms, called Arr(C), or Hom(C).
(the extra language is left over from a recent cultural revolution).

in a concrete category C the objects are just sets,
let's say there's the set X = {1, 2, 3, 4, 5} in C
and the set Y = {1, 2, 3, 4, 5, 6} in C.

1   2   3   4   5
o   o   o   o   o       X


o   o   o   o   o   o   Y
1   2   3   4   5   6

the definition then demands that 1_X, the identity arrow for X,
and 1_Y, the identity arrow for Y, must be included in Arr(C),
that is, listed among the arrows of C.

concretely considered, 1_X is the mapping from X to X that
looks like this, reading the ordered pairs down the page:

1   2   3   4   5
o   o   o   o   o   X
|   |   |   |   |
|   |   |   |   |  1_X
|   |   |   |   |
o   o   o   o   o   X
1   2   3   4   5

one writes 1_X (x) = x for all x in X.

concretely considered, 1_Y is the mapping from Y to Y that
looks like this, reading the ordered pairs down the page:

1   2   3   4   5   6
o   o   o   o   o   o   Y
|   |   |   |   |   |
|   |   |   |   |   |  1_Y
|   |   |   |   |   |
o   o   o   o   o   o   Y
1   2   3   4   5   6

One writes 1_Y (x) = x for all x in Y.

You could stop right there and have a valid example of a category,
as the identity arrows trivially compose according to the rules,
1_X o 1_X = 1_X and 1_Y o 1_Y = 1_Y.  (We sometimes use "o" for
emphasis to indicate the composition operation.)

JA: 4.  Is there a composition operation on arrows?

MA: What does this mean? ("composition operation")
    on the arrows rather than the objects?

In a concrete category, composition of arrows
is just the usual composition of functions.

An ordered pair of functions, f : U -> V and g : X -> Y,
in that order, is "composable" if V = X.  That is to say,
the target of f is the source of g.  At this point, folks
will follow different conventions, and even shift paradigms
from one context to the next.  Unfortunately, it is slightly
more popular to do things backasswards, in the following way:

If we have f : X -> Y and g : Y -> Z, then the composition of g on f
is the function written g o f : X -> Z, or just gf : X -> Z, and this
is defined by the equation (g o f)(x) = gf(x) = g(f(x)) for all x in X.

By way of illustration, suppose we have X and Y as above,
and suppose we add the object Z = {1, 2, 3, 4, 5, 6, 7}.

Consider the function f : X -> Y that looks like this:

1   2   3   4   5
o   o   o   o   o       X
 \   \   \   \  |
  \   \   \   \ |       f
   \   \   \   \|
o   o   o   o   o   o   Y
1   2   3   4   5   6

Consider the function g : Y -> Z that looks like this:

1   2   3   4   5   6
o   o   o   o   o   o       Y
 \   \   \   \   \  |
  \   \   \   \   \ |       g
   \   \   \   \   \|
o   o   o   o   o   o   o   Z
1   2   3   4   5   6   7

The composition g o f : X -> Z can be visualized
by following up f with g in the following manner:

1   2   3   4   5
o   o   o   o   o           X
 \   \   \   \  |
  \   \   \   \ |           f
   \   \   \   \|
o   o   o   o   o   o       Y
 \   \   \   \   \  |
  \   \   \   \   \ |       g
   \   \   \   \   \|
o   o   o   o   o   o   o   Z
1   2   3   4   5   6   7

Then you just replace each 2-edge path with
a 1-edge path, ignoring the multiplicities:

    1   2   3   4   5
    o   o   o   o   o       X
     \   \   \   \  |
      \   \   \   \ |     g o f
       \   \   \   \|
o   o   o   o   o   o   o   Z
1   2   3   4   5   6   7
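
Here is the same bookkeeping done with finite lookup tables in Haskell
(a sketch;  the particular tables are just my reading of the ASCII
arrows above, so take them with a grain of salt):

  import qualified Data.Map as Map
  import           Data.Map (Map)

  type Fin = Map Int Int

  fTab, gTab :: Fin
  fTab = Map.fromList [(1,2), (2,3), (3,4), (4,5), (5,5)]          -- f : X -> Y
  gTab = Map.fromList [(1,2), (2,3), (3,4), (4,5), (5,6), (6,6)]   -- g : Y -> Z

  -- "Follow f, then g":  replace each 2-edge path by a 1-edge path.
  compose :: Fin -> Fin -> Fin        -- compose g f  =  g o f
  compose gm fm = Map.mapMaybe (`Map.lookup` gm) fm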
 
JA: For a concrete category, the objects are sets.
    For a concrete category, the arrows are mappings between sets.
    (In concrete categories, the arrows are usually called morphisms).

MA: So this is not simple graph theory, since the graph objects
    are themselves graphs (I'm assuming that a set can be modeled
    as a graph?)

Yes, this is more like an "application of graph theory" to codify
at a fairly high level of abstraction the structure in a category.
(Graph theorists simpliciter usually do not talk this way, and would
insist on calling them "labeled digraphs" or labeled directed graphs.)
Thus, a node in one of these directed graphs stands for a whole set of
elements, and a single directed edge stands for a function between sets.

L&S: | Example C1.  The category of 'sets'.  Its objects are
     | arbitrary sets and its morphisms are arbitrary mappings.
     | We call this category "Sets".

JA: Category C1 = Sets.
    C1 takes any set to be an object of the category.
    C1 takes any mapping between sets to be an arrow.
    (This is a case of trivial structure to preserve.)

MA: This seems fairly clear.

JA: The next examples are sets plus "structure",
    in these cases, something like a "sums table",
    a "times table", or a "less than" relation is
    defined on the sets of the category and also
    preserved by the arrows of the category.

L&S: | Example C2.  The category of 'monoids'.  Its objects are
     | monoids, that is, semigroups with unity element, and its
     | morphisms are homomorphisms, that is, mappings which
     | preserve multiplication (the semigroup operation)
     | and the unity element.

MA: The unity element being "="?  What does the
    phrase "preserve multiplication" mean?

A "semigroup" is a set with a 2-ary operation (*)
subject to an associative law, a*(b*c) = (a*b)*c.
Sometimes you think of (*) as multiplication,
and write a*b as ab.  Other times you think
of (*) as addition, and write a*b as a+b.

The "unity element" is just another name
for the identity element in the system.
If you are thinking of (*) on analogy
with multiplication, you will use "1"
and write 1*a = a = a*1.  If you are
thinking of (*) on analogy with sum,
you use "0" and write 0+a = a = a+0.

Have to break here.  Will pick up at "preserve".

HOC. Discussion Note 6


JA: The next examples are sets plus "structure",
    in these cases, something like a "sums table",
    a "times table", or a "less than" relation is
    defined on the sets of the category and also
    preserved by the arrows of the category.

L&S: | Example C2.  The category of 'monoids'.  Its objects are
     | monoids, that is, semigroups with unity element, and its
     | morphisms are homomorphisms, that is, mappings which
     | preserve multiplication (the semigroup operation)
     | and the unity element.

MA: The unity element being "="?  What does the
    phrase "preserve multiplication" mean?

JA: A "semigroup" is a set with a 2-ary operation (*)
    subject to an associative law, a*(b*c) = (a*b)*c.
    Sometimes you think of (*) as multiplication,
    and write a*b as ab.  Other times you think
    of (*) as addition, and write a*b as a+b.

JA: The "unity element" is just another name
    for the identity element in the system.
    If you are thinking of (*) on analogy
    with multiplication, you will use "1"
    and write 1*a = a = a*1.  If you are
    thinking of (*) on analogy with sum,
    you use "0" and write 0+a = a = a+0.

JA: A classic example would be the logarithm mapping from
    a domain (D, *) of real numbers under multiplication (*)
    to a domain (E, +) of real numbers under addition (+).

    1.  log : (D, *) -> (E, +).

        The log function maps the object D to the object E,
        mapping the structure of (*) to the structure of (+).

    2.  log(1) = 0.

        The log function maps the multiplicative identity 1
        to the additive identity 0.

    3.  log(x * y) = log(x) + log(y).

        One says:  "the image of the product is the sum of the images".
        This describes a form of analogy or metaphor between (*) and (+).

MA: I *think* I understand this.

That figure of speech -- called "chiasma" or "chiasmus" in literary circles --
is one of the recognizable signatures by which you may know that a morphism
has set its hand to the work.  Here, the word "image" refers to the morphic
image, that is, the functional result of the structure-preserving function.

Let's try to get at the notion of morphisms as "structure-preserving maps".
Suppose we have two structured sets (X, L) and (Y, M) and a map f : X -> Y.
What does it mean that f maps the structure L on X to the structure M on Y?

The use of the word "preserve" for a correspondence established between two
structures will make more sense if you remember that the paradigmatic case
is one where both L and M are thought of under the same name, say (*), (+),
(=<), etc., even if that is strictly speaking an act of great abstraction
wrapped in a figure of hardly heard homophony.

The ingredients of a potential morphism are as follows:

   1.  We have a set X with a certain "structure" L that is defined on it.
       It could be a 2-adic relation L c X x X that has the properties of an
       order relation, or it could be a 3-adic relation L c X x X x X that
       is associated with a 2-ary operation like addition, multiplication,
       or any one of several 2-ary logical connectives.

   2.  We have a set Y with a comparable structure M that is defined on it.

For the sake of a concrete example, let's say that both L and M are 3-adic
relations of the kind that are associated with 2-ary operations.  Thus we
can write (X, L) = (X, *) and (Y, M) = (Y, +), where L c X^3 and M c Y^3.
As a generic name for the result of an operation, I'll use "resultant".

   3.  We are given a mapping f : X -> Y, and we would like to test whether
       f maps the structure L on X to the structure M on Y, in which case
       we will bow to tradition and say that f preserves the respective
       attachments of structure in the passage from X to Y.

Here is one way to formulate the property that we need to test.
In order to say that f : X -> Y preserves the form of L in the
form of M, the following equation must hold for all u, v in X.

   f(u * v)  =  f(u) + f(v)

In the idiom that is commonly used, we are asking whether the
following parable, properly interpreted, is a constant truth:

   The image of the resultant is the resultant of the images.

In order to read this right, you have to keep in mind that
"image of" means "f evaluated at", the first "resultant"
refers to L or (*) evaluated on a pair u, v in X, and
the second "resultant" refers to M or (+) evaluated
on the corresponding pair f(u), f(v) in Y.
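
For what it's worth, the test takes only a few lines of Python once X is
finite and the operations are given as 2-place functions.  The names and
the toy example below are mine, not anything from the text:

   def preserves(f, star, plus, X):
       # "The image of the resultant is the resultant of the images":
       # f(u * v) == f(u) + f(v) for all u, v in the finite set X.
       return all(f(star(u, v)) == plus(f(u), f(v)) for u in X for v in X)

   # A toy case:  strings under concatenation, numbers under addition,
   # f = length.  Then len(u + v) == len(u) + len(v).
   X = ["", "a", "ab", "abc"]
   print(preserves(len, lambda u, v: u + v, lambda r, s: r + s, X))   # True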

Saved by the dinner bell ...
I will use the interval
to rustle up some kinds
of pictures that might
help with this mess.

HOC. Discussion Note 7


Here is a simple example of a morphism f : (X, L) -> (Y, M).

Let X be the integers, X = {..., -3, -2, -1, 0, 1, 2, 3, ...},
and let L c X^3 be the 3-adic relation on X whose 3-tuples we
commonly represent by means of the following "addition table":

...       ...           ...           ...           ...           ...      ...
...  [-2,  2,  0]  [-1,  2,  1]  [ 0,  2,  2]  [ 1,  2,  3]  [ 2,  2,  4]  ...
...  [-2,  1, -1]  [-1,  1,  0]  [ 0,  1,  1]  [ 1,  1,  2]  [ 2,  1,  3]  ...
...  [-2,  0, -2]  [-1,  0, -1]  [ 0,  0,  0]  [ 1,  0,  1]  [ 2,  0,  2]  ...
...  [-2, -1, -3]  [-1, -1, -2]  [ 0, -1, -1]  [ 1, -1,  0]  [ 2, -1,  1]  ...
...  [-2, -2, -4]  [-1, -2, -3]  [ 0, -2, -2]  [ 1, -2, -1]  [ 2, -2,  0]  ...
...       ...           ...           ...           ...           ...      ...

The entries in the table have the form [x_1, x_2, x_3], where x_1 + x_2 = x_3.

Let Y be the integers modulo 2, to wit, Y = {0, 1},
and take M c Y^3 as the 3-adic relation on Y whose
3-tuples are given by the following addition table:

  + | 0   1     
 ---o---o---o
  0 | 0 | 1 |
    o---o---o
  1 | 1 | 0 |
    o---o---o

The column heads give y_1, the row heads give y_2,
and the entries in the table give y_3 = y_1 + y_2.

The obvious morphism in this case is the map f : X -> Y
that sends all even integers in X to the element 0 in Y
and sends all odd integers in X to the element 1 in Y.

We need to check that "the image of the sum is the sum of the images",
otherwise formulated, that f(x_1 + x_2) = f(x_1) + f(x_2), where the
first "+" uses the 1st table and the second "+" uses the 2nd table.

But all this just means that:
Evens plus Evens are Even,
Evens plus Odds are Odd,
Odds plus Evens are Odd,
Odds plus Odds are Even,
which is clear enough.

In summary, f gives the parity of an integer, 0 for Even, 1 for Odd,
and the parity of the integer sum is the mod two sum of the parities.
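
Since X here is infinite, a program can only spot-check the condition over
a finite window, but a sketch in Python makes the parity claim tangible:

   # f gives the parity of an integer:  0 for Even, 1 for Odd.
   f = lambda x: x % 2

   # Spot-check "the image of the sum is the sum of the images"
   # over a finite window of X, the sum on the right being mod 2.
   window = range(-10, 11)
   print(all(f(x1 + x2) == (f(x1) + f(x2)) % 2
             for x1 in window for x2 in window))   # True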

Finally, observe that "structure-preserving" does not imply that all
of the structure is preserved, but only an identifiable aspect of it.

Parity On, Dude!

HOC. Discussion Note 8


MW = Matthew West

MW: First, thank you for this excellent tutorial.
    Even I can more or less follow you.

Thanks, and I will share the thanks all round,
as this form of semi-auto-tutorial is largely
dependent on penetrating questions from the
participants to see through the f o g.

MW: Can I chip in a question here.  You talk about
    structure-preserving functions below.  Is that
    the same as an isomorphism?  If not what is
    the difference?

The full official name of a morphism is a "homomorphism",
which was chosen to suggest "same form, more or less",
whereas "isomorphism" means "same form, exactly".

An "isomorphism" f : X -> Y is a special case of
a homomorphism from X to Y where f is "bijective",
that is, in some of the other language that gets
used here, f is both "injective" ("one-to-one")
and "surjective" ("onto").  (That's the English
"onto", not the Greek "onto-", by the way.)

That's how one thinks of it in concrete categories,
anyway, where the objects are just garden variety
sets and all the arrows are ordinary functions.
In categories more abstractly viewed, that is,
purely in terms of the axiomatic properties
that they exemplify, it is usual to give
more elegant definitions of isomorphism.

Lambek & Scott give an abstract definition of isomorphism on page 7.

HOC.    http://suo.ieee.org/ontology/thrd36.html#03373
HOC 6.  http://suo.ieee.org/ontology/msg03381.html

L&S: | Definition 1.5.  An arrow f : A -> B in a category
     | is called an 'isomorphism' if there is an arrow
     | g : B -> A such that gf = 1_A and fg = 1_B.
     | One writes A ~=~ B to mean that such an
     | isomorphism exists and says that
     | A is 'isomorphic' with B.

For instance, look at the example of a concrete morphism that I gave next:

HOC Discussion.    http://suo.ieee.org/ontology/thrd37.html#05262
HOC Discussion 7.  http://suo.ieee.org/ontology/msg05268.html

Here we have a morphism f : X -> Y, where X is an infinite set
and Y is a finite set, so f cannot possibly be an isomorphism.
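
In the concrete setting, the bijective reading can be checked mechanically.
Here is a Python sketch (the finite example data is mine) that builds the
inverse arrow g and verifies both composites:

   # A finite arrow f : X -> Y, written as a lookup table.
   X = {1, 2, 3}
   Y = {"a", "b", "c"}
   f = {1: "a", 2: "b", 3: "c"}

   injective  = len(set(f.values())) == len(f)     # "one-to-one"
   surjective = set(f.values()) == Y               # "onto"

   if injective and surjective:
       g = {y: x for x, y in f.items()}            # the inverse arrow g : Y -> X
       print(all(g[f[x]] == x for x in X))         # g o f = 1_X  ->  True
       print(all(f[g[y]] == y for y in Y))         # f o g = 1_Y  ->  True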

HOC. Discussion Note 9


JA  = Jon Awbrey
L&S = Lambek & Scott
MA  = Murray Altheim

L&S: | Example C3.  The category of 'preordered sets'.
     | Its objects are preordered sets, that is, sets
     | with a transitive and reflexive relation on them,
     | and its morphisms are monotone mappings, that is,
     | mappings which preserve this relation.

JA: Say that (D, -<) is a reflexive and transitive order relation on D.
    Say that (E, =<) is a reflexive and transitive order relation on E.

JA: A "monotone" (order-preserving) mapping f : D -> E is one such that:

MA: Damn, the choices of words are so
    strange.  "Monotone" to me has to
    do with frequencies of sound.

The ties that bind our Fab Four -- Arithmetic, Geometry, Music, Physics --
into such a tight band go way way back, but here the musician in them
borrows if not quite covers the tune of a physical tension, to wit,
the stretch of a singular cord across the redounding monochord,
and though we might vie to re-dub our monotonous theme with
monoscalar variations, we'd still have to face the music.

JA: x -< y  implies  f(x) =< f(y),

JA: Again, we can think of f as describing or establishing an analogy or
    a metaphor between the ordering (-<) on D and the ordering (=<) on E.
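
Spelled out over finite tables, the monotonicity test looks like this in
Python -- a sketch with toy preorders of my own choosing:

   # (D, -<) and (E, =<) as finite preorders, each given by its related pairs.
   D = {1, 2, 3}
   before = {(x, y) for x in D for y in D if x <= y}      # -< on D: ordinary <=

   E = {"lo", "hi"}
   below = {("lo", "lo"), ("lo", "hi"), ("hi", "hi")}     # =< on E

   # A candidate map f : D -> E.
   f = {1: "lo", 2: "lo", 3: "hi"}

   # Monotone:  x -< y  implies  f(x) =< f(y).
   print(all((f[x], f[y]) in below for (x, y) in before))   # True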

MA: "Analogy" and "metaphor"?  Simile?  Synonym?  I suppose any field
    borrows terms from other fields, but the cognitive dissonance
    for outsiders is pretty high, sorta like learning Bulgarian.
    No, actually, like learning Dutch.

No, and I'm guessing that it's probably become more obvious by now,
the use of the term "analogy" -- what Aristotle called "paradigm",
that's Greek for "side-show" -- is quite precise in describing
a correspondence of formal structure between two domains.  But
we'll be seeing lots more examples of that before we're done.

JA: Category theory is really just a study in metaphors.
    And, well, metaphors between metaphors (= functors).
    And, well, metaphors between functors (= nat.trans).
    In one of my first courses in this stuff we got to
    do a "creative" final paper, and I wrote an intro
    to the main ideas in the form of a science fiction
    story.  Probably still have it buried in a basement
    box somewhere, but don't know if I could find it now.

Incidentally, there's lots of formal recognition of this theme
in the AI literature.  A couple of examples that come to mind
would be the joint work of Holland, Holyoak, Nisbett, Thagard
on induction and related inference processes, and also the work
of Forbus, Gentner, Stevens, et al. on analogy and mental models.

MA: And I must say that while the symbols in the book
    are difficult, their transformations into ASCII
    make it quite a bit harder to deal with.

JA: You get used to it.  And it's quick.

MA: I'm guessing you have *somewhere* provided that ASCII mapping.
    It helps that I've got Lambek and Scott in front of me, as this
    is one of the first times I've seen the equivalents of your ASCII
    given full font and glyph printing.  Like $A$, I would never have
    guessed what it looked like. I can't imagine what some of those
    squiggles look like in ASCII.

At the time, I was using bang-bars like !a! for Greek characters
and scrip-bars like $A$ for script (or Fraktur or Gothic) letters.
But these days I try to get by with one level of fanciness, using
bang-bars for both Greek and script, and leaving the resolution of
character to the developing context and the discerning reader's eye.

Most of the other mark-up is pretty standard:  carets to mark superscripts,
like X^3, underscores to mark subscripts, like x_2, single quotes to mark
italics, like 'this'.  The isomorphism symbol I transcribe like so, ~=~,
otherwise there's a risk that readers would read "~=" as "not equal".
Plus, no sense trying to be too pretty, as it's obviously a book
that everybody will want to buy sooner or later, anyway.

HOC. Discussion Note 10


While I recover my strength for the imminent trek through Emyn Muil,
here's a conglomerate of concrete material on the relations between
various species of functions and relations in general:

RIG.  Relations In General.  http://suo.ieee.org/ontology/thrd14.html#04721

01.  http://suo.ieee.org/ontology/msg04721.html
02.  http://suo.ieee.org/ontology/msg04722.html
03.  http://suo.ieee.org/ontology/msg04723.html
04.  http://suo.ieee.org/ontology/msg04724.html

A slightly more leisurely introduction to category theory
and a useful supplement to Lambek & Scott can be found in
Mac Lane's 'Categories for the Working Mathematician',
some excerpts from which are collected here:
 
CAT.  Category Theory.  http://suo.ieee.org/ontology/thrd14.html#04789
CAT.  Category Theory.  Links 01-23.

HOC. Discussion Note 11


I will now introduce a number of different ways of
looking at morphisms as structure preserving maps.

Let's suppose we have three functions, f : X -> Y,
G : X x X -> X, and H : Y x Y -> Y, that satisfy
the following equation for all pairs u, v in X.

   f(G(u, v))  =  H(f(u), f(v))

Our morphic leitmotif can be rubricized by way of the following slogan:

   The image of the resultant is the resultant of the images.

Here, f produces the images, G the first resultant, and H the second resultant.

Figure 1 presents a diagram of the situation in question.

o-----------------------------------------------------------o
|                                                           |
|                       G           H                       |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    /  |  v     /  |  v                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    \   \   \ ^   ^   ^                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         f   f   f                         |
|                                                           |
o-----------------------------------------------------------o
Figure 1.  Structure Preserving Map f : (X, G) -> (Y, H)

Figure 1 uses arrows to indicate the relational domains at which
each of the relations f, G, H happens to be functional.  That is,
it is more like the feathers of the arrows that serve to mark the
relational domains at which the relations f, G, H are functional,
but it would take yet another construction to make this precise,
as the feathers are not uniquely appointed but many splintered.

Table 2 shows the constraint matrix version of the same thing.

Table 2.  f(G(u, v))  =  H(f(u), f(v))
o---------o---------o---------o---------o
|         %    f    |    f    |    f    |
o=========o=========o=========o=========o
|    G    %    X    |    X    |    X    |
o---------o---------o---------o---------o
|    H    %    Y    |    Y    |    Y    |
o---------o---------o---------o---------o

One way to read this Table is in terms of the informational redundancies
that it schematizes.  In particular, it can be read to say that when one
satisfies the constraint in the G row, along with all of the constraints
in the f columns, then the constraint in the H row is automatically true.
This is the same information as the equation, f(G(u, v)) = H(f(u), f(v)).
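
Reading G and H as explicit sets of triples, the same redundancy can be
checked relationally.  Here is a Python sketch with small tables of my
own making:

   # G c X^3 and H c Y^3 as sets of triples (u, v, w), w being the resultant.
   X = range(4)
   Y = range(2)
   G = {(u, v, (u + v) % 4) for u in X for v in X}     # addition mod 4 on X
   H = {(r, s, (r + s) % 2) for r in Y for s in Y}     # addition mod 2 on Y

   f = lambda x: x % 2                                 # the candidate morphism

   # Satisfying the G row and the f columns forces the H row:
   # whenever (u, v, w) is in G, (f(u), f(v), f(w)) must be in H.
   print(all((f(u), f(v), f(w)) in H for (u, v, w) in G))   # True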

HOC. Discussion Note 12


JA = Jon Awbrey
JP = Jack Park

Re: HOC Discussion 3.  http://suo.ieee.org/ontology/msg05264.html
In: HOC Discussion.    http://suo.ieee.org/ontology/thrd37.html#05262

JP: My sentiments, precisely.

JP: I must say, however, that the book 'Conceptual Mathematics:
    A First Introduction to Categories' by F. William Lawvere
    and Stephen H. Schanuel really does start out with simple diagrams,
    spreadsheet tables, and real-world examples worked out to
    introduce the concepts.  I'm getting a lot from it.

JA: yes, that's a good book.  the reason for tackling the lambek and scott,
    though, was because of the connection they make to logic and computation.

JP: What I have asked for is something akin to some real-world problem.
    One that's, at once, simple, and potentially hairy, one that can start
    simple and grow like mad.  Rosen introduced a "metabolism-repair" object
    as the canonical living organism that his teacher Raschevsky was looking
    for.  When he drew it as a commutative diagram, he noticed that reproduction
    fell out for free.  I'd like to understand how that can come to pass.  Then,
    I'd really like to imagine or learn how to take the nodes in that commutative
    diagram and expand on them, turning them into some higher-order organism with
    real, functional, relational components.  In the end, I see that as a prototype
    for a lot of real-world things, like social systems, diseases, and everything
    that's not driven by pure newtonian mechanics.

JA: can you draw me a copy of this here, or supply a link?
    i only looked into rosen once many years ago, and have
    hysterical amnesia for my time on the complexity list.

JP: If you visit:

    http://www.people.vcu.edu/~mikuleck/PPRISS3.html

    and scroll down just past mid way,
    you will see 10C6 drawn and discussed.

this is bizarre!  i was just now thinking about all the
old chestnuts about (ill, well)-posedness in connection
with the topic of information that was recently revived
on the global brain list.

okay, this helps.  on looking into mikulecky's rosen, i begin
to remember that picture of the "modeling relation" and some
of the discussions that we had about it.  mostly i remember
a passel of misencounters of the usual 2-adic/3-adic kind.

will get back to this ...

HOC. Higher Order Categorical Logic • Work Area

HOC. Discussion Work 1


Let's go back and take another look at what is most likely every
child's first example of a non-trivial morphism, namely, any one
of the mappings f : Reals -> Reals (roughly speaking) that are
commonly known as "logarithm functions", where you get to pick
your favorite base.  In this case, we have G(u, v) = u * v,
H(r, s) = r + s, and the defining formula of the logarithm
map f, namely, f(G(u, v)) = H(fu, fv) comes out looking
like f(u * v) = f(u) + f(v), writing a star (*) and
a plus sign (+) for the ordinary 2-ary operations
of arithmetical multiplication and arithmetical
summation, respectively.

o-----------------------------------------------------------o
|                                                           |
|                      {*}         {+}                      |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    /  |  v     /  |  v                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    \   \   \ ^   ^   ^                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         f   f   f                         |
|                                                           |
o-----------------------------------------------------------o
Figure 3.  Logarithm Arrow f : (X, *) -> (Y, +)

Thus, where the "image" f is the logarithm map,
the first resultant G is the numerical product,
and the second resultant H is the numerical sum,
one then obtains the immemorial mnemonic motto:

| The image of the product is the sum of the images.
|
| f(u * v)  =  f(u) + f(v)
|
| f(G(u, v))  =  H(fu, fv)
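
Numerically, the motto can be spot-checked within floating point tolerance.
A Python sketch, picking the natural log and a handful of positive reals:

   import math

   # G(u, v) = u * v on the positive reals, H(r, s) = r + s, f = natural log.
   sample = [0.5, 1.0, 2.0, 3.0, 10.0]

   print(all(math.isclose(math.log(u * v), math.log(u) + math.log(v))
             for u in sample for v in sample))   # True, up to rounding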

HOC. Discussion Work 2


LOR. Note 57

I'm going to elaborate a little further on the subject
of arrows, morphisms, or structure-preserving maps, as
a modest amount of extra work at this point will repay
ample dividends when it comes time to revisit Peirce's
"number of" function on logical terms.

The "structure" that is being preserved by a structure-preserving map
is just the structure that we all know and love as a 3-adic relation.
Very typically, it will be the type of 3-adic relation that defines
the type of 2-ary operation that obeys the rules of a mathematical
structure that is known as a "group", that is, a structure that
satisfies the axioms for closure, associativity, identities,
and inverses.

For example, in the previous case of the logarithm map J, we have the data:

| J : R <- R (properly restricted)
|
| K : R <- R x R, where K(r, s) = r + s
|
| L : R <- R x R, where L(u, v) = u . v

Real number addition and real number multiplication (suitably restricted)
are examples of group operations.  If we write the sign of each operation
in braces as a name for the 3-adic relation that constitutes or defines
the corresponding group, then we have the following set-up:

| J : {+} <- {.}
|
| {+} c R x R x R
|
| {.} c R x R x R

In many cases, one finds that both groups are written with the same
sign of operation, typically ".", "+", "*", or simple concatenation,
but they remain in general distinct whether considered as operations
or as relations, no matter what signs of operation are used.  In such
a setting, our chiasmatic theme may run a bit like these two variants:

| The image of the sum is the sum of the images.
|
| The image of the product is the product of the images.

Figure 22 presents a generic picture for groups G and H.

o-----------------------------------------------------------o
|                                                           |
|                       G           H                       |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    v  |  \     v  |  \                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    ^   ^   ^ /   /   /                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         J   J   J                         |
|                                                           |
o-----------------------------------------------------------o
Figure 22.  Group Homomorphism J : G <- H

In a setting where both groups are written with a plus sign,
perhaps even constituting the very same group, the defining
formula of a morphism, J(L(u, v)) = K(Ju, Jv), takes on the
shape J(u + v) = Ju + Jv, which looks very analogous to the
distributive multiplication of a sum (u + v) by a factor J.
Hence another popular name for a morphism:  a "linear" map.
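
A Python sketch of that "linear" shape, taking one and the same group on
both sides, namely the integers mod 12 under addition, with J the map that
doubles (the example is mine):

   Z12 = range(12)

   J = lambda x: (2 * x) % 12        # "multiplication by the factor J = 2"

   # The defining formula J(u + v) = J(u) + J(v), all arithmetic mod 12.
   print(all(J((u + v) % 12) == (J(u) + J(v)) % 12
             for u in Z12 for v in Z12))   # True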

HOC. Higher Order Categorical Logic • Document History

Ontology List (Oct 2001)

  1. http://web.archive.org/web/20081204200346/http://suo.ieee.org/ontology/msg03373.html
  2. http://web.archive.org/web/20081204200607/http://suo.ieee.org/ontology/msg03375.html
  3. http://web.archive.org/web/20081204200947/http://suo.ieee.org/ontology/msg03376.html
  4. http://web.archive.org/web/20081121223306/http://suo.ieee.org/ontology/msg03377.html
  5. http://web.archive.org/web/20070302103422/http://suo.ieee.org/ontology/msg03378.html
  6. http://web.archive.org/web/20070302103431/http://suo.ieee.org/ontology/msg03381.html
  7. http://web.archive.org/web/20070302103442/http://suo.ieee.org/ontology/msg03383.html
  8. http://web.archive.org/web/20070302103500/http://suo.ieee.org/ontology/msg03384.html
  9. http://web.archive.org/web/20070302175225/http://suo.ieee.org/ontology/msg03392.html
  10. http://web.archive.org/web/20070302103242/http://suo.ieee.org/ontology/msg03393.html
  11. http://web.archive.org/web/20070302103505/http://suo.ieee.org/ontology/msg03394.html
  12. http://web.archive.org/web/20070302103230/http://suo.ieee.org/ontology/msg03395.html
  13. http://web.archive.org/web/20070302103517/http://suo.ieee.org/ontology/msg03396.html
  14. http://web.archive.org/web/20070302103303/http://suo.ieee.org/ontology/msg03398.html
  15. http://web.archive.org/web/20081121223647/http://suo.ieee.org/ontology/msg03399.html
  16. http://web.archive.org/web/20070302103539/http://suo.ieee.org/ontology/msg03400.html
  17. http://web.archive.org/web/20070302103550/http://suo.ieee.org/ontology/msg03401.html
  18. http://web.archive.org/web/20070302103559/http://suo.ieee.org/ontology/msg03402.html
  19. http://web.archive.org/web/20070302103609/http://suo.ieee.org/ontology/msg03403.html
  20. http://web.archive.org/web/20070302103619/http://suo.ieee.org/ontology/msg03404.html
  21. http://web.archive.org/web/20070302103630/http://suo.ieee.org/ontology/msg03405.html
  22. http://web.archive.org/web/20070302103639/http://suo.ieee.org/ontology/msg03406.html
  23. http://web.archive.org/web/20070302103650/http://suo.ieee.org/ontology/msg03409.html
  24. http://web.archive.org/web/20081121223918/http://suo.ieee.org/ontology/msg03410.html
  25. http://web.archive.org/web/20070302103712/http://suo.ieee.org/ontology/msg03411.html
  26. http://web.archive.org/web/20070302103722/http://suo.ieee.org/ontology/msg03412.html
  27. http://web.archive.org/web/20080906120100/http://suo.ieee.org/ontology/msg03415.html
  28. http://web.archive.org/web/20070302103742/http://suo.ieee.org/ontology/msg03416.html
  29. http://web.archive.org/web/20070302103752/http://suo.ieee.org/ontology/msg03417.html
  30. http://web.archive.org/web/20070302103802/http://suo.ieee.org/ontology/msg03418.html

HOC. Higher Order Categorical Logic • Discussion History

Inquiry List (Jan 2004)

  1. http://web.archive.org/web/20061013235830/http://stderr.org/pipermail/inquiry/2004-January/001037.html
  2. http://web.archive.org/web/20061013235743/http://stderr.org/pipermail/inquiry/2004-January/001038.html
  3. http://web.archive.org/web/20040331105109/http://stderr.org/pipermail/inquiry/2004-January/001039.html
  4. http://web.archive.org/web/20061014000144/http://stderr.org/pipermail/inquiry/2004-January/001040.html
  5. http://web.archive.org/web/20061013235834/http://stderr.org/pipermail/inquiry/2004-January/001041.html
  6. http://web.archive.org/web/20040331104547/http://stderr.org/pipermail/inquiry/2004-January/001042.html
  7. http://web.archive.org/web/20061013235923/http://stderr.org/pipermail/inquiry/2004-January/001043.html
  8. http://web.archive.org/web/20040331104447/http://stderr.org/pipermail/inquiry/2004-January/001044.html
  9. http://web.archive.org/web/20061013235859/http://stderr.org/pipermail/inquiry/2004-January/001045.html
  10. http://web.archive.org/web/20061013235531/http://stderr.org/pipermail/inquiry/2004-January/001047.html
  11. http://web.archive.org/web/20061013235946/http://stderr.org/pipermail/inquiry/2004-January/001050.html
  12. http://web.archive.org/web/20061013235926/http://stderr.org/pipermail/inquiry/2004-January/001052.html

Ontology List (Jan 2004)

  1. http://web.archive.org/web/20070302153905/http://suo.ieee.org/ontology/msg05262.html
  2. http://web.archive.org/web/20070302153917/http://suo.ieee.org/ontology/msg05263.html
  3. http://web.archive.org/web/20070302133411/http://suo.ieee.org/ontology/msg05264.html
  4. http://web.archive.org/web/20070302164529/http://suo.ieee.org/ontology/msg05265.html
  5. http://web.archive.org/web/20070302164538/http://suo.ieee.org/ontology/msg05266.html
  6. http://web.archive.org/web/20070302164548/http://suo.ieee.org/ontology/msg05267.html
  7. http://web.archive.org/web/20070302164558/http://suo.ieee.org/ontology/msg05268.html
  8. http://web.archive.org/web/20070302164608/http://suo.ieee.org/ontology/msg05270.html
  9. http://web.archive.org/web/20070302164618/http://suo.ieee.org/ontology/msg05271.html
  10. http://web.archive.org/web/20070302164638/http://suo.ieee.org/ontology/msg05273.html
  11. http://web.archive.org/web/20070302164659/http://suo.ieee.org/ontology/msg05277.html
  12. http://web.archive.org/web/20070302154121/http://suo.ieee.org/ontology/msg05285.html

INF. Information Flow

INF. Note 1

MOD. Model Theory

MOD. Note 1

| Model Theory
|
| 1.  Introduction
|
| 1.1.  What Is Model Theory?
|
| Model theory is the branch of mathematical logic which deals with the
| relation between a formal language and its interpretations, or models.
| We shall concentrate on the model theory of first-order predicate logic,
| which may be called "classical model theory".
|
| Let us now take a short introductory tour of model theory.
| We begin with the models which are structures of the kind which
| arise in mathematics.  For example, the cyclic group of order 5,
| the field of rational numbers, and the partially-ordered structure
| consisting of all sets of integers ordered by inclusion, are models
| of the kind we consider.  At this point we could, if we wish, study
| our models at once without bringing the formal language into the
| picture.  We would then be in the area known as universal algebra,
| which deals with homomorphisms, substructures, free structures,
| direct products, and the like.  The line between universal
| algebra and model theory is sometimes fuzzy;  our own
| usage is explained by the equation:
|
|       universal algebra  +  logic  =  model theory.
|
| To arrive at model theory, we set up our formal language, the
| first-order logic with identity.  We specify a list of symbols
| and then give precise rules by which sentences can be built up
| from the symbols.  The reason for setting up a formal language is
| that we wish to use the sentences to say things about the models.
| This is accomplished by giving a basic 'truth definition', which
| specifies for each pair consisting of a sentence and a model one
| of the truth values 'true' or 'false'.
|
| The truth definition is the bridge connecting the formal language with
| its interpretation by means of models.  If the truth value "true" goes
| with the sentence !p! and model !A!, we say that !p! is 'true' in !A!
| and also that !A! is a 'model' of !p!.  Otherwise we say that !p! is
| 'false' in !A! and that !A! is not a model of !p!.  Moreover, we say
| that !A! is a 'model' of a set !S! of sentences iff !A! is a model
| of each sentence in the set !S!.
|
| Chang & Keisler, 'Model Theory', pages 1-2.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 2

| 1.  Introduction
|
| 1.1.  What Is Model Theory? (cont.)
|
| What kinds of theorems are proved in model theory?
| We can already give a few examples.  Perhaps the earliest
| theorem in model theory is Löwenheim's theorem (Löwenheim, 1915):
| If a sentence has an infinite model, then it has a countable model.
| Another classical result is the compactness theorem, due to Gödel (1930)
| and Malcev (1936):  If each finite subset of a set !S! of sentences has a
| model, then the whole set !S! has a model.  As a third example, we may state
| a more recent result, due to Morley (1965).  Let us say that a set !S! of
| sentences is 'categorical' in power !a! iff there is, up to isomorphism,
| only one model of !S! of power !a!.  Morley's theorem states that, if
| !S! is categorical in one uncountable power, then !S! is categorical
| in every uncountable power.
|
| These theorems are typical results of model theory.  They say something
| negative about the "power of expression" of first-order predicate logic.
| Thus Löwenheim's theorem shows that no consistent sentence can imply
| that a model is uncountable.  Morley's theorem shows that first-order
| predicate logic cannot, as far as categoricity is concerned, tell
| the difference between one uncountable power and another.  And the
| compactness theorem has been used to show that many interesting
| properties of models cannot be expressed by a set of first-order
| sentences -- for instance, there is no set of sentences whose
| models are precisely all the finite models.
|
| The three theorems we have stated also say something positive about the
| existence of models having certain properties.  Indeed, in almost all
| of the deeper theorems in model theory the key to the proof is to
| construct the right kind of a model.  For instance, look again
| at Löwenheim's theorem.  To prove that theorem, we must begin
| with an uncountable model of a given sentence and construct
| from it a countable model of the sentence.  Likewise,
| to prove the compactness theorem we must construct
| a single model in which each sentence of !S! is
| true.  Even Morley's theorem depends vitally
| on the construction of a model.  To prove
| it we begin with the assumption that
| !S! has two different models of one
| uncountable power and construct
| two different models of every
| other uncountable power.
|
| Chang & Keisler, 'Model Theory', page 2.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 3

| 1.  Introduction
|
| 1.1.  What Is Model Theory? (cont.)
|
| There are a small number of extremely important ways in which models
| have been constructed.  For example, for various purposes they can
| be constructed from individual constants, from functions, from
| Skolem terms, or from unions of chains.  These constructions
| give the subject of model theory unity.  To a large extent,
| we have organized this book according to these ways of
| constructing models.
|
| Another point which gives model theory unity is
| the distinction between 'syntax' and 'semantics'.
| Syntax refers to the purely formal structure of the
| language -- for instance, the length of a sentence
| and the collection of symbols occurring in a sentence,
| are syntactical properties.  Semantics refers to the
| interpretation, or meaning, of the formal language --
| the truth or falsity of a sentence in a model is
| a semantical property.  As we shall soon see,
| much of model theory deals with the interplay
| of syntactical and semantical ideas.
|
| We now turn to a brief historical sketch.
| The mathematical world was forced to observe
| that a theory may have more than one model in the
| 19th century, when Bolyai and Lobachevsky developed
| non-Euclidean geometry, and Riemann constructed a model
| in which the parallel postulate was false but all the
| other axioms were true.  Later in the 19th century,
| Frege formally developed the predicate logic, and
| Cantor developed the intuitive set theory in which
| our models live.
|
| Model theory is a young subject.  It was not clearly
| visible as a separate area of research in mathematics
| until the early 1950's.  However, its historical roots
| go back to the older subjects of logic, universal algebra,
| and set theory -- and some of the early work, such as
| Löwenheim's theorem, is now classified as model theory.
| Other important early developments which contributed to
| the theory are:  the extension of Löwenheim's theorem by
| Skolem (1920) and Tarski;  the completeness theorem of
| Gödel (1930) and its generalization by Malcev (1936);
| the characterization of definable sets of real numbers,
| the rigorous definition of the truth of a sentence
| in a model, and the study of relational systems by
| Tarski (1931, 1933, 1935a);  the construction of a
| nonstandard model of number theory by Skolem (1934);
| and the study of equational classes initiated by
| Birkhoff (1935).  Model theory owes a great deal to
| general methods which were originally developed for
| special purposes in older branches of mathematics.
| We shall come across many instances of this in our
| book;  to mention just one, the important notion
| of a saturated model (Chapter 5) goes back to the
| !h!_!a! [eta sub alpha]-structures in the theory
| of simple order, due to Hausdorff (1914).  The
| subject grew rapidly after 1950, stimulated by
| the papers of Henkin (1949), Tarski (1950), and
| Robinson (1950).  The phrase "theory of models"
| is due to Tarski (1954).  Today the literature in
| the subject is quite extensive.  There is a rather
| complete bibliography in Addison, Henkin, and Tarski
| (1965).  In recent years, the theory of models has been
| applied to obtain significant results in other fields,
| notably set theory, algebra, and analysis.  However,
| until now only a tiny part of the potential strength
| of model theory has been used in such applications.
| It will be interesting to see what happens when
| (and if) the full strength is used.
|
| Chang & Keisler, 'Model Theory', pages 2-4.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 4

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic
|
| In our introduction, Section 1.1, we gave a general idea of the
| flavor of model theory, but we were not yet ready to give many
| details.  We shall now come down to earth and give a rigorous
| treatment of model theory for a very simple formal language,
| sentential logic (also known as propositional calculus).
| We shall quickly develop this "toy" model theory along
| lines parallel to the much deeper model theory for
| predicate logic.  The basic ideas are the decision
| procedure via truth tables, due to Post (1921),
| and Lindenbaum's theorem with the compactness
| theorem which follows.  This section will
| give a preview of what lies ahead in
| our book.
|
| We are assuming (see Preface) that the reader is already
| thoroughly familiar with sentential, and even predicate,
| logic.  Thus we shall feel free to proceed at a fairly
| rapid pace.  Nevertheless, we shall start from scratch,
| in order to show what sentential logic looks like when
| it is developed in the spirit of model theory.
|
| Classical sentential logic is designed to study a set $S$ of simple statements,
| and the compound statements built up from them.  At the most intuitive level,
| an intended interpretation of these statements is a "possible world", in
| which each statement is either true or false.  We wish to replace these
| intuitive interpretations by a collection of precise mathematical objects
| which we may use as our models.  The first thing which comes to mind is
| a function F which associates with each simple statement S one of the
| truth values "true" or "false".  Stripping away the inessentials,
| we shall instead take a model to be a subset A of $S$;  the idea
| is that S in A indicates that the simple statement S is true,
| and S not in A indicates that the simple statement S is false.
|
| 1.2.1.  By a 'model' A for $S$ we simply mean a subset A of $S$.
|
| Thus the set of all models has the power 2^|$S$|.  Several relations and
| operations between models come to mind; for example, A c B, $S$ - A, and
| the intersection |^|_(i in I) A_i of a set {A_i : i in I} of models.
| Two distinguished models are the empty set Ø and the set $S$ itself.
|
| Chang & Keisler, 'Model Theory', page 4.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 5

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| We now set up the sentential logic as a formal language.
| The symbols of our language are as follows:
|
| 1.  Connectives '&' (and), '~' (not).
|
| 2.  Parentheses '(' and ')'.
|
| 3.  A nonempty set $S$ of sentence symbols.
|
| Intuitively, the sentence symbols stand for simple statements,
| and the connectives &, ~ stand for the words used to combine
| simple statements into compound statements.  Formally,
| the 'sentences' of $S$ are defined as follows:
|
| 1.2.2.  [Definition of a 'sentence']
|
| 1.  Every sentence symbol S is a sentence.
|
| 2.  If p is a sentence, then (~p) is a sentence.
|
| 3.  If p, q are sentences, then (p & q) is a sentence.
|
| 4.  A finite sequence of symbols is a sentence
|     only if it can be shown to be a sentence by
|     a finite number of applications of (1, 2, 3).
|
| Chang & Keisler, 'Model Theory', page 5.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 6

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| Our definition of a sentence of $S$ may be
| restated as a recursive definition based on
| the length of a finite sequence of symbols:
|
|    A single symbol is a sentence iff it is a sentential symbol;
|
|    A sequence p of symbols of length n > 1 is a sentence
|    iff there are sentences q and r of length less than n
|    such that p is either (~q) or (q & r).
|
| Alternatively, our definition may be restated in set-theoretical terms:
|
|    The set of all sentences of $S$ is the least set !S!
|    of finite sequences of symbols of $S$ such that each
|    sentence symbol S belongs to !S! and, whenever q, r
|    are in !S!, then (~q), (q & r) belong to !S!.
|
| No matter how we may think of sentences, the important thing is that
|'properties of sentences can only be established through an induction
| based on 1.2.2'.  More precisely, to show that every sentence p has
| a given property P, we must establish three things:
|
| 1.  Every sentence symbol S has the property P.
|
| 2.  If p is (~q) and q has the property P,
|     then p has the property P.
|
| 3.  If p is (q & r) and q, r have the property P,
|     then p has the property P.
|
| The reader may check his [or her] understanding
| of this point by proving through induction that
| every sentence p has the same number of right
| parentheses as it has left parentheses.
|
| How many sentences of $S$ are there?  This depends on the number
| of sentence symbols S in $S$.  Each sentence is a finite sequence
| of symbols.   If the set $S$ is finite or countable, then there
| are countably many sentences of $S$.  Of course, not every finite
| sequence of symbols is a sentence;  for instance, (S_0 & (~S_5))
| is a sentence, but & & ) S_3 and S_0 & ~S_5 are not.  If the set
| $S$ of sentence symbols has uncountable cardinal !a!, then the
| set of sentences of $S$ also has power !a!.
|
| Chang & Keisler, 'Model Theory', pages 5-6.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 7

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| We shall introduce abbreviations to our language in the usual way,
| in order to make sentences more readable.  The symbols 'v' (or),
| '=>' (implies), and '<=>' (if and only if) are abbreviations
| defined as follows:
|
|    (p v q)     for   (~((~p) & (~q))),
|
|    (p => q)    for   ((~p) v q),
|
|    (p <=> q)   for   ((p => q) & (q => p)).
|
| Of course, v, =>, and <=> could just as well have been
| included in our list of symbols as three more connectives.
| However, there are certain advantages to keeping our list of
| symbols short.  For instance, 1.2.2 and proofs by induction
| based on it are shorter this way.  At the other extreme,
| we could have managed with only a single connective,
| whose English translation is "neither ... nor ...".
| We did not do this because "neither ... nor ..."
| is a rather unnatural connective.
|
| Another abbreviation which we shall adopt is to
| leave out unnecessary parentheses.  For instance,
| we shall never bother to write outer parentheses in
| a sentence -- thus ~S is our abbreviation for (~S).
| We shall follow the commonly accepted usage in dropping
| other parentheses.  Thus ~ is considered more binding than
| & and v, which in turn are more binding than => and <=>.
| For instance, ~p v q => r & p means ((~p) v q) => (r & p).
|
| Hereafter we shall use the single symbol $S$ to denote both the
| set of sentence symbols and the language built on these symbols.
| There is no fear of confusion in this double usage since the
| language is determined uniquely, modulo the connectives,
| by the sentence symbols.
|
| Chang & Keisler, 'Model Theory', pages 6-7.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 8

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| We are now ready to build a bridge between the language $S$ and its models,
| with the definition of the truth of a sentence in a model.  We shall express
| the fact that a sentence p is true in a model A succinctly by the special
| notation:
|
|    A |= p.
|
| The relation A |= p is defined as follows:
|
| 1.2.3.  [Definition of A |= p, that is, A is a 'model' of p, or p 'holds' in A]
|
| 1.  If p is a sentence symbol S, then A |= p holds if and only if S is in A.
|
| 2.  If p is q & r, then A |= p if and only if both A |= q and A |= r.
|
| 3.  If p is ~q, then A |= p iff it is not the case that A |= q.
|
| When A |= p, we say that p is 'true' in A, or that p 'holds' in A, or
| that A is a 'model' of p.  When it is not the case that A |= p, we say
| that p is 'false' in A, or that p 'fails' in A.  The above definition of
| the relation A |= p is an example of a recursive definition based on 1.2.2.
| The proof that the definition is unambiguous for each sentence p is, of course,
| a proof by induction based on 1.2.2.
|
| Chang & Keisler, 'Model Theory', page 7.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
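
To see Definition 1.2.3 in action, here is a minimal Python sketch.
Sentences are nested tuples built from sentence symbols (strings), a model A
is just a set of symbols, and the evaluator follows the three clauses of the
definition.  The representation is mine, not Chang & Keisler's:

   # A sentence symbol is a string;  ("not", q) stands for (~q);
   # ("and", q, r) stands for (q & r).

   def holds(A, p):
       """A |= p, with the model A given as a set of sentence symbols."""
       if isinstance(p, str):                 # clause 1:  p is a sentence symbol S
           return p in A
       if p[0] == "and":                      # clause 2:  p is q & r
           return holds(A, p[1]) and holds(A, p[2])
       if p[0] == "not":                      # clause 3:  p is ~q
           return not holds(A, p[1])
       raise ValueError("not a sentence: %r" % (p,))

   # Example:  p = (S0 & (~S5)), A = {S0}.
   p = ("and", "S0", ("not", "S5"))
   print(holds({"S0"}, p))          # True:  S0 is in A and S5 is not
   print(holds({"S0", "S5"}, p))    # False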

MOD. Note 9

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| An especially important kind of sentence is a 'valid sentence'.
| A sentence p is called 'valid', in symbols, |= p, iff p holds in
| all models for $S$, that is, iff A |= p for all A.  Some notions
| closely related to validity are mentioned in the exercises.
|
| At first glance, it seems that we have to examine uncountably many
| different infinite models A in order to find out whether a sentence p
| is valid.  This is because validity is a semantical notion, defined in
| terms of models.  However, as the reader surely knows, there is a simple
| and uniform test by which we can find out in only finitely many steps
| whether or not a given sentence p is valid.
|
| This decision procedure for validity is based on a syntactical notion,
| the notion of a tautology.  Let p be a sentence such that all the sentence
| symbols which occur in p are among the n + 1 symbols S_0, S_1, ..., S_n.
| Let a_0, a_1, ..., a_n be a sequence made up of the two letters 't', 'f'.
| We shall call such a sequence an 'assignment'.
|
| 1.2.4.  The 'value' of a sentence p for the assignment a_0, ..., a_n
|         is defined recursively as follows:
|
| 1.  If p is the sentence symbol S_m, m =< n, then the value of p is a_m.
|
| 2.  If p is ~q, then the value of p is the opposite of the value of q.
|
| 3.  If p is q & r, then the value of p is t if the values of q and r
|     are both t, and otherwise the value of p is f.
|
| Notice how similar Definitions 1.2.3 and 1.2.4 are.  The only
| essential difference is that 1.2.3 involves an infinite model A,
| while 1.2.4 involves only a finite assignment a_0, ..., a_n.
|
| 1.2.5.  Let p be a sentence and let S_0, ..., S_n
|         be all the sentence symbols occurring in p.
|         The sentence p is said to be a 'tautology',
|         in symbols, |- p, iff p has the value t
|         for every assignment a_0, ..., a_n.
|
| We shall use both of the symbols |= and |- in many
| ways throughout this book.  To keep things straight,
| remember this:
|
|    |=  is used for semantical ideas,
|
|    |-  is used for syntactical ideas.
|
| The value of a sentence p for an assignment a_0, ..., a_n may be very easily
| computed.  We first find the values of the sentence symbols occurring in p
| and then work our way through the smaller sentences used in building up
| the sentence p.  A table showing the value of p for each possible
| assignment a_0, ..., a_n is called a 'truth table' of p.  We shall
| assume that truth tables are already quite familiar to the reader,
| and that he [or she] knows how to construct a truth table of a
| sentence.  Truth tables provide a simple and purely mechanical
| procedure to determine whether a sentence p is a tautology --
| simply write down the truth table for p and check to see
| whether p has the value t for every assignment.
|
| 1.2.6.  Proposition.  Suppose that all the sentence symbols occurring in p
|         are among S_0, S_1, ..., S_n.  Then the value of p for an assignment
|         a_0, a_1, ..., a_n, ..., a_(n+m) is the same as the value of p for
|         the assignment a_0, a_1, ..., a_n.
|
| Chang & Keisler, 'Model Theory', pages 7-8.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
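
Continuing the same toy representation, the truth-table test of 1.2.4 and
1.2.5 runs the recursive value computation over every assignment to the
sentence symbols occurring in p.  A Python sketch, mine rather than the
book's:

   from itertools import product

   def symbols(p):
       """The set of sentence symbols occurring in the sentence p."""
       if isinstance(p, str):
           return {p}
       return set().union(*(symbols(q) for q in p[1:]))

   def value(p, a):
       """The value of p (True for t, False for f) under the assignment a."""
       if isinstance(p, str):
           return a[p]
       if p[0] == "and":
           return value(p[1], a) and value(p[2], a)
       if p[0] == "not":
           return not value(p[1], a)

   def is_tautology(p):
       """|- p :  p has the value t for every assignment to its symbols."""
       syms = sorted(symbols(p))
       return all(value(p, dict(zip(syms, row)))
                  for row in product([True, False], repeat=len(syms)))

   # (S0 v ~S0), written with & and ~ only:  ~((~S0) & (~(~S0))).
   excluded_middle = ("not", ("and", ("not", "S0"), ("not", ("not", "S0"))))
   print(is_tautology(excluded_middle))    # True
   print(is_tautology("S0"))               # False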

MOD. Note 10

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| We now prove the first of a series of theorems
| which state that a certain syntactical condition
| is equivalent to a semantical condition.
|
| 1.2.7.  Theorem.  (Completeness Theorem).
|
|         |- p  if and only if  |= p.
|
|         In words, a sentence is a tautology
|         if and only if it is valid.
|
| Proof.  Let p be a sentence and let all the sentence symbols in p
|         be among S_0, ..., S_n.  Consider an arbitrary model A.
|         For m = 0, 1, ..., n, put a_m = t if S_m is in A,
|         and a_m = f if S_m is not in A.  This gives us
|         an assignment a_0, a_1, ..., a_n.  We claim:
|
|         1.  A |= p if and only if the value of p for
|             the assignment a_0, a_1, ..., a_n is t.
|
|         This can be readily proved by induction.  It is immediate
|         if p is a sentence symbol S_m.  Assuming that (1) holds
|         for p = q and for p = r, we see at once that (1) holds
|         for p = ~q and p = q & r.
|
|         Now let S_0, ..., S_n be all the sentence symbols occurring in p.
|         If p is a tautology, then by (1), p is valid.  Since every assignment
|         a_0, a_1, ..., a_n can be obtained from some model A, it follows from (1)
|         that, if p is valid, then p is a tautology.  -|
|
| Our decision procedure for |- p now can be used to decide whether p is valid.
| Several times we shall have an occasion to use the fact that a particular
| sentence is a tautology, or is valid.  We shall never take the trouble
| actually to give the proof that a sentence of $S$ is valid, because
| the proof is always the same -- we simply look at the truth table.
|
| Chang & Keisler, 'Model Theory', pages 8-9.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 11

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| Let us now introduce the notion of
| a formal deduction in our logic $S$.
|
| The 'Rule of Detachment' (or 'Modus Ponens') states:
|
|       From q and q => p, infer p.
|
| We say that p is 'inferred' from q and r
| by detachment iff r is the sentence q => p.
|
| Now consider a finite or infinite set !S! of $S$.
|
| A sentence p is 'deducible' from !S!, in symbols, !S! |- p,
| iff there is a finite sequence q_0, q_1, ..., q_n of sentences
| such that p = q_n, and each sentence q_m is either a tautology,
| belongs to !S!, or is inferred from two earlier sentences of
| the sequence by detachment.  The sequence q_0, q_1, ..., q_n
| is called a 'deduction' of p from !S!.  Notice that p is
| deducible from the empty set of sentences if and only if
| p is a tautology.
|
| We shall say that !S! is 'inconsistent'
| iff we have !S! |- p for all sentences p.
| Otherwise, we say that !S! is 'consistent'.
|
| Finally, we say that !S! is 'maximal consistent' iff
| !S! is consistent, but the only consistent set of
| sentences which includes !S! is !S! itself.
|
| The proposition below contains facts which
| can be found in most elementary logic texts.
|
| 1.2.8.  Proposition.
|
|         1.  If  !S! is consistent
|             and !C! is the set of all
|             sentences deducible from !S!,
|             then !C! is consistent.
|
|         2.  If  !S! is maximal consistent
|             and !S! |- p, then p is in !S!.
|
|         3.  !S! is inconsistent if and only if
|             !S! |- S & ~S  (for any S in $S$).
|
|         4.  Deduction Theorem.
|
|             If !S! |_| {q} |- p, then !S! |- q => p.
|
| Chang & Keisler, 'Model Theory', pages 9-10.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
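
Still in the same toy representation (and writing Sigma for the set !S!),
here is a Python sketch of a checker for "q_0, ..., q_n is a deduction of p
from !S!":  each step must be a tautology, a member of !S!, or follow from
two earlier steps by detachment, with q => p spelled out in the primitive
connectives as ~((~(~q)) & (~p)):

   from itertools import product

   def symbols(p):
       return {p} if isinstance(p, str) else set().union(*(symbols(q) for q in p[1:]))

   def value(p, a):
       if isinstance(p, str):
           return a[p]
       if p[0] == "and":
           return value(p[1], a) and value(p[2], a)
       if p[0] == "not":
           return not value(p[1], a)

   def is_tautology(p):
       syms = sorted(symbols(p))
       return all(value(p, dict(zip(syms, row)))
                  for row in product([True, False], repeat=len(syms)))

   def implies(q, p):
       """q => p, unabbreviated:  (~q) v p,  that is,  ~((~(~q)) & (~p))."""
       return ("not", ("and", ("not", ("not", q)), ("not", p)))

   def is_deduction(seq, Sigma):
       """Check that seq = q_0, ..., q_n is a deduction from the set Sigma."""
       for m, q in enumerate(seq):
           earlier = seq[:m]
           by_detachment = any(implies(qk, q) in earlier for qk in earlier)
           if not (q in Sigma or is_tautology(q) or by_detachment):
               return False
       return True

   # Example (mine):  from Sigma = {S0, S0 => S1}, deduce S1 by detachment.
   S0, S1 = "S0", "S1"
   Sigma = {S0, implies(S0, S1)}
   print(is_deduction([S0, implies(S0, S1), S1], Sigma))   # True
   print(is_deduction([S1], Sigma))                        # False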

MOD. Note 12

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| 1.2.9.  Lemma.  (Lindenbaum's Theorem).
|
|         Any consistent set !S! of sentences can be enlarged
|         to a maximal consistent set !C! of sentences.
|
| Proof.  Let us arrange all of the sentences of $S$ in a list:
|
|         p_0,  p_1,  p_2,  ...,  p_!a!,  ...
|
|         The order in which we list them is immaterial,
|         as long as the list associates in a one-one
|         fashion an ordinal number with each sentence.
|
|         We shall form an increasing chain
|         of consistent sets of sentences:
|
|         !S!  =  !S!_0  c  !S!_1  c  !S!_2  c  ...  c  !S!_!a!  c  ...
|
|         If !S! |_| {p_0} is consistent, define !S!_1  =  !S! |_| {p_0}.
|
|         Otherwise, define !S!_1  =  !S!.
|
|         At the !a!^th stage, we define:
|
|         1.  !S!_(!a! + 1)  =  !S!_!a! |_| {p_!a!}
|
|             if !S!_!a! |_| {p_!a!} is consistent;
|
|         2.  Otherwise define:
|
|             !S!_(!a! + 1)  =  !S!_!a!.
|
|         At limit ordinals !a! take unions:
|
|         !S!_!a!  =  |_|^(!b! < !a!) !S!_!b!.
|
|         Now let !C! be the union of all the sets !S!_!a!.
|
|         We claim that !C! is consistent.
|
|         Suppose not.
|
|         Then there is a deduction
|
|         q_0,  q_1,  ...,  q_u
|
|         of the sentence S & ~S from !C!, (see Proposition 1.2.8).
|
|         Let r_1, ..., r_v be all the sentences in !C! which are
|         used in this deduction.  We may choose !a! so that all
|         of  r_1, ..., r_v belong to !S!_!a!.  But this means
|         that !S!_!a! is inconsistent (by Proposition 1.2.8),
|         which is a contradiction.
|
|         Having shown that !C! is consistent, we next claim that !C! is
|         maximal consistent.  For suppose !D! is consistent and !C! c !D!.
|         Let p_!a! be in !D!.  Then !S!_!a! |_| {p_!a!} is consistent, hence:
|
|         !S!_(!a! + 1)  =  !S!_!a! |_| {p_!a!}.
|
|         Thus p_!a! is in !C!, and hence !D! = !C!.  -|
|
| Chang & Keisler, 'Model Theory', page 10.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 13

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| 1.2.10.  Lemma.  Suppose !C! is a maximal
|          consistent set of sentences in $S$.
|
|          Then:
|
|          1.  For each sentence p, exactly one of
|              the sentences p, ~p belongs to !C!.
|
|          2.  For each pair of sentences {p, q},
|              we have that p & q belongs to !C!
|              if and only if both p and q
|              belong to !C!.
|
| We leave the proof as an exercise.
|
| Now consider a set !S! of sentences of $S$.
|
| We shall say that A is a 'model' of !S!,
| in symbols,
|
|       A |= !S!,
|
| iff every sentence p in !S! is true in A.
|
| !S! is said to be 'satisfiable' iff it has at least one model.
|
| We now prove the most important theorem of sentential logic,
| which is a criterion for a set !S! to be satisfiable.
|
| 1.2.11.  Theorem.  (Extended Completeness Theorem).
|
|          A set !S! of sentences of $S$ is consistent
|          if and only if !S! is satisfiable.
|
| Proof.   Assume first that !S! is satisfiable, and let A |= !S!.
|
|          We show that every sentence deducible from !S! holds in A.
|
|          Let q_0, q_1, ..., q_n be a deduction of q_n from !S!.
|
|          Let m =< n.
|
|          If   q_m is in !S! or
|          if   q_m is a tautology,
|          then q_m holds in A.
|
|          If q_m is inferred from two sentences
|          q_k and q_k => q_m which hold in A,
|          then q_m must clearly hold in A.
|
|          It follows by induction on m that each of
|          the sentences q_0, q_1, ..., q_n holds in A.
|          Since S & ~S does not hold in A, it is not
|          deducible from !S!, so !S! is consistent.
|
|          Now assume that !S! is consistent.
|
|          By Lindenbaum's theorem we enlarge !S!
|          to a maximal consistent set !C!.
|
|          We now construct a model of !S!.
|
|          Let A be the set of all sentence symbols S in $S$
|          such that S is in !C!.  We show by induction that,
|          for each sentence p, we have:
|
|          1.  p in !C!  if and only if  A |= p.
|
|          By definition, (1) holds when p is a sentence symbol S_n.
|
|          Lemma 1.2.10.1 guarantees that,
|          if   (1) holds when p =  q,
|          then (1) holds when p = ~q.
|
|          Lemma 1.2.10.2 guarantees that,
|          if   (1) holds when p = q and when p = r,
|          then (1) holds when p = q & r.
|
|          From (1), it follows that A |= !C!,
|          and since !S! c !C!, that A |= !S!.  -|
|          
| We can obtain a purely semantical corollary.
|
| !S! is said to be 'finitely satisfiable' iff
| every finite subset of !S! is satisfiable.
|
| 1.2.12.  Corollary.  (Compactness Theorem).
|
|          If   !S! is finitely satisfiable,
|          then !S! is satisfiable.
|
| Proof.   Suppose !S! is not satisfiable.
|
|          Then by the extended completeness theorem !S! is inconsistent.
|
|          Hence,  !S! |- S & ~S.
|
|          In the deduction of the sentence S & ~S from !S!
|          only a finite set !S!_0 of sentences of !S! is used.
|
|          It follows that !S!_0 |- S & ~S,  so !S!_0 is inconsistent.
|
|          Then !S!_0 is not satisfiable, so !S! is not finitely satisfiable.  -|
|
| Notice that the converse of the compactness theorem is trivially true,
| that is, every satisfiable set of sentences is finitely satisfiable.
|
| Chang & Keisler, 'Model Theory', pages 10-11.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
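
An editorial illustration of the second half of the proof of 1.2.11: the model is read straight off the maximal consistent set by keeping exactly the sentence symbols that belong to it.  The tuple encoding of formulas and the names below are hypothetical, not from the text.

    # Formulas as nested tuples: a sentence symbol name, ('not', p), or ('and', p, q).

    def holds(A, p):
        """A |= p, where the model A is the set of sentence symbols declared true."""
        if isinstance(p, str):
            return p in A
        if p[0] == 'not':
            return not holds(A, p[1])
        if p[0] == 'and':
            return holds(A, p[1]) and holds(A, p[2])
        raise ValueError(p)

    # As in the proof: A = {S : S is a sentence symbol belonging to Gamma}.
    gamma = {'S0', ('not', 'S1'), ('and', 'S0', ('not', 'S1'))}
    A = {p for p in gamma if isinstance(p, str)}
    print(all(holds(A, p) for p in gamma))          # True: A is a model of Gamma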

MOD. Note 14

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| We say that p is a 'consequence' of !S!,
| in symbols,
|
|       !S! |= p,
|
| iff every model of !S! is a model of p.
|
| The reader is asked to prove Exercises 1.2.3-1.2.6 as well as the following:
|
| 1.2.13.  Corollary.  [Truth & Consequences].
|
|          1.  !S! |- p  if and only if  !S! |= p.
|
|          2.  If !S! |= p,
|              then there is a finite subset !S!_0 of !S! such that !S!_0 |= p.
|
| We shall conclude our model theory for sentential logic with a few applications of
| the compactness theorem.  In these applications, the true spirit of model theory
| will appear, but at a very rudimentary level.  Since we shall often wish to
| combine a finite set of sentences into a single sentence, we shall use
| expressions like:
|
|       p_1  &  p_2  &  ...  &  p_n
|
| and
|
|       p_1  v  p_2  v  ...  v  p_n.
|
| In these expressions the parentheses are assumed,
| for the sake of definiteness, to be associated
| to the right;  for instance:
|
|       p_1  &  p_2  &  p_3   =   p_1  &  (p_2  &  p_3).
|
| First we introduce a bit more terminology.
|
| A set !C! of sentences is called a 'theory'.
|
| A theory !C! is said to be 'closed' iff
| every consequence of !C! belongs to !C!.
|
| A set !D! of sentences is said to
| be a 'set of axioms' for a theory !C!
| iff !C! and !D! have the same consequences.
|
| A theory is called 'finitely axiomatizable'
| iff it has a finite set of axioms.
|
| Since we may form the conjunction of a finite
| set of axioms, a finitely axiomatizable theory
| actually always has a single axiom.
|
| The set !C!^c  of all consequences of !C!
| is the unique closed theory which has !C!
| as a set of axioms.
|
| 1.2.14.  Proposition.
|
|          !D! is a set of axioms for a theory !C!
|
|          if and only if
|
|          !D! has exactly the same models as !C!.
|
| 1.2.15.  Corollary.
|
|          Let !C!_1 and !C!_2 be two theories such that:
|
|          The set of all models of !C!_2
|
|          is the complement of
|
|          the set of all models of !C!_1.
|
|          Then !C!_1 and !C!_2 are both finitely axiomatizable.
|
| Proof.   The set !C!_1 |_| !C!_2 is not satisfiable, so, by the
|          Compactness Theorem, it is not finitely satisfiable.
|          Thus, we may choose finite sets:
|
|          !D!_1 c !C!_1  and  !D!_2 c !C!_2
|
|          such that  !D!_1 |_| !D!_2  is not satisfiable.
|
|          If A |= !D!_1 then A is not a model of !C!_2,
|
|          and consequently A |= !C!_1.
|
|          It follows by Proposition 1.2.14 that
|
|          !D!_1 is a finite set of axioms for !C!_1.
|
|          Similarly,
|
|          !D!_2 is a finite set of axioms for !C!_2.  -|
|
| Chang & Keisler, 'Model Theory', pages 11-12.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
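
An editorial sketch (hypothetical names): for a fixed finite stock of sentence symbols, the consequence relation !S! |= p, and hence Proposition 1.2.14, can be checked by brute force over all models.

    from itertools import combinations

    SYMBOLS = ['S0', 'S1']

    def models_of(sigma):
        """All models (sets of true sentence symbols) of a set of sentences,
        sentences being given as predicates on such sets."""
        return [set(c) for r in range(len(SYMBOLS) + 1)
                       for c in combinations(SYMBOLS, r)
                       if all(p(set(c)) for p in sigma)]

    def consequence(sigma, p):
        """Sigma |= p: every model of Sigma is a model of p."""
        return all(p(A) for A in models_of(sigma))

    s0 = lambda A: 'S0' in A
    s1 = lambda A: 'S1' in A
    axiom = lambda A: s0(A) and s1(A)      # the single axiom S0 & S1
    print(consequence([axiom], s0))        # True
    print(consequence([s0], s1))           # False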

MOD. Note 15

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| The next group of theorems shows connections
| between mathematical operations on models and
| syntactical properties of sentences.  The first
| result of this group concerns positive sentences.
| A sentence p is said to be 'positive' iff p is
| built up from sentence symbols using only the
| two connectives & and v.  For example,
|
| (S_0 & (S_2 v S_3)) v S_16 is positive,
|
| while ~S_4 and S_3 <=> S_3 are not positive.
|
| A set !S! of sentences is called 'increasing'
| iff A |= !S! and A c B implies B |= !S!.
|
| 1.2.16.  Theorem.
|
|          1.  A c B
|
|              if and only if
|
|              every positive sentence which holds in A holds in B.
|
|          2.  A consistent theory !C! is increasing
|
|              if and only if
|
|              !C! has a set of positive axioms.
|
|          3.  A sentence p is increasing
|
|              if and only if
|
|              either p is equivalent to a positive sentence,
|              p is valid, or ~p is valid.
|
| Proof.   [C&K, pages 13-14].
|
| A completely trivial fact which is analogous to part (1)
| of the above theorem is:  A = B if and only if every sentence
| which holds in A holds in B.  We shall see later on in this book
| that the situation is very different in predicate logic, where a
| maximal consistent theory ordinarily does not even come close to
| characterizing a single model.  This is one thing which makes
| model theory for predicate logic so much more interesting
| and difficult than model theory for sentential logic.
|
| Chang & Keisler, 'Model Theory', pages 13-14.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
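
An editorial sketch: the claim that positive sentences are increasing (one half of Theorem 1.2.16.1) can be tested by brute force on a small stock of symbols.  The encoding below is hypothetical.

    from itertools import combinations

    SYMBOLS = ['S0', 'S1', 'S2']

    def holds(A, p):
        """p is a sentence symbol, ('and', q, r), or ('or', q, r): positive only."""
        if isinstance(p, str):
            return p in A
        op, q, r = p
        if op == 'and':
            return holds(A, q) and holds(A, r)
        return holds(A, q) or holds(A, r)

    def subsets():
        return [set(c) for n in range(len(SYMBOLS) + 1) for c in combinations(SYMBOLS, n)]

    def increasing(p):
        """If A |= p and A c B then B |= p, checked over this stock of symbols."""
        return all(holds(B, p)
                   for A in subsets() for B in subsets()
                   if A <= B and holds(A, p))

    p = ('or', ('and', 'S0', 'S2'), 'S1')      # (S0 & S2) v S1, a positive sentence
    print(increasing(p))                        # True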

MOD. Note 16

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| We now turn to another kind of sentence.
| By a 'conditional sentence' we mean a
| sentence p_1 & ... & p_n, where each
| p_i is one of the following kinds:
|
|    1.  S,
|
|    2.  ~S_1 v ~S_2 v ... v ~S_k,
|
|    3.  ~S_1 v ~S_2 v ... v ~S_k v S.
|
| A set !S! of sentences is said to be
| 'preserved under finite intersections' iff
| A |= !S! and B |= !S! implies A |^| B |= !S!.
|
| !S! is said to be 'preserved under arbitrary intersections'
| iff for every nonempty set {A_i : i in I} of models of !S!,
| the intersection |^|_(i in I) A_i is also a model of !S!.
|
| 1.2.17.  Lemma.
|
|          A theory !C! is preserved under finite intersections
|
|          if and only if
|
|          !C! is preserved under arbitrary intersections.
|
| Proof.   Let !C! be preserved under finite intersections, let {A_i : i in I}
|          be a nonempty set of models of !C!, and let B = |^|_(i in I) A_i.
|          Let !S! be the set of all sentences of the form S or ~S which hold
|          in B.  We show that !C! |_| !S! is satisfiable.  Let !S!_0 be an
|          arbitrary finite subset of !S!, and let the negative sentences in
|          !S!_0 be ~S_1, ..., ~S_k.  If k = 0, all the sentences in !S!_0 are
|          positive, and each of the models A_i is a model of !S!_0, because
|          B c A_i.  Let k > 0 and choose models A_i_1, ..., A_i_k from among
|          the A_i such that S_1 is not in A_i_1, ..., S_k is not in A_i_k.
|          Then A = A_i_1 |^| ... |^| A_i_k is a model of !S!_0.  Since !C!
|          is preserved under finite intersections, A is also a model of !C!.
|          We have shown that !C! |_| !S! is finitely satisfiable.  By the
|          compactness theorem, !C! |_| !S! has a model.  But the only model
|          of !S! is B, so B is a model of !C!.   -|
|
| In view of the above lemma, we may as well simply say from now on
| that !C! is 'preserved under intersections', since it makes no
| difference whether we say finite or arbitrary intersections.
|
| Chang & Keisler, 'Model Theory', pages 14-15.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
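
An editorial sketch: a conditional sentence is, clause by clause, a Horn clause, which makes preservation under intersections easy to see concretely.  The (body, head) encoding below is a hypothetical convenience, not notation from the text.

    # A conditional sentence, written clause by clause as (body, head):
    # (body, head) stands for ~S_1 v ... v ~S_k v head, with head = None for the
    # purely negative kind (2) and body = () for a bare sentence symbol (1).

    def clause_holds(A, clause):
        body, head = clause
        return not set(body) <= A or (head is not None and head in A)

    def holds(A, clauses):
        return all(clause_holds(A, c) for c in clauses)

    p = [((), 'S0'),                    # S0
         (('S0', 'S1'), 'S2'),          # ~S0 v ~S1 v S2
         (('S2', 'S3'), None)]          # ~S2 v ~S3

    A, B = {'S0', 'S1', 'S2'}, {'S0', 'S3'}
    print(holds(A, p), holds(B, p), holds(A & B, p))   # True True True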

MOD. Note 17

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| 1.2.18.  Theorem.
|
|          1.  A theory !C! is preserved under intersections
|              if and only if !C! has a set of conditional axioms.
|
|          2.  A sentence p is preserved under intersections
|              if and only if p is equivalent to a conditional sentence.
|
| Proof.   1.  We leave to the reader the proof that every conditional
|              sentence (and hence every set of conditional sentences)
|              is preserved under intersections.
|
|              Conversely, let !C! be preserved under intersections.
|              Consider the set !D! of all conditional consequences
|              of !C!.  It suffices to show that every model of !D!
|              is a model of !C!.  Let B be an arbitrary model of !D!.
|              For each T in $S$ - B, let !S!_T be the set of all
|              sentences of the form
|
|              S_1  &  ...  &  S_k  &  ~T
|
|              which hold in B.  We also let the sentence ~T itself be
|              in !S!_T.  We first note that the conjunction of finitely
|              many sentences in !S!_T is again equivalent to a sentence
|              in !S!_T.  Consider a sentence p in !S!_T.  Then ~p is
|              clearly equivalent to a conditional sentence q either
|              of the form S or of the form
|
|              ~S_1  v  ...  v  ~S_k  v  T.
|
|              But q fails in B, so q does not belong to !D!.  This means that q,
|              and hence ~p, is not a consequence of !C!,  and it follows that
|              !C! |_| {p} is satisfiable.  Since !S!_T is, up to equivalence,
|              closed under finite conjunction, we see that !C! |_| !S!_T is
|              finitely satisfiable.  Applying the Compactness Theorem, we
|              may choose a model A_T of !C! |_| !S!_T .
|
|              For each T in $S$ - B, we have T not in A_T and B c A_T.
|              Thus, if $S$ - B is not empty, then:
|
|              B  =  |^|_(T not in B) A_T.
|
|              Since each A_T is a model of !C! and !C! is
|              preserved under intersections, we have B |= !C!.
|
|              In the remaining case B = $S$, we let !S!
|              be the set of all sentences of the form
|
|              S_1  &  ...  &  S_k.
|
|              Arguing as before, we find that !C! |_| !S!
|              is finitely satisfiable and thus has a model.
|
|              But B is the only model of !S!, so again B is a model of !C!.
|
|              We have now shown that every model of !D! is a model of !C!,
|              and it follows that !D! is a set of conditional axioms for !C!.
|
|          2.  This follows from (1) by an argument
|              similar to the last part of the proof
|              of Theorem 1.2.16.  -|
|
| Chang & Keisler, 'Model Theory', pages 15-16.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 18

| 1.  Introduction
|
| 1.2.  Model Theory for Sentential Logic (cont.)
|
| We conclude with a table which summarizes
| the semantical and syntactical notions that
| we have shown to be equivalent (some of these
| are done in the exercises).
|
|                          Table 1.2.1
| o-----------------------------o-----------------------------o
| |  Syntax                     |  Semantics                  |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  p is a tautology           |  p is valid                 |
| |                             |                             |
| |  |- p                       |  |= p                       |
| |                             |                             |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  !S! is consistent          |  !S! is satisfiable         |
| |                             |                             |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  p is inconsistent          |  p is refutable             |
| |                             |                             |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  p is deducible from !S!    |  p is a consequence of !S!  |
| |                             |                             |
| |  !S! |- p                   |  !S! |= p                   |
| |                             |                             |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  p is equivalent to         |  p is increasing and        |
| |    a positive sentence      |    neither valid            |
| |                             |    nor refutable            |
| |                             |                             |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  p is equivalent to         |  p is preserved             |
| |    a conditional sentence   |    under intersections      |
| |                             |                             |
| o-----------------------------o-----------------------------o
|
| Chang & Keisler, 'Model Theory', page 16.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 19

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction
|
| We begin here the development of first-order languages in a way parallel
| to the treatment of sentential logic in Section 1.2.  First, we shall
| define the notions of a first-order predicate language $L$ and of a
| model for $L$.  We introduce some basic relations between models --
| reductions and expansions, isomorphisms, submodels and extensions.
| We shall then develop the syntax of the language $L$, defining the
| sets of terms, formulas, and sentences, and presenting the axioms and
| rules of inference.  Finally, we give the key definition of a sentence
| being true in a model for the language $L$.  The precise formulation of
| this definition is much more of a challenge in first-order logic than
| it was for sentential logic.  At the end of this section, we state
| the completeness and compactness theorems (Theorems 1.3.20, 21, 22),
| but the proofs of these theorems are deferred until the next chapter.
|
| We first establish a uniform notation and set of conventions
| for such languages and their models.  A 'language' $L$ is a
| collection of symbols.  These symbols are separated into
| three groups, 'relation symbols', 'function symbols',
| and '(individual) constant symbols'.  The relation
| and function symbols of $L$ will be denoted by
| capital Latin letters P, F, with subscripts.
| Lower case Latin letters c, with subscripts,
| range over the constant symbols of $L$.
| If $L$ is a finite set, we may display
| the symbols of $L$ as follows:
|
|    $L$  =  {P_0, ..., P_n,  F_0, ..., F_m,  c_0, ..., c_k}.
|
| Each relation symbol P of $L$ is assumed to be an n-placed relation symbol
| for some n >= 1, depending on P.  Similarly, each function symbol F of $L$ is
| an m-placed function symbol, where m >= 1 and m depends on F.  Notice that
| we do not allow 0-placed relation or function symbols.  When dealing with
| several languages at the same time, we use the letters $L$, $L$’, $L$”,
| etc.  If the symbols of the language are quite standard, as for example,
| '+' for addition, '=<' for an order relation, etc., we shall simply write:
|
|    $L$  =  {=<},
|
|    $L$  =  {=<, +, ·, 0},
|
|    $L$  =  {+, ·, -, 0, 1},
|
|    etc.,
|
| for such languages.  The number of places of the various
| kinds of symbols is understood to follow the standard usage.
| The 'power', or 'cardinal' of the language $L$, denoted
| by ||$L$||, is defined as:
|
|    ||$L$||  =  !w! [omega] |_| |$L$|.
|
| We say that a language $L$ is countable or uncountable
| depending on whether ||$L$|| is countable or uncountable.
| 
| We occasionally pass from a given language $L$ to another language $L$’ which
| has all the symbols of $L$ plus some additional symbols.  In such cases we use
| the notation $L$ c $L$’ and say that the language $L$’ is an 'expansion' of $L$,
| and that $L$ is a 'reduction' of $L$’.  In the special case where all the symbols
| in $L$’ but not in $L$ are constant symbols, $L$’ is said to be a 'simple expansion'
| of $L$.  Since $L$ and $L$’ are just sets of symbols, the expansion $L$’ may be
| written $L$’ = $L$ |_| X, where X is the set of new symbols.
|
| Chang & Keisler, 'Model Theory', pages 18-19.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
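
An editorial sketch of the bookkeeping in this note: a language as a finite assignment of places to its relation and function symbols, plus constant symbols.  The class and field names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Language:
        """A language $L$: relation and function symbols with their number of
        places, plus constant symbols."""
        relations: dict = field(default_factory=dict)    # name -> n places, n >= 1
        functions: dict = field(default_factory=dict)    # name -> m places, m >= 1
        constants: tuple = ()

    # $L$ = {=<, +, ., 0}, with the standard number of places.
    L = Language(relations={'<=': 2}, functions={'+': 2, '*': 2}, constants=('0',))

    # A simple expansion L' = L |_| X adds new constant symbols only:
    L_prime = Language(L.relations, L.functions, L.constants + ('1',))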

MOD. Note 20

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| Turning now to the models for a given language $L$, we first point out that
| the situation here is more complicated than for the sentential logic $S$
| in Section 1.2.  There, each S in $S$ could take on at most two values,
| true or false.  Thus the set of intended interpretations for $S$ has
| rather simple properties, as the reader discovered.  This time, each
| n-placed relation symbol has as its intended interpretations all
| n-placed relations among the objects, each m-placed function
| symbol has as its intended interpretations all m-placed
| functions from objects to objects, and, finally, each
| constant symbol has as intended interpretations
| fixed or constant objects.
|
| Therefore, a "possible world", or model for $L$ consists, first of all,
| of a 'universe' A, a nonempty set.  In this universe, each n-placed P
| corresponds to an n-placed 'relation' R c A^n on A, each m-placed F
| corresponds to an m-placed 'function' G : A^m -> A on A, and each
| constant symbol 'c' corresponds to a 'constant' x in A.
|
| This correspondence is given by an 'interpretation' function $I$ mapping
| the symbols of $L$ to appropriate relations, functions, and constants in A.
|
| A 'model' for $L$ is a pair <A, $I$>.
|
| We use Gothic [$Script$] letters to range over models.  Thus we write
| $A$ = <A, $I$>, $B$ = <B, $J$>, $C$ = <C, $K$>, etc., with appropriate
| subscripts and superscripts.  We shall try to be quite consistent in this
| respect, so that the universes of the models $B$’, $B$”, $B$_i, $B$_j, etc.,
| are precisely the sets B’, B”, B_i, B_j, etc.  The relations, functions,
| and constants of $A$ are, respectively, the images under $I$ of the
| relation symbols, function symbols, and constant symbols of $L$.
|
| Chang & Keisler, 'Model Theory', pages 19-20.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
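
An editorial sketch, continuing the previous one: a model as a pair <A, I>, with a concrete universe and an interpretation mapping.  Names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Model:
        """A model <A, I>: a nonempty universe A and an interpretation I sending
        each relation symbol to a relation on A, each function symbol to a
        function on A, and each constant symbol to an element of A."""
        universe: frozenset
        interp: dict

    # A model for {=<, +, 0} on the universe {0, 1, 2} (addition truncated at 2).
    A = frozenset({0, 1, 2})
    M = Model(A, {'<=': {(x, y) for x in A for y in A if x <= y},
                  '+':  lambda x, y: min(x + y, 2),
                  '0':  0})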

MOD. Note 21

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| Notice that in a given universe A there are many different
| permissible interpretations of the symbols of $L$.  Suppose
| that $A$ = <A, $I$> and $A$’ = <A’, $I$’> are models for $L$
| and that R and R’ are relations of $A$ and $A$’, respectively.
| We say that R’ is the 'corresponding relation' to R if they are
| the interpretations of the same relation symbol in $L$, that is:
|
|    $I$(P) = R  and  $I$’(P) = R’  for some P in $L$.
|
| We introduce similar conventions as regards the functions and constants.
|
| When
|
|    $L$  =  {P_0, ..., P_n,  F_0, ..., F_m,  c_0, ..., c_k},
|
| we write the models for $L$ in displayed form as:
|
|    $A$  =  <A,  R_0, ..., R_n,  G_0, ..., G_m,  x_0, ..., x_k>.
|
| When the symbols of $L$ are familiar, we shall agree to use, for instance,
|
|    $A$  =  <A, =<, +, ·>
|
| for models of the language
|
|    $L$  =  {=<, +, ·}.
|
| We may resort to
|
|    $A$  =  <A, =<_A, +_A, ·_A>,
|
|    $B$  =  <B, =<_B, +_B, ·_B>,
|
|    etc.,
|
| if the context of the discussion requires it.
|
| If we start with a model $A$ for the language $L$
| we can always expand it to a model for the language
| $L$’ = $L$ |_| X by giving appropriate interpretations
| for the symbols in X.  If $I$’ is any interpretation for
| the symbols of X in $A$, and X is disjoint from $L$, then
| $A$’ = <A, $I$ |_| $I$’> is a model for $L$’.  In this case
| we say that $A$’ is an 'expansion' of $A$ to $L$’, and $A$ is
| the 'reduct' of $A$’ to $L$.  Sometimes we use the shorter
| notation ($A$, $I$’) for $A$’.  Clearly, there are many
| ways a model $A$ for $L$ can be expanded to a model
| $A$’ for $L$’.  On the other hand, given a model
| $A$’ for $L$’, it has only one reduction $A$ to
| $L$.  Namely, we form $A$ by restricting the
| interpretation function $I$’ on $L$ |_| X
| to $L$.  The processes of expansion and
| reduction do not change the universe
| of the model.
|
| The 'cardinal', or 'power', of the model $A$ is the cardinal |A|.
| $A$ is said to be finite, countable, or uncountable if |A| is
| finite, countable, or uncountable.  Notice that on a finite
| universe A, while there can be only finitely many different
| relations, functions, and constants, the number of different
| interpretation functions $I$ can be very large and depends
| on |$L$|.
|
| Chang & Keisler, 'Model Theory', pages 20-21.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
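
An editorial sketch of expansion and reduct as operations on the interpretation function alone (the universe never changes).  Names are hypothetical.

    def reduct(interp, sublanguage):
        """The unique reduct to $L$: restrict the interpretation function to the
        symbols of the smaller language; the universe is unchanged."""
        return {s: interp[s] for s in sublanguage}

    def expand(interp, new_interp):
        """An expansion to $L$' = $L$ |_| X: choose interpretations for the new
        symbols (in general there are many such choices)."""
        return {**interp, **new_interp}

    I = {'<=': {(0, 0), (0, 1), (1, 1)}, '+': max}
    print(reduct(expand(I, {'0': 0}), ['<=']))      # back to the {=<}-part of I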

MOD. Note 22

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| We next introduce some simple but basic notions and operations on models.
| The reader should go through the exercises at the end of this section in
| order to be familiar with them.
|
| Two models $A$ and $A$’ for $L$ are 'isomorphic' iff
| there is a 1-1 function f mapping A onto A’ satisfying:
|
|    1.  For each n-placed relation R  of $A$ and
|        the corresponding relation R’ of $A$’
|
|        R(x_1 ... x_n)  if and only if  R’(f(x_1) ... f(x_n))
|
|        for all x_1, ..., x_n in A.
|
|    2.  For each m-placed function G  of $A$ and
|        the corresponding function G’ of $A$’
|
|        f(G(x_1 ... x_m))  =  G’(f(x_1) ... f(x_m))
|
|        for all x_1, ..., x_m in A.
|
|    3.  For each constant x of $A$ and the
|        corresponding constant x’ of $A$’
|
|        f(x)  =  x’.
|
| A function f that satisfies the above is called an 'isomorphism' of $A$ onto $A$’,
| or an 'isomorphism' between $A$ and $A$’.  We use the notation f : $A$ ~=~ $A$’
| to denote that f is an isomorphism of $A$ onto $A$’, and we use $A$ ~=~ $A$’
| for $A$ is isomorphic to $A$’.  For convenience we use ~=~ to denote the
| 'isomorphism relation' between models for $L$.  It is quite clear that
| ~=~ is an equivalence relation.  Furthermore, it preserves powers,
| that is, if $A$ ~=~ $B$, then |A| = |B|.  Indeed, unless we wish
| to consider the particular structure of each element of A or B,
| for all practical purposes $A$ and $B$ are the same if they
| are isomorphic.
|
| Chang & Keisler, 'Model Theory', page 21.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
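
An editorial sketch: for small finite models with one binary relation, the definition of isomorphism can be checked directly by searching over all bijections.  Names are hypothetical.

    from itertools import permutations

    def isomorphic(A, RA, B, RB):
        """Brute-force test of $A$ ~=~ $B$ for finite models with a single binary
        relation: search for a bijection f with RA(x, y) iff RB(f(x), f(y))."""
        A, B = list(A), list(B)
        if len(A) != len(B):
            return False
        for image in permutations(B):
            f = dict(zip(A, image))
            if all(((f[x], f[y]) in RB) == ((x, y) in RA) for x in A for y in A):
                return True
        return False

    A, RA = {0, 1, 2}, {(0, 1), (1, 2), (0, 2)}                      # the chain 0 < 1 < 2
    B, RB = {'a', 'b', 'c'}, {('c', 'b'), ('b', 'a'), ('c', 'a')}    # the chain c < b < a
    print(isomorphic(A, RA, B, RB))                                  # True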

MOD. Note 23

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| A model $A$’ is called a 'submodel' of $A$
| if  $A$’ c $A$ and:
|
|    1.  Each n-placed relation R’ of $A$’
|        is the restriction to  A’ of the
|        corresponding relation R  of $A$,
|        that is:   R’ = R |^| (A’)^n.
|
|    2.  Each m-placed function G’ of $A$’
|        is the restriction to  A’ of the
|        corresponding function G  of $A$,
|        that is:       G’ = G|(A’)^m.
|
|    3.  Each constant of $A$’ is the
|        corresponding constant of $A$.
|
| We use $A$’ ç $A$ to denote that $A$’ is a submodel of $A$, and
| the symbol 'ç' for the submodel relation between models for $L$.
| The reader should show that ç is a partial-order relation and
| that, if $A$ ç $B$, then |A| =< |B|.  We say that $B$ is
| an 'extension' of $A$ if $A$ is a submodel of $B$.
|
| Combining the above two notions, we say that
| $A$ is 'isomorphically embedded' in $B$ if
| there is a model $C$ and an isomorphism f
| such that f : $A$ ~=~ $C$ and $C$ ç $B$.
| In this case we call the function f an
| 'isomorphic embedding' of $A$ in $B$.
| If $A$ is isomorphically embedded
| in $B$, then $B$ is isomorphic
| to an extension of $A$.
|
| Chang & Keisler, 'Model Theory', pages 21-22.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
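
An editorial sketch of clause-by-clause restriction to a subset of the universe; as the example shows, the result is a submodel only when the subset is closed under the functions.  Names are hypothetical.

    def restrict(A_sub, R, G):
        """Restrict a binary relation R and a 1-placed function G (a dict) to a
        subset A' of the universe: R' = R |^| (A')^2 and G' = G | A'.  This gives
        a submodel only when A' is closed under G (and contains the constants)."""
        R_sub = {(x, y) for (x, y) in R if x in A_sub and y in A_sub}
        G_sub = {x: G[x] for x in A_sub}
        closed = all(v in A_sub for v in G_sub.values())
        return R_sub, G_sub, closed

    A = {0, 1, 2, 3}
    R = {(x, y) for x in A for y in A if x <= y}
    G = {x: (x + 1) % 4 for x in A}                   # successor mod 4
    print(restrict({0, 1, 2}, R, G)[2])               # False: {0, 1, 2} not closed under G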

MOD. Note 24

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| To formalize a language $L$, we need the
| following 'logical symbols' (see the
| corresponding development for $S$
| in Section 1.2):
|
|    1.  Parentheses.  '(' and ')'
|
|    2.  Variables.    v_0, v_1, ..., v_n, ...
|
|    3.  Connectives.  '&' (and), '~' (not)
|
|    4.  Quantifier.   '`A`' (for all)
|
| and one binary relation symbol '=' (identity).
|
| We assume, of course, that no symbol in $L$
| occurs in the above list.  Certain strings
| of symbols from the above list and from $L$
| are called 'terms'.  They are defined as
| follows:
|
| 1.3.1.  [Definition of a 'term' of $L$].
|
|         1.  A variable is a term.
|
|         2.  A constant symbol is a term.
|
|         3.  If F is an m-placed function symbol
|             and t_1, ..., t_m are terms, then
|
|             F(t_1 ... t_m) is a term.
|
|         4.  A string of symbols is a term
|             only if it can be shown to be
|             a term by a finite number of
|             applications of (1, 2, 3).
|
| The 'atomic formulas' of $L$ are strings of the form given below:
|
| 1.3.2.  [Definition of an 'atomic formula' of $L$].
|
|         1.  t_1 = t_2 is an atomic formula,
|             where t_1 and t_2 are terms of $L$.
|
|         2.  If P is an n-placed relation symbol
|             and t_1, ..., t_n are terms, then
|
|             P(t_1 ... t_n) is an atomic formula.
|
| Finally, the 'formulas' of $L$ are defined as follows:
|
| 1.3.3.  [Definition of a 'formula' of $L$].
|
|         1.  An atomic formula is a formula.
|
|         2.  If p and q are formulas, then
|
|             (p & q) and (~p) are formulas.
|
|         3.  If v is a variable and p is a formula, then
|
|             (`A`v)p is a formula.
|
|         4.  A sequence of symbols is a formula
|             only if it can be shown to be a
|             formula by a finite number of
|             applications of (1, 2, 3).
|
| Just as in the case of $S$, we may put definitions 1.3.1 and 1.3.3
| in a set-theoretical setting.  Namely, the set of terms of $L$ is
| the least set T such that:
|
|    T contains all constant symbols and all variables v_n, n = 0, 1, 2, ...,
|    and, whenever F is an m-placed function symbol and t_1, ..., t_m are in T,
|    then F(t_1 ... t_m) is in T.
|
| Similarly, the set of formulas of $L$ is the least set Q such that:
|
|    Every atomic formula belongs to Q and, whenever p and q are in Q
|    and v is a variable, then (p & q), (~p), (`A`v)p all belong to Q.
|
| Notice that we have tacitly used the letters 't' (with subscripts)
| to range over terms, 'v' to range over variables, and p, q to range
| over formulas.  Again, we emphasize that 'properties of terms and
| formulas of $L$ can only be established by an induction based on
| definitions 1.3.1 and 1.3.3'.
|
| We can now introduce the abbreviations v, =>, <=> as in
| Section 1.2.  Furthermore, we adopt all the conventions
| introduced earlier.  The new symbol '`E`' (there exists)
| is introduced as an abbreviation defined as:
|
|    (`E`v)p  for  ~(`A`v)~p.
|
| Some new conventions are the following:
|
|    p_1 & p_2 & ... & p_n  for  (p_1 & (p_2 & ... & p_n))
|
|    p_1 v p_2 v ... v p_n  for  (p_1 v (p_2 v ... v p_n))
|
|    (`A`x_1 x_2 ... x_n)p  for  (`A`x_1)(`A`x_2) ... (`A`x_n)p
|
|    (`E`x_1 x_2 ... x_n)p  for  (`E`x_1)(`E`x_2) ... (`E`x_n)p
|
| At this point we assume that the reader has enough experience in first-order
| predicate logic to continue the development on his [or her] own.  In particular,
| we leave it to him [or her] to decide on the notions of 'subformulas', 'free' and
| 'bound' occurrences of a variable in a formula, and to give a proper definition
| (based on definitions 1.3.1, 1.3.3) of 'substitution' of a term for a variable
| in a formula.
|
| Chang & Keisler, 'Model Theory', pages 22-23.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
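
An editorial sketch: Definitions 1.3.1-1.3.3 translate directly into a recursive datatype.  The tuple tags below ('var', 'const', 'fun', 'eq', 'rel', 'and', 'not', 'all') are a hypothetical encoding, not notation from the text.

    # A hypothetical tuple syntax mirroring 1.3.1-1.3.3:
    #   term     ::=  ('var', i) | ('const', c) | ('fun', F, t_1, ..., t_m)
    #   formula  ::=  ('eq', t_1, t_2) | ('rel', P, t_1, ..., t_n)
    #              |  ('and', p, q) | ('not', p) | ('all', i, p)

    def is_term(t, functions, constants):
        """Clause 1.3.1.4: t is a term only if it is built by finitely many
        applications of clauses (1), (2), (3)."""
        tag = t[0]
        if tag == 'var':
            return isinstance(t[1], int)
        if tag == 'const':
            return t[1] in constants
        if tag == 'fun':
            return (functions.get(t[1]) == len(t) - 2 and
                    all(is_term(s, functions, constants) for s in t[2:]))
        return False

    print(is_term(('fun', 'F', ('var', 0), ('const', 'c')), {'F': 2}, {'c'}))   # True
    print(is_term(('fun', 'F', ('var', 0)), {'F': 2}, {'c'}))                   # False: wrong number of places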

MOD. Note 25

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| We now come to an extremely important
| convention of notation.  To make sure
| that the reader does not miss it, we
| enclose it in a box:
|
| o-----------------------------------------------------o
| |                                                     |
| |  We use t(v_0 ... v_n) to denote a term t whose     |
| |                                                     |
| |  variables form a subset of {v_0, ..., v_n}.        |
| |                                                     |
| |  Similarly,                                         |
| |                                                     |
| |  we use p(v_0 ... v_n) to denote a formula p whose  |
| |                                                     |
| |  free variables form a subset of {v_0, ..., v_n}.   |
| |                                                     |
| o-----------------------------------------------------o
|
| Notice that we do not require that all of the variables v_0, ..., v_n be free
| variables of p(v_0 ... v_n).  In fact, p(v_0 ... v_n) could even have no free
| variables.  Also, we make no restriction on the bound variables.  For example,
| each of the following formulas is of the form p(v_0 v_1 v_2):
|
|    (`A`v_1)(`E`v_3) R(v_0 v_1 v_3),
|
|    R(v_0 v_1 v_2),
|
|    S(v_0 v_2),
|
|    (`A`v_4) S(v_4 v_4).
|
| A 'sentence' is a formula with no free variables.
|
| Notice that even if $L$ has no symbols, there are still formulas of $L$.
| These formulas are built up entirely from the identity symbol '=' and the
| other logical symbols listed.  Such formulas are called 'identity formulas'
| and 'they occur in every language'.  The following proposition is simple
| but important.
|
| 1.3.4.  Proposition.  The cardinal of the set of all formulas of $L$ is ||$L$||.
|
| To make all the above syntactical notions into a 'formal system' we
| need 'logical axioms' and 'rules of inference'.  The logical axioms
| for $L$ are divided into three groups:
|
| 1.3.5.  Sentential Axioms.
|
|         Every formula p of $L$ which can be obtained
|         from a tautology q of $S$ by (simultaneously
|         and uniformly) substituting formulas of $L$
|         for the sentence symbols of q is a logical
|         axiom for $L$.  From now on we shall call
|         such a formula p a 'tautology' of $L$.
|
| 1.3.6.  Quantifier Axioms.
|
|         1.  If p and q are formulas of $L$ and v is a variable not free in p,
|             then the formula:
|
|             (`A`v)(p => q) => (p => (`A`v)q)
|
|             is a logical axiom.
|
|         2.  If p and q are formulas and q is obtained from p by freely
|             substituting each free occurrence of v in p by the term t
|             (that is, no variable x in t shall occur bound in q at
|             the place where it is introduced), then the formula:
|
|             (`A`v)p => q
|
|             is a logical axiom.
|
| 1.3.7.  Identity Axioms.
|
|         Suppose x, y are variables, t(v_0 ... v_n) is a term, and
|         p(v_0 ... v_n) is an atomic formula.  Then the formulas:
|
|         x = x
|
|         x = y  =>  t(v_0 ... v_(i-1)  x  v_(i+1) ... v_n)  =
|                    t(v_0 ... v_(i-1)  y  v_(i+1) ... v_n)
|
|         x = y  =>  (p(v_0 ... v_(i-1)  x  v_(i+1) ... v_n)  =>
|                     p(v_0 ... v_(i-1)  y  v_(i+1) ... v_n))
|
|         are logical axioms.
|
| There are two rules of inference:
|
| 1.3.8.  Rule of Detachment (or Modus Ponens).
|
|         From p and p => q, infer q.
|
| 1.3.9.  Rule of Generalization.
|
|         From p, infer (`A`x)p.
|
| Given the axioms and the rules of inference, we assume that the
| resulting notions of 'proof', 'length of proof', 'theorem' are
| already familiar to the reader.  As we are dealing with the
| usual first-order logic with identity, we shall assume as
| known and make free use of all of the basic theorems and
| metatheorems of such formal systems.
|
| Chang & Keisler, 'Model Theory', pages 23-25.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
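
An editorial sketch, using the tuple encoding introduced after Note 24: the free variables of a term or formula, computed by the obvious recursion, so that the boxed convention p(v_0 ... v_n) is just the condition free_vars(p) c {0, ..., n}.  Names are hypothetical.

    def free_vars(p):
        """Free variables (as indices i of v_i) of a term or formula in the
        tuple syntax sketched after Note 24."""
        tag = p[0]
        if tag == 'var':
            return {p[1]}
        if tag == 'const':
            return set()
        if tag in ('fun', 'rel'):
            return set().union(*map(free_vars, p[2:]))
        if tag in ('eq', 'and'):
            return free_vars(p[1]) | free_vars(p[2])
        if tag == 'not':
            return free_vars(p[1])
        if tag == 'all':
            return free_vars(p[2]) - {p[1]}
        raise ValueError(p)

    # (`A`v_1)(`E`v_3) R(v_0 v_1 v_3), with `E` unfolded as ~`A`~, is of the
    # form p(v_0 v_1 v_2): its free variables are a subset of {v_0, v_1, v_2}.
    p = ('all', 1, ('not', ('all', 3,
            ('not', ('rel', 'R', ('var', 0), ('var', 1), ('var', 3))))))
    print(free_vars(p))                              # {0}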

MOD. Note 26

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| Following standard usage, |- p means that p is a theorem of $L$.
|
| If !S! is a set of sentences of $L$, then !S! |- p means
| that there is a proof of p from the logical axioms and !S!.
| If !S! = {!s!_1, ..., !s!_n} is finite, we write:
|
|    !s!_1 ... !s!_n  |-  p.
|
| As the logical axioms are always assumed,
| we say that 'there is a proof' of p from !S!,
| or p is 'deducible' from !S!, whenever !S! |- p.
|
| !S! is 'inconsistent' iff every formula of $L$ can
| be deduced from !S!.  Otherwise !S! is 'consistent'.
| A sentence !s! is consistent iff {!s!} is.
|
| !S! is 'maximal consistent' (in $L$) iff
| !S! is consistent and no set of sentences
| (of $L$) properly containing !S! is consistent.
|
| We list in the proposition below some useful, though
| simple, properties of consistent and maximal consistent
| sets of sentences.  (Many of these properties are found
| also in Proposition 1.2.8.)
|
| 1.3.10.  Proposition.
|
|          1.  !S! is consistent if and only if every
|              finite subset of !S! is consistent.
|
|          2.  Let !s! be a sentence.
|
|              !S! |_| {!s!} is inconsistent
|
|              if and only if
|
|              !S! |- ~!s!.
|
|              Whence !S! |_| {!s!} is consistent
|
|              if and only if
|
|              ~!s! is not deducible from !S!.
|
|          3.  If !S! is maximal consistent, then, for any sentences !s! and !t!:
|
|              !S! |- !s!            iff   !s! belongs to !S!.
|
|              !s! is not in !S!     iff   ~!s! belongs to !S!.
|
|              !s! & !t! is in !S!   iff   !s! and !t! belong to !S!.
|
|          4.  Deduction Theorem.
|
|              !S! |_| {!s!} |- !t!  if and only if  !S! |- !s! => !t!.
|
|              (Here, !s! is a sentence, although !t! need not be one.)
|
| Chang & Keisler, 'Model Theory', page 25.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 27

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| The next proposition duplicates Lemma 1.2.9.  There is no change in the proof.
|
| 1.3.11.  Proposition.  (Lindenbaum's Theorem).
|
|          Any consistent set of sentences of $L$ can
|          be extended to a maximal consistent set of
|          sentences of $L$.
|
| We now come to the key definition of this section.  In fact, the following
| definition of satisfaction is the cornerstone of model theory.  We first
| give the motivation for the definition in a few remarks.  If we compare
| the models of Section 1.2 and the models discussed here, we see that
| with the former we were only concerned with whether a statement is
| true or false in it, while here the situation is more complicated
| because the sentences of $L$ say something about the individual
| elements of the model.  The whole question of the (first-order)
| truths or falsities of a possible world (i.e., model) is just not
| a simple problem.  For instance, there is no way to decide whether a
| given sentence of $L$ = {+, ·, S, 0} is true or false in the standard
| model <N, +, ·, S, 0> of arithmetic (where S is the successor function).
| Whereas we have already seen in Section 1.2 that there is such a decision
| procedure for every model for $S$ and for every sentence of $S$.  To define
| the notion:
|
|       the sentence !s! is true in the model $A$,
|
| we have first to break up !s! into smaller parts and to examine each part.
| If !s! is ~p or if !s! is p & q, then we see that the truth or falsity of !s!
| in $A$ follows once we know the truth or falsity of p and q in $A$.  If, on the
| other hand, !s! is (`A`x)p, then the same method for deciding the truth of !s!
| breaks down as p may not be a sentence and it would be meaningless to ask if
| p is true or false in $A$.
|
| The free variable x in p is supposed to range over the elements
| of A.  For each particular a in A it is meaningful to ask whether:
|
|       the formula p is true in $A$ if p is talking about a.
|
| If for each a in A the answer to this question is yes, then we can
| say that !s! is true in $A$.  If there exists an a in A so that the
| answer is no, then we can say that !s! is false in $A$.  But in order
| to answer the above question, even for a fixed element of A, we shall
| run into the same difficulty if p happens to be (`A`y)q.  Then we are
| led naturally to ask whether:
|
|       q is true in $A$ if q is talking about a pair of elements a and b in A.
|
| It takes but a very small step before we see
| that the crucial question is the following:
|
|       Given a formula p(v_0 ... v_k) and a sequence x_0, ..., x_k in A,
|       what does it mean to say that p is true in $A$ if the variables
|       v_0, ..., v_k are taken to be x_0, ..., x_k?
|
| Our plan is to give an answer to this question first for every atomic formula
| q(v_0 ... v_k) and all elements x_0, ..., x_k.  Then, by an inductive procedure
| based on our inductive definition of a formula (1.3.1-1.3.3), we shall give an
| answer for all formulas p(v_0 ... v_k) and elements x_0, ..., x_k.
|
| There is still one difficulty with our plan:  If all the free variables
| of a formula p are among v_0, ..., v_k, it does not follow that all the
| free variables of every subformula of p are among v_0, ..., v_k.  For a
| quantifier may make a free variable bound.  This will cause trouble in the
| induction part of our plan.  To overcome this difficulty we observe that
| the following is true.  If all the variables, free or bound, of a formula
| p are among v_0, ..., v_m, then all the variables of every subformula of p
| are also among v_0, ..., v_m.  So we shall modify our plan thus:  First, we
| answer the question for all atomic formulas q(v_0 ... v_m) and all elements
| x_0, ..., x_m.  Then by an inductive procedure we answer the question for
| all formulas p such that all its 'free and bound' variables are among
| v_0, ..., v_m, and all elements x_0, ..., x_m.  Finally we 'prove'
| that the answer to the question for a formula p(v_0 ... v_k) and
| elements x_0, ..., x_m, k =< m, depends only on the elements
| x_0, ..., x_k corresponding to the 'free' variables of p,
|          so that the values of x_(k+1), ..., x_m are irrelevant.
|
| Chang & Keisler, 'Model Theory', pages 26-27.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 28

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| We are now ready for the formal definition.  The crucial notion to be defined
| is the following:  Let p be any formula of $L$, all of whose free and bound
| variables are among v_0, ..., v_m, and let x_0, ..., x_m be any sequence
| of elements of A.  We define the predicate:
|
| 1.3.12.  p is 'satisfied by' the sequence x_0, ..., x_m in $A$,
|
|          or
|
|          x_0, ..., x_m 'satisfies' p in $A$.
|
| The definition proceeds in three stages (compare with 1.3.1 - 1.3.3).
|
| Let $A$ be a fixed model for $L$.
|
| 1.3.13.  The 'value' of a term t(v_0 ... v_m) at x_0, ..., x_m is
|          defined as follows (we let t[x_0 ... x_m] denote this value):
|
|          1.  If    t  =  v_i,
|
|              then  t[x_0 ... x_m]  =  x_i.
|
|          2.  If    t  is a constant symbol  c,
|
|              then  t[x_0 ... x_m] is the interpretation of c in $A$.
|
|          3.  If    t  =  F(t_1 ... t_n), where F is an n-placed function symbol,
|
|              then  t[x_0 ... x_m]  =  G(t_1 [x_0 ... x_m] ... t_n [x_0 ... x_m]),
|
|              where G is the interpretation of F in $A$.
|
| 1.3.14.  1.  Suppose p(v_0 ... v_m) is the atomic formula  t_1 = t_2,
|
|              where t_1 (v_0 ... v_m) and t_2 (v_0 ... v_m) are terms.
|
|              Then x_0, ..., x_m 'satisfies' p
|
|              if and only if
|
|              t_1 [x_0 ... x_m]  =  t_2 [x_0 ... x_m].
|
|          2.  Suppose p(v_0 ... v_m) is the atomic formula P(t_1 ... t_n),
|
|              where P is an n-placed relation symbol
|
|              and t_1 (v_0 ... v_m), ..., t_n (v_0 ... v_m) are terms.
|
|              Then x_0, ..., x_m 'satisfies' p
|
|              if and only if
|
|              R(t_1 [x_0 ... x_m] ... t_n [x_0 ... x_m]),
|
|              where R is the interpretation of P in $A$.
|
| For brevity, we write:
|
|              $A$  |=  p[x_0 ... x_m]
|
| for:         x_0, ..., x_m  satisfies  p  in  $A$.
|
| Thus 1.3.14 can also be formulated as:
|
| 1.3.14.  1.  $A$  |=  (t_1 = t_2) [x_0 ... x_m]
|
|              if and only if
|
|              t_1 [x_0 ... x_m]  =  t_2 [x_0 ... x_m].
|
|          2.  $A$  |=  P(t_1 ... t_n) [x_0 ... x_m]
|
|              if and only if
|
|              R(t_1 [x_0 ... x_m] ... t_n [x_0 ... x_m]).
|
| 1.3.15.  Suppose that p is a formula of $L$
|          and all free and bound variables
|          of p are among v_0, ..., v_m.
|
|          1.  If    p  is  r_1 & r_2
|
|              then  $A$  |=  p[x_0 ... x_m]
|
|              if and only if
|
|              both  $A$  |=  r_1 [x_0 ... x_m]
|
|              and   $A$  |=  r_2 [x_0 ... x_m].
|
|          2.  If    p  is  ~r
|
|              then  $A$  |=  p[x_0 ... x_m]
|
|              if and only if
|
|              not   $A$  |=  r[x_0 ... x_m].
|
|          3.  If    p  is  (`A`v_i) q
|
|              where i =< m,
|
|              then  $A$  |=  p[x_0 ... x_m]
|
|              if and only if
|
|              for every x in A,
|
|              $A$  |=  q[x_0 ... x_(i-1)  x  x_(i+1) ... x_m].
|
| Our definition of 1.3.12 is now completed.  As simple exercises,
| the reader should check that the abbreviations v, =>, <=>, `E`
| have their usual meanings.
|
| In particular:
|
|              If    p  is  (`E`v_i) q
|
|              where i =< m,
|
|              then  $A$  |=  p[x_0 ... x_m]
|
|              if and only if
|
|              there exists x in A such that
|
|              $A$  |=  q[x_0 ... x_(i-1)  x  x_(i+1) ... x_m].
|
| More important, the reader should realize that we can formulate
| a precise definition of t[x_0 ... x_m] and $A$ |= p[x_0 ... x_m]
| in set theory, based upon 1.3.13 - 1.3.15.
|
| Chang & Keisler, 'Model Theory', pages 27-28.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
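
An editorial sketch: for a finite model, definitions 1.3.13-1.3.15 become a straightforward pair of recursions, again in the hypothetical tuple encoding introduced after Note 24.

    # A model is (universe, interp), where interp sends relation symbols to sets
    # of tuples, function symbols to Python functions, and constant symbols to
    # elements.  x is the sequence x_0, ..., x_m.

    def value(model, t, x):                          # 1.3.13: t[x_0 ... x_m]
        universe, interp = model
        tag = t[0]
        if tag == 'var':
            return x[t[1]]
        if tag == 'const':
            return interp[t[1]]
        if tag == 'fun':
            return interp[t[1]](*(value(model, s, x) for s in t[2:]))

    def satisfies(model, p, x):                      # 1.3.14-1.3.15: A |= p[x_0 ... x_m]
        universe, interp = model
        tag = p[0]
        if tag == 'eq':
            return value(model, p[1], x) == value(model, p[2], x)
        if tag == 'rel':
            return tuple(value(model, t, x) for t in p[2:]) in interp[p[1]]
        if tag == 'and':
            return satisfies(model, p[1], x) and satisfies(model, p[2], x)
        if tag == 'not':
            return not satisfies(model, p[1], x)
        if tag == 'all':
            i = p[1]
            return all(satisfies(model, p[2], x[:i] + [a] + x[i + 1:])
                       for a in universe)

    # <A, =<> with A = {0, 1, 2}: the sentence (`A`v_0)(v_0 =< v_0) is satisfied
    # by any sequence long enough to cover its bound variable v_0.
    M = ({0, 1, 2}, {'<=': {(a, b) for a in range(3) for b in range(3) if a <= b}})
    p = ('all', 0, ('rel', '<=', ('var', 0), ('var', 0)))
    print(satisfies(M, p, [0]))                      # True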

MOD. Note 29

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| Having finished our definition, our first task
| is to prove the proposition that the relation:
|
|     $A$  |=  p(v_0 ... v_k) [x_0 ... x_m]
|
| depends only on x_0, ..., x_k, where k =< m.
| This is the last part of the plan we have
| outlined.
|
| 1.3.16.  Proposition.
|
|          1.  Let t(v_0 ... v_k) be a term and let 
|              x_0, ..., x_m and y_0, ..., y_n be two
|              sequences such that k =< m and k =< n,
|              and x_i = y_i whenever v_i is a free
|              variable of t.
|
|              Then  t[x_0 ... x_m]  =  t[y_0 ... y_n].
|
|          2.  Let p be a formula all of whose free and
|              bound variables are among v_0, ..., v_k
|              and let x_0, ..., x_m and y_0, ..., y_n
|              be two sequences with k =< m and k =< n,
|              and x_i = y_i whenever v_i is a free
|              variable of p.
|
|              Then  $A$  |=  p[x_0 ... x_m]
|
|              iff   $A$  |=  p[y_0 ... y_n].
|
| Remark.  Proposition 1.3.16 shows that the value of a term t at x_0, ..., x_m
|          and whether a formula p is satisfied or not by a sequence x_0, ..., x_m
|          'depend only' on those values of x_i for which v_i is a free variable,
|          and are 'independent' of the other values of the sequence as well as
|          the length of the sequence.  The length m of the sequence must be
|          high enough to cover all the free and bound variables of t and p
|          in order for the expressions t[x_0 ... x_m], $A$ |= p[x_0 ... x_m]
|          to be defined at all.  We can now immediately infer that if !s! is
|          a sentence, then $A$ |= !s![x_0 ... x_m] is entirely independent of
|          the sequence x_0, ..., x_m.  The importance of the above proposition
|          is that it allows us to make the following definition:
|
| 1.3.17.  Let p(v_0 ... v_k) be a formula all of whose free and bound variables
|          are among v_0, ..., v_m, k =< m.  Let x_0, ..., x_k be a sequence of
|          elements of A.  We say that p is 'satisfied' in $A$ by x_0, ..., x_k,
|
|          $A$  |=  p[x_0 ... x_k],
|
|          if and only if
|
|          p is satisfied in $A$ by x_0, ..., x_k, ..., x_m for
|          some (or, equivalently, every) x_(k+1), ..., x_m.
|
|          Let p be a sentence all of whose bound variables are among
|          v_0, ..., v_m.  We say that $A$ 'satisfies' p, in symbols
|          $A$ |= p, if and only if p is satisfied in $A$ by some
|          (or, equivalently, every) sequence x_0, ..., x_m.
|
| The proof of Proposition 1.3.16 is straightforward but tedious.
| We shall sketch it here as a first example of an inductive proof
| on the "complexity" of formulas.  We shall often omit similar easy
| inductive proofs in the future.
|
| Proof of Proposition 1.3.16.  [C&K, pages 30-31].
|
| Chang & Keisler, 'Model Theory', pages 28-31.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 30

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| We shall state one more elementary proposition which
| deals with the behavior of the satisfaction relation
| under the substitution of variables by terms.  We omit
| the proof, which is another tedious but straightforward
| induction.
|
| 1.3.18.  Proposition.
|
|          Let p(v_0 ... v_k) be a formula and let
|          t_0 (v_0 ... v_k), ..., t_k (v_0 ... v_k)
|          be terms.  Suppose that no variable occurring
|          in any of the terms t_0, ..., t_k occurs bound in p.
|
|          Let x_0, ..., x_k be a sequence of elements of A and
|          let p(t_0 ... t_k) be the formula obtained from p by
|          substituting  t_i for v_i  (i = 0, ..., k).
|
|          Then:
|
|          $A$  |=  p(t_0 ... t_k) [x_0 ... x_k]
|
|          if and only if
|
|          $A$  |=  p[t_0 [x_0 ... x_k] ... t_k [x_0 ... x_k]].
|
| We have now completed the project started several paragraphs back.
| Namely, we say of a sentence !s! that:
|
|          !s! is true in $A$
|
|          if and only if
|
|          $A$  |=  !s![x_0 ... x_m]
|
|          for some (or for every) sequence x_0, ..., x_m of A.
|
| We use the special notation $A$ |= !s! to denote that !s! is true in $A$.
| This last phrase is equivalent to each of the following phrases:
|
|          !s!  holds in  $A$
|
|          $A$  satisfies  !s!
|
|          $A$  is a model of  !s!
|
|          !s!  is satisfied in  $A$
|
| When it is not the case that !s! holds in $A$, we say that
| !s! is 'false' in $A$, or that !s! 'fails' in $A$, or that
| $A$ is a model of ~!s!.
|
| Given a set !S! of sentences, we say $A$ is a 'model' of !S!
| iff $A$ is a model of each !s! in !S!, and it is convenient
| to use the notation $A$ |= !S! for this notion.
|
| A sentence !s! that holds in every model for $L$ is called 'valid'.
| A sentence, or a set of sentences, is 'satisfiable' if and only if
| it has at least one model.  A sentence is 'refutable' iff it has no
| model;  whence, !s! is valid if and only if ~!s! is refutable.
|
| |= !s!  denotes that !s! is a valid sentence.
|
| A sentence !t! is a 'consequence' of another sentence !s!,
| in symbols !s! |= !t!, iff every model of !s! is a model of !t!.
|
| A sentence !t! is a 'consequence' of a set of sentences !S!,
| in symbols !S! |= !t!, iff every model of !S! is a model of !t!.
|
| It follows that:
|
|          !S! |_| {!s!}  |=  !t!
|
|          if and only if
|
|          !S!  |=  !s! => !t!
|
| Two models $A$ and $B$ for $L$ are 'elementarily equivalent' iff
| every sentence that is true in $A$ is true in $B$ and vice versa.
| We express this relationship between models by !=!.  It is easy to
| see that !=! is indeed an equivalence relation.  The symbol we have
| chosen to denote elementary equivalence is exactly the same [in the
| original text] as the identity symbol for the language $L$.  However,
| no confusion can ever arise because one is a relation between models
| for $L$ and the other is a relation between terms of $L$.  If the
| context is clear, 'equivalent' shall mean elementarily equivalent.
|
| 1.3.19.  Proposition.
|
|          If    $A$  ~=~  $B$
|
|          then  $A$  !=!  $B$.
|
|          In case $A$ is finite, then the converse is also true.
|
| Chang & Keisler, 'Model Theory', pages 31-32.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 31

| 1.  Introduction
|
| 1.3.  Languages, Models, and Satisfaction (cont.)
|
| We conclude this section by stating a number of important results
| without proofs, but whose proofs will be given in the next chapter.
|
| 1.3.20.  Theorem.  (Gödel's Completeness Theorem).
|
|          Given any sentence !s!,
|          !s! is a theorem
|          if and only if
|          !s! is valid.
|
| 1.3.21.  Theorem.  (Extended Completeness Theorem).
|
|          Let !S! be any set of sentences.
|          Then !S! is consistent
|          iff !S! has a model.
|
| 1.3.22.  Theorem.  (Compactness Theorem).
|
|          A set of sentences !S! has a model iff
|          every finite subset of !S! has a model.
|
| As in Section 1.2, we conclude with a table of equivalent notions.
|
|                          Table 1.3.1
| o-----------------------------o-----------------------------o
| |  Syntax                     |  Semantics                  |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  p is a theorem             |  p is valid                 |
| |                             |                             |
| |  |- p                       |  |= p                       |
| |                             |                             |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  !S! is consistent          |  !S! has a model            |
| |                             |                             |
| o-----------------------------o-----------------------------o
| |                             |                             |
| |  p is deducible from !S!    |  p is a consequence of !S!  |
| |                             |                             |
| |  !S! |- p                   |  !S! |= p                   |
| |                             |                             |
| o-----------------------------o-----------------------------o
|
| Chang & Keisler, 'Model Theory', pages 32-33.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 32

| 1.  Introduction
|
| 1.4.  Theories and Examples of Theories
|
| A (first-order) theory T of $L$ is a collection of sentences of $L$.
| T is said to be 'closed' iff it is closed under the |= relation.
| In view of Table 1.3.1, this is the same as requiring that T
| be closed under |- .  Since theories are sets of sentences
| of $L$, we may apply the expressions:
|
|       a model of a theory,
|
|       consistent theory,
|
|       satisfiable theory,
|
| as introduced in Section 1.3.
|
| A theory T is called 'complete' (in $L$) if and only if its set of
| consequences is maximal consistent.  If T is a theory of $L$, with
| $L$ c $L$’ and $L$ =/= $L$’, then T is not a closed theory of $L$’.
| On the other hand, it is easy to see that if $L$’ c $L$, then the
| 'restriction' of a closed theory T to $L$’, in symbols T | $L$’,
| is always a closed theory of $L$’.  T is a 'subtheory' of T’ iff
| T c T’.  If T is a subtheory of T’, then T’ is an 'extension' of T.
|
| A 'set of axioms' of a theory T is a set of sentences with the
| same consequences as T.  Clearly, T is a set of axioms of T, and
| the empty set is a set of axioms of T if and only if T is a set
| of valid sentences of $L$.  Every set of sentences !S! is a set
| of axioms for the closed theory T = {p : !S! |= p}.  A theory T
| is 'finitely axiomatizable' iff it has a finite set of axioms.
|
| The most convenient and standard way of giving a theory T is by
| listing a finite or infinite set of axioms for it.  Another way
| to give a theory is as follows:  Let $A$ be a model for $L$;
| then the 'theory of' $A$ is the set of all sentences which
| hold in $A$.  The theory of any model $A$ is obviously
| a complete theory.
|
| Historically, the importance of theories stems from the following
| two facts.  Once the axioms of a theory are given, then by using
| the relation |- we can find out, in a syntactical manner, all the
| consequences of T.  On the other hand, by using the satisfaction
| relation, we can also study all the models of T.
|
| By the extended completeness theorem, these two approaches
| give basically the same results about consequences of T.
| However, owing to the fact that models of T also have
| non-first-order properties, such as isomorphism,
| submodels, extensions, plus many others, the
| second approach leads to the field now
| known as model theory.
|
| We shall give in the rest of this section some examples of theories
| and their models to show the intimate connections that model theory
| has with other branches of mathematics.  In each example we describe
| a closed theory by a set of axioms.  Some classical results will be
| stated without proof.
|
| Chang & Keisler, 'Model Theory', pages 36-37.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

MOD. Note 33

| 1.  Introduction
|
| 1.4.  Theories and Examples of Theories (cont.)
|
| 1.4.1.  Example.  [The Theory of Partial Order].
|
|         Let $L$ consist of the single 2-placed relation symbol =<.
|         Using the usual notation for =<, we write x =< y for =<(x, y).
|         The theory of 'partial order' has three axioms:
|
|         1.  (`A`xyz)(x =< y  &  y =< z  =>  x =< z),
|
|         2.  (`A`xy) (x =< y  &  y =< x  =>  x = y),
|
|         3.  (`A`x)  (x =< x).
|
| They are, respectively, the transitive, antisymmetric, and reflexive properties of
| partial orders.  Any model <A, =< > of this theory consists of a nonempty set A
| and a partial order relation =< on A.  If we add the comparability axiom:
|
|         4.  (`A`xy) (x =< y  or  y =< x),
|
| we obtain the theory of 'simple order' (also called 'linear order').
| A model <A, =< > for this theory is a simply-ordered set.  Adding
| two more axioms (writing x =/= y for ~(x = y)):
|
|         5.  (`A`xy) (x =< y  &  x =/= y
|
|                      =>  (`E`z)(x =< z  &  z =/= x  &  z =< y  &  z =/= y)),
|
|         6.  (`E`xy) (x =/= y),
|
| we then have the theory of 'dense (simple) order'.
| The rationals with the usual =< are an example of
| a model of this theory.  The theory of dense order
| has no finite models.  If we wish to consider only
| dense orders 'without endpoints', we add the axioms:
|
|         7.  (`A`x)(`E`y)(x =< y  &  x =/= y),
|
|         8.  (`A`x)(`E`y)(y =< x  &  x =/= y).
|
| 1.4.2.  Proposition.
|
|         Any two countable models of the theory
|         of dense order without endpoints
|         are isomorphic.
|
| Chang & Keisler, 'Model Theory', pages 37-38.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
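
A small Python sketch, by way of illustration (mine, not from Chang & Keisler):
on a finite structure the axioms of partial and simple order can be checked by
brute force.  The divisibility relation on {1, 2, 3, 4, 6, 12} is a partial
order but not a simple order, while the usual =< is both.  (Density, axioms
5-8, cannot be illustrated this way, since dense order has no finite models.)

    from itertools import product

    def is_partial_order(A, leq):
        """Axioms 1-3: transitivity, antisymmetry, reflexivity."""
        transitive    = all(leq(x, z) or not (leq(x, y) and leq(y, z))
                            for x, y, z in product(A, repeat=3))
        antisymmetric = all(x == y or not (leq(x, y) and leq(y, x))
                            for x, y in product(A, repeat=2))
        reflexive     = all(leq(x, x) for x in A)
        return transitive and antisymmetric and reflexive

    def is_simple_order(A, leq):
        """Axioms 1-4: a partial order satisfying comparability."""
        comparable = all(leq(x, y) or leq(y, x) for x, y in product(A, repeat=2))
        return is_partial_order(A, leq) and comparable

    A = {1, 2, 3, 4, 6, 12}
    divides = lambda x, y: y % x == 0               # divisibility order on A

    print(is_partial_order(A, divides))             # True
    print(is_simple_order(A, divides))              # False: 4 and 6 are incomparable
    print(is_simple_order(A, lambda x, y: x <= y))  # True: the usual order on A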

MOD. Note 34

| 1.  Introduction
|
| 1.4.  Theories and Examples of Theories (cont.)
|
| 1.4.3.  Example.  [The Theory of Boolean Algebras].
|
|         Let $L$ = {+, ·, ~, 0, 1},
|
|         where + and · are 2-placed function symbols,
|         where ~ is a 1-placed function symbol, and
|         where 0 and 1 are constant symbols.
|
|         The theory of 'Boolean algebras' has the following axioms,
|         where we shall assume that the following formulas all have
|         their free variables universally quantified in front:
|
|         1.  Associativity of + and ·
|
|             x + (y + z) = (x + y) + z
|
|             x · (y · z) = (x · y) · z
|
|         2.  Commutativity of + and ·
|
|             x + y = y + x
|
|             x · y = y · x
|
|         3.  Idempotent Laws
|
|             x + x = x
|
|             x · x = x
|
|         4.  Distributive Laws
|
|             x + (y · z) = (x + y) · (x + z)
|
|             x · (y + z) = (x · y) + (x · z)
|
|         5.  Absorption Laws
|
|             x + (x · y) = x
|
|             x · (x + y) = x
|
|         6.  De Morgan Laws
|
|             ~(x + y) = ~x · ~y
|
|             ~(x · y) = ~x + ~y
|
|         7.  Laws of Zero and One
|
|             x + 0 = x
|
|             x · 0 = 0
|
|             x + 1 = 1
|
|             x · 1 = x
|
|             0 =/= 1
|
|             x + ~x = 1
|
|             x · ~x = 0
|
|         8.  Law of Double Negation
|
|             ~~x = x
|
| A model $A$ = <A, +, ·, ~, 0, 1> of this theory is called a Boolean algebra.
| Strictly speaking, we should write +_$A$, ·_$A$, ~_$A$, 0_$A$, 1_$A$ in the
| above model.  But following our conventions we shall drop the subscripts.
|
| A partial order =< can be defined on A by:
|
|         x =< y  if and only if  x + y = y.
|
| It can be shown that =< has a largest element, namely 1,
| a smallest element, namely 0, and that, given any two elements
| x, y in A, the l.u.b. (least upper bound) of x and y is x + y,
| and the g.l.b. (greatest lower bound) of x and y is x · y.
|
| A 'field of sets' S is a collection of subsets of a nonempty set X
| such that both the empty set Ø and the set X are in S, and such that
| S is closed under |_|, |¯|, and ~ with respect to X.  It is easy
| to see that if S is a field of sets, then:
|
|         <S, |_|, |¯|, ~, Ø, X>
|
| is a Boolean algebra.  Conversely, we have:
|
| 1.4.4.  Proposition.  (Representation Theorem for Boolean Algebras).
|
|         Every Boolean algebra is isomorphic to a field of sets.
|
| An 'atom' of a Boolean algebra is an element x =/= 0 such that
| there is no element y which lies properly between 0 and x, i.e.,
| not (0 =< y =< x  &  0 =/= y  &  y =/= x).  A Boolean algebra is
| 'atomic' if and only if every nonzero element x includes an atom.
| A Boolean algebra is 'atomless' if and only if it has no atoms.
| There are Boolean algebras which are neither atomic nor atomless.
|
| Adding the axiom (writing x =< y for x + y = y):
|
|         a.  (`A`x)(0 =/= x  =>
|
|                    (`E`y)(y =< x  &  0 =/= y  &
|
|                           (`A`z)(z =< y  =>  z = 0  or  z = y)))
|
| gives us the theory of 'atomic Boolean algebras';
| while adding the axiom:
|
|         å.  ~(`E`y)(0 =/= y  &  (`A`z)(z =< y  =>  z = 0  or  z = y))
|
| gives us the theory of 'atomless Boolean algebras'.
|
| 1.4.5.  Proposition.
|
|         Any two countable atomless Boolean algebras are isomorphic.
|
| Some other relevant facts about Boolean algebras can be found in the exercises.
|
| Chang & Keisler, 'Model Theory', pages 38-39.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
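
By way of illustration (a sketch of mine, not from the text), the power set of
X = {1, 2, 3} is a field of sets, hence a Boolean algebra, with |_| for +,
|^| for ·, and complement relative to X for ~.  The Python sketch below
spot-checks two of the axioms, confirms that the order x =< y iff x + y = y
is just inclusion, and lists the atoms, which are the singletons.

    from itertools import combinations

    X = frozenset({1, 2, 3})
    S = [frozenset(c) for r in range(len(X) + 1)
                      for c in combinations(X, r)]      # the power set of X

    join  = lambda a, b: a | b      # + is union
    meet  = lambda a, b: a & b      # · is intersection
    compl = lambda a: X - a         # ~ is complement relative to X
    zero, one = frozenset(), X

    # Spot-check a De Morgan law and the law x + ~x = 1.
    assert all(compl(join(a, b)) == meet(compl(a), compl(b)) for a in S for b in S)
    assert all(join(a, compl(a)) == one for a in S)

    # The partial order defined by x =< y iff x + y = y is set inclusion.
    leq = lambda a, b: join(a, b) == b
    assert all(leq(a, b) == (a <= b) for a in S for b in S)

    # Atoms: nonzero elements with no element lying properly between 0 and them.
    atoms = [a for a in S if a != zero and
             not any(leq(b, a) and b not in (zero, a) for b in S)]
    print([set(a) for a in atoms])  # [{1}, {2}, {3}]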

MOD. Note 35

| 1.  Introduction
|
| 1.4.  Theories and Examples of Theories (cont.)
|
| 1.4.6.  Example.  [The Theory of Groups].
|
|         Let $L$ = {+, 0},
|
|         where + is a 2-placed function symbol
|         and   0 is a constant symbol.
|
|         The theory of 'groups' has the following axioms:
|
|         1.  Associativity
|
|             x + (y + z) = (x + y) + z
|
|         2.  Identity
|
|             x + 0 = x
|
|             0 + x = x
|
|         3.  Existence of Inverse
|
|             (`E`y) (x + y = 0  &  y + x = 0)
|
| A model <G, +, 0> of this theory is a 'group'.
| We obtain the theory of 'Abelian groups' when
| we add the axiom:
|
|         4.  Commutativity
|
|             x + y = y + x
|
| The 'order' of an element x of a group is the least n such that
| x + x + ... + x (n times) = 0.  If no such n exists, the order
| of x is infinity.  For a fixed n >= 1, we can write down the
| abbreviation nx for the expression:
|
|         x + (x + ( ... (x + x) ... )),  n times.
|
| Suppose p is a prime.  The theory of 'Abelian groups
| with all elements of order p' has the extra axiom:
|
|         5_p.  Prime Order
|
|               px = 0
|
| 1.4.7.  Proposition.
|
|         Any two models of the theory of Abelian groups
|         with all elements of order p of the same power
|         are isomorphic.
|
| To obtain the theory of 'Abelian groups with all elements
| of order infinity (torsion-free)' we need an infinite list
| of axioms.  For each n >= 1, we add the axiom:
|
|         6_n.  Torsion Free
|
|               x =/= 0  =>  nx =/= 0
|
| This theory is our first example of a non-finitely-axiomatizable theory.
| If we add a further infinite list of axioms, one for each n >= 1, thus:
|
|         7_n.  Divisibility
|
|               (`E`y)(ny = x)
|
| we have the theory of 'divisible torsion-free Abelian groups'.
|
| 1.4.8.  Proposition.
|
|         Any two uncountable divisible torsion-free Abelian groups of the
|         same power are isomorphic.  There are countably many such groups
|         which are countable and not isomorphic.
|
| Chang & Keisler, 'Model Theory', pages 39-40.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
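
A concrete illustration (mine, not from the text):  the cyclic group
Z_p = <{0, 1, ..., p-1}, + mod p, 0> is Abelian, and when p is prime every
nonzero element has order p, so Z_p is a model of axioms 1-4 and 5_p.
The Python sketch below computes the order of each element of Z_7.

    def order(x, p):
        """Least n >= 1 with nx = 0 in the group of integers mod p."""
        n, s = 1, x % p
        while s != 0:
            n, s = n + 1, (s + x) % p
        return n

    p = 7
    print([order(x, p) for x in range(p)])   # [1, 7, 7, 7, 7, 7, 7]
    # Every nonzero element of Z_7 has order 7.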

MOD. Note 36

| 1.  Introduction
|
| 1.4.  Theories and Examples of Theories (cont.)
|
| 1.4.9.  Example.  [Commutative Rings to Ordered Fields].
|
|         Let $L$ = {+, ·, 0, 1},
|
|         where + and · are 2-placed function symbols
|         and   0 and 1 are constant symbols.
|
|         The theory of 'commutative rings (with unit)' has
|         the axioms (1, 2, 3, 4) listed above [copied here]
|         plus the axioms (8, 9, 10, 11) given below:
|
|         1.   Associativity (+)
|
|              x + (y + z) = (x + y) + z
|
|         2.   Identity (+)
|
|              x + 0 = x
|
|              0 + x = x
|
|         3.   Existence of Inverse (+)
|
|              (`E`y) (x + y = 0  &  y + x = 0)
|
|         4.   Commutativity (+)
|
|              x + y = y + x
|
|         8.   Unit (·)
|
|              1 · x = x
|
|              x · 1 = x
|
|         9.   Associativity (·)
|
|              x · (y · z) = (x · y) · z
|
|         10.  Commutativity (·)
|
|              x · y = y · x
|
|         11.  Distributivity (· over +)
|
|              x · (y + z) = (x · y) + (x · z)
|
| Adding one more axiom:
|
|         12.  Absence of Zero Divisors
|
|              x · y = 0  =>  x = 0  or  y = 0
|
| gives us the theory of 'integral domains'.
|
| Adding the two axioms:
|
|         13.  Distinct Identities
|
|              0 =/= 1
|
|         14.  Existence of Inverse (·)
|
|              x =/= 0  =>  (`E`y)(y · x = 1)
|
| gives us the important theory of 'fields'.
| For a fixed prime p, if we add the axiom:
|
|         15_p.  Prime Characteristic
|
|                p1 = 0
|
| we have the theory of 'fields of characteristic p'.
| On the other hand, if we add for all primes p the
| negation of (15_p), namely, all the axioms:
|
|         16.  Characteristic Zero
|
|              p1 =/= 0, with p a prime
|
| we have the theory of 'fields of characteristic zero'.
| We now introduce the abbreviation x^n for the expression:
|
|         x · (x · ( ... (x · x) ... )),  n times.
|
| The infinite list of axioms, one for each n >= 1, as follows:
|
|         17_n.  (`E`y)
|
|                (x_n · y^n  +  x_(n-1) · y^(n-1)  + ... +  x_1 · y  +  x_0  =  0)
|
|                 or  x_n = 0
|
| when added to the theory of fields, gives us
| the theory of 'algebraically closed fields'.
|
| 1.4.10.  Proposition.
|
|          Any two uncountable algebraically closed fields of
|          the same characteristic and power are isomorphic.
|
| Each axiom (17_n) says that every polynomial of degree n has a root.
| The theory of 'real closed fields' has as axioms all the axioms for
| fields plus the axiom:
|
|          18.  (`A`x)(`E`y) (y^2 = x  or  y^2 + x = 0)
|
| and two infinite lists of axioms.  One is the infinite list (17_n)
| for all odd n, and the other is the infinite list that says that 0
| is not a sum of nontrivial squares:
|
|          18_n.  (x_0)^2  +  (x_1)^2  +  ...  +  (x_n)^2  =  0
|
|                 =>
|
|                 x_0 = 0  &  x_1 = 0  &  ...  &  x_n = 0
|
| The theory of 'ordered fields' is formulated in
| the language $L$ = {=<, +, ·, 0, 1}.  It has all
| the field axioms, the linear order axioms, and the
| additional axioms:
|
|          19.  x =< y             =>  x + z  =<  y + z
|
|          20.  x =< y  &  0 =< z  =>  x · z  =<  y · z
|
| The ordered fields of rational numbers and of real numbers are examples.
|
| Of the examples of theories we have discussed so far, the following are complete:
| dense order without endpoints, atomless Boolean algebras, infinite Abelian groups
| with all elements of order p, torsion-free divisible Abelian groups, algebraically
| closed fields of a given characteristic, and real closed fields.  The various
| propositions show that each of these complete theories, except the last one,
| enjoys the unusual property that in some (sometimes all) infinite powers
| all models of the given theory of that power are isomorphic.
|
| Chang & Keisler, 'Model Theory', pages 40-42.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
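
Another small illustration (mine, not from the text):  the integers mod a
prime p form a field of characteristic p.  The Python sketch below verifies
axiom 14 (existence of multiplicative inverses) by brute force, shows that
the check fails for a composite modulus, and confirms that p1 = 0 while no
smaller multiple of 1 vanishes, so the characteristic is p.

    def has_inverses(p):
        """Brute-force check of axiom 14 for the ring of integers mod p."""
        return all(any((x * y) % p == 1 for y in range(p))
                   for x in range(1, p))

    p = 7
    print(has_inverses(p))    # True:  Z_7 is a field
    print(has_inverses(6))    # False: 2 has no multiplicative inverse mod 6
    print([n for n in range(1, p + 1) if (n * 1) % p == 0])   # [7]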

MOD. Note 37

| 1.  Introduction
|
| 1.4.  Theories and Examples of Theories (cont.)
|
| 1.4.11.  Example.  [Number Theory, or Peano Arithmetic].
|
|          Let $L$ = {+, ·, S, 0},
|
|          where + and · are 2-placed function symbols,
|          where S is a 1-placed function symbol,
|          called the 'successor' function,
|          and 0 is a constant symbol.
|
|         'Number theory' (or 'Peano arithmetic')
|          has the following list of axioms:
|
|          1.  0 =/= Sx                    (0 has no predecessor)
|
|          2.  Sx = Sy  =>  x = y          (S is a one-to-one map)
|
|          3.  x + 0 = x
|
|          4.  x + Sy = S(x + y)
|
|          5.  x · 0 = 0
|
|          6.  x · Sy = (x · y) + x
|
| and, finally, for each formula
| !p!(v_0, v_1, ..., v_n) of $L$,
| where v_0 does not occur bound
| in !p!, the axiom:
|
|          7_!p!.  !p!(0, v_1, ..., v_n)
|
|                  &
|
|                  (`A`v_0) (!p!(v_0, v_1, ..., v_n) => !p!(Sv_0, v_1, ..., v_n))
|
|                  =>
|
|                  (`A`v_0)  !p!(v_0, v_1, ..., v_n)
|
| Axioms (3) and (4) are the usual recursive definition of + in terms of 0 and S,
| while axioms (5) and (6) are the recursive definition of · in terms of 0, S, +.
| The whole list of axioms (7_!p!), one for each !p!, is called the 'axiom schema
| of induction'.
|
| The 'standard model' of number theory is <!w!, +, ·, S, 0>, where
| S is the successor function and +, ·, 0 have their usual meaning.
| All other (non-isomorphic) models are called 'nonstandard'.
|'Complete number theory' is the set of all sentences !p!
| of $L$ that hold in the standard model.
|
| There are several deep results about number theory:
|
| Gödel's (1931) incompleteness theorem states that number theory
| is not complete;  therefore, complete number theory is a proper
| extension of number theory.
|
| No finite extension (that is, by adding a finite number of new axioms)
| of number theory is complete;  therefore complete number theory is not
| finitely axiomatizable over number theory, whence it is certainly not
| finitely axiomatizable.
|
| Number theory itself is not finitely axiomatizable.  This was proved by
| Ryll-Nardzewski (1952) by the use of nonstandard models.  The existence of
| nonstandard models of complete number theory was first shown by Skolem (1934).
|
| We mention a number of interesting subtheories of number theory.
| For instance, if the induction schema (7_!p!) is replaced by
| the single axiom:
|
|          8.  (`A`x) (x =/= 0  =>  (`E`y) (x = Sy))
|
| we obtain a finitely axiomatizable subtheory of number theory (the theory Q
| of Tarski, Mostowski, and Robinson, 1953) which is incomplete, and no finite
| extension of it is complete.
|
| In the language $L$’ = {S, 0} obtained by leaving out
| the symbols + and ·, the subtheory of number theory
| given by axioms (1), (2), and the schema (7_!p!),
| restricted of course to formulas of $L$’, is
| complete.  However, it is still not finitely
| axiomatizable, as can be shown by using the
| compactness theorem.
|
| In the language $L$” = {+, S, 0}, the axioms (1, 2, 3, 4) and
| the schema (7_!p!), again restricted to formulas of $L$”, give the
|'additive number theory'.  This theory is not finitely axiomatizable,
| but it is complete (Presburger, 1929);  the completeness of the theory
| in $L$’ from the previous paragraph follows from the proof given by Presburger.
|
| Chang & Keisler, 'Model Theory', pages 42-43.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
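
Axioms (3)-(6) read off directly as a recursive program for + and · in terms
of 0 and S.  The Python sketch below (mine, not from Chang & Keisler) runs
those clauses on the standard model, where S(x) = x + 1.

    def S(x):
        """The successor function of the standard model."""
        return x + 1

    def add(x, y):
        """Axioms 3 and 4:  x + 0 = x,  x + Sy = S(x + y)."""
        return x if y == 0 else S(add(x, y - 1))

    def mul(x, y):
        """Axioms 5 and 6:  x · 0 = 0,  x · Sy = (x · y) + x."""
        return 0 if y == 0 else add(mul(x, y - 1), x)

    print(add(3, 4), mul(3, 4))   # 7 12, as in the standard model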

MOD. Note 38

| 1.  Introduction
|
| 1.4.  Theories and Examples of Theories (cont.)
|
| 1.4.12.  Example.  [Theories of Sets].
|
| We shall now discuss some examples of set theories.
|
| There are two quite different reasons to include a
| discussion of set theories in a book on model theory.
| The first reason is that, if we wish to be completely
| precise, we should formulate our whole treatment of
| model theory within an appropriate system of axiomatic
| set theory.  Actually, we are taking the more practical
| approach of formulating things in an informal set theory,
| but it is still important that, 'in principle', we could
| do it all in an axiomatic set theory.  We have left for the
| Appendix an outline of the informal set theory that we are
| using.  The other reason for discussing set theories is that
| they are among the most interesting and important examples of
| theories.  The second reason is the one which concerns us at
| this time.  The theory of models is particularly well suited
| to the study of models of set theory.  In the Appendix we have
| listed the axioms for four of the most familiar set theories:
| Zermelo, Zermelo-Fraenkel, Bernays, and Bernays-Morse.  The first
| two of them are formulated in the language $L$ = {in}, while the
| other two are formulated in the language $L$’ = {in, V}, where
|'in' is a binary relation symbol and V is a unary relation symbol.
| Zermelo set theory is a subtheory of Zermelo-Fraenkel, and Bernays
| set theory is a subtheory of Bernays-Morse.
|
| The deepest results in set theory use constructions of models.
| However, these constructions are often of a special nature,
| for models of set theory only, and are therefore outside
| the scope of this book.  For instance, the notion of
| constructible sets was used by Gödel (1939) to show
| that if Bernays set theory is consistent, then it
| remains consistent if we add to it the axiom of
| choice and the generalized continuum hypothesis;
| in other words, if Bernays set theory has a model,
| then it has a model in which the axiom of choice
| and the generalized continuum hypothesis are true.
| The same proofs and results are also well known to
| hold for Zermelo-Fraenkel set theory.  Cohen's forcing
| construction has been used by Cohen and others to obtain
| a remarkable series of additional consistency results (see
| Cohen, 1963).  For example, if Bernays (or Zermelo-Fraenkel)
| set theory has a model, then it has a model in which the
| axiom of choice is false, and another model in which
| the axiom of choice is true but the generalized
| continuum hypothesis is false.
|
| In the rest of our discussion we use the abbreviation ZF
| for "Zermelo-Fraenkel set theory".  Whether or not we can
| prove that ZF is consistent depends on just how much we are
| assuming in our intuitive set theory.  If our intuitive set
| theory is just a replica of ZF, then we cannot prove the
| consistency of ZF, even if we allow the use of the axiom
| of choice.  Similarly, for any of the other set theories
| T we have introduced in the Appendix, we cannot prove the
| consistency of T if our intuitive set theory is a replica
| of T.  These assertions follow from the Gödel incompleteness
| theorem.  On the other hand, in Bernays-Morse set theory we
| can prove the consistency of Bernays set theory and of ZF.
| In ZF we can prove the consistency of Zermelo set theory.
| If we assume the existence of an inaccessible cardinal,
| then we can prove that Bernays-Morse set theory as well
| as ZF are consistent.  Bernays set theory and ZF are
| very close to each other, and we can prove that one
| is consistent if and only if the other is.  We shall
| leave the last three results above for exercises.
|
| Neither Zermelo set theory, nor ZF, nor Bernays-Morse
| set theory is finitely axiomatizable (assuming that they
| are consistent).  But, surprisingly, Bernays set theory is
| finitely axiomatizable (Bernays, 1937).  With its finite
| axiomatization it is sometimes called Bernays-Gödel set
| theory.  Each of the four set theories in our discussion,
| like number theory, has the following property:  if the
| theory is consistent, then it is not complete, and no
| finite extension of it is complete.  This is another
| consequence of the Gödel incompleteness theorem.
|
| There is no completely satisfactory notion of a "standard" model of set theory.
| The closest thing to it is the notion of a 'natural model'.  Natural models,
| roughly, are models of the form <M, in>, where M is a set of sets formed
| by starting with the empty set and repeating the operations of union
| and power set, while 'in' is the [epsilon]-relation restricted to M.
| More precisely, we define for each ordinal !a! the set R(!a!) by:
|
|          R(0)        =  0,
|
|          R(!a! + 1)  =  S(R(!a!)),
|
|          R(!a!)      =  |_|^(!b! < !a!) R(!b!),  if !a! is a limit ordinal.
|
| Then a 'natural model' of ZF (or of Zermelo set theory) is a model of the form
| <R(!a!), in>.  A natural model of Bernays set theory is a model of the form
| <R(!a! + 1), in, R(!a!)>.
|
| None of our set theories has any countable natural models.  For this
| reason, a somewhat weaker notion of "standard" model is also important.
| A model <M, in> is said to be a 'transitive model' if and only if 'in' is
| the [epsilon]-relation restricted to M and every element of an element of
| M is an element of M.  For models of the language $L$’ = {in, V} we make a
| similar definition.  The countable transitive models are the most important
| models for Cohen's forcing construction.
|
| Since number theory has just one standard model and is not complete, it
| has consistent extensions which have no standard models.  If ZF has any
| transitive model at all, then it has many nonequivalent transitive models.
| Nevertheless, if ZF is consistent, then it has consistent extensions which
| have no transitive models at all.  Moreover, in ZF plus the axiom of choice,
| we cannot prove the following:  If ZF has a model, then ZF has a transitive
| model.
|
| Chang & Keisler, 'Model Theory', pages 43-45.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
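
For finite ordinals the ranks R(!a!) can be computed outright.  The Python
sketch below (my own illustration; here S(X) is read as the power-set
operation, as in the definition above) builds R(0) through R(5) with
frozensets and prints their sizes: 0, 1, 2, 4, 16, 65536.

    from itertools import combinations

    def powerset(X):
        """S(X): the set of all subsets of the finite set X."""
        xs = list(X)
        return frozenset(frozenset(c) for r in range(len(xs) + 1)
                                      for c in combinations(xs, r))

    R = [frozenset()]                 # R(0) = 0, the empty set
    for n in range(5):
        R.append(powerset(R[n]))      # R(n + 1) = S(R(n))

    print([len(r) for r in R])        # [0, 1, 2, 4, 16, 65536]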

MOD. Note 39

| 1.  Introduction
|
| 1.5.  Elimination of Quantifiers
|
| Each model $A$ of a theory T gives rise to a complete theory, namely
| the set of all sentences holding in $A$, which is an extension of T.
| For this reason it is important to know something about the complete
| extensions of a theory.  In a few fortunate cases, it is possible to
| give a simple description of all the complete extensions of a theory
| by using the method of elimination of quantifiers.
|
| This method applies only to very special theories.  Moreover, each time the
| method is applied to a new theory we must start from scratch in the proofs,
| because there are few opportunities to use general theorems about models.
| On the other hand, the method is extremely valuable when we want to beat
| a particular theory into the ground.  When it can be carried out, the
| method of elimination of quantifiers gives a tremendous amount of
| information about a theory.  For instance, it tells us about the
| behavior of all formulas, as well as all sentences, relative to
| the theory.  Usually it also gives a uniform way of deciding
| whether or not a sentence belongs to the theory;  in other
| words, it gives a proof that the theory is decidable.
|
| The question of the decidability of a theory lies outside the scope of this book,
| since it is not usually considered model theory.  However, it is a very important
| question, and in fact the most striking applications of the method of elimination
| of quantifiers are to show that certain theories are decidable.  The method is
| also valuable as a source of examples of thoroughly understood theories, which
| are useful for testing conjectures and for illustrating results.  The method
| may be thought of as a direct attack on a theory.  Later on we shall learn
| of several more indirect attacks on theories, which work more often but
| give less information in particular cases.
|
| Besides describing the method, we need some more notation.  In Section 1.3,
| we introduced the notion of a sentence p being a consequence of a set !S! of
| sentences, in symbols !S! |= p.  What meaning shall we give to !S! |= p if p
| is a formula?  We shall say that a formula p(v_0 ... v_n) is a 'consequence'
| of !S!, symbolically !S! |= p, if and only if for every model $A$ of !S! and
| every sequence a_0, ..., a_n in A, the sequence a_0, ..., a_n satisfies p.
| It follows that the 'formula' p(v_0 ... v_n) is a consequence of !S! iff
| the 'sentence' (`A`v_0 ... v_n) p(v_0 ... v_n) is a consequence of !S!.
| We say that two formulas p, q are !S!-'equivalent' iff !S! |= p <=> q.
|
| In general, the method of elimination of quantifiers is as follows:
| First, depending on the theory T, we pick out an appropriate set of
| formulas, called 'basic formulas'.  By a 'Boolean combination' of
| basic formulas we mean a formula obtained from basic formulas by
| repeated application of the connectives ~ and &.  The main result
| to be proved is that 'every formula is T-equivalent to a Boolean
| combination of basic formulas'.  The key step in the proof is
| the step where we "eliminate quantifiers".  In fact, we may
| state at once a simple but general lemma which shows why
| the name "elimination of quantifiers" is given to the
| method (the name is due to Tarski, 1935).
|
| 1.5.1.  Lemma.  Let T be a theory and let !S! be a set of formulas,
|         called basic formulas.  In order to show that every formula
|         is T-equivalent to a Boolean combination of basic formulas,
|         it is sufficient to show the following:
|
|         1.  Every atomic formula is T-equivalent to
|             a Boolean combination of basic formulas.
|
|         2.  If r is a Boolean combination of basic formulas, then
|             (`E`v_m)r is T-equivalent to a Boolean combination of
|             basic formulas.
|
| Proof.  Let Q be the set of all formulas which are
|         T-equivalent to a boolean combination of
|         basic formulas.  We show by induction
|         that every formula p belongs to Q.
|
|         If    p is an atomic formula,
|         then  p is in Q by (1).
|
|         If p is ~q,  where q is in Q,
|         it is obvious that p is in Q.
|
|         Similarly, if p is q_1 & q_2,
|         where q_1 and q_2 are in Q,
|         then p is in Q.
|
|         If p is (`E`v_m)q, where q is in Q,
|         then q is T-equivalent to a Boolean combination r of basic formulas.
|         Moreover, p is T-equivalent to (`E`v_m)r.  By (2), (`E`v_m)r is in Q,
|         so p is in Q.  -|
|
| Chang & Keisler, 'Model Theory', pages 49-50.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
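
The proof of Lemma 1.5.1 is in effect a recursive procedure:  given a routine
for atomic formulas (hypothesis 1) and a routine that eliminates one existential
quantifier from a Boolean combination (hypothesis 2), every formula is rewritten
bottom-up.  The Python sketch below (my own schematic rendering, with formulas
as nested tuples and the two hypotheses passed in as functions) records just
that recursion;  Note 40's theory !D! supplies concrete versions of the two
subroutines.

    def to_boolean_combination(p, handle_atomic, eliminate_exists):
        """Rewrite formula p as a Boolean combination of basic formulas,
        given the subroutines promised by hypotheses (1) and (2).
        Formulas are nested tuples:
            ('atomic', ...), ('not', q), ('and', q1, q2), ('exists', v, q)."""
        tag = p[0]
        recur = lambda q: to_boolean_combination(q, handle_atomic, eliminate_exists)
        if tag == 'atomic':                          # hypothesis (1)
            return handle_atomic(p)
        if tag == 'not':                             # negation passes through
            return ('not', recur(p[1]))
        if tag == 'and':                             # conjunction passes through
            return ('and', recur(p[1]), recur(p[2]))
        if tag == 'exists':                          # the real work: hypothesis (2)
            return eliminate_exists(p[1], recur(p[2]))
        raise ValueError('unknown connective: %r' % (tag,))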

MOD. Note 40

| 1.  Introduction
|
| 1.5.  Elimination of Quantifiers (cont.)
|
| We shall illustrate the method with two simple examples.
| Our first example is the theory of dense simple order without
| endpoints (Example 1.4.1).  Let us temporarily (in this section
| only) call this theory !D!.  As we mentioned in Section 1.4, the
| theory !D! is complete.  The method of elimination of quantifiers
| is one of several ways which we shall come across for proving that
| theories are complete.  The completeness of !D! will follow from
| our results below.  The elimination of quantifiers was applied
| to the theory !D! very early, by Langford (1927).
|
| As basic formulas we shall take the atomic formulas:
|
|       v_m = v_n,       v_m =< v_n.
|
| The Boolean combinations of atomic formulas are precisely the formulas which
| have no quantifiers.  In any language, formulas which have no quantifiers are
| called 'open formulas'.  We wish to prove that every formula p is !D!-equivalent
| to an open formula q.  As we carry out our arguments, we shall also keep track
| of which variables occur in the open formula which is !D!-equivalent to
| a given formula.  This will be useful for applications.  Before we can eliminate
| any quantifiers, we must take a close look at the open formulas.  For convenience,
| we use the abbreviation:
|
|       v_m < v_n    for    v_m =< v_n  &  ~(v_m = v_n).
|
| Let us consider the n + 1 variables v_0, ..., v_n, n > 0.
|
| By an 'arrangement' of the variables v_0, ..., v_n we mean
| a finite conjunction of the form:
|
|       r_0  &  r_1  &  ...  &  r_(n-1),
|
| where u_0, ..., u_n is a renumbering of v_0, ..., v_n, and each formula
| r_i is either u_i < u_(i+1) or else u_i = u_(i+1).  The lemma below allows
| us to put every open formula into a "normal form" built up from arrangements
| of the variables.
|
| 1.5.2.  Lemma.
|
|         Every open formula p(v_0, ..., v_n) is !D!-equivalent
|         either to one of the formulas v_0 < v_0, v_0 = v_0, or
|         else to the disjunction of finitely many arrangements
|         of the variables v_0, ..., v_n.
|
| Proof.


| Chang & Keisler, 'Model Theory', pages 50-51.
|
| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.
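
The arrangements of v_0, ..., v_n are easy to enumerate mechanically:  choose a
renumbering u_0, ..., u_n and, between each adjacent pair, choose < or =.  The
Python sketch below (my own illustration) lists the 3! · 2^2 = 24 arrangements
of v_0, v_1, v_2;  note that distinct arrangements can still be !D!-equivalent,
namely when they differ only by permuting variables joined by =.

    from itertools import permutations, product

    def arrangements(variables):
        """All arrangements: a renumbering of the variables together with
        a choice of '<' or '=' between each adjacent pair."""
        result = []
        for perm in permutations(variables):
            for signs in product(['<', '='], repeat=len(variables) - 1):
                conjuncts = ['%s %s %s' % (perm[i], signs[i], perm[i + 1])
                             for i in range(len(variables) - 1)]
                result.append('  &  '.join(conjuncts))
        return result

    arrs = arrangements(['v_0', 'v_1', 'v_2'])
    print(len(arrs))     # 24
    print(arrs[0])       # v_0 < v_1  &  v_1 < v_2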

PAS. Probability And Statistics

PAS. Note 1

Excerpts from 'Introduction to Probability Theory'
by Paul G. Hoel, Sidney C. Port, Charles J. Stone.

| 1.2.  Probability Spaces
|
| Our purpose in this section is to develop the formal
| mathematical structure, called a probability space,
| that forms the foundation for the mathematical
| treatment of random phenomena.
|
| Envision some real or imaginary experiment that we are trying to model.
| The first thing we must do is decide on the possible outcomes of the
| experiment.  It is not too serious if we admit more things into our
| consideration than can really occur, but we want to make sure that
| we do not exclude things that might occur.  Once we decide on the
| possible outcomes, we choose a set !W! [Omega] whose points !w!
| [omega] are associated with these outcomes.  From the strictly
| mathematical point of view, however, !W! is just an abstract
| set of points.
|
| We next take a nonempty collection $A$ of subsets of !W! that is
| to represent the collection of "events" to which we wish to assign
| probabilities.  By definition, now, an 'event' means a set A in $A$.
| The statement 'the event A occurs' means that the outcome of our
| experiment is represented by some point !w! in A.  Again, from
| the strictly mathematical point of view, $A$ is just a specified
| collection of subsets of the set !W!.  Only sets A in $A$, i.e.,
| events, will be assigned probabilities.  In our model in Example 1,
| $A$ consisted of all subsets of !W!.  In the general situation when
| !W! does not have a finite number of points, as in Example 2, it may
| not be possible to choose $A$ in this manner.
|
| Hoel, Port, Stone, 'Probability Theory', p. 6.
|
| Hoel, P.G., Port, S.C., & Stone, C.J.,
|'Introduction to Probability Theory',
| Houghton Mifflin, Boston, MA, 1971.

PAS. Note 2

| 1.2.  Probability Spaces (cont.)
|
| The next question is, what should the collection $A$ be?
| It is quite reasonable to demand that $A$ be closed under
| finite unions and finite intersections of sets in $A$ as
| well as under complementation.
|
| For example, if A and B are two events, then A |_| B occurs if the
| outcome of our experiment is either represented by a point in A or
| a point in B.  Clearly, then, if it is going to be meaningful to
| talk about the probabilities that A and B occur, it should also
| be meaningful to talk about the probability that either A or B
| occurs, i.e., that the event A |_| B occurs.  Since only sets
| in $A$ will be assigned probabilities, we should require that
| A |_| B is in $A$ whenever A and B are members of $A$.
|
| Now A |^| B occurs if the outcome of our experiment is represented
| by some point that is in both A and B.  A similar line of reasoning
| to that used for A |_| B convinces us that we should have A |^| B
| in $A$ whenever A, B are in $A$.
|
| Finally, to say that the event A does not occur is to say that
| the outcome of our experiment is not represented by a point in A,
| so that it must be represented by some point in A^c.  It would be
| the height of folly to say that we could talk about the probability
| of A but not of A^c.  Thus we shall demand that whenever A is in $A$
| so is A^c.
|
| We have thus arrived at the conclusion that $A$
| should be a nonempty collection of subsets of !W!
| having the following properties:
|
|     1.  If A is in $A$ so is A^c.
|
|     2.  If A and B are in $A$ so are A |_| B and A |^| B.
|
| An easy induction argument shows that
| if A_1, A_2, ..., A_n are sets in $A$
| then so are:
|
|     |_| (i = 1 to n) A_i
|
| and
|
|     |^| (i = 1 to n) A_i.
|
| Here we use the shorthand notation:
|
|     |_| (i = 1 to n) A_i  =  A_1 |_| A_2 |_| ... |_| A_n
|
| and
|
|     |^| (i = 1 to n) A_i  =  A_1 |^| A_2 |^| ... |^| A_n.
|
| Also, since A |^| A^c = {} and A |_| A^c = !W!, we see
| that both the empty set {} and the set !W! must be in $A$.
|
| Hoel, Port, Stone, 'Probability Theory', pp. 6-7.
|
| Hoel, P.G., Port, S.C., & Stone, C.J.,
|'Introduction to Probability Theory',
| Houghton Mifflin, Boston, MA, 1971.

PAS. Note 3

| 1.2.  Probability Spaces (cont.)
|
| A nonempty collection of subsets of a given set !W! that is closed under
| finite set theoretic operations is called a 'field of subsets' of !W!.
| It therefore seems we should demand that $A$ be a field of subsets.
| It turns out, however, that for certain mathematical reasons just
| taking $A$ to be a field of subsets of !W! is insufficient.
| What we will actually demand of the collection $A$ is
| more stringent.  We will demand that $A$ be closed
| not only under finite set theoretic operations
| but under countably infinite set theoretic
| operations as well.  In other words if
| {A_n}, n >= 1, is a sequence of sets
| in $A$, we will demand that:
|
|    |_| (n = 1 to oo) A_n  is in  $A$
|
| and
|
|    |^| (n = 1 to oo) A_n  is in  $A$.
|
| Here we are using the shorthand notation:
|
|    |_| (n = 1 to oo) A_n  =  A_1 |_| A_2 |_| ...
|
| to denote the union of all the sets of the sequence, and:
|
|    |^| (n = 1 to oo) A_n  =  A_1 |^| A_2 |^| ...
|
| to denote the intersection of all the sets of the sequence.
|
| A collection of subsets of a given set !W! that is closed
| under countable set theoretic operations is called a !s!-field
| of subsets of !W!.  (The !s! [sigma] is put in to distinguish
| such a collection from a field of subsets.)  More formally we
| have the following:
|
| Definition 1.
|
| A nonempty collection of subsets $A$ of a set !W!
| is called a !s!-field of subsets of !W! provided
| that the following two properties hold:
|
|    1.  If A is in $A$, then A^c is also in $A$.
|
|    2.  If A_n is in $A$, n = 1, 2, ..., then:
|
|           |_| (n = 1 to oo) A_n
|
|        and
|
|           |^| (n = 1 to oo) A_n
|
|        are both in $A$.
|
| Hoel, Port, Stone, 'Probability Theory', p. 7.
|
| Hoel, P.G., Port, S.C., & Stone, C.J.,
|'Introduction to Probability Theory',
| Houghton Mifflin, Boston, MA, 1971.
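
For a finite !W! the countable operations reduce to finite ones, so Definition 1
can be checked mechanically.  The Python sketch below (mine, not from Hoel, Port,
and Stone) tests whether a collection of subsets of a finite !W! is closed under
complements and under pairwise unions and intersections;  the power set passes,
while a collection missing a complement fails.

    from itertools import combinations

    def is_sigma_field(W, A):
        """Definition 1 on a finite W, where countable closure reduces to
        closure under complements and pairwise unions and intersections."""
        A = set(A)
        closed_compl = all(W - a in A for a in A)
        closed_union = all(a | b in A and a & b in A for a in A for b in A)
        return closed_compl and closed_union

    W = frozenset({1, 2, 3})
    power_set = {frozenset(c) for r in range(len(W) + 1)
                              for c in combinations(W, r)}
    print(is_sigma_field(W, power_set))                          # True
    print(is_sigma_field(W, {frozenset(), W, frozenset({1})}))   # False: {2, 3} is missing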

PAS. Note 4

| 1.2.  Probability Spaces (cont.)
|
| We now come to the assignment of probabilities to events.
| As was made clear in the examples of the preceding section,
| the probability of an event is a nonnegative real number.  For
| an event A, let P(A) denote this number.  Then 0 =< P(A) =< 1.
| The set !W! representing every possible outcome should, of course,
| be assigned the number 1, so P(!W!) = 1.
|
| In our discussion of Example 1 we showed that the probability of events
| satisfies the property that if A and B are any two disjoint events then
| P(A |_| B) = P(A) + P(B).  Similarly, in Example 2 we showed that if
| A and B are two disjoint intervals, then we should also require that:
|
|    P(A |_| B)  =  P(A) + P(B).
|
| It now seems reasonable in general to demand that if A and B are disjoint
| events then P(A |_| B) = P(A) + P(B).  By induction, it would then follow
| that if A_1, A_2, ..., A_n are any n mutually disjoint sets (that is, if
| A_i |^| A_j = {} whenever i =/= j), then:
|
|    P(|_| (i = 1 to n) A_i)  =  Sum (i = 1 to n) P(A_i).
|
| Actually, again for mathematical reasons, we will
| in fact demand that this additivity property hold
| for countable collections of disjoint events.
|
| Definition 2.
|
| A probability measure P on a !s!-field of
| subsets $A$ of a set !W! is a real-valued
| function having domain $A$ satisfying the
| following properties:
|
|    1.  P(!W!) = 1.
|
|    2.  P(A) >= 0 for all A in $A$.
|
|    3.  If A_n, n = 1, 2, 3, ..., are
|        mutually disjoint sets in $A$,
|        then:
|
|        P(|_| (n = 1 to oo) A_n)  =  Sum (n = 1 to oo) P(A_n).
|
| A probability space, denoted by (!W!, $A$, P),
| is a set !W!, a !s!-field of subsets $A$, and
| a probability measure P defined on $A$.
|
| Hoel, Port, Stone, 'Probability Theory', p. 8.
|
| Hoel, P.G., Port, S.C., & Stone, C.J.,
|'Introduction to Probability Theory',
| Houghton Mifflin, Boston, MA, 1971.
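
A minimal discrete example (my own, not from the text):  take !W! = {1, ..., 6}
for a fair die, let $A$ be all subsets of !W!, and set P(A) = |A|/6.  The Python
sketch below checks properties 1-3 of Definition 2, with countable additivity
reducing to finite additivity because !W! is finite.

    from fractions import Fraction
    from itertools import combinations

    W = frozenset(range(1, 7))                     # outcomes of a fair die
    A = [frozenset(c) for r in range(len(W) + 1) for c in combinations(W, r)]

    def P(event):
        """The uniform probability measure: P(A) = |A| / |W|."""
        return Fraction(len(event), len(W))

    assert P(W) == 1                               # property 1
    assert all(P(a) >= 0 for a in A)               # property 2
    # Property 3 on a disjoint pair of events, "even" and "one":
    even, one = frozenset({2, 4, 6}), frozenset({1})
    assert even & one == frozenset()
    assert P(even | one) == P(even) + P(one)       # 2/3 = 1/2 + 1/6
    print(P(even | one))                           # 2/3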

PAS. Probability And Statistics • Document History

The following material is excerpted from:

  • Hoel, P.G., Port, S.C., and Stone, C.J. (1971), Introduction to Probability Theory, Houghton Mifflin, Boston, MA.

Inquiry List

  1. http://web.archive.org/web/20061013221215/http://stderr.org/pipermail/inquiry/2003-June/000588.html
  2. http://web.archive.org/web/20070316233522/http://stderr.org/pipermail/inquiry/2003-June/000589.html
  3. http://web.archive.org/web/20070311001637/http://stderr.org/pipermail/inquiry/2003-June/000591.html
  4. http://web.archive.org/web/20070311001647/http://stderr.org/pipermail/inquiry/2003-June/000592.html

Ontology List

  1. http://web.archive.org/web/20071009000644/http://suo.ieee.org/ontology/msg04885.html
  2. http://web.archive.org/web/20071009000324/http://suo.ieee.org/ontology/msg04886.html
  3. http://web.archive.org/web/20071008110036/http://suo.ieee.org/ontology/msg04887.html
  4. http://web.archive.org/web/20071007170817/http://suo.ieee.org/ontology/msg04888.html

SEM. Program Semantics

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Algebraic Approaches to Program Semantics
|
| Preface
|
| In the 1930's, mathematical logicians studied the notion
| of "effective computability" using such notions as recursive
| functions, lambda calculus, and Turing machines.  The 1940's saw
| the construction of the first electronic computers, and the next 20
| years saw the evolution of higher-level programming languages in which
| programs could be written in a convenient fashion independent (thanks to
| compilers and interpreters) of the architecture of any specific machine.
| The development of such languages led in turn to the general analysis of
| questions of 'syntax', structuring strings of symbols which could count
| as legal programs, and 'semantics', determining the "meaning" of a
| program, for example, as the function it computes in transforming
| input data to output results.  An important approach to semantics,
| pioneered by Floyd, Hoare, and Wirth, is called 'assertion' semantics:
| given a 'specification' of which assertions ('preconditions') on input data
| should guarantee that the results satisfy desired assertions ('postconditions') on
| output data, one seeks a logical proof that the program satisfies its specification.
| An alternative approach, pioneered by Scott and Strachey, is called 'denotational'
| semantics:  it offers algebraic techniques for characterizing the denotation
| of (i.e., the function computed by) a program -- the properties of the
| program can then be checked by direct comparison of the denotation
| with the specification.
|
| This book is an introduction to denotational semantics.
| More specifically, we introduce the reader to two approaches to
| denotational semantics:  the 'order semantics' of Scott and Strachey
| and our own 'partially additive semantics'.  Moreover, we show how each
| approach may be applied both to the specification of the semantics of programs,
| including recursive programs, and to the specification of new data types from old.
| There has been a growing acceptance that 'category theory', a branch of abstract
| algebra, provides a perspicuous general setting for all these topics, and for
| many other algebraic approaches to program semantics as well.  Thus, an
| important aim of this book is to interweave the study of semantics
| with a completely self-contained introduction to a useful core
| of category theory, fully motivated by basic concepts of
| computer science.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.1.  Syntax and Semantics
|
| To specify a programming language we must specify its syntax and semantics.
| The 'syntax' of a programming language specifies which strings of symbols
| constitute valid programs.  A formal description of the syntax typically
| involves a precise specification of the alphabet of allowable symbols
| and a finite set of rules delineating how symbols may be grouped into
| expressions, instructions, and programs.  Most compilers for programming
| languages are implemented with 'syntax checking' whereby the first stage
| in compiling a program is to check its text to see if it is syntactically
| valid.  In practice, syntax must be described at two levels, for a human
| user through programming manuals and as a syntax-checking algorithm within
| a compiler or interpreter.
|
| "Semantics" is a technical word for "meaning".  A 'semantics' for
| a programming language explains what programs in that language mean.
| In more mathematical terms, semantics is a function whose input is a
| syntactically valid program and whose output is a description of the
| function computed by the program.
|
| There are different approaches to semantics.  We briefly introduce three:
| operational semantics, denotational semantics, and assertion semantics.
| We will give an example of an operational semantics in the next section.
| Assertion semantics will be further considered in Chapter 4.  Denotational
| semantics is a major concern of this book.
|
| Manes & Arbib, AAPS, pages 1-3.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.1.  Syntax and Semantics (cont.)
|
| 'Operational semantics' is the most intuitive for beginners with
| some programming experience, being the form of semantics described
| in most programming manuals.  To provide an operational semantics for
| a programming language, one invents an "abstract computer" and describes
| how programs "run" on this computer.  Usually, the semantics prescribes how
| the syntactic form of a program is to be interpreted as a (data-dependent)
| sequence of instructions.  Input data are then transformed as the program
| is run in sequence, instruction by instruction, branching and looping back
| on the basis of tests on current values of data.
|
| By contrast to operational semantics which traces all intermediate states
| in a computation, 'denotational semantics' focuses on input/output behavior
| and ignores the intermediate states.  Operational semantics provides more
| information on how to implement a programming language as long as the
| implementation environment resembles that of the abstract computer.
| For example, an operational semantics in which every computation is
| described as a serial sequence of state changes would be somewhat
| at odds with an implementation on a pipeline architecture which
| maximizes parallel computation.  An objective of denotational
| semantics is to avoid worry about details of implementation.
|
| A challenge posed by denotational semantics is to invent
| mathematical frameworks permitting the description of
| repetitive programming constructs (i.e., "loops")
| without explicit reference to intermediate states.
| The "partially additive semantics" of Section 1.5
| introduces a power-series representation for
| computed functions which, in part, expresses
| programming constructs in terms of operations
| that manipulate power series.  Other approaches
| to denotational semantics, to be discussed
| in Part 2, use partially ordered sets
| and metric spaces for their
| mathematical underpinnings.
|
| Manes & Arbib, AAPS, pages 3-4.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.1.  Syntax and Semantics (cont.)
|
| Before discussing assertion semantics we must first introduce assertions.
| An 'assertion' is a statement about the program state which is either true
| or false.  As an example, consider the (hopefully transparent) program 1.
|
| 1.  INPUTS:   X
|     OUTPUTS:  Y
|     {X >= 0}
|     BEGIN
|        (a block of code representing
|         an algorithm for Y := X^½ )
|     END
|     {X = Y * Y}.
|
| The assertions are shown enclosed by braces, "{" and "}".  They are not
| part of the program, but assert what properties 'should' hold true when
| the assertion is encountered in executing the program.  A program is
| 'correct' if indeed the satisfaction of all initial assertions about
| the input data guarantees the truth of all assertions encountered
| later on.
|
| One could attempt to design a programming language with assertions in mind.
| All built-in functions would come with associated assertions and for each
| programming construct there would be rules explaining how to find suitable
| assertions for the overall construct from the pieces of the construct and
| their assertions.  Ideally, every program would automatically be strewn
| with assertions with the following beneficial effects.  The assertions
| would usefully document the program, and it would be possible to write
| software that could automatically scan the assertions to detect bugs
| and check for correctness.
|
| In the next section we introduce a small fragment of Pascal giving a
| formal syntax and an operational semantics.  In Section 1.3, however, we
| introduce a functional programming fragment that makes no use of identifiers
| or assignment statements.  Here, the concept of "state" (which in Section 1.2
| means the values stored by the identifiers) would require major overhaul before
| one could give an operational semantics or an assertion semantics.  It is hard
| to create general semantic theories devoid of built-in assumptions about the
| programming languages to which they apply!
|
| Manes & Arbib, AAPS, pages 4-5.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
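
Program 1 can be mimicked directly in Python, writing the assertions as assert
statements (a sketch of mine, not from Manes & Arbib;  here the block of code
is the integer square root, and the input is assumed to be a perfect square so
that the postcondition X = Y * Y can hold exactly over the integers).

    import math

    def program_1(X):
        assert X >= 0            # precondition  {X >= 0}
        Y = math.isqrt(X)        # a block of code computing Y := X^(1/2)
        assert X == Y * Y        # postcondition {X = Y * Y}; holds when X is a perfect square
        return Y

    print(program_1(49))         # 7, with both assertions satisfied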

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.2.  A Simple Fragment of Pascal
|
| In this section we describe an abbreviated version of Pascal.
| Although this limited version has full computing power with
| regard to functions whose inputs and outputs are natural
| numbers, this is a tangential point -- the main objective
| of this section is to illustrate how to present a formal
| syntax as well as an operational semantics for a simple
| programming language.  The reader should observe that
| the level of precision of the operational semantics
| is such that it becomes fairly clear how to write
| a compiler or interpreter for the Pascal fragment,
| so that we accomplish more than an exercise in
| formalizing what we already knew.
|
| The complete syntax of our Pascal fragment is given in Table 1.
|
| Table 1.  The Syntax of a Pascal Fragment
| ---------------------------------------------------
|
| Alphabet of Symbols
|
|    Digits:  0, 1, ..., 9
|
|    Letters:  a, b, ..., z
|
|    Parentheses:  ( , )
|
|    Boolean Truth Values:  T, F
|
|    Boolean Connectives:  ~, v, &
|
|    Comparisons:  =, =/=, <, =<, >, >=
|
|    Arithmetic Functions:  +, -, *, ÷
|
|    Statement Constructors:
|
|       :=, ;, begin, end, if, then, else, while, do, repeat, until
|
| The set of 'expressions' is defined by:
|
|    Given Outright:
|
|       Any nonempty string of digits (called a 'numeral'),
|       a letter followed by a (possibly empty) string of
|       digits and letters (called an 'identifier').
|
|    Building Rules:
|
|       If D, E are expressions
|       so are (D + E), (D - E), (D * E), (D ÷ E).
|
| The set of 'tests' is defined by:
|
|    Given Outright:
|
|       T, F,
|
|       D = E, D =/= E, D < E, D =< E, D > E, D >= E,
|       for any two expressions D, E.
|
|    Building Rules:
|
|       If B, C are tests so are (~B), (B v C), (B & C).
|
| The set of 'statements' is defined by:
|
|    Given Outright:
|
|       I := E, if I is an identifier and E is an expression.
|
|    Building Rules:
|
|       If S_1, ..., S_n are statements (n >= 0), so is:
|
|          begin S_1; ...; S_n end.
|
|       If B is a test and R, S are statements, so are:
|
|          (if B then R else S),
|
|          (while B do S),
|
|          (repeat S until B).
|
| ---------------------------------------------------
|
| Here, the colons, commas, and periods are 'not' among the 64 symbols
| in the alphabet.  Parentheses are used liberally to ensure that there
| is exactly one way to derive an expression, test, or statement using
| the building rules and beginning with those which are given outright.
| We do not give a formal proof of this here, but encourage the reader
| to explore this (see Exercise 1).  Three examples of expressions are:
|
|    ((a + 5) * 2),
|
|    572,
|
|    (cat + (dog + mouse)),
|
| whereas, according to our rules,
|
|    a + 5
|
| is not an expression.  An example of a statement is shown in (2).
|
| 2.  begin a := 5; (while (a > 0 & a =/= 6) do a := a - 1) end
|
| Notice that begin, while, do, and end are single symbols
| in the chosen alphabet and that there is no space symbol
| in the alphabet.  Normally, one displays a statement so
| as to be more readable by humans, for example, as in (3).
|
| 3.  begin
|       a := 5;
|       (while (a > 0 & a =/= 6) do a := a - 1)
|     end
|
| This is harmless since we obtain (2) from (3) by ignoring the
| aspects (in this case the vertical arrangement and the spaces)
| which are not expressible in the formal syntax.
|
| Manes & Arbib, AAPS, pages 5-6.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

Nota Bene.  In this transcription I found it too distracting
to mark the syntactic keywords in bold -- as #begin# or #end# --
in an attempt to emulate the authors' practice, so I must ask
the reader's indulgence in making the intelligent adjustments,
'mutatis mutandis', on reception.
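
One way to render the grammar of Table 1 in code is to build abstract syntax
trees whose constructors mirror the "given outright" and "building rules"
clauses.  The Python sketch below (my own rough rendering, not from the book;
it covers only a representative subset of the constructors) builds statement (2)
as such a tree.

    from dataclasses import dataclass
    from typing import List, Union

    # Expressions: numerals, identifiers, binary operations (+, -, *, ÷).
    @dataclass
    class Num:
        value: int

    @dataclass
    class Ident:
        name: str

    @dataclass
    class BinOp:
        op: str
        left: 'Expr'
        right: 'Expr'

    Expr = Union[Num, Ident, BinOp]

    # Tests: comparisons between expressions and Boolean combinations (& only).
    @dataclass
    class Compare:
        op: str
        left: Expr
        right: Expr

    @dataclass
    class And:
        left: 'Test'
        right: 'Test'

    Test = Union[Compare, And]

    # Statements: assignment, begin ... end, while ... do (a subset).
    @dataclass
    class Assign:
        ident: str
        expr: Expr

    @dataclass
    class Begin:
        body: List['Stmt']

    @dataclass
    class While:
        test: Test
        body: 'Stmt'

    Stmt = Union[Assign, Begin, While]

    # Statement (2):  begin a := 5; (while (a > 0 & a =/= 6) do a := a - 1) end
    stmt2 = Begin([
        Assign('a', Num(5)),
        While(And(Compare('>', Ident('a'), Num(0)),
                  Compare('=/=', Ident('a'), Num(6))),
              Assign('a', BinOp('-', Ident('a'), Num(1)))),
    ])
    print(stmt2)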

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.2.  A Simple Fragment of Pascal (cont.)
|
| We assume that the reader already has a good idea of what the semantics of
| our fragment should be.  (For example, the algorithm described by (2) always
| terminates with identifier 'a' storing the value 0.)  A formal operational
| semantics is as follows.
|
| We imagine an abstract computer with one memory location set aside for each
| identifier.  Each location stores a single value, where a 'value' is either a
| natural number or the symbol _|_ meaning "as yet undefined".  At any time, only
| finitely many locations store a number.  The effect of executing a statement is
| to assign numerical values to identifiers by evaluating numerical expressions
| according to an algorithm controlled by tests and conditional and repetitive
| constructs.  (Here we ignore overflow:  our numerical operations +, -, *, ÷, for
| addition, subtraction, multiplication, and division compute exact integer values
| no matter how large.)  The only thing that can "go wrong" is that we might attempt
| to evaluate an expression containing identifiers for which no numerical values have
| been assigned.  When this happens we wish to abort the computation and so we create
| a special 'abort state' !w! [omega].  Every other state is a 'normal state' which we
| define to be a function !s! [sigma] from the set of all identifiers to the set of all
| values, with the requirement that !s!(I) =/= _|_ for only finitely many identifiers I.
| The 'initial state' is the function !t! [tau] which assigns _|_ to each identifier.
|
| The operational semantics of a statement S will be defined as
| a 'computation sequence' of states beginning with the initial
| state !t! and taking one of the forms (4a), (4b), or (4c):
|
| 4a.  !t!, !s!_1, ..., !s!_n, !w!  (n > 0, all !s!_i =/= !w!);
|
| 4b.  !t!, !s!_1, ..., !s!_n, ...  (all !s!_i =/= !w!);
|
| 4c.  !t!, !s!_1, ..., !s!_n       (n >= 0, all !s!_i =/= !w!).
|
| In (4a), 'computation aborts'.
|
| In (4b), 'computation is nonterminating'.
|
| In (4c), the computation 'terminates' in a normal state !s!_n.
|
| Manes & Arbib, AAPS, pages 6-7.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.2.  A Simple Fragment of Pascal (cont.)
|
| We now turn to the details of how to associate a definite sequence of
| states to a statement.  Here the description of Table 1 provides a guide.
| (We substitute the more mathematical terms "basis step" for "given outright"
| and "inductive step" for "building rules" from now on.)  We must first assign
| appropriate values to expressions and tests (a process that depends on the state).
|
| 5.  The 'value' [!s!, E] of expression E in normal state !s!
|     is defined inductively as follows.
|
|     Basis step.
|
|     If E is a numeral, [!s!, E] is the
|     usual base-10 natural number value
|     of E (with leading zeros ignored).
|
|     If E is an identifier, [!s!, E] = !s!(E).
|
|     Inductive step.
|
|     If either [!s!, D] = _|_ or [!s!, E] = _|_ then
|
|     [!s!, (D+E)] = [!s!, (D-E)] = [!s!, (D*E)] = [!s!, (D÷E)] = _|_.
|
|     Else
|
|     [!s!, (D + E)]  =  [!s!, D]  +  [!s!, E]
|
|     [!s!, (D - E)]  =  [!s!, D] -°- [!s!, E]
|
|     [!s!, (D * E)]  =  [!s!, D]  ·  [!s!, E]
|
|     [!s!, (D ÷ E)]  =  [!s!, D] div [!s!, E]
|
|     are the expected natural-number arithmetic operations
|     so that x -°- y means the maximum of 0 and x - y, and
|     x div y is the largest integer =< x/y, that is, the
|     unique integer q with x = qy + r, where the remainder
|     r satisfies 0 =< r < y.
|
| (Here we have relied on the earlier-stated fact that there
|  is only one way to decouple an expression;  if there were
|  more than one way the above rules might assign values to
|  expressions ambiguously.)
|
| To illustrate how (5) is used, suppose that !s!(a) = 3.  Then:
|
| [!s!, ((a + 5) * 2)]
|
| =  [!s!, (a + 5)] · [!s!, 2]
|
| =  ([!s!, a] + [!s!, 5]) · [!s!, 2]
|
| =  (3 + 5)(2)
|
| =  16
|
| Manes & Arbib, AAPS, pages 7-8.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
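
Definition (5) translates nearly line for line into code.  The Python sketch
below (mine, not the authors';  expressions are written as nested tuples to
keep the sketch self-contained) evaluates an expression in a normal state,
with None standing for _|_, and reproduces the worked example above:
[!s!, ((a + 5) * 2)] = 16 when !s!(a) = 3.

    def value(state, E):
        """[state, E]: the value of expression E in the normal state `state`.
        Expressions are numerals (int), identifiers (str), or triples
        (op, D, F) with op in {'+', '-', '*', '÷'};  None stands for _|_."""
        if isinstance(E, int):                   # numeral
            return E
        if isinstance(E, str):                   # identifier
            return state.get(E)                  # None (_|_) if unassigned
        op, D, F = E
        d, f = value(state, D), value(state, F)
        if d is None or f is None:               # _|_ propagates
            return None
        if op == '+': return d + f
        if op == '-': return max(0, d - f)       # monus: the maximum of 0 and d - f
        if op == '*': return d * f
        if op == '÷': return d // f              # natural-number division
        raise ValueError(op)

    state = {'a': 3}
    print(value(state, ('*', ('+', 'a', 5), 2)))   # 16
    print(value(state, ('+', 'b', 1)))             # None: b stores _|_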

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.2.  A Simple Fragment of Pascal (cont.)
|
| Tests are evaluated in a similar way:
|
| 6.  The 'truth value' [!s!, B] of test B in normal state !s!
|     is defined inductively as follows.
|
|     Basis step.
|
|     [!s!, T]  =  T,
|
|     [!s!, F]  =  F.
|
|     [!s!, D = E] is _|_ if either [!s!, D] or [!s!, E] is _|_
|
|     else is T or F accordingly as [!s!, D] = [!s!, E] or [!s!, D] =/= [!s!, E].
|
|     [!s!, D =/= E], [!s!, D < E], [!s!, D =< E], [!s!, D > E], [!s!, D >= E]
|
|     are defined similarly.
|
|     Inductive step.
|
|     Let ~ (not), v (or), & (and) have their usual meanings on the Boolean
|     truth values T, F (T for "true", F for "false") so that, for example,
|     ~T = F, ~F = T, F & T = F, and so on.  Then:
|
|     [!s!, (~B)] is _|_ if [!s!, B] is _|_ else is ~[!s!, B].
|
|     [!s!, (B v C)] is _|_ if either of [!s!, B] or [!s!, C] is _|_
|
|                           else is [!s!, B] v [!s!, C].
|
|     [!s!, (B & C)] is _|_ if either of [!s!, B] or [!s!, C] is _|_
|
|                           else is [!s!, B] & [!s!, C].
|
| Manes & Arbib, AAPS, page 8.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.2.  A Simple Fragment of Pascal (cont.)
|
| As a prelude to defining the semantics of statements, we aid the
| reader's intuition with flowschemes for the programming constructs
| in Table 7.
|
| Table 7.  Flowschemes for Programming Constructs
| -------------------------------------------------------------
|
| Assignment Statement.  I := E
|
|           o---------o
|      ---->| I := E  |---->
|           o---------o
|
|
| Composition.  begin S_1; ...; S_n end
|
|           o---------o    o---------o    o---------o
|      ---->|   S_1   |--->|   ...   |--->|   S_n   |---->
|           o---------o    o---------o    o---------o
|
|
| Conditional.  (if B then R else S)
|
|                T         o---------o
|                o-------->|    R    |-------->o
|               / \        o---------o         |
|              /   \                           |
|      ------>o  B  o                          o------>
|              \   /                           |
|               \ /        o---------o         |
|                o-------->|    S    |---------o
|                F         o---------o
|
|
| Repetitive Constructs.
|
| (while B do S)
|
|         o<-----------------------------------o
|         |                                    ^
|         |      T         o---------o         |
|         |      o-------->|    S    |-------->o
|         |     / \        o---------o
|         v    /   \
|      ------>o  B  o
|              \   /
|               \ /
|                o-------------->
|                F
|
|
| (repeat S until B)
|                                              T
|                                              o--------->
|                                             / \
|                          o---------o       /   \
|      ------------------->|    S    |----->o  B  o
|                ^         o---------o       \   /
|                |                            \ /
|                o<----------------------------o
|                                              F
|
| -------------------------------------------------------------
|
| Manes & Arbib, AAPS, pages 8-9.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.2.  A Simple Fragment of Pascal (cont.)
|
| The principal semantic definition is:
|
| For any normal state !s! the 'computation sequence' of S starting at !s!
| is a state sequence <!s!, S> of one of the three forms (8a), (8b), (8c):
|
| 8a.  !s!, !s!_1, ..., !s!_n, !w!  (n >= 0, all !s!_i =/= !w!);
|
| 8b.  !s!, !s!_1, ..., !s!_n, ...  (all !s!_i =/= !w!);
|
| 8c.  !s!, !s!_1, ..., !s!_n       (n >= 0, all !s!_i =/= !w!).
|
| (with interpretations similar to those of (4a), (4b), (4c))
| defined inductively as follows.
|
| 9.  Basis step.
|
|                       (  !s!, !w!    if [!s!, E] = _|_
|     <!s!, I := E>  =  <
|                       (  !s!, !s!_1  else,
|
|     where
|
|                       (  !s!(J)      if J =/= I
|     !s!_1(J)       =  <
|                       (  [!s!, E]    if J  =  I.
|
| This is the expected meaning.  Identifier I is assigned the
| value obtained by evaluating E, as long as this is possible,
| and other identifiers are left unchanged.
|
| Inductive step.
|
| 10.  Composition.
|
|      Define
|
|      <!s!, begin end>  =  !s!
|
|      and define
|
|      <!s!, begin S_1 end>  =  <!s!, S_1>.
|
|      Proceeding inductively on the number of statements, assume that
|
|      <!s!, begin S_2; ...; S_k+1 end>
|
|      has been defined for every normal state !s!
|      and every k statements S_2, ..., S_k+1.
|
|      Then
|
|      <!s!, begin S_1; ...; S_k+1 end>
|
|      is defined as follows.
|
|      It is defined to be
|
|      <!s!, S_1>
|
|      if "S_1 fails to terminate normally starting at !s!", that is,
|      if <!s!, S_1> has one of the forms (8a), (8b).  Otherwise,
|
|      <!s!, S_1>  =  !s!, !s!_1, ..., !s!_n
|
|      as in (8c), so we define
|
|      <!s!, begin S_1; ...; S_k+1 end>
|
|      to be the sequence
|
|      !s!, !s!_1, ..., !s!_n-1, <!s!_n, begin S_2; ...; S_k+1 end>.
|
| In short, we form the sequence obtained if each S_i+1 begins where the
| previous S_i leaves off, save that this cannot continue if computation
| aborts or one of the S_i did not terminate.
|
| 11.  Conditional.
|
|                                      (  <!s!, !w!>  if [!s!, B] = _|_
|      <!s!, (if B then R else S)>  =  <  <!s!,  R >  if [!s!, B] =  T
|                                      (  <!s!,  S >  if [!s!, B] =  F
|
| Repetitive constructs.
|
| The computation sequence <!s!, (while B do S)> is given by (12).
|
| 12.  While-do statement.
|
|                                (  !s!, !w!  if  [!s!, B] = _|_
|      <!s!, (while B do S)>  =  <
|                                (  !s!       if  [!s!, B] =  F
|
|      <!s!, (while B do S)>  =  <!s!, S>     if  [!s!, B] =  T
|                                             and <!s!, S> has one
|                                             of the forms (8a, 8b).
|
|      <!s!, (while B do S)>  =  !s!, !s!_1, ..., !s!_n-1, <!s!_n, (while B do S)>
|
|                                             if  [!s!, B] =  T
|                                             and <!s!, S> has the form
|                                             !s!, !s!_1, ..., !s!_n of (8c).
|
| This sequence may, of course, fail to terminate.
| We leave it to the reader to formulate a similar
| definition for <!s!, (repeat S until B)>.
|
| 13.  The 'computation sequence' <S> of the statement S is <!t!, S>,
|      where !t! is the initial state mapping each identifier to _|_.
|      The operational semantics of our Pascal fragment is complete.
|
| Manes & Arbib, AAPS, pages 9-11.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
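
A sketch of the whole operational semantics (9)-(12) fits on a page of Python.
The conventions are mine:  states are dicts from identifiers to numbers, the
string OMEGA marks the abort state, statements are tagged tuples, and the
evaluators eval_expr and eval_test (returning None for _|_) are assumed given,
in the spirit of the sketches after Notes 7 and 8.  A step bound ("fuel")
stands in for the genuinely infinite sequences of form (8b), so nontermination
shows up as a truncated sequence rather than a hung interpreter.

  OMEGA = 'omega'

  def compute(state, stmt, eval_expr, eval_test, fuel=1000):
      """Return the computation sequence <state, stmt> as a list of states."""
      kind = stmt[0]
      if kind == 'assign':                             # clause (9):  I := E
          _, ident, expr = stmt
          value = eval_expr(state, expr)
          if value is None:
              return [state, OMEGA]
          new_state = dict(state)
          new_state[ident] = value
          return [state, new_state]
      if kind == 'begin':                              # clause (10): composition
          stmts = stmt[1]
          if not stmts:
              return [state]
          seq = compute(state, stmts[0], eval_expr, eval_test, fuel)
          if seq[-1] == OMEGA:
              return seq
          rest = compute(seq[-1], ('begin', stmts[1:]), eval_expr, eval_test, fuel)
          return seq[:-1] + rest
      if kind == 'if':                                 # clause (11): conditional
          _, test, r, s = stmt
          b = eval_test(state, test)
          if b is None:
              return [state, OMEGA]
          return compute(state, r if b else s, eval_expr, eval_test, fuel)
      if kind == 'while':                              # clause (12): while-do
          _, test, body = stmt
          seq = [state]
          for _ in range(fuel):
              b = eval_test(seq[-1], test)
              if b is None:
                  return seq + [OMEGA]
              if not b:
                  return seq
              body_seq = compute(seq[-1], body, eval_expr, eval_test, fuel)
              seq = seq[:-1] + body_seq
              if seq[-1] == OMEGA:
                  return seq
          return seq                                   # fuel exhausted: read as (8b)
      raise ValueError('unknown statement: %r' % (stmt,))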

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.3.  A Functional Programming Fragment
|
| In his provocative 1977 Turing Award Lecture, John Backus expressed concern
| that many programming languages were syntactically fat and unwieldy but
| semantically lean and inexpressive.  In reaction, he proposed a new
| class of languages, the 'functional programming languages', in
| which a "program" is a symbolic input/output function whose
| inputs are not given names:  there are no identifiers,
| assignments, or references of any kind to intermediate
| storage and hence there are no side effects (such as
| clashes between local and global variable identifiers)
| to concern the programmer.  In this section we present
| a simple functional programming fragment whose principal
| data structures are trees similar to the "s-expressions"
| of the programming language Lisp but many of whose function
| constructors are patterned after those emphasized by Backus.
| Because we delay introduction of repetitive constructs into this
| fragment until our later discussion of recursion in Chapter 5, the
| version of this section temporarily fails to have full computing power.
|
| We shall call our language FPF for "Functional Programming Fragment".
| The syntax of FPF is given in Table 1.  Here the colons and periods
| are not among the 32 symbols of the alphabet.
|
| Table 1.  The Syntax of FPF
| -------------------------------------------------------------
|
| Alphabet of Symbols
|
|    Digits:  0 1 ... 9
|
|    Parentheses:  ( ) < >
|
|    Atomic Functions:  id  head  tail  +  -  *  ÷  num  =
|
|    Function Constructors:  !=!  o  if then else  [ ]  !a!  /
|
| The set !DTN! of 'dynamic trees of numerals' (DTN's for short) is defined by:
|
|    Basis Step.
|
|       A 'numeral' (i.e., a nonempty string of digits) is a DTN.
|
|    Inductive Step.
|
|       If t_1 ... t_k are DTN's (k >= 0)
|
|       then <t_1, ..., t_k> is a DTN.
|
| The set of 'functions' is defined by:
|
|    Basis Step.
|
|       An atomic function symbol is a function.
|
|       If t is a DTN then !=!t is a function.
|
|    Inductive Step.
|
|       If f_1 ... f_k are functions (k >= 1)
|
|       then so are (f_k o ... o f_1) and [f_1, ..., f_k].
|
|       If p, f, g are functions then so is (if p then f else g).
|
|       If f is a function then so are (!a!f) and (/f).
|
| -------------------------------------------------------------
|
| The reader need not feel uneasy if Table 1 fails to explain how FPF
| works, since that is the job of semantics:  syntax has no meaning!
|
| Manes & Arbib, AAPS, pages 11-12.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.3.  A Functional Programming Fragment (cont.)
|
| We will give a denotational semantics for FPF.  We begin by discussing !DTN!,
| whose inductive definition is given in Table 1, which is the set of DTN's,
| that is, 'dynamic trees of numerals' (note the different typeface for the
| set and for the generic name of elements of the set).  This set includes
| lists of numerals, namely, the DTN's of form <n_1, ..., n_k>, where n_i
| are numerals.  The case k = 0 gives the 'empty list' < > as a DTN.
| Similarly, we can have a list of lists such as <<5, 17>, < >, <035>>,
| the list whose first entry is the list <5, 17>, whose second entry is
| the empty list, and whose third entry is the length-1 list consisting
| of the numeral 035.  Other examples are less homogeneous, for example,
| <05, << >>, <2, 3>>.  An m x n matrix of numerals (a_ij), usually
| visualized as a rectangular array with the numeral a_ij in row i
| and column j, may conveniently be coded as the DTN:
|
| 2.  <<a_11, ..., a_1n>, ..., <a_m1, ..., a_mn>>.
|
| The input to a matrix multiplication algorithm may then be coded as
| a length-2 list whose entries are matrices as in (2).  These examples
| suggest the ease with which DTN's model complex inputs and outputs.
|
| Each DTN has a unique 'derivation tree' describing how to build it
| using the basis and inductive steps in the definition of !DTN! of
| Table 1.  For example, <1, <<0, 10>, < >>> has derivation tree
|
|      0    10
|       o   o
|        \ /
|         o   o
|      1   \ /
|       o   o
|        \ /
|         @
|
| where each node [= @ or o] indicates a list whose entries are the
| subtrees branching from that node (read in left-to-right order).
| A node without branches thus indicates the empty list.  The node
| at the root [= @] of the tree indicates the lists represented by
| the whole tree.  It is clear that such derivation trees are in
| natural one-to-one correspondence with the elements of !DTN!
| and, indeed, that the list notation is just a convenient
| way to code such a tree as a string.  This explains the
| term dynamic 'tree' of numerals.  "Dynamic" is in the
| same sense as in the term "dynamic array" in Pascal,
| meaning that the lengths and shapes of DTN's are
| not prespecified in a "declaration".
|
| Manes & Arbib, AAPS, pages 12-13.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

Nota Bene.  The way I remember it, computicians initially fell into
the habit of drawing their trees upside down, not because they were
more enamored of genealogists than of botanists in their phyllogeny,
but because of a need to print trees out on the old teletypewriters,
which scrolled the papyrus inexorably upward with neither piety nor
wit in their more than purely symbolic denaturing of nature.  But I,
having been raised in a school of graph theory that accords its due
respect to the light of nature, am compelled to take liberties with
the authors' rendering of trees, and to reset their matters upright.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.3.  A Functional Programming Fragment (cont.)
|
| In our denotational semantics of FPF, the semantics of
| each syntactic function will be such as to transform
| inputs in !DTN! to outputs which are again in !DTN!.
| We pause briefly to note what kind of function
| constitutes such a transformation.
|
| 3.  Definitions.  Let X, Y be sets.  A 'partial function' from X to Y is
|     specified by providing a subset A of X and a function mapping each
|     element of A to a unique element of Y.  We say X is the 'domain',
|     Y is the 'codomain', and A is the 'domain of definition'.
|
| (Other authors use "domain" for our "domain of definition".
|  Our terminology follows the conventions of category theory
|  as discussed in the next chapter;  see Definition 2.1.1.)
|
| [A shorter name for "domain of definition" is "corange".]
|
| Our most common notation will be to assign a symbolic
| name such as 'f' to a partial function.  We write
|
|         f
| "let X ---> Y be a partial function"
|
| to mean f is a partial function from X to Y.
| We may also write f : X -> Y in place of
|
|         f
|      X ---> Y
|
| In either case, we use f(x) for the value assigned by f to
| each x in its domain of definition, which we denote by DD(f).
| If x is in X but x is not in DD(f) we say "f(x) is undefined".
|
| 4.  The set of all partial functions from X to Y will be
|     written Pfn(X, Y).  The "partial" in partial function
|     means "partially defined".  Paradoxically, an important
|     special case of a partial function f : X -> Y occurs
|     when DD(f) = X.  This is just a function from X to Y.
|     For emphasis, we call such f a 'total function' from
|     X to Y.
|
| 5.  We relate this to program semantics in general before returning to !DTN!.
|     Let X be an input set and let Y be an output set.  A given algorithm
|     with input x in X may fail to terminate.  Let A be the subset of X
|     consisting of those x for which the algorithm terminates if x is
|     the input.  The denotational semantics of the algorithm is the
|     partial function f : X -> Y with DD(f) = A, and where f(x) is
|     the output at termination when x is the input.  (In 1.4.5
|     below we will consider a computation environment in which
|     (5) requires modification.)
|
| 6.  The set of all total functions from X to Y will be written Tot(X, Y).
|
| Manes & Arbib, AAPS, pages 13-14.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
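
A convenient way to model Pfn(X, Y) in a programming language -- my convention
for these notes, not the authors' -- is to let a partial function be an
ordinary function that returns None outside its domain of definition,
presuming None never occurs as a genuine output.  A total function is then
one that never returns None.

  def reciprocal(x):
      # DD(reciprocal) = {x : x != 0};  "reciprocal(0) is undefined".
      return None if x == 0 else 1.0 / x

  def dd(f, universe):
      """The domain of definition DD(f), restricted to a finite sample of X."""
      return {x for x in universe if f(x) is not None}

  # Example:  dd(reciprocal, range(-2, 3)) == {-2, -1, 1, 2}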

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.3.  A Functional Programming Fragment (cont.)
|
| If f in Pfn(X, Y), g in Pfn(Y, Z) arise as in (5) we may
| think of f, g as the computations of subalgorithms which
| can be chained together setting the output of f as the
| input of g to produce a net output in Z from an input
| in X.  The formal operation involved is as follows.
|
| 7.  Definition.  For f in Pfn(X, Y) and g in Pfn(Y, Z)
|     their 'composition' gf in Pfn(X, Z) is defined by:
|
|     DD(gf)   =  {x in X : x in DD(f) and f(x) in DD(g)},
|
|     (gf)(x)  =  g(f(x))  for x in DD(gf).
|
| Notice that gf is total when f and g are.
|
| The functions studied in first-semester calculus are
| partial functions from the set of reals to itself (e.g.,
| DD(1/x) = {x : x =/= 0}, DD(arcsin x) = {x : -1 =< x =< 1},
| etc.).  The "chain rule" refers to the composition of (7),
| being a rule for the derivative of gf.  Composition of
| functions is sometimes called "chaining" because the
| output of one function is the input to the next,
| creating a chain of two links.  Longer chains
| arise in (12) below.
|
| Manes & Arbib, AAPS, page 14.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
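
Definition (7) translates directly into the None-for-undefined convention of
the previous note.  A minimal sketch, with two calculus-style partial
functions as the test case:

  import math

  def compose(g, f):
      """The composite gf of (7):  x lies in DD(gf) iff x lies in DD(f)
      and f(x) lies in DD(g);  then (gf)(x) = g(f(x))."""
      def gf(x):
          y = f(x)
          if y is None:              # x not in DD(f)
              return None
          return g(y)                # None exactly when f(x) not in DD(g)
      return gf

  def sqrt_(x):
      return math.sqrt(x) if x >= 0 else None     # DD = {x : x >= 0}

  def reciprocal(x):
      return None if x == 0 else 1.0 / x          # DD = {x : x != 0}

  h = compose(reciprocal, sqrt_)    # h(x) = 1/sqrt(x),  DD(h) = {x : x > 0}
  # h(4.0) == 0.5,  h(0.0) is None,  h(-1.0) is None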

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.3.  A Functional Programming Fragment (cont.)
|
| We turn now to the semantics for FPF by associating a partial function
| in Pfn(!DTN!, !DTN!) to each syntactic function.  To keep the notation
| as simple as possible we will denote the semantics of the function f
| by f: and so we will write f: t for the value f: assigns to the DTN t.
| Thus, the presence of the colon (which is not in the alphabet of
| Table 1) indicates semantics.
|
| In describing a specific partial function f, if a formula for
| f: t is given for t of a particular form without further comment
| our convention is that f: is not defined for other t.  Sometimes,
| of course, DD(f) is sufficiently complicated for a more careful
| description to be necessary.
|
| We begin with the basis-step functions in Table 1.
|
| 8.  id: is the identity function,
|
|     id: t  =  t    for all t in !DTN!.
|
|     head returns the first element of a list and
|     tail drops the first element of a list as follows:
|
|     head: <t_1, ..., t_k>  =  t_1                (k >= 1),
|
|     tail: <t_1, ..., t_k>  =  <t_2, ..., t_k>    (k >= 1).
|
| Thus, we cannot make heads or tails of the empty list or numerals.
|
| 9.  The arithmetic functions +, -, *, ÷ require an input of the form <m, n>
|     where m, n are numerals.  The meaning of the operations is then the same
|     as in Pascal as described in 1.2.5.  Thus,
|
|     +: <m, n>  =  m  +  n
|
|     -: <m, n>  =  m -°- n
|
|     *: <m, n>  =  m  ·  n
|
|     ÷: <m, n>  =  m div n
|
| where, on the right-hand sides, the numerals m, n represent numbers
| in the usual base-10 way and the numerical results are represented
| as numerals without leading zeros.
|
| 10.  The 'numeral' function num is defined by
|
|                 (  << >>  if t is a numeral,
|      num: t  =  <
|                 (   < >   else.
|
|      Similarly, the 'equality' function = takes an input of
|      the form <t, u> where t, u in !DTN! are arbitrary and
|
|                     (  << >>  if t  =  u,
|      =: <t, u>  is  <
|                     (   < >   if t =/= u.
|
| Here we have coded the truth values as DTN's by representing
| T as << >> and F as < >.  This is analogous to the trick used
| in set theory (mathematicians sometimes adopt the view that
| all of mathematics may be derived from set theory) to define
| natural numbers in terms of sets, wherein 0 is defined as
| the empty set Ø, 1 is defined as the one-element set {Ø},
| 2 is defined as the two-element set {0, 1} = {Ø, {Ø}},
| and n = {0, ..., n-1} in general.  Using lists instead of
| sets, that is, by substituting < for { and > for }, the same
| constructions are available in !DTN!.  We could have used the
| numerals 0 and 1 for F and T but it seemed more desirable to
| use a convention that would apply to dynamic trees of objects
| other than numerals.  In fact, our convention is analogous to
| that used in the programming language Lisp, where the empty
| list Nil is used for the truth value F and where any other
| values may be interpreted as T.
|
| Manes & Arbib, AAPS, pages 14-15.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
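
The basis-step semantics (8)-(10) is easy to prototype.  My encoding, not the
authors':  a DTN is either a numeral, coded as a Python string of digits, or a
Python list of DTN's, and None again signals that a semantic function is
undefined at its argument.  How ÷ should behave when the divisor is 0 is not
settled in the excerpt, so I leave that case undefined.

  def is_numeral(t):
      return isinstance(t, str) and t.isdigit()

  def id_(t):                                       # (8) identity
      return t

  def head(t):                                      # (8) first entry of a list
      return t[0] if isinstance(t, list) and len(t) >= 1 else None

  def tail(t):                                      # (8) drop the first entry
      return t[1:] if isinstance(t, list) and len(t) >= 1 else None

  def _arith(op):                                   # (9) shared shape of +, -, *
      def f(t):
          if isinstance(t, list) and len(t) == 2 and all(map(is_numeral, t)):
              return str(op(int(t[0]), int(t[1])))  # no leading zeros in the result
          return None
      return f

  plus  = _arith(lambda m, n: m + n)
  minus = _arith(lambda m, n: max(0, m - n))        # the monus m -°- n
  times = _arith(lambda m, n: m * n)

  def divide(t):                                    # (9) m div n;  n = 0 left undefined
      if isinstance(t, list) and len(t) == 2 and all(map(is_numeral, t)):
          m, n = int(t[0]), int(t[1])
          return None if n == 0 else str(m // n)
      return None

  TRUE, FALSE = [[]], []                            # the codings << >> and < > of (10)

  def num(t):
      return TRUE if is_numeral(t) else FALSE

  def equal(t):                                     # (10)  =: <t, u>
      if isinstance(t, list) and len(t) == 2:
          return TRUE if t[0] == t[1] else FALSE
      return None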

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 16

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.3.  A Functional Programming Fragment (cont.)
|
| We now provide the semantics for the basis step
| in the set of functions defined in Table 1.
|
| 11.  For each DTN t, !=!t:
|      is the total function
|      which is constantly t,
|      that is,
|
|      !=!t: u  is  t  for any DTN u.
|
| To continue our description of the semantics of FPF
| we examine the constructions of the inductive step
| for the functions in Table 1.
|
| 12.  If f_1, ..., f_k are functions (k >= 1), (f_k o ... o f_1):
|      is the k-fold composition of (7), being essentially the same
|      as the pseudo-Pascal f_1; ...; f_k, that is,
|
|      (f_k o ... o f_1): t  =  f_k: (f_k-1: (... (f_2: (f_1: t) ...))).
|
| (Such is defined, of course, only when all the intermediate steps are defined.)
|
| The next construction applies k functions in parallel and combines the results
| in a single list.
|
| 13.  If f_1, ..., f_k are functions (k >= 1),
|
|      then
|
|      DD([f_1, ..., f_k])  =  DD(f_1) |^| ... |^| DD(f_k)
|
|      and
|
|      [f_1, ..., f_k]: t   =  <f_1: t, ..., f_k: t>.
|
| This construction is a major tool in building lists.
|
| 14.  For p, f, g functions,
|
|                                   (  f: t       (p: t =/= < >),
|      (if p then f else g): t  is  <  g: t       (p: t  =  < >),
|                                   (  undefined  (p: t undefined).
|
| Thus, our device for viewing function p as a test is to consider
| p: t false if it is our coding < > for false, true if it is defined
| but not false, and undefined else.  The notation above is understood
| to mean that (if p then f else g): t is undefined if p: t = < > but
| g: t is undefined, or if p: t is defined and =/= < > but f: t is
| undefined.
|
| 15.  The symbol !a! is the 'apply-to-all operator'.  If f is a function,
|      (!a!f): is "f applied to all entries in the input list".  Specifically,
|      an input to (!a!f): must have the form <t_1, ..., t_k> with k >= 1, and
|      each t_i in DD(f), and then
|
|      (!a!f): <t_1, ..., t_k>  =  <f: t_1, ..., f: t_k>.
|
| 16.  The symbol / is the 'insertion operator'.
|
|      If f is a function then (/f): <t_1, t_2, t_3>, for example,
|      will be defined as f: <t_1, f: <t_2, t_3>>.  Equivalently,
|      using infix notation t f u instead of f: <t, u>, we have:
|
|      (/f): <t_1, t_2, t_3>  =  t_1 f (t_2 f t_3).
|
|      Similarly,
|
|      (/f): <t_1, t_2, t_3, t_4>  =  t_1 f (t_2 f (t_3 f t_4)).
|
|      Thus, / treats f as a function of two variables and
|      extends it to a function on any number of variables
|      by "inserting" it between the variables.
|
|      The formal definition is as follows.  The input must
|      have the form <t_1, ..., t_k>, (k >= 0), that is, it
|      cannot be a numeral.  We use induction on k.
|
|      (/f):  < >                =   < >,
|
|      (/f): <t_1>               =   t_1,
|
|      (/f): <t_1, ..., t_k+1>   =   f: <t_1, (/f): <t_2, ..., t_k+1>>.
|
| This completes the description of the syntax and semantics of FPF.
| Since the reader may have had very little prior experience with
| functional languages, we will write some FPF functions to
| illustrate some of the concepts.  Additional examples
| using recursion will be given in Section 5.1, but
| we shall be able to achieve quite a bit without
| any repetitive constructs.  Indeed, it is
| possible to write an FPF function to
| multiply two square matrices and
| this is done in (26) below.
|
| Manes & Arbib, AAPS, pages 15-17.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
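
The constructors (11)-(16) become higher-order functions on the same encoding,
a sketch under my conventions rather than the authors' notation:

  def const(t):                                     # (11)  !=!t
      return lambda u: t

  def comp(*fs):                                    # (12)  (f_k o ... o f_1)
      """Call as comp(f_k, ..., f_1), mirroring the written order;
      the argument is passed through f_1 first."""
      def h(t):
          for f in reversed(fs):
              if t is None:
                  return None
              t = f(t)
          return t
      return h

  def constr(*fs):                                  # (13)  [f_1, ..., f_k]
      def h(t):
          results = [f(t) for f in fs]
          return None if any(r is None for r in results) else results
      return h

  def if_then_else(p, f, g):                        # (14)  (if p then f else g)
      def h(t):
          b = p(t)
          if b is None:
              return None
          return g(t) if b == [] else f(t)          # < > codes false; anything else, true
      return h

  def apply_to_all(f):                              # (15)  (!a!f)
      def h(t):
          if not isinstance(t, list) or len(t) < 1:
              return None
          results = [f(x) for x in t]
          return None if any(r is None for r in results) else results
      return h

  def insert(f):                                    # (16)  (/f)
      def h(t):
          if not isinstance(t, list):
              return None
          if len(t) == 0:
              return []
          if len(t) == 1:
              return t[0]
          rest = h(t[1:])
          return None if rest is None else f([t[0], rest])
      return h

Combined with the basis sketch after Note 15, for instance,
comp(plus, constr(id_, const('1'))) behaves as a successor on numerals,
and insert(plus) sums a nonempty list of numerals.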

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 17

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.3.  A Functional Programming Fragment (cont.)
|
| We begin by introducing "abbreviations" which amount to "subprograms".
|
| 17.  We introduce the symbol =_abb.
|
|      If f is a syntactic function then
|
|      x  =_abb  f,
|
|      read "x is an abbreviation for f", is an
|      informal declaration that any occurrence
|      of x may be literally replaced by the
|      string f.
|
| We begin with some abbreviations which produce
| functions to manipulate lists and matrices.
|
| 18.  For any function f and n >= 0,
|
|      f^n is the abbreviation defined by
|
|      f^0  =_abb  id,
|
|      f^1  =_abb  f,
|
|      f^n  =_abb  (f o ... o f),  (n times for n > 1).
|
| For i >= 1 we have the following abbreviations:
|
| 19.  pr_i      =_abb  (head o tail^(i-1)),  the 'i^th projection function'.
|
| 20.  col_i     =_abb  (!a!pr_i),            the 'i^th column function'.
|
| 21.  transp_n  =_abb  [col_1, ..., col_n],  the 'n-column transpose function'.
|
| Thus, transp_3 is an abbreviation for the FPF function:
|
|      [(!a!(head o id)), (!a!(head o tail)), (!a!(head o tail o tail))].
|
| The reader may easily check that pr_i: <t_1, ..., t_n> is t_i for i =< n
| but undefined for i > n, so that pr_i selects the i^th entry of a list,
| that col_i returns the i^th column of a matrix:
|
|      col_i: <<a_11, ..., a_1n>, ..., <a_m1, ..., a_mn>>
|
|          (   <a_1i, ..., a_mi>,   (i =< n),
|      =   <
|          (   undefined,           (i  > n),
|
| and that
|
|      transp_n: <<a_11, ..., a_1n>, ..., <a_m1, ..., a_mn>>
|
|      =         <<a_11, ..., a_m1>, ..., <a_1n, ..., a_mn>>
|
| produces the transpose of an n-column matrix.
|
| Manes & Arbib, AAPS, pages 17-18.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
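
The abbreviations (19)-(21) can be checked mechanically.  Assuming the basis
and constructor sketches after Notes 15 and 16 are in scope, they unfold as
follows (again my illustration, not the authors' code):

  def pr(i):                    # pr_i      =  (head o tail^(i-1))
      return comp(head, *([tail] * (i - 1)))

  def col(i):                   # col_i     =  (!a!pr_i)
      return apply_to_all(pr(i))

  def transp(n):                # transp_n  =  [col_1, ..., col_n]
      return constr(*[col(i) for i in range(1, n + 1)])

  # Example:
  # transp(2)([['1', '2'], ['3', '4'], ['5', '6']])
  #     == [['1', '3', '5'], ['2', '4', '6']]
  # and pr(3)(['1', '2']) is None, i.e. undefined, since the list is too short.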

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 18

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.4.  Multifunctions
|
| Since denotational semantics is to assign an input/output meaning to each program,
| it is reasonable to consider possible general forms for input/output "functions".
| In the Pascal fragment of Section 1.2 inputs and outputs were assignments of
| natural numbers to identifiers whereas they were DTN's for the functional
| programming fragment of Section 1.3.  In Part 3 we shall be concerned
| with the theory of data types which addresses the question of how
| inputs and outputs can be structured (e.g., "DTN structure").
| But even if we bypass this issue for the time being, allowing
| the inputs and outputs to have no particular structure, we
| may nonetheless wish to consider more general things than
| partial functions for input/output descriptions.
|
| In this section we introduce "multifunctions".  We make no claim that
| partial functions and multifunctions exhaust all reasonable possibilities.
| Rather, we introduce the notion of a "category" in Chapter 2 as a candidate
| for a truly general framework.  The common properties of partial functions
| and multifunctions studied in this section will help to motivate later
| work with categories.
|
| A total function is "single-valued" in the sense that exactly one output f(x)
| results for each input x.  Similarly, a partial function is "at-most-one-valued".
| More generally, multifunctions obtain by allowing f(x) to be any set of outputs,
| including the empty set.  For an example, consider an anthropological data base
| for a population P in which it is possible to retrieve the names of the children
| (also in P) of any person in P.  The "children" multifunction f then assigns
| to each p in P the set f(p) of all children of p.  The formal definition
| of a multifunction is as follows.
|
| 1.  Definition.  Let X, Y be sets.  A 'multifunction' from X to Y is
|     a total function from X to the set of subsets of Y.  The set of
|     all multifunctions from X to Y will be denoted Mfn(X, Y).
|
| In set theory it is customary to call the
| set of subsets of Y the 'power set' of Y,
| which leads to the following standard:
|
| 2.  Notation.  If Y is a set, !P!(Y) denotes the set of subsets of Y.
|
| We then have, by Definition 1,
|
| 3.  Mfn(X, Y)  =  Tot(X, !P!(Y)).
|
| Why should Definition 1 be useful, then, if multifunctions
| are just a special case of total functions?  The reason lies
| in considering how we want to chain multifunctions together.
| For example, a grandchild is just a child of a child, so that
| if f in Mfn(P, P) is the "children" multifunction as above, one
| intuitively expects to obtain the "grandchildren" multifunction
| by an appropriate composition of f with itself.  Considering f
| as a total function from P to !P!(P) and trying to compose f
| with itself as in 1.3.7 does not work because the value of
| the output f(p) does not have the right form to be an input
| to f.  What we need is the following definition.
|
| 4.  Definition.  For f in Mfn(X, Y) and g in Mfn(Y, Z)
|     their 'composition' gf in Mfn(X, Z) is defined by:
|
|     gf(x)  =  {z in Z : there exists y in f(x) with z in g(y)}.
|
| Indeed, it is immediate that if f in Mfn(P, P) is
| the "children" multifunction then ff in Mfn(P, P)
| is the "grandchildren" multifunction we desired.
|
| Manes & Arbib, AAPS, pages 21-22.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
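
Multifunctions and their composition (Definitions 1 and 4) are just as easy
to prototype:  model f in Mfn(X, Y) as a function returning a Python set of
outputs.  The family data below is of course invented for the illustration.

  def mcompose(g, f):
      """The composite gf of Definition (4):
      gf(x) = {z : z in g(y) for some y in f(x)}."""
      return lambda x: {z for y in f(x) for z in g(y)}

  # The "children" multifunction on a tiny population.
  CHILDREN = {
      'alice': {'bob', 'carol'},
      'bob':   {'dave'},
      'carol': set(),
      'dave':  set(),
  }

  def children(p):
      return CHILDREN.get(p, set())

  grandchildren = mcompose(children, children)
  # grandchildren('alice') == {'dave'},  grandchildren('dave') == set()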

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 19

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.4.  Multifunctions (cont.)
|
| Multifunctions are suitable input/output functions for the
| following parallel computation scenario which generalizes
| that of 1.3.5.
|
| 5.  Let X be an input set and let Y be an output set.  Beginning with
|     an input x in X, a given algorithm simultaneously initiates a set
|     of noninteracting computations.  Some of these may not terminate
|     and those that do may halt at different times.  The denotational
|     semantics of the algorithm is the multifunction f in Mfn(X, Y)
|     which assigns to x the set f(x) of all outputs in Y resulting
|     from some terminating computation initiated by input x.
|
| One might, for example, add atomic multifunctions to the functional programming
| fragment of Section 1.3 and give a multifunction denotational semantics based
| on (5) rather than 1.3.3.  See Exercise 3.  In such a situation we would need
| a multifunction semantics for the FPF (f_k o ... o f_1).  Similarly, in
| attempting a multifunction semantics for Pascal we would need to assign
| a meaning to "begin f_1; ...; f_k end".  While the composition operation
| of (4) is the natural candidate, a technical issue is raised.  Up to now
| we have viewed the chaining together of, say, three functions in the
| following way:
|
|           o---------o    o---------o    o---------o
|      ---->|    f    |--->|    g    |--->|    h    |---->
|           o---------o    o---------o    o---------o
|
| For multifunctions, should this mean h(gf) or (hg)f?
| Fortunately, it makes no difference.
|
| 6.  Proposition.  (Associative Law for Multifunction Composition).
|
|     If    f in Mfn(W, X),
|
|           g in Mfn(X, Y),
|
|           h in Mfn(Y, Z),
|
|     then  h(gf) = (hg)f in Mfn(W, Z).
|
| Proof.  Let z be in (h(gf))(w).  Then there exists y in (gf)(w)
| with z in h(y).  But then there exists x in f(w) with y in g(x).
| By the definition of hg, z is in (hg)(x) and so z in ((hg)f)(w).
| So far, we have shown that (h(gf))(w) is a subset of ((hg)f)(w)
| for all w in W.  To complete the proof, let z be in ((hg)f)(w)
| and show z is in (h(gf))(w).  There exists x in f(w) with z in
| (hg)(x).  Thus, there exists y in g(x) with z in h(y).  By the
| definition of gf, y is in (gf)(w) and then z in (h(gf))(w).  þ
|
| Theorem 6 allows us to write the equal multifunctions
| h(gf) and (hg)f simply as hgf.  In fact, the proof
| has shown:
|
| 7.  (hgf)(w)
|
|      = {z in Z : there exists x in f(w) and then y in g(x) with z in h(y)}.
|
| Repeated use of the associative law guarantees that
| parentheses can be avoided for chains of all lengths.
|
| Manes & Arbib, AAPS, pages 22-23.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 20

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.4.  Multifunctions (cont.)
|
| We conclude this section by showing that partial functions (and so
| total functions) may be thought of as special cases of multifunctions.
|
| 9.  Definition.  For each f in Pfn(X, Y) define fˆ in Mfn(X, Y) by:
|
|               (  {f(x)},  x in DD(f),
|     fˆ(x)  =  <
|               (   Ø,      else.
|
| Such fˆ is closely associated to f.  For example, f can be completely deduced
| from fˆ because DD(f) = {x in X : fˆ(x) =/= Ø} and for x in DD(f), f(x) is the
| unique element of fˆ(x).  A multifunction g has the form fˆ if and only if g(x)
| has at most one element for all x.  Furthermore, the compositions of (4) and
| 1.3.7 respect each other as is shown in the next result.
|
| 10.  Proposition.  Let f be in Pfn(X, Y), g be in Pfn(Y, Z),
|      and let gf in Pfn(X, Z) be the composition of 1.3.7.
|      Let gˆfˆ in Mfn(X, Z) be the composition of (4).
|      Then (gf)ˆ = gˆfˆ.
|
| Proof.  (gˆfˆ)(x)  =  {z : there exists y in fˆ(x) with z in gˆ(y)}
|
|                    =  {z : x in DD(f) and f(x) in DD(g) and z = g(f(x))}
|
|                       (  {g(f(x))},  x in DD(f) and f(x) in DD(g),
|                    =  <
|                       (   Ø,         else.
|
|                    =  (gf)ˆ(x).      þ
|
| The import of (9), (10) is that "partial functions are multifunctions",
| that is, blurring the distinction between f and fˆ is unlikely to be
| imprecise.  Usually, one writes fˆ simply as f.  Thus, if f is in
| Pfn(X, Y) and g is in Mfn(Y, Z) we would write gf without comment
| for the more precise gfˆ in Mfn(X, Z).  One mild warning is in
| order, however, relating to 1.3.3.  If a known programming
| statement computes f we would expect to be able to write
| the statement:
|
|      if fˆ(x) = Ø then g(x) else h(x)
|
| in, say, Pascal.  This would compute h(x) if computation of f(x) halts,
| but would be undefined rather than returning g(x) if f(x) does not halt,
| that is, if x is not in DD(f).  In short, fˆ(x) = Ø in 1.3.5 should be
| interpreted not as a returned value but as a nontermination.  A similar
| interpretation applies to f(x) = Ø in (5).  On the other hand, there are
| circumstances such as the "children" multifunction where Ø is a reasonable
| returned value.  In a semantic environment where a possibly nonterminating
| algorithm has the empty set as a possible returned value, multifunctions
| may not provide the correct type of function.  See Exercise 2.1.10.
|
| 11.  Proposition.  (Associative Law for Partial Function Composition).
|
|      If    f in Pfn(W, X),
|
|            g in Pfn(X, Y),
|
|            h in Pfn(Y, Z),
|
|      then, with respect to the composition of 1.3.7,
|
|            h(gf) = (hg)f in Pfn(W, Z).
|
| Proof.  Using (6) and (10) we have:
|
| (h(gf))ˆ = hˆ(gf)ˆ = hˆ(gˆfˆ) = (hˆgˆ)fˆ = (hg)ˆfˆ = ((hg)f)ˆ,
|
| so that h(gf) = (hg)f.   þ
|
| Manes & Arbib, AAPS, pages 23-24.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
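
The passage from f to fˆ in Definition (9), and the compatibility of the two
compositions asserted in Proposition (10), can be spot-checked in a few lines
(my conventions as before:  None for undefined, sets for multifunction values).

  def hat(f):
      """fˆ of (9):  {f(x)} where f is defined, Ø where it is not."""
      def fhat(x):
          y = f(x)
          return set() if y is None else {y}
      return fhat

  def pcompose(g, f):            # composition of partial functions, 1.3.7
      def gf(x):
          y = f(x)
          return None if y is None else g(y)
      return gf

  def mcompose(g, f):            # composition of multifunctions, 1.4.4
      return lambda x: {z for y in f(x) for z in g(y)}

  def half(x):                   # DD = the even integers
      return x // 2 if x % 2 == 0 else None

  def recip(x):                  # DD = the nonzero numbers
      return None if x == 0 else 1.0 / x

  lhs = hat(pcompose(recip, half))            # (gf)ˆ
  rhs = mcompose(hat(recip), hat(half))       # gˆfˆ
  # On any sample input the two agree, e.g.
  # lhs(8) == rhs(8) == {0.25},  lhs(0) == rhs(0) == set(),  lhs(3) == rhs(3) == set()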

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 21

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics
|
| In this section we consider partial functions and multifunctions
| as frameworks for denotational semantics without reference to
| any particular programming language.  Basic constructions
| such as chaining, conditional testing, and looping
| are described at the function level.  The term
| "partially additive" refers to a kind of
| sum operation which can be defined on
| the sets Pfn(X, Y), Mfn(X, Y), (and
| more generally in Section 3.2).
|
| To fix the context we must choose just one of "partial function" or
| "multifunction", that is, we must specify the "semantic category" in
| the sense of the following definition (which will be generalized in the
| next chapter).
|
| 1.  Definition.  The 'semantic category' is either Pfn (for partial functions)
|     or Mfn (for multifunctions).  We adopt the noncommittal notation SC(X, Y)
|     to mean "Pfn(X, Y) if the semantic category is Pfn and Mfn if the semantic
|     category is Mfn".
|
| 2.  Notation.  We will use all the notations:
|
|     f : X -> Y
|
|            f
|         X ---> Y
|
|
|         X   o-----o  Y
|       ----->|  f  |----->
|             o-----o
|
| as synonyms for f in SC(X, Y).  These may appear geometrically reoriented
| in diagrams, for example, right-to-left, vertically, diagonally, and so on.
| The last notation is "flowscheme" notation.
|
| The important operation of iterated composition has already been
| introduced (in 1.4.4, 1.4.6-8 for Mfn, 1.3.7, 1.4.11 for Pfn).
| If f_i in SC(X_i-1, X_i) for i = 1, ..., n, suitable flowscheme
| notation for the composition f_n ... f_1 in SC(X_0, X_n) is:
|
| 3.
|
|     X_0   o---------o   X_1               X_n-1   o---------o   X_n
| --------->|   f_1   |--------->  . . .  --------->|   f_n   |--------->
|           o---------o                             o---------o
|
|                                f
| The labeled arrow notation X -----> Y is
| useful in "commutative diagrams" such as:
|
| 4.
|             f_1       f_2
|     X_0 o-------->o-------->o X_2
|         |\       X_1       / \
|         | \               /   \
|         |  \             /     \
|         |   \           /       \
|         |  g \         / f_3     \ h
|         |     \       /           \
|         |      \     /             \
|         |       \   /               \
|         |        v v       f_4       v
|          \        o------------------>o X_4
|           \      X_3                  |
|            \                          |
|          f  \                         | f_5
|              \                        |
|               \                       v
|                ---------------------->o X_5
|
| in which our convention is the following:
|
| 5.  In a diagram such as (4), if two paths of arrows begin at the same place
|     and end at the same place, then, unless the contrary is indicated, the
|     compositions of these paths are asserted to be equal.  To emphasize
|     this assertion we say "the diagram commutes".
|
| Thus, in (4), we have the following equations:
|
|     g  =  f_3 f_2 f_1          in  SC(X_0, X_3),
|
|     h  =  f_4 f_3              in  SC(X_2, X_4),
|
|     h f_2 f_1  =  f_4 g        in  SC(X_0, X_4),
|
|     f  =  f_5 f_4 f_3 f_2 f_1  in  SC(X_0, X_5),
|
| and so on.  Notation such as:
|
| 6.
|                   f
|       X o------------------>o Y
|          \                 /
|           \               /
|            \      ?      /
|             \           /
|            h \         / g
|               \       /
|                \     /
|                 \   /
|                  v v
|                   o
|                   Z
|
| could be used to indicate that h is not
| necessarily the same as gf in SC(X, Z).
|
| Manes & Arbib, AAPS, pages 26-27.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 22

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| The identity function id : !DTN! --> !DTN! was introduced
| with FPF in 1.3.8.  More generally, we have the following:
|
| 7.  Definition.  For each set X, the 'identity function' of X,
|     id_X : X -> X is the total function defined by id_X (x) = x.
|     This function is in Pfn(X, X) and so may be considered in
|     Mfn(X, X) as in 1.4.9-10, so that always id_X in SC(X, X).
|
| We clearly have the following:
|
| 8.  For f in SC(X, Y),
|
|     id_Y f  =  f  =  f id_X.
|
| We may express this by a commutative diagram:
|
|                 f
|     X  o------------------>o  Y
|         \                 ^ \
|          \               /   \
|           \             /     \
|            \         f /       \
|       id_X  \         /         \  id_Y
|              \       /           \
|               \     /             \
|                \   /               \
|                 v /       f         v
|               X  o------------------>o  Y
|
| Alternatively, inventing the "through box":
|
|       X   o-----o  X
|     ----->|-----|----->
|           o-----o
|
| as a flowscheme notation for id_X,
| (8) may be expressed in flowscheme
| terms by:
|
|        |                                   |
|        | X                                 | X
|        v                                   v
|     o-----o              |              o-----o
|     |     |              |              |  |  |
|     |  f  |              | X            |  |  |
|     |     |              v              |  |  |
|     o-----o           o-----o           o-----o
|        |              |     |              |
|        | Y      =     |  f  |     =        | X
|        v              |     |              v
|     o-----o           o-----o           o-----o
|     |  |  |              |              |     |
|     |  |  |              | Y            |  f  |
|     |  |  |              |              |     |
|     o-----o              v              o-----o
|        |                                   |
|        | Y                                 | Y
|        v                                   v
|
| Manes & Arbib, AAPS, pages 27-28.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 23

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| We now introduce the fundamental operation of sum,
| first for Mfn and then for Pfn.
|
| 9.  Definition.  Let X and Y be sets.  Let I be a set and for each i in I
|     let f_i be in Mfn(X, Y).  (We say (f_i : i in I) is an I-indexed family
|     in Mfn(X, Y).)  Then the 'sum' Sum(f_i : i in I), alternatively written
|     as Sum_(i in I) f_i, or as Sum_i,I f_i, is the multifunction in Mfn(X, Y)
|     defined by:
|
|     (Sum_i,I f_i)(x)  =  |_|^i,I f_i (x)
|
|                       =  {y in Y : y in f_i (x) for some i in I}.
|
| Hence, for one-element families (meaning that I has one element)
| Sum(f) = f, and in the case where I is empty the sum maps x
| to the empty set for all x in X (see Exercise 1).
|
| If I = {1, 2, ..., n} with n >= 2, so that the family (f_i : i in I)
| has the form (f_1, ..., f_n), we write f_1 + ... + f_n as a synonym
| for Sum(f_i : i in I).  In general, we may write Sum f_i instead
| of  Sum(f_i : i in I) when I is clear from context.
|
| An intuitive flowscheme notation for summing is exemplified by the following.
|
| 10.  f + g is written:
|
|                          |
|                          |
|                          |
|                          v
|        o-----------------o-----------------o
|        |                                   |
|        |                                   |
|        |                                   |
|     o-----o                             o-----o
|     |     |                             |     |
|     |  f  |                             |  g  |
|     |     |                             |     |
|     o-----o                             o-----o
|        |                                   |
|        |                                   |
|        |                                   |
|        o-----------------o-----------------o
|                          |
|                          |
|                          |
|                          v
|
| and similarly for other families (f_i : i in I).
|
| This notation conveys the idea of (9) since an output
| from (f + g)(x) is an output from either f(x) or g(x).
|
| Manes & Arbib, AAPS, pages 28-29.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
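
The sum of Definition (9) is a pointwise union, so a sketch is one short loop
(sets for multifunction values, as before):

  def msum(fs):
      """Sum of an indexed family of multifunctions, given as an iterable."""
      fs = list(fs)
      def s(x):
          result = set()
          for f in fs:
              result |= f(x)
          return result
      return s

  # Example:
  # msum([lambda x: {x + 1}, lambda x: {x * 2}])(3) == {4, 6}
  # msum([])(3) == set(), matching the remark about the empty family.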

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 24

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| We next seek a suitable sum operation for partial functions.
| It is easy to see that when each f_i in (9) is a partial function
| (i.e., a multifunction which happens to be a partial function, cf.
| 1.4.9-10) then Sum f_i need not be.  In the case of (10), let f, g
| be partial functions and x be such that f(x) and g(x) are defined
| and different.  Then the sum f + g maps x to the set {f(x), g(x)}
| and so is not a partial function.  To better understand what needs
| to be fixed, imagine the "fanout" in (10),
|
|                          |
|                          |
|                          |
|                          v
|        o-----------------o-----------------o
|        |                                   |
|        |                                   |
|        |                                   |
|        v                                   v
|
| as controlled by a test such as "if f is defined go left;
| if g is defined go right".  For multifunctions, such a test can
| pass the input down both lines simultaneously.  For partial functions,
| we demand that such a test choose 'at most one' alternative and define
| (10) only when DD(f) |^| DD(g) = Ø.  We have motivated the definition:
|
| 11.  Let X, Y be sets and let (f_i : i in I) be an I-indexed family in
|      Pfn(X, Y).  Then (f_i : i in I) is 'summable' in Pfn(X, Y) if for
|      all i, j in I with i =/= j, DD(f_i) |^| DD(f_j) = Ø.  In that case,
|      Sum f_i = Sum(f_i : i in I) in Pfn(X, Y) is defined by:
|
|      DD(Sum f_i)   =   |_|^(i,I) DD(f_i)
|
|                        (  f_j (x),    if there exists j with x in DD(f_j),
|      (Sum f_i)(x)  =   <
|                        (  undefined,  else.
|
| Notice that we do not require that I be finite.
|
| The following is an immediate result:
|
| 12.  If (f_i : i in I) is summable,
|
|      then  (Sum f_i)ˆ  =  Sum(f_iˆ),
|
|      where fˆ is defined in 1.4.9 and
|      the latter sum is that of (9).
|
| Thus, the Pfn sum, when it exists, specializes the Mfn sum.
|
| In particular, we have for one-element families
|
| 13.  Sum(f)  =  f
|
| and for empty families
|
| 14.  Sum Ø   =  0
|
| where 0 : A -> B denotes the everywhere undefined
| partial function characterized by DD(0) = Ø.  It is
| obvious that we may extend a summable family by adding
| any number of 0's or we may delete any number of 0's which
| are already there without affecting either the summability of
| the family or the value of the sum.  It is for this reason that,
| in this context, we prefer the notation 0 instead of the alternate
| notation _|_ introduced in Exercise 1.3.6.
|
| Our operation of sum, then, differs from ordinary numerical addition
| in two fundamental respects:
|
| a.  It is not always defined.  Indeed, for any f in Pfn(X, Y)
|     with DD(f) =/= Ø, f + f is never defined.  The "partial" in
|     "partially additive" refers to this property -- addition (= sum)
|     is only partially defined.
|
| b.  There are many infinite families whose sum is defined.
|
| We remark that even finite sums such as (10) cannot be implemented
| in an unrestricted way.  It is well known from computability theory
| that given two programs which compute partial functions f, g there is
| no way to decide, in general, if DD(f) |^| DD(g) = Ø, and this makes it
| hard to imagine a suitable approach to compute f + g for arbitrary f, g
| (see Exercise 4 for an unsuccessful attempt).  There remains the option
| to restrict the use of sum to "provably disjoint" families, and this will
| in fact be what happens when we give a partially additive semantics for
| iteration in Section 3.3 (see also (27) below).
|
| Manes & Arbib, AAPS, pages 29-30.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
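
For partial functions the same idea works only on summable families, and, as
the closing remark warns, disjointness of domains cannot be decided in general.
In a sketch one can at least check it over a finite sample of the domain
(None for undefined, as before;  the check and its sample are my own device):

  def psum(fs, sample):
      """Sum of a family of partial functions, with summability checked
      pointwise over a finite sample of the common domain X."""
      fs = list(fs)
      for x in sample:
          if sum(1 for f in fs if f(x) is not None) > 1:
              raise ValueError('not summable: domains overlap at %r' % (x,))
      def s(x):
          for f in fs:
              y = f(x)
              if y is not None:
                  return y
          return None                  # outside the union of the DD(f_i)
      return s

  # Example: two branches with disjoint domains of definition.
  neg  = lambda x: -x if x < 0 else None
  keep = lambda x: x if x >= 0 else None
  absolute = psum([neg, keep], range(-5, 6))
  # absolute(-3) == 3,  absolute(4) == 4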

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 25

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| We turn to some properties of sum, beginning with the following one.
|
| 15.  Proposition.  (Distributive Law of Composition over Sums in Mfn).
|      Let f be in Mfn(W, X), let (g_i : i in I) be a family in Mfn(X, Y),
|      and let h be in Mfn(Y, Z).  Then:
|
|      1.  (Sum g_i)f  =  Sum(g_i f)  in  Mfn(W, Y).
|
|      2.  h(Sum g_i)  =  Sum(h g_i)  in  Mfn(X, Z).
|
| Proof.
|
| 1.  y in ((Sum g_i)f)(w)
|
|     <=>  there exists x in f(w) with y in (Sum g_i)(x)
|
|     <=>  there exists x in X, i in I, with x in f(w) and y in g_i (x)
|
|     <=>  there exists i in I with y in (g_i f)(w)
|
|     <=>  y in (Sum(g_i f))(w).   
|
| 2.  z in (h(Sum g_i))(x)
|
|     <=>  there exists y in (Sum g_i)(x) with z in h(y)
|
|     <=>  there exists i in I, y in g_i (x), with z in h(y)
|
|     <=>  there exists i in I with z in (h g_i)(x)
|
|     <=>  z in (Sum(h g_i))(x).
|
| þ
|
| 16.  Corollary.  (Distributive Law of Composition over Sums in Pfn).
|      Let f be in Pfn(W, X), let (g_i : i in I) be a summable family
|      in Pfn(X, Y), and let h be in Pfn(Y, Z).  Then (g_i f : i in I)
|      and (h g_i : i in I) are summable, and:
|
|      1.  (Sum g_i)f  =  Sum(g_i f)  in  Pfn(W, Y).
|
|      2.  h(Sum g_i)  =  Sum(h g_i)  in  Pfn(X, Z).
|
| Proof.
|
| 1.  If w in DD(g_i f) |^| DD(g_j f) then f(w) in DD(g_i) |^| DD(g_j), so i = j.
|
| 2.  If x in DD(h g_i) |^| DD(h g_j) then x in DD(g_i) |^| DD(g_j), so i = j.
|
| Then equality of the sums follows from (15) in view of (12).  þ
|
| Proposition 15 and Corollary 16 are valid when I is empty, yielding the following:
|
| 17.  For f in SC(X, Y) and for all sets W, Z we have the commutative diagram:
|
|              0 
|      W o---------->o X
|         \          |\
|          \         | \
|           \        |  \
|            \       |   \
|             \    f |    \
|            0 \     |     \ 0
|               \    |      \
|                \   |       \
|                 \  |        \
|                  \ |         \
|                   vv          v
|                  Y o---------->o Z
|                          0
|
| where the four 0's are the appropriate empty sums of (14).
| It follows that any composition f_n ... f_1 is 0
| if any of the f_i is 0.
|
| Manes & Arbib, AAPS, pages 30-31.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 26

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| A useful result about the existence of sums is the following:
|
| 18.  Proposition.  Let (f_i : i in I) be a summable family in Pfn(X, Y).
|
|      Then:
|
|      a.  If    J c I,
|
|          then  (f_i : i in J) is summable in Pfn(X, Y).
|
|      b.  If    (g_i : i in I) is a similarly indexed family
|                (not necessarily summable) in Pfn(Y, Z),
|
|          then  (g_i f_i : i in I) is a summable family in Pfn(X, Z).
|
| Proof.  That (a) holds is obvious.  For (b), if x is in
| DD(g_i f_i) |^| DD(g_j f_j) then x is in DD(f_i) |^| DD(f_j),
| so i = j.  þ
|
| In the balance of this section we emphasize the
| use of sums to define programming constructs.
|
| 19.  Definition.  If A is a subset of X,
|      the 'inclusion function' of A is
|      inc_A in Pfn(X, X) defined by:
|
|      DD(inc_A)  =  A,
|
|      inc_A (x)  =  x.
|
| Thus, inc_Ø = 0 is the everywhere undefined function X -> X and inc_X = id_X.
| As usual, we consider inc_A in Mfn(X, X) as well, as in 1.4.9-10.
|
| 20.  Definition.  If p in Pfn(X, X) is an inclusion function (so that
|      p = inc_A for A = DD(p)) we say p is a 'guard function', and for f
|      in SC(X, Y) we introduce the notation p -> f for fp.  The meaning of
|      p -> f is "if p is true then execute f else the result is undefined",
|      where to say p(x) is true means x in DD(p).  Thus p "guards" entry to f.
|      Such p -> f is called a 'guarded command'.
|
| Manes & Arbib, AAPS, pages 31-32.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
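
Inclusion functions and guarded commands, Definitions (19) and (20), come out
as follows on the None-for-undefined convention (subsets are represented by
their membership predicates;  my encoding, not the authors'):

  def inc(predicate):
      """inc_A for A = {x : predicate(x)}:  identity on A, undefined elsewhere."""
      return lambda x: x if predicate(x) else None

  def guard(p, f):
      """The guarded command p -> f, that is, the composite f p:
      execute f only where the guard p is defined."""
      def pf(x):
          y = p(x)
          return None if y is None else f(y)
      return pf

  # Example:
  # halve_even = guard(inc(lambda n: n % 2 == 0), lambda n: n // 2)
  # halve_even(6) == 3,  halve_even(5) is None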

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 27

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| 21.  Definition.  For n >= 1, an 'n-way test' on X
|
|      is   (p_1, ..., p_n),
|
|      with each p_i an inclusion function in Pfn(X, X)
|
|      and  DD(p_i) |^| DD(p_j)  =  Ø  if i =/= j.
|
| 22.  Definition.  Let (p_1, ..., p_n) be an n-way test on X and let
|      f_1, ..., f_n be in SC(X, Y).  Then a natural generalization of
|      the case statement in Pascal is:
|
|      case(p_1, ..., p_n) of (f_1, ..., f_n)  =  f_1 p_1 + ... + f_n p_n
|
|      with flowscheme:
|
|                          |
|                          |
|                          | X
|                          |
|                          v
|        o-----------------o-----------------o
|        |                                   |
|        |                                   |
|        |                                   |
|     o-----o                             o-----o
|     |     |                             |     |
|     | p_1 |                             | p_n |
|     |     |                             |     |
|     o-----o                             o-----o
|        |                                   |
|      X |             .   .   .             | X
|        |                                   |
|     o-----o                             o-----o
|     |     |                             |     |
|     | f_1 |                             | f_n |
|     |     |                             |     |
|     o-----o                             o-----o
|        |                                   |
|        |                                   |
|        |                                   |
|        o-----------------o-----------------o
|                          |
|                          |
|                          | Y
|                          |
|                          v
|
| The sum is defined by Proposition 18.
|
| A related construction in multifunction semantics is the following:
|
| 23.  Definition.  Let p_1, ..., p_n be guard functions
|      in Pfn(X, X) and let f_1, ..., f_n be in Mfn(X, Y).
|      Then the 'alternative construct' is:
|
|      if p_1 -> f_1 [] ... [] p_n -> f_n fi
|
|      =  f_1 p_1 + ... + f_n p_n  in  Mfn(X, Y).
|
| We emphasize that the guards here are not required to have disjoint
| domains.  The intended meaning is "pick any i for which the guard p_i
| is true and execute f_i".  The flowscheme is the same as in (22).
|
| The Pascal if-then-else construction is a special case of (22) as follows.
|
| 24.  Definition.  Let A be a subset of X.
|      Define A’ to be the complement of A,
|      that is, A’ = {x in X : x not in A}.
|      Then (inc_A, inc_A’) is a two-way
|      test on X.  For f, g in SC(X, Y)
|      define:
|
|      if A then f else g  =  f inc_A + g inc_A’
|
|      in SC(X, Y).
|
| Two suitable flowschemes are:
|
|                            |
|                            |
|                            | X
|                            |
|                            v
|                            o
|                           / \
|                          /   \
|                 T       /     \       F
|          o-------------o   A   o-------------o
|          |              \     /              |
|          |               \   /               |
|          |                \ /                |
|          v                 o                 v
|     o---------o                         o---------o
|     |         |                         |         |
|     |    f    |                         |    g    |
|     |         |                         |         |
|     o---------o                         o---------o
|          |                                   |
|          |                                   |
|          |                                   |
|          o-----------------o-----------------o
|                            |
|                            |
|                            | Y
|                            |
|                            v
|
|
|                            |
|                            |
|                            | X
|                            |
|                            v
|          o-----------------o-----------------o
|          |                                   |
|          |                                   |
|          |                                   |
|     o---------o                         o---------o
|     |         |                         |         |
|     |  inc_A  |                         |  inc_A’ |
|     |         |                         |         |
|     o---------o                         o---------o
|          |                                   |
|        X |                                   | X
|          |                                   |
|     o---------o                         o---------o
|     |         |                         |         |
|     |    f    |                         |    g    |
|     |         |                         |         |
|     o---------o                         o---------o
|          |                                   |
|          |                                   |
|          |                                   |
|          o-----------------o-----------------o
|                            |
|                            |
|                            | Y
|                            |
|                            v
|
| Manes & Arbib, AAPS, pages 32-33.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
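
Definitions 23 and 24 above are easy to see in action if one models Pfn(X, Y)
by finite Python dictionaries.  The sketch below is only an illustration under
that assumption, not the book's notation:  a partial function is a dict, inc_A
is the identity restricted to A, the sum of functions with disjoint domains is
dict union, and "if A then f else g" comes out as f inc_A + g inc_A'.  All of
the helper names (compose, inc, psum, if_then_else) are mine.

    # A sketch of Pfn(X, Y): a partial function is a dict defined on part of X.

    def compose(g, f):
        """(g o f)(x) is defined iff f(x) is defined and g(f(x)) is defined."""
        return {x: g[y] for x, y in f.items() if y in g}

    def inc(A):
        """Inclusion inc_A : X -> X, the identity restricted to the subset A."""
        return {a: a for a in A}

    def psum(*fs):
        """Sum of partial functions; defined only when domains are disjoint."""
        total = {}
        for f in fs:
            assert not (total.keys() & f.keys()), "summands must be disjoint"
            total.update(f)
        return total

    def if_then_else(A, X, f, g):
        """if A then f else g  =  f inc_A + g inc_A'   (Definition 24)."""
        return psum(compose(f, inc(A)), compose(g, inc(X - A)))

    X = {0, 1, 2, 3}
    A = {0, 2}
    f = {x: "even" for x in X}
    g = {x: "odd" for x in X}
    print(if_then_else(A, X, f, g))   # e.g. {0: 'even', 2: 'even', 1: 'odd', 3: 'odd'}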

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 28

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| The sum operation and composition lead to a calculus to
| manipulate functions.  We begin with two basic properties
| of inclusion functions whose proof is obvious and follow this
| with an example that simplifies a compound conditional statement.
|
| 25.  Proposition.  Let A and B be subsets of X.  Then:
|
|      a.  inc_A inc_B  =  inc_(A |^| B)  =  inc_B inc_A.
|
|      b.  If    A |^| B  =  Ø,
|
|          then  inc_A + inc_B exists,
|
|          and   inc_A + inc_B  =  inc_(A |_| B).
|
| 26.  Example.  If A, B c X, and f, g, h in SC(X, Y), then:
|
|      if A then (if B then f else g) else (if A’ then f else h)
|
|      =  (f inc_B + g inc_B’)inc_A + (f inc_A’ + h inc_A)inc_A’
|
|         <since A’’ = A>
|
|      =  f inc_B inc_A + g inc_B’ inc_A + f inc_A’ inc_A’ + h inc_A inc_A’
|
|         <by (15)>
|
|      =  f inc_(A |^| B) + g inc_(A |^| B’) + f inc_A’
|
|         <by (25)>
|
|      =  f(inc_(A |^| B) + inc_A’) + g inc_(A |^| B’)
|
|         <by (15) since the sum in parentheses is defined by (25)>
|
|      =  f(inc_((A |^| B) |_| A’)) + g inc_(A |^| B’)
|
|         <by (25)>
|
|      =  f inc_(A |^| B’)’ + g inc_(A |^| B’)
|
|      =  if A |^| B’ then g else f.
|
| Manes & Arbib, AAPS, pages 33-34.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
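
Proposition 25 and Example 26 can be spot-checked mechanically in the same
dictionary model of Pfn sketched above; the helper names are again mine, and
the test below only confirms the identity for one small choice of X, A, B,
f, g, h.

    # Spot check of Example 26 in a dict model of Pfn(X, Y).

    def compose(g, f):
        return {x: g[y] for x, y in f.items() if y in g}

    def inc(A):
        return {a: a for a in A}

    def psum(f, g):
        assert not (f.keys() & g.keys())
        return {**f, **g}

    def if_then_else(A, X, f, g):
        return psum(compose(f, inc(A)), compose(g, inc(X - A)))

    X = set(range(8))
    A = {0, 1, 2, 3}
    B = {0, 2, 4, 6}
    f = {x: ("f", x) for x in X}
    g = {x: ("g", x) for x in X}
    h = {x: ("h", x) for x in X}

    lhs = if_then_else(A, X,
                       if_then_else(B, X, f, g),        # if B then f else g
                       if_then_else(X - A, X, f, h))    # if A' then f else h
    rhs = if_then_else(A & (X - B), X, g, f)            # if A |^| B' then g else f
    assert lhs == rhs
    print("Example 26 identity holds on this instance")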

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 29

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| Repetitive constructs may be defined using infinite sums.  For example:
|
| 27.  Definition.  For A c X and f in SC(X, X) define:
|
|      while A do f
|
|      =  Sum_(n=0,OO) inc_A’ (f inc_A)^n
|
|      in SC(X, X)
|
| (where, for g in SC(Y, Y), g^n is defined by g^0 = id_Y, g^(n+1) = g^n g)
| with one summand for each number n of traversals of the loop in either of
| these two variant flowschemes:
|
|                                 |
|                                 |
|     o-------------------------->| X
|     |                           |
|     |                           v
|     |                           o
|     |                          / \
|     |                         /   \
|     |                T       /     \       F
|     |         o-------------o   A   o-------------o
|     |         |              \     /              |
|     |       X |               \   /               | X
|     |         |                \ /                |
|     |         v                 o                 v
|     |    o---------o
|     |    |         |
|     |    |    f    |
|     |    |         |
|     |    o---------o
|     |         |
|     |         |
|     |         |
|     o---------o
|
|
|                                 |
|                                 |
|     o-------------------------->| X
|     |                           |
|     |                           v
|     |         o-----------------o-----------------o
|     |         |                                   |
|     |         |                                   |
|     |         v                                   v
|     |    o---------o                         o---------o
|     |    |         |                         |         |
|     |    |  inc_A  |                         |  inc_A’ |
|     |    |         |                         |         |
|     |    o---------o                         o---------o
|     |         |                                   |
|     |       X |                                   | X
|     |         v                                   v
|     |    o---------o
|     |    |         |
|     |    |    f    |
|     |    |         |
|     |    o---------o
|     |         |
|     |         |
|     o---------o
|
| That the sum exists when the semantic category is Pfn
| is clear from the fact that:
|
| 28.  DD(inc_A’ (f inc_A)^n)
|
|      =  {x in X : x, f(x), ..., f^(n-1)(x) in A, and f^n (x) not in A},
|
| which ensures that x is in DD(inc_A’ (f inc_A)^n) for at most one n.
|
| Manes & Arbib, AAPS, page 34.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
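
Definition 27 and the domain formula (28) can be sanity-checked in the same
dictionary model of Pfn:  since each x lies in the domain of at most one
summand, the infinite sum may be truncated once the summands become empty.
The sketch below (names mine) compares the summed semantics with a direct
operational reading on a loop that repeatedly halves even numbers.

    # "while A do f" as a sum of summands inc_A' (f inc_A)^n, in a dict model of Pfn.

    def compose(g, f):
        return {x: g[y] for x, y in f.items() if y in g}

    def inc(A):
        return {a: a for a in A}

    def while_do(A, X, f, bound=100):
        """Sum_{n=0..bound} inc_A' (f inc_A)^n; the summands are disjoint by (28)."""
        result, stage = {}, inc(X)                 # stage = (f inc_A)^n, starting at n = 0
        for _ in range(bound + 1):
            summand = compose(inc(X - A), stage)   # inc_A' (f inc_A)^n
            assert not (result.keys() & summand.keys())
            result.update(summand)
            stage = compose(f, compose(inc(A), stage))   # advance to (f inc_A)^(n+1)
        return result

    def while_do_iterative(A, f, x, bound=100):
        """Operational reading: run f while x is in A (None means undefined)."""
        for _ in range(bound + 1):
            if x not in A:
                return x
            if x not in f:
                return None
            x = f[x]
        return None

    X = set(range(16))
    A = {x for x in X if x % 2 == 0 and x != 0}    # loop while x is even and nonzero
    f = {x: x // 2 for x in X}                     # loop body: halve x

    summed = while_do(A, X, f)
    assert all(summed.get(x) == while_do_iterative(A, f, x) for x in X)
    print(sorted(summed.items()))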

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 30

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 1.  An Introduction to Denotational Semantics
|
| 1.5.  A Preview of Partially Additive Semantics (cont.)
|
| 29.  Definition.  For A c X and f in SC(X, X) define:
|
|      repeat f until A  =  (while A’ do f) f.
|
| It is easy to use the laws for manipulating sums to
| deduce a formula like that of (27), (see Exercise 7).
|
| A companion to the multivalued alternative construct
| of (23) is the multivalued repetitive construct:
|
|      do p_1 -> f_1 [] ... [] p_n -> f_n od
|
| which is intended to mean "pick any i for which guard p_i is true and
| execute f_i;  repeat until no such i exists and then exit".  Since
| the choice of i is multivalued, many successful computation paths
| are possible.  A suitable formal definition of the semantics
| is the following:
|
| 30.  Definition.  Let p_1, ..., p_n be guard functions
|      in Pfn(X, X) and let f_1, ..., f_n be in Mfn(X, X).
|      Then the 'multivalued repetitive construct' is:
|
|      do p_1 -> f_1 [] ... [] p_n -> f_n od
|
|      =  while A do if p_1 -> f_1 [] ... [] p_n -> f_n fi,
|
|      where if p_i = inc_A_i, then A = A_1 |_| ... |_| A_n.
|
| This may be expressed as an infinite sum (see Exercise 8).
|
| 31.  Example.  For A c X, f in SC(X, X), g in SC(X, Y) we have the
|      identity [of the constructs in the following two flowschemes]:
|
|                                 |
|                                 |
|     o-------------------------->| X
|     |                           |
|     |                           v
|     |                           o
|     |                          / \
|     |                         /   \
|     |                T       /     \       F
|     |         o-------------o   A   o-------------o
|     |         |              \     /              |
|     |         |               \   /               |
|     |         |                \ /                |
|     |         v                 o                 v
|     |     o-------o                           o-------o
|     |     |       |                           |       |
|     |     |   f   |                           |   g   |
|     |     |       |                           |       |
|     |     o-------o                           o-------o
|     |         |                                   |
|     |         |                                   | Y
|     |         |                                   |
|     o---------o                                   v
|
| =============================================================
|
|                                          |
|                                          |
|                                          | X
|                                          |
|                                          v
|                                          o
|                                         / \
|                                        /   \
|                                 T     /     \     F
|                            o---------o   A   o---------o
|                            |          \     /          |
|     o--------------------->|           \   /           |
|     |                      |            \ /            |
|     |                      v             o             v
|     |                      o                       o-------o
|     |                     / \                      |       |
|     |                    /   \                     |   g   |
|     |             T     /     \     F              |       |
|     |        o---------o   A   o---------o         o-------o
|     |        |          \     /          |             |
|     |        |           \   /           |             |
|     |        |            \ /            |             |
|     |        v             o             v             |
|     |    o-------o                   o-------o         |
|     |    |       |                   |       |         |
|     |    |   f   |                   |   g   |         |
|     |    |       |                   |       |         |
|     |    o-------o                   o-------o         |
|     |        |                           |             |
|     |        |                           |             |
|     |        |                           |             |
|     o--------o                           o-------------o
|                                                 |
|                                                 | Y
|                                                 |
|                                                 v
|
| that is,
|
|      g (while A do f)  =  if A then g (while A do f) else g.
|
| A formal proof is as follows, where we make use of (15), (16), (25).
|
|      if A then g (while A do f) else g
|
|             OO
|      =  { g Sum inc_A’ (f inc_A)^n } inc_A  +  g inc_A’
|             n=0 
|
|             OO
|      =  g { Sum inc_A’ (f inc_A)^n }        +  g inc_A’
|             n=1
|
|         <since  inc_A’ inc_A = 0,  while inc_A inc_A = inc_A>
|
|             OO
|      =  g { Sum inc_A’ (f inc_A)^n }
|             n=0
|
|         <since the sum in parentheses 'is' defined>
|
|      =  g (while A do f).
|
| Manes & Arbib, AAPS, pages 34-36.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
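
Definition 29 and the identity of Example 31 can also be checked in the
dictionary model of Pfn; the example below (names and data mine) uses a loop
that subtracts 3 while x >= 5, so the repeat construct is genuinely partial:
inputs that never reach A simply have no output, as the semantics intends.

    # Checking Definition 29 and the identity of Example 31 in a dict model of Pfn.

    def compose(g, f):
        return {x: g[y] for x, y in f.items() if y in g}

    def inc(A):
        return {a: a for a in A}

    def psum(f, g):
        assert not (f.keys() & g.keys())
        return {**f, **g}

    def while_do(A, X, f, bound=50):
        result, stage = {}, inc(X)
        for _ in range(bound + 1):
            result.update(compose(inc(X - A), stage))
            stage = compose(f, compose(inc(A), stage))
        return result

    def if_then_else(A, X, f, g):
        return psum(compose(f, inc(A)), compose(g, inc(X - A)))

    X = set(range(20))
    A = {x for x in X if x >= 5}           # loop while x >= 5
    f = {x: x - 3 for x in X if x >= 3}    # partial: undefined below 3
    g = {x: 10 * x for x in X}

    w = while_do(A, X, f)
    # Example 31:  g (while A do f)  =  if A then g (while A do f) else g.
    assert compose(g, w) == if_then_else(A, X, compose(g, w), g)
    # Definition 29:  repeat f until A  =  (while A' do f) f.
    repeat_until = compose(while_do(X - A, X, f), f)
    print(sorted(repeat_until.items()))    # defined only where the repeat terminates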

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 31

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| Going beyond the partial functions and multifunctions
| already considered, one might invent other useful notions
| of the input/output function from X to Y.  In addition to the
| need to consider X, Y as "data structures", there are theoretical
| approaches to semantics in which all X, Y must carry further structure.
| Rather than embark on the misguided task of presenting an exhaustive list
| of present and future possibilities, we introduce 'categories' as a framework
| for semantics which possess so little structure that most models of semantics
| can be represented this way.  Surprisingly, what structure remains can be
| extensively developed and there is a great deal to say.
|
| Category theory per se is tangential to this book.  We discuss only a few
| topics which bear directly on our analysis of the "semantic category".
| In Section 2.1 we introduce the notion of a category which provides
| the bare bones of abstraction of the semantics of composition.
| Section 2.2 introduces the useful organizing principle of
| duality and relates it to isomorphisms and to initial and
| terminal objects.  Isomorphism is self-dual and initial
| is dual to terminal.  The uniqueness of initial objects
| has important instantiations in semantics, such as the
| uniqueness of a sequence defined by simple recursion.
| Zero objects are simultaneously initial and terminal
| and generalize the empty set in Pfn.  To round out
| this introduction to category theory we present,
| in Section 2.3, the notion of product and the
| dual concept of coproduct which both find
| frequent applications throughout the book.
|
| With this we have all the category theory needed
| for our study of program semantics in Chapter 3.
| Further category theory is developed in Chapter 4
| as motivated by the issues raised by attempting to
| describe assertion semantics in a semantic category.
|
| When we turn to the study of data types in Part 3
| we shall need to call on further concepts from
| category theory -- functors, limits, and
| algebraic theories.
|
| The concepts in this chapter are quite abstract
| and may seem so even to readers with experience
| in pure mathematics.  We encourage patience!
| Familiarity with the language will grow and
| the approach should come to seem increasingly
| natural with the applications to semantics in
| subsequent chapters.
|
| Manes & Arbib, AAPS, pages 38-39.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 32

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.1.  The Definition of a Category
|
| A "category" is an abstraction of "sets and functions between them".  In a
| category sets become "objects", abstract things with no internal structure.
| There 'are' sets in the theory, however, namely, for each two objects X, Y
| there is a set of "morphisms" from X to Y.  These morphisms compose in an
| associative way, and there are identity morphisms.  The motivating examples
| for us are (2) and (3) below.  Here, then, is the precise definition:
|
| 1.  Definition.  A 'category' C
|     is given by data  (1, 2, 3)
|     subject to axioms (a, b, c)
|     as follows:
|
|     Datum 1.  A collection ob(C) of C-'objects' X, Y, Z, ... .
|
|     Datum 2.  For each ordered pair of objects (X, Y) a set C(X, Y)
|               of C-'morphisms' from X to Y.  We use the term 'map'
|               as a synonym for morphism.
|
|     Axiom a.  The sets C(X, Y) are disjoint:
|
|               If    C(X, Y) |^| C(X', Y') =/= Ø,
|
|               then  X = X' and Y = Y'.
|
|               We will rarely say f is in C(X, Y), introducing
|               instead the following two synonymous notations:
|
|               f : X -> Y
|
|                   f
|               X -----> Y
|
|               Here X is called the 'domain' of f
|               and  Y is called the 'codomain' of f.
|               Axiom (a) guarantees that this definition
|               makes sense, that is, there will never be
|               any ambiguity concerning the domain or the
|               codomain of a morphism.
|
|     Datum 3.  A composition operator 'o' assigning to each ordered pair
|               of morphisms (f, g) of form f : X -> Y, g : Y -> Z (i.e.,
|               the codomain of f coincides with the domain of g) a third
|               morphism g o f : X -> Z whose domain is that of f and whose
|               codomain is that of g.
|
|     Axiom b.  Composition is associative, that is, given
|
|               f : X -> Y,
|
|               g : Y -> Z,
|
|               h : Z -> W,
|
|               we have that
|
|               (h o g) o f  =  h o (g o f) : X -> W.
|
|     Axiom c.  For each object X there exists an
|               'identity' morphism id_X : X -> X with
|               domain and codomain X and with the property
|
|               that for each morphism f : Y -> X,
|               
|               id_X o f = f,
|
|               and  for each morphism g : X -> Z,
|
|               g o id_X = g.
|
| This completes the definition.  We observe at once
| that the id_X of axiom (c) is unique.  For suppose
| also that u : X -> X satisfies u o f = f for all
| f : Y -> X and g o u = g for all g : X -> Z.
| Regarding id_X as g for u, id_X o u = id_X.
| Regarding u as f for id_X, id_X o u = u.
| Thus u = id_X.  Hence id_X is well named
| as 'the' identity morphism of X.
|
| As is usual for mathematical structures generally,
| a host of alternate notations may prove useful.  Thus,
| composition might be denoted g * f instead of g o f for
| some categories.  Since composition is the basic operation
| of category theory we shall most often write composition with
| no symbol at all, as gf.  We shall almost always stick to id_X
| for the identity morphism of X.  Even in our first examples,
| different categories may share the same objects and even
| the same morphisms.  In such situations different arrows
| such as f : X -» Y may be used and alternate notation
| for composition may be essential.
|
| Manes & Arbib, AAPS, pages 39-40.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
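
Definition 1 is small enough to check mechanically for a finite category.
The sketch below (data layout and names are mine) encodes a two-object category
with explicitly listed morphisms, identities, and composites, and verifies
axioms (b) and (c) by brute force; axiom (a) holds because every morphism
carries its own domain and codomain.

    # A finite category given by explicit data, with a brute-force axiom check.
    from itertools import product

    objects = {"X", "Y"}
    # Each morphism is (name, domain, codomain), so the hom-sets are disjoint.
    morphisms = {("id_X", "X", "X"), ("id_Y", "Y", "Y"), ("f", "X", "Y")}
    identity = {"X": ("id_X", "X", "X"), "Y": ("id_Y", "Y", "Y")}

    def compose(g, f):
        """g o f, defined only when cod(f) = dom(g)."""
        assert f[2] == g[1], "codomain of f must match domain of g"
        if f[0].startswith("id_"):
            return g
        if g[0].startswith("id_"):
            return f
        raise ValueError("no composite listed")   # cannot arise in this tiny example

    # Axiom (c): identities are left and right units.
    for m in morphisms:
        assert compose(identity[m[2]], m) == m and compose(m, identity[m[1]]) == m

    # Axiom (b): associativity for every composable triple.
    for f, g, h in product(morphisms, repeat=3):
        if f[2] == g[1] and g[2] == h[1]:
            assert compose(h, compose(g, f)) == compose(compose(h, g), f)

    print("axioms (b) and (c) hold for this two-object category")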

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 33

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.1.  The Definition of a Category (cont.)
|
| 2.  Example.  'Set', the category of 'sets and total functions'.
|     Here objects are sets, a morphism f : X -> Y is a total function
|     from X to Y, composition is the usual one, (gf)(x) = g(f(x)), and
|     id_X (x) = x.
|
| 3.  Example.  'Pfn', the category of 'sets and partial functions'.
|     Here objects are sets, but a morphism f : X -> Y is a partial function
|     from X to Y.  Composition is as in 1.3.7.  The identity (total) function
|     still provides id_X.  Notice that Pfn(X, Y) in the sense of definition (1)
|     is exactly Pfn(X, Y) as in 1.3.4.
|
| 4.  Example.  'Mfn', the category of 'sets and multivalued functions'.
|     Here objects are sets, Mfn(X, Y) is as in 1.4.3 with composition
|     given by 1.4.4, and id_X (x) = {x}.
|
| 5.  Example.  'ANMfn', the category of sets and multivalued functions
|     with "all or nothing" composition.  In this example, objects are
|     sets and ANMfn(X, Y) = Mfn(X, Y) but composition gf : X -> Z for
|     f : X -> Y and g : Y -> Z is defined by:
|
|               (  Ø  if g(y) = Ø for some y in f(x),
|     gf(x)  =  <
|               (  {z in Z : there exists y in f(x) with z in g(y)} else.
|
| This is "all or nothing" in the sense that scenario 1.4.5 has been
| modified so that no output is defined if 'any' computation fails to
| terminate.  The identity morphism id_X is the same as in Mfn.  Thus,
| the only difference between ANMfn and Mfn is composition.
|
| Examples (2-5) are categories.  For all but ANMfn, axiom (b)
| has been established in Section 1.4, (we leave the modification
| of properties 1.4.6 to ANMfn as an exercise).  Axiom (c) is routine.
| Axiom (a) holds by definition -- we consider the domain and codomain
| as part of the definition of a function.  In the student's likely
| first encounter with functions, elementary calculus, axiom (a) is
| not made explicit.  Formulas such as x^2 are confused with functions
| and one speaks one moment of "x^2 for -1 =< x =< 10" and the next
| moment of "x^2 for 2 =< x =< 3".  According to our conventions
| these are different functions.  This is reasonable since these
| functions have different properties -- for example, the second
| is monotone increasing where the first is not.
|
| We again avoid a formal proof that repeated use of the associative law
| axiom (b) establishes that all n-fold compositions are equal regardless of
| parenthesization, and so can be written without parentheses as f_n ... f_1.
| The commutative diagram designation such as 1.5.4 is useful in any category.
|
| Thus, in the diagram:
|
|          A   g   B
|           o---->o
|          ^       \
|       f /         \ h
|        /           v
|     X o             o Y
|        \           ^
|         \         /
|          \       /
|         a \     / b
|            \   /
|             v /
|              o
|              D
|
| we understand that "ba = hgf" is asserted
| and we may emphasize this assertion by
| saying "the diagram commutes".
|
| When one regards a category as "the semantic category"
| generalizing 1.5.1, with (3), (4), (5) being examples,
| the flowscheme notation of 1.5.2 -- clearly a workable
| synonym for f : X -> Y in any category -- is useful.
| In practice, however, many other types of category
| arise.  Experience dictates that virtually any class
| of structures can be made the objects of a category
| in a "natural" way.  Some of the possibilities are
| explored in the exercises.  We turn now to examples
| of categories that are useful in this book but not
| necessarily as "semantic" categories.
|
| Manes & Arbib, AAPS, pages 40-41.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
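
The only difference between Mfn and ANMfn (Examples 4 and 5) is composition,
which is easy to exhibit concretely.  In the sketch below a multifunction is
a dict sending each input to a set of outputs; the function names are mine.

    # Multifunction composition: ordinary (Mfn) versus "all or nothing" (ANMfn).

    def mfn_compose(g, f):
        """(gf)(x) = the union of g(y) over y in f(x)."""
        return {x: set().union(*(g[y] for y in ys)) for x, ys in f.items()}

    def anmfn_compose(g, f):
        """Same, except gf(x) = Ø as soon as some y in f(x) has g(y) = Ø."""
        return {x: set() if any(not g[y] for y in ys)
                else set().union(*(g[y] for y in ys))
                for x, ys in f.items()}

    f = {1: {"a", "b"}, 2: {"a"}}       # f : X -> Y, multivalued
    g = {"a": {10}, "b": set()}         # g : Y -> Z; g "fails to terminate" on "b"

    print(mfn_compose(g, f))     # {1: {10}, 2: {10}}   the failing branch is ignored
    print(anmfn_compose(g, f))   # {1: set(), 2: {10}}  any failure kills the output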

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 34

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.1.  The Definition of a Category (cont.)
|
| 6.  Definitions.  A 'partially ordered set', or 'poset' for short,
|     is a pair (P, =<) where P is a set and =< is a binary relation
|     on P which is a 'partial order' on P.  This is defined to mean
|     that the following three axioms hold for all x, y, z in P.
|
|     'Reflexivity'.   x =< x.
|
|     'Transitivity'.  If x =< y and y =< z, then x =< z.
|
|     'Antisymmetry'.  If x =< y and y =< x, then x = y.
|
| We emphasize that the symbol =< has no a priori meaning.
| 'Any' relation satisfying the three axioms is a partial order,
| and many different partial orders may be of interest on one set.
|
| While other symbols could be used, for example,
| xRy instead of x =< y, the =< symbol gives rise
| to the following associated definitions.  In a
| poset (P, =<) we say that:
|
|     x  <  y    if x =< y but x =/= y,
|
|     x  >= y    if y =< x,
|
|     x  >  y    if y  < x,
|
|     x =/< y    if it is false that x =< y,
|
|                warning:  not equivalent to x > y,
|                e.g., see the Hasse diagram below.
|
| x /< y,  x /> y,  x >/= y are defined similarly.  It is not
| so clear how to obtain similar conventions with the symbol R.
|
| A useful device for drawing finite posets galore
| is the 'Hasse diagram', an example of which is:
|
|     d        e
|      o       o
|       \     / \
|        \   /   \
|         \ /     \
|        b o       o c
|           \     /
|            \   /
|             \ /
|              o
|              a
|
| Here P is the set of nodes (= o's);  P = {a, b, c, d, e} in this example.
| The partial order is defined by x =< y if and only if x = y or x is below y
| and there exists an upward path from x to y.  It is easy to see that (P, =<)
| is always a poset.  In the above example, a =< b, a =< d, while b and c are
| 'incomparable' because b =/< c and c =/< b.
|
| A 'totally ordered' set is a partially ordered set (P, =<) in which every two
| elements are comparable -- given x, y at least one of x =< y or y =< x holds.
| The term 'partially' ordered set refers to the possibility that incomparable
| pairs may exist.
|
| Posets are fundamental structures arising frequently in
| mathematics and theoretical computer science.  They play
| several roles in this book.  Here are some examples of
| posets:
|
| 7.  Example.  If N = {0, 1, 2, ...} is the set of natural numbers and
|     =< has its usual meaning, then (N, =<) is a totally ordered set.
|
| 8.  Example.  If Y is any set and !P!(Y) is the set of subsets of Y,
|     then (!P!(Y), c) is a poset where A c B is the subset inclusion.
|     Notice that we may have:
|
|     o-----------------------------o
|     |              Y              |
|     |      o-----o   o-----o      |
|     |     /       \ /       \     |
|     |    /         o         \    |
|     |   /         / \         \   |
|     |  o         o   o         o  |
|     |  |    A    |   |    B    |  |
|     |  |         |   |         |  |
|     |  o         o   o         o  |
|     |   \         \ /         /   |
|     |    \         o         /    |
|     |     \       / \       /     |
|     |      o-----o   o-----o      |
|     |                             |
|     o-----------------------------o
|
| that is, A, B in !P!(Y) but neither A c B nor B c A holds.  Thus,
| if Y has two or more elements, (!P!(Y), c) is not totally ordered.
|
| 9.  Example.  For any two sets X, Y and f, g in Pfn(X, Y) define
|     f =< g to mean 'g extends f', that is, "for x in X where f(x)
|     is defined, there g(x) is also defined and then g(x) = f(x)".
|     Then (Pfn(X, Y), =<) is a poset which is not totally ordered.
|     This example is important in Section 5.1.
|
| Manes & Arbib, AAPS, pages 41-42.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
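
The Hasse diagram of Definition 6 determines =< as the reflexive and transitive
closure of the covering relation "x is directly below y".  The sketch below
(names mine) rebuilds the order for the five-element example and rechecks the
three poset axioms and the incomparability of b and c.

    # Rebuilding the partial order of the example Hasse diagram and checking the axioms.

    P = {"a", "b", "c", "d", "e"}
    covers = {("a", "b"), ("a", "c"), ("b", "d"), ("b", "e"), ("c", "e")}

    # x =< y  iff  x = y or there is an upward path from x to y.
    leq = {(x, x) for x in P} | set(covers)
    changed = True
    while changed:                      # transitive closure by iteration
        new = {(x, z) for (x, y) in leq for (y2, z) in leq if y == y2}
        changed = not new <= leq
        leq |= new

    assert all((x, x) in leq for x in P)                                       # reflexivity
    assert all((x, z) in leq for (x, y) in leq for (y2, z) in leq if y == y2)  # transitivity
    assert all(x == y for (x, y) in leq if (y, x) in leq)                      # antisymmetry
    assert ("b", "c") not in leq and ("c", "b") not in leq                     # b, c incomparable
    print(sorted(leq))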

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 35

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.1.  The Definition of a Category (cont.)
|
| Partially ordered sets form a category.
|
| 10.  Example.  Define 'Poset' to be the category whose objects are posets
|      and with Poset((P, =<), (P', =<')) the set of all total functions
|      f : P -> P' which are 'monotone' in the sense that:
|
|      if  x_1 =< x_2  then  f(x_1) =<' f(x_2).
|
|      Composition and identity morphisms are as in Set.
|
| The reader should check that Poset does then satisfy the category axioms.
|
| We next introduce another important mathematical structure.
|
| 11.  Definition.  A 'monoid' is a triple (M, o, e),
|      where M is a set, o : M x M -> M is a function,
|      and e is in M, all subject to the axioms:
|
|      o is 'associative'.  (x o y) o z  =  x o (y o z)  for all x, y, z in M.
|
|      e is the 'identity'.  e o x  =  x o e  =  x  for all x in M.
|
| As for categories, the composition
| of x_1, ..., x_n is written without
| parentheses as  x_1 o ... o x_n.
|
| 12.  Example.  For any category C and any object X in ob(C),
|      the set C(X, X) of all morphisms of X to itself forms
|      a monoid under composition, with identity id_X.
|
| 13.  Example.  An example of a monoid familiar from
|      formal language theory is (X*, conc, !e!), where
|      X* is the set of all finite strings (x_1, ..., x_m),
|      m >= 0, with each x_i in the given "alphabet" X.
|      Here 'conc' is the operation of 'concatenation':
|
|      conc((x_1, ..., x_m), (y_1, ..., y_n))
|
|      =  (x_1, ..., x_m, y_1, ..., y_n),
|
|      and !e! = () is the 'empty string',
|      namely, (x_1, ..., x_m) with m = 0.
|
| 14.  Example.  The category 'Mon' has monoids as objects and
|      monoid homomorphisms as morphisms.  Here, given two monoids
|      (M, o, e) and (M', *, e'), we say that a function f : M -> M'
|      is a 'monoid homomorphism' f : (M, o, e) -> (M', *, e') if and
|      only if f(e) = e' and f(x o y) = f(x) * f(y) for all x, y in M.
|      We define composition and identity as for functions.  The reader
|      should check that Mon does indeed satisfy the category axioms.
|
| Manes & Arbib, AAPS, page 43.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
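
Example 13 (the free monoid X* under concatenation) and Definition 14 (monoid
homomorphism) can be exercised directly:  the length map from X* to the monoid
(N, +, 0) respects the unit and the operation.  The sketch below models strings
as Python tuples; the names are mine.

    # The free monoid (X*, conc, e) and a monoid homomorphism into (N, +, 0).
    from itertools import product

    def conc(u, v):
        """Concatenation of two strings (here: tuples) over the alphabet X."""
        return u + v

    empty = ()                          # the empty string !e!

    def length(u):
        """Candidate homomorphism (X*, conc, ()) -> (N, +, 0)."""
        return len(u)

    X = ("p", "q")
    strings = [w for n in range(3) for w in product(X, repeat=n)]

    # Monoid axioms for (X*, conc, empty), checked on the short strings above.
    assert all(conc(conc(u, v), w) == conc(u, conc(v, w))
               for u in strings for v in strings for w in strings)
    assert all(conc(empty, u) == u == conc(u, empty) for u in strings)

    # Homomorphism conditions: length(e) = 0 and length(uv) = length(u) + length(v).
    assert length(empty) == 0
    assert all(length(conc(u, v)) == length(u) + length(v) for u in strings for v in strings)
    print("length is a monoid homomorphism on the strings tested")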

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 36

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.1.  The Definition of a Category (cont.)
|
| 15.  Definition.  Let C be any category and let !D! be any
|      subclass of ob(C).  Define a category D by ob(D) = !D!
|      and by letting D(X, Y) = C(X, Y) for each X, Y in !D!,
|      with composition and identities being the same as in C.
|      A routine check shows that D is a category.  We call it
|      the 'full subcategory' induced by !D!.  The "full" refers
|      to the fact that 'all' C-morphisms between objects in !D!
|      have been retained.
|
| Since no restrictions have been imposed on !D!,
| full subcategories give rise to a rich supply
| of new categories.  Even more generally:
|
| 16.  Definition.  Let C be a category.  A 'subcategory' D of C is
|      given by a subclass ob(D) of ob(C) and, for each X, Y in ob(D),
|      a subset D(X, Y) of C(X, Y) subject to the axioms that id_X is
|      in D(X, X) and, whenever f is in D(X, Y) and g is in D(Y, Z),
|      then gf in C(X, Z) is in fact in D(X, Z).
|
| It is obvious that such D, with the composition inherited
| from C, satisfies axioms (a, b, c) of the definition of
| a category.  Thus, a subcategory is a category in its
| own right.
|
| Clearly, a subcategory D of C is a 'full' subcategory
| if and only if D(X, Y) = C(X, Y) for all X, Y in ob(D).
|
| 17.  Example.  Set is a (nonfull) subcategory of Pfn
|      since ob(Set) = ob(Pfn), Set(X, Y) c Pfn(X, Y),
|      id_X in Pfn(X, X) is the total identity function,
|      and if f, g are composable total functions then
|      their composition gf as partial functions is
|      their composition as total functions.
|
| Manes & Arbib, AAPS, pages 43-44.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
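
Example 17 says Set sits inside Pfn as a nonfull subcategory; the closure
condition of Definition 16 amounts to the fact that the Pfn-composite of two
total functions is again total.  A quick check in the dictionary model
(names mine):

    # Totality is preserved by Pfn composition (the closure condition of Definition 16).

    def compose(g, f):
        return {x: g[y] for x, y in f.items() if y in g}

    def is_total(f, X):
        return set(f.keys()) == set(X)

    X, Y = {0, 1, 2}, {"a", "b"}
    f = {0: "a", 1: "b", 2: "a"}        # total f : X -> Y
    g = {"a": 10, "b": 30}              # total g : Y -> Z
    assert is_total(f, X) and is_total(g, Y) and is_total(compose(g, f), X)
    print(compose(g, f))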

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 37

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.2.  Isomorphism, Duality, and Zero Objects
|
| This section introduces the fundamental equivalence relation of category theory,
| isomorphism.  Also discussed are duality, initial and terminal objects, as well
| as zero objects, which are simultaneously initial and terminal.  The Cartesian
| product of two sets, the set of words on a given alphabet, and the principle
| of simple recursion are all manifestations of initial or terminal objects.
|
| Isomorphisms
|
| All constructions in a category must ultimately be described entirely
| in the language of objects, morphisms, composition, and identities.
| Our first definition in this language is that of "isomorphism".
|
| 1.  Definition.  A morphism f : X -> Y in a category C
|     is an 'isomorphism' if there exists g : Y -> X
|     with gf = id_X and fg = id_Y, or, in terms of
|     a commutative diagram:
|
|             f 
|     X o---------->o Y
|        \          |\
|         \         | \
|          \        |  \
|           \       |   \
|            \    g |    \
|        id_X \     |     \ id_Y
|              \    |      \
|               \   |       \
|                \  |        \
|                 \ |         \
|                  vv          v
|                 X o---------->o Y
|                         f
|
| Such g, 'if it exists', is unique, since if also hf = id_X and fh = id_Y,
| then g = g id_Y = g(fh) = (gf)h = id_X h = h.  This proof uses the full
| force of axioms (b) and (c) in the definition of a category.  Such g,
| then, is called the 'inverse' of f and is written f^-1.
|
| 2.  Example.  In Set, f : X -> Y is an isomorphism if and only if
|     f is 'bijective', that is, f is one-one and onto.  To see this,
|     first suppose that f is an isomorphism.  If f(x) = f(x') then:
|
|     x  =  (f^-1)(f(x))  =  (f^-1)(f(x'))  =  x',
|
|     which proves that f is one-one (injective).
|
|     If y is any element of Y, then y = f((f^-1)(y)),
|     so f is onto (surjective).
|
|     Conversely, let f be injective and surjective.  If y is any element
|     of Y there exists a unique element of X, call it g(y), which f maps to y.
|     Thus, f(g(y)) = y.  Since, in particular, f(g(f(x))) = f(x) and
|     f is injective, g(f(x)) = x.
|
| 3.  Example.  In Pfn, f : X -> Y is an isomorphism if and only if
|     f is a total function which is bijective.  By the first example
|     it is obvious that a bijective total function is an isomorphism.
|     Conversely, let f : X -> Y be an isomorphism.  Then (f^-1)f = id_X,
|     so that X = DD(id_X) = DD((f^-1)f) c DD(f), which implies that f is
|     a total function.  Similarly f(f^-1) = id_Y implies that f^-1 is total,
|     so f is an isomorphism in Set.
|
| 4.  Example.  In Mfn, f : X -> Y is an isomorphism if and only if f is a
|     total function which is bijective.  As in Example 3, one way is clear.
|     Conversely, let f : X -> Y be an isomorphism.
|
|     First observe that,
|
|     if y in f(x), then x in (f^-1)(y),
|
|     since f(f^-1)(y) = {y} implies that (f^-1)(y) =/= Ø,
|
|     and,
|
|     if x' in (f^-1)(y), then x' in (f^-1)f(x) = {x}, so that x' = x.
|
|     Then,
|
|     if y, y' in f(x), then x in (f^-1)(y) and y' in f(x),
|
|     so y' in f(f^-1)(y) = {y}, and so y = y'.
|
|     This proves that f is a partial function.
|     Symmetrically, f^-1 is a partial function.
|     Now use the preceding example.
|
| Manes & Arbib, AAPS, pages 46-47.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
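
Example 3 identifies the Pfn-isomorphisms with the bijective total functions;
in the dictionary model one can build the candidate inverse and check both
composites against the identities directly.  A small sketch (names mine):

    # Checking that the Pfn-isomorphisms are the total bijections, in the dict model.

    def compose(g, f):
        return {x: g[y] for x, y in f.items() if y in g}

    def identity(X):
        return {x: x for x in X}

    def is_isomorphism(f, X, Y):
        """f : X -> Y is an isomorphism iff some g : Y -> X has gf = id_X and fg = id_Y."""
        if set(f.keys()) != X or set(f.values()) != Y or len(set(f.values())) != len(f):
            return False                        # not total, not onto, or not one-one
        g = {y: x for x, y in f.items()}        # candidate inverse f^-1
        return compose(g, f) == identity(X) and compose(f, g) == identity(Y)

    X, Y = {1, 2, 3}, {"a", "b", "c"}
    assert is_isomorphism({1: "a", 2: "b", 3: "c"}, X, Y)       # bijective and total
    assert not is_isomorphism({1: "a", 2: "b"}, X, Y)           # merely partial
    assert not is_isomorphism({1: "a", 2: "a", 3: "c"}, X, Y)   # not one-one
    print("Pfn isomorphisms = total bijections, on these examples")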

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 38

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.2.  Isomorphism, Duality, and Zero Objects (cont.)
|
| 5.  Definition.  Two objects X, Y in a category C are 'isomorphic' if
|     there exists an isomorphism f : X -> Y.  This is written X ~=~ Y.
|
| 6.  Observation.  Isomorphism is an equivalence relation on ob(C).
|
| Proof.  id_X : X -> X is an isomorphism with (id_X)^-1 = id_X,
| so that X ~=~ X and isomorphism is 'reflexive'.  If f : X -> Y
| is an isomorphism, so is f^-1 : Y -> X, so that isomorphism is
| 'symmetric'.  To see that 'transitivity' holds, if f : X -> Y
| and g : Y -> Z are isomorphisms, then:
|
| (g f)(f^-1 g^-1)  =  g (f f^-1) g^-1  =  g g^-1  =  id_Z
|
| and  (f^-1 g^-1)(g f)  =  id_X   similarly
|
| so that gf is an isomorphism (and (gf)^-1  =  f^-1 g^-1).  þ
|
| As a rule, definitions and constructions in category theory
| (beginning with (7) below;  see Theorem 8) are not unique
| but are "unique up to isomorphism".  Thus, a major aspect
| of the philosophy of category theory is that "isomorphism"
| formalizes "abstractly the same".
|
| Each theorem of category theory has a "dual theorem" whose
| proof is an automatic consequence of the original, obtained
| by "reversing the arrows".  Before giving the general notion
| of duality, we explore the motivating duality of initial and
| terminal objects.
|
| 7.  Definition.  An object A in a category C is 'initial' if
|     for every object X there exists exactly one morphism from
|     A to X.  We denote this unique morphism by ! : A -> X.
|
| The next result, simple as its proof may be, is one of the most
| fundamental in category theory, because it turns out that many
| important constructs can be shown to be equivalent to initial
| objects in suitable categories.
|
| 8.  Theorem.  If A and B are both initial objects in a category C
|     then ! : A -> B is an isomorphism.  Thus, if C has an initial
|     object it is unique up to a unique isomorphism.
|
| Proof.  As any two morphisms from A to A are equal,
| similarly B to B, the following diagram commutes:
|
|             ! 
|     A o---------->o B
|        \          |\
|         \         | \
|          \        |  \
|           \       |   \
|            \    ! |    \
|        id_A \     |     \ id_B
|              \    |      \
|               \   |       \
|                \  |        \
|                 \ |         \
|                  vv          v
|                 A o---------->o B
|                         !
|
| 9.  Example.  The empty set Ø is initial in Pfn with ! : Ø -> X
|     being the totally undefined function.  Since this function is
|     total (else it would be undefined on some element of Ø) we have
|     that Ø is also initial in Set.  Thus, ! : Ø -> X is not only the
|     unique partial function but also the unique multifunction (by 1.4.3)
|     so Ø is again the initial object of Mfn and ANMfn.
|
| Manes & Arbib, AAPS, page 48.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 39

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.2.  Isomorphism, Duality, and Zero Objects (cont.)
|
| For each construction defined in a general category C,
| the 'dual construction' is the construction obtained
| by "reversing all arrows".  An intitial object is
| one admitting unique morphisms from itself, so
| the dual concept should be an object admitting
| unique morphisms to itself, and such is aptly
| called a 'terminal object'.
|
| 10.  Definition.  An object A in a category C is 'terminal'
|      if for each object X of C there exists exactly one
|      C-morphism from X to A.  The unique C-morphism
|      from X to A will be denoted ¡ : X -> A.
|
| As another exercise in the language of duality, consider the notion of
| an isomorphism.  We saw that f : X -> Y is an isomorphism just in case
| there is a map g : Y -> X such that the following diagram commutes:
|
|             g 
|      Y o---------->o X
|         \          |\
|          \         | \
|           \        |  \
|            \       |   \
|             \    f |    \
|         id_Y \     |     \ id_X
|               \    |      \
|                \   |       \
|                 \  |        \
|                  \ |         \
|                   vv          v
|                  Y o---------->o X
|                          g
|
| If we reverse all of the arrows, we say that f : Y -> X
| is "the dual of an isomorphism" just in case there is a
| map g : X -> Y such that the following diagram commutes:
|
|             g 
|      Y o<----------o X
|         ^          ^^
|          \         | \
|           \        |  \
|            \       |   \
|             \    f |    \
|         id_Y \     |     \ id_X
|               \    |      \
|                \   |       \
|                 \  |        \
|                  \ |         \
|                   \|          \
|                  Y o<----------o X
|                          g
|
| But this just says that f is an isomorphism,
| so the concept of isomorphism is self-dual.
| With this observation, the dual of Theorem 8
| is the following:
|
| 11.  Theorem.  If A and B are both terminal objects in
|      a category C, then ¡ : A -> B is an isomorphism.
|
| Proof.  Once the concept of duality is understood,
| no proof is needed -- just reverse all arrows in
| the proof of Theorem 8.  þ
|
| Manes & Arbib, AAPS, page 49.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 40

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.2.  Isomorphism, Duality, and Zero Objects (cont.)
|
| We are now going to place the concept of duality on a more formal footing,
| by regarding a diagram in this category C with all its arrows reversed as
| being identical to a diagram in the "opposite category" C^op.  Here the
| abstraction of our general definition of a category begins to show its
| power.  A function f : X -> Y from a set X to a set Y is certainly not
| to be considered as a function from Y to X, but there is nothing to
| prevent us from using the "arrow-reversed notation" f : Y -< X for
| f (using a distinctive new arrowhead) and calling this a morphism
| 'from' Y 'to' X in the new category Set^op.  Here is the general
| definition.
|
| 12.  Definition.  Let C be a category.
|      The 'dual' or 'opposite category'
|      of C is the category C^op
|      defined as follows:
|
|      ob(C^op)    =  ob(C),
|
|      C^op(X, Y)  =  C(Y, X).
|
| Taking C as the "primary" category whose arrows we write
| in the normal way f : X -> Y, we write the same morphism
| in C^op as f : Y -< X.
|
| If f is in C^op(X, Y) and g is in C^op(Y, Z), their composition g * f in C^op(X, Z)
| is obtained by forming, in C, the composition f o g of g in C(Z, Y) and
| f in C(Y, X).
|
|         f      g            g * f            f o g
|      X ---< Y ---< Z  =  X -------< Z  =  X -------< Z  in  C^op,
|
| where
|
|         g      f            f o g
|      Z ---> Y ---> X  =  Z -------> X  in  C.
|
| Axioms (a, b, c) for C^op follow easily from their correspondents in C.
| The identity morphisms of C^op coincide with those of C.  Moreover,
| rephrasing our earlier observation that isomorphism is self-dual,
| f in C(X, Y) is an isomorphism in C if and only if the same f
| considered in C^op is an isomorphism in C^op.
|
| Proof.  The following diagrams (in which equally labeled commutative diagrams
| are equal statements in their respective categories) establish the assertion
| about isomorphisms:
|
|         f                               f
| X o---------->o Y               X o>----------o Y
|    \          |\                   v          vv
|     \   (A)   | \                   \   (B)   | \
|      \        |  \                   \        |  \
|       \       |   \                   \       |   \
|        \    g |    \                   \    g |    \
|    id_X \     |     \ id_Y         id_X \     |     \ id_Y
|          \    |      \                   \    |      \
|           \   |       \                   \   |       \
|            \  |        \                   \  |        \
|             \ |   (B)   \                   \ |   (A)   \
|              vv          v                   \|          \
|             X o---------->o Y               X o>----------o Y
|                     f                               f
|
| Clearly, C = (C^op)^op.  There is nothing special about being of the form C^op.
| C^op ranges over all categories as C does.  When C is a "concrete" category such
| as Set there is no guarantee that C^op will likewise have such a representation.
| Set^op is "more abstract" than Set.
|
| Manes & Arbib, AAPS, pages 49-50.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 41

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.2.  Isomorphism, Duality, and Zero Objects (cont.)
|
| 13.  Definition.  Given a construction A in C,
|      the 'dual construction' is that obtained by
|      performing the construction in C^op, and then
|      interpreting the construction in C.  We will
|      often refer to the dual of the A construct
|      as the co-A construct.
|
| Thus, isomorphism = co-isomorphism, co-initial = terminal,
| and co-terminal = initial.  With this, let us use C^op in
| spelling out a full proof of Theorem 11.  By definition,
| A and B are initial objects in C^op.  By Theorem 8,
| ! : B -< A is an isomorphism in C^op.  But we have
| already shown that f : B -< A is an isomorphism in
| C^op if and only if f : A -> B is an isomorphism
| in C.  Thus, the unique morphism A -> B (which
| we choose to call ¡ rather than !) is an
| isomorphism in C.
|
| 14.  Example.  In Set, a terminal object is a one-element set.  Hence, Set has
|      many different terminal objects but all are isomorphic (as they must be
|      by Theorem 11).  Thus, while the "abstract theory of initial objects"
|      and the "abstract theory of terminal objects" should be regarded as
|      the same (whatever we can state and prove about initial objects in
|      C is automatically stated and proved, dually, for terminal objects
|      in C^op, and C^op ranges over all categories as C does), in a
|      particular example such as Set initial objects and terminal
|      objects behave differently.
|
| 15.  Example.  Let D be the full subcategory of Set whose objects are
|      sets with two or more elements.  While the initial object Ø of
|      Set is not in D this does not in itself prove that D has no
|      initial object (see Exercise 2.d).  In fact, if A is any
|      object of D there are at least two morphisms A -> A.
|      This proves that no object of D is either initial
|      or terminal.
|
| Manes & Arbib, AAPS, pages 50-51.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 42

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.2.  Isomorphism, Duality, and Zero Objects (cont.)
|
| Before introducing zero objects we consider the more general concept of
| zero morphisms which abstract from "totally undefined" morphisms in Pfn.
|
| 16.  Definition.  Let C be any category, and let 0_XY
|      in C(X, Y) be given for each X and Y.  Say that
|      (0_XY) is a 'family of zero morphisms' if for
|      every f : W -> X and g : Y -> Z we have:
|
|                 0_WZ
|      W o------------------->o Z
|        |                    |
|        |                    |
|        |                    |
|      f |                    | g
|        |                    |
|        |                    |
|        v                    v
|      X o------------------->o Y
|                 0_XY
|
| On taking f or g equal to the identity, we see that this amounts to
| saying that "any composition which has a zero factor is itself zero".
| Set does not have a family of zero morphisms because Set(X, Ø) is empty
| whenever X =/= Ø.  When a family of zero morphisms does exist, however,
| it is unique since if (0_XY), (Z_XY) are both families of zero morphisms:
|
|      Z_XY  =  id_Y Z_XY (0_XX id_X)  =  (id_Y Z_XY) 0_XX id_X  =  0_XY.
|
| We often write 0 : X -> Y for 0_XY : X -> Y if no confusion would arise.
|
| 17.  Example.  In Pfn, Mfn, and ANMfn,
|      the totally undefined functions:
|
|                  ¡      !
|      0_XY  =  X ---> Ø ---> Y
|
|      yield a family of zero morphisms.
|
| This example motivates the next definition and proposition.
|
| 18.  Definition.  A 'zero object' in a category C
|      is an object that is both initial and terminal.
|      We denote a zero object by 0, the same symbol
|      as for an initial object.  Though arbitrary,
|      the convention is standard.
|
| 19.  Proposition.  A category with a zero object has zero morphisms.
|      In a category with zero morphisms, each initial object is also
|      a zero object and each terminal object is also a zero object.
|
| Proof.  For the first statement,
| let 0 be a zero object and define:
|
|                  ¡      !
|      0_XY  =  X ---> 0 ---> Y.
|
| Then the following diagram commutes:
|
|                  ¡   0  !
|             W o----->o----->o Z
|               |     ^ \     |
|               |    /   \    |
|             f |   /     \   | g
|               |  / ¡   ! \  |
|               | /         \ |
|               v/           vv
|             X o             o Y
|
| so (0_XY) is a family of zero morphisms.
|
| For the second statement, let zero morphisms exist
| and let 0 be an initial object.  There exists at
| least one morphism X -> 0, namely, 0.  As 0 is
| initial, 0 : 0 -> 0 is id_0 : 0 -> 0, so if
| f : X -> 0 is arbitrary, we have:
|
|      f  =  id_0 f  =  0f  =  0.
|
| This shows that 0 is terminal.
| That a terminal object is initial
| is simply the dual statement.  þ
|
| 20.  Example.  Pfn, Mfn, and ANMfn have Ø as zero object.
|      The construction in Example 17 follows the proof of
|      Theorem 19.
|
| In Pfn, we have said f : X -> Y is total iff DD(f) = X, that is,
| f(x) is defined for every x in X.  In the same spirit, say that
| f : X -> Y in Mfn or ANMfn is total if f(x) =/= Ø for all x in X.
| It is easy to prove (work Exercise 7!) that these definitions are
| unified by the following abstract one.
|
| 21.  Definition.  Let C be a category with zero morphisms.
|      Say that f : X -> Y is 'total' if, whenever t : T -> X,
|      we have that t =/= 0 implies ft =/= 0.
|
| 22.  Proposition.  Let C be a category
|      with zero morphisms and let
|      f : X -> Y, g : Y -> Z.
|      Then:
|
|      1.  If f, g are total, so is gf : X -> Z.
|
|      2.  If gf is total, so is f.
|
| Proof.
|
|      1.  If t =/= 0 then ft =/= 0 so g(ft) = (gf)t =/= 0.
|
|      2.  If t =/= 0 then (gf)t =/= 0 so g(ft) =/= 0.
|          Since g0 = 0, ft =/= 0.
|
| þ
|
| Manes & Arbib, AAPS, pages 51-52.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
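
In the dictionary model of Pfn the zero morphism 0_XY of Example 17 is just the
empty dict, and the abstract totality test of Definition 21 can be compared with
the concrete one (DD(f) = X) on small examples.  Testing against one-point
morphisms t : {*} -> X is only a spot check, and all names below are mine.

    # Zero morphisms and totality in the dict model of Pfn (Example 17, Definition 21).

    def compose(g, f):
        return {x: g[y] for x, y in f.items() if y in g}

    ZERO = {}                           # the totally undefined function 0_XY

    def is_total_concrete(f, X):
        """The Pfn definition: DD(f) = X."""
        return set(f.keys()) == set(X)

    def is_total_abstract(f, X):
        """Definition 21, spot-checked with one-point test morphisms t : {*} -> X."""
        tests = [{"*": x} for x in X]   # each such t is nonzero
        return all(compose(f, t) != ZERO for t in tests)

    X = {0, 1, 2}
    f_total = {0: "a", 1: "b", 2: "a"}
    f_partial = {0: "a", 2: "a"}

    # Any composite with a zero factor is itself zero (the family property of (16)).
    assert compose(f_total, ZERO) == ZERO and compose(ZERO, f_total) == ZERO

    for f in (f_total, f_partial):
        assert is_total_concrete(f, X) == is_total_abstract(f, X)
    print("abstract and concrete totality agree on these examples")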

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 43

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.2.  Isomorphism, Duality, and Zero Objects (cont.)
|
| Simple Recursion
|
| We conclude this section by showing how sequences inductively
| defined by simple recursion are the unique morphism ! from
| the initial object in an appropriate category.  This is
| a foretaste of the principle that 'constructions in
| a category which are unique up to isomorphism are
| instantiations of initial objects'.
|
| By a 'sequence' in a set X we mean a function N -> X.
| We may define the sequence g : N -> N where n ~> 2^n
| inductively by the definition:
|
|      g(0)    =  1,
|
|      g(n+1)  =  2 * g(n),  for n >= 0.
|
| This is a specific case of the following general notion.
|
| 23.  Definition.  We say that the sequence g : N -> X is defined from
|      x_0 in X and f : X -> X by 'simple recursion' if g satisfies the
|      recursive definition:
|
|      Basis Step.      g(0)    =  x_0,
|
|      Induction Step.  g(n+1)  =  f(g(n)),  for n in N.
|
| In the above example, x_0 = 1 and f(x) = 2 * x.  It is clear that g
| is defined uniquely by the above scheme.  We shall take up a general
| discussion of recursive definitions in Chapter 5.  Here our task is
| to show that the g of (23) is really an example of the unique map !
| induced from an initial object.  First, we note that an element x_0
| in X can also be written as the function 1 -> X that sends the unique
| element of the one-element set to x_0.  We shall also call this map x_0.
| The basis step of (23) can then be rewritten as the commutative diagram:
|
| 24.
|              N
|              o
|             ^|
|            / |
|         0 /  |
|          /   |
|         /    |
|      1 o     | g
|         \    |
|          \   |
|       x_0 \  |
|            \ |
|             vv
|              o
|              X
|
| Here, 0 is not a zero morphism as in (16),
| but the map whose value is 0 in N.  We are,
| in fact, in Set, which does not have any
| zero morphisms.
|
| Again, if we let s : N -> N denote
| the successor function n ~> n + 1,
| the induction step is equivalent
| to the commutative diagram:
|
| 25.
|              N              s              N
|              o<----------------------------o
|              |                             |
|              |                             |
|              |                             |
|              |                             |
|              |                             |
|            g |                             | g
|              |                             |
|              |                             |
|              |                             |
|              |                             |
|              v                             v
|              o<----------------------------o
|              X              f              X
|
| More generally, then, we have the following:
|
| 26.  The Principle of Simple Recursion.
|
|      For each x_0 : 1 -> X and f : X -> X
|      there exists a unique     g : N -> X
|      such that the following diagram commutes:
|
|              N              s              N
|              o<----------------------------o
|             ^|                             |
|            / |                             |
|         0 /  |                             |
|          /   |                             |
|         /    |                             |
|      1 o     | g                           | g
|         \    |                             |
|          \   |                             |
|       x_0 \  |                             |
|            \ |                             |
|             vv                             v
|              o<----------------------------o
|              X              f              X
|
| This leads us to consider the following category:
|
| 27.  'The category of simple recursion data' Srd has as objects the triples
|       (X, x_0, f), where X is a set, x_0 in X, and f : X -> X is a total
|       function.  A morphism p : (X, x_0, f) -> (Y, y_0, h) is a total
|       function p : X -> Y for which the following diagram commutes:
|
|              X              f              X
|              o<----------------------------o
|             ^|                             |
|            / |                             |
|       x_0 /  |                             |
|          /   |                             |
|         /    |                             |
|      1 o     | p                           | p
|         \    |                             |
|          \   |                             |
|       y_0 \  |                             |
|            \ |                             |
|             vv                             v
|              o<----------------------------o
|              Y              h              Y
|
| that is, p(x_0) = y_0, while p(f(x)) = h(p(x)) for each x in X.
|
| We must yet specify composition and identities and verify that
| Srd is a category.  But, given this, we can note immediately
| that 'the principle of simple recursion' (26) is equivalent
| to the statement that "(N, 0, s) is initial in Srd".
|
| Manes & Arbib, AAPS, pages 53-54.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
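
The principle of simple recursion (26) says the commuting square determines
g : N -> X uniquely; computationally the unique morphism ! out of (N, 0, s)
is just the fold that iterates f from x_0.  The sketch below (names mine)
builds g for the book's 2^n example and rechecks the basis and induction
squares up to a bound.

    # The unique morphism ! : (N, 0, s) -> (X, x_0, f) given by simple recursion.

    def simple_recursion(x0, f, bound):
        """Return the list g(0), ..., g(bound) with g(0) = x0 and g(n+1) = f(g(n))."""
        g = [x0]
        for _ in range(bound):
            g.append(f(g[-1]))
        return g

    # The book's example: x_0 = 1, f(x) = 2 * x, so g(n) = 2^n.
    g = simple_recursion(1, lambda x: 2 * x, 10)

    assert g[0] == 1                                      # basis square (24):  g(0) = x_0
    assert all(g[n + 1] == 2 * g[n] for n in range(10))   # induction square (25): g o s = f o g
    assert g == [2 ** n for n in range(11)]
    print(g)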

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 44

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.2.  Isomorphism, Duality, and Zero Objects (cont.)
|
| Returning to the definition of Srd, composition is defined to be the usual
| composition q’ o q of total functions.  That this is well defined is best
| seen from "diagram pasting":
|
|                  X              f              X
|                  o<----------------------------o
|                 ^|                             |
|                / |                             |
|               /  |                             |
|              /   |                             |
|         x_0 /    | q                           | q
|            /     |                             |
|           /      |                             |
|          /       |                             |
|         /  y_0   v              h              v
|      1 o-------->o<----------------------------o
|         \        | Y                           | Y
|          \       |                             |
|           \      |                             |
|            \     |                             |
|         z_0 \    | q’                          | q’
|              \   |                             |
|               \  |                             |
|                \ |                             |
|                 vv              k              v
|                  o<----------------------------o
|                  Z                             Z
|
| For example,
|
|    k(q’q)  =  (q’q)f,
|
| because
|
|    k(q’q)  =  (kq’)q  =  (q’h)q  =  q’(hq)  =  q’(qf)  =  (q’q)f.
|
| Axiom (b) of 2.1.1 [the associative property] is obvious since the
| composition of total functions is associative.  The identities for
| axiom (c) are the obvious ones:
|
|              X              f              X
|              o<----------------------------o
|             ^|                             |
|            / |                             |
|       x_0 /  |                             |
|          /   |                             |
|         /    |                             |
|      1 o     | id_X                        | id_X
|         \    |                             |
|          \   |                             |
|       x_0 \  |                             |
|            \ |                             |
|             vv                             v
|              o<----------------------------o
|              X              f              X
|
| Now claim:  If q : (X, x_0, f) -> (Y, y_0, h) is a Srd-morphism, then q
| is an isomorphism in Srd if and only if q is bijective.  On the one hand,
| if q is an isomorphism then there exists  p : (Y, y_0, h) -> (X, x_0, f)
| with q o p = id_Y and p o q = id_X, so q is bijective.  Conversely, if q
| is bijective then there exists a function p : Y -> X with q o p = id_Y and
| p o q = id_X.  Is such p a morphism : (Y, y_0, h) -> (X, x_0, f)?  Consider
| the diagram below -- the ?'s indicate places where commutativity is yet to
| be proved.
|
|                  X              f              X
|                  o<----------------------------o
|                 ^|                             |
|                / |                             |
|               /  |                             |
|              /   |                             |
|         x_0 /    | q                           | q
|            /     |                             |
|           /      |                             |
|          /       |                             |
|         /  y_0   v              h              v
|      1 o-------->o<----------------------------o
|         \        | Y                           | Y
|          \   ?   |                             |
|           \      |                             |
|            \     |                             |
|         x_0 \    | p            ?              | p
|              \   |                             |
|               \  |                             |
|                \ |                             |
|                 vv                             v
|                  o<----------------------------o
|                  X              f              X
|
| Well, reading from the diagram above:
|
|    (ph)q  =  p(hq)   =  p(qf)  =  (pq)f
|
|           =  id_X f  =    f    =  f id_X
|
|           =  f(pq)   =  (fp)q.
|
| Thus:
|
|    ph  =  (ph)(q q^-1)  =  ((ph)q) q^-1
|
|        =  ((fp)q) q^-1  =  (fp)(q q^-1)  =  fp.
|
| Similarly:
|
|    p(y_0)  =  p(q(x_0))  =  (pq)(x_0)  =  id_X (x_0)  =  x_0.
|
| We reiterate the desire that objects in a category should be isomorphic
| just in case they are "abstractly the same".  This works out well for the
| category of recursion data above where if q : (X, x_0, f) -> (Y, y_0, g)
| is an isomorphism, the bijection q transports x_0 to y_0 and f to g, so
| thinking of q as a "relabelling", the abstract structure is "the same".
| When designing new categories, one of the aesthetic criteria to keep
| in mind is that this technical sense of isomorphism should relate to
| intuitive ones.  For example, if when defining the category of simple
| recursion data we dropped the requirement that q(x_0) = y_0 and that
| qf = gq, we would get a category whose isomorphisms were bijections,
| but here if s : N -> N is s(n) = n + 1, whereas for z : N -> N one
| has z(n) = 0, then (N, 0, s) and (N, 0, z) would be isomorphic,
| which is not desirable.
|
| Manes & Arbib, AAPS, pages 54-55.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
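
For finite carriers the morphism condition p(x_0) = y_0, p o f = h o p can be
checked exhaustively, and a bijective morphism is inverted by flipping its
pairs, as in the argument above.  A small Python sketch with hypothetical
data (counters mod 4 and mod 2):

    def is_srd_morphism(p, X, x0, f, Y, y0, h):
        """Check p(x0) = y0 and p(f(x)) = h(p(x)) for all x in X
        (p, f, h given as dicts on finite carriers)."""
        return p[x0] == y0 and all(p[f[x]] == h[p[x]] for x in X)

    X, x0, f = {0, 1, 2, 3}, 0, {0: 1, 1: 2, 2: 3, 3: 0}    # counter mod 4
    Y, y0, h = {0, 1}, 0, {0: 1, 1: 0}                      # counter mod 2
    p = {x: x % 2 for x in X}                               # reduction mod 2
    assert is_srd_morphism(p, X, x0, f, Y, y0, h)

    def invert(p):
        """Pointwise inverse of a bijective morphism;  by the argument above
        it is again an Srd-morphism, so the original morphism is an isomorphism."""
        return {y: x for x, y in p.items()}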

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 45

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.3.  Products and Coproducts
|
| In this section we show that two constructions of set theory which
| play an important role in program semantics -- Cartesian products
| and disjoint unions -- can be described in category-theoretic terms
| and so generalize to a wide class of categories.  While the original
| constructions seem unrelated, their category-theoretic descriptions are
| seen to be dual.  Cartesian products abstract to products in a category
| whereas disjoint unions abstract to coproducts, it being common to indicate
| duality by the prefix "co" as discussed earlier in 2.2.13.  Coproducts are
| an important structural aspect of the partially additive categories in the
| next chapter.
|
| We begin by describing Cartesian products of sets.  The term "Cartesian" honors
| the mathematician René Descartes who developed plane analytic geometry whereby
| the plane is represented as the set of all ordered pairs (x, y) with x and y
| in the set R of real numbers.  Thus, the plane is R x R where, in general,
| for any two sets X, Y their 'Cartesian product' is the set:
|
|     X x Y  =  {(x, y) : x in X, y in Y}
|
| of all ordered pairs with x in X and y in Y.  If a program fragment had two
| variables x and y taking values in the sets X and Y, respectively, then X x Y
| would comprise all possible values which could be taken by the two variables
| taken together.  Turning to a formal analysis, we offer the following precise
| description of an ordered pair (x, y) in X x Y which, if somewhat pedantic, is
| useful for generalizing to the product of infinitely many sets.  If we invent
| a convenient two-element set, say {i, j}, an ordered pair in X and Y amounts
| to a total function f : {i, j} -> X |_| Y with f(i) in X and f(j) in Y.
| The relationship between (x, y) and f is that f(i) = x and f(j) = y, and
| this formula defines (x, y) in terms of f, and f in terms of (x, y), and,
| indeed, establishes a bijective correspondence between the f and the (x, y).
| We could then regard X x Y as the set of all functions f : {i, j} -> X |_| Y
| with f(i) in X and f(j) in Y.  This leads to the following general definition:
|
| 1.  Definition.  Let (X_i : i in I) be any family of sets.
|     Their 'Cartesian product' is the set of all functions:
|
|        f
|     I ---> |_|^(i in I) X_i
|
|     such that f(i) is in X_i for each i in I.
|     We denote this set of functions by:
|
|     ]¯[_(i in I) X_i    or    ]¯[ (X_i : i in I).
|
| Often family notation is used so we write (x_i : i in I) instead of f,
| where f(i) = x_i.  This directly generalizes the motivating comments
| above where I has two elements.
|
| 2.  Example.  If I = {p, a, s} and if X_p is a set of persons, X_a is a set
|     of age values, say {16, 17, ..., 80}, and X_s = {male, female}, then
|
|     ]¯[_(i in I) X_i
|
|     is a suitable value object for a data base "record"
|     for a person's age and sex.
|
| Manes & Arbib, AAPS, pages 57-58.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
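
Definition 1 describes the product as the choice functions on the index set.
For finite index sets this is directly computable;  in the Python sketch below
(my own names and data) a dict {i: x_i} plays the role of the function f with
f(i) in X_i.

    from itertools import product as choices

    def cartesian_product(family):
        """All choice functions f with f(i) in family[i], as dicts {i: x_i};
        family is a dict {i: X_i} of finite sets."""
        indices = list(family)
        return [dict(zip(indices, c))
                for c in choices(*(family[i] for i in indices))]

    # In the spirit of Example 2:  a record for a person together with age and sex.
    family = {'p': {'ann', 'bob'}, 'a': {16, 17}, 's': {'male', 'female'}}
    records = cartesian_product(family)
    assert len(records) == 8 and all(r['a'] in {16, 17} for r in records)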

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 46

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.3.  Products and Coproducts (cont.)
|
| We next consider unions.
|
| Notice that if X_1 ~=~ X_2 are isomorphic in Set and
|
| if, similarly, Y_1 ~=~ Y_2, it is not necessarily true
|
| that   X_1 |_| Y_1 ~=~ X_2 |_| Y_2.
|
| For example, let:
|
|     X_1  =  {a, A},
|
|     X_2  =  {a, b},
|
|     Y_1  =  {A, b, e},
|
|     Y_2  =  {c, d, e}.
|
| Then:
|
|     X_1 |_| Y_1  =  {a, A, b, e}  has four elements
|
| whereas:
|
|     X_2 |_| Y_2  =  {a, b, c, d, e}  has five elements.
|
| Since any two sets in bijective correspondence are
| "abstractly the same", according to the philosophy
| of category theory we seek a notion of union that
| respects isomorphism better than ordinary union.
| A solution is given in terms of "disjoint unions"
| wherein, given a family (X_i : i in I), the elements
| of X_i are "painted color i" before taking an ordinary
| union.  The more precise definition makes use of ordered
| pairs and is as follows:
|
| 3.  Definition.  Let (X_i : i in I) be any family of sets.
|     Their 'disjoint union' is the set:
|
|     {(x, i) : i in I, x in X_i}
|
|     and is denoted:
|
|     ]_[^(i in I) X_i    or    ]_[ (X_i : i in I).
|
| The choice of the upside-down Cartesian product symbol
| anticipates the yet-to-be-established category-theoretic
| duality between Cartesian product and disjoint union.
|
| Notice that
|
|     ]_[^(i in I) X_i  =  |_|^(i in I) X_i x {i}
|
| is the ordinary union of the "painted" sets X_i x {i}
| in which an element (x, i) is "x painted color i".
|
| The union is disjoint because
|
|     X_i x {i}  |^|  X_j x {j}  =  Ø    if i =/= j,
|
| even if X_i = X_j.
|
| Disjoint unions also occur in semantics:
|
| 4.  Example.  The disjoint union has a very natural application
|     in describing the exit value of a multi-exit program.  Here,
|     a value like (y, i) would be interpreted as "execution of the
|     program terminates by taking exit i with value y".  For example,
|     given f_1, ..., f_n : X -> Y in Pfn, consider the case statement
|     of 1.5.22.
|
|     case (p_1, ..., p_n) of (f_1, ..., f_n)
|
|     with flowscheme:
|
|                      o-----o        o-----o   :
|             o------->| p_1 |------->| f_1 |---:--->o
|             |        o-----o        o-----o   :    |
|             |                  ·              :    |
|     ------->o                  ·              :    o------->
|             |                  ·              :    |
|             |        o-----o        o-----o   :    |
|             o------->| p_n |------->| f_n |---:--->o
|                      o-----o        o-----o   :
|
| For I = {1, ..., n}, the semantics of the
| portion to the left of the dotted line is:
|
|        g
|     X ---> ]_[^(i in I) Y
|
| where the notation ]_[^(i in I) Y means ]_[^(i in I) Y_i with each Y_i = Y,
|
| and where:
|
|              (  (f_i (x), i)   if p_i (x) is defined,
|     g(x)  =  <
|              (  undefined      else.
|
| This discussion will be completed in (25) below.
|
| Manes & Arbib, AAPS, pages 58-59.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
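
Definition 3 tags each element with its index before forming the union, and the
case statement then takes values in such a tagged sum.  A Python sketch, where
the guards p_i are modelled as boolean predicates and partiality by a None
result (my conventions, not the book's):

    def disjoint_union(family):
        """]_[ (X_i : i in I) as the set of tagged pairs (x, i);  family is {i: X_i}."""
        return {(x, i) for i, X in family.items() for x in X}

    def case(preds, maps):
        """Semantics of the portion left of the dotted line:  send x to
        (f_i(x), i) for the first i with p_i(x) true, else undefined (None)."""
        def g(x):
            for i, (p, f) in enumerate(zip(preds, maps), start=1):
                if p(x):
                    return (f(x), i)
            return None
        return g

    g = case([lambda x: x % 2 == 0, lambda x: x % 2 == 1],
             [lambda x: x // 2,     lambda x: 3 * x + 1])
    assert g(10) == (5, 1) and g(7) == (22, 2)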

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 47

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.3.  Products and Coproducts (cont.)
|
| We turn now to a description of Cartesian products that uses
| only category-theoretic language in Set.  Our starting point
| is a given family (X_i : i in I) of sets.  Our Definition 1
| is in terms of elements;  we must think more in terms of how
| to use morphisms in Set to characterize when a set X is to be
| isomorphic to ]¯[_(i in I) X_i.  To this end consider a family
| of morphisms of the form (f_i : Y -> X_i : i in I).  For each
| y in Y, (f_i (y) : i in I) is an element of ]¯[_(i in I) X_i
| and so should correspond to a definite element of X, call it
| f(y).  In this way there is a bijective correspondence between
| morphisms Y -> X and families of morphisms Y -> X_i.  Moreover,
| when X = ]¯[_(i in I) X_i this correspondence is easily described
| in terms of commutative diagrams.  To begin, we need the following
| definition:
|
| 5.  Definition.  For (X_i : i in I) a family of sets and j in I,
|     the 'j^th projection function' is:
|
|                        pr_j
|     ]¯[_(i in I) X_i -------> X_j,    (x_i) ~> x_j.
|
| We then observe that the relationship between f and the f_i is precisely:
|
| 6.
|                              pr_j
|     ]¯[_(i in I) X_i o------------------>o X_j
|                       ^                 ^
|                        \               /
|                         \             /
|                          \           /
|                         f \         / f_j     (for all j in I)
|                            \       /
|                             \     /
|                              \   /
|                               \ /
|                                o
|                                Y
|
| In terms of elements, (6) asserts that:
|
|     f(y)  =  (f_i (y) : i in I)
|
| which is what we expected.  We have motivated the following definition.
|
| 7.  Definition.  Let C be any category and let (X_i : i in I) be a family of
|     objects of C.  A 'product' of (X_i : i in I) is (P, (pr_i : i in I)),
|     where P is an object of C and for each i in I, pr_i : P -> X_i is
|     a C-morphism, all subject to the following property:
|
|     Given (Y, (f_i : i in I)) with C-object Y and C-morphisms f_i : Y -> X_i,
|     there exists a unique f : Y -> P such that pr_i f = f_i for all i in I,
|     as shown here:
|
|               pr_i
|     P o------------------>o X_i
|        ^                 ^
|         \               /
|          \             /
|           \           /
|          f \         / f_i
|             \       /
|              \     /
|               \   /
|                \ /
|                 o
|                 Y
|
|     The pr_i are called 'projection morphisms'.
|
| 8.  Example.  In Set, P = ]¯[_(i in I) X_i,
|     with pr_i as in (5), is a product of
|     (X_i : i in I).  This was established
|     in the discussion motivating (7).
|
| Manes & Arbib, AAPS, pages 59-60.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
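
In Set the unique mediating morphism f of Definition 7 is the tupling of the
f_i, namely f(y) = (f_i(y) : i in I), and the equations pr_i o f = f_i can be
checked directly.  A small Python sketch with hypothetical data:

    def projection(i):
        """pr_i on the dict-valued product of Definition 1 / Example 8."""
        return lambda t: t[i]

    def tupling(fs):
        """The unique f : Y -> P with pr_i o f = f_i;  fs is a dict {i: f_i}."""
        return lambda y: {i: f(y) for i, f in fs.items()}

    fs = {'sq': lambda n: n * n, 'sgn': lambda n: (n > 0) - (n < 0)}
    f = tupling(fs)
    assert all(projection(i)(f(y)) == fs[i](y) for i in fs for y in range(-3, 4))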

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 48

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.3.  Products and Coproducts (cont.)
|
| 9.  Proposition.  In any category C, products are
|     unique up to a unique isomorphism, that is,
|     if (P, (pr_i)) and (P’, (pr’_i)) are both
|     products of (X_i : i in I), then the
|     unique !a! in the next diagram
|     is an isomorphism:
|
|                !a!
|     P o------------------>o P’
|        \                 /
|         \               /
|          \             /
|           \           /
|       pr_i \         / pr’_i
|             \       /
|              \     /
|               \   /
|                v v
|                 o
|                X_i
|
| Proof.  For given (X_i : i in I), let D be a category
| whose objects are all (Y, (f_i)) with f_i : Y -> X_i,
| whose morphisms h : (Y, (f_i)) -> (Z, (g_i)) are
| defined to be C-morphisms h : Y -> Z such that:
|
|                 h
|     Y o------------------>o Z
|        \                 /
|         \               /
|          \             /
|           \           /
|        f_i \         / g_i
|             \       /
|              \     /
|               \   /
|                v v
|                 o
|                X_i
|
|           (all i in I)
|
| with composition and identities as in C.  That D is a category is
| routinely verified.  By Definition 7, a product of (X_i : i in I) is
| the same thing as a terminal object of D.  The desired result now
| follows from Theorem 2.2.11.  þ
|
| Manes & Arbib, AAPS, page 60.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.
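
Proposition 9 can be seen concretely in Set by realizing the same binary
product in two ways, say as tuple pairs and as dict pairs;  the mediating map
between them is then the unique isomorphism commuting with the projections.
A rough Python sketch with data of my own choosing:

    X, Y = {1, 2}, {'a', 'b'}

    P1 = {(x, y) for x in X for y in Y}                 # product as tuples
    P2 = [{'x': x, 'y': y} for x in X for y in Y]       # product as dicts

    alpha = lambda t: {'x': t[0], 'y': t[1]}            # mediating map P1 -> P2

    # alpha commutes with the projections and is bijective, hence an isomorphism.
    assert all(alpha(t)['x'] == t[0] and alpha(t)['y'] == t[1] for t in P1)
    assert len({(d['x'], d['y']) for d in map(alpha, P1)}) == len(P1) == len(P2)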

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SEM.  Note 49

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 2.  An Introduction to Category Theory
|
| 2.3.  Products and Coproducts (cont.)
|
| 10.  Proposition.


| Manes & Arbib, AAPS, pages 60-61.
|
| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

SET. Set Theory

SET. Note 1


This is one of those things that I wish I had just gone ahead and
done a year ago.  Better late than never?  I will begin excerpting
some standard presentations of set theory, just the basic classic
material, nothing near the edge of the ever-exploding universe.

I had started down this road once before, but got distracted:

08 Mar 2002.  http://suo.ieee.org/email/msg08053.html
09 Mar 2002.  http://suo.ieee.org/email/msg08067.html

It just seems that there is a persistent problem about understanding the
difference between our informal set theories and our formal set theories,
and especially the fact that the formal theory does not always do for us
what we might imagine that the informal theory does for us.

SET. Note 2


| Appendix
|
| Elementary Set Theory
|
| This appendix is devoted to elementary set theory.
| The ordinal and cardinal numbers are constructed
| and the most commonly used theorems are proved.
| The non-negative integers are defined and
| Peano's postulates are proved as theorems.
|
| A working knowledge of elementary logic is assumed, but acquaintance with
| formal logic is not essential.  However, an understanding of the nature
| of a mathematical system (in the technical sense) helps to clarify and
| motivate some of the discussion.  Tarski's excellent exposition [1]
| describes such systems very lucidly and is particularly recommended
| for general background.
|
| This presentation of set theory is arranged so that it may be
| translated without difficulty into a completely formal language. †
| In order to facilitate either formal or informal treatment the
| introductory material is split into two sections, the second of
| which is essentially a precise restatement of part of the first.
| It may be omitted without loss of continuity.
|
| The system of axioms adopted is a variant of systems of Skolem and of
| A.P. Morse and owes much to the Hilbert-Bernays-von Neumann system as
| formulated by Gödel.  The formulation used here is designed to give
| quickly and naturally a foundation for mathematics which is free
| from the more obvious paradoxes.
|
| For this reason a finite axiom system is abandoned
| and the development is based on eight axioms and
| one axiom scheme ‡ (that is, all statements of a
| certain prescribed form are accepted as axioms).
|
| It has been convenient to state as theorems many
| propositions which are essentially preliminary
| to the desired results.  This clutters up the
| list of theorems, but it permits omission of
| many proofs and abbreviation of others.
| Most of the devices used are more or
| less evident from the statements of
| the definitions and theorems.
| 
| †  That is, it is possible to write the theorems in terms of
|    logical constants, logical variables, and the constants of
|    the system, and the proofs may be derived from the axioms
|    by means of rules of inference.  Of course, a foundation
|    in formal logic is necessary for this sort of development.
|    I have used (essentially) Quine's meta-axioms for logic [1]
|    in making this kind of presentation for a course.
| 
| ‡  Actually, an axiom scheme for definition is also assumed without explicit
|    statement.  That is, statements of a certain form, which in particular
|    involve one new constant and are either an equivalence or an identity,
|    are accepted as definitions and are treated in precisely the same
|    fashion as theorems.  The axiom scheme of definition is in the
|    fortunate position of being justifiable in the sense that,
|    if the definitions conform with the prescribed rules,
|    then no new contradictions and no real enrichment
|    of the theory results.  These results are due
|    to S. Leśniewski.
|
| JLK, Gen Top, pages 250-251.
|
| Bibliography, pages 282-291.
|
| W.V.O. Quine [1],
|'Mathematical Logic',
| Cambridge (USA), 1947.
|
| A. Tarski [1],
|'Introduction to Modern Logic',
| 2nd American Ed., New York, 1946.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

SET. Note 3


| Elementary Set Theory
|
| The Classification Axiom Scheme
|
| Equality is always used in the sense of logical identity;
| "1 + 1 = 2" is to mean that "1 + 1" and "2" are names of
| the same object.  Besides the usual axioms for equality an
| unrestricted substitution rule is assumed:  in particular the
| result of changing a theorem by replacing an object by its equal
| is again a theorem.
|
| There are two primitive (undefined) constants besides "=" and the other
| logical constants.  The first of these is "in" [epsilon], which is read
| "is a member of" or "belongs to".  The second constant is denoted, rather
| strangely, "{·· : ···}" and is read "the class of all ·· such that ···".
| It is the 'classifier'.
|
| A remark on the use of the term "class" may clarify matters.  The term
| does not appear in any axiom, definition, or theorem, but the primary
| interpretation † of these statements is as assertions about classes
| (aggregates, collections).  Consequently the term "class" is used
| in the discussion to suggest this interpretation.
|
| Lower case Latin letters are (logical) variables.  The difference between a
| constant and a variable lies entirely in the substitution rules.  For example,
| the result of replacing a variable in a theorem by another variable which does
| not occur in the theorem is again a theorem, but there is no such substitution
| rule for constants.
|
| I.  Axiom of Extent. ‡
|
|     For each x and each y it is true that x = y if and only
|     if for each z, z is in x when and only when z is in y.
|
| Thus two classes are identical if and only if every member of each is a member of the
| other.  We shall frequently omit "for each x" or "for each y" in the statement of
| a theorem or definition.  If a variable, for example "x", occurs and is not preceded
| by "for each x" or "for some x" it is understood that "for each x" is to be prefixed
| to the theorem or definition in question.
|
| †  Presumably other interpretations are also possible.
|
| ‡  One is tempted to make this the definition of equality, thus
|    dispensing with one axiom and with all logical presuppositions
|    about equality.  This is perfectly feasible.  However, there would
|    be no unlimited substitution rule for equality and one would have
|    to assume as an axiom:  If x is in z and y = x, then y is in z.
|
| JLK, Gen Top, pages 251-252.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

SET. Note 4


| Elementary Set Theory
|
| The Classification Axiom Scheme (cont.)
|
| The first definition assigns a special name to those classes which
| are themselves members of classes.  The reason for this dichotomy
| among classes is discussed a little later.
|
| 1.  Definition.  x is a 'set' iff for some y, x in y.
|
| The next task is to describe the use of the classifier.  The first blank
| in the classifier constant is to be occupied by a variable and the second
| by a formula, for example {x : x in y}.  We accept as an axiom the statement:
| u in {x : x in y} iff u is a set and u in y.  More generally, each statement
| of the following form is supposed to be an axiom:
|
|     u in {x : ··· x ···} iff u is a set and ··· u ···.
|
| Here "··· x ···" is supposed to be a formula and "··· u ···" is supposed to be the
| formula which is obtained from it by replacing every occurrence of "x" by "u".
| Thus u in {x : x in y and z in x} iff u is a set and u in y and z in u.
|
| This axiom scheme is precisely the usual intuitive construction of classes except
| for the requirement "u is a set".  This requirement is very evidently unnatural
| and is intuitively quite undesirable.  However, without it a contradiction may
| be constructed simply on the basis of the axiom of extent.  (See theorem 39
| and the discussion preceding it.)  This complication, which necessitates a
| good bit of technical work on the existence of sets, is simply the price
| paid to avoid obvious inconsistencies.  Less obvious inconsistencies
| may very possibly remain.
|
| JLK, Gen Top, page 252.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

SET. Note 5


| Elementary Set Theory
|
| The Classification Axiom Scheme (cont.)
|
| A precise statement of the classification axiom scheme
| requires a description of formulae.  It is agreed that: †
|
| a.  The result of replacing "!a!" and "!b!"
|     by variables is, for each of the following,
|     a formula.
|
|     !a! = !b!
|
|     !a! in !b!
|
| b.  The result of replacing "!a!" and "!b!" by variables
|     and "A" and "B" by formulae is, for each of the following,
|     a formula.
|
|     if A, then B
|
|     A iff B
|
|     it is false that A
|
|     A and B
|
|     A or B
|
|     for every !a!, A
|
|     for some !a!, A
|
|     !b! in {!a! : A}
|
|     {!a! : A} in !b!
|
|     {!a! : A} in {!b! : B}
|
| Formulae are constructed recursively, beginning with
| the primitive formulae of (a) and proceeding via the
| constructions permitted by (b).
|
| II.  Classification Axiom-Scheme.
|
|      An axiom results if in the following "!a!" and "!b!"
|
|      are replaced by variables, "A" by a formula $A$,
|
|      and "B" by the formula obtained from $A$
|
|      by replacing each occurrence
|
|      of the variable which replaced !a!
|
|      by the variable which replaced !b!:
|
|      For each !b!, !b! in {!a! : A} if and only if !b! is a set and B.
|
| †  This circuitous sort of language is unfortunately necessary.
|    Using the convention of quotation marks for names, for example,
|    "Boston" is the name of Boston, if $A$ is a formula and $B$ is
|    a formula then "$A$ => $B$" is not a formula.  For example, if
|    $A$ is "x = y" and $B$ is "y = z", then '"x = y" => "y = z"'
|    is not a formula.  Formulae (for example "x = y") contain no
|    quotation marks.  Instead of "$A$ => $B$" we want to discuss
|    the result of replacing  "!a!" by $A$  and  "!b!" by $B$  in
|    "!a! => !b!".  This sort of circumlocution can be avoided by
|    using Quine's corner convention.
|
| JLK, Gen Top, page 253.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

SET. Note 6


| Elementary Set Theory
|
| Elementary Algebra of Classes
|
| The axioms already stated permit the deduction of a number
| of theorems directly from logical results.  The deduction
| is straightforward and only an occasional proof is given.
|
| 2.  Definition.  x |_| y  =  {z : z in x or z in y}.
|
| 3.  Definition.  x |^| y  =  {z : z in x and z in y}.
|
| The class x |_| y is the 'union' of x and y,
|
| and x |^| y is the 'intersection' of x and y.
|
| 4.  Theorem.
|
|     z in x |_| y  if and only if  z in x or z in y, and
|
|     z in x |^| y  if and only if  z in x and z in y.
|
| Proof.  From the classification axiom, z in x |_| y iff
|
|         z in x or z in y and z is a set.  But in view of
|
|         the definition 1 of set, z in x or z in y and
|
|         z is a set iff z in x or z in y.  A similar
|
|         argument proves the corresponding result
|
|         for intersection.  þ
|
| JLK, Gen Top, pages 253-254.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

SET. Note 7


| Elementary Set Theory
|
| Elementary Algebra of Classes (cont.)
|
| 5.  Theorem.
|
|     x |_| x  =  x
|
|     and
|
|     x |^| x  =  x.
|
| 6.  Theorem.
|
|     x |_| y  =  y |_| x
|
|     and
|
|     x |^| y  =  y |^| x.
|
| 7.  Theorem. †
|
|     (x |_| y) |_| z  =  x |_| (y |_| z)
|
|     and
|
|     (x |^| y) |^| z  =  x |^| (y |^| z)
|
| These theorems state that union and intersection are, in the usual sense,
| commutative and associative operations.  The distributive laws follow.
|
| 8.   Theorem.
|
|      x |^| (y |_| z)  =  (x |^| y) |_| (x |^| z)
|
|      and
|
|      x |_| (y |^| z)  =  (x |_| y) |^| (x |_| z).
|
| 9.   Definition.  x ~in y  if and only if  it is false that x in y.
|
| 10.  Definition.  ~x  =  {y : y ~in x}.
|
| The class ~x is called the 'complement' of x.
|
| 11.  Theorem.  ~(~x)  =  x.
|
| †  There would be no necessity for parentheses if the constant "|_|"
|    occurred first in the definition;  that is, "|_| x y" instead of
|    "x |_| y".  In this case the first part of the theorem would read:
|
|    |_| |_| x y z  =  |_| x |_| y z.
|
| JLK, Gen Top, page 254.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
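
Theorems 5 through 8 are finitely checkable;  in the Python sketch below the
set operators | and & stand in for |_| and |^| on concrete finite sets of my
own choosing.

    x, y, z = {1, 2, 3}, {2, 3, 4}, {3, 5}

    assert x | x == x and x & x == x                          # Theorem 5
    assert x | y == y | x and x & y == y & x                  # Theorem 6
    assert (x | y) | z == x | (y | z)                         # Theorem 7
    assert (x & y) & z == x & (y & z)
    assert x & (y | z) == (x & y) | (x & z)                   # Theorem 8
    assert x | (y & z) == (x | y) & (x | z)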

SET. Note 8


| Elementary Set Theory
|
| Elementary Algebra of Classes (cont.)
|
| 12.  Theorem.  (De Morgan).
|
|      ~(x |_| y)  =  (~x) |^| (~y)
|
|      and
|
|      ~(x |^| y)  =  (~x) |_| (~y).
|
| Proof.  Only the first of the two statements will be proved.
|
|         For each z, we have z in ~(x |_| y)  iff  z is a set
|
|         and it is false that z in (x |_| y), because of the
|
|         classification axiom and the definition 10 of complement.
|
|         Using theorem 4,  z in x |_| y  iff  z in x or z in y.
|
|         Consequently, z in ~(x |_| y)  iff  z is a set and
|
|         z ~in x and z ~in y, that is, iff  z in ~x and z in ~y.
|
|         Using 4 again, z in ~(x |_| y)  iff  z in (~x) |^| (~y).
|
|         Hence  ~(x |_| y)  =  (~x) |^| (~y)  because of the
|
|         axiom of extent.  þ
|
| 13.  Definition.  x ~ y  =  x |^| (~y).
|
| The class x ~ y is the 'difference' of x and y,
| or the 'complement' of y relative to x.
|
| 14.  Theorem.  x |^| (y ~ z)  =  (x |^| y) ~ z.
|
| The proposition  "x |_| (y ~ z)  =  (x |_| y) ~ z"  is unlikely,
| although at this stage it is impossible to exhibit a counter example.
| To be a little more precise, the negation of the proposition cannot be
| proved on the basis of the axioms so far assumed;  it is possible to
| make a model for this initial part of the system such that x ~in y
| for each x and each y (there are no sets).  The negation of the
| proposition can be proved on the basis of axioms which will
| presently be assumed.
|
| JLK, Gen Top, pages 254-255.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
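
The complement ~x of Definition 10 is taken in the class of all sets and has
no finite counterpart, but relative to a fixed finite universe the De Morgan
laws (Theorem 12), the difference (Definition 13), and Theorem 14 can all be
checked.  A Python sketch where U is merely a finite stand-in for $U$:

    U = set(range(10))                 # finite stand-in for the universe
    comp = lambda s: U - s             # complement relative to U
    x, y, z = {1, 2, 3}, {2, 4}, {3, 4, 5}

    assert comp(x | y) == comp(x) & comp(y)          # Theorem 12
    assert comp(x & y) == comp(x) | comp(y)
    assert x - y == x & comp(y)                      # Definition 13
    assert x & (y - z) == (x & y) - z                # Theorem 14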

SET. Note 9


| Elementary Set Theory
|
| Elementary Algebra of Classes (cont.)
|
| 15.  Definition.  0  =  {x : x =/= x}.
|
| The class 0 is the 'void class', or 'zero'.
|
| 16.  Theorem.  x ~in 0.
|
| 17.  Theorem.  0 |_| x  =  x
|
|      and       0 |^| x  =  0.
|
| 18.  Definition.  $U$  =  {x : x = x}.
|
| The class $U$ is the 'universe'.
|
| 19.  Theorem.  x in $U$  if and only if  x is a set.
|
| 20.  Theorem.  x |_| $U$  =  $U$
|
|      and       x |^| $U$  =   x.
|
| 21.  Theorem.  ~0   =  $U$
|
|      and      ~$U$  =   0.
|
| 22.  Definition. †
|
|      |^| x  =  {z : for each y, if y in x, then z in y}.
|
| 23.  Definition.
|
|      |_| x  =  {z : for some y, z in y and y in x}.
|
| The class |^| x is the 'intersection' of the members of x.
| Notice that the members of |^| x are members of members of x
| and may or may not belong to x.
|
| The class |_| x is the 'union' of the members of x.
|
| Observe that a set z belongs to |^| x (or to |_| x) iff
| z belongs to every (respectively, to some) member of x.
|
| 24.  Theorem.  |^| 0  =  $U$
|
|      and       |_| 0  =   0.
|
| Proof.  z in |^| 0  iff  z is a set and z belongs to each member of 0.
|
|         Since (theorem 16) there is no member of 0, z in |^| 0  iff
|
|         z is a set, and by 19 and the axiom of extent |^| 0  =  $U$.
|
|         The second assertion is also easy to prove.  þ
|
| †  A bound variable notation for the intersection
|    of the members of a family is not needed in this
|    appendix, and consequently a notation is adopted
|    which is simpler than that used in the rest of
|    the book.
|
| JLK, Gen Top, pages 255-256.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
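
Definitions 22 and 23 form |^| x and |_| x from the members of x.  For a finite
non-void family both are computable;  of Theorem 24 only the half |_| 0 = 0
survives finitely, since |^| 0 is the universe.  A Python sketch with a sample
family of my own:

    def big_union(x):
        """|_| x : the members of members of x."""
        return set().union(*x)

    def big_intersection(x):
        """|^| x for a non-void family x (for x = 0 the value would be $U$)."""
        sets = list(x)
        return set(sets[0]).intersection(*sets[1:])

    x = [{1, 2, 3}, {2, 3, 4}, {2, 3, 5}]
    assert big_union(x) == {1, 2, 3, 4, 5}
    assert big_intersection(x) == {2, 3}
    assert big_union([]) == set()                    # |_| 0 = 0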

SET. Note 10


| Elementary Set Theory
|
| Elementary Algebra of Classes (cont.)
|
| 25.  Definition.  x c y  iff  for each z, if z in x, then z in y.
|
| A class x is a 'subclass' of y, or is 'contained in' y, or y 'contains' x,
| iff x c y.  It is absolutely essential that "c" not be confused with "in".
| For example, 0 c 0 but it is false that 0 in 0.
|
| 26.  Theorem.  0 c x  and  x c $U$.
|
| 27.  Theorem.  x = y  iff  x c y  and  y c x.
|
| 28.  Theorem.  If  x c y  and  y c z, then  x c z.
|
| 29.  Theorem.  x c y  iff  x |_| y  =  y.
|
| 30.  Theorem.  x c y  iff  x |^| y  =  x.
|
| 31.  Theorem.  If x c y,
|
|      then      |_| x  c  |_| y
|
|      and       |^| y  c  |^| x.
|
| 32.  Theorem.  If x in y,
|
|      then      x  c  |_| y
|
|      and       |^| y  c  x.
|
| The preceding definitions and theorems
| are used very frequently -- often
| without explicit reference.
|
| JLK, Gen Top, page 256.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

SET. Note 11


| Elementary Set Theory
|
| Existence of Sets
|
| This section is concerned with the existence of sets and
| with the initial steps in the construction of functions
| and other relations from the primitives of set theory.
|
| III.  Axiom of Subsets.
|
|       If x is a set
|       there is a set y
|       such that for each z,
|       if z c x, then z in y.
|
| 33.  Theorem.  If x is a set and z c x, then z is a set.
|
| Proof.  According to the axiom of subsets, if x is a set
|         there is y such that, if z c x, then z in y, and
|         hence by definition 1, z is a set.  (Observe that
|         this proof does not use the full strength of the
|         axiom of subsets since the argument does not
|         require that y be a set.)  þ
|
| 34.  Theorem.  0   =  |^| $U$
|
|      and      $U$  =  |_| $U$.
|
| Proof.  If x in |^| $U$, then x is a set and, since 0 c x, it follows from 33
|
|         that 0 is a set.  Then 0 in $U$ and each member of |^| $U$ belongs to 0.
|
|         It follows that |^| $U$ has no members.  Clearly (that is, by theorem 26),
|
|         |_| $U$  c  $U$.  If x in $U$, then x is a set, and by the axiom of subsets
|
|         there is a set y such that, if z c x, then z in y.  In particular, x in y,
|
|         and since y in $U$ it follows that x in |_| $U$.  Consequently $U$ c |_| $U$
|
|         and equality follows.  þ
|
| 35.  Theorem.  If x =/= 0, then |^| x is a set.
|
| Proof.  If x =/= 0, then for some y, y in x.  But y is a set,
|
|         and since |^| x  c  y by 32, it follows from 33 that
|
|         |^| x is a set.  þ
|
| JLK, Gen Top, pages 256-257.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

SET. Note 12


| Elementary Set Theory
|
| Existence of Sets (cont.)
|
| 36.  Definition.  2^x  =  {y : y c x}.
|
| 37.  Theorem.  $U$  =  2^$U$.
|
| Proof.  Every member of 2^$U$ is a set
|         and consequently belongs to $U$.
|         Every member of $U$ is a set and
|         is contained in $U$ (theorem 26)
|         and hence belongs to 2^$U$.  þ
|
| 38.  Theorem.
|
|      If x is a set, then 2^x is a set,
|      and for each y, y c x iff y in 2^x.
|
| It is interesting to notice that the existence of sets is
| not provable on the basis of the axioms so far enunciated,
| but it is possible to prove that there is a class which is
| not a set.  Letting R be {x : x ~in x}, by the classifier
| axiom R in R iff R ~in R and R is a set.  It follows that
| R is not a set.  Observe that, if the classifier axiom did
| not contain the "is a set" qualification, then an outright
| contradiction, R in R iff R ~in R, would result.  This is
| the Russell paradox.  A consequence of this argument is
| that $U$ is not a set, because R c $U$ and 33 applies.
| (The regularity axiom will imply that R = $U$;  this
| axiom also yields a different proof that $U$ is not
| a set.)
|
| 39.  Theorem.  $U$ is not a set.
|
| JLK, Gen Top, page 257.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
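
Definition 36 and Theorem 38 in the finite case:  2^x has 2^|x| members and
contains exactly the subclasses of x.  (The Russell class R of the discussion
above has, of course, no such finite representation.)  A Python sketch with my
own example set:

    from itertools import chain, combinations

    def power(x):
        """2^x = {y : y c x} for a finite set x, as a set of frozensets."""
        x = list(x)
        return {frozenset(c) for c in chain.from_iterable(
            combinations(x, r) for r in range(len(x) + 1))}

    x = {1, 2, 3}
    assert len(power(x)) == 2 ** len(x)
    assert all(y <= x for y in power(x))    # y in 2^x iff y c x (Theorem 38)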

SET. Note 13


| Elementary Set Theory
|
| Existence of Sets (cont.)
|
| 40.  Definition.  {x}  =  {z : if x in $U$, then z = x}.
|
| 'Singleton' x is {x}.
|
| This definition is an example of a technical device which is very convenient.
| If x is a set, {x} is a class whose only member is x.  However, if x is not
| a set, then {x} = $U$ (these statements are theorems 41 and 43).  Actually,
| the primary interest is in the case where x is a set, and for this case
| the same result is given by the more natural definition {z : z = x}.
| However, it simplifies statements of results greatly if computations
| are arranged so that $U$ is the result of applying the computation
| outside its pertinent domain.
|
| 41.  Theorem.
|
|      If x is a set,
|      then, for each y,
|      y in {x} iff y = x.
|
| 42.  Theorem.
|
|      If x is a set, then {x} is a set.
|
| Proof.  If x is a set, then {x} c 2^x, and 2^x is a set.  þ
|
| 43.  Theorem.
|
|      {x} = $U$  if and only if  x is not a set.
|
| Proof.  If x is a set, then {x} is a set
|         and consequently is not equal to $U$.
|         If x is not a set, then x ~in $U$ and
|         {x} = $U$ by the definition.  þ
|
| 44.  Theorem.
|
|      If  x is a set,
|
|      then  |^| {x}  =  x
|
|      and   |_| {x}  =  x.
|
|      If x is not a set,
|
|      then  |^| {x}  =   0
|
|      and   |_| {x}  =  $U$.
|
| Proof.  Use 34 and 41.  þ
|
| IV.  Axiom of Union.
|
|      If x is a set and y is a set so is x |_| y.
|
| 45.  Definition.  {x y}  =  {x} |_| {y}.
|
| The class {x y} is an 'unordered pair'.
|
| 46.  Theorem.
|
|      If x is a set and y is a set,
|
|      then {x y} is a set and z in {x y} iff z = x or z = y.
|
|      {x y} = $U$  if and only if  x is not a set or y is not a set.
|
| 47.  Theorem.
|
|      If x and y are sets,
|
|      then  |^| {x y}  =  x |^| y
|
|      and   |_| {x y}  =  x |_| y.
|
|      If either x or y is not a set,
|
|      then  |^| {x y}  =   0
|
|      and   |_| {x y}  =  $U$.
|
| JLK, Gen Top, page 258.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

SET. Note 14


| Elementary Set Theory
|
| Ordered Pairs:  Relations
|
| This section is devoted to the properties of ordered pairs and relations.
| The crucial property for ordered pairs is theorem 55:  if x and y are sets,
| then (x, y) = (u, v) iff x = u and y = v.
|
| 48.  Definition.  (x, y)  =  {{x} {x y}}.
|
| The class (x, y) is an 'ordered pair'.
|
| 49.  Theorem.
|
|      (x, y) is a set if and only if x is a set and y is a set.
|
|      If (x, y) is not a set, then (x, y) = $U$.
|
| 50.  Theorem.
|
|      If x and y are sets, then:
|
|      |_| (x, y)      =  {x y}
|
|      |^| (x, y)      =  {x}
|
|      |_| |_| (x, y)  =   x |_| y
|
|      |_| |^| (x, y)  =   x
|
|      |^| |_| (x, y)  =   x |^| y
|
|      |^| |^| (x, y)  =   x
|
|      If either x or y is not a set, then:
|
|      |_| |_| (x, y)  =  $U$
|
|      |_| |^| (x, y)  =   0
|
|      |^| |_| (x, y)  =   0
|
|      |^| |^| (x, y)  =  $U$
|
| 51.  Definition.  1st coord z  =   |^| |^| z.
|
| 52.  Definition.  2nd coord z  =  (|^| |_| z) |_| ((|_| |_| z) ~ |_| |^| z).
|
| These definitions will be used, with one exception,
| only in the case where z is an ordered pair.
| The 'first  coordinate' of z is 1st coord z.
| The 'second coordinate' of z is 2nd coord z.
|
| 53.  Theorem.  2nd coord $U$  =  $U$.
|
| 54.  Theorem.
|
|      If x and y are sets,
|
|      then  1st coord (x, y)  =  x
|
|      and   2nd coord (x, y)  =  y.
|
|      If either of x and y is not a set,
|
|      then  1st coord (x, y)  =  $U$
|
|      and   2nd coord (x, y)  =  $U$.
|
| Proof.  If x and y are sets, then the equality for
|
|         1st coord is immediate from 50 and 51.
|
|         The equality for 2nd coord reduces to showing that
|
|         y  =  (x |^| y) |_| ((x |_| y) ~ x), by 50 and 52.
|
|         It is straightforward to see that
|
|         (x |_| y) ~ x  =  y ~ x,
|
|         and by the distributive law,
|
|         (y |^| x) |_| (y |^| ~x)  =  y |^| (x |_| ~x)  =  y |^| $U$  =  y.
|
|         If either x or y is not a set, then, using 50 it is easy to compute
|
|         1st coord (x, y) and 2nd coord (x, y).  þ
|
| 55.  Theorem.
|
|      If x and y are sets and (x, y) = (u, v), then x = u and y = v.
|
| JLK, Gen Top, page 259.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
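
Definition 48 and Theorems 50 and 55 in miniature:  frozensets let the pair
{{x} {x y}} be built literally, and the coordinates can be recovered from
|^| (x, y) = {x} and |_| (x, y) = {x y}.  A Python sketch for hashable x and y
(my own helper names):

    def kpair(x, y):
        """(x, y) = {{x} {x y}} of Definition 48."""
        return frozenset({frozenset({x}), frozenset({x, y})})

    def coords(z):
        """Recover (x, y) from a pair z via |^| z = {x} and |_| z = {x y}."""
        members = [set(m) for m in z]
        xs = set.intersection(*members)                  # {x}
        rest = set.union(*members) - xs                  # {y}, or empty if y = x
        x = next(iter(xs))
        return x, (next(iter(rest)) if rest else x)

    assert coords(kpair(1, 2)) == (1, 2) and coords(kpair(1, 1)) == (1, 1)
    assert kpair(1, 2) == kpair(1, 2) and kpair(1, 2) != kpair(2, 1)   # Theorem 55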

SET. Note 15


| Elementary Set Theory
|
| Ordered Pairs:  Relations (cont.)
|
| 56.  Definition.
|
|      r is a 'relation' if and only if
|      for each member z of r there is
|      x and y such that z = (x, y).
|
| A 'relation' is a class whose members are ordered pairs.
|
| 57.  Definition.
|
|      r o s  =
|
|      {u : for some x, some y, some z, u = (x, z), (x, y) in s, (y, z) in r}.
|
| The class r o s is the 'composition' of r and s.
|
| To avoid excessive notation we agree that {(x, z) : ···} is to
| be identical with {u : for some x, some z, u = (x, z) and ···}.
| Thus r o s = {(x, z) : for some y, (x, y) in s and (y, z) in r}.
|
| 58.  Theorem.
|
|      (r o s) o t  =  r o (s o t).
|
| 59.  Theorem.
|
|      r o (s |_| t)  =  (r o s) |_| (r o t),
|
|      r o (s |^| t)  c  (r o s) |^| (r o t).
|
| 60.  Definition.
|
|      r^-1  =  {(x, y) : (y, x) in r}.
|
| If r is a relation, r^-1 is the 'relation inverse to' r.
|
| 61.  Theorem.
|
|      (r^-1)^-1  =  r.
|
| 62.  Theorem.
|
|      (r o s)^-1  =  (s^-1) o (r^-1).
|
| JLK, Gen Top, page 260.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
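
With relations taken as finite sets of ordered pairs, Definitions 57 and 60 and
Theorems 61 and 62 are short computations.  A Python sketch with sample
relations of my own choosing:

    def compose(r, s):
        """r o s = {(x, z) : for some y, (x, y) in s and (y, z) in r}."""
        return {(x, z) for (x, y1) in s for (y2, z) in r if y1 == y2}

    def inverse(r):
        """r^-1 = {(x, y) : (y, x) in r}."""
        return {(y, x) for (x, y) in r}

    r = {(1, 'a'), (2, 'b')}
    s = {('p', 1), ('q', 2), ('q', 1)}
    assert compose(r, s) == {('p', 'a'), ('q', 'b'), ('q', 'a')}
    assert inverse(inverse(r)) == r                                    # Theorem 61
    assert inverse(compose(r, s)) == compose(inverse(s), inverse(r))   # Theorem 62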

SET. Note 16


| Elementary Set Theory
|
| Functions
|
| Intuitively, a function is to be identical with the class of ordered pairs
| which is its graph.  All functions are single-valued, and consequently two
| distinct ordered pairs belonging to a function must have different first
| coordinates.
|
| 63.  Definition.
|
|      f is a 'function' if and only if f is a relation
|      and for each x, each y, each z, if (x, y) in f
|      and (x, z) in f then y = z.
|
| 64.  Theorem.
|
|      If f is a function and g is a function so is f o g.
|
| 65.  Definition.
|
|      domain f  =  {x : for some y, (x, y) in f}.
|
| 66.  Definition.
|
|      range f   =  {y : for some x, (x, y) in f}.
|
| 67.  Theorem.
|
|      domain $U$  =  $U$,
|
|      range $U$   =  $U$.
|
| Proof.  If x in $U$, then (x, 0) and (0, x) belong to $U$
|         and hence x belongs to domain $U$ and range $U$.  þ
|
| JLK, Gen Top, page 260.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
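
Definition 63 identifies a function with its graph;  single-valuedness, domain
(65), and range (66) are then direct computations on a finite set of pairs, and
Theorem 64 can be observed on an example.  A quick Python sketch with my own
sample graphs:

    def is_function(f):
        """Definition 63: no two pairs of f share a first coordinate."""
        return len({x for (x, _) in f}) == len(f)

    def domain(f): return {x for (x, _) in f}
    def range_(f): return {y for (_, y) in f}

    f = {(1, 'a'), (2, 'a'), (3, 'b')}
    g = {('a', 10), ('b', 20)}
    assert is_function(f) and not is_function(f | {(1, 'b')})
    assert domain(f) == {1, 2, 3} and range_(f) == {'a', 'b'}

    # Theorem 64: the composite g o f is again a function.
    gof = {(x, z) for (x, y1) in f for (y2, z) in g if y1 == y2}
    assert is_function(gof) and gof == {(1, 10), (2, 10), (3, 20)}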

SET. Note 17


| Elementary Set Theory
|
| Functions (cont.)
|
| ...
|
| JLK, Gen Top, page 261.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Topology

TOP. Note 1


| 1.  Topological Spaces
|
| 1.1.  Topologies and Neighborhoods
|
| A 'topology' is a family !T! of sets which satisfies the two conditions:
| the intersection of any two members of !T! is a member of !T!, and the
| union of the members of each subfamily of !T! is a member of !T!.  The
| set X = |_|{U : U in !T!} is necessarily a member of !T! because !T!
| is a subfamily of itself, and every member of !T! is a subset of X.
| The set X is called the 'space' of the topology !T! and !T! is a
| 'topology for X'.  The pair (X, !T!) is a 'topological space'.
| When no confusion seems possible we may forget to mention the
| topology and write "X is a topological space".  We shall be
| explicit in cases where precision is necessary (for example
| if we are considering two different topologies for the same
| set X).
|
| The members of the topology !T! are called 'open' relative to !T!, or
| !T!-open, or if only one topology is under consideration, simply open
| sets.  The space X of the topology is always open, and the void set is
| always open because it is the union of the members of the void family.
| These may be the only open sets, for the family whose only members are
| X and the void set is a topology for X.  This is not a very interesting
| topology, but it occurs frequently enough to deserve a name;  it is
| called the 'indiscrete' (or 'trivial') topology for X, and (X, !T!)
| is then an 'indiscrete topological space').  At the other extreme is
| the family of all subsets of X, which is the 'discrete' topology
| for X (then (X, !T!) is a 'discrete topological space').  If !T!
| is the discrete topology, then every subset of the space is open.
|
| JLK, Gen Top, page 37.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
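
For a finite family the two closure conditions can be checked outright.  Note
that the subfamilies include the void one (whose union is the void set) and the
whole family (whose union is X).  A Python sketch with frozensets and examples
of my own:

    from itertools import chain, combinations

    def is_topology(T):
        """Pairwise intersections and all subfamily unions must lie in T."""
        T = {frozenset(U) for U in T}
        subfams = chain.from_iterable(combinations(T, r) for r in range(len(T) + 1))
        return (all((U & V) in T for U in T for V in T) and
                all(frozenset().union(*S) in T for S in subfams))

    X = {1, 2, 3}
    subsets = [set(s) for s in chain.from_iterable(combinations(X, r) for r in range(4))]
    assert is_topology([set(), X])                 # the indiscrete topology
    assert is_topology(subsets)                    # the discrete topology
    assert not is_topology([set(), {1}, {2}, X])   # {1} |_| {2} is missing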

TOP. Note 2


| 1.  Topological Spaces
|
| 1.1.  Topologies and Neighborhoods (cont.)
|
| The discrete and the indiscrete topology for a set X are
| respectively the largest and the smallest topology for X.
| That is, every topology for X is contained in the discrete
| topology and contains the indiscrete topology.  If !T! and
| !U! are topologies for X, then, following the convention
| for arbitrary families of sets, !T! is smaller than !U!
| and !U! is larger than !T! iff !T! c !U!.  In other words,
| !T! is smaller than !U! iff each !T!-open set is !U!-open.
| In this case it is also said that !T! is 'coarser' than !U!
| and !U! is 'finer' than !T!.  (Unfortunately, this situation
| is described in the literature by both of the statements:
| !T! is 'stronger' than !U! and !T! is 'weaker' than !U!.)
| If !T! and !U! are arbitrary topologies for X it may happen
| that !T! is neither larger nor smaller than !U!;  in this
| case, following the usage for partial orderings, it is
| said that !T! and !U! are not 'comparable'.
|
| JLK, Gen Top, pages 37-38.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 3


| 1.  Topological Spaces
|
| 1.1.  Topologies and Neighborhoods (cont.)
|
| The set of real numbers, with an appropriate topology, is
| a very interesting topological space.  This is scarcely
| surprising since the notion of a topological space is
| an abstraction of some interesting properties of the
| real numbers.  The 'usual topology' for the real
| numbers is the family of all those sets which
| contain an open interval about each of their
| points.  That is, a subset A of the set of
| real numbers is open iff for each member x
| of A there are numbers a and b such that
| a < x < b and the 'open interval'
| {y : a < y < b} is a subset of A.
| Of course, we must verify that
| this family of sets is indeed
| a topology, but this offers
| no difficulty.  It is worth
| noticing that, conveniently,
| an open interval is an
| open set.
|
| JLK, Gen Top, page 38.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 4


| 1.  Topological Spaces
|
| 1.1.  Topologies and Neighborhoods (cont.)
|
| A set U in a topological space (X, !T!) is a 'neighborhood' (!T!-neighborhood)
| of a point x iff U contains an open set to which x belongs.  A neighborhood of
| a point need not be an open set, but every open set is a neighborhood of each
| of its points.  Each neighborhood of a point contains an open neighborhood
| of the point.  If !T! is the indiscrete topology the only neighborhood of
| a point x is the space X itself.  If !T! is the discrete topology, then
| every set to which a point belongs is a neighborhood of it.  If X is the
| set of real numbers and !T! is the usual topology, then a neighborhood of
| a point is a set containing an open interval to which the point belongs.
|
| 1.  Theorem.  A set is open if and only if
|     it contains a neighborhood of each of
|     its points.
|
| Proof.  The union U of all open subsets of a set A is surely an open subset of A.
| If A contains a neighborhood of each of its points, then each member x of A belongs
| to some open subset of A and hence x is in U.  In this case A = U and therefore A
| is open.  On the other hand, if A is open it contains a neighborhood (namely, A)
| of each of its points.  þ
|
| The foregoing theorem evidently implies that a set is
| open iff it is a neighborhood of each of its points.
|
| The 'neighborhood system' of a point is the family
| of all neighborhoods of the point.
|
| 2.  Theorem.  If !U! is the neighborhood system of a point,
|     then finite intersections of members of !U! belong to !U!,
|     and each set which contains a member of !U! belongs to !U!.
|
| Proof.  If U and V are neighborhoods of a point x, there are
| open neighborhoods U_0 and V_0 contained in U and V respectively.
| Then U |^| V contains the open neighborhood U_0 |^| V_0 and is hence
| a neighborhood of x.  Thus the intersection of two (and hence of any
| finite number of) members of !U! is a member.  If a set U contains
| a neighborhood of a point x it contains an open neighborhood of x and is
| consequently itself a neighborhood.  þ
|
| JLK, Gen Top, pages 38-39.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
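
In a finite space Theorem 1 (in the equivalent form stated after it:  a set is
open iff it is a neighborhood of each of its points) is a direct check.  A
Python sketch over a small topology of my own choosing:

    X = {1, 2, 3}
    T = [frozenset(s) for s in (set(), {1}, {1, 2}, X)]    # a topology for X

    def is_neighborhood(U, x):
        """U contains an open set to which x belongs."""
        return any(x in V and V <= U for V in T)

    def open_by_theorem_1(A):
        """A is open iff A is a neighborhood of each of its points."""
        return all(is_neighborhood(A, x) for x in A)

    assert all(open_by_theorem_1(U) == (frozenset(U) in T)
               for U in [set(), {1}, {2}, {1, 2}, {2, 3}, X])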

TOP. Note 5


| 1.  Topological Spaces
|
| 1.2.  Closed Sets
|
| A subset A of a topological space (X, !T!) is 'closed' iff its relative
| complement X~A is open.  The complement of the complement of the set A is
| again A, and hence a set is open iff its complement is closed.  If !T! is
| the indiscrete topology the complement of X and the complement of the void
| set are the only closed sets;  that is, only the void set and X are closed.
| It is always true that the space and the void set are closed as well as open,
| and it may happen, as we have just seen, that these are the only closed sets.
| If !T! is the discrete topology, then every subset is closed and open.
|
| If X is the set of real numbers and !T! the usual topology, then the
| situation is quite different.  A 'closed interval' (that is, a set
| of the form {x : a =< x =< b}) is fortunately closed.  An open
| interval is not closed and a 'half-open interval' (that is,
| a set of the form {x : a < x =< b} or {x : a =< x < b}
| where a < b) is neither open nor closed.  Indeed ...
| the only sets which are both open and closed are
| the space and the void set.
|
| According to the De Morgan formulae, 0.3, the union (intersection) of
| the complements of the members of a family of sets is the complement
| of the intersection (respectively union).  Consequently, the union
| of a finite number of closed sets is necessarily closed and the
| intersection of the members of an arbitrary family of closed
| sets is closed.  These properties characterize the family
| of closed sets, as the following theorem indicates.
| The simple proof is omitted.
|
| 4.  Theorem.  Let !F! be a family of sets such that
|     the union of a finite subfamily is a member, the
|     intersection of an arbitrary non-void subfamily is
|     a member, and X = |_|{F : F in !F!} is a member.
|     Then !F! is precisely the family of closed sets
|     in X relative to the topology consisting of all
|     complements of members of !F!.
|
| JLK, Gen Top, page 40.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 6


| 1.  Topological Spaces
|
| 1.3.  Accumulation Points
|
| The topology of a topological space can be described in terms of
| neighborhoods of points and consequently it must be possible to
| formulate a description of closed sets in terms of neighborhoods.
| This formulation leads to a new classification of points in the
| following way.  A set A is closed iff X~A is open, and hence iff
| each point of X~A has a neighborhood which is contained in X~A,
| or, equivalently, is disjoint from A.  Consequently, A is closed
| iff for each x, if every neighborhood of x intersects A, then x
| is in A.  This suggests the following definition.
|
| A point x is an 'accumulation point' (sometimes called
| 'cluster' point or 'limit' point) of a subset A of a
| topological space (X,!T!) iff every neighborhood
| of x contains points of A other than x.  Then it
| is true that each neighborhood of a point x
| intersects A if and only if x is either a
| point of A or an accumulation point of A.
| The following theorem is then clear.
|
| 5.  Theorem.  A subset of a topological space
|     is closed if and only if it contains the
|     set of its accumulation points.
|
| JLK, Gen Top, pages 40-41.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
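
In a finite space the accumulation points of each subset can be listed, and
Theorem 5 checked subset by subset.  A Python sketch continuing with a small
topology of my own choosing:

    from itertools import chain, combinations

    X = {1, 2, 3}
    T = [frozenset(s) for s in (set(), {1}, {1, 2}, X)]
    subsets = [set(s) for s in chain.from_iterable(combinations(X, r) for r in range(4))]
    closed = [X - set(U) for U in T]

    def is_acc_point(x, A):
        """Every neighborhood of x meets A in a point other than x."""
        nbhds = [U for U in subsets if any(x in V and V <= U for V in T)]
        return all((U & A) - {x} for U in nbhds)

    assert is_acc_point(3, {2})     # the only neighborhood of 3 is X itself
    # Theorem 5: A is closed iff A contains all of its accumulation points.
    for A in subsets:
        assert (A in closed) == ({x for x in X if is_acc_point(x, A)} <= A)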

TOP. Note 7


| 1.  Topological Spaces
|
| 1.3.  Accumulation Points (cont.)
|
| If x is an accumulation point of A it is sometimes said, in a pleasantly
| suggestive phrase, that there are points of A arbitrarily near x.  If we
| pursue this imagery it appears that an indiscrete topological space is
| really quite crowded, for each point x is an accumulation point of every
| set other than the void set and the set {x}.  On the other hand, in a
| discrete topological space, no point is an accumulation point of a set.
|
| If X is the set of real numbers with the usual topology a
| variety of situations can arise.  If A is the open interval
| (0, 1), then every point of the closed interval [0, 1] is an
| accumulation point of A.  If A is the set of all non-negative
| rationals with squares less than 2, then the closed interval
| [0, 2^½] is the set of accumulation points.  If A is the set
| of all reciprocals of integers, then 0 is the only accumulation
| point of A, and the set of integers has no accumulation points.
|
| 6.  Theorem.  The union of a set and the
|     set of its accumulation points is closed.
|
| Proof.  If x is neither a point nor accumulation point of A, then there
| is an open neighborhood U of x which does not intersect A.  Since U is
| a neighborhood of each of its points, no one of these is an accumulation
| point of A.  Hence the union of the set A and the set of its accumulation
| points is the complement of an open set.  þ
|
| The set of all accumulation points of a set A
| is sometimes called the 'derived' set of A.
|
| JLK, Gen Top, pages 41-42.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 8


| 1.  Topological Spaces
|
| 1.4.  Closure
|
| The 'closure' (!T!-closure) of a subset A of a topological space
| (X, !T!) is the intersection of the members of the family of all
| closed sets containing A.  The closure of A is denoted by A˜, or
| by Ã.  The set A˜ is always closed because it is the intersection
| of closed sets, and evidently A˜ is contained in each closed set
| which contains A.  Consequently A˜ is the smallest closed set
| containing A and it follows that A is closed if and only if
| A = A˜.  The next theorem describes the closure of a set
| in terms of its accumulation points.
|
| 7.  Theorem.  The closure of any set
|     is the union of the set and the
|     set of its accumulation points.
|
| Proof.  Every accumulation point of a set A is an
| accumulation point of each set containing A, and is
| therefore a member of each closed set containing A.
| Hence A˜ contains A and all accumulation points of A.
| On the other hand, according to the preceding theorem,
| the set consisting of A and its accumulation points is
| closed and it therefore contains A˜.  þ
|
| The function which assigns to each subset A of a topological
| space the value A˜ might be called the closure function, or
| closure operator, relative to the topology.  This operator
| determines the topology completely, for a set A is closed
| iff A = A˜.  In other words, the closed sets are simply
| the sets which are fixed under the closure operator.
|
| It is instructive to enquire:  Under what circumstances is an operator
| which is defined for all subsets of a fixed set X the closure operator
| relative to some topology for X?  It turns out that four very simple
| properties serve to describe closure.  First, because the void set
| is closed, the closure of the void set is void;  and, second, each
| set is contained in its closure.  Next, because the closure of each
| set is closed, the closure of the closure of a set is identical with
| the closure of the set (in the usual algebraic terminology, the closure
| operator is idempotent).  Finally, the closure of the union of two sets is
| the union of the closures, for (A |_| B)˜ is always a closed set containing
| A and B, and therefore contains A˜ and B˜ and hence A˜ |_| B˜.  On the other
| hand, A˜ |_| B˜ is a closed set containing A |_| B and hence also (A |_| B)˜.
|
| JLK, Gen Top, pages 42-43.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
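
In the same small finite setting the closure is the intersection of the closed
sets containing A, and the four properties singled out above (which become the
Kuratowski axioms of the next note) can be verified wholesale.  A Python sketch
with the same sample topology as before, my own choice:

    from itertools import chain, combinations

    X = {1, 2, 3}
    T = [frozenset(s) for s in (set(), {1}, {1, 2}, X)]    # a topology for X
    closed = [X - set(U) for U in T]
    subsets = [set(s) for s in chain.from_iterable(combinations(X, r) for r in range(4))]

    def closure(A):
        """A~ : the intersection of all closed sets containing A."""
        return set.intersection(*(C for C in closed if A <= C))

    assert closure(set()) == set()
    assert all(A <= closure(A) for A in subsets)
    assert all(closure(closure(A)) == closure(A) for A in subsets)
    assert all(closure(A | B) == closure(A) | closure(B)
               for A in subsets for B in subsets)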

TOP. Note 9


| 1.  Topological Spaces
|
| 1.4.  Closure (cont.)
|
| A 'closure operator' on X is an operator which assigns to each
| subset A of X a subset A^c of X such that the following four
| statements, the 'Kuratowski closure axioms', are true.
|
| a.  If 0 is the void set, 0^c = 0.
|
| b.  For each A, A c A^c.
|
| c.  For each A, A^cc = A^c.
|
| d.  For each A and B, (A |_| B)^c  =  A^c |_| B^c.
|
| The following theorem of Kuratowski shows that these four statements
| are actually characteristic of closure.  The topology defined below
| is the topology 'associated' with a closure operator.
|
| 8.  Theorem.  Let ^c be a closure operator on X, let !F! be the family
|     of all subsets A of X for which A^c = A, and let !T! be the family
|     of complements of members of !F!.  Then !T! is a topology for X,
|     and A^c is the !T!-closure of A for each subset A of X.
|
| Proof.  Axiom (a) shows that the void set belongs to !F!, and (d) shows that
| the union of two members of !F! is a member of !F!.  Consequently the union
| of any finite subfamily (void or not) of !F! is a member of !F!.  Because of
| (b), X c X^c, so that X = X^c, and the union of the members of !F! is then X.
| In view of theorem 1.4, it will follow that !T! is a topology for X if it is
| shown that the intersection of the members of any non-void subfamily of !F! is
| a member of !F!.  To this end, first observe that, if B c A, then B^c c A^c,
| because A^c = ((A ~ B) |_| B)^c = (A ~ B)^c |_| B^c.  Now suppose that !A!
| is a non-void subfamily of !F! and that B = |^|{A : A in !A!}.  The set B is
| contained in each member of !A!, and therefore B^c c |^|{A^c : A in !A!} =
| |^|{A : A in !A!} = B.  Since B c B^c, it follows that B = B^c and B in !F!.
| This shows that !T! is a topology, and it remains to show that A^c is A˜,
| the !T!-closure of A.  By definition, A˜ is the intersection of all the
| !T!-closed sets, that is, the members of !F!, which contain A.  By axiom
| (c), A^c in !F!, and hence A˜ c A^c.  Since A˜ in !F! and A˜ contains A
| it follows that A˜ contains A^c and hence A˜ = A^c.  þ
|
| JLK, Gen Top, page 43.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
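
Theorem 8 can likewise be tried out on a toy example.  In the sketch
below the closure operator ^c is one I made up for illustration (it
is not from Kelley):  on X = {0, 1} it adjoins the point 1 whenever
0 is present.  The code checks the four Kuratowski axioms, forms !F!
and !T! as in the theorem, and confirms that the !T!-closure of each
set is exactly its ^c-closure.  The associated topology turns out to
consist of the void set, {0}, and X (the Sierpinski space).

  from itertools import combinations

  X = frozenset({0, 1})

  def powerset(S):
      S = list(S)
      return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

  def c(A):
      # an ad hoc closure operator: adjoin 1 whenever 0 is present
      return A | {1} if 0 in A else A

  P = powerset(X)
  assert c(frozenset()) == frozenset()                          # axiom (a)
  assert all(A <= c(A) for A in P)                              # axiom (b)
  assert all(c(c(A)) == c(A) for A in P)                        # axiom (c)
  assert all(c(A | B) == c(A) | c(B) for A in P for B in P)     # axiom (d)

  F = {A for A in P if c(A) == A}       # the fixed (closed) sets
  T = {X - A for A in F}                # the associated topology

  def t_closure(A):
      return frozenset.intersection(*[C for C in F if A <= C])

  assert all(t_closure(A) == c(A) for A in P)   # A^c is the !T!-closure of A
  print("associated topology:", sorted(map(set, T), key=len))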

TOP. Note 10


| 1.  Topological Spaces
|
| 1.5.  Interior and Boundary
|
| There is another operator defined on the family
| of all subsets of a topological space, which is
| very intimately related to the closure operator.
| A point x of a subset A of a topological space is
| an 'interior point' of A iff A is a neighborhood of x,
| and the set of all interior points of A is the 'interior'
| of A, denoted A°.  (In the usual terminology, the relation
| "is an interior point of" is the inverse of the relation
| "is a neighborhood of".)  It is convenient to exhibit
| the connection between this notion and the earlier
| concepts before considering examples.
|
| 9.  Theorem.  Let A be a subset of a topological space X.
|     Then the interior A° of A is open and is the largest
|     open subset of A.  A set A is open if and only if
|     A = A°.  The set of all points of A which are not
|     points of accumulation of X~A is precisely A°.
|     The closure of X~A is X ~ A°.
|
| Proof.  If a point x belongs to the interior of a set A, then x is
| a member of some open subset U of A.  Every member of U is also a
| member of A°, and consequently A° contains a neighborhood of each
| of its points and is therefore open.  If V is an open subset of A
| and y in V, then A is a neighborhood of y and so y in A°.  Hence
| A° contains each open subset of A and it is therefore the largest
| open subset of A.  If A is open, then A is surely identical with
| the largest open subset of A.  Hence A is open iff A = A°.  If x
| is a point of A which is not an accumulation point of X~A, then
| there is a neighborhood U of x which does not intersect X~A and
| is therefore a subset of A.  Then A is a neighborhood of x and
| x in A°.  On the other hand, A° is a neighborhood of each of
| its points and A° does not intersect X~A, so that no point
| of A° is an accumulation point of X~A.  Finally, since A°
| consists of the points of A which are not accumulation
| points of X~A, the complement, X ~ A°, is precisely
| the set of all points which are either points of
| X~A or accumulation points of X~A;  that is,
| the complement is the closure (X~A)˜.  þ
|
| JLK, Gen Top, page 44.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
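
The interior can be computed the same way, as the union of the open
subsets of A.  The following sketch (same toy topology as in the
earlier closure sketch, my own choice) checks, for every subset A,
that A° is open, that A is open iff A = A°, and that X ~ A° is the
closure of X~A, as Theorem 9 asserts.

  from itertools import combinations

  X = frozenset({0, 1, 2})
  T = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

  def powerset(S):
      S = list(S)
      return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

  def closure(A):
      return frozenset.intersection(*[X - U for U in T if A <= X - U])

  def interior(A):
      # the interior of A = union of all open subsets of A
      return frozenset().union(*[U for U in T if U <= A])

  for A in powerset(X):
      assert interior(A) in T                       # the interior is open
      assert (A in T) == (A == interior(A))         # A is open iff A equals its interior
      assert X - interior(A) == closure(X - A)      # the closure of X~A is the complement of the interior of A
  print("interior/closure duality verified")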

TOP. Note 11


| 1.  Topological Spaces
|
| 1.5.  Interior and Boundary (cont.)
|
| The last statement of the foregoing theorem deserves a little
| further consideration.  For convenience, let us denote the
| relative complement X~A by A’.  Then A’’, the complement
| of the complement of A, is again A (we sometimes say ’
| is an operator of period two).  The preceding result
| can then be stated as A°’ = A’˜, and, it follows,
| taking complements, that A° = A’˜’.  Thus the
| interior of A is the complement of the closure
| of the complement of A.  If A is replaced by
| its complement it follows that A˜ = A’°’, so
| that the closure of a set is the complement
| of the interior of the complement. *
|
| * An amusing and instructive problem suggests itself.  For a given subset
|   A of a topological space, how many different sets can be constructed by
|   successive applications, in any order, of closure, complementation, and
|   interior?  From the remarks in the above paragraph and the fact that
|   A˜˜ = A˜, this reduces to:  How many distinct sets may be formed from
|   a single set A, by alternating applications of complementation and the
|   closure operator?  The surprising answer is given in problem 1.E.
|
| JLK, Gen Top, pages 44-45.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
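
The problem in the footnote can at least be explored mechanically.
The sketch below (toy topology and starting set are my own) collects
every set reachable from A by repeated complementation and closure.
For subsets of the real line the number of distinct sets obtainable
this way is famously at most 14 (Kuratowski's closure-complement
theorem);  a three-point space, of course, produces far fewer.

  X = frozenset({0, 1, 2})
  T = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

  def closure(A):
      return frozenset.intersection(*[X - U for U in T if A <= X - U])

  def orbit(A):
      # all sets reachable from A by complementation and closure, in any order
      seen, frontier = {A}, [A]
      while frontier:
          B = frontier.pop()
          for C in (X - B, closure(B)):
              if C not in seen:
                  seen.add(C)
                  frontier.append(C)
      return seen

  sets = orbit(frozenset({1}))
  print(len(sets), "distinct sets:", sorted(map(set, sets), key=len))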

TOP. Note 12


| 1.  Topological Spaces
|
| 1.5.  Interior and Boundary (cont.)
|
| If X is an indiscrete space the interior of every set except X itself
| is void.  If X is a discrete space, then each set is open and closed
| and consequently identical with its interior and with its closure.
| If X is the set of real numbers with the usual topology, then the
| interior of the set of all integers is void;  the interior of a
| closed interval is the open interval with the same endpoints.
| The interior of the set of rational numbers is void, and the
| closure of the interior of this set is consequently void.
| The closure of the set of rational numbers is the set X
| of all numbers, and the interior of this set is X again.
| Thus the interior of the closure of a set may be quite
| different from the closure of the interior;  that is,
| the interior operator and the closure operator do not
| generally commute.
|
| JLK, Gen Top, page 45.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 13


| 1.  Topological Spaces
|
| 1.5.  Interior and Boundary (cont.)
|
| There is one other operator which occurs frequently enough to justify
| its definition.  The 'boundary' of a subset A of a topological space
| X is the set of all points x which are interior to neither A nor X~A.
| Equivalently, x is a point of the boundary iff each neighborhood of
| x intersects both A and X~A.  It is clear that the boundary of A is
| identical with the boundary of X~A.  If X is indiscrete and A is
| neither X nor void, then the boundary of A is X, while if X is
| discrete the boundary of every subset is void.  The boundary
| of an interval of real numbers, in the usual topology for
| the reals, is the set whose only members are the endpoints
| of the interval, regardless of whether the interval is open,
| closed, or half-open.  The boundary of the set of rationals,
| or the set of irrationals, is the set of all real numbers.
|
| It is not difficult to discover the relations between
| boundary, closure, and interior.  The following theorem,
| whose proof we omit, summarizes the facts.
|
| 10.  Theorem.  Let A be a subset of a
|      topological space X, and let b(A)
|      be the boundary of A.  Then:
|
|      1.   b(A)       =   A˜ |^| (X~A)˜   =   A˜ ~ A°.
|
|      2.   X ~ b(A)   =   A° |_| (X~A)°.
|
|      3.   A˜         =   A  |_|  b(A).
|
|      4.   A°         =   A ~ b(A).
|
|      A set is closed if and only if it contains its boundary and
|      is open if and only if it is disjoint from its boundary.
|
| JLK, Gen Top, pages 45-46.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
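
Theorem 10 is stated without proof, but it is easy to confirm
exhaustively on a small example.  In the sketch below (same toy
topology, my own) the boundary is computed directly from the
definition, and all four identities, together with the closing
closed/open criteria, are checked for every subset.

  from itertools import combinations

  X = frozenset({0, 1, 2})
  T = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

  def powerset(S):
      S = list(S)
      return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

  def closure(A):
      return frozenset.intersection(*[X - U for U in T if A <= X - U])

  def interior(A):
      return frozenset().union(*[U for U in T if U <= A])

  def boundary(A):
      # the points interior to neither A nor X~A
      return X - (interior(A) | interior(X - A))

  for A in powerset(X):
      b = boundary(A)
      assert b == closure(A) & closure(X - A) == closure(A) - interior(A)   # (1)
      assert X - b == interior(A) | interior(X - A)                         # (2)
      assert closure(A) == A | b                                            # (3)
      assert interior(A) == A - b                                           # (4)
      assert (X - A in T) == (b <= A)       # closed iff it contains its boundary
      assert (A in T) == (not (A & b))      # open iff disjoint from its boundary
  print("Theorem 10 verified")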

TOP. Note 14


| 1.  Topological Spaces
|
| 1.6.  Bases and Subbases
|
| In defining the usual topology for the set of real numbers
| we began with the family !B! of open intervals, and from this
| family constructed the topology !T!.  The same method is useful
| in other situations and we now examine the construction in detail.
|
| A family !B! of sets is a 'base for a topology' !T!
| iff !B! is a subfamily of !T! and for each point x
| of the space, and each neighborhood U of x, there
| is a member V of !B! such that x in V c U.
|
| Thus the family of open intervals is a base for the usual topology of
| the real numbers, in view of the definition of the usual topology and
| the fact that open intervals are open relative to this topology.
|
| There is a simple characterization of bases which is frequently used
| as a definition:  A subfamily !B! of a topology !T! is a base for !T!
| iff each member of !T! is the union of members of !B!.  To prove this
| fact, suppose that !B! is a base for the topology !T! and that U in !T!.
| Let V be the union of all members of !B! which are subsets of U and suppose
| that x in U.  Then there is W in !B! such that x in W c U, and consequently
| x in V.  Hence U c V and since V is surely a subset of U, we have that V = U.
| To show the converse, suppose !B! c !T! and each member of !T! is the union
| of members of !B!.  If U in !T!, then U is the union of the members of
| a subfamily of !B!, and for each x in U there is V in !B! such that
| x in V c U.  Consequently !B! is a base for !T!.
|
| Although this is a very convenient method for the construction
| of topologies, a little caution is necessary because not every
| family of sets is the base for a topology.  For example, let X
| consist of the integers 0, 1, 2, let A consist of 0 and 1, and
| let B consist of 1 and 2.  If !S! is the family whose members
| are X, A, B, and the void set, then !S! cannot be the base
| for a topology because:  by direct computation, the union
| of members of !S! is always a member, so that if !S! were
| the base of a topology that topology would have to be !S!
| itself, but !S! is not a topology because A |^| B is not
| in !S!.  The reason for this situation is made clear by
| the following theorem.
|
| 11.  Theorem.  A family !B! of sets is a base for some
|      topology for the set X = |_|{B : B in !B!} iff
|      for every two members U and V of !B! and each
|      point x in U |^| V there is W in !B! such
|      that x in W and W c U |^| V.
|
| Proof.  If !B! is a base for some topology, U and V are members
| of !B!, and x is in U |^| V, then, since U |^| V is open, there
| is a member of !B! to which x belongs and which is a subset of
| U |^| V.  To show the converse, let !B! be a family with the
| specified property and let !T! be the family of all unions
| of members of !B!.  A union of members of !T! is itself
| a union of members of !B! and is therefore a member
| of !T!, and it is only necessary to show that the
| intersection of two members U and V of !T! is
| a member of !T!.  If x in U |^| V, then we
| may choose U' and V' in !B! such that:
|
| x  in  W  c  U' |^| V'  c  U |^| V.
|
| Consequently U |^| V is the
| union of members of !B!,
| and !T! is a topology.
| þ
|
| JLK, Gen Top, pages 46-47.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
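
Theorem 11 translates directly into a finite test.  The sketch below
(function name mine) decides whether a family of sets is a base for
some topology on the union of its members, and applies the test to
the three-point family from the note, which fails because no member
fits between the point 1 and A |^| B.

  def is_base(fam):
      # Theorem 11: fam is a base for some topology on the union of its
      # members iff for every U, V in fam and every x in U |^| V there
      # is a W in fam with x in W and W c U |^| V.
      return all(any(x in W and W <= U & V for W in fam)
                 for U in fam for V in fam for x in U & V)

  X = frozenset({0, 1, 2})
  A = frozenset({0, 1})
  B = frozenset({1, 2})

  print(is_base({X, A, B, frozenset()}))        # False: nothing in the family fits inside A |^| B = {1}
  print(is_base({frozenset({x}) for x in X}))   # True:  the singletons are a base (for the discrete topology)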

TOP. Note 15


| 1.  Topological Spaces
|
| 1.6.  Bases and Subbases (cont.)
|
| We have just seen that an arbitrary family !S! of sets may fail to be the base
| for any topology.  With admirable persistence we vary the question and enquire
| whether there is a unique topology which is, in some sense, generated by !S!.
| Such a topology should be a topology for the set X which is the union of the
| members of !S!, and each member of !S! should be open relative to the topology;
| that is, !S! should be a subfamily of the topology.  This raises the question:
| Is there a smallest topology for X which contains !S!?  The following simple
| result will enable us to exhibit this smallest topology.
|
| 12.  Theorem.  If !S! is any non-void family of sets
|      the family of all finite intersections of members
|      of !S! is the base for a topology for the set
|      X = |_|{S : S in !S!}.
|
| Proof.  If !S! is a family of sets let !B! be the family of
| finite intersections of members of !S!.  Then the intersection
| of two members of !B! is again a member of !B! and, applying the
| preceding theorem, !B! is the base for a topology.  þ
|
| A family !S! of sets is a 'subbase for a topology' !T! iff
| the family of finite intersections of members of !S! is a
| base for !T! (equivalently, iff each member of !T! is the
| union of finite intersections of members of !S!).  In view
| of the preceding theorem every non-empty family !S! is the
| subbase for some topology, and this topology is, of course,
| uniquely determined by !S!.  It is the smallest topology
| containing !S! (that is, it is a topology containing !S!
| and is a subfamily of every topology containing !S!).
|
| There will generally be many different bases and subbases
| for a topology and the most appropriate choice may depend on
| the problem under consideration.  One rather natural subbase
| for the usual topology for the real numbers is the family of
| half-infinite open intervals;  that is, the family of sets of
| the form {x : x > a} or {x : x < a}.  Each open interval is the
| intersection of two such sets, and this family is consequently a
| subbase.  The family of all sets of the same form with 'a' rational
| is a less obvious and more interesting subbase.  (See problem 1.J.)
|
| JLK, Gen Top, pages 47-48.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
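
Theorem 12 and the definition of a subbase give a two-step recipe:
close the family under finite intersections to get a base, then take
all unions of base members.  A minimal sketch (function name mine),
applied to the same three-point family from the previous note;  the
resulting topology is the smallest one containing that family, and
it now does contain A |^| B = {1}.

  from itertools import combinations

  def topology_from_subbase(S):
      S = list(S)
      # base: intersections of non-void finite subfamilies of S (Theorem 12)
      base = list({frozenset.intersection(*combo)
                   for r in range(1, len(S) + 1)
                   for combo in combinations(S, r)})
      # topology: unions of arbitrary subfamilies of the base (the void union is the void set)
      return {frozenset().union(*combo)
              for r in range(len(base) + 1)
              for combo in combinations(base, r)}

  X = frozenset({0, 1, 2})
  A = frozenset({0, 1})
  B = frozenset({1, 2})
  T = topology_from_subbase({X, A, B})
  print(sorted(map(set, T), key=len))   # the void set, {1}, {0, 1}, {1, 2}, and X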

TOP. Note 16


| 1.  Topological Spaces
|
| 1.6.  Bases and Subbases (cont.)
|
| A space whose topology has a countable base has
| many pleasant properties.  Such spaces are said
| to satisfy the 'second axiom of countability'.
| (The terms 'separable' and 'perfectly separable'
| are also used in this connection, but we shall
| use neither.)
|
| 13.  Theorem.  If A is an uncountable subset of a space whose topology has
|      a countable base, then some point of A is an accumulation point of A.
|
| Proof.  Suppose that no point of A is an accumulation point and that !B! is
| a countable base.  For each x in A there is an open set containing no point
| of A other than x, and since !B! is a base we may choose B_x in !B! such that
| B_x |^| A = {x}.  There is then a one-to-one correspondence between the points
| of A and the members of a subfamily of !B!, and A is therefore countable.  þ
|
| A sharper form of this theorem is stated in problem 1.H.
|
| JLK, Gen Top, pages 48-49.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 17


| 1.  Topological Spaces
|
| 1.6.  Bases and Subbases (cont.)
|
| A set A is 'dense' in a topological space X iff the closure of A is X.
| A topological space X is 'separable' iff there is a countable subset which
| is dense in X.  A separable space may fail to satisfy the second axiom of
| countability.  For example, let X be an uncountable set with the topology
| consisting of the void set and the complements of finite sets.  Then every
| non-finite set is dense because it intersects every non-void open set.  On
| the other hand, suppose that there is a countable base !B! and let x be a
| fixed point of X.  The intersection of the family of all open sets to which
| x belongs must be {x}, because the complement of every other point is open.
| It follows that the intersection of those members of the base !B! to which
| x belongs is {x}.  But the complement of this countable intersection is the
| union of a countable number of finite sets, hence countable, and this is a
| contradiction.  (Less trivial examples occur later.)  There is no difficulty
| in showing that a space with a countable base is separable.
|
| 14.  Theorem.  A space whose topology has a countable base is separable.
|
| Proof.  Choose a point out of each member of the base, thus obtaining
| a countable set A.  The complement of the closure of A is an open set
| which, being disjoint from A, contains no non-void member of the base
| and is hence void.  þ
|
| JLK, Gen Top, page 49.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 18


| 1.  Topological Spaces
|
| 1.6.  Bases and Subbases (cont.)
|
| A family !A! is a 'cover' of a set B iff B is a subset of the
| union |_|{A : A in !A!}, that is, iff each member of B belongs
| to some member of !A!.  The family is an 'open cover' of B iff
| each member of !A! is an open set.  A 'subcover' of !A! is a
| subfamily which is also a cover.
|
| 15.  Theorem (Lindelöf).  There is a countable subcover of each open
|      cover of a subset of a space whose topology has a countable base.
|
| Proof.  Suppose A is a set, !A! is an open cover of A, and !B!
| is a countable base for the topology.  Because each member of !A!
| is the union of members of !B! there is a subfamily !C! of !B! which
| also covers A, such that each member of !C! is a subset of some member
| of !A!.  For each member of !C! we may select a containing member of !A!
| and so obtain a countable subfamily !D! of !A!.  Then !D! is also a cover
| of A because !C! covers A.  Hence !A! has a countable subcover.  þ
|
| A topological space is a 'Lindelöf space' iff each
| open cover of the space has a countable subcover.
|
| JLK, Gen Top, pages 49-50.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 19


| 1.  Topological Spaces
|
| 1.6.  Bases and Subbases (cont.)
|
| Since the second axiom of countability has been mentioned, it seems only
| proper that the first be stated.  This axiom concerns a localized form of
| the notion of a base.  A 'base for the neighborhood system' of a point x,
| or a 'local base' at x, is a family of neighborhoods of x such that every
| neighborhood of x contains a member of the family.  For example, the family
| of open neighborhoods of a point is always a base for the neighborhood system.
| A topological space satisfies the 'first axiom of countability' if the neighborhood
| system of every point has a countable base.  It is clear that each topological space
| which satisfies the second axiom of countability also satisfies the first;  on the
| other hand, any uncountable discrete topological space satisfies the first axiom
| (there is a base for the neighborhood system of each point x which consists
| of the single neighborhood {x}) but not the second (the cover whose members
| are {x} for all x in X has no countable subcover).  The second axiom of
| countability is therefore definitely more restrictive than the first.
|
| It is worth noticing that, if U_1, U_2, ..., U_n, ... is a countable
| local base at x, then a new local base V_1, V_2, ..., V_n, ... can be
| found such that V_n contains V_(n+1) for each n.  The construction is
| simple:  let V_n = |^|{U_k : k =< n}.
|
| A 'subbase for the neighborhood system' of a point x, or a 'local subbase'
| at x, is a family of sets such that the family of all finite intersections of
| members is a local base.  If U_1, U_2, ..., U_n, ... is a countable local subbase,
| then V_1, V_2, ..., V_n, ... where V_n = |^|{U_k : k =< n} is a countable local base.
| Hence the existence of a countable local subbase at each point implies the first axiom
| of countability.
|
| JLK, Gen Top, page 50.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
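
The nesting construction at the end of the note is a one-liner.  A
minimal sketch, with an invented countable family of neighborhoods
(finite sets standing in for neighborhoods of a point, purely for
illustration):

  from itertools import accumulate

  U = [frozenset({0, 1, 2, 3}), frozenset({0, 2, 4}),
       frozenset({0, 1, 2}), frozenset({0, 2})]

  # V_n = the intersection of U_1, ..., U_n : a nested local base
  V = list(accumulate(U, frozenset.intersection))

  print([set(v) for v in V])          # [{0, 1, 2, 3}, {0, 2}, {0, 2}, {0, 2}]
  assert all(V[n + 1] <= V[n] for n in range(len(V) - 1))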

TOP. Note 20


| 1.  Topological Spaces
|
| 1.7.  Relativization, Separation
|
| If (X, !T!) is a topological space and Y is a subset of X
| we may construct a topology !U! for Y which is called the
| 'relative topology', or the 'relativization' of !T! to Y.
|
| The relative topology !U! is defined to be the family of all
| intersections of members of !T! with Y;  that is, U belongs to
| the relative topology !U! iff U = V |^| Y for some !T!-open set V.
| It is not difficult to see that !U! is actually a topology.  Each
| member U of the relative topology !U! is said to be 'open in Y', and
| its relative complement Y~U is 'closed in Y'.  The !U!-closure of a
| subset of Y is its 'closure in Y'.  Each subset Y of X is both open
| and closed in itself, although Y may be neither open nor closed in X.
| The topological space (Y, !U!) is called a 'subspace' of the space
| (X, !T!).  More formally, an arbitrary topological space (Y, !U!)
| is a subspace of another space (X, !T!) iff Y c X and !U! is the
| relativization of !T!.
|
| It is worth noticing that, if (Y, !U!) is a subspace of (X, !T!)
| and (Z, !V!) is a subspace of (Y, !U!), then (Z, !V!) is a subspace
| of  (X, !T!).  This transitivity relation will often be used without
| explicit mention.
|
| JLK, Gen Top, pages 50-51.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
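
The relativization is a one-line comprehension.  The sketch below
(toy topology mine) builds the relative topology on a subset Y,
checks that it is closed under unions and intersections, and shows
the point made in the note that a set can be open in Y without
being open in X.

  X = frozenset({0, 1, 2})
  T = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

  Y = frozenset({1, 2})
  U = {V & Y for V in T}                 # the relative topology on Y
  print(sorted(map(set, U), key=len))    # [set(), {1}, {1, 2}]

  # {1} is open in Y but not open in X
  assert frozenset({1}) in U and frozenset({1}) not in T

  # U really is a topology for Y: closed under pairwise unions and intersections
  assert all(a | b in U and a & b in U for a in U for b in U)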

TOP. Note 21


| 1.  Topological Spaces
|
| 1.7.  Relativization, Separation (cont.)
|
| Suppose that (Y, !U!) is a subspace of (X, !T!) and that A is a subset
| of Y.  Then A may be either !T!-closed or !U!-closed, a point y may
| be either a !U!- or a !T!-accumulation point of A, and A has both
| a !T!- and a !U!-closure.  The relations between these various
| notions are important.
|
| 16.  Theorem.  Let (X, !T!) be a topological space,
|      let (Y, !U!) be a subspace, and
|      let A be a subset of Y.  Then:
|
|      a.  The set A is !U!-closed if and only if it is
|          the intersection of Y and a !T!-closed set.
|
|      b.  A point y of Y is a !U!-accumulation point of A
|          if and only if it is a !T!-accumulation point.
|
|      c.  The !U!-closure of A is the intersection of Y
|          and the !T!-closure of A.
|
| Proof.  The set A is closed in Y iff its relative complement Y~A
| is of the form V |^| Y for some !T!-open set V, but this is true
| iff A = (X~V) |^| Y for some V in !T!.  This proves (a), and (b)
| follows directly from the definition of the relative topology and
| the definition of accumulation point.  The !U!-closure of A is the
| union of A and the set of its !U!-accumulation points, and hence
| by (b) it is the intersection of Y and the !T!-closure of A.  þ
| 
| JLK, Gen Top, pages 51-52.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
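
Parts (a) and (c) of Theorem 16 can be confirmed exhaustively on the
same small example (again my own) used in the previous sketch:  for
every subset A of Y the !U!-closure of A is Y intersected with the
!T!-closure of A, and the !U!-closed sets are exactly the traces on
Y of the !T!-closed sets.

  from itertools import combinations

  X = frozenset({0, 1, 2})
  T = {frozenset(), frozenset({0}), frozenset({0, 1}), X}
  Y = frozenset({1, 2})
  U = {V & Y for V in T}                 # the relative topology on Y

  def powerset(S):
      S = list(S)
      return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

  def closure(A, space, topology):
      return frozenset.intersection(*[space - V for V in topology if A <= space - V])

  for A in powerset(Y):
      # (c): the U-closure of A is Y intersected with the T-closure of A
      assert closure(A, Y, U) == Y & closure(A, X, T)

  # (a): the U-closed sets are the intersections of Y with the T-closed sets
  assert {Y - V for V in U} == {Y & (X - V) for V in T}
  print("Theorem 16 (a) and (c) verified")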

TOP. Note 22


| 1.  Topological Spaces
|
| 1.7.  Relativization, Separation (cont.)
|
| If (Y, !U!) is a subspace of (X, !T!) and Y is open in X,
| then each set open in Y is also open in X because it is the
| intersection of an open set and Y.  A similar statement, with
| "closed" replacing "open" everywhere, is also true.  However,
| knowing that a set is open or closed in a subspace generally
| tells very little about the situation of the set in X.  If X
| is the union of two sets Y and Z and if A is a subset of X
| such that A |^| Y is open in Y and A |^| Z is open in Z,
| then one might hope that A is open in X.  But this is
| not always true, for if Y is an arbitrary subset of X
| and Z = X ~ Y, then Y |^| Y and Y |^| Z are open in Y
| and Z respectively.  There is one important case
| in which this result does hold.
|
| Two subsets A and B are 'separated' in a topological space X
| iff A˜ |^| B and A |^| B˜ are both void.  This definition of
| separation involves the closure operation in X.  However, the
| apparent dependence on the space X is illusory, for A and B are
| separated in X if and only if neither A nor B contains a point or
| an accumulation point of the other.  This condition may be restated
| in terms of the relative topology for A |_| B, in view of part (b)
| of the foregoing theorem, as:  Both A and B are closed in A |_| B
| (or, equivalently, A (or B) is both open and closed in A |_| B) and
| A and B are disjoint.  As an example, notice that the open intervals
| (0, 1) and (1, 2) are separated subsets of the real numbers with the
| usual topology and that there is a point, 1, belonging to the closure
| of both.  However, (0, 1) is not separated from the closed interval
| [1, 2] because 1, which is a member of [1, 2], is an accumulation
| point of (0, 1).
|
| JLK, Gen Top, page 52.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
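
The two descriptions of separation in this note can be compared
mechanically.  In the sketch below (toy space mine:  four points
falling into two open halves), separated(A, B) follows the closure
definition, and for every disjoint pair it is checked against the
relative-topology criterion:  A and B are separated iff both are
closed in A |_| B.

  from itertools import combinations

  X = frozenset({0, 1, 2, 3})
  T = {frozenset(), frozenset({0, 1}), frozenset({2, 3}), X}

  def powerset(S):
      S = list(S)
      return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

  def closure(A):
      return frozenset.intersection(*[X - V for V in T if A <= X - V])

  def separated(A, B):
      return not (closure(A) & B) and not (A & closure(B))

  for A in powerset(X):
      for B in powerset(X):
          if A & B:
              continue                            # only disjoint pairs are of interest
          AB = A | B
          rel = {V & AB for V in T}               # the relative topology on A |_| B
          both_closed = (AB - A in rel) and (AB - B in rel)
          assert separated(A, B) == both_closed
  print("separation criterion verified")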

TOP. Note 23


| 1.  Topological Spaces
|
| 1.7.  Relativization, Separation (cont.)
|
| Three theorems on separation will be needed in the sequel.
|
| 17.  Theorem.  If Y and Z are subsets
|      of a topological space X and both
|      Y and Z are closed or both are open,
|      then Y ~ Z is separated from Z ~ Y.
|
| Proof.  Suppose that Y and Z are closed subsets of X.  Then Y and Z
| are closed in Y |_| Z and therefore Y~Z = ((Y |_| Z) ~ Z) and Z~Y
| are open in Y |_| Z.  It follows that both Y~Z and Z~Y are open
| in (Y~Z) |_| (Z~Y), and since they are complements relative to
| this set both are closed in (Y~Z) |_| (Z~Y).  Consequently
| Y~Z and Z~Y are separated.  A dual argument applies to
| the case where both Y and Z are open in X.  þ
|
| 18.  Theorem.  Let X be a topological space
|      which is the union of subsets Y and Z
|      such that Y ~ Z and Z ~ Y are separated.
|      Then the closure of a subset A of X is the
|      union of the closure in Y of A |^| Y and the
|      closure in Z of A |^| Z.
|
| Proof.  The closure of a union of two sets
| is the union of the closures, and hence:
|
| A˜  =  (A |^| Y)˜  |_|  (A |^| Z~Y)˜.
|
| Consequently:
|
| A˜ |^| Y  =  ((A |^| Y)˜ |^| Y)  |_|  ((A |^| Z~Y)˜ |^| Y).
|
| The set (Z~Y)˜ is disjoint from Y~Z,
| hence (Z~Y)˜ c Z, and it follows that:
|
| (A |^| Z~Y)˜  is a subset of  (A |^| Z)˜ |^| Z.
|
| Similarly:
|
| A˜ |^| Z  is the union of  (A |^| Z)˜ |^| Z
|
| and a subset of  (A |^| Y)˜ |^| Y.
|
| Consequently:
|
| A˜  =  (A˜ |^| Y)  |_|  (A˜ |^| Z)
|
|     =  ((A |^| Y)˜ |^| Y)  |_|  ((A |^| Z)˜ |^| Z)
|
| and the theorem is proved.  þ
|
| 19.  Corollary.  Let X be a topological space
|      which is the union of subsets Y and Z
|      such that Y~Z and Z~Y are separated.
|      Then a subset A of X is closed (open)
|      if   A |^| Y is closed (open) in Y
|      and  A |^| Z is closed (open) in Z.
|
| Proof.  If A |^| Y and A |^| Z are closed in Y and Z respectively,
| then, by the preceding theorem, A is necessarily identical with its
| closure and is therefore closed.  If A |^| Y and A |^| Z are open
| in Y and Z respectively, then Y |^| X~A and Z |^| X~A are closed
| in Y and in Z, and hence X~A is closed and A is open.  þ
|
| JLK, Gen Top, pages 52-53.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 24


| 1.  Topological Spaces
|
| 1.8.  Connected Sets
|
| A topological space (X, !T!) is 'connected' iff X
| is not the union of two non-void separated subsets.
|
| A subset Y of X is connected iff the topological space Y
| with the relative topology is connected.  Equivalently,
| Y is connected iff Y is not the union of two non-void
| separated subsets.  Another equivalence follows from
| the discussion of separation:  A set Y is connected
| iff the only subsets of Y which are both open and
| closed in Y are Y and the void set.  From this
| form it follows at once that any indiscrete
| space is connected.  A discrete space
| containing more than one point is
| not connected.  The real numbers,
| with the usual topology, are
| connected (problem 1.J), but
| the rationals, with the usual
| topology of the reals relativized,
| are not connected.  (For any irrational 'a'
| the sets {x : x < a} and {x : x > a} are separated.)
|
| 20. Theorem.  The closure of a connected set is connected.
|
| Proof.  Suppose that Y is a connected subset of a topological space and that
| Y˜ = A |_| B, where A and B are both open and closed in Y˜.  Then each of
| A |^| Y and B |^| Y is open and closed in Y, and since Y is connected,
| one of these two sets must be void.  Suppose that B |^| Y is void.
| Then Y is a subset of A and consequently Y˜ is a subset of A
| because A is closed in Y˜.  Hence B is void, and it follows
| that Y˜ is connected.  þ
|
| There is another version of this theorem which is apparently
| stronger, which states that, if Y is a connected subset of X
| and if Z is a set such that Y c Z c Y˜, then Z is connected.
| However, the stronger form is an immediate consequence of
| applying the foregoing theorem to Z with the relative
| topology.
|
| JLK, Gen Top, pages 53-54.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
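
The open-and-closed characterization gives an immediate finite test
for connectedness.  A minimal sketch (the three toy topologies are
mine):

  from itertools import combinations

  def powerset(S):
      S = list(S)
      return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

  def is_connected(X, T):
      # connected iff the only subsets both open and closed are the void set and X
      clopen = {A for A in powerset(X) if A in T and X - A in T}
      return clopen == {frozenset(), X}

  X = frozenset({0, 1})
  indiscrete = {frozenset(), X}
  discrete = set(powerset(X))
  sierpinski = {frozenset(), frozenset({0}), X}

  print(is_connected(X, indiscrete))   # True
  print(is_connected(X, discrete))     # False
  print(is_connected(X, sierpinski))   # True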

TOP. Note 25


| 1.  Topological Spaces
|
| 1.8.  Connected Sets (cont.)
|
| 21.  Theorem.  Let !A! be a family of connected subsets
|      of a topological space.  If no two members of !A!
|      are separated, then |_|{A : A in !A!} is connected.
|
| Proof.  Let C be the union of the members of !A! and suppose that D is both
| open and closed in C.  Then for each member A of !A!, we have that A |^| D
| is open and closed in A, and since A is connected either A c D or A c C~D.
| Now if A and B are members of !A! it is impossible that A c D and B c C~D,
| for in this case A and B, being respectively subsets of the separated sets
| D and C~D, would be separated.  Consequently either every member of !A!
| is a subset of C~D and D is void, or every member of !A! is a subset
| of D and C~D is void.  þ
|
| A 'component' of a topological space is a maximal connected subset;
| that is, a connected subset which is properly contained in no other
| connected subset.  A component of a subset A is a component of A with
| the relative topology;  that is, a maximal connected subset of A.  If a
| space is connected, then it is its only component.  If a space is discrete,
| then each component consists of a single point.  Of course, there are many
| spaces which are not discrete which have components consisting of a single
| point -- for example, the space of rational numbers, with the (relativized)
| usual topology.
|
| 22.  Theorem.  Each connected subset of a topological space
|      is contained in a component, and each component is closed.
|      If A and B are distinct components of a space, then A and B
|      are separated.
|
| Proof.  Let A be a non-void connected subset of a topological space and
| let C be the union of all connected sets containing A.  In view of the
| preceding theorem, C is surely connected, and if D is a connected set
| and contains C, then, since D c C, it follows that C = D.  Hence C is
| a component.  (If A is void, and the space is not, a set consisting
| of a single point is contained in a component, and hence so is A.)
| Each component C is connected and hence, by 1.20, the closure C˜
| is connected.  Therefore C is identical with C˜ and C is closed.
| If A and B are distinct components and are not separated, then
| their union is connected, by 1.21, which is a contradiction. þ
|
| It is well to end our remarks on components with a word of caution.
| If two points, x and y, belong to the same component of a topological
| space, then they always lie in the same half of a separation of the space.
| That is, if the space is the union of separated sets A and B, then both
| x and y belong to A or both x and y belong to B.  The converse of this
| proposition is false.  It may happen that two points always lie in the
| same half of a separation but nevertheless lie in different components.
| (See problem 1.P.)
|
| JLK, Gen Top, pages 54-55.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
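
For a finite space the components can be found by brute force:  the
component of a point x is the union of all connected subsets that
contain x.  A minimal sketch (toy space mine, the same two-halves
space as in the separation sketch above):

  from itertools import combinations

  def powerset(S):
      S = list(S)
      return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

  def is_connected_subset(A, T):
      # A, with the relative topology, is connected iff its only clopen subsets are void and A
      rel = {V & A for V in T}
      clopen = {B for B in powerset(A) if B in rel and A - B in rel}
      return clopen == {frozenset(), A}

  def component_of(x, X, T):
      # the union of all connected subsets containing x (cf. Theorem 22)
      return frozenset().union(*[A for A in powerset(X)
                                 if x in A and is_connected_subset(A, T)])

  X = frozenset({0, 1, 2, 3})
  T = {frozenset(), frozenset({0, 1}), frozenset({2, 3}), X}
  print({x: set(component_of(x, X, T)) for x in sorted(X)})
  # {0: {0, 1}, 1: {0, 1}, 2: {2, 3}, 3: {2, 3}}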

TOP. Note 26


With great reluctance I am going to skip Chapter 2 on "Convergence"
and proceed directly to Chapter 3 on "Product and Quotient Spaces".
I am doing this by way of more quickly picking up the all-important
ideas of a "continuous function" and a "homeomorphism", the latter
also known as a "topological transformation".

| 3.  Product and Quotient Spaces
|
| It is the purpose of this chapter to investigate two methods of constructing
| new topological spaces from old.  One of these involves assigning a standard
| sort of topology to the cartesian product of spaces, thus building a new space
| from those originally given.  For example, the Euclidean plane is the product
| space of the real numbers (with the usual topology) with itself, and Euclidean
| n-space is the product of the real numbers n times.  In chapter 4 arbitrary
| cartesian products of the real numbers will serve as standard spaces with
| which one may compare other topological spaces.
|
| The second method of constructing a new space from a given one depends on
| dividing the given space X into equivalence classes, each of which is a point
| of the newly constructed space.  Roughly speaking, we "identify" the points of
| certain subsets of X, so obtaining a new set of points, which is then assigned
| the "quotient" topology.  For example, the equivalence classes of real numbers
| modulo the integers are assigned a topology so that the resulting space is
| a "copy" of the unit circle in the plane.
|
| Both of these methods of constructing spaces are motivated by making certain
| functions continuous.  We therefore begin by defining continuity and proving
| a few simple propositions about it.
|
| JLK, Gen Top, page 84.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Note 27


| 3.  Product and Quotient Spaces
|
| 3.1.  Continuous Functions
|
| For convenience we review some of the terminology and a
| few elementary propositions about functions (chapter 0).
| The words "function", "map", "mapping", "correspondence",
| "operator", and "transformation" are synonymous.
|
| A function f is said to be on X iff its domain is X.  It is to Y,
| or into Y, iff its range is a subset of Y, and it is onto Y iff
| its range is Y.  The value of f at a point x is f(x), and f(x)
| is also called the image under f of x.
|
| If B is a subset of Y, then the inverse under f of B, f^(-1)[B],
| is {x : f(x) in B}.  The inverse under f of the intersection (union)
| of the members of a family of subsets of Y is the intersection (union)
| of the inverses of the members;  that is, if Z_c is a subset of Y for
| each member c of a set C, then:
|
| f^(-1)[ |^| {Z_c : c in C} ]  =  |^| {f^(-1)[Z_c] : c in C},
|
| and similarly for unions.
|
| If y is in Y, then f^(-1)[{y}], the inverse of the
| set whose only member is y, is abbreviated f^(-1)[y].
|
| The image f[A] of a subset A of X is the set of
| all points y such that y = f(x) for some x in A.
|
| The image of the union of a family of subsets of X is
| the union of the images, but, in general, the image of
| the intersection is not the intersection of the images.
|
| A function is one to one iff no two distinct points have
| the same image, and in this case f^(-1) is the function
| inverse to f.
|
| ( Notice that the notation is arranged so that,
|   roughly speaking, square brackets occur in the
|   designations of subsets of the range and domain
|   of a function, and parentheses in the designations
|   of members.  For example, if f is one to one onto Y
|   and y in Y, then f^(-1)(y) is the unique point x of X
|   such that f(x) = y, and f^(-1)[y] = {x}.)
|
| JLK, Gen Top, pages 84-85.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
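
The facts about inverse images reviewed in this note are easy to
check on a concrete function.  A minimal sketch (the function f and
the sets are mine):  the inverse of an intersection is the
intersection of the inverses, while the image of an intersection can
be strictly smaller than the intersection of the images.

  X = {0, 1, 2, 3}
  f = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}     # a function on X to {'a', 'b'}, given as a dict

  def preimage(B):
      return {x for x in X if f[x] in B}

  def image(A):
      return {f[x] for x in A}

  B1, B2 = {'a'}, {'a', 'b'}
  assert preimage(B1 & B2) == preimage(B1) & preimage(B2)
  assert preimage(B1 | B2) == preimage(B1) | preimage(B2)

  A1, A2 = {0, 2}, {1, 3}
  print(image(A1 & A2))            # set(): the image of the (void) intersection
  print(image(A1) & image(A2))     # {'a', 'b'}: the intersection of the images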

TOP. Note 28


| 3.  Product and Quotient Spaces
|
| 3.1.  Continuous Functions (cont.)
|
| A map f of a topological space (X, !T!) into a topological space (Y, !U!)
| is 'continuous' iff the inverse of each open set is open.  More precisely,
| f is continuous with respect to !T! and !U!, or (!T!, !U!)-continuous, iff
| f^(-1)[U] is in !T! for each U in !U!.  The concept depends on the topology
| of both the range and the domain space, but we follow the usual practice
| of suppressing all mention of the topologies when confusion is unlikely.
|
| There are one or two propositions about continuity which are
| quite important, although almost self-evident.  First, if f is
| a continuous function on X to Y and g is a continuous function
| on Y to Z, then the composition g o f is a continuous function
| on X to Z, for (g o f)^(-1)[V]  =  f^(-1)[g^(-1)[V]] for each
| subset V of Z, and using first the continuity of g, then that
| of f, it follows that if V is open so is (g o f)^(-1)[V].
|
| If f is a continuous function on X to Y, and A is a subset
| of X, then the restriction of f to A, f|A, is also continuous
| with respect to the relative topology for A, for if U is open
| in Y, then (f|A)^(-1)[U] = A |^| f^(-1)[U], which is open in A.
| A function f such that f|A is continuous is 'continuous on' A.
| It may also happen that f is continuous on A but fails to be
| continuous on X.
|
| JLK, Gen Top, pages 85-86.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
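
For finite spaces the definition of continuity is a finite check.  A
minimal sketch (spaces, topologies, and maps all mine):  f is
continuous iff the inverse of each open set of the range is open in
the domain.

  X = frozenset({0, 1})
  TX = {frozenset(), frozenset({0}), X}            # a topology on X
  Y = frozenset({'a', 'b'})
  TY = {frozenset(), frozenset({'a'}), Y}          # a topology on Y

  def preimage(f, B):
      return frozenset(x for x in X if f[x] in B)

  def is_continuous(f):
      # (TX, TY)-continuous iff f^(-1)[U] is in TX for each U in TY
      return all(preimage(f, U) in TX for U in TY)

  f = {0: 'a', 1: 'b'}
  g = {0: 'b', 1: 'a'}
  print(is_continuous(f))   # True:  f^(-1)[{'a'}] = {0}, which is open
  print(is_continuous(g))   # False: g^(-1)[{'a'}] = {1}, which is not open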

TOP. Note 29


| 3.  Product and Quotient Spaces
|
| 3.1.  Continuous Functions (cont.)
|
| 1.  Theorem.  If X and Y are topological spaces
|     and f is a function on X to Y, then the
|     following statements are equivalent.
|
|     a.  The function f is continuous.
|
|     b.  The inverse of each closed set is closed.
|
|     c.  The inverse of each member of a subbase for
|         the topology for Y is open.
|
|     d.  For each x in X the inverse of every neighborhood
|         of f(x) is a neighborhood of x.
|
|     e.  For each x in X and each neighborhood U of f(x)
|         there is a neighborhood V of x such that f[V] c U.
|
|     f.  For each net S (or {S_n : n in D}) in X which converges
|         to a point s, the composition f o S ({f(S_n) : n in D})
|         converges to f(s).
|
|     g.  For each subset A of X the image of the closure is a subset
|         of the closure of the image;  that is, f[A˜] c f[A]˜.
|
|     h.  For each subset B of Y, f^(-1)[B]˜ c f^(-1)[B˜].
|
| Proof.
|
| (a <=> b).  This is a simple consequence of the fact that the
| inverse of a function preserves relative complements;  that is,
| f^(-1)[Y ~ B]  =  X ~ f^(-1)[B]  for every subset B of Y.
|
| (a <=> c).  If f is continuous then the inverse of a member of a subbase is
| open because each subbase member is open.  Conversely, since each open set
| V in Y is the union of finite intersections of subbase members, f^(-1)[V]
| is the union of finite intersections of the inverses of subbase members;
| if these are open, then the inverse of each open set is open.
|
| (a => d).  If f is continuous, x in X, and V is a neighborhood of f(x),
| then V contains an open neighborhood W of f(x) and f^(-1)[W] is an open
| neighborhood of x which is a subset of f^(-1)[V];  consequently f^(-1)[V]
| is a neighborhood of x.
|
| (d => e).  Assuming (d), if U is a neighborhood of f(x), then
| f^(-1)[U] is a neighborhood of x such that f[f^(-1)[U]] c U.
|
| (e => f).  Assuming (e), let S be a net in X which converges to a point s.
| Then if U is a neighborhood of f(s) there is a neighborhood V of s such that
| f[V] c U, and since S is eventually in V, f o S is eventually in U.
|
| (f => g).  Assuming (f), let A be a subset of X and s a point of the closure A˜.
| Then there is a net S in A which converges to s, and f o S converges to f(s),
| which is therefore a member of f[A]˜.  Hence f[A˜] c f[A]˜.
|
| (g => h).  Assuming (g), if A = f^(-1)[B], then f[A˜] c f[A]˜ c B˜
| and hence A˜ c f^(-1)[B˜].  That is, f^(-1)[B]˜ c f^(-1)[B˜].
|
| (h => b).  Assuming (h), if B is a closed subset of Y,
| then f^(-1)[B]˜ c f^(-1)[B˜] = f^(-1)[B] and f^(-1)[B]
| is therefore closed.
|
| þ
|
| There is also a localized form of continuity which is useful. *
| A function f on a topological space X to a topological space Y
| is 'continuous at a point' x iff the inverse under f of each
| neighborhood of f(x) is a neighborhood of x.  It is easy to
| give characterizations of the form of 3.1.e and 3.1.f for
| continuity at a point.  Evidently f is continuous iff
| it is continuous at each point of its domain.
|
|
| * If f is defined on a subset A of a topological space,
|   then continuity at points of the closure A˜ may also be
|   defined (see 3.D);  several useful propositions result.
|
| JLK, Gen Top, pages 86-87.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
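
A few of the equivalences in Theorem 1 can be verified exhaustively
for small finite spaces, where nets are not needed.  The sketch
below (spaces mine) runs through every map from a three-point space
to a two-point space and checks that criteria (a), (b), and (g)
agree.

  from itertools import combinations, product

  X = frozenset({0, 1, 2})
  TX = {frozenset(), frozenset({0}), frozenset({0, 1}), X}
  Y = frozenset({3, 4})
  TY = {frozenset(), frozenset({3}), Y}

  def powerset(S):
      S = list(S)
      return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

  def closure(A, space, topology):
      return frozenset.intersection(*[space - V for V in topology if A <= space - V])

  def pre(f, B):
      return frozenset(x for x in X if f[x] in B)

  def image(f, A):
      return frozenset(f[x] for x in A)

  for values in product(sorted(Y), repeat=len(X)):
      f = dict(zip(sorted(X), values))
      a = all(pre(f, U) in TX for U in TY)              # (a) inverse of each open set is open
      b = all(X - pre(f, Y - U) in TX for U in TY)      # (b) inverse of each closed set is closed
      g = all(image(f, closure(A, X, TX)) <= closure(image(f, A), Y, TY)
              for A in powerset(X))                     # (g) f[A~] c f[A]~
      assert a == b == g
  print("criteria (a), (b), (g) agree for all", len(Y) ** len(X), "maps")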

TOP. Note 30


| 3.  Product and Quotient Spaces
|
| 3.1.  Continuous Functions (cont.)
|
| A 'homeomorphism', or 'topological transformation',
| is a continuous one-to-one map f of a topological space X
| onto a topological space Y such that f^(-1) is also continuous.
|
| If there exists a homeomorphism of one space onto another,
| the two spaces are said to be 'homeomorphic' and each
| is a 'homeomorph' of the other.
|
| The identity map of a topological space onto itself is always
| a homeomorphism, and the inverse of a homeomorphism is again
| a homeomorphism.  It is also evident that the composition of
| two homeomorphisms is a homeomorphism.  Consequently the
| collection of topological spaces can be divided into
| equivalence classes such that each topological space
| is homeomorphic to every member of its equivalence
| class and to these spaces only.  Two topological
| spaces are 'topologically equivalent' iff they
| are homeomorphic.
|
| Two discrete spaces, X and Y, are homeomorphic iff there is a one-to-one
| function on X onto Y, that is, iff X and Y have the same cardinal number.
| This is true because every function on a discrete space is continuous,
| regardless of the topology of the range space.  It is also true that
| two indiscrete spaces (the only open sets are the space and the void
| set) are homeomorphic iff there is a one-to-one map of one onto the
| other, because each function into an indiscrete space is continuous
| regardless of the topology of the domain space.  In general, it may
| be quite difficult to discover whether two topological spaces are
| homeomorphic.
|
| The set of all real numbers, with the usual topology, is homeomorphic to the
| open interval (0, 1), with the relative topology, for the function whose
| value at a member x of (0, 1) is (2x-1) / (x(x-1)) is easily proved to be
| a homeomorphism.  However, the interval (0, 1) is not homeomorphic to
| (0, 1) |_| (1, 2), for if f were a homeomorphism (or, in fact, just
| a continuous function) on (0, 1) with range (0, 1) |_| (1, 2),
| then f^(-1)[(0, 1)] would be a proper open and closed subset
| of (0, 1), and (0, 1) is connected.
|
| This little demonstration was achieved by noticing that one of the spaces is
| connected, the other is not, and the homeomorph of a connected space is again
| connected.  A property which when possessed by a topological space is also
| possessed by each homeomorph is a 'topological invariant'.  The proof that
| two spaces are not homeomorphic usually depends on exhibiting a topological
| invariant which is possessed by one but not by the other.  A property which
| is defined in terms of the members of the space and the topology turns out,
| automatically, to be a topological invariant.  Besides connectedness, the
| property of having a countable base for the topology, having a countable
| base for the neighborhood system of each point, being a T_1 space or
| being a Hausdorff space, are all topological invariants.  Formally,
| topology is the study of topological invariants. *
|
| *  A 'topologist' is a man who doesn't know the
|    difference between a doughnut and a coffee cup.
|
| JLK, Gen Top, pages 87-88.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.
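
The explicit homeomorphism mentioned above can at least be
sanity-checked numerically.  The sketch below samples
f(x) = (2x-1)/(x(x-1)) on (0, 1) and confirms that on the sample it
is strictly decreasing (hence one to one) and takes very large
positive values near 0 and very large negative values near 1;  this
is evidence of, not a proof of, the fact that it maps (0, 1)
homeomorphically onto the whole real line.

  def f(x):
      # the map from the note, defined on the open interval (0, 1)
      return (2 * x - 1) / (x * (x - 1))

  xs = [i / 10000 for i in range(1, 10000)]      # sample points in (0, 1)
  ys = [f(x) for x in xs]

  # strictly decreasing on the sample, hence one to one there
  assert all(y1 > y2 for y1, y2 in zip(ys, ys[1:]))

  print(ys[0], ys[-1])    # roughly 9999.0 and -9999.0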

TOP. Note 31


| 3.  Product and Quotient Spaces
|
| 3.2.  Product Spaces
|
| ...
| 
| JLK, Gen Top, pages 88-89.
|
| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

MAT. Mathematical Notes

CAT. Category Theory

Introduction

  1. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg04463.html
  2. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg04466.html
  3. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg04467.html

The above material is excerpted from:

  • Saunders Mac Lane, Categories for the Working Mathematician, 2nd edition, Springer, New York, NY, 1997.

DIF. Differential Geometry

Preface

01.  http://suo.ieee.org/ontology/msg04056.html

1.  Introduction

02.  http://suo.ieee.org/ontology/msg04057.html
03.  http://suo.ieee.org/ontology/msg04058.html

2.  Manifolds And Their Maps

2.1.  Differentiable Manifolds

04.  http://suo.ieee.org/ontology/msg04059.html
05.  http://suo.ieee.org/ontology/msg04060.html
06.  http://suo.ieee.org/ontology/msg04061.html

2.2.  Examples

07.  http://suo.ieee.org/ontology/msg04062.html

2.3.  Manifold Maps

08.  http://suo.ieee.org/ontology/msg04063.html

3.  Tangent Spaces

09.  http://suo.ieee.org/ontology/msg04065.html

3.1.  The Tangent Space of the Sphere ...

The above material is excerpted from:

| Brian F. Doolin & Clyde F. Martin,
|'Introduction to Differential Geometry for Engineers',
| Marcel Dekker, New York, NY, 1990.

HOC. Higher Order Categorical Logic

Part 0.  Introduction to Category Theory

1.  Categories and Functors

01.  http://suo.ieee.org/ontology/msg03373.html
02.  http://suo.ieee.org/ontology/msg03375.html
03.  http://suo.ieee.org/ontology/msg03376.html
04.  http://suo.ieee.org/ontology/msg03377.html
05.  http://suo.ieee.org/ontology/msg03378.html
06.  http://suo.ieee.org/ontology/msg03381.html

2.  Natural Transformations

07.  http://suo.ieee.org/ontology/msg03383.html
08.  http://suo.ieee.org/ontology/msg03384.html
09.  http://suo.ieee.org/ontology/msg03392.html
10.  http://suo.ieee.org/ontology/msg03393.html
11.  http://suo.ieee.org/ontology/msg03394.html
12.  http://suo.ieee.org/ontology/msg03395.html

Part 1.  Cartesian Closed Categories & Lambda Calculus

Introduction to Part 1

13.  http://suo.ieee.org/ontology/msg03396.html

Historical Perspective on Part 1

14.  http://suo.ieee.org/ontology/msg03398.html
15.  http://suo.ieee.org/ontology/msg03399.html
16.  http://suo.ieee.org/ontology/msg03400.html
17.  http://suo.ieee.org/ontology/msg03401.html
18.  http://suo.ieee.org/ontology/msg03402.html

1.  Propositional Calculus as a Deductive System

19.  http://suo.ieee.org/ontology/msg03403.html
20.  http://suo.ieee.org/ontology/msg03404.html
21.  http://suo.ieee.org/ontology/msg03405.html
22.  http://suo.ieee.org/ontology/msg03406.html

2.  The Deduction Theorem

23.  http://suo.ieee.org/ontology/msg03409.html

3.  Cartesian Closed Categories Equationally Presented

24.  http://suo.ieee.org/ontology/msg03410.html
25.  http://suo.ieee.org/ontology/msg03411.html
26.  http://suo.ieee.org/ontology/msg03412.html

Back to Part 0

3.  Adjoint Functors

27.  http://suo.ieee.org/ontology/msg03415.html
28.  http://suo.ieee.org/ontology/msg03416.html
29.  http://suo.ieee.org/ontology/msg03417.html
30.  http://suo.ieee.org/ontology/msg03418.html

The above material is excerpted from:

| Lambek, J. & Scott, P.J.,
|'Introduction To Higher Order Categorical Logic',
| Cambridge University Press, Cambridge, UK, 1986.
|
| http://uk.cambridge.org/mathematics/catalogue/0521356539/

INF. Information Flow

...

The above material is excerpted from:

| Jon Barwise & Jerry Seligman,
|'Information Flow, The Logic of Distributed Systems',
| Cambridge University Press, Cambridge, UK, 1997.

MOD. Model Theory

1.  Introduction

1.1.  What Is Model Theory?

01.  http://suo.ieee.org/ontology/msg03985.html
02.  http://suo.ieee.org/ontology/msg03986.html
03.  http://suo.ieee.org/ontology/msg03987.html

1.2.  Model Theory for Sentential Logic

04.  http://suo.ieee.org/ontology/msg03988.html
05.  http://suo.ieee.org/ontology/msg03989.html
06.  http://suo.ieee.org/ontology/msg03991.html
07.  http://suo.ieee.org/ontology/msg03992.html
08.  http://suo.ieee.org/ontology/msg03993.html
09.  http://suo.ieee.org/ontology/msg03994.html
10.  http://suo.ieee.org/ontology/msg03995.html
11.  http://suo.ieee.org/ontology/msg03996.html
12.  http://suo.ieee.org/ontology/msg03997.html
13.  http://suo.ieee.org/ontology/msg03999.html
14.  http://suo.ieee.org/ontology/msg04000.html
15.  http://suo.ieee.org/ontology/msg04001.html
16.  http://suo.ieee.org/ontology/msg04002.html
17.  http://suo.ieee.org/ontology/msg04003.html
18.  http://suo.ieee.org/ontology/msg04004.html

1.3.  Languages, Models, and Satisfaction

19.  http://suo.ieee.org/ontology/msg04005.html
20.  http://suo.ieee.org/ontology/msg04006.html
21.  http://suo.ieee.org/ontology/msg04007.html
22.  http://suo.ieee.org/ontology/msg04008.html
23.  http://suo.ieee.org/ontology/msg04009.html
24.  http://suo.ieee.org/ontology/msg04010.html
25.  http://suo.ieee.org/ontology/msg04011.html
26.  http://suo.ieee.org/ontology/msg04012.html
27.  http://suo.ieee.org/ontology/msg04016.html
28.  http://suo.ieee.org/ontology/msg04017.html
29.  http://suo.ieee.org/ontology/msg04019.html
30.  http://suo.ieee.org/ontology/msg04020.html
31.  http://suo.ieee.org/ontology/msg04021.html

1.4.  Theories and Examples of Theories

32.  http://suo.ieee.org/ontology/msg04022.html
33.  http://suo.ieee.org/ontology/msg04023.html
34.  http://suo.ieee.org/ontology/msg04024.html
35.  http://suo.ieee.org/ontology/msg04025.html
36.  http://suo.ieee.org/ontology/msg04026.html
37.  http://suo.ieee.org/ontology/msg04027.html
38.  http://suo.ieee.org/ontology/msg04028.html

1.5.  Elimination of Quantifiers

39.  http://suo.ieee.org/ontology/msg04029.html

The above material is excerpted from:

| C.C. Chang and H.J. Keisler, 'Model Theory',
| North-Holland, Amsterdam, Netherlands, 1973.

SEM. Program Semantics

Preface

01.  http://suo.ieee.org/ontology/msg03884.html

1.  An Introduction to Denotational Semantics

1.1.  Syntax and Semantics

02.  http://suo.ieee.org/ontology/msg03885.html
03.  http://suo.ieee.org/ontology/msg03886.html
04.  http://suo.ieee.org/ontology/msg03887.html

1.2.  A Simple Fragment of Pascal

05.  http://suo.ieee.org/ontology/msg03890.html
06.  http://suo.ieee.org/ontology/msg03895.html
07.  http://suo.ieee.org/ontology/msg03896.html
08.  http://suo.ieee.org/ontology/msg03898.html
09.  http://suo.ieee.org/ontology/msg03904.html
10.  http://suo.ieee.org/ontology/msg03905.html

1.3.  A Functional Programming Fragment

11.  http://suo.ieee.org/ontology/msg03906.html
12.  http://suo.ieee.org/ontology/msg03909.html
13.  http://suo.ieee.org/ontology/msg03910.html
14.  http://suo.ieee.org/ontology/msg03911.html
15.  http://suo.ieee.org/ontology/msg03912.html
16.  http://suo.ieee.org/ontology/msg03915.html
17.  http://suo.ieee.org/ontology/msg03919.html

1.4.  Multifunctions

18.  http://suo.ieee.org/ontology/msg03926.html
19.  http://suo.ieee.org/ontology/msg03927.html
20.  http://suo.ieee.org/ontology/msg03929.html

1.5.  A Preview of Partially Additive Semantics

21.  http://suo.ieee.org/ontology/msg03930.html
22.  http://suo.ieee.org/ontology/msg03932.html
23.  http://suo.ieee.org/ontology/msg03933.html
24.  http://suo.ieee.org/ontology/msg03934.html
25.  http://suo.ieee.org/ontology/msg03935.html
26.  http://suo.ieee.org/ontology/msg03938.html
27.  http://suo.ieee.org/ontology/msg03939.html
28.  http://suo.ieee.org/ontology/msg03942.html
29.  http://suo.ieee.org/ontology/msg03944.html
30.  http://suo.ieee.org/ontology/msg03945.html

2.  An Introduction to Category Theory

31.  http://suo.ieee.org/ontology/msg03946.html

2.1.  The Definition of a Category

32.  http://suo.ieee.org/ontology/msg03947.html
33.  http://suo.ieee.org/ontology/msg03949.html
34.  http://suo.ieee.org/ontology/msg03950.html
35.  http://suo.ieee.org/ontology/msg03953.html
36.  http://suo.ieee.org/ontology/msg03954.html

2.2.  Isomorphism, Duality, and Zero Objects

37.  http://suo.ieee.org/ontology/msg03955.html
38.  http://suo.ieee.org/ontology/msg03956.html
39.  http://suo.ieee.org/ontology/msg03958.html
40.  http://suo.ieee.org/ontology/msg03960.html
41.  http://suo.ieee.org/ontology/msg03963.html
42.  http://suo.ieee.org/ontology/msg03977.html
43.  http://suo.ieee.org/ontology/msg03979.html
44.  http://suo.ieee.org/ontology/msg04013.html

2.3.  Products and Coproducts

45.  http://suo.ieee.org/ontology/msg04014.html
46.  http://suo.ieee.org/ontology/msg04015.html
47.  http://suo.ieee.org/ontology/msg04018.html
48.  http://suo.ieee.org/ontology/msg04037.html

The above material is excerpted from:

| Ernest G. Manes & Michael A. Arbib,
|'Algebraic Approaches to Program Semantics',
| Springer-Verlag, New York, NY, 1986.

SET. Set Theory

01.  http://suo.ieee.org/ontology/msg04082.html

Appendix.  Elementary Set Theory

02.  http://suo.ieee.org/ontology/msg04083.html

A.1.  The Classification Axiom Scheme

03.  http://suo.ieee.org/ontology/msg04084.html
04.  http://suo.ieee.org/ontology/msg04086.html
05.  http://suo.ieee.org/ontology/msg04088.html

A.2.  Elementary Algebra of Classes

06.  http://suo.ieee.org/ontology/msg04089.html
07.  http://suo.ieee.org/ontology/msg04091.html
08.  http://suo.ieee.org/ontology/msg04092.html
09.  http://suo.ieee.org/ontology/msg04093.html
10.  http://suo.ieee.org/ontology/msg04094.html

A.3.  Existence of Sets

11.  http://suo.ieee.org/ontology/msg04095.html
12.  http://suo.ieee.org/ontology/msg04096.html
13.  http://suo.ieee.org/ontology/msg04097.html

A.4.  Ordered Pairs:  Relations

14.  http://suo.ieee.org/ontology/msg04098.html
15.  http://suo.ieee.org/ontology/msg04099.html

A.5.  Functions

16.  http://suo.ieee.org/ontology/msg04100.html

Links 2 through 16 of the above material are
selected and transcribed into plaintext from:

| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

TOP. Topology

1.  Topological Spaces

1.1.  Topologies and Neighborhoods

01.  http://suo.ieee.org/ontology/msg03863.html
02.  http://suo.ieee.org/ontology/msg03867.html
03.  http://suo.ieee.org/ontology/msg03868.html
04.  http://suo.ieee.org/ontology/msg03869.html

1.2.  Closed Sets

05.  http://suo.ieee.org/ontology/msg03870.html

1.3.  Accumulation Points

06.  http://suo.ieee.org/ontology/msg03871.html
07.  http://suo.ieee.org/ontology/msg03872.html

1.4.  Closure

08.  http://suo.ieee.org/ontology/msg03874.html
09.  http://suo.ieee.org/ontology/msg03880.html

1.5.  Interior and Boundary

10.  http://suo.ieee.org/ontology/msg03882.html
11.  http://suo.ieee.org/ontology/msg03883.html
12.  http://suo.ieee.org/ontology/msg03888.html
13.  http://suo.ieee.org/ontology/msg03889.html

1.6.  Bases and Subbases

14.  http://suo.ieee.org/ontology/msg03892.html
15.  http://suo.ieee.org/ontology/msg03893.html
16.  http://suo.ieee.org/ontology/msg03894.html
17.  http://suo.ieee.org/ontology/msg03899.html
18.  http://suo.ieee.org/ontology/msg03900.html
19.  http://suo.ieee.org/ontology/msg03903.html

1.7.  Relativization, Separation

20.  http://suo.ieee.org/ontology/msg03908.html
21.  http://suo.ieee.org/ontology/msg03914.html
22.  http://suo.ieee.org/ontology/msg03916.html
23.  http://suo.ieee.org/ontology/msg03917.html

1.8.  Connected Sets

24.  http://suo.ieee.org/ontology/msg03918.html
25.  http://suo.ieee.org/ontology/msg03920.html

2.  Convergence [omitted]

3.  Product and Quotient Spaces

26.  http://suo.ieee.org/ontology/msg03921.html

3.1.  Continuous Functions

27.  http://suo.ieee.org/ontology/msg03922.html
28.  http://suo.ieee.org/ontology/msg03923.html
29.  http://suo.ieee.org/ontology/msg03924.html
30.  http://suo.ieee.org/ontology/msg03925.html

3.2.  Product Spaces ...

The above material is excerpted from:

| John L. Kelley, 'General Topology',
| Van Nostrand Reinhold, New York, NY, 1955.

MAT. Mathematical Notes • New Versions

CAT. Category Theory • Ontology List

Introduction

01.  http://suo.ieee.org/ontology/msg04789.html
02.  http://suo.ieee.org/ontology/msg04790.html
03.  http://suo.ieee.org/ontology/msg04791.html
04.  http://suo.ieee.org/ontology/msg04792.html
05.  http://suo.ieee.org/ontology/msg04793.html
06.  http://suo.ieee.org/ontology/msg04794.html
07.  http://suo.ieee.org/ontology/msg04795.html

1.  Categories, Functors, and Natural Transformations

1.1.  Axioms for Categories

08.  http://suo.ieee.org/ontology/msg04796.html
09.  http://suo.ieee.org/ontology/msg04892.html
10.  http://suo.ieee.org/ontology/msg04893.html

1.2.  Categories

11.  http://suo.ieee.org/ontology/msg04894.html
12.  http://suo.ieee.org/ontology/msg04895.html
13.  http://suo.ieee.org/ontology/msg04896.html
14.  http://suo.ieee.org/ontology/msg04897.html
15.  http://suo.ieee.org/ontology/msg04898.html
16.  http://suo.ieee.org/ontology/msg04899.html

1.3.  Functors

17.  http://suo.ieee.org/ontology/msg04900.html
18.  http://suo.ieee.org/ontology/msg04901.html
19.  http://suo.ieee.org/ontology/msg04903.html
20.  http://suo.ieee.org/ontology/msg04904.html
21.  http://suo.ieee.org/ontology/msg04905.html
22.  http://suo.ieee.org/ontology/msg04906.html

1.4.  Natural Transformations

23.  http://suo.ieee.org/ontology/msg04907.html
24.

The above material is excerpted from:

| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

CAT. Category Theory • Inquiry List

Introduction

01.  http://stderr.org/pipermail/inquiry/2003-May/000463.html
02.  http://stderr.org/pipermail/inquiry/2003-May/000464.html
03.  http://stderr.org/pipermail/inquiry/2003-May/000465.html
04.  http://stderr.org/pipermail/inquiry/2003-May/000466.html
05.  http://stderr.org/pipermail/inquiry/2003-May/000467.html
06.  http://stderr.org/pipermail/inquiry/2003-May/000468.html
07.  http://stderr.org/pipermail/inquiry/2003-May/000469.html

1.  Categories, Functors, and Natural Transformations

1.1.  Axioms for Categories

08.  http://stderr.org/pipermail/inquiry/2003-May/000470.html
09.  http://stderr.org/pipermail/inquiry/2003-July/000621.html
10.  http://stderr.org/pipermail/inquiry/2003-July/000622.html

1.2.  Categories

11.  http://stderr.org/pipermail/inquiry/2003-July/000623.html
12.  http://stderr.org/pipermail/inquiry/2003-July/000624.html
13.  http://stderr.org/pipermail/inquiry/2003-July/000625.html
14.  http://stderr.org/pipermail/inquiry/2003-July/000626.html
15.  http://stderr.org/pipermail/inquiry/2003-July/000627.html
16.  http://stderr.org/pipermail/inquiry/2003-July/000628.html

1.3.  Functors

17.  http://stderr.org/pipermail/inquiry/2003-July/000629.html
18.  http://stderr.org/pipermail/inquiry/2003-July/000630.html
19.  http://stderr.org/pipermail/inquiry/2003-July/000632.html
20.  http://stderr.org/pipermail/inquiry/2003-July/000633.html
21.  http://stderr.org/pipermail/inquiry/2003-July/000634.html
22.  http://stderr.org/pipermail/inquiry/2003-July/000635.html

1.4.  Natural Transformations

23.  http://stderr.org/pipermail/inquiry/2003-July/000636.html
24.  ...

The above material is excerpted from:

| Saunders Mac Lane,
|'Categories for the Working Mathematician',
| 2nd edition, Springer, New York, NY, 1997.

MAT. Mathematical Notes • Meta Links

Inquiry List

⌑⌑⌑ http://web.archive.org/web/20150302021003/http://stderr.org/pipermail/inquiry/2003-April/thread.html#331
MAT http://web.archive.org/web/20150302021042/http://stderr.org/pipermail/inquiry/2003-April/000331.html
CAT http://web.archive.org/web/20150302021042/http://stderr.org/pipermail/inquiry/2003-April/000332.html
DIF http://web.archive.org/web/20070309000609/http://stderr.org/pipermail/inquiry/2003-April/000333.html
HOC http://web.archive.org/web/20070314023056/http://stderr.org/pipermail/inquiry/2003-April/000334.html
INF http://web.archive.org/web/20061013224035/http://stderr.org/pipermail/inquiry/2003-April/000335.html
MOD http://web.archive.org/web/20150302033409/http://stderr.org/pipermail/inquiry/2003-April/000336.html
SEM http://web.archive.org/web/20150302033410/http://stderr.org/pipermail/inquiry/2003-April/000337.html
SET http://web.archive.org/web/20070307111148/http://stderr.org/pipermail/inquiry/2003-April/000338.html
TOP http://web.archive.org/web/20070303021001/http://stderr.org/pipermail/inquiry/2003-April/000339.html

Ontology List

⌑⌑⌑ http://web.archive.org/web/20080620074754/http://suo.ieee.org/ontology/thrd15.html#04731
MAT http://web.archive.org/web/20060722151021/http://suo.ieee.org/ontology/msg04731.html
CAT http://web.archive.org/web/20070302105800/http://suo.ieee.org/ontology/msg04732.html
DIF http://web.archive.org/web/20070302105811/http://suo.ieee.org/ontology/msg04733.html
HOC http://web.archive.org/web/20070302105822/http://suo.ieee.org/ontology/msg04734.html
INF http://web.archive.org/web/20070302105834/http://suo.ieee.org/ontology/msg04735.html
MOD http://web.archive.org/web/20070304205818/http://suo.ieee.org/ontology/msg04736.html
SEM http://web.archive.org/web/20070302105846/http://suo.ieee.org/ontology/msg04737.html
SET http://web.archive.org/web/20070302105857/http://suo.ieee.org/ontology/msg04738.html
TOP http://web.archive.org/web/20071007170953/http://suo.ieee.org/ontology/msg04739.html