A LOGIC OF CONCEPTS
an informal description of a natural language logic incorporating relevance and distributive logics and highlighting interpretive paradox avoidance
The classical model of formal predicate calculus has a number of problems, of which two of the most notable are its treatment of existence and related concepts and the ease with which certain kinds of paradox arise. Some of the issues concerning existence have been pointed out in the (unfinished) article: Existence - what it is not and what it might be. The scheme described here is the main outcome of various attempts to overcome some of the deficiencies of classical predicate calculus without unduly compromising its scope. It is a combination of an extended (classical) propositional calculus with a quantificational or "distributive" calculus that reflects the logical relations depicted by Euler's circles. I call it a logic of concepts (or sets or ideas) because those are the natural language elements for which the internal variables or relations of variables stand. The machinery of the system does not comply with the definition of "proposition" to which I am firmly committed (see Six kinds of proposition...#2), and for additional reasons noted below in the "characteristics of the distributive component". However, I shall use "propositional" terminology wherever the new system matches the classical system.
DISTRIBUTIVE PREDICATE CALCULUS
We'll go from back to front, beginning with the distributive aspect because it was, in fact, the starting point for this system. There's no connection between this and the relevance component, except that they both attempt to reflect natural language and avoid certain interpretive paradoxes of standard formal logic.
The main characteristics of the distributive component (which replaces the predicate/quantifier extension of standard logic) are that it contains no individual variables (x, y etc) and no negated predicate variables (~f, ~g etc), and in its basic form it does not distinguish between operations, relations and proper subsets, between subject and predicate, between affirmative and subordinate locutions (e.g. "swans are white" and "white swans"), between "existing" and "non-existing" things, or between things, stuff and attributes (e.g. swans, water and whiteness). I do not consider any of these divisions to be philosophically or logically fundamental. For example, one can easily imagine a world in which, say, colours are the principal objects of perception while physical form is attributive or predicative in nature. (Indeed it would not be at all surprising to discover that some animals in the present world enjoyed this inverted view of reality.) However, some of these distinctions can be contrived by adding them in as concepts (such as "existence" - see interpretation B below). The propositional component is similar to classical propositional calculus with "deducibility" and "relevance" qualifiers incorporated.
1. Standard propositional calculus
Propositional variables are p, q, r ...
To explain how the distributive component works, only the following connectives of propositional calculus will be used:
~ negation ("it is not the case that")
& conjunction ("and")
V disjunction ("either ... or ... or both")
⊃ implication ("implies")
2. Distributive component
Set names (concept variables) are f, g, h ...
There are only two distributors, used to indicate relations between sets or "operations" on sets, or in certain circumstances to denote the subsets so formed (see "Interpretations, C"). They are defined (by example) as follows:
f ^ g engagement: a part of f (which) engages g
f < g disengagement: a part of f (which) is disengaged from g
(f ^ g), (f < g) etc are called distributive terms.
There are just five independent relationships between any two sets, as illustrated by Euler diagrams, and these can be represented symbolically by formulae combining distributive terms with propositional connectives as follows:
1. Intersection (f^g)&(f<g)&(g<f)
2. Inclusion (1) (f^g)&(g<f)&~(f<g)
3. Inclusion (2) (f^g)&(f<g)&~(g<f)
4. Coincidence (f^g)&~(f<g)&~(g<f)
5. Isolation ~(f^g) (no part of f engages g)
Since these are five mutually exclusive cases they could represent the headings of a five-column truth table which would have 32 rows representing all possible combinations of these cases.
Conversely, translating the distributors in terms of the diagrams, f ^ g means "1, 2, 3 or 4" and f < g means "1, 3 or 5".
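The five cases and the two translations can be checked mechanically. The following is a minimal sketch (my own illustration, not part of the article's formalism): f and g are modelled as nonempty subsets of a small universe, f^g holds when they share a member, f<g when f has a member outside g. The code verifies that exactly one of the five Euler cases applies to every pair, and that the distributors translate as stated.

```python
from itertools import product

def eng(f, g): return bool(f & g)          # f ^ g (engagement)
def dis(f, g): return bool(f - g)          # f < g (disengagement)

def case(f, g):
    if not eng(f, g):                      return 5  # isolation
    if dis(f, g) and dis(g, f):            return 1  # intersection
    if not dis(f, g) and dis(g, f):        return 2  # inclusion (1): f within g
    if dis(f, g) and not dis(g, f):        return 3  # inclusion (2): g within f
    return 4                                         # coincidence

U = range(4)
nonempty = [s for s in
            (frozenset(i for i in U if b[i]) for b in product([0, 1], repeat=4))
            if s]
for f in nonempty:
    for g in nonempty:
        c = case(f, g)
        assert eng(f, g) == (c in (1, 2, 3, 4))      # f ^ g means "1, 2, 3 or 4"
        assert dis(f, g) == (c in (1, 3, 5))         # f < g means "1, 3 or 5"
print(len(nonempty) ** 2, "pairs classified")        # 225 pairs classified
```

Since the five conditions are distinct truth-value combinations of (f^g), (f<g) and (g<f), mutual exclusivity holds by construction.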
Axioms/theorems of distribution
Let A, B, C be replaceable by set names or evaluable distributive terms. Let * be replaceable by any distributor. Then every evaluable distributive term has the form (A*B), and there is an assumptivity axiom:
(A *1 (B *2 C)) ⊃ (B *2 C)
where *1 may be the same or different from *2 and *2 represents the same distributor on each occurrence.
Note that expressions of the form (A*B), when standing alone or when connected by propositional connectives, should be thought of (and treated as) "propositions" in the classical sense, though not in my preferred sense. However, secondary, tertiary etc occurrences of such expressions, i.e. when A or B is itself an expression of the form (A*B), are more likely to be regarded as noun clauses corresponding to subsets (see "Interpretations, C").
The following terminology derives from the propositional system described in Six kinds of proposition..., (especially see classification chart):
orthology (orthologous) - this corresponds to Wittgenstein's tautology, i.e. a proposition or expression which is "necessarily" true, or true for all assignments of truth values to its variables (or elementary components)
untenability (untenable) - this corresponds to Wittgenstein's inconsistency, i.e. a proposition or expression which is "necessarily" false, or false for all assignments of truth values to its variables (or elementary components)
solubility (soluble) - a proposition or expression which is either orthologous or untenable
insolubility (insoluble) - a proposition or expression which is not soluble
Under a normal interpretation only the following purely distributive expressions are soluble: (A^A), which is orthologous, and (A<A), which is untenable (together with their negations).
However, while (A^A) remains orthologous under every interpretation of the system, (A<A) may change its logical status under alternative interpretations (see below).
Together with the assumptivity axiom and the normal rules of propositional calculus, the following are significant orthologies of the combined system under a normal interpretation, but do not necessarily represent either a concise or a complete set of axioms:
1. (f^g) ⊃ (g^f)
2. ~(f<g) ⊃ (f^g)
3. (f^(g^h)) ⊃ ((f^g)^h)
4. ((f<g)^h) ⊃ (f^h)
5a. ((f^g) & ~(g<h)) ⊃ (f^h)
5b. (~(f<g) & ~(g<h)) ⊃ ~(f<h)
5c. (~(f<g) & ~(f<h)) ⊃ (g^h)
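Orthologies 1-5c can be confirmed by brute force. This is my own sketch, under the natural reading: sets are nonempty subsets of a 4-element universe, f^g means the sets share a member, f<g means f has a member outside g. Nested terms denote subsets, as in interpretation C: (g^h) is the overlap of g and h, and (f<g) is the part of f outside g.

```python
from itertools import product

imp = lambda a, b: (not a) or b            # material implication
eng = lambda a, b: bool(a & b)             # A ^ B
dis = lambda a, b: bool(a - b)             # A < B

U = range(4)
nonempty = [s for s in
            (frozenset(i for i in U if b[i]) for b in product([0, 1], repeat=4))
            if s]

for f, g, h in product(nonempty, repeat=3):
    assert imp(eng(f, g), eng(g, f))                               # 1
    assert imp(not dis(f, g), eng(f, g))                           # 2
    assert imp(eng(f, g & h), eng(f & g, h))                       # 3
    assert imp(eng(f - g, h), eng(f, h))                           # 4
    assert imp(eng(f, g) and not dis(g, h), eng(f, h))             # 5a
    assert imp(not dis(f, g) and not dis(g, h), not dis(f, h))     # 5b
    assert imp(not dis(f, g) and not dis(f, h), eng(g, h))         # 5c
print("1-5c hold on all", len(nonempty) ** 3, "triples")           # 3375 triples
```

Note that 2 and 5c depend on the sets being nonempty, which is what the restrictive "theorem" discussed later (∃f & ∃g & ...) builds into the classical simulation.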
Although Euler's circles can depict the precise meaning of an expression, in practice they are far too cumbersome. As a rough estimate, with just three variables (set names) there are well over 100 unique combinations of circles, which would expand to a truth table of around 10^30 relations (a much larger number than the number of stars in the universe!). It's almost impossible to make sense of any but a very small fraction of these.
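The "well over 100" estimate can be made exact. In this sketch of my own, three circles divide the page into 7 inner regions (f-only, g-only, h-only, fg, fh, gh, fgh); marking each region occupied or empty, and requiring each of f, g and h to be nonempty, yields the number of distinct three-variable Euler configurations.

```python
from itertools import product

REGIONS = ['f', 'g', 'h', 'fg', 'fh', 'gh', 'fgh']   # circles covering each region

count = 0
for occ in product([False, True], repeat=7):
    occupied = {r for r, o in zip(REGIONS, occ) if o}
    if all(any(v in r for r in occupied) for v in 'fgh'):
        count += 1
print(count)   # 109 configurations; a truth table over them would have
               # 2**109 rows, of the order the text estimates
```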
Though less painful, similar considerations apply to formulae of the type A*(B*C) and more complicated nested distributive formulae. In practice ordinary language is more often reflected by formulae such as (A*B) % (B*C) where % is a propositional connective.
The normal interpretation of this system is called "natural distribution", which is a form of quantification free from allusions to existence and number. In my view it correctly represents the normal English usage of "all", "some" and "no" ("none"), i.e. the proper logical relations between these terms, avoiding the idiosyncrasies of Aristotelian and classical quantification. In this interpretation (f^g) is translated as "some f is g" and (f<g) as "some f is not g", so ~(f<g) means "all f is g" and ~(f^g) means "no f is g". It's possible, of course, to introduce new symbols to cover these cases, as also the cases depicted by Euler's circles, which are translated as follows:
1. Some f is g, some f is not g and some g is not f
2. All f is g and some g is not f
3. All g is f and some f is not g
4. All f is g and all g is f
5. No f is g
It's possible to simulate the distributive aspects of this logic using classical predicate calculus by introducing a restrictive "theorem". For arguments involving only two variables, the theorem is:
∃f & ∃g & ∃~(f V g)
and the resultant system can also be represented by a "truth table" of 5 columns;
or for arguments involving any number of variables:
∃f & ∃g & ∃h & .... & ∃~(f V g V h V .....)
Interestingly, a slight modification of the 2-variables restriction results in a system akin to Aristotelian logic:
∃f & ∃g & ∃~f & ∃~g
which can be represented by seven unique diagrams and thus has 128 possible relations (including orthology and untenability).
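Both restriction counts can be verified by enumeration. In this sketch (the encoding is my own), two circles divide the page into four regions - f-only, g-only, the overlap, and the outside - and each restriction says which combinations of empty/occupied regions are admissible.

```python
from itertools import product

def diagrams(restriction):
    count = 0
    for f_only, g_only, overlap, outside in product([False, True], repeat=4):
        env = dict(f=f_only or overlap, g=g_only or overlap,       # ∃f, ∃g
                   not_f=g_only or outside, not_g=f_only or outside,
                   outside=outside)                                # ∃~(f V g)
        if restriction(env):
            count += 1
    return count

# ∃f & ∃g & ∃~(f V g): exactly the five Euler-circle cases
assert diagrams(lambda e: e['f'] and e['g'] and e['outside']) == 5
# ∃f & ∃g & ∃~f & ∃~g: the Aristotelian variant - seven diagrams,
# hence 2**7 = 128 possible relations
assert diagrams(lambda e: e['f'] and e['g'] and e['not_f'] and e['not_g']) == 7
```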
The main reasons for alienating quantification from the concept of existence are outlined in Section 3 of the yet-to-be-completed article Existence - what it is not and what it might be.
B. Existence
It's possible to assume that every variable and term in a normal use of the system is existential. It's also possible to assume that no normal usage is existential, in which case any reference to existence can be introduced as a kind of predicate, ε, where the meaning/context of ε has been suitably defined. Thus "Some f exists" is represented by f ^ ε. The problem with this, however, is that some uses of ε might need to be barred, as they don't always make good sense. This is a matter of dispute, but normally one might want to block such statements as "Some f does not exist" and "All f exists". This can be accomplished by introducing a rule for ε to the effect that A < ε is ill-formed.
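The barring rule is easy to mechanise. The following is a hypothetical well-formedness checker (the representation is my own): distributive terms are tuples (A, distributor, B), set names are strings, and 'ε' is the existence concept. The single extra rule is that A < ε is ill-formed, which bars "Some f does not exist" and, because ~(A<ε) contains the barred term, "All f exists" as well.

```python
def well_formed(expr):
    if isinstance(expr, str):                    # a set name such as 'f' or 'ε'
        return True
    a, op, b = expr
    if op == '<' and b == 'ε':
        return False                             # the barred form A < ε
    return well_formed(a) and well_formed(b)     # check nested terms too

assert well_formed(('f', '^', 'ε'))              # "Some f exists" - allowed
assert not well_formed(('f', '<', 'ε'))          # "Some f does not exist" - barred
assert not well_formed((('f', '^', 'g'), '<', 'ε'))
```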
C. Subset denotation
The same basic expressions can be used to denote "proper subsets". With 2 variables there are only three kinds:
f ^ g the part of f which engages g
(or the part of g which engages f)
f < g the part of f which is disengaged from g
g < f the part of g which is disengaged from f
Any un-negated expression can be regarded as a subset, every negated expression implies one or more subsets, and since subsets are existentially noncommittal, they occur in relational or mixed formulae whenever required. In practice, they are normally only read as subsets when they occur in complex basic expressions (i.e. distributive formulae containing more than one distributor), for example in:
~(f^g) & ((f<g) ^ h)
(f<g) is normally read as a subset. Thus there's no essential difference between un-negated relational terms and proper subset terms. However, when it becomes necessary to refer to a specific subset within a complex expression, the relevant term can simply be underlined. (If the subset can only be singled out by way of implication from a negated term, some other device will be needed, but this is only a question of surface notation).
D. Particularity and individuals
The original version of the distributive system contained a particularity assumption, the distributors being interpreted along these lines:
f ^ g engagement: a certain (or given) part of f (which) engages g
f < g disengagement: a certain (or given) part of f (which) is disengaged from g
(such that any part of f is correctly named 'f')
The expression f<f was then assumed to be tenable (not untenable as in the normal interpretation). The proposition ~(f<f) was taken to mean "there is no part of f (and called 'f') which is separate from any part of f", and this was taken to imply that 'f' names an individual or entirety (for nothing is f other than the whole of f). In this interpretation, therefore (given a suitable definition of the word "certain" or "given"), ~(f<f) implies that, for any g, ~(f<g) V ~(f^g).
This is a weird bag of tricks and may have repercussions for the rest of the system. (However, it is nothing like as absurd as Russell and Whitehead's attempt to derive number from logical principles.) Whatever the case, the nature of entireties is such that they cannot be distributed over other concepts: they either possess a given attribute or they don't. In a Euler diagram they would be indicated by dots (a representation that in no way conveys existential import!). So, in the current system, a statement such as "The Eiffel tower (f) is a famous Paris landmark (g)" could be represented by ~(f<f) & ~(f<g). But we also need to take into account that this is presumably also a subject/predicate statement, discussed in section E.
E. Subject and predicate
The subject/predicate (S/P) distinction is explained in Section 3-3 of the article on existence. Essentially, a subject is the carrier of an argument and does not take part in the manipulations of the argument itself. In principle this concept is very simple, but in practice there are a few complications.
In the normal interpretation of distribution the internal variables can freely change position and are thus treated as argumentive. There is no provision for individual (unquantified) expressions like Fa in classical logic.
One could take the view that quantified expressions are intrinsically predicative and argumentive and cannot be construed as or applied to S/P statements. If that were my approach, I would nonetheless hold that S/P statements could have general terms as subjects. "Cats have whiskers" is just as good an S/P statement as "Tom has whiskers". As such, its proper denial is "Cats don't have whiskers". Only when you think of it as being quantified ("All cats have whiskers") does it apparently lose its S/P form, since "cats" and "whiskered things" can change places. But this is true only if one considers the classical form of the proposition to be (x)(Cx⊃Wx). (There's a pointless argument around to the effect that "Cats have whiskers", as an S/P statement, really means "The species cat is whiskered", which is a singular proposition. But why? After all, it's the cats, not the species, that have the whiskers!)
If, on the other hand, one considers the proposition to have the classical form (c)(Wc), one can admit quantified expressions that seemingly do not compromise the subjectival status of "cats". Indeed classical predicate calculus has an excellent apparatus for handling the S/P format but unfortunately it gets used mainly for other purposes. At the very least, one would need the theorem (x)Fx ⊃ (∃x)Fx. Also, there is some ambiguity (of little account) as to whether the quantifiers themselves are part of the subject. ("All cats have whiskers", "Some cats have whiskers" and "No cats have whiskers" are all statements about cats, but the distributors seem to be external to the subject.)
Given the first view of S/P propositions, in distributive logic the S/P relation is superficially similar to the relation of an entirety or individual to whatever is predicated of it (section D). The dot/circle analogy describes both, but does not mark the conceptual distinction: a subject need not be an entirety and an entirety need not be a subject. It seems clear, however, that it is the S/P relation that is not accurately reflected by the dot/circle analogy - hardly surprising, as it is not an "ordinary" logical relation at all, but, rather, a conversational expediency. (Frege, the chief inventor of modern quantification theory, was quite right in insisting that the S/P distinction has no place in mathematical logic.) For, although from any proposition of the form A*B we can deduce a proposition of the form B*A, the S/P convention prevents us from doing so.
Consequently in distributive logic there are no supplementary theorems that could serve to restrict a distributive relation to an S/P form of argument. As with subsets, all one can do is bracket or underline the subject term and adopt the convention that it shall be placed to the left of any associated distributor.
F. Kinds, attributes and unlimited range
Like classical predicate calculus, this system does not distinguish ("up front", as it were) between predicates expressing a class or "kind" to which a subclass or subject belongs, and predicates expressing an attribute (whether defining or contingent) of the subclass or subject. For example, the syllogism "roses are red, red is a colour, therefore roses are a colour" looks invalid. In one of my notebooks, however, I have argued at length that the distinction is vague and largely, perhaps entirely, linguistic. (Where others look for differences, I seek commonality!) In particular I have argued that whether speaking of kinds or attributes, or various intermediate species, a common substrate (usually of material objects) is presupposed. To cut a long story short, one might as well, and perhaps ought to, use a uniformly referring language; so in the above example one could say "Rosy things are red things and red things are coloured things" or maybe "Roses are reds and reds are coloureds", either of which would make the syllogism look valid again. This, anyway, is a pre-requisite for any sensible interpretation of the present system.
However, this requirement raises the possibility of conflict with the "unlimited range" assumption, which I consider vital for any coherent interpretation of predicate logic. This assumption mainly relates to the role of negation - for example, to say that x is not red does not imply that x is some colour other than red. For x might not possess any colour; it might not even be the kind of entity that can be coloured. Again, "My train is not happy" (seemingly committing the category error) is not to be taken as equivalent to "My train is sad". Clearly my train can be neither happy nor sad; the two adjectives are contrary, not contradictory (thus the category error is circumvented). According to the unlimited range hypothesis, then, no other attribute is implied by the non-possession of a given attribute. In fact there is no possible positive description that could replace a negated description, and this is one of the main reasons for developing a system in which there are no negated predicate terms.
To avoid conflict with the kind/attribute solution, all that is necessary is to stipulate, in any argument, that the argument is confined to a certain category. In terms of Euler's circles and roses, this amounts to enclosing the circles depicting roses and red things within a circle depicting coloured things. But what about the prickly roses and the tall ones and the vanilla-scented ones and the ones that droop without plenty of superphosphate? As I said, one takes for granted a certain general kind of substrate or context. The important thing to realise is that in the universe at large there are contraries but no contradictories. Contradictories arise only when contextual restrictions are imposed.
G. Other uses - Subjective propositions
The distributive predicate calculus format can be used for various purposes, an example of which follows.
The badly named realm of "subjective propositions" is supposed to cover most (equally badly named) "attitudinal" and "indirect" statements. Basic examples are: Mary claims that little lambs are white, Mary thinks little lambs are blue, Whatever Mary says is true, Nothing that Mary believes is true, Mary believes what Tom believes. The distributive calculus provides a template for subjective propositions by recasting them in terms of personal worlds and the real world. Thus Mary believes little lambs are blue can be recast as In Mary's world, little lambs are blue; Nothing that Mary believes is true becomes Mary's world is disjoined from the real world; Mary believes whatever Tom believes becomes Mary's world and Tom's world coincide. These "worlds" are just unbounded domains of propositions, and there can be as many personal worlds as you like. The basic template, which represents only general cases and contains no specific internal propositions, looks like this:
If f is the world of Mary's beliefs and g the real world:
1. the intersection of f and g - only some of the propositions that Mary believes are true (and not every true proposition is believed by Mary)
2. the inclusion of f in g - everything that Mary believes is true (and not every true proposition is believed by Mary)
3. the inclusion of g in f - Mary believes everything that's true (as well as some propositions that are false)
4. the coincidence of f and g - whatever Mary believes is true and every true proposition is believed by Mary
5. the isolation of f from g - nothing that Mary believes is true
Under the beliefs interpretation, #3 and #4 look bizarre (unless Mary is God!), even more so with the "indirect statement" substitution. However, there are many kinds of attitudinal propositions and this formula is supposed to accommodate most of them. Thus, representing #3, "Mary wishes she knew every true proposition there is" makes sense. But this aside, #3 and #4 are required for interpersonal propositions; for example, if f is Mary's world and g is Tom's world, Mary believes exactly what Tom believes is represented by #4.
When required (in complex expressions) this usage can be combined with the subset interpretation (C). As for the adaptation of the template for particular propositions (Mary believes that p, etc), the kind of apparatus needed is fairly elementary but probably outside the scope of the system as described here.
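The template can be illustrated with a toy encoding of my own: personal worlds are sets of propositions, classified into the article's five cases.

```python
def relation(f, g):
    if not f & g:               return 5  # isolation: nothing in f is in g
    f_out, g_out = bool(f - g), bool(g - f)
    if f_out and g_out:         return 1  # intersection
    if not f_out and g_out:     return 2  # f included in g
    if f_out and not g_out:     return 3  # g included in f
    return 4                              # coincidence

real = {"little lambs are white", "grass is green"}
mary = {"little lambs are blue", "grass is green"}
tom  = {"little lambs are blue", "grass is green"}

assert relation(mary, real) == 1   # only some of what Mary believes is true
assert relation(mary, tom) == 4    # Mary believes exactly what Tom believes
```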
ENTAILMENT AND RELEVANCE
By way of introduction to this topic, note the differences between insoluble formulae such as p⊃q, overt solubilities such as p⊃p (which is orthologous), stipulative orthologies whereby p⊃q is declared to be orthologous (when p and/or q are analysed), and steps in a deductive argument such as p&q ⊢ p (or p&q ∴ p), in which the antecedent is assumed to be true. Where an overt orthology contains ⊃ as the main connective, it is of little account whether ⊃ or a sign for stipulative orthology is used, my preference being for the former when the value of the formula has not been established, and for the latter when it is known to be orthologous. As for ⊢, I have never found much use for it, except in mathematical proofs.
Entailment, represented by the → sign, is an optional interpretation of ⊃ that can be given only under certain conditions, in order to eliminate some of the interpretive paradoxes associated with material implication. A → B is read either as "A logically entails B" or as "B is deducible from A".
Consider one of the simplest paradoxes of this kind:
q ⊃ (p ⊃ q)
There's nothing deceptive about this formula provided that ⊃ is interpreted uniformly as "materially implies" (making it precisely equivalent to ~q V ~p V q, which is a simple orthology). Assuming that p has in effect been introduced by the (contentious) rule of V-introduction, the formula could be read as "If q is true, then no matter what else is the case, q is true" (for it is irrelevant what occupies the position occupied by p: it might just as well be ~p). However, a "paradox of material implication" arises when the second occurrence of ⊃ is incorrectly interpreted as "logically entails" or, in reverse, when p ⊃ q is rendered as "q is deducible from p".
A plausible set of minimum specifications for "deducible from" is:
q is deducible from p if and only if
1. p ⊃ q is orthologous
2. p and q are both insoluble
3. ~ (p ≡ q). (This proclaims that p ⊃ q and q ⊃ p cannot both be valid under entailment. It follows that p → p is not valid, i.e. p is not deducible from itself).
So if p and q are elementary variables, q → (p ⊃ q) is a valid construal of q ⊃ (p ⊃ q), but q → (p → q) and q ⊃ (p → q) are not.
Another example is (p & ~p) ⊃ q , which appears to cause concern amongst proponents of relevance logics when ⊃ is read as entails, because the antecedent is not "relevant" to the consequent. Whether or not relevance is an issue here, (p & ~p) → q is invalid under the above definition of deducibility since the antecedent is soluble (untenable).
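The three-part deducibility test can be run mechanically. The following sketch (the encoding is my own) represents formulas as functions of the truth values of p and q, and checks the claims above: q is deducible from p & q, q → (p ⊃ q) is valid, p is not deducible from itself, and (p & ~p) → q fails because the antecedent is soluble.

```python
from itertools import product

def rows(f):
    return [f(p, q) for p, q in product([False, True], repeat=2)]

def orthologous(f):   return all(rows(f))
def untenable(f):     return not any(rows(f))
def insoluble(f):     return not orthologous(f) and not untenable(f)
def equivalent(f, g): return rows(f) == rows(g)

def deducible(con, ant):                  # "con is deducible from ant"
    implies = lambda p, q: (not ant(p, q)) or con(p, q)
    return (orthologous(implies)                    # 1. ant ⊃ con orthologous
            and insoluble(ant) and insoluble(con)   # 2. both insoluble
            and not equivalent(ant, con))           # 3. ~(ant ≡ con)

P      = lambda p, q: p
Q      = lambda p, q: q
PQ     = lambda p, q: p and q             # p & q
PimpQ  = lambda p, q: (not p) or q        # p ⊃ q
CONTRA = lambda p, q: p and not p         # p & ~p (untenable)

assert deducible(Q, PQ)                   # q is deducible from p & q
assert deducible(PimpQ, Q)                # q → (p ⊃ q) is a valid construal
assert not deducible(P, P)                # p is not deducible from itself
assert not deducible(Q, CONTRA)           # (p & ~p) → q: antecedent is soluble
```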
(p⊃q) V (q⊃p)
again an obvious orthology, but strained under most natural language readings. A non-paradoxical (invalid) equivalent is (p → q) V (q → p) (but see below).
As well as entailment/deducibility there are various other kinds of interpretation of p ⊃ q involving a relation of dependency between p and q which is not reflected in the formula q ⊃ (p ⊃ q). Examples are: "The door will open" entails "If you turn the key, the door will open" and "Fred is unmarried" entails "If Fred is a bachelor, Fred is unmarried" and "A heavenly body does not move uniformly" entails "If a force is applied to a heavenly body, it does not move uniformly". Broadly speaking connections of this kind come under the umbrella of "relevance", but it's extremely unlikely that one set of rules could cover all cases. (For convenience I shall also include deducibility under the relevance label.) While the above examples are not especially problematic, some slightly more complicated propositions are trickier. A good example is:
((p ⊃ q) V (r ⊃ s)) ⊃ ((p ⊃ s) V (r ⊃ q))
which appears to be exemplified by
"Either if it rains the cat will get wet or if the Sun explodes the Earth will disintegrate" implies "Either if it rains the Earth will disintegrate or if the Sun explodes the cat will get wet".
Again, if the formula is read straightforwardly as ~p V q V ~r V s there's no problem. A problem only arises when the causal connections of the corresponding verbal expression need to be taken into account. Thus causality, or something like it, takes over from deducibility, and can be treated in much the same way, although it seems no purely formal definition can be given.
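Both of these formulas are indeed orthologies when ⊃ is read uniformly as material implication, which a brute-force check of my own confirms.

```python
from itertools import product

imp = lambda a, b: (not a) or b           # material implication

# (p ⊃ q) V (q ⊃ p)
assert all(imp(p, q) or imp(q, p)
           for p, q in product([False, True], repeat=2))

# ((p ⊃ q) V (r ⊃ s)) ⊃ ((p ⊃ s) V (r ⊃ q)):
# both sides reduce to ~p V q V ~r V s
assert all(imp(imp(p, q) or imp(r, s), imp(p, s) or imp(r, q))
           for p, q, r, s in product([False, True], repeat=4))
print("both formulas are orthologous under material implication")
```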
Indeed it's quite obviously a waste of time trying to define "relevance implication", "strict implication" and the like in exclusively logical terms. I'm inclined to think that only strictly formal concepts such as logical entailment/deducibility can be so defined. Non-formal kinds of relevance must rely on relationships of more worldly ideas, even if they do possess certain logical properties.
Because there are various kinds of "relevance" and because implication is not the only logical operation that has interpretive problems, rather than proliferate connective signs ad nauseam it is both more convenient and more realistic to use "qualifiers" (somewhat like quantifiers), whose scope is the propositional variable or bracketed expression following the qualifier. My personal preference is to use capital letters corresponding to the initial letter of certain propositional types described in the previously mentioned Six kinds of proposition..., namely the types indicated in bold italics in the upper corners of the hexagons depicted in the classification chart. (First see the diagram on this page and refer to earlier text if necessary). Thus:
and the general form of "relevant entailment" is (Θ)(p⊃q) where Θ is one of the above qualifiers (though as yet I'm doubtful about the inclusion of R and V in this set).
Each qualifier indicates the general type of interpretation to be attributed to the conditional (or any other expression) within the qualifier's scope. However, further conditions may need to be stipulated to characterise the mode of operation of the conditional thus qualified. For example, O(p⊃q) by itself does not completely characterise entailment/deducibility. Whether all the types of entailment can be defined in an analogous way to deducibility is questionable, but if so, then the further requirement for relevant entailment is that both p and q belong in the class specified at the bottom of the relevant hexagon in the propositional scheme. For example, semantically relevant entailment requires that:
1. p ⊃ q is characterising, and
2. p and q are both reportive
keeping in mind that all the kinds of proposition in the hexagons below the semantic category are reportive. Unfortunately these constraints usually do fail in the case of representative entailment, but if such propositions are regarded as intrinsic (which is not unreasonable insofar as they exemplify general laws covered by this category) a case can be made for maintaining the constraints.
Some more examples:
p ⊃ (pVq) is normally considered valid, but p ⊃ (Θ)(pVq) is invalid.
(a) in case q is replaced by ~q, the result is equivalent to p ⊃ (Θ)(q⊃p) (mentioned above in relation to q ⊃ (p ⊃ q));
(b) introducing q by V-introduction does not imply that q is in any way relevant to p. It certainly does not imply that (pVq) is orthologous.
(p&q) ⊃ (p⊃q) is valid but (p&q) ⊃ (Θ)(p⊃q) is invalid. Positing p and q does not imply any relevant connection between them.
A somewhat confusing example is:
(p&q ⊃ r) ⊃ ((p⊃r) V (q⊃r))
This example is convincing because it's difficult not to give a "relevance" interpretation of the formula. Thus it's hard to deny that the following is a good match to the formula, but at the same time is obviously invalid:
"If you insert the key and press the button, the door will open" entails "Either if you insert the key the door will open, or if you press the button the door will open". For the following completely logical, tense-free example, it should be possible to draw a circuit diagram illustrating its validity, but in practice the problem persists:
"If switch A is on and switch B is on the lamp is alight" entails "Either if switch A is on the lamp is alight, or if switch B is on the lamp is alight".
First, note that the antecedent and consequent of the main connective are equivalent, both reducing to ~p V ~q V r, or alternatively ~(p & q & ~r), for which an electrical circuit diagram can easily be constructed. So we might ask whether it is the antecedent or the consequent that most closely matches "Either switch A is not on or switch B is not on or the lamp is alight" (or alternatively "It is not the case that switch A is on and switch B is on and the lamp is not alight"). To my mind the antecedent matches either of these expressions perfectly well, and it is the consequent that causes the problem (perhaps because V tends to be read as &, perhaps for other reasons). Indeed mixtures of disjuncts and implications invariably cause problems - consider the possibilities of interpretation of an innocent-looking example such as (pVq) ⊃ (pVq). [For a start, we could re-write it as (pVq) ⊃ ((p)V(q)).] By including qualifiers in the formula, however, these interpretive ambiguities can be averted. Thus (going back to the example in the previous paragraph):
(Θ)(p&q ⊃ r) ⊃ ((Θ)(p⊃r) V (Θ)(q⊃r))
is not valid - easily checked by substituting the deducibility qualifier, O, for (Θ), although a more appropriate qualifier for examples of this kind would be R or I.
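The reduction claimed for the switch example can be confirmed by brute force. This check of my own shows that the antecedent and consequent of (p&q ⊃ r) ⊃ ((p⊃r) V (q⊃r)) are materially equivalent, both reducing to ~p V ~q V r.

```python
from itertools import product

imp = lambda a, b: (not a) or b           # material implication

for p, q, r in product([False, True], repeat=3):
    antecedent = imp(p and q, r)
    consequent = imp(p, r) or imp(q, r)
    reduced    = (not p) or (not q) or r
    assert antecedent == consequent == reduced
print("equivalent on all 8 rows")
```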
The paradoxes considered here are not among the most important in the theory of logic. Following this article I have written another, the last section of which is supposed to lay bare the bones of logic and expose the ultimate paradox, demonstrating that all formal logic is necessarily paradoxical. No fancy proofs! - just a commonsensical show and tell job. (This is quite distinct from my contention that logic is empirical and not analytic.)