## A LOGIC OF CONCEPTS
The classical model of formal predicate calculus has a number of problems, of which two of the most notable are its treatment of existence and related concepts and the ease with which certain kinds of paradox arise. Some of the issues concerning existence have been pointed out in the (unfinished) article on existence. In what follows I shall speak of concepts (or sets or ideas), because those are the natural language elements for which the internal variables or relations of variables stand. The machinery of the system does not comply with the definition of "proposition" to which I am firmly committed (see ), for that reason and for additional reasons noted below in the "characteristics of the distributive component". However, I shall use "propositional" terminology wherever the new system matches the classical system. (See Six kinds of proposition...)

## DISTRIBUTIVE PREDICATE CALCULUS

We'll go from back to front, beginning with the distributive aspect because it was, in fact, the starting point for this system. There's no connection between this and the relevance component, except that they both attempt to reflect natural language and avoid certain interpretive paradoxes of standard formal logic.

## Symbols
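As a reading aid, the two distributors used throughout - ^ (engagement) and < (disengagement) - can be modelled set-theoretically. This is a minimal sketch under my own reading of the later sections, assuming f ^ g means "some part of f engages g" (non-empty intersection) and f < g means "some part of f is disengaged from g" (non-empty difference); the function names and the tiny universe are mine, not the author's:

```python
from itertools import combinations

def engages(f, g):
    """f ^ g: some part of f engages (overlaps) g."""
    return bool(f & g)

def disengaged(f, g):
    """f < g: some part of f is disengaged from (lies outside) g."""
    return bool(f - g)

# Concepts modelled as the non-empty subsets of a tiny universe
# (the "normal interpretation" gives every term existential import).
universe = {0, 1, 2}
concepts = [set(c) for n in range(1, 4) for c in combinations(universe, n)]

# (A^A) comes out orthologous and (A<A) untenable, as stated below:
assert all(engages(a, a) for a in concepts)
assert not any(disengaged(a, a) for a in concepts)

# Spot-check of mixed orthology 5b, (~(f<g) & ~(g<h)) ⊃ ~(f<h):
assert all(not disengaged(f, h)
           for f in concepts for g in concepts for h in concepts
           if not disengaged(f, g) and not disengaged(g, h))
```

On this modelling the other mixed orthologies listed below can be brute-force checked the same way.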
## Distributive relations

There are just five independent relationships between any two sets, as illustrated by Euler diagrams, and these can be represented symbolically by formulae combining distributive terms with propositional connectives as follows:

## Axioms/theorems of distribution
… C)) ⊃ (B *₂ C), where *₁ may be the same as or different from *₂, and * represents the same distributor on each occurrence.

Note that expressions of the form (A*B), when standing alone or when connected by propositional connectives, should be thought of (and treated as) "propositions" in the classical sense, though not in my preferred sense. However, secondary, tertiary etc occurrences of such expressions, i.e. when A or B is itself an expression of the form (A*B), are more likely to be regarded as noun clauses corresponding to subsets (see "Interpretations, C").

## More terminology

The following terminology derives from the propositional system described in , (especially see Six kinds of proposition... and the classification chart):

- orthology (orthologous) - this corresponds to Wittgenstein's tautology, i.e. a proposition or expression which is "necessarily" true, or true for all assignments of truth values to its variables (or elementary components)
- untenability (untenable) - this corresponds to Wittgenstein's inconsistency, i.e. a proposition or expression which is "necessarily" false, or false for all assignments of truth values to its variables (or elementary components)
- solubility (soluble) - a proposition or expression which is either orthologous or untenable
- insolubility (insoluble) - a proposition or expression which is not soluble

## Distributive solubilities

Under a normal interpretation only the following purely distributive expressions are soluble:

- (A^A) (orthologous)
- (A<A) (untenable)

However, while (A^A) remains orthologous under every interpretation of the system, (A<A) may change its logical status under alternative interpretations (see below).

## Mixed orthologies

Together with the assumptivity axiom and the normal rules of propositional calculus, the following are significant orthologies of the combined system under a normal interpretation, but do not necessarily represent either a concise or a complete set of axioms:

1. (f^g) ⊃ (g^f)
2.
~(f<g) ⊃ (f^g)
3. (f^(g^h)) ⊃ ((f^g)^h)
4. ((f<g)^h) ⊃ (f^h)
5a. ((f^g) & ~(g<h)) ⊃ (f^h)
5b. (~(f<g) & ~(g<h)) ⊃ ~(f<h)
5c. (~(f<g) & ~(f<h)) ⊃ (g^h)

## Practical considerations

Although Euler's circles can depict the precise meaning of an expression, in practice they are far too cumbersome. As a rough estimate, with just three variables (set names) there are well over 100 unique combinations of circles, which would expand to a truth table of around 10^30 relations (a much larger number than the number of stars in the universe!). It's almost impossible to make sense of any but a very small fraction of these.

## Interpretations
Existence - what it is not and what it might be

It's possible to assume that every variable and term in a normal use of the system is existential. It's also possible to assume that no normal usage is existential, in which case any reference to existence can be introduced as a kind of predicate, ε, where the meaning/context of ε has been suitably defined. Thus "Some f exists" is represented by f ^ ε. The problem with this, however, is that some uses of ε might need to be barred, as they don't always make good sense. This is a matter of dispute, but normally one might want to block such statements as "Some f does not exist" and "All f exists". This can be accomplished by introducing a rule for ε to the effect that A < ε is ill-formed.

C. Subset denotation

The same basic expressions can be used to denote "proper subsets". With 2 variables there are only three kinds:

- f ^ g - the part of f which engages g (or the part of g which engages f)
- f < g - the part of f which is disengaged from g
- g < f - the part of g which is disengaged from f

Any un-negated expression can be regarded as a subset, every negated expression implies one or more subsets, and since subsets are existentially noncommittal, they occur in relational or mixed formulae whenever required. In practice, they are normally only read as subsets when they occur in complex basic expressions (i.e. distributive formulae containing more than one distributor); for example, in ~(f^g) & ((f<g) ^ h), the term (f<g) is normally read as a subset. Thus there's no essential difference between un-negated relational terms and proper subset terms. However, when it becomes necessary to refer to a specific subset within a complex expression, the relevant term can simply be underlined. (If the subset can only be singled out by way of implication from a negated term, some other device will be needed, but this is only a question of surface notation.)

D.
Particularity and individuals

The original version of the distributive system contained a particularity assumption, the distributors being interpreted along these lines:

- f ^ g - engagement: a certain (or given) part of f (which) engages g
- f < g - disengagement: a certain (or given) part of f (which) is disengaged from g

(such that any part of f is correctly named 'f')

The expression f<f was then assumed to be tenable (not untenable as in the normal interpretation). The proposition ~(f<f) was taken to mean "there is no part of f (and called 'f') which is separate from any part of f", and this was taken to imply that 'f' names an individual or entirety (for nothing is f other than the whole of f). In this interpretation, therefore (given a suitable definition of the word "certain" or "given"), ~(f<f) implies that, for any g, ~(f<g) V ~(f^g). This is a weird bag of tricks and may have repercussions for the rest of the system. (However, it is nothing like as absurd as Russell and Whitehead's attempt to derive number from logical principles.) Whatever the case, the nature of entireties is such that they cannot be distributed over other concepts: they either possess a given attribute or they don't. In an Euler diagram they would be indicated by dots (a representation that in no way conveys existential import!). So, in the current system, a statement such as "The Eiffel Tower (f) is a famous Paris landmark (g)" could be represented by ~(f<f) & ~(f<g). But we also need to take into account that this is presumably also a subject/predicate statement, discussed in section E.

E. Subject/predicate

The subject/predicate (S/P) distinction is explained in Section 3-3 of the article on existence. Essentially, a subject is the carrier of an argument and does not take part in the manipulations of the argument itself. In principle this concept is very simple, but in practice there are a few complications.
In the normal interpretation of distribution the internal variables can freely change position and are thus treated as argumentive. There is no provision for individual (unquantified) expressions like Fa in classical logic. One could take the view that quantified expressions are intrinsically predicative and argumentive and cannot be construed as or applied to S/P statements. If that were my approach, I would nonetheless hold that S/P statements could have general terms as subjects. "Cats have whiskers" is just as good an S/P statement as "Tom has whiskers". As such, its proper denial is "Cats don't have whiskers". Only when you think of it as being quantified ("All cats have whiskers") does it apparently lose its S/P form, since "cats" and "whiskered things" can change places. But this is true only if one considers the classical form of the proposition to be (x)(Cx⊃Wx). (There's a pointless argument around to the effect that "Cats have whiskers", as an S/P statement, really means "The species cat is whiskered", which is a singular proposition. But why? After all, it's the cats, not the species, that have the whiskers!) If, on the other hand, one considers the proposition to have the classical form (c)(Wc), one can admit quantified expressions that seemingly do not compromise the subjectival status of "cats". Indeed classical predicate calculus has an excellent apparatus for handling the S/P format, but unfortunately it gets used mainly for other purposes. At the very least, one would need the theorem (x)Fx ⊃ (∃x)Fx. Also, there is some ambiguity (of little account) as to whether the quantifiers themselves are part of the subject. ("All cats have whiskers", "Some cats have whiskers" and "No cats have whiskers" are all statements about cats, but the distributors seem to be external to the subject.)
Given the first view of S/P propositions, in distributive logic the S/P relation is superficially similar to the relation of entirety or individual to whatever is predicated of it (section D). The dot/circle analogy describes both, but does not mark the conceptual distinction: a subject need not be an entirety and an entirety need not be a subject. It seems clear, however, that it is the S/P relation that is not accurately reflected by the dot/circle analogy - hardly surprising, as it is not an "ordinary" logical relation at all, but, rather, a conversational expediency. (Frege, the chief inventor of modern quantification theory, was quite right in insisting that the S/P distinction has no place in mathematical logic.) For, although from any proposition of the form A*B we can deduce a proposition of the form B*A, the S/P convention prevents us from doing so. Consequently in distributive logic there are no supplementary theorems that could serve to restrict a distributive relation to an S/P form of argument. As with subsets, all one can do is bracket or underline the subject term and adopt the convention that it shall be placed to the left of any associated distributor.

F. Kinds, attributes and unlimited range

Like classical predicate calculus, this system does not distinguish ("up front", as it were) between predicates expressing a class or "kind" to which a subclass or subject belongs, and predicates expressing an attribute (whether defining or contingent) of the subclass or subject. For example, the syllogism "roses are red, red is a colour, therefore roses are a colour" looks invalid. In one of my notebooks, however, I have argued at length that the distinction is vague and largely, perhaps entirely, linguistic. (Where others look for differences, I seek commonality!) In particular I have argued that whether speaking of kinds or attributes, or various intermediate species, a common substrate (usually of material objects) is presupposed.
To cut a long story short, one might as well, and perhaps ought to, use a uniformly referring language; so in the above example one could say "Rosy things are red things and red things are coloured things" or maybe "Roses are reds and reds are coloureds", either of which would make the syllogism look valid again. This, anyway, is a prerequisite for any sensible interpretation of the present system.

However, this requirement raises the possibility of conflict with the "unlimited range" assumption, which I consider vital for any coherent interpretation of predicate logic. This assumption mainly relates to the role of negation - for example, to say that x is not red does not imply that x is some colour other than red. For x might not possess any colour; it might not even be the kind of entity that can be coloured. Again, "My train is not happy" (seemingly committing the category error) is not to be taken as equivalent to "My train is sad". Clearly my train can be neither happy nor sad: the two adjectives are contrary, not contradictory (thus the category error is circumvented). According to the unlimited range hypothesis, then, no other attribute is implied by the non-possession of a given attribute. In fact there is no possible positive description that could replace a negated description, and this is one of the main reasons for developing a system in which there are no negated predicate terms.

To avoid conflict with the kind/attribute solution, all that is necessary is to stipulate, in any argument, that the argument is confined to a certain category. In terms of Euler's circles and roses, this amounts to enclosing the circles depicting roses and red things within a circle depicting coloured things. But what about the prickly roses and the tall ones and the vanilla-scented ones and the ones that droop without plenty of superphosphate? As I said, one takes for granted a certain general kind of substrate or context.
The important thing to realise is that in the universe at large there are contraries but no contradictories. Contradictories arise only when contextual restrictions are imposed.

G. Other uses - Subjective propositions

The distributive predicate calculus format can be used for various purposes, an example of which follows. The badly named realm of "subjective propositions" is supposed to cover most (equally badly named) "attitudinal" and "indirect" statements. Basic examples are: Mary claims that little lambs are white; Mary thinks little lambs are blue; Whatever Mary says is true; Nothing that Mary believes is true; Mary believes what Tom believes. The distributive calculus provides an opportunity to provide a template for subjective propositions by recasting them in terms of personal worlds and the real world. Thus "Mary believes little lambs are blue" can be recast as "In Mary's world, little lambs are blue"; "Nothing that Mary believes is true" becomes "Mary's world is disjoined from the real world"; "Mary believes whatever Tom believes" becomes "Mary's world and Tom's world coincide". These "worlds" are just unbounded domains of propositions and there can be as many personal worlds as you like. The basic template, which represents only general cases and contains no specific internal propositions, looks like this. If f is the world of Mary's beliefs and g the real world:

1. the intersection of f and g - only some of the propositions that Mary believes are true (and not every true proposition is believed by Mary)
2. the inclusion of f in g - everything that Mary believes is true (and not every true proposition is believed by Mary)
3. the inclusion of g in f - Mary believes everything that's true (as well as some propositions that are false)
4. the coincidence of f and g - whatever Mary believes is true and every true proposition is believed by Mary
5.
the isolation of f from g - nothing that Mary believes is true

Under the beliefs interpretation, #3 and #4 look bizarre (unless Mary is God!), even more so with the "indirect statement" substitution. However, there are many kinds of attitudinal propositions and this formula is supposed to accommodate most of them. Thus, representing #3, "Mary wishes she knew every true proposition there is" makes sense. But this aside, #3 and #4 are required for interpersonal propositions; for example, if f is Mary's world and g is Tom's world, "Mary believes exactly what Tom believes" is represented by #4. When required (in complex expressions) this usage can be combined with the subset interpretation (C). As for the adaptation of the template for particular propositions ("Mary believes that p", etc), the kind of apparatus needed is fairly elementary but probably outside the scope of the system as described here.
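The five template cases can be mechanised. Here is a minimal sketch, with Python sets standing in for the worlds - a simplification, since the text treats worlds as unbounded domains of propositions - and with the function name and sample propositions being mine, purely for illustration:

```python
def world_relation(f, g):
    """Classify the relation between world f (e.g. Mary's beliefs)
    and world g (e.g. the real world) into the five template cases."""
    if not f & g:
        return 5  # isolation: nothing that Mary believes is true
    if f == g:
        return 4  # coincidence: Mary believes all and only the truths
    if f < g:
        return 2  # inclusion of f in g: everything Mary believes is true
    if g < f:
        return 3  # inclusion of g in f: Mary believes everything true
    return 1      # intersection: only some of Mary's beliefs are true

# Hypothetical worlds for illustration:
mary = {"little lambs are blue", "grass is green"}
real = {"grass is green", "snow is white"}
print(world_relation(mary, real))  # → 1 (only some of Mary's beliefs are true)
```

Note that Python's `<` on sets tests for a proper subset, which matches the templates: cases 2 and 3 are strict inclusions, distinct from the coincidence of case 4.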
## ENTAILMENT/DEDUCIBILITY

By way of introduction to this topic, note the differences between insoluble formulae such as p⊃q, overt solubilities such as p⊃p (which is orthologous), and stipulative orthologies whereby p⊃q is ...

## RELEVANCE

As well as entailment/deducibility there are various other kinds of interpretation of p ⊃ q involving a relation of dependency between p and q which is not reflected in the formula q ⊃ (p ⊃ q). Examples are: "The door will open" entails "If you turn the key, the door will open"; "Fred is unmarried" entails "If Fred is a bachelor, Fred is unmarried"; and "A heavenly body does not move uniformly" entails "If a force is applied to a heavenly body, it does not move uniformly". Broadly speaking, connections of this kind come under the umbrella of "relevance", but it's extremely unlikely that one set of rules could cover all cases. (For convenience I shall also include deducibility under the relevance label.) While the above examples are not especially problematic, some slightly more complicated propositions are trickier. A good example is: ... (First see the diagram on the classification chart and refer to earlier text if necessary.) Thus:

- O (orthologous)
- T (translative)
- C (characterising)
- I (intrinsic)
- R (representative)
- V ("veracious")

and the general form of "relevant entailment" is (Θ)(p⊃q), where Θ is one of the above qualifiers (though as yet I'm doubtful about the inclusion of R and V in this set). Each qualifier indicates the general type of interpretation to be attributed to the conditional (or any other expression) within the qualifier's scope. However, further conditions may need to be stipulated to characterise the mode of operation of the conditional thus qualified. For example, O(p⊃q) by itself does not completely characterise entailment/deducibility.
Whether all the types of entailment can be defined in an analogous way to deducibility is questionable, but if so, then the further requirement for relevant entailment is that both p and q belong in the class specified at the bottom of the relevant hexagon in the propositional scheme. For example, semantically relevant entailment requires that:

1. p ⊃ q is characterising, and
2. p and q are both reportive

keeping in mind that all the kinds of proposition in the hexagons below the semantic category are reportive. Unfortunately these constraints usually do fail in the case of representative entailment, but if such propositions are regarded as intrinsic (which is not unreasonable insofar as they exemplify general laws covered by this category) a case can be made for maintaining the constraints.

Some more examples: p ⊃ (pVq) is normally considered valid, but p ⊃ (Θ)(pVq) is invalid:

(a) if q is replaced by ~q, the result is equivalent to p ⊃ (Θ)(q⊃p) (mentioned above in relation to deducibility);
(b) introducing q by V-introduction does not imply that q is in any way relevant to p. It certainly does not imply that (pVq) is orthologous.

(p&q) ⊃ (p⊃q) is valid but (p&q) ⊃ (Θ)(p⊃q) is invalid. Positing p and q does not imply any relevant connection between them.

A somewhat confusing example is: (p&q ⊃ r) ⊃ ((p⊃r) V (q⊃r)). This example is convincing because it's difficult not to give a "relevance" interpretation of the formula. Thus it's hard to deny that the following is a good match to the formula, but at the same time is obviously invalid: "If you insert the key and press the button, the door will open" entails "Either if you insert the key the door will open, or if you press the button the door will open".
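The classical validity of this confusing formula (as opposed to its unacceptable "relevance" reading) can be confirmed mechanically; a quick brute-force check over all eight truth-value assignments, with the helper name being mine:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a ⊃ b."""
    return (not a) or b

for p, q, r in product([False, True], repeat=3):
    antecedent = implies(p and q, r)            # p&q ⊃ r
    consequent = implies(p, r) or implies(q, r)  # (p⊃r) V (q⊃r)
    # The whole conditional holds at every row, i.e. it is a tautology:
    assert implies(antecedent, consequent)
    # Indeed the two sides are truth-functionally identical,
    # each reducing to ~p V ~q V r:
    assert antecedent == consequent == ((not p) or (not q) or r)
```

So, classically, the formula cannot be faulted; the trouble lies entirely in the relevance reading of it.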
For the following completely logical, tense-free example, it should be possible to draw a circuit diagram illustrating its validity, but in practice the problem persists: "If switch A is on and switch B is on the lamp is alight" entails "Either if switch A is on the lamp is alight, or if switch B is on the lamp is alight".

First, note that the antecedent and consequent of the main connective are equivalent, both reducing to ~p V ~q V r, or alternatively ~(p & q & ~r), for which an electrical circuit diagram can easily be constructed. So we might ask whether it is the antecedent or the consequent that most closely matches "Either switch A is not on or switch B is not on or the lamp is alight" (or alternatively "It is not the case that switch A is on and switch B is on and the lamp is not alight"). To my mind the antecedent matches either of these expressions perfectly well, and it is the consequent that causes the problem (perhaps because V tends to be read as &, perhaps for other reasons). Indeed mixtures of disjuncts and implications invariably cause problems - consider the possibilities of interpretation of an innocent-looking example such as (pVq) ⊃ (pVq). [For a start, we could re-write it as (pVq) ⊃ ((p)V(q)).] By including qualifiers in the formula, however, these interpretive ambiguities can be averted. Thus (going back to the example in the previous paragraph): (Θ)(p&q ⊃ r) ⊃ ((Θ)(p⊃r) V (Θ)(q⊃r)) is not valid - easily checked by substituting the deducibility qualifier, O, for (Θ), although a more appropriate qualifier for examples of this kind would be R or I.

The paradoxes considered here are not among the most important in the theory of logic. Following this article I have written another, the last section of which is supposed to lay bare the bones of logic and expose the ultimate paradox, demonstrating that all formal logic is necessarily paradoxical. No fancy proofs! - just a commonsensical show and tell job.
(This is quite distinct from my contention that logic is empirical and not analytic.)