and the mathematical nature of reality

Adapted from "Six kinds of proposition and the edges of normality"


Mathematics was long revered as the queen of the sciences, but in recent times the discovery of a number of serious anomalies in its foundations and methodology has dented its reputation. These flaws include uneliminable elements of randomness, uncomputability, uncompletability, inconsistency and unprovability, undermining the one feature that mathematics was believed to possess uniquely among the sciences - certainty. However, mathematics continues to display a remarkable, often unexpected, talent for describing the real world, and in that respect it now more than ever holds sway over the material sciences.

Why and how mathematics enjoys such outstanding success in modelling reality are unanswered questions. "The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious" wrote physicist E.P. Wigner in a 1960 essay. Nearly fifty years later we are no closer to providing answers, but we can speculate! "Mysterious" might still be the adjective of choice, when one considers that maths is supposed to be formal in character and entirely independent of observation, while the physical world seems almost cruelly empirical. There is surely just as little sense in talking of mathematical models approximating to or matching real situations as there is in comparing the word "elephant" with actual elephants. And yet maths really works - without it, nuclear reactors, moon missions and digital television would be impossible.

Hardly surprising, then, that some theorists have claimed that the universe must be intrinsically mathematical, but it isn't clear how far they want to push this idea. What is left over if you take the maths away? On the other hand, others believe that the existence of a multitude of mathematical concepts, only a fraction of which are used in science and technology, shows that maths has no prerogative in describing the universe. Often one finds that only one of a number of possible models fits a given state of affairs, while sometimes several models appear to fit equally well. Thus Morris Kline in his book Mathematics: The Loss of Certainty proffers: "Several differing geometries fit spatial experience equally well. All could not be true." (Therefore mathematical design is not inherent in nature.)  I'm not sure how the logic of this pans out. The critical question is: why does even one model fit so incredibly well?

I am therefore strongly inclined to accept the first view - how else can one explain so many "perfect fits" of quite simple, neat formulae? Mathematical models "fit" and "work" because the objects they match and the events they predict are themselves mathematical. Still, this is not too clear and seems to require a modification of the meaning of "mathematical". What is really involved in matching and predicting? Aren't these empirical concepts? Indeed, as an explanation of the magical match of maths with reality, there's an alternative (or perhaps a complement) to the view that the universe is mathematical, namely that mathematics is empirical. But perhaps neither of these views is quite right: as we shall see, if the deep structure of the universe is purely mathematical, there may be no need to match anything with anything.

There's nothing new about the idea that the universe is intrinsically mathematical - Pythagoras (570-495 BC), Galileo (1564-1642) and possibly even Newton (1642-1727) held something like this view. However, until quite recently the converse view - that mathematics is empirical - had rarely been expressed (J.S. Mill was an exception; currently, various kinds and degrees of empiricism are attributed to maths - for example, it is sometimes held that particular foundational concepts and branches are selected just because they have relevance in the real world; see also H. Lehman, 1979). The standpoint supported by the following remarks is dualistic, the two aspects being interdependent. Yes, the universe comprises nothing but mathematics. And no, mathematics is not analytic - its basic concepts are inexorably empirical, and consequently the operations and models of mathematics per se are constrained by empirical factors - the very same factors that shape the universe. This leads to the bewildering possibility that the universe is in some strange way identical with either the whole or a part of mathematics, given that the whole of mathematics is itself limited by the universe, if not by the nature of the brains or computers that "do" mathematics. Which of these is paramount? All one can say is: mathematics, the mind and the universe have a common foundation.

Working on the rash assumption that arithmetic (and not logic) is the primary substance of mathematics, these notes will focus almost entirely on this discipline. But whether central to the subject or not, arithmetic will serve well enough to exemplify the main concepts.


Arithmetic, like geometry, is a curiously mixed pursuit. Few would disagree that a proposition such as "There are about 20,000,000 people living in Australia" is entirely empirical, because (provided one doesn't think too hard about it!) it contains no trace of the calculative procedures that characterise the science of mathematics. Number, as used in this proposition, is apparently a purely empirical concept. As soon as one begins to calculate, however, the character of number seems to change. That the population of Australia is close to 20,000,000 is a fact, but that 400 x 50,000 is exactly equal to 20,000,000 is a fact of a very different kind. Propositions of the second kind have long been considered analytic, in the strongest sense of that word. In fact it has sometimes been held that arithmetic toes the analytic line, as it were - it is the archetypal system of analytic propositions. While this may be going too far, it is apparent, at least, that the philosophers of mathematics and logic have done their best to divest number of its empirical connections and transform arithmetic - indeed the whole of mathematics - into an all-embracing analytic discipline. In spite of some devastating snags identified by Russell, Gödel and others, this is pretty much how maths is envisaged by most people today. Yet many of the foundational concepts of maths are blatantly empirical, while many of its fundamental axioms (such as that every number has a successor) seem to beg the question of empirical status. My aim in what follows is to give credence to the opinion that, even in its most formal aspects, arithmetic cannot throw off its empirical chains.

A problem for many philosophers, but few mathematicians, with modern arithmetic is the riddle of the meaning of infinite numbers. In fact Georg Cantor, one of the founders of the modern theory of sets and numbers, divined that there's an infinite hierarchy of infinite sets, but, thank goodness, we shall barely need to touch one of them. While of course I don't question the indispensability of the systematic concept of infinities to mathematics, I do subscribe to a certain intuitionist/formalist precept which grants priority to finitary concepts and syntax, crediting them with a type of meaning or "contentfulness" (David Hilbert's term) which is absent from notions of the infinite and from incompletable expressions. The thrust of my concern, however, is with the concepts of the finite and the denumerable, not with the infinite. It seems to me that two grey areas have been neglected, lying between the infinite and the observably finite or numerically accountable - one of them at the immensely extensive or multitudinous end of the scale, the other at the minutely divisible or infinitesimal end. It is here that I believe mathematics fails, that its failure is in principle demonstrable and that its weakness in this area impinges upon the whole of mathematics and rational thinking.

Although it is now well established that mathematics is "weak" in the sense that it contains many elements of indecisiveness (uncomputability, paradox, unprovability etc), these failings may be unrelated to those which we are about to consider. I suspect that, despite their acknowledgment of the former kinds of weakness, many mathematicians would reject the notion of empirical conformity, and would doubtless think the following conjectures are intuitionism-gone-crazy (although, so far as it goes, the "theory" is essentially realist). Should they remain of that opinion, however, they will one day be jolted into submitting alternative theories when the inevitable happens: mathematics will become so far stretched that it will cease to produce consistent results for reasons which seem non-systematic. While doubtless much of my story is old hat, I'll try to arouse a glimmer of interest by adopting a speculative but graphic approach. In offering some rather secular and simplistic illustrations, I hope to supply clues as to how it might be possible to test the central idea; for it must be assumed that this is either a cosmological theory with practical consequences or else metaphysical nonsense.

Space as structure

First, a little speculation about the nature of space is in order. Space is hard to define, but the prospect looks a little brighter if we consider space-time rather than space alone. I'm inclined to think of space-time in terms of what must be done to cause any two widely separated massive bodies to meet, not forgetting that the "doing" may involve something going on in the vicinity of either or both bodies. Here "widely separated" and "massive" mean greater than subatomic distances and masses, and "meeting" means that the bodies more or less "dock" and come to rest relative to one another. This is how I visualise macroscopic space-time. For all I know, the connection between this kind of space-time and the microscopic space-time inhabited by subatomic particles may be quite tenuous. There's probably no special reason for thinking them identical. The definition implies a kind of absoluteness - it is a model that coordinates the observations and actions of all observers and agents (everywhere and at any time, one feels like saying, but this is putting the cart before the horse). It's a very simplistic idea, and it's always possible that no such model exists.

It is common knowledge that Euclidean geometry and Newtonian mechanics provide a satisfactory model for coordinating macroscopic observations and actions here on Earth, while on a cosmological scale, according to modern physics, some kind of non-Euclidean geometry and Einstein's Relativity theory apply. Let's begin at the beginning, by considering the idea of space presented to individual observers via the senses. This idea of space is basic to human beings because they carry it with them wherever they go and no matter what they are doing. They could be lounging on Bondi Beach or hurtling through space at near the speed of light, but the way they perceive nearby space remains unchanged. So space, in this sense, means local or domestic space, the space we live in, see and feel and about which you and I, as well as the Euclids and Newtons, tend to form intuitive, pragmatic judgments.

It is widely appreciated, I think, that the impressions of space received through the various sense organs are entirely different in kind from one another: there's no resemblance between visual space, auditory space and tactile space, and no inherent reason why one should suppose any object in a visual field to be identical with one perceived via any other sensory route. What, then, is the binding force that unites and coordinates these different kinds of sense data, along with their peculiar space-like perspectives, causing us to embrace them as representations of a single space? A very plausible answer is that they are related by a mathematical or logical structure. But once having brought ourselves around to that outlook, it takes little further insight to reach the judgment that space consists of nothing more than a mathematical structure. For nothing is required besides mathematics to unify these different sensory perspectives; the assumption that there is a "noumenal" physical space apart from pure structure is metaphysical and needless. Space is nothing but mathematics.

Since the human brain is a structure in space, and has evolved primarily as a mechanism for sustaining itself in its spatial environment, one might suppose that the conditions of pragmatic thought in general, and of logical and mathematical thought in particular, are themselves influenced by the nature of space. And unless there are aspects of thought which are independent of the physical existence of the brain, conceived as a spatial object, it would be very surprising indeed if the topology of human thought turned out to be unrelated to that of space. This is of course a chicken-and-egg situation, but one that is of no immediate consequence: the significant idea is just that local space and the way we think have a predominantly common structure. (Admittedly the equation must be extended to include at least space, time and force in a broadly Newtonian structure, but the resultant complications would not further our cause.) The crucial point to grasp is that there's a bigger chicken and a bigger egg: the logical and mathematical scope of the human mind is subject to the very same constraints that the mind attributes to the local universe. Conversely, the local structure of the universe as comprehended by the human mind is limited by the same intrinsic topological factors that determine the kind of logic and mathematics of which human thought is capable. Mathematics, the mind and the local universe have a common foundation.

The reality of the situation, however, is that the physical universe does not (according to present-day reckoning) possess the intuitive Euclidean/Newtonian structure we once believed in. It's different, and therefore one should expect the foundations of mathematics to differ in a related way, or to collapse in certain circumstances. The situation is evidently complicated by the fact that the mathematics we have available to undertake cosmological enquiries is just the mathematics whose foundations are in question: it seems we need to understand the nature of mathematics before we can describe the universe, but there's a vicious circle involved. Nevertheless, I believe the hypothesis that the foundations of mathematics are awry is in principle empirically testable, and, if true, there are many far-reaching consequences.

Formalist arithmetic

There are, of course, countless "theories" about the nature of arithmetic, ranging from the purest formalism through various psychologistic interpretations to the most implacable realism. While my own view leans toward formalism, this is not so much a philosophy of arithmetic as a judgment that all calculative symbolic systems are intrinsically arithmetical. Arithmetic is the science of recurrence. But while the notion of recurrence is crucial to understanding symbolic systems it applies well beyond that sphere, being an integral component of almost every aspect of human experience. Consequently this predilection for formalism is of little significance.

Furthermore the question of the ontology of arithmetic is of no immediate importance. For there could surely be a discipline - call it "arithmetic" or not - which is formalist, as well as one in which the symbols are objectively or psychologistically interpreted (so that the meaning of the symbols is not just more symbols in the same system). In respect of the latter discipline, however, one might expect to find a variety of interpretations of "arithmetic-like" syntax and hopefully a reasonable explanation as to why any of them should be regarded as distinctively mathematical; in particular one would feel entitled to an explanation of how the analytic character of such a discipline is conserved in its ontology rather than merely in its syntax. Regardless, I shall initially assume that arithmetic is formalist (i.e. it comprises nothing but symbols - I don't think this coincides with formalism in the modern sense, which puts more emphasis on meaninglessness and rigid proofs).

Within a broadly formalist framework one can still approach a number of issues from different angles: specifically, it looks as if one can adopt attitudes which might appear, to the sophisticated, to be ontologically different. For example, one might take the view that arithmetical syntax comprises nothing but transformations of utterly meaningless symbols; or one might say that the system contains names or tokens which denote other systematic objects (which in turn can be used as names or tokens), thereby giving the impression that the syntax is after all meaningful. Although this particular distinction (which may be verbal only) doesn't affect my case, I prefer the second approach because it buoys my view that arithmetic is best envisaged as possessing a "fluid hierarchical" structure (see below).

Number and tokens

Arithmetic begins at the small end, the human end of the mathematical spectrum. It begins with counting. This truism (the cradle in which, it might be thought, the avowed intuitionist chooses to spend his life) seems to have been respected by the inventors of the modern theory of number, notably Cantor and Giuseppe Peano. And how could things have been otherwise? Who would have taken any notice if the theory had had no footing in the nuts-and-bolts concept of natural number with which we are all familiar? Are we not entitled to assume that this is what the theory is about? Unfortunately, when arithmetic is stretched into the realms of the uncountable, we can no longer take this assumption for granted.

I know I'm not alone in harbouring the feeling of being duped by Cantor's theory of sets and numbers, the feeling that it's somehow circular and fails to get to grips with number as such. (This is not a psychological problem, but rather a problem about the validity of real-world proofs for mathematics and, conversely, about the existential status of the mathematical objects so defined.) Although Cantor himself was obviously no formalist, his explanations are uncompromisingly syntactical inasmuch as they depend on establishing one-to-one correspondence relationships between series of numeral-tokens, that is, between symbols occupying a small zone of Euclidean space. The assumption that these series of demonstrations are indefinitely extendable and/or indefinitely interpolatable appears to rely on the assumed topological properties of infinite extension and infinite divisibility of the space in which they are represented. And although, under a less formalist interpretation, it might be held that these representations do not depend on real space for their actualisation, it would still appear that whatever it is that's supposed to be going on requires a logical space with similar properties. For without this assumed space, one could never predict that every supposed one-to-one correspondence would in fact be unique, or even possible, when the series is "represented at length". Thus Cantor's definitions of numerical infinities rely on an undefined notion of spatial or logical infinity. The acceptance of his technique as a valid mathematical method depends on the unfounded and improbable belief that what can be physically demonstrated on a piece of paper can be extrapolated ad infinitum to increasingly unwieldy gesticulations that cannot actually be symbolised anywhere or anyhow.

As Cantor draws upon ever more picturesque techniques, the limitations of the page become increasingly bothersome and the proofs less convincing. For example, in the procedure that's supposed to show that the number of proper fractions (rationals) is the same as the number of cardinals, Cantor introduces two complications. First, in order to deal with the fact that every fraction can be represented in an infinite number of ways, he has to delete an infinitude of irrelevant fractions (namely, all those whose numerator and denominator have a common factor). Secondly, he has to coax us along a zigzagging path through his two-dimensional array of fractions, skipping the irrelevant ones on the way. This is an extremely "spatial", undependable-looking procedure (see Endnote 4).
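The two complications just described can be stated as a procedure. The sketch below is a simplification that walks each anti-diagonal of the array in a single direction rather than reversing direction as Cantor's zigzag does (the function name is mine, not Cantor's); the point is only that both of his manoeuvres - deleting non-reduced fractions and threading a path through the array - are mechanical.

```python
from math import gcd

def cantor_rationals(n):
    """List the first n positive rationals (p, q) by walking the
    two-dimensional array of fractions diagonal by diagonal,
    deleting those not in lowest terms (2/2, 2/4, ...) just as
    Cantor's procedure prescribes."""
    out, d = [], 2                 # d = p + q indexes the anti-diagonal
    while len(out) < n:
        for p in range(1, d):
            q = d - p
            if gcd(p, q) == 1:     # skip the "irrelevant" fractions
                out.append((p, q))
                if len(out) == n:
                    return out
        d += 1
    return out

print(cantor_rationals(5))   # [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1)]
```

For any finite n the procedure plainly terminates; the essay's complaint concerns only the extrapolation of such a demonstration "ad infinitum".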

There is of course no difficulty with the notion of correspondence of relatively small, finite sets which can in fact be matched and whose number might feasibly be ascertained. But the extrapolation of the notion to larger sets involves the use of synoptic tokens which do not themselves possess the properties of the sets referred to. This, however, is a feature of arithmetical syntax in general (irrespective of ontological presuppositions).

Yes, arithmetic does begin at the human end of the mathematical spectrum, with counting - a truth recognised long ago by Henri Bergson (1910). Formalist accounts may accommodate this maxim by observing that some expressions in arithmetic are relatively "primitive", while others are really names or tokens for (often exceedingly extensive or infinite) collections of primitive symbols. In other words, there are synoptic tokens whose meanings are complexes of more primitive signs, and which in principle can be expanded analytically, by correct calculation, into the complexes that they represent. Naturally there are degrees of primitiveness, degrees of complexity and little inclination to single out any particular set of signs as being "the meaning" of any other set. But one can surely sympathise with the idea that a token such as 123 can be expanded to one of the form 1+1+1+....+1 which better captures the literal meaning of the original token in so far as it contains just as many 1's as the number signified by 123: it exemplifies the number and does not merely signify it. But now what of the token 2^19937 - 1? Evidently this expression could not be expanded to one of exemplificatory form (1+1+1+....+1) even though one had begun to compute it at the beginning of time using every available minuscule in the universe. Yet more than twenty-five years ago this number was proven to be a prime, and very much larger primes have since been discovered. (And of course still larger numbers can be expressed in token form. For example, there's a number called a moser which is unimaginably huge, but which can easily be defined using only the number 2, a few geometrical symbols and the concept of exponent.) So, on this view, a proposition such as 10^1000 = 10^500 x 10^500 is analytic but meaningless because the most primitive forms of expression betokened by the terms of the equation cannot in fact be completed.
But regardless of whether some signs are more basic than others, it remains true that arithmetic alludes to some translations of signs which can never be depicted or used because they are too extensive or, rather, their components are too numerous! Since no such translations can possibly exist, there is strong justification for the claim that any reference to them is "uncontentful".
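The gulf between the synoptic token and its exemplificatory expansion is easy to exhibit: the compact form of 2^19937 - 1 evaluates in a moment on any computer, while the expansion 1+1+...+1 would need roughly 4 x 10^6001 terms - incomparably more than the 10^80 or so particles commonly estimated for the observable universe.

```python
# The synoptic token 2^19937 - 1 is trivially computable in compact
# form, but its exemplificatory expansion 1+1+...+1 would require
# about 4 x 10^6001 terms and so cannot be completed anywhere.
n = 2**19937 - 1
print(len(str(n)))    # 6002: even the compact decimal numeral runs
                      # to six thousand digits
print(n % 2 == 1)     # True: a Mersenne number, hence odd
```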

If this is right, then the sign 2^19937 - 1 cannot be used to refer to an expansion of the form 1+1+1+....+1 and remain a bona fide component of an analytic system. Owing to its central concern with the rudimentary concept of number, however, conventional arithmetic does contain references to uncompletable signs. Consequently arithmetic as a whole lacks the credentials for analyticity. And (for those whose concept of number is not tied to mere symbols) it's clear also that no token in an arithmetical system can denote an extrasystematic occurrence of a precise number (such as a counting of objects) if such an occurrence does not, nor ever could, exist.

Thus arithmetic contains expressions denoting either incomplete (and uncompletable) tokens or uncountable sets of objects, or both. Although, so far as I know, this verdict doesn't at present detract from the utility of the analytic craft of sign juggling, it might be prudent to keep in mind both that sign juggling is not necessarily the same as number crunching and that the analyticity of mathematics is vulnerable just to the degree that it projects its language beyond the reaches of the conceivable. Much as Newtonian physics is vulnerable to the degree that its domestic language of space and time loses meaning when we want to converse with the electrons and the stars. (Is it simply the physics or are there already signs of the maths going wrong?) As with other sciences, mathematics holds no built-in guarantees of performance.

I have used a spatial picture to show that we cannot predict how numerical signs behave when we try to imagine an extrapolation of a series into regions beyond the immediate environment which establishes the conditions of sign writing. It cannot be assumed that the framework and assumptions applicable to manageable numbers have legitimacy for gigantic numbers. Had we been more adventurous and chosen an illustration befitting our times, such as the way that computers handle arithmetical information (as has been done, for example, by Rolf Landauer, 1986), we should have arrived at just the same conclusion - that arithmetic is empirically constrained. A more anthropocentric illustration, however, is provided by the concept of counting.


Numbers begin with counting! According to the axioms of natural number ascribed to Peano (1908), every number, n, has a unique successor, n+. An intuitionist might take this to mean: take any particular number, there is just one number that is one greater than it. But how would you in practice take any particular number? Suppose it was a very large number whose value could not feasibly be checked by counting. How would you then know that you had the number you intended to take? The notion of identifying an uncountable number as being a particular number is incomprehensible. On this account, Peano's axiom seems meaningless because it doesn't satisfy the criterion of real countability.
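The successor axiom can be made concrete in a few lines - a sketch (the names ZERO, succ, add and to_int are mine) in which a numeral is nothing but nested wrappings of a zero token, so that it literally exemplifies its number. The same sketch makes the essay's point vivid: constructing the numeral for a gigantic n in this fashion is exactly as infeasible as the expansion 1+1+...+1, so "taking" such a number is never a practical act.

```python
ZERO = ()                     # the zero token

def succ(n):
    """The unique successor n+: one more layer of wrapping."""
    return (n,)

def add(m, n):
    # m + 0 = m ;  m + n+ = (m + n)+  -- the usual recursive definition
    return m if n == ZERO else succ(add(m, n[0]))

def to_int(n):
    """Count the wrappings -- i.e. literally count the numeral."""
    k = 0
    while n != ZERO:
        n, k = n[0], k + 1
    return k

two = succ(succ(ZERO))
print(to_int(add(two, two)))  # 4
```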

If counting is to be explained in terms of putting signs and objects into one-to-one correspondence with one another, then nothing more need be said: we have seen, first, that this is an empirical procedure and, secondly, that it cannot actually be done with very large numbers. But is it not also possible to count by rote, as school children often do, by learning the sequence of signs without attaching them to physical objects? Well, how can we distinguish counting by rote from counting things? The first kind of counting seems to consist only in reproducing the conventional tokens for successive numbers in the series of ordinals, while the second kind involves both reproducing those tokens and placing them in one-to-one correspondence with the members of a set of objects. It seems to me, however, that the distinction lacks substance: when counting by rote, we do in fact put different signs into correspondence with instances of something, even if only intervals of time. Of course, if we are counting events, or just counting off definite intervals of time, such as seconds, it's easy to contend that we are employing the correspondence procedure. But it might seem that counting by rote lacks objectivity, that it doesn't involve events and that the time intervals are somehow too arbitrary and inconsequential. I doubt this: the activity of counting itself supplies the events and thereby demarcates and orders the intervals.

One possible objection is that we cannot in principle do a recount: objects can be recounted, events can be recorded and recounted, but how does one recapture a counting per se? Well, couldn't we replay a recording of our counting, assuming we counted aloud, and count our counting again, so to speak? Suppose we just recount the noises as such, without taking notice of their form. Then surely we could be said to be recounting our original count - which was, so to speak, a labelling of noises contrived by giving a particular shape to each noise. I think the following consideration completely justifies this view. If we count by rote, say, from 1 to 20, it would be in order for anyone to ask if we have counted right. If we did not count right, then at some stage in our counting the number of noises delivered up to that stage would not correspond to the meaning of the noise uttered at that stage. (I say "at some stage", not necessarily upon reaching 20, for we could have made two or more mistakes which cancelled each other out, and so have made 20 noises yet not have counted right.) Now suppose there was in principle no way of checking the count. In what sense could we then be said to be counting at all? How could we ever be sure that we were not simply uttering noises at random? Counting by rote entails counting objects, namely the signs that constitute the counting; the signs are labelled by giving each of them a unique, conventional form; if the counting is right, the form of every relevant token corresponds to the number of tokens delivered up to that stage. Counting cannot take place in a void. Even when "nothing is being counted", counting is an obstinately experiential process, temporal and psychological. And since every sign that represents a natural number must represent a countable number, or else be meaningless, arithmetic in general is empirical. Its applicability to the real world is irrelevant to this argument. Arithmetic is inherently real.
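The criterion just stated - that a rote count is right only if, at every stage, the form of the token corresponds to the number of tokens delivered up to that stage - can itself be written out as a checking procedure (a sketch; the function name is mine):

```python
def count_is_right(utterances):
    """A rote count is right if, at every stage, the token uttered
    matches the number of tokens delivered up to that stage."""
    return all(token == str(k) for k, token in enumerate(utterances, start=1))

print(count_is_right(["1", "2", "3", "4"]))   # True
# Two mistakes that cancel out: four noises are made, yet the count
# went wrong at stages 2 and 3, so it is not a right count.
print(count_is_right(["1", "3", "2", "4"]))   # False
```

Note that the check works by pairing each noise with its position in the sequence of noises - the one-to-one correspondence with the count's own events that the paragraph above describes.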

If counting is experiential, the proposition that every number is countable in principle is meaningless: something, even if only the numeral tokens themselves, must be countable in practice. A numeral token such as 10^1000 in isolation cannot stand on its own feet - we cannot tell whether it stands for "the number" it's supposed to, nor conceive of its basic meaning, nor ascertain whether any such number exists, since it is not and never will be literally countable even on the fastest computer that could in principle be designed. On the other hand, the number 1000 is meaningful because it's countable in practice and there are sets of objects or events that can be placed in one-to-one correspondence with the series of natural numbers up to and including 1000.

How else might we attempt to count things? Although it isn't necessary to literally count the members of sets to compare their number, some method of matching them is required. It's easy to show, however, that any procedure for matching sets or patterns at some stage involves at least as many discrete operations as there are objects common to both sets (or elements common to patterns), and therefore a similar number of operations as would be involved in doing a literal count. The "contentfulness" of number depends on this pragmatic potentiality and cannot be captured by shortcut techniques. It's of little account whether we imagine these operations to be essentially spatial, temporal, psychological or belonging in some more abstruse logical space; it matters little in what framework we conceive of the existence of numbers. Given a coherent view of space, time and "psychological space", we shall find that the various formalist and psychologistic concepts of number are practically identical. Somewhere along the line we turned the manuscript upside down: number itself calls the tune, erratic though it be, and both space and time dance to its strange music.
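The claim that matching without literal counting still costs about as many discrete operations as a count can be illustrated directly. The sketch below (names are mine) compares two collections by pairing members off one at a time, never naming a number, and tallies the steps taken:

```python
def same_number(xs, ys):
    """Decide whether two collections correspond one-to-one by
    pairing members off one at a time, without ever naming a
    number; also return how many discrete pairing operations
    were needed."""
    steps, ix, iy = 0, iter(xs), iter(ys)
    while True:
        a, b = next(ix, None), next(iy, None)
        steps += 1
        if a is None and b is None:
            return True, steps     # both exhausted together
        if a is None or b is None:
            return False, steps    # one ran out first

print(same_number(range(5), range(5)))   # (True, 6): about as many
                                         # operations as a literal count
```

(The sketch assumes neither collection contains the sentinel None; a production version would use a distinct sentinel object.)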

Mathematics and meaning

The breadth and complexity of the subject-matter of mathematics make it almost impossible to define, or to decide whether a realist, formalist, intuitionist, psychological or some other interpretation is most appropriate. Without getting involved in questions about the nature of mind and artificial intelligence, one might presume that an effective test of whether mathematics comprises only pure structure or demands a psychological interpretation is whether computers can do it. But the test is marred by the need to resolve, in turn, what it is that computers do, and (having decided that) by the purely terminological question of whether it's proper to call what they do "mathematics". In regard to classical logic, for example, computers can perform the appropriate systematic operations, but I doubt whether they can do logic. I don't hold quite the same doubts about their capacity to do mathematics, or some mathematics, anyway.

Much depends on what one makes of the business of interchanging symbols that have complicated meanings. While the principal domain of mathematics is structure of all kinds, it seems to me that as a formal system it comprises nothing but a sign language, employing signs and about signs. Its alleged analyticity consists in its calculative procedures for translating signs into other signs. Mathematics is everywhere hierarchical: it employs an indefinite series of types. A clear-cut distinction between mathematical and metamathematical levels is nowhere to be found, but the difference invades mathematical systems in subtle ways at every point. Signs (tokens) are objects and vice versa; the objects that are the meanings of signs are themselves signs; the distinction is relative. Essentially there are symbols that are more synoptic and symbols that are more expansive (such as strings of operations or sets characterised denotatively). But the analytic nature of mathematics becomes suspect whenever a sign cannot be completed, or when the objects in a set that a sign stands for cannot be listed, or when the operations required to expand an expression cannot be enumerated. One can only say that an expression (such as the claim that 2^19937 – 1 is prime) is meaningful if one can envisage a practical, "uncompressed" procedure employing a finite number of steps to prove it. And then it is meaningful just "to the extent of the procedure" and not in any sense that exceeds it. Thus the number of operations required to disclose the primitive or expansive meaning of an expression is of paramount importance. Number is of the essence.
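The idea of being "meaningful to the extent of the procedure" can be illustrated with that very example. The Lucas-Lehmer test (a standard result in number theory, not anything proposed in this essay) proves or refutes the primality of a Mersenne number 2^p – 1 in exactly p – 2 squaring steps, so on this view the "meaning" of the primality claim is exhausted by that finite procedure:

```python
# Lucas-Lehmer test: for an odd prime p, the Mersenne number
# M = 2^p - 1 is prime if and only if s == 0 after p - 2 steps,
# where s starts at 4 and each step sets s to (s^2 - 2) mod M.
def lucas_lehmer(p):
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):   # exactly p - 2 "uncompressed" steps
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(7))    # True:  2^7 - 1 = 127 is prime
print(lucas_lehmer(11))   # False: 2^11 - 1 = 2047 = 23 * 89
# lucas_lehmer(19937) would confirm the prime cited above in
# 19,935 modular squarings -- finite, and feasible on a computer.
```

The procedure is short to state but its step count is dictated by p; for numbers whose expansion outruns any practicable number of steps, this criterion of meaningfulness begins to bite.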

Mathematics in the large therefore differs little from physics: it has inherent experiential limits and it is subject to similar conditions of uncertainty and relativity. Expressions with colossal or infinitesimal denotation, however, might be amenable to statistical treatment, so that mathematics could in some measure solve its own problems by adopting the same kinds of procedures for itself as it does for problems of physics. (For example a gigantic number, x, might be regarded as having a 95% probability range, x - y to x + y, even though x as such is not a real entity. Such limits would be dictated by the size of x and would of course be extremely narrow when x is not gigantic but "normal". In fact y would then be infinitesimal and would itself be associated with significant probability limits similar to those obtaining with gigantic numbers. It may be possible to estimate the probability limits of gigantic numbers by considering the number of physical actions, such as electronic pulses of some kind, required to track down a specific number such as a gigantic prime.) But I believe also that mathematics will turn out to be "assessor-relative" or subject-dependent. No procedures exist that will serve everyone equally well in every time, place and condition. There is no almighty algebraist, no mystical realm embracing every possible mathematical construction. On the contrary, all mathematical structures, no matter how complex or inevitable - or how paradoxical or uncomputable! - they may seem, come into existence as the creations of calculating people, and all mathematical solutions must be understood by them. At the same time, this seemingly constructivist activity possesses an objectivity that is scarcely distinguishable from physical reality; and I can see little value in the belief that mathematics is endowed with a special sort of reality of its own.
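One crude way to picture the parenthetical proposal (my own construction, using interval bounds rather than genuinely statistical ones) is to represent a "gigantic" number only by its limits and let arithmetic propagate them:

```python
# Sketch (illustrative only, positive values assumed): a number known
# only to within limits, with multiplication that propagates the limits.
class Approx:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        # for positive endpoints the product's limits are simply
        # the products of the corresponding limits
        return Approx(self.lo * other.lo, self.hi * other.hi)

    def __repr__(self):
        return f"[{self.lo:.3g}, {self.hi:.3g}]"

x = Approx(0.95e100, 1.05e100)   # a "gigantic" x known only as x - y to x + y
y = Approx(0.95e50, 1.05e50)
print(x * y)   # the uncertainty compounds under multiplication
```

As the essay suggests, the width of the limits would be dictated by the size of the number; here it is simply stipulated, which is where a genuinely statistical treatment would have to do real work.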

The table below suggests that the speed of fast computers today is many powers of 10 lower than the speed that might be needed to discover mathematical anomalies within a reasonable period of time.

        Some interesting numbers (to nearest power of 10 unless defined exactly)
Estimated number of particles (including photons) in observable universe (excluding dark matter) 10^89
Estimated number of baryons in observable universe 10^80
Number of synapses in human brain 10^14 – 10^15 (average 125 trillion)
Age of observable universe 10^17 s
Standard second (cesium fountain clocks) 9,192,631,770 cycles of a certain electron transition in cesium 133 atoms
Uncertainty of cesium fountain clocks (e.g. NIST-F2) 10^-16
Planck time (lower limit on time intervals) 10^-43 s
Measurable laser light pulses 10^-18 s
Shortest events inferable in experiments 10^-25 s
Fastest electronics currently available 10^-11 s
Diameter of observable universe 10^27 m
Planck distance (lower limit on space intervals) 10^-35 m
Diameter of a proton 10^-15 m
Mass of observable universe 10^53 kg
Mass of a proton 10^-27 kg (10^3 MeV/c^2)
Rest mass of an electron or up quark 10^-30 kg (10^0 MeV/c^2)
Largest known prime number (GIMPS/Curtis Cooper, 2013) 2^57,885,161 – 1 (approx 10^17,425,169)
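A back-of-envelope calculation using round figures from the table (the conclusion is illustrative only) supports the point about computer speed:

```python
# With the table's round figures: even a device performing one
# operation per 10^-11 s, running for the entire age of the
# observable universe (~10^17 s), completes at most ~10^28 operations.
OP_TIME_S = 1e-11        # fastest electronics currently available
UNIVERSE_AGE_S = 1e17    # age of observable universe

max_ops = UNIVERSE_AGE_S / OP_TIME_S
print(f"{max_ops:.0e}")  # 1e+28

# An "uncompressed" step-by-step run out to a gigantic number such
# as the largest known prime (~10^17,425,169) is therefore out of
# the question by a margin of millions of powers of 10.
print(max_ops < 10**100)   # True: even 10^100 steps are unreachable
```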


It is a curious fact that the most abstruse physics, plying the kind of stuff that philosophers are most disposed to call "unobservable", presents the greatest potential to turn our lives upside down, if it has not done so already. Out of the no-man's land of subatomic particles and mc^2 looms the capacity to obliterate all physics, all philosophy, all life. What at first we cannot touch turns out to impose the most tyrannical presence, holding both the outside world and our inner senses in an awesome grip. How much stronger, then, must be the command of mathematics alone, flying free of all commitments to physical description! Clearly, in the physical sciences, effects have the punch but mathematics has the power. And there's absolutely nothing in between.

If, as I've suggested, space is mathematical, then at least some mathematics is spatial, and if space is also in some sense physical, so is at least some mathematics. And if we can extend this alliance to time and mass (and perhaps even if we cannot), the following possibilities arise: some mathematical laws might be verifiable by observation; some mathematical laws might be modifiable by experiment; and some physical situations might be responsive to mathematical activity alone.

But surely facts can only agree or disagree with mathematical models! I believe this is an old-fashioned sentiment, one which defies all reason and which can no longer be of great service to mankind. In the first place, certain twists, paradoxes and puzzles of nature can be regarded as, or explained in terms of, peculiarities of the fundamental structure of mathematics; secondly, under certain conditions some theoretically computable problems should yield unpredictable or unexpected results, regardless of the materials used in producing and chronicling these results; and thirdly some mathematical models that do have sufficient integrity might interfere directly with reality.

These proposals imply that some of the future strategies of mathematicians will not succeed, simply because the maths will not find the room to work. In due course this hypothesis could be tested either by carrying out sufficiently complex calculations requiring sufficiently precise answers or by experimentally enforcing conditions which compromise the reliability of the calculations. On the other hand the most ingenious mathematical schemes will succeed not only in managing their physical constraints, but in altering the environment in which they are produced. Is it possible that there's already evidence of the ability of mathematical models to influence reality?

To date, few scientific experiments have produced results that might lead one to make such an outrageous suggestion, so one might be inclined to turn to the shadowy pages of popular "metaphysics" for some indicators. This genre, of course, thrives on all things supernatural, from ESP and necromancy to UFOs and teletransportation. It blurs the boundary between the mental and physical categories, delving into a world of pseudo-existence that combines aspects of both but failing abysmally either to convince intellectuals of its legitimacy or to find a unified explanation for the endless variety of apparitions that dwell there. Academics just might gain something from a more earnest inquiry into this realm. But, if we are to take any of it seriously, perhaps we should be seeking interpretations lying outside the immediate psycho-physical domain in which these phenomena seem to occur. Explanations in terms of mathematical structure are an obvious possibility.

Apart from the blatantly paranormal, the universe is rife with mathematical structures, which appear again and again, reproducing themselves in countless organisations of stars, atoms and genes and their associated effects. In the biological field, for instance, some examples of convergent evolution could be attributed to mathematical effects alone, as could many of the examples cited by Sheldrake (1981) in support of his theory of morphic resonance (according to which all self-regulating organisations, ranging from molecular to social systems, respond to "morphic fields" that provide templates for the development of each type of organism; Lyall Watson's "contingent system" (1979) is a similar but broader, evolution-based theory). In many areas of science it's becoming easier to accept the mathematical model and harder to understand why there are so many and varied instances of it.

Added to this is the spectacle of the "scientific zoo", whose inhabitants seem half mathematical, half empirical in character. But isn't it possible that these creatures have become too much of a distraction? Instead of hunting down ephemeral particles and pondering over their inherent nature as opposed to their observable effects, we might do better to inquire into existent mathematics and ascertain its empirical effects as opposed to its abstract properties.

In the world of computers and electronic engineering, hardware is increasingly being displaced by software, the actual by the virtual. Computing tasks that used to depend on moving tapes, discs or other physical devices and took months to complete are now performed in a flash by the motion of electrons in microchips. Cumbersome, grooved records that reproduced three minutes of song when revolved on turntables have given way to tiny MP3 players that can store and deliver hours of entertainment, with no moving parts and no obvious evidence of any physical “thing” corresponding to the sound patterns that are produced. Computer programs can mimic many of the functions of hardware such as sound cards. The trend is obvious and we can only wait for the ultimate artificial mind – a computer that has no existence, except as information inscribed on a ball of virtual reality!

Mathematicians explore structural possibilities, and structural possibilities delineate reality, the cosmos. The aims of mathematics and cosmology are essentially similar. Moreover the productions of mathematics and those of technology are convergent. Always at the forefront of technological progress, the military will surely be the first to recognise and capitalise upon the possibilities of mathematical models, aiming for the capacity to build weapons of remote destruction. If my thesis is right, and its principles are open to controlled exploitation, military developments will present the main threat to an otherwise philanthropic, invigorating era of "mathtech". While nuclear weapons are very much a result of mathematical thinking, future weapons will actually comprise mathematical models themselves - the bomb will be replaced by the computer. Whereas in nuclear weapons the mathematics is confined within the explosive device which must be located at the target, in the next generation of weapons the mathematical trigger will be external to the device. In the third generation there will be no device - only a model existing in some computer remote from the target. And in the final generation even the computer may not exist as such.

Of course the target is likely to be some sort of recurring well-defined structure, either man-made or naturally occurring, and the method of destruction a model that alters some or all instances of the structure. Presumably the artefacts of high technology, including other computer systems, which are themselves largely the products of sophisticated modelling techniques, would be especially susceptible to direct mathematical manipulation. In the biological world the obvious targets will be specific kinds of DNA molecules - mutations at the press of a button anywhere in the universe! (Could it be happening already?) Similar considerations apply to the evolution of benevolent uses of mathtech.

When physicists speak of laws of nature and believe in them, they can only mean one thing - that nature is mathematical. Which implies something like this: the universe is a theory, a mathematical model, a tautology. Perhaps it's a kind of self-perpetuating model, a model that must expand its axioms indefinitely, a kind of computer that must keep on working. Such a conjecture was mooted as early as the mid-nineteenth century by Charles Babbage, the father of modern computers, and I believe it has since been echoed and elaborated by many a physicist. So one ought to be able to "seed" a universe just by formulating the "axioms". The inventor would not need to construct the whole universe if the axioms contained the means of self-expansion. It would create itself, rather like the chain reaction of a nuclear explosion. Therefore, if our own universe is like this, to say that it started "with a big bang out of nothing" is to say that it started with a formula. Obviously this says nothing whatever about certainty, predestination or free will: the model could allow for totally random syntheses. All it need contain are the essential conditions for existence. But whoever discovers the complete formula owns the genetic material of the universe - not just an image in the head or on paper, but the real McCoy.
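As a toy illustration of "seeding" by axioms (my own example, using Wolfram's elementary cellular automaton rule 30, not anything proposed in the text), a single update rule and one live cell generate an endlessly widening, partly chaotic structure:

```python
# Rule 30 cellular automaton: the "axioms" are one rule number and a
# single seed cell; each generation widens by one cell on each side,
# so the structure expands indefinitely of its own accord.
RULE = 30

def step(cells):
    cells = [0, 0] + cells + [0, 0]   # room to expand outward
    # each new cell is the RULE bit indexed by its three old neighbours
    return [(RULE >> (cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1])) & 1
            for i in range(1, len(cells) - 1)]

row = [1]                 # the entire "seed universe"
for _ in range(4):
    row = step(row)
print(row)   # [1, 1, 0, 0, 1, 0, 0, 0, 1] -- 9 cells from 1, in 4 steps
```

Nothing here is a universe, of course; the point is only that a few lines of "axioms" suffice to set a self-expanding structure in motion, with no further intervention from the inventor.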

What is the main stumbling block to attaining these designs? Just that the mathematician is confined within the walls of the very universe which he seeks to explain. He needs a kind of "twistor" theory of mathematics to get out of the bind. Such a concept, I think, would provide a more useful avenue to a "grand unifying theory" than would a twistor theory of physical forces!

So, from the physicalisation of mathematics, my story turns full circle, attributing nothing but structure to the whole of existence. Where it differs sharply from the more extreme forms of rationalism, especially the higher grade essentialist doctrines, is in its denial of the necessity of any truth, whether mathematical, logical or anything else. Structure is not pure. When all is said and done, it remains only useful, a conjunction of many approaches and attitudes. The genius who would profess to hold a Theory of Everything must understand perfectly the fusion of mathematical, physical, psychological, even biological and ethical perspectives. It's unlikely that any such mastermind will ever exist but, well, the universe exists, doesn't it?


Bergson, Henri (1910). Time and Free Will, Chapter 2. George Allen and Unwin, London.

Cantor, G.F.L.P. (1897). Contributions to the Founding of the Theory of Transfinite Numbers (English trans. P.E.B. Jourdain, 1915). (See Ian Stewart (1996). From Here to Infinity. Oxford Paperbacks).

Lehman, H. (1979). Introduction to the Philosophy of Mathematics. Basil Blackwell, Oxford.

Landauer, R. (1986). Computation and physics: Wheeler's meaning circuit? Foundations of Physics 16: 551-564.

Kline, M. (1980). Mathematics: The Loss of Certainty. Oxford University Press.

Peano, G. et al. (1908). Formulario Mathematico. (See Weisstein, E.W. (1998). CRC Concise Encyclopedia of Mathematics. Boca Raton, FL: CRC Press.)

Sheldrake, R. (1981). A New Science of Life: The Hypothesis of Formative Causation. Blond & Briggs, London.

Watson, Lyall (1979). Lifetide. Hodder and Stoughton, GB (Coronet edition 1980).

Wigner, E.P. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Communications on Pure and Applied Mathematics, 13: 1–14.

Dabs of Grue, 04/11/07