On the evolution of physical and mathematical knowledge

**Summary** It is widely understood in physics that evaluation criteria for empirical theories are determined by what are called the objective structures of an outside and real world, and on this basis, discussions ensue as to whether our scientific efforts to condense observations into theories will eventually result in a "theory of everything" (Feynman 1965, Hawking 1979, Barrow 1990, Chalmers 1982) reflecting precisely these structures. "Unless one accepts that the regularities (we perceive) are in some sense objectively real, one might as well stop doing science" (Davies 1990a). That is, reality is seen as a prerequisite for a non-arbitrary and reasonable development of theories. Without reality "anything goes" - which is downright unacceptable in physics. On the other hand, if regularities are objective in the sense that they depend on the structures of an objective outside world, it remains unclear why mathematics, which obviously does not include any information on these structures, is nevertheless so helpful in describing them in such a way that purely mathematical extrapolations lead to correct predictions. This is the old question about "the unreasonable effectiveness of mathematics in the natural sciences" (Wigner 1960), or, as Davies (1990b) put it, why the universe is "algorithmically compressible" (i.e. why the obviously complex structure of our world can in so many cases be described by means of relatively simple mathematical formulae). This question is closely linked to the question of why induction, and therefore science at all, succeeds. It is difficult to avoid asking whether mathematics, as the outcome of human thinking, has its own specificity which, for whatever reason, fits the specificity of what man would see or experience. As long as this question is not comprehensively answered, science may explain much - but not its own success. But how can such entirely disparate categories as perceiving and thinking be linked with each other?
This question will be discussed here in the context of a new constructivist version of evolutionary approaches to epistemology (Diettrich 1991, 1993), which will lead to a revised notion of reality, as well as to some rather unexpected links between the phenomena of non-classical physics and the mathematical findings of Gödel.

One of the first things we experience in our life is that there are correlations between our actions and what they bring about. Indeed, in many cases doing the same thing under the same conditions will lead to the same result. Physicists are accustomed to saying that results or, more generally, experiences are invariants of certain operations (a parlance they developed to highly successful perfection, particularly in quantum mechanics). To know which experience is the invariant of which operation (i.e. which action will lead to which experience) is crucial for the management of our whole lives. Without any knowledge of these correlations we can neither realise our intentions nor articulate reasonable expectations. The second experience is that we apparently have no influence on at least some of the correlations concerned. We know that a stone will fall if we just let it go, but we have no means to make it rise instead. Let us say that these correlations are due to what we will term weak reality. Strong reality, however, refers to the view that all the correlations and other regularities we see can be derived from the structures of an independent and objective outside world. Strong (or metaphysical) reality is the very legitimation of all empirical sciences: discovering the structure of the world is seen as a heuristic imperative for obtaining information on how to master the physical problems of our life: the better we know the world, the better we can control it. This is why no one would seriously contest the need for basic research aiming at nothing but new details of the world's structure. In a formal sense, strong reality can be seen as a special theory of weak reality insofar as it claims to explain the specificity of weak reality. The question central to this paper is whether strong reality is the only possible theory to explain weak reality, i.e. whether there are no other ways to escape the 'anything-goes' trap.

Constructivist Evolutionary Epistemology (CEE) (Diettrich 1991, 1993) to which we will refer here can be seen as a derivative of Evolutionary Epistemology (EE) (Campbell 1973, Riedl 1980, Vollmer 1975). EE argues that cognitive categories and capabilities have evolved phylogenetically as tools for our management of life in the same way as our organic survival equipment. CEE shares this view but, further to this, brings into play a primarily physical idea:

The very difficulties classical physics had in coming to terms with quantum mechanics and the theory of relativity arose from the use of variables and notions without checking whether a defining device (such as a measuring apparatus) could be constructed. To avoid similar experiences, physicists agreed to accept for their theories only those quantities which can be operationally defined (i.e. 'operationalised'). CEE extends this concept, which has proved heuristically very useful, with the idea that the demand for operational definitions is imperative not only for successful non-classical theoretical terms but also for classical observational terms and even for all logical and mathematical notions.

As to the observational terms, CEE realises this by interpreting all the regularities directly perceived by human sensory organs, and all laws of nature derived from them, as invariants of inborn mental cognitive operators. This applies even to the law of conservation of energy, one of the strongest cornerstones of physics. As this law can be derived from the homogeneity of time (i.e. from invariance under translation in time), it depends on the special physiological mechanisms generating the metric of our mental time perception and thereby determining what we would call homogeneous in time. The law of conservation of energy, therefore, is a specificum of human beings. Other beings with different mental generators (depending, say, on special elements of space perception) would not consider those processes to be equal in time on which we base our clocks and time measurements, such as the oscillations of a free pendulum or of other harmonic oscillators. They would instead refer to processes which are physically related to their own phylogenetically developed metric generator. Accordingly, they would arrive at different conservation laws (Diettrich 1989). This holds for causality too. That we consider lightning to be the cause of thunder (rather than the opposite) is based - prior to any later theory - on the fact that the time observed between lightning and the next thunderclap is usually much shorter, and varies less, than the length of time between thunderclap and lightning. But to distinguish between shorter and longer intervals is possible only within the context of a time metric generator, which is thereby central to the constitution of the causal order we have "discovered". This holds for any perception. The invariants of perceptional operators construct the syntax of our theories. They form, so to speak, what in quantum mechanics is called a representation in Hilbert space.
There are many such representations, and each of them constitutes a special picture of the world, but none is intrinsically privileged. Nor is the representation (i.e. the "Weltbild") which we, as human beings, use distinguished by nature. Its importance for us is due simply to the fact that it is based on the particular invariants of our phylogenetically established mental operators.
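The derivation mentioned above - energy conservation from the homogeneity of time - is, in its textbook form, an instance of Noether's theorem. A minimal sketch in standard Lagrangian notation (an illustration supplied here, not taken from the text):

```latex
% Homogeneity of time: L = L(q, \dot q) has no explicit t-dependence.
\frac{dL}{dt}
  = \frac{\partial L}{\partial q}\,\dot q
  + \frac{\partial L}{\partial \dot q}\,\ddot q
  \overset{\text{Euler--Lagrange}}{=}
  \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q}\right)\dot q
  + \frac{\partial L}{\partial \dot q}\,\ddot q
  = \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q}\,\dot q\right)
% Hence the energy function is conserved:
\frac{d}{dt}\underbrace{\left(\frac{\partial L}{\partial \dot q}\,\dot q - L\right)}_{E} = 0 .
```

A time metric generator with a different notion of what is "homogeneous in time" would single out a different invariant combination, which is the sense in which the conservation law is observer-specific.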

The concept of operational definitions allows a reduction of the distinction between perception and action, one of the central dichotomies of our classical world picture. Beyond measurement and cognitive operators, where we are interested mainly in invariants, there are operators in a more literal sense, such as our hands and tools, which we mainly use to modify something. A hammer, for example, is an instrument primarily designed to alter certain objects. But a hammer, in its capacity as an operator, also has invariants: objects and properties which resist the hammer's strokes of a given strength. The hammer, then, can be used to measure mechanical properties such as the strength of a material. So, both perceiving and acting mean applying operators. The essential difference is that in the case of perception we ask for the invariants of the operator in question, i.e. what remains unchanged under the application of the operator, whereas in the case of action we ask what changes under the operator's influence. A similar reduction to a rather minor detail concerns the frequently addressed difference between observational and theoretical terms: both can be defined operationally, observational terms by cognitive operators, theoretical terms by measurement instruments. This means in particular that it is no longer possible to say that observational terms refer to 'facts' and therefore have a closer relationship with the structures of reality than theoretical terms, which are a matter of scientific decisions.

If the regularities we perceive and the laws of nature we derive from them are really "home-made" in the sense proposed here, then the question arises as to what extent we still need empirical research and why we could not replace it by simply analysing our brain's hardware and software. Even if such a programme, despite all the methodological difficulties, succeeded and provided us with a better understanding of the metatheory of physics and of some laws of nature - would it really determine all the laws that modern physics may still discover? Or, in other words, is our brain the generator of the theory of everything?

If we extend the domain of inborn natural perception by means of physical, experimental or measurement facilities, we can use the results obtained to develop further the theories of classical physics, provided that the experimental and the inborn cognitive operators commute in the sense of operator algebra (we term these quantitative extensions). Otherwise the experimental operators will have invariants which no longer belong to the spectrum of the cognitive operators. The results concerned can then no longer be described in classical terms and require either additional ad hoc explanations from outside the theory in question or the formation of non-classical theories such as quantum mechanics (we term these qualitative extensions). So, only those "laws of nature" which result from purely sensorily perceived observations can be reduced (if at all!) to the structure of our brain and to the mental software which runs there. The laws of higher, non-classical physics, however, such as elementary particle physics, depend on the experimental operators used and their invariants.
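The commutation condition can be illustrated with a deliberately simple toy model (an illustration supplied here, not the paper's formalism): take 2x2 matrices as stand-ins for operators and their invariant directions (eigenvectors) as stand-ins for invariants. Commuting operators share invariant directions; a non-commuting extension brings invariants of its own:

```python
# Toy model: matrices as "operators", eigen-directions as "invariants".
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    """[A, B] = AB - BA; the zero matrix means A and B commute."""
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

A = [[2, 0], [0, 3]]   # "cognitive" operator: stretches along the axes
B = [[5, 0], [0, 7]]   # a compatible extension: same invariant directions
R = [[0, -1], [1, 0]]  # 90-degree rotation: an incompatible extension

print(commutator(A, B))  # zero matrix: quantitative extension
print(commutator(A, R))  # non-zero: R shares no invariant direction with A
```

In this caricature, B only refines what A already "sees", while R introduces invariants (here, none of A's axis directions survive R) that cannot be expressed in A's terms.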

Here we have to note that the construction of physical apparatuses, even for non-classical applications, is entirely classical. Mechanical, electrical and optical elements are combined according to the rules of classical physics and nevertheless lead to results which can no longer be interpreted in classical terms. Whether other criteria can be found to identify non-classical operators by means of their structure alone, or to construct them ad hoc in order to obtain new non-classical laws, remains an open question. The most striking consequence of this is the following: as the set of possible experiments is never closed, we can never be sure that we will not be faced with new and unpredictable invariants which would require possibly considerable realignments of previously designed theories. Scientific evolution, then, is as open as organic evolution. Human knowledge and scientific progress cannot approach a "theory of everything" (the pride of science, so to speak), just as organic evolution cannot converge towards a definitive and optimal species (the pride of creation, so to speak). Though modern physics cannot be reduced to brain functions, it is nevertheless home-made in the sense that it depends on the experiments we have chosen.

This is in conflict with what most physicists understand. Davies (1990a), for example, argues as follows: "Let me express this point in a somewhat novel way. Hawking (1979) has claimed that 'the end of theoretical physics may be in sight'. He refers to the promising progress made in unification, and the possibility that a 'theory of everything' might be around the corner. Although many physicists flatly reject this, it may nevertheless be correct. As Feynman (1965) has remarked, we can't go on making discoveries in physics at the present rate for ever. Either the subject will bog down in seemingly limitless complexity and/or difficulty, or it will be completed."

This allows us to see the realist's main argument in another light: the basic experience of all men is that our perception contains regularities we cannot influence (weak reality). So, the realist infers, they must be objective, and hence it is legitimate to try to condense them into the laws of an objective world. Here, we concede that we indeed have no means to influence the regularities perceived, nor can we alter what we call the (classical) laws of nature - but only as far as the present is concerned. In the past, as we have seen, we did intervene, through the phylogenetic decision on the development of the mental operators and thereby on the regularities we perceive. This process is finished, as the biological development of these operators can well be considered complete. What is not finished, however, is the development of possible physical extensions in the form of novel experimental facilities with new invariants leading to new laws. So, law-making is not completed in general. It has rather shifted from the genetic to the cultural level. In a certain sense the classical laws of nature we know are part of what we could call our cognitive phenotype, which can no more be changed than our organic phenotype. This has consequences for the notion of truth. If regularities are nothing but invariants of cognitive and experimental operators, then theories describing these regularities are correct or 'true' if and only if they emulate the mental and experimental processes generating the regularities concerned. As these processes are the outcome of human organic, cognitive and cultural history, the criterion that theories have to meet is coherence with this history, rather than coherence with the structures of an external and previously defined reality. The same reasoning applies to organic adaptation to reality.
The survival conditions for evolutionary changes depend on which organic requirements of the species concerned have to be met: that we have to protect ourselves against cold is not a general requirement imposed by environmental reality but rather the consequence of our ancestors' 'decision' to become warm-blooded animals. So, in both organic and cognitive evolution, the criterion of being in accordance with what has been acquired in the past saves us from the 'anything goes' verdict as effectively as the existence of an objective outside world was expected to do.

The strongest argument against reality, however, is that it does not comply with the criterion of operational definition. According to our understanding, the structures of reality and the laws of nature reflecting these structures are something upon which we have no influence. Reality, therefore, must be invariant under all our doing and acting, not only now but also under anything we may do in the future. So, an operator which is to characterise the category of reality must commute with all possible operators. Unfortunately, the only operator which does so is the trivial identity operator. An operational definition of reality is non-trivial only if it is based on a finite subset of all operators, such as the set of all operations men have ever carried out up to now - and this is exactly the definition we implicitly apply when we say: "according to all our (past) experiences nature has this or that structure".

This argument makes clear that the rejection of the classical notion of strong reality as proposed here does not mean that we could ignore our actual environment or the facts of our actual situation, such as the timetables at the station or the traffic lights in the street, as is often argued in this context. It only means that the various life strategies of all possible living beings have no common denominator to which we could refer as universal laws of nature in the sense that everybody could profit from them, independent of his physical and cognitive constitution. The meaning of laws is that knowledge of them helps to solve problems. But there are no universal laws, just as there are no universal problems. Problems are always special problems of special organisms with special constitutions.

But if there is no real reason for reality, why then did the notion of the independently existing structures of a real world nevertheless evolve as the most central category of our life strategy? There may have been functional rather than structural reasons. Mutually profitable communication about our perceptions and experiences is possible only if the way we interpret sensory data is accepted as a general human standard. Evolution has rendered this standard immutable by a remarkable trick. Evolution "told" us that the regularities we perceive have their origin in a region to which we have no access, as it is "outside" ourselves, so that the structure of what we see is independent of us. This region we called reality and, as a consequence, we call anyone who does not believe in the universal character of reality an illusionist.

We have already mentioned that the concept of operational definition has to be applied to mathematical and logical terms as well as to observational terms, i.e. we have to consider mathematical and logical structures also as invariants of special mental operators. If we start from the plausible suggestion that the perceptional and mathematical operators concerned have co-evolved and are therefore related to each other, then their products, the mathematical and the sensuously perceived structures, must also show certain similarities. This would explain why mathematics does so well in describing the regularities we perceive, or why the world is algorithmically compressible: the physical world - which is the world of our perceptions - is itself, on the ground of its mental genesis, algorithmically structured. Perceived regularities and mathematical structures are phylogenetic homologa. This is why the formulation of (physical) theories in terms of the specific mathematics we are acquainted with is an essential prerequisite for their capability to emulate the genesis of perception and, therefore, for their truthfulness. From the classical point of view (i.e. within the theory of reality), however, the algorithmic compressibility of the world cannot be explained, and neither, on the same basis, can the success of induction.
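What "algorithmically compressible" means can be made concrete with a small hypothetical experiment (supplied here for illustration): data generated by a simple law - free-fall distances s = (g/2)t² - compress far better under a generic compressor than structureless data of the same length, because the law itself is the short description.

```python
# Lawful versus lawless data under a generic compressor (zlib).
import random
import zlib

# Positions of a uniformly accelerated body: s = (g/2) * t^2 with g = 10
lawful = ",".join(str(5 * t * t) for t in range(1000)).encode()

# Structureless bytes of the same length (seeded for reproducibility)
random.seed(0)
lawless = bytes(random.randrange(256) for _ in range(len(lawful)))

ratio = lambda data: len(zlib.compress(data)) / len(data)
print(ratio(lawful))   # well below 1: the underlying law makes the data compressible
print(ratio(lawless))  # about 1: no regularity, nothing to compress
```

A world of perceptions structured by mental operators is, in this picture, on the "lawful" side by construction, which is why short mathematical descriptions of it succeed.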

This line of thinking also provides us with the possibility of defining the terms simple and complex (Hedrich 1993, Lindgren 1988, Maddox 1990) more precisely: a structure is simple if it is the invariant of a single cognitive operator, and it is the more complex the more operators have to be combined in order to get a result of which the structure is an invariant. That we consider the motion of a force-free body to be the simplest process in time is based on the fact that it is the invariant of our time metric generator, rather than on what we usually call the motion's objective simplicity. On the other hand, that the sequence of digits in the decimal representation of the square root of 2 seems so complex is due to the fact that the generating algorithm is not realised within our brain's hardware or software. Physical simplicity or complexity depends on our cognitive 'phenotype', i.e. on the set of cognitive operators we have chosen phylogenetically to construct our 'Weltbild'. It is as in mathematics: a function which is simple in exponential representation is usually rather complex in Fourier representation, and vice versa.
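The √2 example can be made tangible: a few lines suffice to generate its "complex-looking" digit sequence, i.e. the sequence is algorithmically simple even though no such generator is realised in our cognitive equipment (an illustrative sketch using Python's integer square root):

```python
# The decimal digits of sqrt(2): irregular to the eye, trivial to generate.
from math import isqrt

def sqrt2_digits(n):
    """Return the first n decimal digits of sqrt(2), including the leading 1."""
    # isqrt(2 * 10^(2(n-1))) = floor(sqrt(2) * 10^(n-1))
    return str(isqrt(2 * 10 ** (2 * (n - 1))))

print(sqrt2_digits(10))  # 1414213562
```

The digit string looks patternless, yet the program above is a complete, short description of it - complexity here lies in the eye (or the operators) of the beholder.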

If both perceptional and mathematical categories are really the outcome of analogous mental operators, we must face the possibility of qualitative extensions in mathematics as well. This sounds strange, but there is some plausibility behind the idea. Just as the operators generating sensory perception can be extended by physical facilities, the mental operators generating our elementary mathematical conceptions can be extended through higher and more complex mathematical calculi. This is what mathematics does as a science. Insofar as the higher mathematics used is based on appropriate axioms, i.e. (in CEE parlance) on axioms which correctly emulate the cognitive operators concerned, there is no reason from the classical point of view to believe that this will lead to "non-classical" statements, i.e. to statements which can no longer be formulated within the syntax constituted by the axioms concerned. This view substantiated the confidence in Hilbert's programme of the complete axiomatisation of mathematics - or, in the terms used here, the confidence that mathematics can extend itself only quantitatively.

From Gödel, however, we know (see the summary of Nagel and Newman 1958) that there are mathematical procedures which, though entirely constructed by means of well-proven classical methods, lead to true statements whose truth can no longer be derived from the axioms concerned. Mathematics (of the kind we know) has turned out to be as incomplete as classical physics. In both cases, nothing but the application of well-tried and sound methods and procedures can lead to results which cannot be extracted from the foundations of these methods and procedures. We must therefore conclude that we cannot be sure that there will be no surprises of a similar kind in the future. Indeed: just as experimental operators, though constructed entirely according to the rules of classical physics, may lead to results which cannot be described in classical terms, there are also mathematical calculi which, as shown by Gödel, though based entirely on well-tested axioms, can lead to statements which cannot be proven within the context of these axioms. So we have qualitative extensions in physics as well as in mathematics.
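For reference, the first incompleteness theorem in its standard modern formulation (following expositions such as Nagel and Newman's; the statement below is the textbook form, not a quotation from the text):

```latex
\textbf{Theorem (G\"odel 1931, with Rosser's refinement).}
Let $T$ be a consistent, effectively axiomatizable formal theory
containing elementary arithmetic. Then there exists a sentence $G_T$
in the language of $T$ such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T ,
\]
i.e.\ $G_T$ is undecidable in $T$, although, for sound $T$, $G_T$ is true.
```

It is this gap between what the axioms generate and what is nevertheless true that the text reads as the mathematical counterpart of a qualitative extension in physics.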

In this respect, the only difference between the physical and the mathematical situation is that in physics we already have two non-classical theories (quantum mechanics and special relativity) and that we can say precisely under what conditions we have to apply them, namely (simply put) in subatomic domains and at very high speeds. In mathematics we know from Gödel's theorem only that there must be non-classical phenomena, but we do not know what they are and, more particularly, we cannot say which operations would take us out of the classical domain. Is it the notion of cardinal or ordinal numbers, the notion of set or of infinity, or the combined application of these notions which constitutes the cause of non-classical mathematical phenomena? Will logic turn out to be as incomplete as physics or mathematics? And what will happen as we deal with more and more powerful computers? Up to now we do not know. But when we do, we will have a modern, non-classical mathematics as well as physics.

The astonishment of mathematicians with respect to Gödel's proof continues unbroken. The literature is full of such manifestations. Among others, the explanation was proposed that the brain's action cannot be entirely algorithmic (Lucas 1961, Penrose 1989). Apart from the fact that it is not quite clear what in a neural network such as the brain could be non-algorithmic, this kind of reasoning is not necessary at all. What follows from Gödel's proof is only that what certain mathematical calculi can bring about is not necessarily the same as what a combination of them could generate. It is as if physicists believed that physics cannot be entirely natural because apparatus constructed according to the laws of classical physics does not necessarily reproduce the laws of classical physics, as seen in quantum mechanics.

In contrast to physicists, who suggested as an explanation for their respective experiences that they had happened to enter domains of nature where other and unpredictable laws rule, mathematicians hesitated to admit the idea that mathematical research is empirical in the sense that it can lead to really new discoveries which could in no way have been expected, not even a posteriori. If mathematics had its own specificity at all, as included in the notion of Plato's reality, then, according to general mathematical understanding, this must be something which is included in the very rudiments and which, from there, determines all possible consequences. In other words: if there is such a thing as Plato's reality, it must reveal itself in the fact that a consistent mathematics can be based only on particular axioms (the analogue, so to speak, of the laws of physical reality). Once they have been found - as was Hilbert's conviction - they would settle once and for all the "phenotype" of all future mathematics. Mathematics, then, would be nothing but a kind of craft filling in the possibilities defined by the axioms identified - similar to physics which, according to the prevailing understanding, could do nothing but look for applications of the "theory of everything" once it has been found.

In the beginning it was hoped that extending or modifying the axioms in view of the unprovable statements concerned could solve the problem. Unfortunately, the new axioms would be in no better a situation, as for any set of axioms unprovable statements may be found. This applies also to physics. Of course, we can modify theories according to 'unprovable' phenomena, i.e. new phenomena which cannot be formulated within the existing theories, and we did so when establishing quantum mechanics - but this provides no guarantee that similar things will not happen again and again. So, neither in physics nor in mathematics can a 'tool for everything' be found by means of which all problems concerned, present and future, can be solved in a purely technical or formalistic manner.

The relationship between physics and mathematics as suggested by CEE constitutes a certain heuristic balance. Experimental physics is no longer privileged as the provider of information from the outside world which mathematics merely has to set into theories. Instead, there are reasonable hopes that a successful study of non-classical mathematical phenomena could be a key to a better understanding of non-classical phenomena in physics too - and vice versa. In a way, physics and mathematics can see each other as very general theories. So, mathematics could outgrow the role of an auxiliary science, which it has held since the outset of empirical science, into the role of a heuristic partner with equal rights. Strictly speaking, this has already happened. Of course, that we consider the world to be algorithmically compressible reflects nothing but the suitability of mathematics for prognostic purposes in physics. This is what physicists call "the unreasonable effectiveness of mathematics in the natural sciences" which, in the light of CEE, might well be reasonable.

Barrow, J. D. (1990): Theories of everything. Oxford University Press.

Campbell, D. T. (1973): Evolutionary epistemology. in Schilpp, P. (ed.): The Philosophy of Karl Popper. Part I, Open Court, La Salle, pp. 413-463

Chalmers, A. F. (1982): What is this thing called science? Buckingham: Open University Press

Feynman, R. P. (1965): The character of physical law. BBC Publication.

Davies, P. C. W. (1990a): Why is the physical world so comprehensible? In Zurek, W. H. (ed.): Complexity, Entropy and the Physics of Information. Santa Fe Institute Studies in the Sciences of Complexity, Vol. VIII, Addison Wesley, pp. 61-70.

Davies, P. C. W. (1990b): Why is the universe knowable? In Mickens, R. E. (ed.): Mathematics and Science. World Scientific Press, pp. 14-33.

Diettrich, O. (1989): Kognitive, organische und gesellschaftliche Evolution. Berlin Hamburg: Parey.

Diettrich, O. (1991): Induction and evolution of cognition and science. In Gertrudis Van de Vijver (Ed.): Teleology and Selforganisation. Philosophica Nr. 47/II, p. 81-109

Diettrich, O. (1993): Cognitive and communicative development in reality free representation. Communication and Cognition - Artificial Intelligence, **11**, Nr. 1-2, pp. 55-89

Hawking, S. W. (1979): Is the end in sight for theoretical physics? Inaugural Lecture for the Lucasian Chair. University of Cambridge.

Hedrich, R. (1993): Die nicht ganz so unglaubliche Effizienz der Mathematik. Philosophia Naturalis, Bd. 30, p. 106-125.

Lindgren, K.; Nordahl, M. G. (1988): Complexity measures and cellular automata. Complex Systems 2, 409-440

Lucas, J. R. (1961): Minds, Machines and Gödel. Philosophy 36, p. 120-124

Maddox, J. (1990): Complicated measures of Complexity. Nature 344, 705

Nagel, E. and Newman, J. (1958): Gödel's Proof. London: Routledge

Penrose, R. (1989): The Emperor's new Mind. Oxford University Press

Riedl, R. (1980): Biologie der Erkenntnis. Berlin, Hamburg: Parey

Vollmer, G. (1975): Evolutionäre Erkenntnistheorie. Stuttgart: S. Hirzel

Wigner, E. (1960): The unreasonable effectiveness of mathematics in the natural sciences. Comm. Pure Appl. Math. 13, 1.