Powerful mathematical theories of computer science are characterized by *inferential* "completeness," meaning every proposition is provable or disprovable, the opposite of the inferential incompleteness that is the essence of Kurt Gödel's results. Strongly typed theories of natural numbers, ordinals, sets, lambda calculus, and Actors each characterize mathematical objects up to a unique isomorphism with inferential completeness, thereby removing any ambiguity in the mathematical objects being "axiomatized."^{2} *Having these mathematical objects precisely defined makes systems more secure because there are fewer ambiguities and holes in reasoning to be exploited.* On the other hand, the 1^{st}-order theories of Gödel's results necessarily leave mathematical objects (such as natural numbers, ordinals, lambda expressions, and Actors) ill-defined. Computer science thus relies on the *opposite* of Gödel's results.

Self-reference in mathematics arises from the possible existence of fixed points. For example, the mapping from a proposition p to its negation (p |-> ~p) would have as a fixed point the proposition *I'mFalse* (such that *I'mFalse* ⇔ ~*I'mFalse*), except that in the mapping, the proposition ~p has order one greater than the proposition p because p is a propositional variable. Consequently, a fixed point does *not* exist, and the proposition *I'mFalse* does not appear in the modern mathematics of computer science.
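The order-raising constraint can be rendered schematically as follows (an informal sketch in the spirit of the strongly typed theories of reference 2, not a formal derivation):

```latex
% Propositions are stratified by order; negation raises the order by one:
p : \mathit{Proposition}\langle i \rangle
  \;\Longrightarrow\;
  \lnot p : \mathit{Proposition}\langle i+1 \rangle

% A fixed point of  p \mapsto \lnot p  would be a proposition
% I'mFalse with  I'mFalse \Leftrightarrow \lnot I'mFalse,
% forcing a single proposition to have both order i and order i+1,
% which the stratification excludes.
```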

Moreover, the mapping from a proposition p to the proposition that p is unprovable (p |-> ~|-p) does not have a fixed point because, in the mapping, the proposition ~|-p has order one greater than the proposition p, meaning a fixed point *I'mUnprovable* (such that *I'mUnprovable* ⇔ ~|-*I'mUnprovable*) used as the basis of Gödel's results likewise does *not* exist in the modern mathematics of computer science.^{2} Wittgenstein correctly noted that allowing Gödel's *I'mUnprovable* infers inconsistency in mathematics:^{3}

"Let us suppose [Gödel was correct^{4} and therefore] I prove the improvability (in Russell's system) of [Gödel's *I'mUnprovable*] *P* [*i.e.*, |-_{Russell} ~|-_{Russell} *P* where *P* ⇔ ~|-_{Russell} *P*]; then by this proof I have proved *P* [*i.e.*, |-_{Russell} *P* because *P* ⇔ ~|-_{Russell} *P*]. Now if this proof were one in Russell's system [*i.e.*, |-_{Russell} |-_{Russell} *P*] — I should in this case have proved at once that it belonged [*i.e.*, |-_{Russell} *P*] and did not belong [*i.e.*, |-_{Russell} ~*P* because ~*P* ⇔ |-_{Russell} *P*] to Russell's system. But there is a contradiction here! [*i.e.*, |-_{Russell} *P* and |-_{Russell} ~*P*] ... [This] is what comes of making up such sentences."

Gödel made important contributions to the metamathematics of 1^{st}-order logic with the compactness theorem and formalization of provability. However, Gödel's results do not hold for even the standard strongly typed theory of natural numbers (based on the work of Richard Dedekind^{1}) that characterizes the natural numbers up to a unique isomorphism with inferential completeness, which is not 1^{st}-order because it is not compact. The standard theory is "effective" because proof-checking is computationally decidable,^{2} whereas it is of no practical importance that theorems are not computationally enumerable because they are uncountable. Also contrary to Gödel's results, the standard theory proves its own "formal" consistency.^{2} ("Operational" consistency using stated axioms and rules of the theory remains unresolved.) For these and other reasons, computer science cannot rely on 1^{st}-order theories for its foundations.

Theorems abstracted from strings are not computationally enumerable because otherwise (by inferential completeness) whether a proposition abstracted from a string is a theorem would be computationally decidable by enumerating theorems until the proposition or its negation occurred. Because theorems abstracted from strings are not computationally enumerable, the standard theory of natural numbers is algorithmically "inexhaustible" (the intuition behind the work of Torkel Franzén).^{2} Incompleteness for foundational mathematical theories of computer science is computational rather than inferential:

- Computational undecidability of whether a proposition holds
- Incompleteness in computationally enumerating which propositions hold
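The enumeration argument above can be illustrated with a toy sketch (all names here are illustrative, and `enumerate_theorems` stands in for a hypothetical enumerator of an inferentially complete theory): if theorems were computationally enumerable, inferential completeness would immediately yield a decision procedure, which is why enumerability must fail.

```python
from typing import Iterator


def enumerate_theorems() -> Iterator[str]:
    """Hypothetical enumerator for a toy inferentially complete 'theory'
    whose theorems are the true sentences 'even(n)' / 'not even(n)'."""
    n = 0
    while True:
        yield f"even({n})" if n % 2 == 0 else f"not even({n})"
        n += 1


def decide(proposition: str) -> bool:
    """If every proposition or its negation eventually appears in the
    enumeration (inferential completeness), enumerating theorems is a
    decision procedure: scan until one of the pair occurs."""
    negation = (proposition[len("not "):] if proposition.startswith("not ")
                else "not " + proposition)
    for theorem in enumerate_theorems():
        if theorem == proposition:
            return True
        if theorem == negation:
            return False


print(decide("even(4)"))      # True
print(decide("not even(7)"))  # True
```

For the standard theory of natural numbers no such enumerator can exist, since (per the text) a decision procedure for theoremhood is computationally impossible.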

On the other hand, fixed points do exist for strongly typed procedures of standard mathematical theories, which are used to prove *computational* "undecidability" of whether a proposition is a theorem à la Alonzo Church and Alan Turing, making these "self-references" valid for the foundations of computer science.
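A fixed point for procedures is unproblematic in exactly the way a fixed point for propositions is not. Church's fixed-point combinator, sketched below in Python in its strict (Z) form, produces a recursive procedure from a non-recursive functional (a minimal sketch; the names `fix` and `fact` are illustrative):

```python
def fix(F):
    """Strict fixed-point (Z) combinator: returns a procedure f
    satisfying f(v) == F(f)(v), with self-application delayed under a
    lambda so that an eagerly evaluated language does not loop forever."""
    return (lambda x: F(lambda v: x(x)(v)))(lambda x: F(lambda v: x(x)(v)))


# A non-recursive functional whose fixed point is the factorial procedure.
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

print(fact(5))  # 120
```

Unlike the propositional case, no order stratification is violated here: the procedure produced by `fix` is an ordinary typed procedure, which is why such self-referential constructions remain available for the Church–Turing undecidability arguments.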

**References**

1. Richard Dedekind. *What are and what should the numbers be?* Friedr. Vieweg & Sohn, 1888. Translated by David E. Joyce, Clark University, Dec. 2005.

2. Carl Hewitt. *Strong Types for Direct Logic.* HAL Archive; https://hal.archives-ouvertes.fr/hal-01566393

3. Ludwig Wittgenstein. *Remarks on the Foundations of Mathematics*, Revised Edition. Basil Blackwell, 1978.

4. Kurt Gödel. *On formally undecidable propositions of Principia Mathematica.* Monatshefte für Mathematik und Physik, 1931. Translation in *From Frege to Gödel: A Source Book in Mathematical Logic*. Harvard University Press, 1967.

**Carl Hewitt** is an emeritus professor of the Massachusetts Institute of Technology. He is board chair of iRobust, an international scientific society for the promotion of the field of Inconsistency Robustness, and board chair of Standard IoT, an international standards organization for the Internet of Things, which is using the Actor Model to unify and generalize emerging standards for IoT.