Mere programming, which is what almost everyone today learns and practices, inevitably produces buggy code; and bugs don’t just create risks of accidental software failures (bad enough on their own); they also create vulnerabilities that allow adversaries to hijack software for their own ends. Even the most rigorous application of traditional coding methods still produces several bugs per thousand lines of code. With modern systems comprising tens of millions of lines of code (and interconnected systems of systems involving billions), the systems we rely on today contain millions of bugs, creating endless opportunities for hackers to compromise essentially all major systems. Our inability to produce trustworthy code for truly critical applications puts our society at serious risk of major system failures, especially if we ever end up in a major conflict with capable adversaries. Even short of that, as code moves deeply into the physical environment (e.g., into self-driving cars), bugs present enormous risks of system failures with consequences ranging from severe to catastrophic.
Imagine if the grocery stores were empty because our transportation systems were no longer working; our banks and payment systems were down because national and international financial systems had been attacked; and telecommunications and emergency response systems were inoperable. The software bugs that we simply cannot avoid producing today using mere programming techniques pose such existential risks to our society. It’s entirely plausible, for example, that the next time we get into a major international conflict, much of our (and our adversaries’) critical infrastructure will be brought down, perhaps even by sleeper code already deployed in these systems. Bugs innocently introduced into the code that runs these systems by people doing mere programming allow adversaries to hack in and plant their own malicious code: hidden in the labyrinthine complexity of millions of lines of code, working silently or waiting to be activated on demand, with devastating consequences. The problem is that these systems have essentially all been built by mere programmers using mere programming concepts, methods, and tools. For the most part, we continue to teach programming as if it were carpentry, not engineering: as a craft, not as a profession distinguished by a deep understanding of the scientific, logical, and mathematical foundations of software, of its tools and techniques, and of their practical application to produce systems that we can certifiably rely upon.
My teaching aims to produce the software super-heroes of the future by rebooting what and how we teach. We need to teach the logical foundations of programming starting no later than college-level CS1, and ideally in primary and secondary school. Even very young children can understand mathematically rigorous logical reasoning; now we need to teach it. It sounds scary, but it’s not. What software is all about becomes truly clear only when it is introduced in a rigorously constructive manner. Rather than tossing students into a highly complex, highly error-prone industrial imperative programming language and environment (e.g., C++, Python, or Java), we can start with the shockingly simple but extraordinarily expressive logic and data definition capabilities of a foundational constructive logic proof assistant and its mechanisms for functional programming. We can build everything from there, from the “standard libraries” of a typical industrial language to the basic concepts and mechanisms of any industrial imperative programming language. But it’s not enough to teach mere functional programming; using languages such as ML, Haskell, or Racket is not good enough, either.
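To make this concrete, here is a minimal sketch in Lean (one such proof assistant; Coq is another) of the kind of inductive data definition and structurally recursive function a beginning student might write; the names here are illustrative, not from any particular course:

```lean
-- Inductive data definition: natural numbers built from exactly
-- two constructors, zero and successor.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

-- Functional programming by structural recursion on that definition:
-- addition defined by cases on the first argument.
def add : Nat' → Nat' → Nat'
  | .zero,   n => n
  | .succ m, n => .succ (add m n)
```

From two constructors and one recursive definition, a surprising amount of the arithmetic students already know can be built up and, crucially, reasoned about with full rigor.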
What we now need, at least for critical systems, is to produce not just code but also proofs that our code is fit for purpose: that it has rigorously specified engineering properties providing assurance that it will not fail in the many ways that mere code without such proofs is prone to fail. The problem with mere functional languages is that, just like their popular industrial counterparts (Java, Python, C++, Swift, and so forth), they lack the means to express logical propositions about code, to produce proofs of such propositions in the same language in which the code is written, and to have those proofs checked for validity fully automatically before the code can even be run.
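A minimal sketch in Lean of what this looks like in practice: the code, the proposition about it, and the machine-checked proof all live in the same language and the same file, and the file does not compile unless the proof goes through. (The function and theorem here are hypothetical examples, not taken from any particular curriculum.)

```lean
-- Ordinary executable code.
def double (n : Nat) : Nat := n + n

-- A logical proposition about that code, written in the same
-- language, together with its proof. The proof is checked fully
-- automatically at compile time; if it fails, the program cannot
-- even be built, let alone run.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

This is the qualitative difference: in Java or Haskell the property “double n = 2 * n” can at best be tested on some inputs, whereas here it is established for all inputs before the code ever executes.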
Constructive logic proof assistants provide exactly such languages and mechanisms. People who learn how to use these and related tools, rooted in the mathematical logic of computing, will have the new super-powers of the future. Microsoft Research, for example, is now developing and applying formal logical methods to produce a mathematically certified implementation of the HTTPS protocol stack that makes the World-Wide Web go. We do our students an enormous favor by teaching not only mere programming but also the logical concepts, notations, reasoning techniques, and tools needed to provide strong assurances that code is good enough to count on, even for one’s very survival.
At both the undergraduate and graduate levels, I have therefore been working to develop an approach to teaching programming that starts with its logical foundations. I teach an introductory CS course for majors in our College of Arts and Sciences that covers a computational subset of an introductory course on constructive logic. At the graduate level, I teach the full constructive logic course using Coq and Benjamin Pierce’s book, Software Foundations. My view is that debates over which programming language to use to teach intro CS pale in comparison with the question of whether to provide students with foundational knowledge for the engineering of software, rather than just giving them what one might call software carpentry skills in any of the usual industrial languages.