"Why not just use Lean?"


"Why not just use Lean?" 23 Apr 2026 [ AUTOMATH LCF HOL system HOL Light Lean formalised mathematics ] I have been told that when proposing to formalise mathematics these days, you have to explain why you are not using Lean. And that reminds me why I left the dependent-typed world 40 years ago: its cultism, insularity and conformity. Lean is a great language with good tools, a large library and a huge, enthusiastic user community that has lately accomplished astounding things. But let’s not forget that the formalisation of mathematics goes back nearly 60 years. Amidst the hype around today’s progress, we must remember how we got here. It was not by people following the crowd. In the beginning, there was AUTOMATH Part of the hype mentioned above is the frequent claim “Lean has made the formalisation of mathematics possible”. Sorry, we got there in 1968. NG de Bruijn’s AUTOMATH already included most of the necessary ingredients. By 1977, Jutting had used it to formalise Landau’s Foundations of Analysis, which covers the construction of the complex numbers starting from pure logic. Jutting worked with equivalence classes and with sets of rational numbers. He formally proved the Dedekind completeness of the real number line. His accomplishment would not be matched for 20 years, despite vast advances in computer power. Finally, in the mid-90s, the real numbers were formalised again by John Harrison (using HOL Light) and Jacques Fleuriot (Isabelle/HOL). I believe that almost anything that has been formalised today in any system could have been formalised in AUTOMATH. Its main drawbacks were its notation, which really was horrible, and its complete lack of automation. Proofs were long and unreadable. And yet, for reasoning about equivalence classes, it is still probably better than Rocq. For while users of the latter rail against “setoid hell”, Jutting in his dissertation dispassionately describes his formalisation of equivalence classes. He even formalised one of Landau’s chapters a second time, adopting equivalence classes because he thought they were the right approach. And just after, there were Boyer and Moore From a completely different corner came the work of Robert Boyer, J Moore and their many colleagues. First announced in 1973 with the title “Proving theorems about LISP functions”, they set out their objective as the verification of code, not mathematics. Their “computational logic” has clear limitations for general mathematics, but this has not prevented its use in formalising a variety of deep results, ranging from Gödel’s incompleteness theorem to quadratic reciprocity to the Banach–Tarski theorem. The current incarnation is called ACL2 and it is chiefly applied to hardware verification. You can go far by being different. After LCF: Coq, HOL and Isabelle The groundbreaking Edinburgh LCF focused narrowly on programming language theory, but its idea of having a functional programming language as the metalanguage (hence ML) of a proof assistant had a broad impact. Groups in Cambridge, INRIA, Cornell and further afield built tools using ML, including early versions of HOL, Coq (now Rocq) and Nuprl. The HOL group was firmly fixated on hardware verification, but the need to verify floating point hardware brought with it a need for real analysis. Soon, John Harrison had proved some serious mathematics, such as the prime number theorem via Cauchy’s integral formula. He set himself the task of verifying as many of the famous 100 theorems…

This excerpt is published under fair use for community discussion. Read the full article at Github.
