in reply to Re^2: What do you know, and how do you know that you know it? in thread What do you know, and how do you know that you know it?
What he described is less ambitious than Hilbert's Program.
He just wanted to have computers (which are hopefully less fallible than humans) systematically verify known mathematics. While I see this as interesting to attempt, it is also doomed to fail, because new math is being produced faster than you could possibly verify it.
Which brings up a dirty little secret of mathematics. You can consider a mathematical proof as being like a computer program that has been specced out in detail, but never run. Yeah, you think that it would work. But if you tried to implement it you'd run into little roadblocks. Most of them you could get around. But you never have any idea what is lurking in there, and no real way to find out.
And no, this is not a trivial problem. In fact entire areas of mathematics have fallen apart because problems were found, and found to be serious. My current favorite example is the classification of finite simple groups. Nobody has really reviewed the proof. Everyone knows that the proof has flaws. And the situation is serious enough that there is a decades-long effort under way to find a different proof of the result! (The effort was started by some of the people who made their reputation working on the original proof.)
Re^4: What do you know, and how do you know that you know it?
by kabel (Chaplain) on Aug 02, 2004 at 21:53 UTC

i think the proof problem is due to the application of classical logic. in intuitionistic logic, there is no such thing as a pure existence proof. instead, existence is shown by providing a method of computation (an algorithm) which constructs the thing whose existence is to be proved.
provided that a) existence proofs are the problem and b) i understood your points.
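as a rough illustration, here is a lean 4 sketch (names are mine, and it assumes a recent toolchain); the proof term carries the witness m + 1 together with the evidence that it works, so the construction really is part of the proof:

    -- a constructive existence proof: the term supplies an explicit witness
    -- (m + 1) and the evidence that it is greater than m, instead of arguing
    -- that a witness cannot fail to exist.
    theorem exists_greater (m : Nat) : ∃ n, n > m :=
      ⟨m + 1, Nat.lt_succ_self m⟩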

Neither condition a) nor condition b) is satisfied.
The problem has nothing to do with formalism vs. intuitionism. The problem is that mathematical proofs are written by humans and read by other humans, yet are supposed to adhere to an inhuman level of consistency. It is human to err, and as such errors compound, purported absolute proofs become... somewhat less so.
I made the analogy to programming because this audience is one that has experienced firsthand the chasm between what you think that your program should do, and what it will do when put into a computer. I drew the analogy to a program that has only been specified because, at the research level, most proofs have significant gaps and steps that the reader is expected to be able to fill in. (Sometimes, of course, these gaps can't be filled in. At least not easily.) This is part of why a professional mathematician is not surprised to find that they can only read research papers at a rate of a page or two a day.

so you are a step or two deeper than what i was thinking. thanks for clarifying. (sorry, i need to wrap this in my own words to get a grasp of it.) second shot *peng*
mathematical proofs are written by few and read by many. let's assume that a script is also read by many. sure, the writer must expect a reader to be at a certain level of skill, otherwise immediate understanding will not be possible. the writer comments everything he thinks can be misinterpreted, and everything he knows is his own idiom (and perhaps more), so that if you read the script, it makes perfect sense.
but this seems to be the exception. because of the effects of the personal belief system, everyone has another view of the solution of a problem. so if the reader does not understand, he is either not qualified enough or he cannot see the intentions the writer had while writing it. in the latter case there is information loss; this information should actually be represented by a comment.
a perfect script is one whose writer has untangled all his belief knots, and where everything he cannot expect a reader to know is conveyed either by a good naming scheme or by comments. that means there are no perfect scripts, because perfection depends on the knowledge level / belief system of the reader.
Re^4: What do you know, and how do you know that you know it?
by hv (Parson) on Aug 03, 2004 at 11:17 UTC

It need not necessarily be doomed to fail for that reason, if the software involved could be made useful enough that from some point on an increasing proportion of new maths were produced using it.
Making the technology sufficiently useful is probably the magic part though; fixing existing proofs is also likely to be a big job, since my impression is that even today they are riddled with informal appeals to reason or analogy in a manner that would in many cases be hard to formalise.
Of course any hard step can be left for later, or for someone else, simply by declaring that step as an axiom.
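For instance (a small Lean 4 sketch; hard_step is an invented stand-in, not a real library lemma), an axiom lets everything downstream type-check against its bare statement:

    -- a hard intermediate step, assumed rather than proved
    axiom hard_step : ∀ n : Nat, 0 + n = n

    -- later results can use it as though it had been proved
    theorem uses_hard_step (n : Nat) : 0 + (0 + n) = n := by
      rw [hard_step, hard_step]

Everything built on it is then only as trustworthy as the assumed step, which is rather the point.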
An interesting aspect of this (though also one with the potential to explode the problem space) is that it could make it much easier to see the overlap in accessibility of theorems under different sets of rules of logic.
I think I first started thinking about this way back when I was first reading Gödel, Escher, Bach. I spent a lot of time with the Peano axioms described there, and while I could prove things like commutativity and associativity of addition and multiplication easily enough, the lack of axioms for handling negatives meant it was impossible to prove something like "every integer is either even or odd":
Ax: (Ey: ((x = (SS0 . y)) v (x = S(SS0 . y))))
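The non-negative half of that statement, at least, does go through by a simple induction. A sketch in Lean 4 (assuming a recent toolchain with the built-in omega tactic, and with names of my own choosing):

    -- every natural number is even or odd
    theorem even_or_odd (x : Nat) : (∃ y, x = 2 * y) ∨ (∃ y, x = 2 * y + 1) := by
      induction x with
      | zero => exact Or.inl ⟨0, by omega⟩
      | succ x ih =>
        cases ih with
        | inl h =>
          cases h with
          | intro y hy => exact Or.inr ⟨y, by omega⟩      -- x = 2y, so x + 1 = 2y + 1
        | inr h =>
          cases h with
          | intro y hy => exact Or.inl ⟨y + 1, by omega⟩  -- x = 2y + 1, so x + 1 = 2(y + 1)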
Hugo

Fixing existing proofs is the key roadblock. Furthermore, automated proof techniques would have different amounts of utility in different parts of math. And finally, most mathematicians like understanding their proofs. Automated proofs don't tend to be understandable. It is like the endgame studies that computers have done in chess: you can see that it works, but you can't see how anyone could think of or remember it. (Which is why no human had thought of it...)
Incidentally, your playing around with the Peano axioms misled you. The Peano axioms allow you to reason about the positive integers. (Proving that every positive integer is either even or odd is simple induction.) To get from there to the integers, you model each integer as an equivalence class of pairs of positive integers (p, n), with (p, n) in the same equivalence class as (p', n') if p' + n = p + n'. (Think of (p, n) as p - n.) Now you have to prove that the equivalence relation is well-defined, and that the definition of addition is likewise well-defined. Define the usual multiplication, and prove that that works too. And that you can map the positive integers into the integers with p -> (p + 1, 1). Then you can start proving other things, such as that every integer is positive, negative, or zero.
Once you have all of that, then you're in a position to prove things about the integers (like every integer is even or odd).
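To give a flavor of the bookkeeping, here is a Lean 4 sketch of just one of those steps (my own names, nothing canonical, using Lean's Nat for the Peano side and assuming a recent toolchain with the omega tactic): the relation on pairs, and the proof that addition of representatives is well-defined.

    -- (p, n) is read as "p - n"; two pairs name the same integer when p + n' = p' + n
    def sameInt (p n p' n' : Nat) : Prop := p + n' = p' + n

    -- addition of representatives, (p, n) + (q, m) = (p + q, n + m), is well-defined:
    -- the result does not depend on which representatives were chosen
    theorem add_well_defined {p n p' n' q m q' m' : Nat}
        (h1 : sameInt p n p' n') (h2 : sameInt q m q' m') :
        sameInt (p + q) (n + m) (p' + q') (n' + m') := by
      unfold sameInt at h1 h2 ⊢
      omega

Multiplication, the embedding, and everything built on them need the same treatment, which is where the volume of work comes from.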
You have to go through the same process again to define the rationals as equivalence classes of pairs of integers. You can then define real numbers as equivalence classes of Cauchy sequences of rationals. Once you've done that, you have to reprove everything yet again. But, amazingly, you can get all of the way through Calculus with just the Peano axioms. (It takes a lot of work, though.)

I am not greatly interested in automated proofs, but rather in tools to assist humans in developing proofs and studying them: I'm talking more about distributed databases and user interfaces than about anything from AI, and about using strict validation techniques to replace the need for a trust model such as peer review.
In respect of the even/odd thing, I may have misremembered what I was trying to prove: I'd have to dig out my notes from my teenage years. I'm sure the problem revolved around proving a negation, and was intended to be a stepping stone to playing with prime numbers; I suspect it was less to do with the axioms as P1 .. P5, and more to do with the axioms as (P1 .. P5 plus the permitted rules of logic as described by Hofstadter).
Hugo

