I think you're overlooking the real thrust of the situation. Wiles' proof was lengthy and involved, but it can be examined and studied by humans in a reasonable amount of time -- it only took Wiles three days to present his original proof. There aren't many people in the world who can understand it, and fewer still who are knowledgeable enough to confirm its validity, but they exist.
Conversely, the proof described in the NYTimes article is of such length that no single mathematician could confirm its validity -- rather than deducing the non-existence of the object in question by a logical argument, it examined a huge number of possible cases. In that respect, it is far more similar to the proof of the four color map theorem. The issue is not so much whether we trust the computer's result, but what it means for mathematics to proceed with results that we do not, in a traditional sense, understand.
This question came up in some of my more abstract classes in college. A few professors in my department were working on slightly different problems in the same general domain of automated mathematical problem solving and proof construction. The general consensus as I remember it was that the simplest way to get around the problem of no human being able to survey these proofs was to do something like this:
1. Define a machine-readable and -writable logic that can express your theorem and the steps you think it'll take to get there.
2. Write an automated proof-checker that can verify that proofs in this language are correct.
3. Prove the correctness of the checker.
4. Write a program that starts with your axioms and writes out a proof that ends in your theorem.
5. Verify it with the checker.
Now the only proof that needs to be human-surveyable for us to be certain that everything is correct is the one in step 3. The proof created by step 4 can fill up a skyscraper full of hard disks, and as long as the proof checker verifies it we know that it must be correct. Given a simple enough proof language (first-order predicate logic, for example) and a suitable programming language (the choice at the time was Lisp, I believe), step 3's proof is easily short enough for a human to verify.
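To make step 2 concrete, here's a minimal sketch of such a checker, for a toy Hilbert-style propositional logic rather than full first-order logic. Everything here (the tuple encoding, the step format, the function names) is illustrative, not taken from any real system -- the point is only that the checker is small enough to audit by hand, however enormous the proof it checks.

```python
# Formulas are nested tuples: a string is an atom, ("->", a, b) is "a implies b".
# A proof is a list of steps. Each step is either
#   ("axiom", formula)          -- formula must be one of the given axioms, or
#   ("mp", i, j, formula)       -- formula follows by modus ponens from earlier
#                                  lines i (the premise) and j (the implication).

def is_mp(premise, implication, conclusion):
    """Check that `implication` is exactly (premise -> conclusion)."""
    return (isinstance(implication, tuple)
            and len(implication) == 3
            and implication[0] == "->"
            and implication[1] == premise
            and implication[2] == conclusion)

def check_proof(axioms, steps, theorem):
    """Return True iff every step is justified and the final line is `theorem`."""
    lines = []
    for step in steps:
        if step[0] == "axiom":
            formula = step[1]
            if formula not in axioms:
                return False
        elif step[0] == "mp":
            _, i, j, formula = step
            # cited lines must already exist
            if not (0 <= i < len(lines) and 0 <= j < len(lines)):
                return False
            if not is_mp(lines[i], lines[j], formula):
                return False
        else:
            return False  # unknown justification
        lines.append(formula)
    return bool(lines) and lines[-1] == theorem
```

A proof of "q" from the axioms {p, p -> q} is then just three machine-checkable lines:

```python
axioms = {"p", ("->", "p", "q")}
proof = [("axiom", "p"),
         ("axiom", ("->", "p", "q")),
         ("mp", 0, 1, "q")]
check_proof(axioms, proof, "q")  # True
```

Note that `check_proof` is the only thing we need to trust (step 3); the program that *searches* for the proof (step 4) can be arbitrarily complicated and buggy, since its output is independently checked.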
The only hole left is the possibility of a subtle machine malfunction that causes the checker to falsely categorize a proof as correct. On modern hardware this possibility is remote enough that once a proof has been verified a number of times by independent researchers on their own hardware we can safely ignore it.
Reminds me of discussions of methods which purported to prove, not that a particular number was prime, but that there was only a 1 in BigNum probability that it was not prime.
The mathematicians in the audience seemed to consider that unacceptable to the point of being useless. Nonetheless, someone pointed out (to much laughter) that the chances of any given proof being incorrect were significantly higher than that 1 in BigNum. Of course, we've all been there--thought we proved something we didn't.
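The standard example of this kind of argument is the Miller-Rabin primality test: if n is composite, each random round wrongly reports "probably prime" with probability at most 1/4, so k independent rounds drive the error below 4^-k -- the "1 in BigNum" above. A sketch (the small-prime shortcut and the round count are my own choices, not part of the algorithm's definition):

```python
import random

def miller_rabin(n, rounds=40):
    """Probabilistic primality test. A False result is certain;
    a True result is wrong with probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):        # handle small cases deterministically
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)           # modular exponentiation
        if x in (1, n - 1):
            continue               # this round finds nothing suspicious
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a is a witness: n is definitely composite
    return True                    # probably prime
```

With 40 rounds the error bound is 4^-40, roughly 10^-24 -- which, as the joke in the audience had it, is far smaller than the chance that any given hand-written proof contains a mistake.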
I regard the chances of machine malfunction similarly, and have a similar standard for proof. If you can examine the code and the processes sufficiently, there is no reason not to trust the machine. At least, no more reason than there is not to trust your and others' minds. I suspect this view is common, given that most folks consider the Four Color Theorem proven.
Yes, exactly: what if the thing they proved is not what they actually coded in the end? If the code is sufficiently big, there is a good chance it contains bugs.
At least to me, this is a nonissue. If the program can be proven correct (in the sense that its logic is confirmed to produce a correct answer), and the hardware can be shown to properly execute the program, then the program's answer is proven.
Even the already tiny chance of a hardware error can be eliminated for all practical purposes by combining error checking within the program itself with repetition on independent hardware.