> After developing some 64-bit math routines (multiply and divide), I got to
> wondering how I would properly verify them. I realize the best way (let's
> say for the divide routine) would be to divide every possible number by
> every possible divisor and then check to see that the answer is correct.
> That would take a very long time (understatement of the year :-).
>
> Does anybody have suggestions for ways to give math routines a 99.9% test
> that possibly could be performed in a reasonable period of time?

Well, unless your algorithm is particularly complicated, just feeding it a
bunch of random numbers should be good. To avoid correlation effects, I'd
recommend reading data 64 bits at a time from a DES-encrypted file; if that
isn't possible, try to come up with something else that avoids correlation
effects between the generated numbers.

In addition to testing with "purely" random numbers, you might want to do
some testing with numbers that have more "1"s or more "0"s in them, to make
sure you don't have problems in any of those cases. To flush out those
cases, you can generate each 64-bit input value by taking two 64-bit
"random" numbers and either ANDing or ORing them together. This gives you
values which contain predominantly zeroes (AND) or predominantly ones (OR).

Note, BTW, that while random testing can hardly be an exhaustive test, it
can often find problem cases that a human tester might not consider (e.g.
on a Pentium-60, running random divides will hit the FDIV bug once every
few hours). So if your algorithm survives that many random divides (nb: it
would alas take a bit longer than a Pentium-60 doing them) without errors,
it's at least as well tested as Intel's chips. >:*3