On Fri, 2 Dec 2005, James Newtons Massmind wrote:

> Any yet there is not a SINGLE computer language that tracks the actual
> precision of calculations!

If we consider higher-level languages primarily as tools intended to handle lots of otherwise error-prone details for us, then as far as accuracy and precision go, we haven't seen a lot of progress in this area. We are still mostly stuck with the machine-register mentality of numerical quantities, while in other areas high-level programming languages have made substantial advances.

But the subject isn't easy. Programmers are not routinely taught even how to use programming languages in ways that avoid the problems software introduces here. Most don't even realize that such problems exist. A very few might have taken one course in numerical analysis toward the end of their college work. But programmers deal with "real numbers" and accuracy and precision, often without even thinking about what the accuracy or precision is, let alone having good training for dealing with complicated calculations using floating-point math.

It is common for people who don't have a lot of time or patience for this to just retort "well then just use double precision" when you point out flaws in the calculations. That doesn't solve the problem either. Only recently have I started seeing people say "well then use ARBITRARY precision!", thinking that if twenty digits doesn't make the problem go away, maybe ten thousand digits will make me go away.

I desperately wished for a little introductory-programming-class supplementary text that would begin by breaking the hearts of the students, showing them again and again how even simple calculations with real numbers can fail in surprising ways. And then it would provide a concise set of guidelines for reasonable programming methods when using real numbers. Scattered across the literature there are a few papers and books that cover bits of this.
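The heartbreak exercises described above need only be a few lines long. As a sketch (my example, not one from any particular textbook), here are two classics in Python, which uses IEEE 754 double precision for its floats:

```python
import math

# 1. Decimal fractions are not exactly representable in binary.
total = 0.1 + 0.2
print(total)           # 0.30000000000000004, not 0.3
print(total == 0.3)    # False

# 2. Catastrophic cancellation: subtracting nearly equal numbers
#    wipes out the significant digits.  (1 - cos x)/x^2 -> 0.5 as
#    x -> 0, but the naive formula gets every digit wrong:
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2
print(naive)           # 0.0 -- cos(1e-8) rounds to exactly 1.0

# Rearranging to avoid the subtraction fixes it:
# (1 - cos x)/x^2 = 0.5 * (sin(x/2)/(x/2))^2
stable = 0.5 * (math.sin(x / 2) / (x / 2))**2
print(stable)          # ~0.5, the correct limit
```

Note that "use double precision" is already in effect here; more digits would only move the cliff edge, not remove it.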
But I am not aware of anyone who has written a little book like this that really tries to bring together coherent best-practice rules.

Interval mathematics is one attempt at expressing the accuracy of quantities, but I am unaware of any production compiler that supports it. Mathematica does include it. TK! Solver was an early entry into the business of trying to provide math tools that deeply cared about results being correct. We considered embedding it inside instruments as the programming language for the user, and they were interested in the idea. But the company I worked for was having other difficulties at the time, and I don't believe that went anywhere after I left.

The difficulty with interval math is that the bounds usually grow unrealistically quickly; even after a moderate amount of calculation, almost nothing is known about the value of a quantity. My personal feeling is that a uniform distribution, a "square box", doesn't represent most quantities realistically, and that another distribution might hold more promise. At one time I went looking for a "nice enough" probability distribution function, one not too wildly shaped, but in which both sums and products of variables with this distribution would again have this distribution. Normal+normal=normal but normal*normal!=normal, so that won't work. The idea was that if a satisfactory distribution could be found, then I'd create a programming language where ALL real numbers and variables were expressed using these distributions; you would be required to give real values in terms of the parameters of the distribution. I didn't find one. But I didn't convince myself that one does not exist. If one were found, the calculation costs might turn out to be acceptable.

This problem even shows up in simulation classes, where almost all the simulation software provides no direct support for the uncertainty involved in the calculations.
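The bound-growth problem with interval math is easy to demonstrate. Here is a deliberately minimal sketch (my own toy class, not a production library; real implementations must also round the bounds outward at every step):

```python
class Interval:
    """Toy closed interval [lo, hi] with exact-arithmetic bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product's bounds are the extremes of the corner products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

# A value near 1 known to within +/- 0.01 ...
x = Interval(0.99, 1.01)

# ... fed through a modest chain of twenty multiplications:
y = x
for _ in range(20):
    y = y * x
print(y.width())   # ~0.42 -- about twenty times the input width

# The "dependency problem" is worse still: x - x is exactly 0,
# but the arithmetic cannot see that both operands are the same
# variable, so the bounds straddle zero:
d = x - x
print(d.lo, d.hi)  # about -0.02 and +0.02
```

After a realistic amount of computation the bounds say almost nothing, which is exactly the behavior that made me go looking for a better representation than the square box.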
I've repeatedly seen students told, "well, run the simulation ten times and look at the plots to see if you find areas where there is variation." I was almost driven to create a clone of iThink/Stella which would directly support the uncertainty involved in modelling and which would produce all results as "fuzzy plots": rather than spidery thin lines, you would see clouds of points, with the dispersion of the points graphically driving home how much certainty or uncertainty there was in your result.

On a similar subject, a couple of decades ago when I was deeply involved in this, there was some discussion on the net and in the ACM SIGPLAN Notices about bringing units into programming languages. That isn't particularly difficult to do, and we did some work at the time trying it out. Incorporating units into the language and enforcing their usage would almost certainly scrub away one more category of tedious programming errors. We have seen major software failures that resulted from programmers having to keep the units in their heads rather than incorporating them into their source code; the European space launch and the U.S. Mars failure come to mind.

> http://www.sxlist.com/techref/expeval2.asp is my poor attempt to
> do such a thing. Play with it and tell me why programmers don't
> demand that ability in their compilers?

I don't think most programmers want to have to deal with the subject.

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist