On Dec 2, 2005, at 10:28 AM, James Newtons Massmind wrote:

> And yet there is not a SINGLE computer language that tracks the actual
> precision of calculations! E.g. when you multiply 1 byte times another
> 1 byte, the result should be stored in 2 bytes.

Um, a fair number of languages and compilers track assorted arithmetic
exceptions, don't they? Even if only to the extent of printing an error
message and aborting...

> http://www.sxlist.com/techref/expeval2.asp is my poor attempt to do
> such a thing. Play with it and tell me why programmers don't demand
> that ability in their compilers?

Because most programmers treat numeric precision the same way that
network protocol designers treat bandwidth: just provide much more than
is actually needed, and you won't have to worry about it. Thus
"integers" are usually 32 bits these days, and floats 80 bits, although
I can remember days when 32 to 36 bit floats were the norm... And we
can all think of plenty of examples of just how badly this scheme has
worked for strings input from the 'real world.' Sigh.

BillW

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist