All, Norman wrote:

>I was wondering if anyone had got the FP routines that are published
>in the Embedded Control Handbook to work properly. I am having
>extreme hassles when I have to deal with the value "zero".

I have not looked at the specific routines you mentioned, but in general zero is a miserable number to deal with in floating point. In 1990 I wrote an IEEE 10-byte floating point math suite. It's been a long time, so the details are a little sketchy.

First of all, understand that floating point is nothing but an approximation: not every fraction between any two numbers is representable. Because of this you run into problems.

As you subtract .00000001 from .01 over and over (say in a tight loop), you approach zero from the positive side. Eventually the result underflows and your result will be POSITIVE ZERO (+0). This is literally how the number is represented internally. The same thing can happen from the negative side of zero, resulting in NEGATIVE ZERO (-0). (The first sketch in the P.S. below shows this.)

When underflow occurs, one of two things can happen: 1) you get an underflow exception error and processing stops, or 2) if the exception is masked, a PSEUDO ZERO is returned.

(This is where it gets sketchy...) Pseudo zeros break down into two categories: UN-NORMAL (typically intermediate terms that have not been normalized yet, I think) and DE-NORMAL (the result of an operation that is known to be incorrect; verrry sketchy, maybe the result of an operation on two un-normals. I do remember that if you continue to do math with de-normals you get NAN, Not A Number). Un-normals and de-normals can be positive or negative in sign.

Anyway, the key thing here is that unless you are doing some strange math (where you will need to know the level of degradation of your result), these are all considered zero. Trouble is, they all have different representations:

  Un-normals have an exponent of 0x0001 (+un-normal) or 0x8001 (-un-normal).
  De-normals have an exponent of 0x0000 (+de-normal) or 0x8000 (-de-normal).

Probably the greatest problem is created by roundoff and rounding errors. There are several rounding modes: up, down, toward zero, and to nearest. Each of these will affect the result of operations on any two numbers. The problem is that you get 1- or 2-bit rounding errors all the time, no matter what you do. Remember, this is just an approximation guaranteed to a specific number of significant digits. Usually there are 2-3 bits left over that are not part of the "significant-ness" of the number, but they influence partial results.

(Don't try this *exact* example, I'm faking it.) Assume you have a system guaranteed for 8 significant digits. For example, .00000005 - .00000004 should equal .00000001, right? But .00000005 can't be represented exactly in FP, so it's actually .00000005013, and when you subtract them you get .00000001013, which is accurate to 8 digits. Now you subtract .00000001 from that number and you get .00000000013 (which is zero from the standpoint of 8 significant digits), but internally this is being compared to .00000000000 and they are not equal.

This is why you generally never say:

  if( float1 == float2 ) ...

You normally say:

  if( abs(float1 - float2) < theta ) ...

where theta is generally the smallest representable difference for your format (or some other "magic" tolerance value). (The second sketch in the P.S. shows this idiom.)

Needless to say, zero is a particularly nasty number in floating point.

Sorry this is so long winded. Hope it helps.

Michael J. Schreck
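
P.S. In case it helps, here is a quick C sketch of the underflow-to-signed-zero behavior described above. This is modern desktop C, not the Handbook's PIC routines, and the starting values are arbitrary; I also use repeated division instead of the subtraction loop so the example runs quickly and actually lands on zero rather than overshooting past it:

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      /* Approach zero from the positive side until the value underflows. */
      /* On an IEEE machine the loop ends on POSITIVE ZERO (+0).          */
      double pos = 0.01;
      while (pos > 0.0)
          pos /= 1e10;

      /* The same from the negative side ends on NEGATIVE ZERO (-0).      */
      double neg = -0.01;
      while (neg < 0.0)
          neg /= 1e10;

      /* +0 and -0 compare equal, but the sign bits differ internally.    */
      printf("pos = %g  neg = %g  (pos == neg) = %d\n", pos, neg, pos == neg);
      printf("signbit: pos = %d  neg = %d\n",
             signbit(pos) ? 1 : 0, signbit(neg) ? 1 : 0);
      return 0;
  }

This prints pos = 0 and neg = -0, and the two still compare equal, which is exactly why you have to be careful about poking at the representation directly.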
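
And here is a sketch of the tolerance-comparison idiom from the end of the post. The helper name nearly_equal and the choice of theta (a few machine epsilons, scaled to the size of the operands) are my own; use whatever "magic" value fits your application. Link with -lm on most Unix C compilers:

  #include <stdio.h>
  #include <math.h>
  #include <float.h>

  /* "Equal within tolerance" instead of bit-for-bit equality. Scaling   */
  /* theta by the operands keeps the test meaningful at any magnitude.   */
  static int nearly_equal(double a, double b)
  {
      double theta = 4.0 * DBL_EPSILON * fmax(fabs(a), fabs(b));
      return fabs(a - b) <= theta;
  }

  int main(void)
  {
      double a = 0.00000005 - 0.00000004;   /* ideally .00000001 */
      double b = 0.00000001;

      /* The subtraction picks up a rounding error of a couple of bits,  */
      /* so the exact test fails even though the values are "equal".     */
      printf("a == b           : %d\n", a == b);              /* prints 0 */
      printf("nearly_equal(a,b): %d\n", nearly_equal(a, b));  /* prints 1 */
      return 0;
  }

Same story as the faked example above: the result is good to all the digits you asked for, but a bit-for-bit comparison still says the two numbers are different.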