Isaac Marino Bavaresco wrote:
> I may be wrong, but the C standard states that operands must be promoted
> "at least" to int, or to the type of the operand with highest precision.
>
> Also, the OP could use:
>
> #define S2_MIN 90u
> #define S2_MAX 125u
> #define S2_STEP 5u

I think there may be a more subtle issue going on here. The expression
(S2_MAX + S2_STEP) is evaluated at compile time, not run time, as we can
see from the generated code.

My memory's a little hazy on this point, but doesn't the C standard state
that compile-time operations are to be done with the largest available size
of each type; i.e., longs for integers and doubles for floating-point?

It looks like the compiler in this case assumed that a compile-time constant
calculation was to be done with signed char, which it then sign-extended to
16 bits for the run-time comparison. At the very least, this violates the
principle of "least surprise".

--
Dave Tweed
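
P.S. For anyone who wants to poke at this on a desktop compiler, here's a
small test of what I mean. The macro values are Isaac's, minus the 'u'
suffixes; the wrap to -126 is what I'd expect on a two's-complement
machine, though strictly speaking converting 130 to signed char is
implementation-defined:

#include <stdio.h>

#define S2_MAX  125
#define S2_STEP 5

int main(void)
{
    signed char folded;

    /* Per the standard, the decimal constants already have type int,
       so the constant expression has type int and value 130. */
    printf("sizeof(S2_MAX + S2_STEP) = %u\n",
           (unsigned)sizeof(S2_MAX + S2_STEP));               /* sizeof(int) */
    printf("S2_MAX + S2_STEP = %d\n", S2_MAX + S2_STEP);      /* 130 */

    /* What the compiler in question appears to have done instead:
       fold the sum as signed char (130 wraps to -126), then
       sign-extend that for the 16-bit run-time comparison. */
    folded = (signed char)(S2_MAX + S2_STEP);
    printf("folded as signed char = %d\n", (int)folded);      /* -126 */

    return 0;
}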
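
P.P.S. And Isaac's suggested fix, in a minimal compilable form. The
variable 's2', its starting value, and the wrap-around test are my
invention, since I don't have the OP's original loop in front of me:

#include <stdio.h>

#define S2_MIN  90u
#define S2_MAX  125u
#define S2_STEP 5u

int main(void)
{
    unsigned char s2 = 125;

    /* With the 'u' suffixes, (S2_MAX + S2_STEP) is the unsigned int
       constant 130u, so there is no signed-char folding or sign
       extension for the compiler to get wrong. */
    if (s2 >= (S2_MAX + S2_STEP))
        s2 = S2_MIN;                 /* wrap back to the bottom of the range */

    printf("s2 = %u\n", (unsigned)s2);
    return 0;
}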