On 25 January 2012 14:22, Dave Tweed wrote:
> My memory's a little hazy on this point, but doesn't the C standard
> state that compile-time operations are to be done with the largest
> available size of each type; i.e., longs for integers and doubles for
> floating-point?

The 'int' type should be used, yes.

> It looks like the compiler in this case assumed that a compile-time
> constant calculation was to be done with signed char, which it then
> sign-extended to 16 bits for the run-time comparison. At the very
> least, this violates the principle of "least surprise".

I think in C18 the default is 'char' -- perhaps to save program memory
and execution time. If integer promotion is enabled, I would expect it
to follow the C89 rules (I have not checked that). I vaguely remember
discussing this kind of issue on this list a few years back.

Tamas

--
int main() { char *a,*s,*q; printf(s="int main() { char *a,*s,*q; printf(s=%s%s%s, q=%s%s%s%s,s,q,q,a=%s%s%s%s,q,q,q,a,a,q); }", q="\"",s,q,q,a="\\",q,q,q,a,a,q); }
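[A minimal sketch of the surprise discussed in this thread, assuming a
non-conforming compiler that evaluates a small constant expression in
signed char; the variable names are made up for illustration. ISO C
(C89 onward) gives the constant expression type int:]

#include <stdio.h>

int main(void)
{
    int reading = 200;

    /* ISO C: (100 + 100) is evaluated in int, so the comparison is
       200 == 200 and succeeds.  A compiler that instead evaluated
       the constant in signed char would typically wrap 200 to -56,
       sign-extend that to 16 bits (0xFFC8) for the run-time
       comparison, and take the "not equal" branch -- the violation
       of "least surprise" described above. */
    if (reading == (100 + 100))
        printf("equal, as ISO C requires\n");
    else
        printf("not equal: constant evaluated in signed char\n");

    return 0;
}

[On a conforming compiler this prints "equal, as ISO C requires"; the
complaint in the thread is about a compiler where the else branch
fires instead.]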