On Sun, 4 Dec 2005, Gerhard Fiedler wrote:

> sergio masci wrote:
>
> >>> byte_var = byte_var + word_var   // see (2)
> >>> (2) ... add operation performed as two bytes
> >>
> >> This is surprising, and may lead to a wrong result (for signed variables).
> >> IMO this should at least trigger a compiler warning.
>
> I'd like to get your feedback on this. Is there something I was missing?
> Assuming all three being signed, at first I thought that something like
> (-127 + 254) would give a wrong result, but it doesn't: (0x81 + 0xfe) =
> 0x7f = 127. Does this work for all cases where the result fits in a signed
> byte? Seems so... and is a bit counterintuitive for me.

Hi Gerhard,

I'm not sure exactly what you mean. If you are talking about 254 as being
the low byte of a signed word (16 bits), then you can treat it as an
"unsigned byte" because the high byte is zero, so you are effectively
adding a "signed byte" to an "unsigned byte".

If you add a "signed byte" to a "signed byte" and treat the result as
signed, there is a set of result values that will cause overflow and cannot
be stored in a "signed byte". The same is true for "signed byte" +
"unsigned byte" -> "signed byte", but the set that causes overflow will be
different. Likewise:

  "signed byte" + "unsigned byte" -> "unsigned byte"  (again a different overflow set)
  "unsigned byte" + "unsigned byte" -> "unsigned byte" (again a different overflow set)
  "unsigned byte" + "unsigned byte" -> "signed byte"  (again a different overflow set)

e.g.
"unsigned byte" + "unsigned byte" -> "unsigned byte"

  255 (0xff) +   1        ->  256 (0x100)  overflow
  255 (0xff) + 255 (0xff) ->  510 (0x1fe)  overflow
  127 (0x7f) +   1        ->  128 (0x80)   no overflow
  127 (0x7f) + 129 (0x81) ->  256 (0x100)  overflow

"signed byte" + "signed byte" -> "signed byte"

   -1 (0xff) +    1        ->    0         no overflow
   -1 (0xff) +   -1 (0xff) ->   -2 (0xfe)  no overflow
  127 (0x7f) +    1        -> -128 (0x80)  overflow
  127 (0x7f) + -127 (0x81) ->    0         no overflow

"signed byte" + "unsigned byte" -> "signed byte"

   -1 (0xff) +   1        ->    0          no overflow
   -1 (0xff) + 255 (0xff) ->  254 (0xfe)   overflow
  127 (0x7f) +   1        -> -128 (0x80)   overflow
  127 (0x7f) + 129 (0x81) ->  256 (0x100)  overflow

"signed byte" + "unsigned byte" -> "unsigned byte"

   -1 (0xff) +   1        ->    0          no overflow
   -1 (0xff) + 255 (0xff) ->  254 (0xfe)   no overflow
  127 (0x7f) +   1        ->  128 (0x80)   no overflow
  127 (0x7f) + 129 (0x81) ->  256 (0x100)  overflow

> > I got carried away in my enthusiasm and forgot about the promotion to
> > "int" in the conditional in C. But what I wrote also holds true for ints
> > and longs in XCSB whereas it does not in C
>
> Yes, I thought about this, but since I didn't know whether you wanted to go
> there, I didn't include that in my reply.
>
> >   if unsigned_int_var > signed_int_var then
> >
> >      always true if unsigned_int_var > 32767
> >      always false if signed_int_var < 0
> >      otherwise true or false as expected
> >   endif
>
> I think C treats the signed variable as unsigned, without changing its
> bits, so to speak. Which of course is not good, and may give an unexpected
> result. I tend to avoid that, unless I can guarantee the sign of the signed
> variable :) (And it definitely should trigger a warning. I'll have to try
> to see whether it does.)

Your solution definitely makes more sense. I seem to recall that many years
ago there was some "strangeness" associated with comparing signed and
unsigned ints.
IIRC one (possibly more) compiler I used did either a signed or an unsigned
comparison based on the left-hand side of the relational operator (e.g.
A == B was treated as an unsigned compare if A was declared as unsigned, or
as a signed compare if A was declared as signed). For many years now,
though, the compilers I've used have always flagged an error if I try to
compare a signed and an unsigned int, and I've been forced to coerce one to
match the other.

> > and in any case, although the byte version has the same net effect under
> > both C and XCSB, the XCSB version does not need to generate zero and sign
> > extended ints prior to the comparison :-P
>
> Since I have never worked with your compiler, I can't really say much. (I'd
> like to -- you have some interesting ideas --, but you know how that is...
> you have something that works, so you go with it :)

Tell me about it :)

> But at least the
> HiTech PICC compilers seem to do a good job of optimizing those operations,
> so that unnecessary promotions to int don't get executed.

Yes, this does seem to be an excellent compiler, so I'm usually quite happy
to see my compiler generating code which is on a par with it (sometimes
even slightly better :)

> It's also
> sometimes effective (code-wise) to give the compiler hints in the form of
> type casts about the expected size of intermediate results.

Oh... type casts, horrible nasty vicious things :-)

Yes, I understand what you're saying, and I've done this many times myself,
but it's so incredibly dangerous. What you are essentially saying to the
compiler is "stop doing what you know is right and will work - do what I
tell you no matter how stupid it may be" (this is not intended to imply
that the programmer is in any way stupid :-)

The problem with a type cast is that it prevents the compiler from coping
with changes you make to your code, and from telling you about them; and
without extensive retesting you cannot be sure ANY changes you make will
have the effect you intended. e.g.
original source:

    byte A, B

    // lots of statements...

    // type cast buried deep in your code somewhere
    if ((byte)A == (byte)B)
        // do something

modified source:

    int A, B

    // lots of statements...

    // type cast buried deep in your code somewhere - now silently truncating
    if ((byte)A == (byte)B)
        // do something

You might be able to get around this potential problem by adding a
compile-time check. Note that sizeof cannot be evaluated by the C
preprocessor, so an #if / #error test won't work; a negative-array-size
trick will, though:

    // compilation fails here if A or B is no longer a single byte
    typedef char unsafe_hint_detected[
        (sizeof(A) == sizeof(B) && sizeof(A) == 1) ? 1 : -1];

Regards
Sergio Masci

http://www.xcprod.com/titan/XCSB - optimising PIC compiler
FREE for personal non-commercial use

-- 
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist