On Wed, 18 Feb 2009, Gerhard Fiedler wrote:

> sergio masci wrote:
>
> >> In the embedded world the additional cycles could be considered
> >> expensive, and a C programmer 'who knows his stuff' may well be
> >> peeved that the compiler is out-thinking him, and introducing
> >> issues. In fact, a 'well versed' programmer (for better or worse)
> >> may in fact take advantage of this in some other code, and then
> >> become perplexed when compiled with XCSB.
> >
> > Maybe you're under the misconception that the XCSB compiler is a C
> > compiler?
>
> I don't know for sure, but to me it didn't sound like this. While this
> is a useful idea, it does introduce extra cycles that are wasted in all
> cases where you want to compare signed and unsigned ints and know that
> they are not outside the positive range of a signed int. Which is a
> surprisingly frequent case.

If you really KNOW this and WANT to take advantage of it then XCSB won't
stand in your way. It will let you cast either operand to do exactly what
you want.

> > The whole point of not just jumping on the bandwagon and writing
> > another 'C' compiler was to have the freedom to change things for
> > the better.
>
> Right. Better in some cases, worse in others (the introduced run-time
> overhead). I think it was Walter Banks here who said that he can create
> an equivalent C program for any assembly program, with not more
> instructions than the assembly program.
>
> > Allowing programmers to compare signed and unsigned values correctly
> > is a way of eliminating some obscure bugs.
>
> The warning I mentioned before almost equally eliminates these bugs. Of
> course, if the values go out of the range that's adequate for the cast,
> then the comparison is wrong. However, this goes for most integer
> operations: add, subtract, multiply.

No it doesn't. The result of these operations can "overflow" whereas the
result of a comparison cannot.
And yes, XCSB is consistent between integer operations such as add, sub,
mult AND compare. Unlike 'C', which tends to just evaluate expressions,
XCSB is driven by what you intend to do with the result. In XCSB if I
add 2 8 bit numbers and store the result in a third 8 bit variable then
the compiler knows that I don't care about overflow and only performs an
8 bit addition. If on the other hand I add 2 8 bit numbers and store the
result in a 16 bit variable then the compiler knows that I do care about
overflow and performs an optimised 16 bit addition.

> Do you do extra range checking here, too? I suppose not... because of
> the run-time overhead.

No, I don't do extra checking here because the value of doing so is much
less for the programmer, but I do generate code which is consistent with
how the result will be used.

> So in a way, the C behavior of integer comparison is a result of
> consistency with other integer comparisons.

'C' behavior just seems to be: evaluate an expression then do something
with the result. Comparison is just based on whether the result is true
or false. Often a good optimiser will make the generated code look like
'C' is designed to do something intelligent in the case of a comparison,
but it is not.

> You abandoned a bit of consistency for a bit of usefulness in certain
> cases. It's a different trade-off, but arguments can be made for
> either side.

No, it just looks like that from your point of view because you are not
aware of all the facts.

> >> One of the benefits of C is the low-level of the language, and the
> >> fact that it does not make these sorts of decisions on behalf of
> >> the programmer.
> >
> > This is just a side effect of the implementation of the compiler,
> > not the original design goal.
>
> Are you sure? I think that keeping C very low-level, sort of a portable
> and better readable assembler, was an original design goal.

I'm pretty sure.
Don't forget 'C' only came about because its predecessor ('B') was too
slow (being an interpreted language). You can't get much further from a
portable assembler than an interpreter. Also, if your goal really were
to produce a portable assembler you would not be using a dynamic stack
to pass arguments between functions, since there is an overhead in
setting these up and using them within the called function.

> > Look at recent posts concerning the implementation of the right
> > shift operator ">>". It can either preserve the sign or not
> > depending on how easy it is for the compiler to generate the
> > specific code.
>
> Exactly... IMO this fits the original design goal of a very low-level
> language.

I really cannot accept this. The greatest justification used by almost
every 'C' programmer is: "portability". If the right shift operator
">>" is implementation dependent for a signed int then the behaviour of
a program may or may not be the same if I compile a 'C' program using
different 'C' compilers, let alone different target machines. Where
does that leave portability?

No, I contend that 'C' is different things to different people :-)

> > If you compare a signed or unsigned int to a float in 'C' the
> > compiler will produce code which will give the mathematical result
> > you expect. Why should this be any different when you compare signed
> > and unsigned ints?
>
> As I wrote above, there are two issues involved. One is consistency
> with other integer operations (you of course say that your
> implementation maintains consistency with other comparison
> operations), and the other is the low-level nature: a C comparison is
> typically a single assembler instruction.

Actually it is typically at least 2 assembler instructions :-)

Friendly Regards
Sergio

-- 
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist