sergio masci wrote:

> I chose sin because to a great many people it is a black box which
> they can relate to, rather than some complex function that I'd invent
> as an example.
>
> Yes, the compiler writer could allow the programmer to attach a ton
> of attributes to a library function to help it understand the
> function better, but that's not the point. The point is to make it
> simpler for a programmer to write good code that is easy to maintain
> - to allow the compiler to extract the necessary information from the
> function itself without the programmer needing to add extra
> attributes, which, like comments, might actually be at odds with the
> code and be wrong (yet something else to debug).

I was talking about standard libraries, where the programmer doesn't
have to attach anything. The difference between libraries in general
and standard libraries is that a standard library must conform to a
standard - no more and no less than a compiler must. Of course such a
standard generally doesn't define how a function is implemented, but it
does define what it does.

> Let's try something a little bit more interesting:
> [...]
> main()
> {
>     ...
>     delete_item(pos, &len, arr);
>     insert_item(pos, val, &len, arr);
>     ...
> }
>
> Now how would you give the compiler enough information in the
> attributes of the functions to be able to automatically optimise the
> above into the equivalent of:
>
>     replace_item(pos, &len, arr);

This function should have been part of the interface in the first
place. That is the tricky part of defining a good library interface:
providing the operations that make sense.

> A far more concrete example would be that of having multitasking
> built into the language rather than as a bolt-on library. XCSB knows
> about multitasking and generates optimised code around it. You don't
> need to generate several stacks and functions that need to be
> re-entrant just in case they are being used in several tasks.
> Don't you think that's a benefit?

Of course. C, for example, lacks any multitasking specification; it
wasn't relevant at the time the C spec was created, because
multitasking was handled exclusively by the operating system. Now it's
common for applications to do multithreading internally, and this
exposes the lack of a suitable standardization.

But besides this, I think that what you are talking about is an
optimization. The compiler should have enough knowledge to be able to
do it, and the multitasking features should be well specified, but I
still fail to see the big difference between specifying this as a
standard library and specifying it as part of the compiler.

> I mean the fact that you can get tight efficient multitasking code
> written in a HLL to work on something as small as a 16F628?

People do it all the time. It is of course a feat to have this built
into the compiler. The real feat, however, is to design and
standardize this in a way that is efficient and suitable for
everything from an embedded application running on "bare metal" in a
16F628, to an embedded application running on a stripped-down Linux on
an ARM, to a desktop application running on Windows XP on a multicore
processor.

> This was a trivial example, but consider a much more real case where
> the above code would actually exist in a library as part of the
> 'init' sequence of a module. Ideally the programmer would be
> insulated from having to build large complex static tables within
> his/her main line. This should all be taken care of by the library
> writer / maintainer.
>
> You must have come across some horrible libraries yourself where
> several things need to be declared as "#defines" before the
> corresponding header of the library is included into your main line,
> for the sake of efficiency.

Of course. But this is just a side effect of a limitation of C, not of
the fact that this was in a library rather than in the compiler.
Much of this ugly preprocessor code you're referring to is due to the
fact that the C language doesn't specify much (if any?) compile-time
evaluation. To get it, people have to resort to the preprocessor.
Complex compile-time evaluation could and should be part of a good
language definition; it would help a lot, especially on small systems.
But IMO this hasn't much to do with whether these calculations are
defined in library code or in something that's supported by the
compiler directly.

> If you replaced the operators '=' and '*' with 'assign' and 'mult'
> such that the above was re-written as
>
>     assign(&x, mult(j, 2));
>
> how would you give the compiler enough information on the above
> functions to be able to do the same level of optimisation as before?

If the semantics of assign and mult are clearly defined by a standard,
it could do the same optimizations.

> And forgetting about optimisation completely, what about promoting
> integers to floats? e.g.
>
>     float x;
>     int y, z;
>
>     x = y / z;
>
> here a decent language / compiler could see that the result needs to
> be a float, promote y and z to floats, and perform a floating point
> division.

This may be what the programmer wants, or not. The compiler doesn't
know whether I want y/z to be an integer division, a float division or
a double division. Chances are that if I had written this, it wouldn't
be what I want. But I don't write such code; I tend to use explicit
casts to make sure that both the compiler and the human reader of the
code know what the code is supposed to do. If I could, I'd disable
automatic promotions in C and C++, so that I don't forget any explicit
cast. Where compilers have warnings for this, I enable them.

> How would you do that with library functions (even allowing for
> overloading)?

This is just a matter of how you define automatic promotions.
But I'm not a huge fan of automatic promotions; it's just not that easy
to get them right (I mean to get the rules right so that they make
sense and don't create more confusion than they resolve), and the work
involved for the programmer is IMO greater with automatic promotions
(you always need to be on the lookout for the cases where they don't
work as needed) than with manual type casts.

In the case above, if I wanted to assign the integer division result of
y/z to the float x, I'd have to remember to use a temporary
intermediate variable, so that the compiler actually does what I want:

    float x;
    int y, z;
    int temp;        // Only needed to "trick" the compiler
    temp = y / z;    // into doing what I want.
    x = temp;

I don't like this any more than I like the C-style automatic
promotions. I'm a declared fan of explicit type casts.

> How would the compiler do the promotion here?
>
>     assign(&x, div(y, z));

That depends on how the language standard defines them and on how
assign() is declared. I still fail to see what the difference is
between an operator '=' and a function assign(). (In fact, in C++ there
isn't even any, built-in or library.)

Of course, I agree that there probably are some optimizations that can
be done when the code generation is in charge of everything. But it
seems that so far you couldn't bring a good example where the
difference wasn't in something else. The first examples were about
compile-time evaluations, which are not part of the C standard but
could be part of a similar language standard. The latter examples were
about function calls where the functions are "black boxes" to the
compiler, ignoring the fact that for /standard/ library functions, the
compiler knows exactly what they do (or are supposed to do).

Gerhard

-- 
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist