> This is a simple question and I doubt I'll get a simple answer but here goes..
> I hope I phrase things the right way here.
>
> It's a given that tight ASM code is always the most efficient, ie few if any
> wasted instructions.

This is not always true. Simply having a smaller number of instructions does not necessarily make the section run faster. Consider:

    for (j=0; j<5; j++)
    {
        a++;
    }

and

    a++;
    a++;
    a++;
    a++;
    a++;

> If I write and compile a program in a higher level language such as C or
> PicBasic how will it compare to the ASM version? I'd assume the C version
> should be better than the PB version with the amount of benefit related to
> how good the compiler is.

I would agree that it is totally dependent on how good the compiler is, but I would not agree that a C compiler would necessarily generate better code than a BASIC compiler.

> I've not done enough (or tried to do enough) ASM lately to be able to tell
> and I have no C compiler to play with. I'd assume other than a critical
> timing situation, or perhaps something involving interrupts, the BASIC
> version wouldn't be too bad.

Even with critical timing situations and interrupt handling, a good BASIC compiler should produce respectable code that does the job just as well as the equivalent C compiler. Unfortunately, in this cost-sensitive industry users spec MCUs that can just about manage the task required of them, so there tends to be very little tolerance for inefficiency. Bottom line - a loss of only a few percent in performance gives the impression that code generated by a compiler can't hack it whereas code hand-crafted by a skilled assembler programmer can.

> I don't know how far off the mark I am but I'd like to get an idea of what
> situations require or need ASM. Sort of along the lines of: if ASM is 100%
> then C would be 90%, ie 10% wasted or inefficient code, and say BASIC might
> be 75%.

In my experience a good compiler will produce code that will knock the crap out of code produced by an average assembler programmer, so perhaps we should redefine your 100%.

Let us say that, for a single high-level statement, 100% is the equivalent code produced by a highly skilled expert assembler programmer, code that cannot be further optimised.

On that basis I have seen many compilers (not necessarily for the PIC) that routinely produce 100% efficient code (say about 90% of the time), with the efficiency dropping to about 80% for less common, strange C or BASIC source. It tends to drop in steps of about 5%. This is mainly because of the way high-level source statements map into small sequences of assembler instructions. So a C statement that maps to 5 assembler instructions is 80% as efficient as the equivalent 4 instructions produced by the highly skilled expert assembler programmer. You can see that there is not really much give here for the poor compiler, even though it is only out by one instruction. From this you should conclude that most good compilers should produce code that is optimal to better than 95%.

So how come everyone still claims 90% tops? The answer is that the highly skilled expert assembler programmer tends to optimise large sequences of instructions in combination. A good compiler will also optimise sequences of instructions, but the assembler programmer knows the intent of his code whereas the compiler is trying to work it out.
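To make that concrete, here is a rough PIC16-style sketch (a, b and c are hypothetical file registers, and this is not the output of any particular compiler). A compiler translating the two statements a = b; c = b; one at a time has to reload W for the second statement, because taken in isolation it cannot assume W still holds b:

    ; statement-by-statement translation
    movf    b,w         ; W = b
    movwf   a           ; a = W
    movf    b,w         ; W = b again (redundant if W is known to still hold b)
    movwf   c           ; c = W

The expert assembler programmer, or a compiler that optimises across the whole sequence, sees that W already holds b and drops the reload:

    ; hand-optimised equivalent
    movf    b,w         ; W = b
    movwf   a           ; a = W
    movwf   c           ; c = W

Three instructions against four is the same kind of single-instruction gap described above, yet on this fragment it already costs 25%, which is why the percentages move in such coarse steps.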
The assembler programmer knows properties about the problem that the C or BASIC programmer cannot communicate to their compiler, so the compiler has to produce safe code in all cases; it cannot take unsafe short cuts.

You can get around this problem by using higher-level languages or by mixing your high-level language with a lot of assembler.

The XCASM assembler actually allows you to enter high-level statements and it generates optimised code for these in situ (often to 100%). It's kind of like using intelligent macros within your assembler source code. The great thing about this is that it doesn't interfere with the overall optimisation unless you need to maintain W and FSR intact for a long sequence of assembler. But even then the use of FSR is predictable and restricted to pointer dereferencing (yep, XCASM knows about pointers), and W is such a scarce resource that it would be very difficult even for a highly skilled expert assembler programmer to reserve its exclusive use for a long sequence of instructions.

XCASM will convert

    A = A | 4

to

    bsf     A,2         ; set bit 2 of A (bit value 4)

and

    A[j+3-1] = A[j+3-1] + 1

to

    movf    j,w         ; W = j
    addlw   A+2         ; W = address of A[j+2]
    movwf   FSR         ; point FSR at it
    incf    INDF        ; increment A[j+2] indirectly

If you're interested you might like to look at the Hi-Tech C compiler, the SDCC (Scott's) C compiler and my own XCSB structured BASIC compiler. XCSB does actually optimise single high-level statements very well. It uses XCASM for the back end, so mixed-mode (ASM/BASIC) programming should kick butt. XCSB and XCASM both know about pointer arithmetic and early-out operators (just like the C logical AND and OR operators). The LITE version of XCSB is free of charge for personal non-commercial use.

Regards
Sergio Masci