On Thu, 11 Aug 2011, jim@jpes.com wrote:

> Issac,
>
> The point you make about HLLs for PICs being faster to develop
> software with is valid. At least to a point, in my opinion. Many
> HLLs, C in particular, use many assembly-like statements to
> initialize some hardware aspects, such as ports or A-to-Ds. And if
> HLLs are what you want to program in, that is all well and good.
>
> I, personally, prefer to remain with assembly. Not because I can't
> program in C. I can, and rather handily I might add. I can also
> program in BASIC. I have even touched on JAL and PASCAL. However, I
> just don't think the PICs, at least the 16 series, gain enough of an
> edge from HLLs to justify their cost or complexity. Assembly is just
> fine. I have written enough assembly over the years that I have a
> large library of routines and algorithms to draw from. Plus, on the
> PICLIST website, there are many, many routines and algorithms
> contributed by users. So the need for HLLs is, in my opinion, a
> false need.

Actually, when you have something as resource limited as a 16F628, it makes sense to use a high-powered tool to squeeze every last little bit of performance out of it. Yes, you can move to a bigger MCU like the 16F648, but size isn't always the limiting resource. What about speed? What about cutting the execution time of your code in half instead of doubling the clock? If you could stick with a small, slow MCU, wouldn't that (often) translate to a cheaper and lower-power (energy) product?

I think the main problem with the ASM vs HLL debate is the view that HLL equals C, which equals terrible optimisation compared with hand-crafted assembler written by an expert assembly programmer. But that really isn't about HLLs at all. It's actually about C, and about specific implementations of C compilers for a particular CPU.

If you write a lot of assembler, sooner or later you will end up with a sizable library of macros.
Some of these macros will be similar to other macros but differ in order to achieve better optimisation under some special conditions. For example, you might have two macros, ADD16 and ADD16_I, where the first adds a 16-bit variable to another 16-bit variable and the second adds a 16-bit constant to a 16-bit variable. As time goes on you might expand on this and have, say, ADD16_H, which adds an 8-bit variable to a 16-bit variable (an optimisation to help you fit 2 x 8-bit variables where previously you only had room for 1 x 16-bit variable). Then the inevitable happens and you add the "immediate" counterpart to reduce the amount of program space being used (say ADD16_HI, to add an 8-bit constant to a 16-bit variable).

Your code gets denser and denser, and in the end you have your own personal HLL implemented as macros. Now you can lovingly hand-craft your assembly program so that it looks beautiful to you, but the assembler is pretty dumb and just blindly accepts what you've written without offering the slightest help. You might use your ADD16 macro incorrectly, say adding a 16-bit variable to an 8-bit variable (corrupting something in memory next to the 8-bit variable). One of the things HLLs do (when coupled with a decent compiler) is keep track of the way you are using things in your code. They can optimise for things like adding an 8-bit variable to a 16-bit variable (without promoting the 8-bit variable to a 16-bit variable first), adding a constant to a variable, and even special cases such as adding 1 to a variable.

Then we get the less obvious optimisations, such as: what if I only ever call subroutine X with a constant parameter? Should I then create a new version of X with the constant embedded in it? What if I only ever use subroutine X once in my code? Should I incur the overhead of setting up the parameter and calling and returning from the subroutine?
What if you actually call subroutine X in many places in your code, but those places are impossible to reach under certain build conditions? Do you then need multiple versions of X: one with embedded constants, one used as a single inline macro, one used as a regular subroutine? What happens when things really start getting complicated and you have nested macros and subroutines passing parameters and results around?

Then you have all the headache of managing your RAM pages and CODE banks: lots of stuff growing and shrinking as you optimise this and that. In fact, some optimisations come about purely by rearranging where variables and code sit in memory. What better way to try hundreds of thousands of permutations than to do it with a computer?

Many people seem to want to use C because it produces very predictable code, yet they complain when it doesn't produce the super-sneaky optimisations a veteran assembly programmer can. If you want super sneaky, you need to be able to tell the compiler a lot more about your intentions. In fact, this becomes obvious when you look at some of the optimisations C programmers try to achieve using the C preprocessor. Some compilers will analyse your code more deeply and generate better optimisations, but there is only so much you can do with C.

I have tried to explain time and again that good optimisations come about when the compiler is able to understand the intentions of the programmer. A "for" loop in C is a good example of this. It tells the compiler that the code within the loop is to be repeated. It also tells the compiler (in many cases) what the control variable is and what its range is. The compiler can then check the use of the control variable within the loop and produce some very good optimisations based on this. If the cheap and nasty C compiler you are using doesn't do this, you should not conclude that all C compilers don't.
There are some things that C does which are not suited to small embedded systems (like having re-entrant code and passing parameters on a dynamic runtime stack). But that doesn't translate to "HLL is bad for this application", just that some aspects of C are. If you circumvent these aspects, C becomes much friendlier to small embedded systems and allows much greater optimisation.

In conclusion, a good HLL and compiler can help you produce a highly efficient product using a slow, cheap MCU. This was the main reason I produced XCSB as opposed to another C compiler.

Regards
Sergio Masci

-- 
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist