On Sat, Feb 18, 2006 at 03:36:09PM -0500, David VanHorn wrote:
> :) I'm surprised that the responses have been so sparse. Perhaps
> everyone's away from their keyboards.
>
> I do all my projects in assembler. I am learning C, but I am reluctant
> to use it in any real project, because I see that it adds a couple of
> layers of problems to the mix. I don't have to worry about compiler
> bugs, or what the compiler will make out of my code. I don't have to
> worry that changing one statement will cause some "ripple effect" and
> make changes elsewhere because the optimization now works out
> differently.

I use SDCC, and I will say I have run into a number of compiler bugs. It
sucks, a hell of a lot, and you have to know assembly to convince
yourself that it really is the compiler's fault. Usually I'll find some
behavior that I think is impossible, so I'll read the intermediate .asm
files the compiler generates and see what things look like. Usually
everything looks right and it's some subtle bug somewhere else, but
sometimes you see really weird stuff that's probably a compiler bug.
Recode to get around it. This has only happened when I've done really
weird stuff, like using piles of gotos to make a state machine...

That said, I use C for everything I possibly can. Most of the stuff I
make isn't speed critical, and I don't make many copies of it, so the
extra money for a big, fast 18F chip versus a carefully optimized 12F
asm implementation is *well* worth it. Life's too short to write asm.

I'm actually investigating other types of microprocessors right now,
like 8051s and the Atmel chips. I'd like to find something better
supported by the free compilers I can get, for more complex projects.

> Frankly, I'm surprised at the relatively primitive level of the
> compilers I've seen. Optimization seems pretty poorly done.

I agree 100%! Man, the stuff I've seen... And I've yet to figure out how
to do inline functions in SDCC; it's not in the documentation anywhere,
and the included libraries never use the inline keyword...

> Part of it I'm sure, is how I was brought up, but I think the exposure
> to the machine's internals is very important. "Buy a bigger chip"
> isn't always an option, and sometimes you really need all the
> performance you can get, and a compiler isn't going to do your
> thinking for you here.. You have to know what's the fastest way to get
> something done on THIS machine.

Well, even with my attitude of "what's the biggest chip I can fit?" I'm
very glad I've done plenty of asm programming too. It's nice to know
that my lazy use of floating point in some code is really slow, so I
know to look there first if I need to speed something up.

Another thing I find helps is to prototype stuff on PCs first. I like to
use a graphical programming language called Pure Data to make mockups of
the behavior of my projects. It's really easy to write complex things in
it, so I get an idea of whether I want to build the project at all. One
day I might even go as far as writing my algorithms on the PC first;
I've never needed to yet, but it sounds like a good idea to me. Linux
can do hard-realtime stuff with the RTLinux extensions, so I can see
myself building things with commercial I/O cards plugged into my
circuitry, with all my code running on the PC.

Premature optimization is the root of all evil...

-- 
pete@petertodd.ca
http://www.petertodd.ca
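
P.S. For anyone wondering what I mean by a goto-based state machine,
here's the rough shape of it. This is a toy, made-up sketch (it just
counts words in a string), not the actual code that tripped up SDCC:

    #include <stdio.h>

    /* Toy goto-based state machine with two states: in_space and
       in_word.  Each label is a state; each goto is a transition. */
    int count_words(const char *s)
    {
        int words = 0;

    in_space:
        if (*s == '\0') return words;
        if (*s == ' ')  { s++; goto in_space; }
        words++;                 /* start of a new word */
        goto in_word;

    in_word:
        if (*s == '\0') return words;
        if (*s == ' ')  { s++; goto in_space; }
        s++;
        goto in_word;
    }

    int main(void)
    {
        printf("%d\n", count_words("life is too short"));  /* prints 4 */
        return 0;
    }

Perfectly legal C, but it hands the optimizer a tangle of control flow
it probably doesn't see very often, and that's the kind of code where
I've hit the weirdness.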
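
P.P.S. On inline: for what it's worth, the standard C99 spelling is just
the keyword on the definition, as in the sketch below (the function
names are only placeholders). Whether any given SDCC release actually
honors it, or wants a pragma or command-line switch instead, is exactly
the part I can't find documented.

    /* Standard C99 inline.  The compiler *may* expand calls in place
       instead of emitting a call; check the generated .asm to be sure. */
    static inline unsigned char high_nibble(unsigned char b)
    {
        return (unsigned char)(b >> 4);
    }

    unsigned char bcd_tens(unsigned char bcd)
    {
        return high_nibble(bcd);  /* hopefully inlined */
    }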