On Sun, 12 Oct 2008, Olin Lathrop wrote:

> sergio masci wrote:
> > You're assuming that you MUST use the hardware stack to
> > call a function,
>
> On a PIC that is the case if you use the native instructions for that
> purpose. You can simulate a call in software, but it will take a
> significant number of additional instructions and the return address
> still has to be saved somewhere. Also the subroutine has to be written
> to support this since the mechanisms for returning would be different.

What do you class as significant? Surely "a significant number" would mean
a number that corresponds to a significant overhead in either execution
time or program space, not just "hey, it takes 4 extra instructions to do
this simulated call - that's significantly more than 1".

> > that all functions must incur overheads for a call.
>
> Unless you in-line expand them at assembly/compile time, they are going
> to incur some runtime overhead. If you do expand them, they will take
> additional program memory. There is no free lunch.

Correct. But if that's the cost of overcoming a problem then so be it.
It's like saying I need to do floating point calculations on a PIC, so I
need a floating point library. Yes, it consumes program space, but if you
need it you need it.

> The model of many short subroutines with no globals can be useful on
> "large" systems, but on small systems like PICs the costs can get in the
> way. For low volume projects you can maybe use a bigger PIC, but that's
> not always an option. There is a reason Microchip make little PICs too.

Yes, I agree. And sometimes it becomes necessary to convert a few common
instructions into a subroutine and call it in several places even though
the net result is only a saving of a few instructions, just so that the
program will fit in the limited space available. Yes, you incur the
overhead of the call and return, but at least your program fits.
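For anyone following along, here is a minimal sketch in C of the kind of
simulated call being discussed: the return "address" is pushed on a
software stack and control transfer is done by dispatch rather than by the
hardware CALL/RETURN. All the names (run, ret_stack, the label values) are
hypothetical, and real stackless code on a PIC would be assembly, but the
extra bookkeeping per call and return is the same idea:

```c
/* Hypothetical sketch of a simulated call: return labels live on a
 * software stack, and "call"/"return" are a push-plus-jump and a
 * pop-plus-jump inside a dispatch loop. */
enum label { MAIN_START, AFTER_CALL, SUB_ENTRY, DONE };

int run(void)
{
    enum label ret_stack[8];      /* software return stack           */
    int sp = 0;                   /* software stack pointer          */
    enum label pc = MAIN_START;   /* simulated program counter       */
    int result = 0;

    for (;;) {
        switch (pc) {
        case MAIN_START:
            ret_stack[sp++] = AFTER_CALL;  /* push return label: the  */
            pc = SUB_ENTRY;                /* extra cost of the call  */
            break;
        case SUB_ENTRY:
            result += 21;                  /* subroutine body         */
            pc = ret_stack[--sp];          /* "return" = pop + jump   */
            break;
        case AFTER_CALL:
            result *= 2;
            pc = DONE;
            break;
        case DONE:
            return result;
        }
    }
}
```

Note how the subroutine has to cooperate with the scheme (it returns by
popping ret_stack, not by a RETURN instruction), which is exactly the
point that the subroutine "has to be written to support this".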
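The space trade-off of factoring out a common sequence is simple
bookkeeping. Assuming one-word CALL and RETURN instructions (as on a
mid-range PIC), a sequence of n instructions repeated at k sites costs
n*k words inline, versus (n + 1) words for the shared body plus one CALL
per site. A throwaway illustration (function names are mine, not from the
thread):

```c
/* Back-of-envelope program-space comparison, assuming one-word CALL and
 * RETURN instructions. n = length of the repeated sequence in words,
 * k = number of places it appears. */
int inline_words(int n, int k)   { return n * k; }
int factored_words(int n, int k) { return (n + 1) + k; } /* body+RETURN, k CALLs */
int saving(int n, int k)         { return inline_words(n, k) - factored_words(n, k); }
```

For n = 5, k = 4 the factored version saves 10 words; for n = 3, k = 2 it
saves nothing at all, which is why the net result is sometimes "only a
saving of a few instructions" and yet still worth doing when that is what
makes the program fit.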
> As with any guidelines, you have to know when to apply them and when
> it's not appropriate. Global variables can be very useful, and they are
> appropriate in some cases. The same holds true for subroutines that are
> more than 10 lines long. Even on large systems, I don't really care
> about the size of a subroutine in lines of code.

Agreed.

> It's much more important, on all size systems, that the overall project
> be broken into functional units in the right places. You first have to
> break the project into subsystems, then think about the tasks each of
> those subsystems have to perform in a general way outside the context of
> the particular project. If you can separate the subsystem from the
> particular implementation and think of its true low level tasks, you can
> probably design a good subroutine interface for it. Then of course there
> are always compromises about how general is needed versus how
> complicated. The right subroutine interface gives you a better tradeoff
> between these, but the tradeoff is always there.

But that's what a good HLL and compiler will do for you. It will have a
selection of calling protocols available to it, from which it can select
the optimum way of calling functions and passing parameters and results
between them.

> Artificially breaking subroutines just because they exceeded some
> arbitrary code limit can make the module less understandable and more of
> a pain to maintain.

Yes, but sometimes you've no choice. I'm not sure I follow. On the one
hand you seem to be arguing against breaking down software so that it will
fit into a small PIC, and on the other you seem to be arguing that your
software should not be compromised by being forced into a small PIC?

> There is a right balance to everything. About the only hard and fast
> rule is that hard and fast rules are generally bad.

Agreed.
Regards
Sergio Masci

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist