Terry Harris wrote:

> So you say you dare not use a heap because you can't tell if
> everything will fit,

I never said this. I dare use pretty much everything that makes sense.
I actually said that I use the heap a lot, in general -- just not in
every application I write.

> (assuming typical kind of normal vs peak memory requirements and no
> significant heap fragmentation problems).

My point here is about the applications where assuming is not good
enough, where I need to calculate and predict rather than measure and
assume.

>> But this discussion here is about programming in C on small systems
>> with memory that is, sometimes severely, limited. My argument is
>> that there are applications where there is no problem with using the
>> heap, and that there are applications where there is one, and that
>> it's important to know the difference. What's the problem with this
>> opinion?
>
> I don't have a problem with that. I certainly don't recommend using
> dynamic memory allocation without an understanding of what is going
> on and the ability to monitor the allocation system.

Sometimes, I'd say, there are applications where exactly that
understanding of what is going on tells me that using the heap is not
the most efficient solution.

My argument still is that not all small embedded system applications
are good candidates for using the heap, because there are situations
where its advantages are outweighed by its problems -- nothing else.
Is this really such a problematic statement?

The potential problems I described with the heap don't apply in every
situation or every application, but they do apply in some. (Pointing
out that there are applications where they don't apply doesn't take
anything away from this -- I know there are; I write those
applications, too.) And in a part of the applications where those
concerns do apply, it is more efficient (programming-wise) not to use
the heap, because the effort to make the heap work according to spec
is greater than the effort to make static memory work according to
spec, without a significant gain from using the heap. (A minimal
sketch of what I mean is at the end of this message.)

> I do have a problem with an argument that because I can't predict
> exactly what the worst case heap usage and fragmentation will be I
> can't use a heap at all.

In some situations, this is exactly the killer. It's just a balance of
advantages vs disadvantages that sometimes goes this way.

> Worst case stack usage or worst case response time to non-interrupt
> generating events are equally difficult to predict exactly.

No. My 16F and 18F PIC compilers tell me exactly how the stack is laid
out, worst case and otherwise, after a build, without any additional
work on my end. I'm not sure you and I are on the same page here.

Worst case response time can be calculated if needed. It is a bit more
time-consuming, but this can be mitigated by modularizing the program
and the calculation, and by writing the program with this need in
mind. Of course this can be done with the heap, too -- but then you're
often at the point where the use of the heap brings with it so many
additional constraints that the gain from using it is both limited and
offset by those constraints.

> If you are do real safety critical stuff then things get a lot more
> anal.

Many of my embedded systems are "real safety critical stuff." Many
embedded systems in general are -- many systems in, for example,
process control or automotive are "real safety critical".
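To illustrate what I mean by making static memory work according to
spec, here is a minimal sketch (hypothetical names, not from any
particular project) of a statically allocated buffer pool in C. Its
worst-case RAM footprint is fixed at compile time and visible in the
linker map, so there is nothing left to measure or assume:

    #include <stddef.h>

    /* Hypothetical sketch: a statically allocated pool of message
     * buffers. The worst-case RAM usage (POOL_SIZE * BUF_SIZE plus
     * the flag array) is fixed at compile time and shows up in the
     * linker map. */
    #define POOL_SIZE 4
    #define BUF_SIZE  32

    static unsigned char pool[POOL_SIZE][BUF_SIZE];
    static unsigned char in_use[POOL_SIZE];

    /* Returns a free buffer, or NULL if all are taken. "All taken"
     * is a condition the design handles explicitly, instead of
     * hoping the heap never fragments or runs out. */
    unsigned char *buf_acquire(void)
    {
        unsigned char i;
        for (i = 0; i < POOL_SIZE; i++) {
            if (!in_use[i]) {
                in_use[i] = 1;
                return pool[i];
            }
        }
        return NULL;
    }

    /* Marks a previously acquired buffer as free again. */
    void buf_release(unsigned char *buf)
    {
        unsigned char i;
        for (i = 0; i < POOL_SIZE; i++) {
            if (pool[i] == buf) {
                in_use[i] = 0;
                return;
            }
        }
    }

A protocol handler, say, would call buf_acquire() when a frame comes
in and buf_release() when the frame has been handled; if buf_acquire()
returns NULL, the frame gets dropped or NAKed, which is a documented,
bounded behavior rather than a heap surprise.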
Gerhard

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist