Terry Harris wrote:

> On Thu, 15 Oct 2009 12:00:49 -0300, Gerhard Fiedler wrote:
>
>> Vitaliy, you never explained how you can make heap usage predictable.
>> Of course I don't mind if you disagree -- as I see it, this is your
>> problem, not mine -- but the thing is that as long as you just say
>> "believe me" without presenting any further explanation, I for one
>> won't believe you and will take the lack of further explanation for
>> an absence of good arguments :)
>
> Come on, if you can predict memory usage for static allocation at
> compile time then you can predict heap usage for dynamic allocation at
> run time, because it is the same.

Could be, or could not be. I think there's a difference. For example, it
has been mentioned during this discussion that one advantage of using
the heap is that you can use memory for one task while it is not in use
by another task. (You also say that later in this message.) When you
start doing such things, you may end up using less total memory than you
would with static allocation -- but you will also end up with a system
whose memory usage is much more difficult to predict. You may need to
guarantee that those overlapping memory uses don't overlap in time.
(Depending on the spec; some systems may fail with a message to the
user, others may not fail at all when operated within spec.)

I'm not sure whether it is possible to analyze the code and predict the
heap usage for each spec'ed scenario, but in any case nobody has yet
described how they do this in practice. And any such evaluation needs to
be repeated whenever the program is changed, which means it should be
automatic. I'm not aware of any tools that do this.

> What is harder to predict is how much free heap is unavailable due to
> fragmentation.

Exactly. The problem is not so much determining how much memory is
needed, but whether the available memory is sufficient. Figuring out the
available memory with the standard C heap is not trivial -- unless you
add specific constraints.

> That depends on the nature, size, and lifetime of the allocations.
> That doesn't mean you can't make a good enough prediction ...

Of course. But you need to do this whenever you change the program. And
since it is a manual and time-consuming process (nobody has yet
suggested an automated one), it is very likely to be shortcut once in a
while, if not frequently.

> ... and various measures can be taken to mitigate fragmentation.

Sure. Often by replacing the heap with a custom allocator based on
arrays. (I hope the irony is being noted :) A sketch of what I mean is
further down.

> The problem with static allocations is that, being sized for worst
> case conditions, they all have a lot of wasted free space most of the
> time. Dynamic allocation lets you share that free space (or
> contingency) between all types of allocation.

I agree with this. But the difficulty comes with predicting whether and
how these different tasks and their memory needs overlap in time. You
can of course have a shorter input buffer, designed to be enough for
most scenarios, and count on being able to expand it when needed. But
then you need to make sure that no other task needs that memory at the
same time -- or that it is not already in use by another task when you
need it. Such predictions are quite difficult to make without closely
analyzing many different usage scenarios and code paths -- which is a
lot of work, and for me, for a certain class of applications, makes the
use of the heap so onerous that I just don't do it.
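To make the "custom allocator based on arrays" remark a bit more
concrete, here is roughly the kind of thing I mean -- a fixed-size block
pool on top of a static array. The block size and count are made up for
the example, and the error handling is minimal; take it as a sketch, not
a finished allocator. Since every block has the same size there is no
fragmentation, and the worst case is simply the size of the array:

    /* Fixed-size block pool built on a static array: no heap, no
       fragmentation, and the worst case is the array itself, which
       the linker can see. */

    #include <stddef.h>
    #include <stdint.h>

    #define POOL_BLOCK_SIZE  32u   /* made-up numbers for the example */
    #define POOL_NUM_BLOCKS  16u

    static uint8_t pool_storage[POOL_NUM_BLOCKS][POOL_BLOCK_SIZE];
    static uint8_t pool_in_use[POOL_NUM_BLOCKS];  /* 0 = free, 1 = taken */

    void *pool_alloc(void)
    {
        uint8_t i;
        for (i = 0; i < POOL_NUM_BLOCKS; i++) {
            if (!pool_in_use[i]) {
                pool_in_use[i] = 1;
                return pool_storage[i];
            }
        }
        return NULL;               /* caller must handle exhaustion */
    }

    void pool_free(void *p)
    {
        uint8_t i;
        for (i = 0; i < POOL_NUM_BLOCKS; i++) {
            if (p == pool_storage[i]) {
                pool_in_use[i] = 0;
                return;
            }
        }
    }

Of course this only "solves" fragmentation by turning it back into a
sizing problem -- which is exactly the irony I was pointing at.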
> Will free space on the heap lost to fragmentation ever be larger than
> the free space lost to static allocation contingency? I would say you
> probably have to deliberately design a system with unfortunate
> allocation patterns to make that true.

This depends a lot on the system, the type of program, the amount of
memory needed, and the amount of memory available.

I'm not sure this came through, but I use the heap a lot in my day job
(mostly C++ these days, and big systems). In a typical C++ program you
have to use the heap (at least if you want to use any libraries). And
even there, we need to take costly and difficult measures to limit
memory usage (which is mostly heap usage), or else the system just grabs
too much and slows to a crawl because the OS is paging too much.

But this discussion is about programming in C on small systems with
memory that is, sometimes severely, limited. My argument is that there
are applications where there is no problem with using the heap, that
there are applications where there is one, and that it's important to
know the difference. What's the problem with this opinion? Do you guys
really think that there is /never/ a problem with using the heap on a
PIC?

Gerhard
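P.S. For completeness, here is the static counterpart of "sharing the
contingency". If you can prove -- rather than just hope -- that two
buffers are never live at the same time, you can overlay them
explicitly. The names and sizes below are invented for the illustration;
the point is only that the worst case (the larger of the two sizes)
stays visible at compile time:

    /* Two buffers that are provably never live at the same time,
       overlaid in a union.  Worst case = the larger member. */

    #include <stdint.h>

    static union {
        uint8_t rx_line[80];      /* used only while parsing input     */
        uint8_t report[128];      /* used only while building a report */
    } shared_buf;

    void parse_command(void)
    {
        shared_buf.rx_line[0] = '\0';  /* ... read and parse the line ... */
    }

    void build_report(void)
    {
        shared_buf.report[0] = '\0';   /* ... format the report ... */
    }

The "never both live" part is the same proof obligation you have with
the heap; the union just makes it explicit and keeps the worst case
where the linker can see it.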