Gerhard Fiedler wrote:
> All agreed, but this is probably a device that may fail if the memory it
> needs isn't available -- typically because it has a user interface and
> user configuration possibilities, and when it fails it's because the
> user added too much functionality for the available memory. Your typical
> small-micro embedded device may not fail within specified operation
> constraints, and memory must be guaranteed to be available. Which
> requires the memory design I was talking about.
>
> These are two quite different scenarios, and probably should be
> discussed separately.
>
> Or -- if I'm mistaken --, how do you guarantee that a program that uses
> heap allocation doesn't fail (due to lack of memory) when operated
> within specs?

First of all, how do you guarantee that _any_ program does not fail? AFAIK it is a programming axiom that you cannot prove that a program of any reasonable complexity is correct.

You see, we're really comparing apples to oranges here. If you were simply to replace static memory allocation (SMA) with dynamic memory allocation (DMA) by allocating memory only at startup and never freeing it, then there is no downside to using DMA from a "safety" POV. You just make sure that every call to malloc() goes through an error handler that prints a message if the device is out of memory (see the sketch below). You will get the error the first time you run the program, as opposed to at compile time.

If you must use DMA in your program to get its benefits, then it's pointless to argue whether it's inferior to SMA or not. You just need a bit more discipline: think about the worst-case situations, and write a test suite that simulates them.

Vitaliy
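
To make that concrete, here is a minimal C sketch of the startup-only allocation pattern described above: one malloc() wrapper with a shared out-of-memory handler, called once at init, with nothing ever freed. The buffer names and sizes (rx_buffer, RX_BUF_SIZE, etc.) are invented for illustration, and on a real small micro the handler might blink an LED or write to a debug UART instead of calling printf().

/* Sketch of startup-only dynamic allocation with a single
 * out-of-memory handler. Names and sizes are made up. */
#include <stdio.h>
#include <stdlib.h>

#define RX_BUF_SIZE   256
#define LOG_BUF_SIZE  1024

static unsigned char *rx_buffer;
static char          *log_buffer;

/* Wrapper so every allocation shares one error handler. */
static void *xmalloc(size_t size, const char *what)
{
    void *p = malloc(size);
    if (p == NULL) {
        /* Report the failure and stop; you see this the first
         * time you run the program, not at compile time. */
        printf("out of memory allocating %s (%lu bytes)\n",
               what, (unsigned long)size);
        exit(EXIT_FAILURE);
    }
    return p;
}

/* All allocation happens here, once, at startup; nothing is freed
 * later, so the heap cannot fragment or fail mid-operation. */
static void init_memory(void)
{
    rx_buffer  = xmalloc(RX_BUF_SIZE,  "rx_buffer");
    log_buffer = xmalloc(LOG_BUF_SIZE, "log_buffer");
}

int main(void)
{
    init_memory();
    /* ... main loop uses rx_buffer and log_buffer exactly as it
     * would use statically allocated arrays ... */
    return 0;
}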