There are nominally three levels of debugging, I think. I'm most familiar with them on rather large systems, but the same levels seem to exist in smaller embedded systems.

1) Debugging via EXTERNAL EVENT. You write code, put it on your system, and see if it works. When you're desperate, you put printf's in the appropriate critical sections of code, so you can see where it is and what it thinks the state of the world is (for small systems, you might be able to toggle some unused signals in meaningful ways). This is painful and time-consuming.

2) Debugging via CODE. I think the "ICD"-like devices fit in here. You can insert breakpoints into your executable code and have some sort of supervisor snatch control away and let you look at the internal state of your program. If you're lucky, you get visibility at the source level, with appropriate logical symbols. If you're less lucky, you only have visibility at the assembly level (unless that's your source, too!). Key identifying feature: multiple code breakpoints.

3) Debugging via HARDWARE. This is your ICE. Specialized hardware lets you debug based on the internal state of the processor, and allows the "supervisor" from (2) to be completely independent of the code being debugged. Key identifying feature: breakpoint on memory address reference.

A personal observation is that there is a sort of double-humped learning curve. If you're at (1), there are certain features that you're just DYING for in (2), and you can jump on them immediately. But learning the full capabilities of (2) takes quite a while. (I guess you're there when "going back" is "impossible" instead of just painful.) I've lived most of my life at (2), and I can see some things that I'd like to have (3) for every once in a while, but I don't NEED (3). OTOH, there are people who use (3) most of the time, and I bet THEY would have problems doing without it (even in the same "application environment" I'm working in).

The most useful features from each stage tend to migrate downward, as well. Our pre-debugger environment included a "rom monitor" that allowed a single breakpoint, deposit, examine, and continue. It was a lot better than nothing! Larger processors or systems tend to include some sort of "address breakpoint" feature that software can trap as well, perhaps using the VM hardware...

I'm not quite sure where simulators fit in. In one sense, they're type (3) systems - complete internal visibility. In another sense, since they have no real connections to the outside world, they can be useless. I guess it depends on whether you're debugging an algorithm or a troublesome piece of hardware...

I've tacked a few quick sketches of the kinds of things I'm talking about onto the end of this post.

BillW
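
Sketch for (1): printf's in a critical section, plus a spare output pin you can hang a scope on. This is only meant to show the flavor of it; the register name, address, and function here are all made up, so substitute whatever unused signal your board really has.

    /* Level (1) instrumentation: printf plus a spare-pin toggle.
       GPIO_SPARE_OUT is a hypothetical register at a made-up address. */
    #include <stdio.h>
    #include <stdint.h>

    #define GPIO_SPARE_OUT (*(volatile uint32_t *)0x40020014u)

    static void handle_packet(const uint8_t *buf, int len)
    {
        GPIO_SPARE_OUT |= 1u;      /* scope-visible "I got here" marker */
        printf("handle_packet: len=%d first=0x%02x\n", len, buf[0]);

        /* ... the real work ... */

        GPIO_SPARE_OUT &= ~1u;     /* marker off: pulse width = time spent here */
    }

The nice thing about the pin toggle is that it costs a couple of instructions, so it doesn't distort the timing the way printf does.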
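
Sketch for (2): the usual trick a type-(2) supervisor plays is to remember the real instruction at the breakpoint address and patch a trap instruction over it. The fragment below assumes a Cortex-M part with the code running from RAM (BKPT encodes as 0xBE00 | n there); a real ICD would use the on-chip breakpoint hardware instead, so take this purely as an illustration of the idea.

    #include <stdint.h>

    #define MAX_BP 8                              /* "multiple code breakpoints" */

    struct bp { uint16_t *addr; uint16_t saved; };
    static struct bp bp_table[MAX_BP];

    /* Plant breakpoint n at a RAM-resident Thumb code address. */
    int set_breakpoint(int n, uint16_t *code_addr)
    {
        if (n < 0 || n >= MAX_BP)
            return -1;
        bp_table[n].addr  = code_addr;
        bp_table[n].saved = *code_addr;           /* remember the real instruction */
        *code_addr = (uint16_t)(0xBE00u | n);     /* patch in BKPT #n */
        __asm__ volatile ("dsb\n\tisb");          /* make sure the CPU sees the patch */
        return 0;
    }

    /* The debug-monitor/fault handler then looks the faulting PC up in
       bp_table, shows you registers and memory (the internal state of your
       program), and on "continue" restores .saved and returns. */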
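
Sketch for (3)'s key feature, breakpoint on memory address reference. The ICE does this with its own hardware, but it's also one of the features that has migrated down onto the chip: on an ARMv7-M part the DWT comparators will do it. This is from memory, using the CMSIS register names, so check the encodings against the reference manual before trusting it.

    #include <stdint.h>
    #include "stm32f4xx.h"   /* or whatever CMSIS device header your part uses */

    /* Halt (or take a DebugMonitor exception) on the next WRITE anywhere
       in the 4-byte word at 'addr'. */
    void watch_writes_to(volatile uint32_t *addr)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* turn the DWT on */
        DWT->COMP0     = (uint32_t)addr;                 /* address to match */
        DWT->MASK0     = 2;                              /* ignore low 2 bits: 4-byte window */
        DWT->FUNCTION0 = 6;                              /* 0b0110: match on data write */
        /* With a debugger attached the core halts at the offending store;
           stand-alone, set DEMCR.MON_EN and catch it in DebugMon_Handler(). */
    }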
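
Sketch of the "rom monitor" sort of thing: the whole debugger is a loop on the console port that understands examine, deposit, and go. (The single breakpoint worked by depositing a trap instruction that jumped back into this loop; I've left that part out.) fgets/sscanf here stand in for whatever the real monitor did with its UART.

    #include <stdio.h>
    #include <stdint.h>

    void monitor(void)
    {
        char line[80];
        for (;;) {
            printf("mon> ");
            if (!fgets(line, sizeof line, stdin))
                return;

            char cmd;
            unsigned long addr, val;
            int n = sscanf(line, " %c %lx %lx", &cmd, &addr, &val);
            if (n < 1)
                continue;

            if (cmd == 'e' && n >= 2)              /* e <addr>: examine a word */
                printf("%08lx: %08lx\n", addr,
                       (unsigned long)*(volatile uint32_t *)addr);
            else if (cmd == 'd' && n == 3)         /* d <addr> <val>: deposit */
                *(volatile uint32_t *)addr = (uint32_t)val;
            else if (cmd == 'g')                   /* g: continue the program */
                return;
            else
                printf("?\n");
        }
    }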
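
Sketch of the "address breakpoint that software can trap, using the VM hardware" trick, shown on a hosted POSIX system: protect the page holding the variable, and the first reference to it lands you in a handler that knows exactly which address was touched. This is the general technique, not any particular vendor's implementation.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static long page_size;
    static char *watched_page;

    static void on_fault(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)ctx;
        /* fprintf isn't strictly async-signal-safe, but this is a debugging hack */
        fprintf(stderr, "reference to %p trapped\n", si->si_addr);
        /* un-protect so the faulting instruction completes when we return */
        mprotect(watched_page, page_size, PROT_READ | PROT_WRITE);
    }

    int main(void)
    {
        page_size    = sysconf(_SC_PAGESIZE);
        watched_page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_flags     = SA_SIGINFO;
        sa.sa_sigaction = on_fault;
        sigaction(SIGSEGV, &sa, NULL);

        mprotect(watched_page, page_size, PROT_NONE);  /* arm the "breakpoint" */
        watched_page[16] = 42;                         /* this reference gets trapped */
        printf("value after the trap: %d\n", watched_page[16]);
        return 0;
    }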