Well said, but I think there are two subdivisions of your level 2.

2A) is, yes, software via some minimal support in the instruction set. It's the monitor/debugger system that is more useful and popular on processors other than the PIC, because those processors use special hardware and instructions to support the debugging operation. Breakpoints are set by replacing the code at that address with a special instruction and saving the original instruction for later restoration. Some processors also have a "single step" mode where the monitor routine regains control after every instruction is executed in the main code. At that point, variable values can be checked, etc., and the single stepping continued or whatever. I write Win32Asm code every day and I use WinDbg (free from Microsoft) on my '98 machine and it just rocks... I can see everything happening, break on variable value change, stack under/overflow, etc...

But this doesn't really exist on the PIC except as a cooperative system where code is added to assist in the debugging; sort of an advanced version of level 1. The Stickley register monitor is a good example: http://www.piclist.com/stickleyregmon But a real PIC monitor doesn't seem to be possible, as the '877 even requires an external interrupt (RB6 or RB7) for the chip to shadow the PC so that the interrupt (monitor) routine can report to the host where the main program was, and there is no single step. But what if the RTCC were set to interrupt after one instruction when returning from the monitor ISR? The only problem remaining is: where did the main code branch to? Maybe on the next generation of chips.

This level also has a serious problem for embedded controllers, as it does not allow them to run at full speed while at the same time debugging by running to breakpoint, etc., unless the code is modified, and with flash memory this is just not a good idea. Even if the single-step thing were possible, it would interfere with timing on so many types of code that its usefulness would be limited. Almost every project involving a PIC also involves a close interface with external hardware.

There is another level:

2B) This is where the ICD works. It's not just software, but a combination of hardware and software. A special interrupt on RB6 or RB7 causes the PC to be copied into the reserved registers, and a special interrupt routine reads this and reports it, along with the other register values, to the host. Right now, that host is the ICD, but it could easily be a standard ASCII terminal. To set a breakpoint, these same reserved registers are set to the address where the monitor wants to regain control, and the hardware in the target chip (a comparator) generates an interrupt when the PC reaches that address. Of course, the monitor could also set any other register on command from the host.

The nice thing is that this hardware is built into each and every '877 made. It's not only on the version of the chip used for ICE. It uses some resources (pins and registers and code) and it can't do everything, but it's head and shoulders above 1) or 2A) because it allows the chip to run at full speed and does not require modification of the code (inserting breakpoint instructions, etc.) to work.

But the best thing about level 2B) is that it is available in every production part. Just set aside some code space and leave RB6 & RB7 and the shadow registers free. If the production chip has a problem, you will be able to see it.
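To make the 2A vs. 2B difference concrete, here is a toy sketch in plain C that you can run on a host machine (it is not PIC code; the opcode value, the fake program, and the monitor() routine are all invented for illustration). The point is just that 2A has to rewrite the code image, while 2B watches the PC with a compare register and leaves the code alone:

/* Toy simulation contrasting 2A (software) and 2B (hardware) breakpoints.
   All names, addresses, and opcode values are invented for illustration. */
#include <stdio.h>
#include <stdint.h>

#define TRAP 0xFFFFu                  /* pretend "break" opcode (2A style) */

static uint16_t code_img[8] = { 1, 2, 3, 4, 5, 6, 7, 0 }; /* fake program */
static uint16_t saved;                /* original word saved by 2A         */
static uint16_t bp_compare = 5;       /* 2B: breakpoint compare register   */

static void monitor(uint16_t pc)      /* stands in for the monitor ISR */
{
    printf("monitor: stopped at PC=%u, opcode=%04X\n",
           (unsigned)pc, (unsigned)code_img[pc]);
}

int main(void)
{
    /* 2A: modify the code image; save the word, plant a trap */
    saved = code_img[3];
    code_img[3] = TRAP;

    for (uint16_t pc = 0; code_img[pc] != 0; pc++) {
        if (code_img[pc] == TRAP) {   /* 2A: trap opcode fetched       */
            code_img[pc] = saved;     /* restore so the code can go on */
            monitor(pc);
        }
        if (pc == bp_compare)         /* 2B: comparator watches the PC */
            monitor(pc);              /* code image never touched      */
    }
    return 0;
}

Notice that the 2A path has to write into the program store twice (plant the trap, then restore), which is exactly the flash-wear problem mentioned above; the 2B path only ever reads it.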
You can say, OK, run the (original, unmodified) program to this point and then stop and show me the registers. You know what values should be there at that point, and looking at what values are there will probably tell you what the problem is. You can single step if time is not an issue, and modify registers on the fly to set up artificial tests.

Now, if we wanted to, we could rewrite the firmware so that it used the USART to communicate directly with an ASCII terminal rather than through the ICD module and the MPLAB software, connect RB6 to RB7 so that we could generate our own hardware interrupt to get the shadow register set to the PC for external stops, and even breadboard a little circuit that asserts RB6/7 when a hardware event occurs (when the PIC asserts RA1, stop the program and show me where it is in the code). Use ROMZap or another bootloader for programming the code, and you have a complete development system in one chip, with the only cost being the RS232 interface. And you don't even have to lose the RS232 port to your application, since the bootloader and monitor can serve as your serial interface ISR for the main application code.

---
James Newton mailto:jamesnewton@geocities.com 1-619-652-0593
http://techref.massmind.org NEW! FINALLY A REAL NAME!
Members can add private/public comments/pages ($0 TANSTAAFL web hosting)

-----Original Message-----
From: pic microcontroller discussion list [mailto:PICLIST@MITVMA.MIT.EDU] On Behalf Of William Chops Westfield
Sent: Tuesday, March 21, 2000 17:11
To: PICLIST@MITVMA.MIT.EDU
Subject: Debugging (ICD vs ICE vs whatever)

There are nominally three levels of debugging, I think. I'm most familiar with them on rather large systems, but the same levels seem to exist in smaller embedded systems.

1) Debugging via EXTERNAL EVENT. You write code, put it on your system, and see if it works. When you're desperate, you put printfs in the appropriate critical sections of code so you can see where it is and what it thinks the state of the world is (for small systems, you might be able to toggle some unused signals in meaningful ways). This is painful and time consuming.

2) Debugging via CODE. I think the "ICD"-like devices fit in here. Here, you can insert breakpoints into your executable code and have some sort of supervisor snatch control away and let you look at the internal state of your program. If you're lucky, you get visibility at a source code level, with appropriate logical symbols. If you're less lucky, you only have visibility at the assembly level (unless that's your source, too!). Key identifying feature: multiple code breakpoints.

3) Debugging via HARDWARE. This is your ICE. Specialized hardware lets you debug based on the internal state of the processor, and allows the "supervisor" from (2) to be completely independent of the code being debugged. Key identifying feature: breakpoint on memory address reference.

A personal observation is that there is a sort of double-humped learning curve. If you're at (1), there are certain features that you're just DYING for in (2), and you can jump on them immediately. But to learn the full capabilities of (2) takes quite a while. (I guess you're there when "going back" is "impossible" instead of just painful.)
I've lived most of my life at (2), and I can see some things that I'd like to have (3) for every once in a while, but I don't NEED (3). OTOH, there are people who use (3) most of the time, and I bet THEY would have problems doing without it (even in the same "application environment" I'm working in).

The most useful features from each stage tend to migrate downward, as well. Our pre-debugger environment included a "rom monitor" that allowed a single breakpoint, deposit, examine, and continue. It was a lot better than nothing! Larger processors or systems will tend to include some sort of "address breakpoint" feature that software can trap, as well, perhaps using VM hardware...

I'm not quite sure where simulators fit in. In some sense, they're type (3) systems: complete internal visibility. In another sense, since they have no real connections to the outside world, they can be useless. I guess it depends on whether you're debugging an algorithm or a troublesome piece of hardware...

BillW
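The single-breakpoint "rom monitor" BillW describes boils down to a very small command loop. Here is a rough sketch in plain C, with stdin/stdout standing in for the serial line, mem[] standing in for target memory, and a one-letter command set that is invented for illustration rather than taken from any real monitor:

/* Minimal "rom monitor" command loop: examine, deposit, one
   breakpoint, continue.  stdin/stdout stand in for the serial
   line; the command letters and mem[] are invented for this sketch. */
#include <stdio.h>

static unsigned char mem[256];        /* pretend target memory     */
static int bp = -1;                   /* the single breakpoint     */

int main(void)
{
    unsigned a, v;
    int c;

    while ((c = getchar()) != EOF) {
        switch (c) {
        case 'e':                     /* e <addr>: examine a byte   */
            if (scanf("%x", &a) == 1)
                printf("%02X: %02X\n", a & 0xFFu, (unsigned)mem[a & 0xFFu]);
            break;
        case 'd':                     /* d <addr> <val>: deposit    */
            if (scanf("%x %x", &a, &v) == 2)
                mem[a & 0xFFu] = (unsigned char)v;
            break;
        case 'b':                     /* b <addr>: set breakpoint   */
            if (scanf("%x", &a) == 1)
                bp = (int)(a & 0xFFu);
            break;
        case 'g':                     /* g: continue ("go")         */
            if (bp < 0)
                printf("no breakpoint set\n");
            else
                printf("running to %02X...\n", (unsigned)bp);
            /* a real monitor would plant the trap at bp, restore the
               saved registers, and jump back into the user code here */
            break;
        }
    }
    return 0;
}

Everything past the parsing is the hard part on a real target, but the loop itself is tiny, which is why this sort of feature migrates downward so easily.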