> In this way, the ISR stays relatively free most of the time. It only
> executes some short code when a byte is received. So, even if the
> processing of the message data takes a long time (for whatever
> reason), the ISR is free to continue pushing data into the buffer.
> Meanwhile, the processing thread runs "in the background" to handle
> the received data.
>
> I would imagine that using an interrupt (rather than consistently
> polling for data) is far more efficient. Is this the way it's usually
> done in the microcontroller world? Or have I totally missed
> something?

I think you are right, and not only for the microcontroller world; the
concept holds true for other worlds too. In general software
development it appears as the so-called n-layer architecture, for
instance, and in my opinion even beyond software development. One
resource captures the data in "real time", the next-level resource
gathers and processes that data later to produce a more meaningful
result, and the level after that handles the data further still.

That was a good point, I must admit.
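For the archives, here is a minimal sketch of that ISR + ring buffer
pattern in generic C. The names here (uart_rx_byte, uart_get_byte, the
buffer size) are placeholders of my own, not from any particular PIC
compiler or library; the actual UART register access and interrupt
setup will differ from part to part.

    /* Single-producer/single-consumer ring buffer: the ISR is the only
     * writer of `head`, the main loop is the only writer of `tail`. */
    #include <stdint.h>

    #define BUF_SIZE 64u        /* power of two, so masking works */

    static volatile uint8_t buf[BUF_SIZE];
    static volatile uint8_t head;   /* written only by the ISR       */
    static volatile uint8_t tail;   /* written only by the main loop */

    /* Placeholder: read the UART receive register for your part. */
    extern uint8_t uart_rx_byte(void);

    /* Receive interrupt: do the bare minimum, then get out. */
    void uart_rx_isr(void)
    {
        uint8_t byte = uart_rx_byte(); /* on many parts this read also
                                          clears the RX interrupt flag */
        uint8_t next = (uint8_t)((head + 1u) & (BUF_SIZE - 1u));
        if (next != tail) {            /* drop the byte if buffer full */
            buf[head] = byte;
            head = next;
        }
    }

    /* Called from the main loop; returns 1 if a byte was waiting. */
    int uart_get_byte(uint8_t *out)
    {
        if (tail == head)
            return 0;                  /* buffer empty */
        *out = buf[tail];
        tail = (uint8_t)((tail + 1u) & (BUF_SIZE - 1u));
        return 1;
    }

Because the ISR only ever writes `head` and the main loop only ever
writes `tail`, and both are single bytes, no interrupt masking is
needed around uart_get_byte() on parts where a byte write is atomic.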