Olin Lathrop wrote:
> Vitaliy wrote:
>> While it's true that you can process data "on the fly", I don't like
>> the coupling it creates. I like it when the responsibilities of each
>> module are clearly defined. Processing data is not the comm module's
>> responsibility; its job is to assemble a packet and pass it on to the
>> next level.
>
> Ah, you're still stuck in the big-system mindset. Just because the
> data is handled as a stream doesn't mean protocol layering and module
> boundaries need to be broken. Consider the comm module's job as
> presenting a byte stream to the next layer up instead of a packet.
> Let's say data is arriving via a UART. The comm module might handle
> flow control, escape codes, validating packets, and extracting the
> payload from those packets. It presents a GET_BYTE routine to the next
> level up. As one packet is exhausted, it automatically switches to
> the payload from the next.

This is actually not only a small-system concern. If you're dealing
with lots of data, say gigabytes, each layer of buffering costs a lot,
even on today's typical server machines. If you do the typical
small-data decoupling between layers, which consists of a lot of
copying from one layer's object structure to the next layer's, you can
easily end up with a dozen objects into which the data is copied
without need.

It all comes down to the relation between the data we're talking about
and the system resources. For a 16F, allocating (and copying) an
80-byte buffer per layer may already be onerous, just as allocating
(and copying) a 2GB data buffer per layer is for a 32GB server system.

Gerhard

-- 
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist
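A minimal sketch in C of the GET_BYTE arrangement Olin describes: the comm layer hides packet boundaries and hands the layer above a plain byte stream, refilling from the next validated payload as each one is exhausted. All names here (comm_get_byte, next_packet) are hypothetical, and the packet source is simulated with a static table standing in for a real UART/framing layer.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 16

/* Stand-in for the UART/framing layer.  In a real system, next_packet()
 * would block until a complete, checksum-verified frame has arrived,
 * after flow control and escape-code handling. */
static const uint8_t *packets[] = {
    (const uint8_t *)"HELLO", (const uint8_t *)"WORLD", NULL
};
static size_t pkt_index = 0;

static uint8_t payload[MAX_PAYLOAD];
static size_t payload_len = 0;   /* bytes in the current payload */
static size_t payload_pos = 0;   /* next byte to hand upward     */

/* Fetch the next validated packet's payload; returns 0 at end of stream. */
static int next_packet(void)
{
    const uint8_t *p = packets[pkt_index];
    if (p == NULL)
        return 0;
    pkt_index++;
    payload_len = strlen((const char *)p);
    memcpy(payload, p, payload_len);
    payload_pos = 0;
    return 1;
}

/* The GET_BYTE routine: the caller never sees packet boundaries.  When
 * one payload is exhausted we silently switch to the next.  Returns -1
 * when no more data is available. */
int comm_get_byte(void)
{
    while (payload_pos >= payload_len) {
        if (!next_packet())
            return -1;
    }
    return payload[payload_pos++];
}
```

The layer above simply calls comm_get_byte() in a loop; nothing about packets, checksums, or framing leaks upward, which is the decoupling Vitaliy wants without a per-layer copy of each whole packet.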