Gerhard Fiedler wrote:

>>> While it's true that you can process data "on the fly", I don't
>>> like the coupling it creates. I like it when the responsibilities
>>> of each module are clearly defined. Processing data is not the
>>> comm module's responsibility; its job is to assemble a packet and
>>> pass it on to the next level.
>>
>> Ah, you're still stuck in the big-system mindset. Just because the
>> data is handled as a stream doesn't mean protocol layering and
>> module boundaries need to be broken. Consider the comm module's job
>> as presenting a byte stream to the next layer up instead of a
>> packet. Let's say data is arriving via a UART. The comm module
>> might handle flow control, escape codes, validating packets, and
>> extracting the payload from those packets. It presents a GET_BYTE
>> routine to the next level up. As one packet is exhausted, it
>> automatically switches to the payload from the next.
>
> This is actually not only a small-system concern. If you're dealing
> with lots of data, like gigabytes, each layer of buffering costs a
> lot, even on today's typical server machines. If you do the typical
> small-data decoupling between layers, which consists of a lot of
> copying of data from one layer's object structure to the next
> layer's object structure, you can easily end up with a dozen objects
> where the data is copied without need.

Have you guys ever heard of passing data by reference?

Vitaliy
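
For concreteness, here is a minimal C sketch of the GET_BYTE scheme
described in the quoted text. The names (comm_get_packet, PAYLOAD_MAX,
the exact get_byte signature) are invented for illustration; the
original post doesn't specify an interface, only that the comm module
validates packets and hands the next layer a byte stream.

    #include <stdint.h>
    #include <stddef.h>

    #define PAYLOAD_MAX 64   /* assumed maximum payload size */

    static uint8_t payload[PAYLOAD_MAX]; /* payload of current packet */
    static size_t  payload_len;          /* bytes in current payload  */
    static size_t  payload_pos;          /* next byte to hand out     */

    /* Assumed to be provided by the comm module: blocks until a
       packet arrives, does flow control, escape decoding, and
       validation, then copies the payload into BUF and returns its
       length. This interface is hypothetical. */
    extern size_t comm_get_packet(uint8_t *buf, size_t max);

    /* GET_BYTE as described above: present the packet stream to the
       next layer up as a plain byte stream. When one payload is
       exhausted, transparently fetch the next packet. */
    uint8_t get_byte(void)
    {
        while (payload_pos >= payload_len) {
            payload_len = comm_get_packet(payload, PAYLOAD_MAX);
            payload_pos = 0;
        }
        return payload[payload_pos++];
    }

The next layer up never sees packet boundaries; it just calls
get_byte() and the packet bookkeeping stays inside the comm module.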
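
And a sketch of the copy-versus-reference contrast Gerhard and Vitaliy
are getting at, again with invented names and types; neither variant
is from the original posts.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Copying style: every layer duplicates the payload into its own
       structure. With a dozen layers, the same bytes get copied a
       dozen times. */
    struct layer_msg {
        uint8_t data[1500];
        size_t  len;
    };

    void layer_deliver_copy(struct layer_msg *dst,
                            const uint8_t *src, size_t len)
    {
        memcpy(dst->data, src, len);  /* one more copy of the same bytes */
        dst->len = len;
    }

    /* Reference style: layers share one buffer and pass a pointer.
       No copy, but the buffer's lifetime must now be managed across
       layers, which is the price of the coupling. */
    struct buf_ref {
        const uint8_t *data;  /* points into the original receive buffer */
        size_t         len;
    };

    void layer_deliver_ref(struct buf_ref *dst,
                           const uint8_t *src, size_t len)
    {
        dst->data = src;
        dst->len  = len;
    }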