Olin Lathrop wrote:
>> You never work on projects where data comes in while the processor is
>> busy doing something else?
>
> Sure, but that's different from deliberately buffering up an app-visible
> chunk to then go off and work on it. You can take care of bursty data or
> processing with a FIFO, but from the application's point of view it's
> still just a stream of data.

Many of our projects would not be able to work, period, without a large
UART buffer. You get a command via UART, you have to go service it, and in
the meantime there's another command arriving from the host. So dynamic or
not, you have to use a buffer.

There are other instances where we process incoming data on the fly, when
it is easier to do it that way (e.g., calculating a checksum) or there's
no other choice.

> A common method on a big system for reading a text protocol is to buffer
> up a line at a time, then parse it. However, when you really look at the
> parsing algorithm, most of it is sequentially reading the characters in
> the line. With a little thought, you can usually eliminate or greatly
> reduce the buffer by processing the stream a byte at a time in the first
> place.

There are two problems I see with this:

1. It is less straightforward.
2. You end up with non-reusable code.

>> While it's true that you can process data "on the fly", I don't like
>> the coupling it creates. I like it when the responsibilities of each
>> module are clearly defined. Processing data is not the comm module's
>> responsibility; its job is to assemble a packet and pass it on to the
>> next level.
>
> Ah, you're still stuck in the big system mindset.

Yeah, I too feel sometimes like we're two deaf men shouting at each other.

> Just because the data is handled as a stream doesn't mean protocol
> layering and module boundaries need to be broken. Consider the comm
> module's job as presenting a byte stream to the next layer up instead of
> a packet. Let's say data is arriving via a UART. The comm module might
> handle flow control, escape codes, validating packets, and extracting
> the payload in those packets. It presents a GET_BYTE routine to the next
> level up. As one packet is exhausted, it automatically switches to the
> payload from the next.

What if it's a CAN message?

>> Bottom line, what you say makes sense for projects with severe memory
>> constraints. Once you stop counting bytes, dynamic memory allocation
>> becomes just another (very powerful) tool in a programmer's toolbox.
>
> I think we agree then. Part of the distinction between big and small
> systems is that you have to count bytes, or at least consider memory
> usage, on a small system. On big systems there is an OS, memory manager,
> and other abstractions, so you can largely allocate Mbytes or more
> without much consideration.

What I said applies as much to the PIC24H as it does to MacOS. It would
still apply to the higher-end 18Fs, but probably not to the lower-end
chips.

Vitaliy
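
To make the UART buffering point above concrete, here is a minimal sketch
of an interrupt-fed receive FIFO in C. The names (uart_rx_isr, uart_get_byte,
RX_BUF_SIZE) are assumptions for illustration, and the part-specific details
of hooking the interrupt vector and reading the receive register are left
out; the idea is simply that the ISR keeps accepting bytes from the host
while the application drains them at its own pace.

/* Minimal UART receive FIFO sketch (names are assumptions, not from the
 * thread).  The ISR deposits bytes, the main loop drains them later, so a
 * second command can arrive while the first one is being serviced.        */

#include <stdbool.h>
#include <stdint.h>

#define RX_BUF_SIZE 64u              /* power of two keeps the math cheap  */

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head;     /* ISR writes here                    */
static volatile uint8_t rx_tail;     /* application reads here             */

/* Called from the UART receive interrupt with the byte just read from the
 * part-specific receive register.                                         */
void uart_rx_isr(uint8_t byte)
{
    uint8_t next = (uint8_t)((rx_head + 1u) & (RX_BUF_SIZE - 1u));
    if (next != rx_tail) {           /* drop the byte if the FIFO is full  */
        rx_buf[rx_head] = byte;
        rx_head = next;
    }
}

/* Application side: non-blocking fetch of one buffered byte.              */
bool uart_get_byte(uint8_t *out)
{
    if (rx_tail == rx_head)
        return false;                /* nothing buffered                   */
    *out = rx_buf[rx_tail];
    rx_tail = (uint8_t)((rx_tail + 1u) & (RX_BUF_SIZE - 1u));
    return true;
}

With a single producer (the ISR) and a single consumer (the main loop), no
locking is needed beyond the volatile indices, and the buffer size is fixed
at build time, so nothing here requires dynamic allocation.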
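
The "process the stream a byte at a time" approach Olin describes can be
sketched as a small state machine. The frame layout below (command byte,
length byte, payload, 8-bit additive checksum) and the handler names are
invented for the example; the point is that a few bytes of state replace a
whole line or packet buffer, and the checksum is computed on the fly as
each byte arrives.

/* Byte-at-a-time parser sketch for a hypothetical frame:
 *   [cmd][len][len payload bytes][checksum = 8-bit sum of all prior bytes]
 * Nothing is buffered except the few fields the handlers actually need.   */

#include <stdbool.h>
#include <stdint.h>

typedef enum { ST_CMD, ST_LEN, ST_PAYLOAD, ST_CKSUM } parse_state_t;

static parse_state_t state = ST_CMD;
static uint8_t cmd, remaining, sum;

extern void handle_payload_byte(uint8_t cmd, uint8_t byte); /* app hook    */
extern void handle_frame_done(uint8_t cmd, bool cksum_ok);  /* app hook    */

/* Feed every received byte through this; no line buffer is required.      */
void parse_byte(uint8_t b)
{
    switch (state) {
    case ST_CMD:
        cmd = b;  sum = b;  state = ST_LEN;
        break;
    case ST_LEN:
        remaining = b;  sum += b;
        state = (remaining != 0u) ? ST_PAYLOAD : ST_CKSUM;
        break;
    case ST_PAYLOAD:
        sum += b;
        handle_payload_byte(cmd, b);        /* consume it on the fly       */
        if (--remaining == 0u)
            state = ST_CKSUM;
        break;
    case ST_CKSUM:
        handle_frame_done(cmd, sum == b);   /* running checksum vs. frame  */
        state = ST_CMD;
        break;
    }
}

It also shows both sides of the argument in this thread: there is no line
buffer, but the payload handler is called mid-stream, which is exactly the
coupling and reduced reusability objected to above.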
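
Olin's GET_BYTE layering might look something like the following sketch.
The framing (0x7E start flag, 0x7D escape byte, one-byte length) is invented
for the example, and packet validation and flow control are omitted for
brevity; the point is that the layer above calls one routine and sees only
a continuous payload byte stream, so module boundaries stay intact.

/* Sketch of the GET_BYTE layering idea: the comm layer strips framing and
 * unescapes internally (validation omitted here) and exposes only a
 * payload byte stream to the layer above.  Framing is an assumption.      */

#include <stdint.h>

#define FLAG 0x7Eu   /* start-of-packet marker                             */
#define ESC  0x7Du   /* next byte is XORed with 0x20                       */

/* Lower layer: blocks until the link has a raw byte.                      */
extern uint8_t link_get_raw_byte(void);

static uint8_t payload_left;         /* payload bytes left in this packet  */

/* Read one link byte, undoing byte stuffing.                              */
static uint8_t get_unstuffed_byte(void)
{
    uint8_t b = link_get_raw_byte();
    return (b == ESC) ? (uint8_t)(link_get_raw_byte() ^ 0x20u) : b;
}

/* The only call the next layer up makes: a continuous payload byte stream.
 * When one packet is exhausted it quietly syncs to the next one, so the
 * upper layer never sees the framing at all.                              */
uint8_t comm_get_byte(void)
{
    while (payload_left == 0u) {
        while (link_get_raw_byte() != FLAG)
            ;                         /* hunt for the start flag           */
        payload_left = get_unstuffed_byte();   /* length field             */
    }
    payload_left--;
    return get_unstuffed_byte();
}

Note the trade-off: no payload is stored in the comm layer at all, but a
checksum check would require either a small per-packet buffer or handing
bytes up before validation, which is where the two positions in this
thread start to pull apart.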