Vitaliy wrote:
>> Most of the time you don't need to store a buffer full of input
>> data to process it.
>
> You never work on projects where data comes in while the processor is
> busy doing something else?

Sure, but that's different from deliberately buffering up an
app-visible chunk to then go off and work on it.  You can take care of
bursty data or processing with a FIFO, but from the application's
point of view it's still just a stream of data.

A common method on a big system for reading a text protocol is to
buffer up a line at a time, then parse it.  However, when you really
look at the parsing algorithm, most of it is sequentially reading the
characters in the line.  With a little thought, you can usually
eliminate or greatly reduce the buffer by processing the stream a byte
at a time in the first place.

> That's small project thinking! ;)

Yes, think small ;-)

> While it's true that you can process data "on the fly", I don't like
> the coupling it creates.  I like it when the responsibilities of
> each module are clearly defined.  Processing data is not the comm
> module's responsibility; its job is to assemble a packet and pass it
> on to the next level.

Ah, you're still stuck in the big-system mindset.  Just because the
data is handled as a stream doesn't mean protocol layering and module
boundaries need to be broken.  Consider the comm module's job as
presenting a byte stream to the next layer up instead of a packet.

Let's say data is arriving via a UART.  The comm module might handle
flow control, escape codes, validating packets, and extracting the
payload from those packets.  It presents a GET_BYTE routine to the
next level up.  As one packet is exhausted, it automatically switches
to the payload of the next.  In cases where packet boundaries need to
be app-visible, there needs to be a little more interaction with the
next level up, like being able to return an end-of-packet status from
GET_BYTE or something.
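To make that concrete, here is a minimal sketch (my own illustration, not anyone's actual code) of a comm module presenting a byte stream with an explicit end-of-packet status.  The status names, the GET_BYTE spelling as `get_byte`, and the two canned payloads standing in for the lower layer's output are all made up for the example; a real module would pull validated payloads from a FIFO fed by the UART interrupt routine.

```c
#include <stdint.h>

/* Illustrative sketch only: a comm layer hands the next level one
   payload byte at a time, and reports packet boundaries with a status
   code instead of exposing a packet buffer. */

typedef enum {
    GB_BYTE,        /* *b holds the next payload byte */
    GB_END_PACKET,  /* packet boundary; next call starts the next packet */
    GB_NO_DATA      /* nothing available right now */
} gb_stat_t;

/* Stand-in for payloads the lower layer has already de-escaped and
   checksum-validated. */
static const uint8_t pkt1[] = {'H', 'I'};
static const uint8_t pkt2[] = {'O', 'K'};
static const struct {const uint8_t *dat; uint8_t len;} pkts[] = {
    {pkt1, sizeof pkt1}, {pkt2, sizeof pkt2}};
#define NPKTS 2

static uint8_t pk = 0;        /* current packet number */
static uint8_t ix = 0;        /* index into its payload */
static uint8_t boundary = 0;  /* report end of packet on next call */

gb_stat_t get_byte (uint8_t *b)
{
    if (pk >= NPKTS) return GB_NO_DATA;    /* stream exhausted */
    if (boundary) {                        /* packet just ended */
        boundary = 0;
        pk++;                              /* switch to next payload */
        ix = 0;
        return GB_END_PACKET;
    }
    *b = pkts[pk].dat[ix++];
    if (ix >= pkts[pk].len) boundary = 1;  /* boundary after this byte */
    return GB_BYTE;
}
```

From the application's point of view this is still just a stream: it calls `get_byte` in a loop, and the packetizing stays hidden below the interface.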
The point is, handling data as a stream is usually just as doable with
good software structuring as handling it a block at a time, and often
more efficient for small systems.

>> Occasionally you may need to process a chunk of bytes together,
>> like when converting them from ASCII to integer, but that is still
>> smaller than a whole line or record or something.  Such small
>> temporary buffers can often be allocated on the stack since their
>> use is local to the current routine.
>
> Oh, c'mon Olin.  Don't tell me you've never looked at Microchip's
> USB or TCP stacks.

I'm not sure what your point is.  I looked at Microchip's USB routines
a long time ago.  After I stopped gagging, I wrote my own.

By the way, my USB routines are a great example of what I'm talking
about.  The main application interface routines are USB_GETn and
USB_PUTn.  Each transfers one byte at a time from/to endpoint N.  All
the packetizing, ping-pong hardware buffers, and triple software
buffers are handled inside the USB module.  There are a few additional
routines so that the app can control packetization to some extent, but
in most cases an app would just call the PUT and GET byte routines.

As for the Microchip TCP stack, yes, I've looked at it in far more
detail than I wanted to.  A bunch of years ago I grabbed the latest
stack at the time and integrated it into a system running an 18F6627
with an ENC28J60.  It's been a pain in the butt ever since.  I think
the basic problem is that Microchip didn't think out multiple
simultaneous TCP connections very well, and their aversion to
multi-tasking made everything more difficult.  Most of the code was
just plain badly written too.  I found and fixed probably half a dozen
bugs from the MAC layer to the TCP layer, but the system still wedges
occasionally.  Over the years I have patched and kluged workarounds in
addition to fixing the outright bugs, but it recently got to the point
where we just couldn't continue that way.
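Going back to the ASCII-to-integer case quoted above: even that often needs no buffer at all.  Here is a small sketch (illustrative names, not from any real codebase) that folds each decimal digit into an accumulator as it arrives from the stream, so the text of the number is never stored anywhere.

```c
#include <stdint.h>

/* Illustrative sketch only: streaming ASCII decimal to binary, one
   byte at a time, no line or number buffer. */

typedef struct {
    uint16_t val;   /* value accumulated so far */
    uint8_t  n;     /* number of digits seen */
} decacc_t;

/* Feed one character from the stream.  Returns 1 if it was a digit
   and got folded in, 0 if it ends the number; the terminating
   character is left for the caller to deal with. */
int dec_feed (decacc_t *a, uint8_t c)
{
    if (c < '0' || c > '9') return 0;
    a->val = a->val * 10 + (c - '0');   /* shift in the new digit */
    a->n++;
    return 1;
}
```

The caller just loops `dec_feed` over bytes from something like the GET_BYTE routine until a non-digit shows up.  Range checking and sign handling are left out here to keep the sketch short.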
I finally sat down and wrote my own driver for the ENC28J60.  That
presents a nice layer to send and receive ethernet packets, and it was
designed with my cooperative multi-tasking system from the start.  I
then added ARP, IP, and enough of ICMP to respond to PING.  Of course
I tested and beat on it each step of the way.  What's there is really
solid.  If we run even a basic PING test on the existing system, it
rolfs after a short time.  My system will occasionally drop a packet
if you give it too many too fast, but so far even our sadistic test
guy (that's a good thing for a test guy) hasn't been able to make it
wedge.  The TCP layer is next, which is really not that big a deal
given the IP layer support that is already working and tested.

> Bottom line, what you say makes sense for projects with severe
> memory constraints.  Once you stop counting bytes, dynamic memory
> allocation becomes just another (very powerful) tool in a
> programmer's toolbox.

I think we agree then.  Part of the distinction between big and small
systems is that you have to count bytes, or at least consider memory
usage, on a small system.  On big systems there is an OS, a memory
manager, and other abstractions, so you can largely allocate Mbytes or
more without much consideration.

********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.
--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist