On Jul 4, 2010, at 2:58 PM, Isaac Marino Bavaresco wrote:

> For ASCII protocols it is easy to have start and end markers that can't
> appear inside the packet itself, but ASCII protocols are less efficient
> than binary protocols.
>
> For machine-to-machine communication I prefer binary protocols, with a
> start marker, length, payload and a checksum.

Protocols with explicit "special" characters usually have a mechanism for "escaping" them if they appear inside the packet itself. But you can also just allow the "end" character to occur inside the packet. If it's not ACTUALLY the end (as determined by the length, etc.), you can still detect that by other mechanisms at either ISR or process level, and you're still cutting down the overall effort involved.

Packet formats with checksums as the last bytes are rather sucky :-( For example, one way to speed up PPP in some complex network topologies (e.g. over an X.25 PAD intermediate network) is to send a "return" character after each packet. It comes after the checksum has been parsed at the receiver, so it doesn't go in the packet, and it comes during a state when a "start" character is expected, making it particularly easy to ignore (you don't have a partially complete input packet that you have to figure out what to do with).

> I stipulate a maximum delay between any two bytes of a packet

I hate protocols that do this. It makes them very unreliable to "tunnel" across arbitrary comm technology (e.g. network protocol to dialout modem pool to public network to async server to destination). "Delays" are not preserved. The less a protocol can depend on timing, the better. (You're still free to use timing to make things more efficient, of course. After a delay between two bytes is a fine time to wake up process-level code.)

BillW
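
As a concrete illustration of the escaping being discussed, here is a minimal receive-side sketch in C. The frame layout (start marker, length, payload, one-byte checksum) follows Isaac's description; the 0x7E/0x7D/XOR-0x20 escape convention is borrowed from PPP's async framing, and handle_frame() is a hypothetical upper-layer hook, so treat this as an outline rather than a drop-in implementation.

#include <stdint.h>

/* Hypothetical binary framing along the lines discussed above:
 * START, LEN, LEN payload bytes, then a one-byte checksum (8-bit sum
 * of the payload).  START and ESC occurring inside the frame are
 * escaped PPP-style as ESC, byte XOR 0x20, so an unescaped START can
 * always be trusted to mean "a new frame begins here". */
#define F_START 0x7E
#define F_ESC   0x7D
#define F_XOR   0x20
#define MAX_LEN 64

enum rx_state { WAIT_START, WAIT_LEN, IN_PAYLOAD, WAIT_SUM };

static enum rx_state st = WAIT_START;
static uint8_t buf[MAX_LEN], len, pos, sum, esc;

/* Hypothetical upper-layer hook, called once per valid frame. */
extern void handle_frame(const uint8_t *payload, uint8_t len);

/* Feed one byte at a time, e.g. straight from the UART ISR. */
void rx_byte(uint8_t c)
{
    if (c == F_START) {         /* unescaped START: resynchronize */
        st = WAIT_LEN;
        esc = 0;
        return;
    }
    if (st == WAIT_START)       /* bytes between frames (a trailing  */
        return;                 /* CR, line noise) are simply dropped */

    if (c == F_ESC && !esc) {
        esc = 1;                /* next byte was XORed by the sender */
        return;
    }
    if (esc) {
        c ^= F_XOR;
        esc = 0;
    }

    switch (st) {
    case WAIT_LEN:
        if (c == 0 || c > MAX_LEN) { st = WAIT_START; break; }
        len = c;
        pos = 0;
        sum = 0;
        st = IN_PAYLOAD;
        break;
    case IN_PAYLOAD:
        sum += c;
        buf[pos++] = c;
        if (pos == len)
            st = WAIT_SUM;
        break;
    case WAIT_SUM:
        if (c == sum)           /* checksum matches: hand it up */
            handle_frame(buf, len);
        st = WAIT_START;        /* good or bad, wait for the next frame */
        break;
    default:
        st = WAIT_START;
        break;
    }
}

Because anything arriving while the state machine sits in WAIT_START is discarded, extra bytes between frames cost nothing, which is one reason a raw start marker (rather than an inter-byte timeout) makes resynchronization cheap.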