On 4/7/2010 20:45, William "Chops" Westfield wrote:
> On Jul 4, 2010, at 2:58 PM, Isaac Marino Bavaresco wrote:
>
>> For ASCII protocols it is easy to have start and end markers that can't
>> appear inside the packet itself, but ASCII protocols are less efficient
>> than binary protocols.
>>
>> For machine-to-machine communication I prefer binary protocols, with a
>> start marker, length, payload and a checksum.
>
> Protocols with explicit "special" characters usually have a mechanism
> for "escaping" them if they appear inside the packet itself.

Escaping is bad when the special character occurs many times in the payload: every occurrence must be escaped, which inflates the packet and makes the worst-case length unpredictable.

> But you can also just allow the "end" character to occur inside the
> packet. If it's not ACTUALLY the end (as determined by length, etc),

This is exactly what I do, except I don't use an "end" character; I use a "start" character and a length (usually one or two bytes, depending on the maximum allowed packet length).

The "start" character may appear anywhere, as long as the receiver stays synchronized with the data flow. Problems arise only if a "start" character is lost while a byte with the same value is present inside the packet. Most probably that packet will simply be rejected, but there is a small chance that, once the receiver gets out of sync, bytes equal to the "start" character keep arriving inside subsequent packets and it cannot resynchronize without a pause in the transmission long enough to trigger the timeout.

> you can still detect that by other mechanisms at either ISR or process
> level, and you're still cutting down the overall effort involved.
> Packet formats with checksums as the last bytes are rather sucky :-(

The checksum is there to ensure the received data is correct, so the receiver won't act on bad data.

> For example, one way to speed up PPP in some complex network
> topologies (eg over X.25 PAD intermediate network) is to send a
> "return" character after each packet. It comes after the checksum has
> been parsed at the receiver, so it doesn't go in the packet, and it
> comes during a state when a "start" character is expected, making it
> particularly easy to ignore (you don't have a partially complete input
> packet that you have to figure out what to do with.)
>
>> I stipulate a maximum delay between any two bytes of a packet
>
> I hate protocols that do this. It makes them very unreliable to
> "tunnel" across arbitrary comm technology (eg network protocol to
> dialout modem pool to public network to async server to destination.)
> "delays" are not preserved. The less a protocol can depend on timing,
> the better. (You're still free to use timing to make things more
> efficient, of course. After a delay between two bytes is a fine time
> to wake up process level code.)
>
> BillW

I use this timeout only for point-to-point protocols (direct serial connections). For other media (Ethernet, etc.) it is better to rely on their own packetization. Even with the timeout the protocol can survive tunneling: the timeout is only there to help resynchronize the receiver; it is not mandatory and won't be needed if the error rate is very low.

Isaac
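
P.S. To make the framing concrete, here is a minimal sketch of the kind
of receive state machine described above (start byte, one-byte length,
payload, 8-bit additive checksum). The start value, buffer size and
checksum scheme are illustrative assumptions, not taken from real code:

#include <stdint.h>

#define START_BYTE   0x7E   /* assumed start marker value          */
#define MAX_PAYLOAD  64     /* assumed maximum payload length      */

enum rx_state { WAIT_START, WAIT_LEN, WAIT_DATA, WAIT_SUM };

static enum rx_state st = WAIT_START;
static uint8_t buf[MAX_PAYLOAD];    /* payload accumulates here    */
static uint8_t len, pos, sum;

/* Feed one received byte (e.g. from the UART ISR).  Returns 1 when a
   complete packet with a good checksum is in buf[0..len-1].  Here the
   sender is assumed to append the two's complement of the sum of the
   length and payload bytes, so a good packet sums to zero.           */
int rx_byte(uint8_t b)
{
    switch (st) {
    case WAIT_START:                  /* hunt for the start marker  */
        if (b == START_BYTE)
            st = WAIT_LEN;
        break;

    case WAIT_LEN:
        if (b == 0 || b > MAX_PAYLOAD) {
            st = WAIT_START;          /* implausible length: resync */
        } else {
            len = b;
            pos = 0;
            sum = b;                  /* checksum covers the length */
            st  = WAIT_DATA;
        }
        break;

    case WAIT_DATA:
        buf[pos++] = b;
        sum += b;
        if (pos == len)
            st = WAIT_SUM;
        break;

    case WAIT_SUM:
        st = WAIT_START;              /* ready for the next packet  */
        if ((uint8_t)(sum + b) == 0)
            return 1;                 /* checksum OK: packet valid  */
        break;                        /* bad checksum: drop packet  */
    }
    return 0;
}

/* The inter-byte timeout mentioned above needs nothing more than a
   timer that does "st = WAIT_START;" when it expires, which is what
   lets the receiver resynchronize after a lost "start" character.   */

Note that a "start" value appearing inside the length or payload causes
no trouble here, because the byte is only interpreted as a marker while
the machine is in the WAIT_START state.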