Hello Eric,

We have asynchronous communication. That means that both systems involved have their own clock and thus their own bit rate. They synchronize on the leading edge of the start bit. That is the only guaranteed transition in the frame: from mark ('idle' or stop bit) to space (start bit). This transition may be from 'idle' to the start bit, or from a stop bit directly into the next start bit. Stop bits do nothing else than assure the line is at the 'idle' level for a minimum time, so adding stop bits or stretching them has no impact on the receiver.

The RS232 and/or V24 standards allow a tolerance on the bit rate of - I recall from memory - 2%. At 9600 bps that means that after 10 bits (start, 8 data, stop) the two UARTs may differ by 2% + 2% = 4% of 10 bits, that is 0.4 bit. (Things are worse in RS485 protocols where 9 data bits are used in a frame: 8 data bits plus an address flag.) So you have to expect the leading edge of the next start bit from just after the middle of the stop bit; to be more exact, only about 10 microseconds after it. (The receiver samples the middle of the stop bit at 9.5 of its own bit times, while a transmitter running the full 4% fast finishes its frame after only about 9.6 of them, roughly 0.1 bit or 10 microseconds later.) You'd better not miss this leading edge, because everything that follows is referenced to it...

The problem is, of course, that you have to detect the stop bit (in a simple system, no DSP, just sample at half bit time) to be able to generate a 'framing error', and then within those roughly 10 microseconds detect the leading edge of the following start bit.

There is more to say and perhaps still a lot to explain about this; let me know if so, but this is the heart of the matter.

Oh yes, one important thing: once you have your byte-by-byte communication working, you'll find another challenge. You still have two systems with different bit rates, and at a higher level that means a different byte rate: data may come in faster than you can send it out! A higher-level protocol has to deal with that too; that is what buffers were invented for.

Good luck, I am interested in this!

Eric Smith wrote:
> Mike Harrison wrote:
> > ... err a stop bit shouldn't be able to be too *long* - lengthening it
> > should just add idle time! Maybe if you send a lot of data
> > back-to-back, a sub-bitlength between characters may prevent the
> > receiver re-syncing properly? Obviously, the stopbit time can be too
> > short, especially with slow receivers!
>
> In fact, if you are designing an async receiver (hardware or software) that
> might possibly be used with a modem, you should be aware that modern modems
> may shave off a fraction of a stop bit in order to handle a speed mismatch if
> the modem at the other end is running slightly fast. If I remember correctly,
> this is part of the V.14 standard (along with an even more esoteric thing
> called stop bit deletion), and the modem is allowed to shave the stop bit to
> 15/16 of the standard length.
>
> As Mike points out, a device that is supposed to receive correctly with a
> single stop bit should work with *any* stop bit length of at least one
> bit-time. It shouldn't matter if the transmitter uses 1.00 stop bits, or
> 1.02, or 1.15. And good engineering practice suggests that it should be
> willing to accept slightly less than 1.00 (even if you aren't trying to
> support modems).
>
> I think that the traditional stand-alone UART chips (AY-3-8500 et al.)
> required the stop bit to be at least 9/16 of a bit time.
>
> Cheers,
> Eric

--
Regards, Wim
------------------------------------------------
Wim van Bemmel, Singel 213
3311 KR  Dordrecht   Netherlands
mailto:bemspan@xs4all.nl
... Life is about Interfacing .....
------------------------------------------------
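
For illustration, a minimal sketch in C of the half-bit sampling scheme described
above, for a simple polled software receiver at 9600 bps. rx_level() and delay_us()
are hypothetical helpers you would provide for your own hardware (rx_level() returns
the logic level of the RX pin, 1 = mark/idle, 0 = space); a real implementation
would more likely run from a timer interrupt instead of busy-waiting, but the
sampling points are the same.

    /* Minimal polled software-UART receive sketch, 9600 bps,
       8 data bits, 1 stop bit.  rx_level() and delay_us() are
       assumed, hardware-specific helpers.                        */

    #include <stdint.h>

    #define BIT_US       104   /* one bit time at 9600 bps, ~104 us */
    #define HALF_BIT_US   52   /* half a bit time                   */

    extern int  rx_level(void);          /* sample RX pin: 1 = mark, 0 = space */
    extern void delay_us(uint16_t us);   /* busy-wait for 'us' microseconds    */

    /* Receive one frame: start bit, 8 data bits (LSB first), stop bit.
       Returns 0..255 on success, -1 on framing error.                 */
    int uart_rx_byte(void)
    {
        uint8_t data = 0;

        /* Wait for the leading edge of the start bit (mark -> space).
           This edge is the only timing reference for the whole frame. */
        while (rx_level() != 0)
            ;

        /* Move to the middle of the start bit and confirm it is still
           space; otherwise it was only a glitch.                      */
        delay_us(HALF_BIT_US);
        if (rx_level() != 0)
            return -1;

        /* Sample each data bit at its middle, one bit time apart.     */
        for (uint8_t i = 0; i < 8; i++) {
            delay_us(BIT_US);
            if (rx_level())
                data |= (uint8_t)(1u << i);
        }

        /* Sample the middle of the stop bit; it must be mark,
           otherwise flag a framing error.                             */
        delay_us(BIT_US);
        if (rx_level() == 0)
            return -1;

        return data;
    }

Note that the routine returns right after sampling the middle of the stop bit, so
the caller can go back to watching for the next start bit edge immediately; as
explained above, that edge may arrive only about 10 microseconds later if the
transmitter runs a few percent fast.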