Mike Harrison wrote:
> ... err a stop bit shouldn't be able to be too *long* - lengthening it
> should just add idle time! Maybe if you send a lot of data
> back-to-back, a sub-bitlength between characters may prevent the
> receiver re-syncing properly? Obviously, the stopbit time can be too
> short, especially with slow receivers!

In fact, if you are designing an async receiver (hardware or software)
that might possibly be used with a modem, you should be aware that
modern modems may shave off a fraction of a stop bit in order to handle
a speed mismatch if the modem at the other end is running slightly fast.
If I remember correctly, this is part of the V.14 standard (along with
an even more esoteric thing called stop bit deletion), and the modem is
allowed to shave the stop bit to 15/16 of the standard length.

As Mike points out, a device that is supposed to receive correctly with
a single stop bit should work with *any* stop bit length of at least one
bit-time. It shouldn't matter whether the transmitter uses 1.00 stop
bits, or 1.02, or 1.15. And good engineering practice suggests that it
should be willing to accept slightly less than 1.00 (even if you aren't
trying to support modems).

I think that the traditional stand-alone UART chips (AY-5-1013 et al.)
required the stop bit to be at least 9/16 of a bit time.

Cheers,
Eric
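
P.S. In case it helps anyone writing a software UART, here's a rough
sketch of what "accept a short stop bit" looks like in a bit-banged
receiver. The names (rx_pin, delay_ticks) and the 16-tick bit cell are
mine, not from any particular implementation; the point is just that
the receiver samples the stop bit once, mid-cell, and then goes straight
back to hunting for the next start edge instead of waiting out a full
bit time.

/* Minimal bit-banged async receiver, 8N1, LSB first.
 * Assumptions: RX idles high, rx_pin() returns the current line
 * level, delay_ticks() busy-waits, and BIT_TICKS ticks = one bit.
 */

#define BIT_TICKS  16              /* ticks per bit cell            */
#define HALF_BIT   (BIT_TICKS / 2)

extern int  rx_pin(void);          /* current RX level, 0 or 1      */
extern void delay_ticks(int n);    /* busy-wait for n ticks         */

/* Returns the received byte, or -1 on a framing error. */
int uart_receive(void)
{
    int i, data = 0;

    while (rx_pin() != 0)          /* hunt for the start edge       */
        ;

    delay_ticks(HALF_BIT);         /* middle of the start bit       */
    if (rx_pin() != 0)
        return -1;                 /* glitch, not a real start bit  */

    for (i = 0; i < 8; i++) {      /* sample each data bit mid-cell */
        delay_ticks(BIT_TICKS);
        if (rx_pin())
            data |= 1 << i;
    }

    delay_ticks(BIT_TICKS);        /* middle of the stop bit        */
    if (rx_pin() == 0)
        return -1;                 /* framing error: stop bit low   */

    /* We return here, half-way through the stop bit, and the caller
     * immediately resumes hunting for a start edge.  So a stop bit
     * shaved to 15/16 (or considerably less) of a bit time still
     * frames correctly, and any extra stop-bit length just looks
     * like idle time.                                              */
    return data;
}

A hardware UART with a 16x clock does essentially the same thing, which
is where the "at least 9/16 of a bit" figure comes from.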