At 13:34 04/13/99 -0500, Dave VanHorn wrote:

>The break signal is, as far as I can tell, a hack. It induces a framing
>error at the other end, with a byte value of 00, but there is no firm
>definition of how long a break should be. I've seen "specs" for break of
>100mS, 11 bit times, several seconds.....

i guess the usual minimum is around 1.5 byte times, so that it can be
distinguished from a normal framing error; some uarts have something like
this built in. but to be on the safe side, many application protocols
define a longer break. a shorter break is still a break condition on the
line, it just doesn't trigger the "break state" of the application
protocol.

one has to realize that even with a simple serial link you have at least
three protocol layers: the physical layer, which defines the signals on
the lines and when they are considered high or low; the data link layer,
which defines the start, data, parity and stop bits; and the application
layer, which defines things such as the minimum number of data bits,
minimum break times and so on, and which tells you what the lower layers
have to provide in order to be "transparent" to it.

for example, if an application layer protocol requires 7 data bits, every
data link layer that provides 7 or more bits is transparent (with respect
to this requirement). and the application layer doesn't care whether you
use an additional parity bit or a better line driver to get down to the
maximum allowed error rate.

sometimes this scheme gets confusing, because you may find application
protocols that require parity bits because they handle them themselves.
but to the data link layer these are just additional data bits.

ge
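
to make the ~1.5 byte time idea a bit more concrete, here is a rough
sketch in C of how a receiver could tell a break from an ordinary framing
error. all the hardware access names (uart_status, uart_data,
rx_line_is_low, delay_bit_times) and the FRAMING_ERROR bit are made-up
placeholders for whatever your uart actually provides, not any particular
part's registers:

#include <stdbool.h>
#include <stdint.h>

/* hypothetical hardware access; replace with your uart's registers */
extern uint8_t uart_status(void);          /* assumed: status flags         */
extern uint8_t uart_data(void);            /* assumed: received byte        */
extern bool    rx_line_is_low(void);       /* assumed: raw state of RX pin  */
extern void    delay_bit_times(uint8_t n); /* assumed: busy-wait n bit times */

#define FRAMING_ERROR 0x04                 /* assumed status bit            */

/*
 * returns true if the framing error we just saw looks like a break:
 * received value is 0x00 and the line is still low roughly half a byte
 * time after where the stop bit should have been, which adds up to about
 * 1.5 byte times of continuous low level.
 */
bool looks_like_break(void)
{
    if (!(uart_status() & FRAMING_ERROR))
        return false;                      /* no framing error at all       */

    if (uart_data() != 0x00)
        return false;                      /* garbled byte, not a break     */

    delay_bit_times(5);                    /* ~half a byte past the stop bit */

    return rx_line_is_low();               /* still low -> treat as break   */
}

an application protocol that wants a longer break would just keep timing
the low level after this point and only enter its "break state" once its
own minimum has been met.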