Please don't take this as criticism. It's simply meant to point out what happens when one tries to fit a square peg into a round hole. One has to look at the whole -system- in order to ensure a successful design. Assume NOTHING!

Bob Ammerman wrote:

> Here is a real-life story about how PC serial port baud rates are _not_
> exact!

Of course they're not, especially when computed from 'common' crystal rates like 2, 4, 8, or 10 MHz. 9600 baud x 16 x 13 = 1.9968 MHz, a 0.16% error. (Now you know why many uC's have a /13 choice for the baud rate divisor.)

> I had an application that required receiving an async serial data stream of
> an unusual structure: 1 start bit + 240 data bits + 2 stop bits. Obviously
> no hardware UART in the world is going to handle that. Also obviously,

Actually, it COULD if the clock were derived from an edge-based DPLL. Floppy disk controllers commonly extract 10 times that many bits from an incredibly sloppy data stream. The key is that the system be DESIGNED to recover that kind of bit stream (e.g., send 250 bits and use the first 10 to get the PLL sync'd, or use the frame time to compute a 'matched' bit rate clock, as you ultimately did).

A typical UART is designed to resynchronize itself on -every- character (start bit). That's why it's called a universal -asynchronous- receiver transmitter. You will note that the synchronous implementations (USARTs) use a PLL clock to ensure bit rate tracking, and a coding scheme that supports it (no really long runs of one bit level allowed).

> this
> will be about 30 times as sensitive to timing error as 8N1 communications.

Of course. Square peg, round hole. A UART wasn't designed to handle really long bit streams that don't provide for synchronization, software based or not.

> Well, I built a bit-banged UART based on a PC platform (actually an embedded
> PC product). I needed a high-accuracy clock for sampling the input signal
> (which I brought in through the CTS signal on COM1).
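The /13 arithmetic above is easy to check for any crystal. A small sketch (the helper name is mine, not from any real UART driver; it assumes the usual 16x oversampling clock):

```python
def uart_baud_error(f_osc_hz, target_baud, oversample=16):
    """Pick the nearest integer divisor and report the resulting baud error."""
    divisor = round(f_osc_hz / (oversample * target_baud))
    actual_baud = f_osc_hz / (oversample * divisor)
    error_pct = 100.0 * (actual_baud - target_baud) / target_baud
    return divisor, actual_baud, error_pct

# 2 MHz clock, 9600 baud: the nearest divisor is 13, giving ~9615.4 baud,
# i.e. the ~0.16% error mentioned above.
div, actual, err = uart_baud_error(2_000_000, 9600)
```

The same function shows why 1.8432 MHz (the classic PC UART crystal) is popular: it divides down to 9600 exactly.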
> Instruction cycle
> counting was out of the question (with caches, poor documentation, etc.).
>
> After a bit of thinking, I realized that if I programmed the UART to operate
> at 9600 baud, 6 data bits, 1 start bit and 1 stop bit, it would be
> ready for a new character every 8/9600 of a second, or 1/1200 of a second
> (my desired baud rate). So I just kept the transmitter supplied with dummy
> characters (which I just let fall off the end of the TX wire), and every time
> it was ready for a new one, I sampled the input on the CTS signal.

So you basically undersampled your data stream, ignoring the Shannon (Nyquist) criterion. You sampled your 'information' at the same rate it was supplied, not at twice that rate as his law requires. That's one reason why UARTs commonly use 16x sampling.

What I don't understand is, if you used an external 'high accuracy clock', why you would have a problem with differing clock rates. 'Accuracy' and 'precision' are two different measurements. One can have a highly precise WRONG clock rate, or one can have an 'accurate' rate, one whose error is known to some definable bound traceable to a standard. Engineers often use the terms interchangeably, but they are NOT the same where metrology is concerned. Basically, you may have a 'precision' of 1 part in 4096 (12-bit A/D) but an 'accuracy' of only 1 part in 256 (if you have a sloppy reference voltage).

> Well, everything worked pretty well on a desktop machine I started with, but
> when I tried using the real hardware I started getting all kinds of comms
> problems. I tried a bunch of different machines with varying levels of
> success.
>
> Well, to make a long story short, the problem is that the various PC's were
> using various clocks for the baud rate. In fact, it seems that common design
> practice these days is to find some 'close' frequency on the motherboard
> that can be divided down to something approaching the correct input
> frequency for the UART chip.
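Bob's 8-bit-times trick, and why a 0.2% clock error is fatal for a 243-bit frame, both come down to a few lines of arithmetic. A sketch (the half-bit-cell tolerance model is my assumption, not from the original post):

```python
# A 9600-baud frame of 1 start + 6 data + 1 stop bits lasts 8 bit times,
# so the transmitter wants a new dummy character every 1/1200 s: a 1200 Hz tick.
frame_bits = 1 + 6 + 1
tick_hz = 9600 / frame_bits          # 1200.0

# Sampling each incoming bit once per bit time, with no mid-frame resync,
# the sample point drifts n_bits * error bit-times by the last bit.
# To stay inside half a bit cell we need |error| < 1/(2 * n_bits).
def max_clock_error_pct(n_bits):
    return 100.0 / (2 * n_bits)

tol_8n1  = max_clock_error_pct(1 + 8 + 1)     # ~5% for a 10-bit 8N1 frame
tol_long = max_clock_error_pct(1 + 240 + 2)   # ~0.2% for the 243-bit frame
```

Under this model the long frame is roughly 24 times less tolerant than 8N1, in line with the "about 30 times as sensitive" estimate above, and a 0.2% motherboard clock error sits right at the failure threshold.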
> I saw errors of as much as 0.2%, which were
> more than sufficient to kill my communications.

Of course, since you didn't resynchronize your communications with any bit transitions that you found. Nor did you send a 'sync' stream to allow a PLL clock to lock up to the incoming bit rate (i.e., -design- for inaccurate clocking).

> I solved my problem by recoding, basing my timing on the system timer chip
> instead of the UART.

The system timer is NO better a clock than the one the UARTs use. Your scheme worked because it presumably gave you a much finer clock granularity, OR you were starting with matched clock rates, so the difference in bit rate was not a problem over the frame time.

> Bob Ammerman
> RAm Systems
> (contract developer of high performance, high function, low-level software)

Thank you for sharing your experience with us.

Robert

--
http://www.piclist.com hint: PICList Posts must start with ONE topic:
[PIC]: PIC only  [EE]: engineering  [OT]: off topic  [AD]: advertisements