"M. Adam Davis" wrote:
> To save on the cost of having an extra clock wire for certain portions
> of the digital transmission, the last bit of each byte is toggled,
> regardless of the actual value of that bit from the A/D conversion.
> This enables easy clock recovery for the phone company, and one less
> dedicated wire for the clocking circuit.
> Of course, the 56k calculation and bit about the phone companies
> throwing away the last bit for clocking purposes is out of a DSP book.

Unfortunately, it's dead wrong. Telcos have *never* used this method of
clocking data over digital lines, at any rate.

In North America and other countries using the T1 standard, each channel
really has 64 kbps of data (8 ksps x 8 bits per sample) devoted to it.
However, older equipment using in-band signaling will "rob" (overwrite)
bit 7 once every 6 or 12 samples, in order to indicate ringing / off-hook
status at each end. A signal that passes through multiple such links in
tandem may lose bit 7 in additional samples, because tandem links do not
necessarily synchronize at the "multiframe" level, which would also
synchronize the bit-robbing. As a result, you can only assume 7 usable
bits per sample, or 56 kbps.

Nate Duehr wrote:
> Ain't all this stuff FUN? ;-)
>
> Telco geek for many years turned Unix geek, but still love telco as
> it's such a cool "natural" progression of technology for 30 years...

Yes, it is. While the general concepts of your narrative were mostly
accurate, the details were wrong in almost every respect. I'm speaking
as a recent (up until 2002) designer of T1/E1 terminal multiplexer
equipment.

For example, repeaters were never "powered by the signal". The ones
density requirement is related entirely to maintaining clock
synchronization. AMI doesn't help create ones density, but bit-8
stuffing and B8ZS do. And so on. I don't have time to address all of
your points.
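For what it's worth, the rate arithmetic above can be sketched in a few
lines (the constants are just the standard T1/DS0 figures quoted in this
thread; this is illustrative only, not anyone's production code):

```python
# Back-of-the-envelope DS0 throughput, per the figures above.
SAMPLES_PER_SEC = 8000   # PCM sampling rate per channel (8 ksps)
BITS_PER_SAMPLE = 8      # 8-bit companded samples

# Full channel capacity: 8000 samples/s x 8 bits = 64 kbps.
full_rate = SAMPLES_PER_SEC * BITS_PER_SAMPLE

# With robbed-bit signaling, bit 7 of any given sample may have been
# overwritten somewhere along a tandem path (multiframes are not
# synchronized across links), so only 7 bits per sample can be
# trusted end to end: 8000 x 7 = 56 kbps.
usable_rate = SAMPLES_PER_SEC * (BITS_PER_SAMPLE - 1)

print(full_rate, usable_rate)  # 64000 56000
```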
--
Dave Tweed

--
http://www.piclist.com hint: To leave the PICList
mailto:piclist-unsubscribe-request@mitvma.mit.edu