On Fri, 9 Jan 1998 01:15:30 -0700 "Eric W. Engler" writes:

>The encoding of the data is very simple, but the decoding is not
>as easy. The problem arises because you are NOT guaranteed a
>signal transition at the main clock frequency (FSK does guarantee
>this transition, and so it is easier to code).
>
>The rest of this discussion is about decoding Manchester data.
>
>Sometimes it takes a half of one clock to get a transition, and
>other times it takes 1, or even 1 and a half clocks before you get a
>signal transition. This makes it difficult to recover the clock
>that you need to decode the data.

This is not Manchester data, then. In your earlier post you said, correctly, that "manchester data has a transition in the center of each bit". There is either 1/2 or one full bit time between transitions. However, special sequences which violate the protocol (omitting a required transition) are often used as sync marker bits; the decoder needs extra logic to detect these.

It is easier for me to think of the guaranteed transition as being at the start of each bit, with an optional transition in the center if needed to set up for the next bit. The simplest possible decoder detects a transition and then waits 3/4 of a bit time, ignoring any other transitions during that window. Once the decoder gets in sync by receiving a full bit time between transitions (which occurs whenever the sequence 01 or 10 is transmitted), the transition detector will fire only at the start of each bit. The data line can then be sampled at a known point within each bit and the correct data decoded.

This decoder doesn't work very well in the presence of noise. For recording on disks or tapes, or for sending over cable, noise is not usually a major problem: the entire block must be re-sent if noise destroys even one bit, so the decoder's ability to re-sync in the middle of a block is not useful.
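The 3/4-bit one-shot decoder can be sketched in a few lines of Python. This is only an illustration: the polarity convention (a 1 bit sent as high-then-low half-cells) and the half-cell array representation of the signal are my own assumptions, not anything from a particular standard.

```python
def manchester_encode(bits):
    """Each bit becomes two half-bit cells; the guaranteed transition
    falls in the middle of every bit (1 -> high,low ; 0 -> low,high).
    The polarity convention here is an arbitrary choice."""
    halves = []
    for b in bits:
        halves += [b, 1 - b]
    return halves

def manchester_decode(halves):
    """One-shot decoder: after an accepted transition, ignore any further
    transition for 3/4 of a bit time (here, 1.5 half-cells), then sample
    the first half-cell of the bit as the data value."""
    edges = [i for i in range(1, len(halves)) if halves[i] != halves[i - 1]]
    bits, last = [], -2
    for e in edges:
        if e - last >= 2:               # more than 3/4 bit since last accepted edge
            bits.append(halves[e - 1])  # sample at a known point in the bit
            last = e
    return bits
```

If the decoder tunes in mid-stream it may lock on the wrong phase at first, but the first 01 or 10 pair (a full bit time between transitions) snaps it back, as described above.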
The FM (single-density disk) format is very similar, having one transition per bit for one value of data and two transitions per bit for the other. The same 3/4-bit-time timer can be used, but it is necessary to take two samples during each bit to determine whether a transition occurred in the center. The advantage of this format is that it is not sensitive to inversion of the signal polarity during recording or transmission. Of course, placing an NRZI encoder/decoder around a Manchester system has the same effect.

It was realized that many of the transitions in FM are not necessary to convey data, so MFM was developed. If a 1 data bit is defined as having two transitions, the transition at the start of a 1 is deleted. So a 0 has a transition at the start, and a 1 has a transition in the middle. Additionally, if a 0 follows a 1, it has no transition at all. This guarantees a minimum of a full bit time between transitions, with a possibility of 1, 1.5, or 2 bit times before the next transition. The minimum and maximum transition intervals are now twice as long, so the same hardware can record data twice as fast; thus it is called "double density". "High density" disks use the same MFM format, but with better heads and disk materials that can handle twice again the data rate.

Simple one-shot decoders do not work for MFM because there is no guaranteed transition at the start of each bit. In fact, a long string of ones looks exactly like a long string of zeros: a transition each bit time. But the phase of the transitions is different. The decoder typically uses a PLL operating at twice the bit rate, plus logic to remember the last bit state and the time since the last transition.

Observe that the MFM standard breaks the data to the disk into basic time units of 1/2 the bit time. Each basic time unit may contain a single transition, but the basic time unit that follows one containing a transition is guaranteed not to contain one.
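The MFM rules above can be written out as a short sketch, emitting the flux pattern at half-bit (basic time unit) resolution. The choice of initial "previous bit" state is an assumption for illustration; a real controller carries this state across bytes.

```python
def mfm_encode(bits):
    """Return a flux pattern at half-bit resolution (1 = transition).
    MFM rules: a 1 puts a transition in the middle of its cell;
    a 0 puts one at the start of its cell, unless it follows a 1."""
    cells = []
    prev = 1  # assumed state of the (unseen) bit before this block
    for b in bits:
        clock = 1 if (b == 0 and prev == 0) else 0  # start-of-cell transition
        cells += [clock, b]                         # b itself is the mid-cell transition
        prev = b
    return cells
```

Counting half-cells between 1s in the output confirms the guarantee in the text: transitions are never closer than 2 half-cells (one bit time) and never farther apart than 4 (two bit times).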
Using shorter basic time units, but longer guarantees of non-repeated transitions, allows more data to be recorded on the disk by more precisely controlling the phase of the data signal. This is the basis of RLL encoding. It uses a basic time unit of 1/3 bit time, with a guarantee of at least 2 but not more than 7 basic time units with no transition after each transition. Conforming to these rules, it is possible to store 8 bits in every 16 basic time units. To record on the same hardware that previously used MFM, every 3 basic time units are equivalent to 1 MFM bit; thus the overall bit rate is 1.5 times that possible with MFM. This format is used on nearly all modern hard disks, as well as CD-ROMs (on CD, 17 basic time units store 8 bits; the extra time is supposedly used to "reduce DC content").
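A (2,7) RLL code is usually given as a variable-length substitution table; the one sketched below is the commonly published IBM variant, quoted here from memory, so treat it as illustrative rather than authoritative. Each data bit costs exactly 2 code cells, which gives the 8-bits-per-16-units figure in the text, and every codeword keeps at least 2 and at most 7 zero cells between transitions.

```python
# A commonly published (2,7) RLL table (a 1 marks a flux transition).
RLL_2_7 = {
    "11":   "1000",
    "10":   "0100",
    "011":  "001000",
    "010":  "100100",
    "000":  "000100",
    "0011": "00001000",
    "0010": "00100100",
}

def rll_encode(data):
    """Greedily split the data stream on the prefix-free table above;
    every data bit maps to exactly 2 code cells (rate 1/2)."""
    out, i = [], 0
    while i < len(data):
        for k in (2, 3, 4):
            piece = data[i:i + k]
            if piece in RLL_2_7:
                out.append(RLL_2_7[piece])
                i += k
                break
        else:
            raise ValueError("no codeword matches at position %d" % i)
    return "".join(out)
```

For example, "1011000" parses as 10 / 11 / 000 and encodes to 14 code cells, twice the data length, with every run of zeros between transitions falling in the 2-to-7 window.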