Rizzardi Flavio 309382/IL wrote:
> > Hello there.
> >
> > I received an application ideas book called "Dream Machine Reference
> > Book" from Phillips. In it they talk about a method of using
> ...
> > like the PIC16C74. However, I can not quite understand how the
> > teqhnique actually works. Does it make any more sense to anyone else
> > out there?
>
> Exactly one year ago there was an article in Scientific American that
> covered this issue. I know that the resolution can be improved from
> 8 to 10 bit; I'm a little skeptical about a 4-bit improvement.
>
> Read the article (Nov or Dec 1995) or let me know if you cannot find
> it.
>
> Bye,
> Flavio Rizzardi
> spillo@maya.dei.unipd.it

The following assumes that the only source of error is quantization error. Other sources of error include A/D non-linearity and electronic noise. Some of the points made below apply to those error sources as well, but I'll keep it simple for now.

Suppose you are sampling a particular input voltage, say 2.331V. Suppose your 8-bit A/D is scaled so that its 256 possible outputs are spread over the range 0 to 5.120V, so the LSB is worth 20mV. Say you read 117 (I'll use decimal for this). Since 117 * 20mV = 2.340V, your reading has a quantization error of 9mV. Repeated sampling and averaging doesn't help, since you will get the same reading every time.

However, suppose you sum in a triangle wave with an average value of zero and a 200mV peak-to-peak amplitude. Now, with repeated sampling of a signal varying between 2.231V and 2.431V, you get values ranging between 111 and 122. These values average out to 116.5 (obviously, you need more than 8-bit precision in your averaging). The value 116.5 translates to a voltage measurement of 2.330V, which is much closer to the actual value.

For this scheme to work properly, the various errors in the system need to be random (or, alternatively, fortuitously cancel one another).
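The arithmetic in the example can be checked with a short simulation. This is an illustrative sketch, not PIC code: it assumes an ideal 8-bit A/D that rounds to the nearest code (the original post doesn't specify the converter's rounding behaviour), and all names are my own. The 2.331V input, 20mV LSB, and 200mV triangle dither come from the example above.

```python
# Sketch of the dithered-averaging example: ideal 8-bit A/D,
# 20 mV LSB, fixed 2.331 V input, zero-mean 200 mV p-p triangle dither.
# Names and the round-to-nearest assumption are hypothetical.

LSB = 0.020        # 20 mV per code (0..5.120 V over 256 codes)
V_IN = 2.331       # the input voltage from the example
N = 1024           # samples to average

def adc8(v):
    """Ideal 8-bit quantizer: nearest code, clamped to 0..255."""
    return min(max(round(v / LSB), 0), 255)

# Without dither, every sample returns the same code (117), so
# averaging stays stuck at 117 * 20 mV = 2.340 V, 9 mV off.
avg_no_dither = sum(adc8(V_IN) for _ in range(N)) / N * LSB

def triangle(i, n):
    """Zero-mean triangle dither, 200 mV peak to peak (-0.1 .. +0.1 V)."""
    return 0.4 * abs(i / n - 0.5) - 0.1

avg_dither = sum(adc8(V_IN + triangle(i, N)) for i in range(N)) / N * LSB

print(f"{avg_no_dither:.4f}")   # 2.3400 -- quantization error frozen in
print(f"{avg_dither:.4f}")      # 2.3310 -- dither recovers the sub-LSB part
```

Because the dither spans an exact whole number of LSBs (10) and its period is sampled uniformly, the quantization error averages out almost completely; the averaged result lands within a fraction of a millivolt of the true 2.331V.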
In general, non-linearities in the A/D are not entirely random, so averaging cannot be expected to improve results beyond a certain point. For random errors, however, the Central Limit Theorem applies: the error of the average shrinks in proportion to the square root of N, where N is the number of samples participating in the averaging process. Each extra bit of effective resolution therefore costs a factor of 4 in samples; for a 16x improvement (4 bits) you need at least 256x oversampling. Due to the non-random error contribution, I agree that 4 bits of improvement will be difficult, even if you can oversample that much.

--
Paul Mathews, consulting engineer
AEngineering Co.
optoeng@whidbey.com
non-contact sensing and optoelectronics specialists
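The square-root averaging law above can be demonstrated numerically. A hedged sketch (hypothetical names; random uniform dither standing in for any zero-mean random error source; same 2.331V / 20mV numbers as the earlier example): quadrupling the number of averaged samples should roughly halve the RMS measurement error, i.e. one extra effective bit per 4x oversampling.

```python
import random

LSB = 0.020    # 20 mV per code, as in the earlier example
V_IN = 2.331   # actual input voltage

def adc8(v):
    """Ideal 8-bit quantizer: nearest code, clamped to 0..255."""
    return min(max(round(v / LSB), 0), 255)

def measure(n, rng):
    """Average n readings taken with random zero-mean dither, 10 LSB wide."""
    codes = [adc8(V_IN + rng.uniform(-0.1, 0.1)) for _ in range(n)]
    return sum(codes) / n * LSB

def rms_error(n, trials=2000, seed=1):
    """RMS error of the n-sample averaged measurement over many trials."""
    rng = random.Random(seed)
    return (sum((measure(n, rng) - V_IN) ** 2
                for _ in range(trials)) / trials) ** 0.5

# The error should fall roughly as 1/sqrt(n): each 4x in samples buys ~2x.
for n in (1, 4, 16, 64, 256):
    print(n, rms_error(n))
```

Note that with random dither and only a few samples the reading is *worse* than without dither, since the dither itself has not yet averaged out; a deterministic dither sampled over whole periods, like the triangle wave in the example, cancels exactly and converges faster.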