I'm writing a series of PIC tutorials (yes, yet another...), starting with the baseline architecture to keep things initially simple - see http://gooligum.com.au/tutorials.html. I'm currently working on one covering the 8-bit ADC module in the 12F510/16F506.

Now, I'm fine with using the ADC - it works well (despite my having only ever used the 10-bit ADCs on the midrange parts before). So this is not a "how" question. It's a "why", because I want to make sure that my explanations are correct and clear. Two things:

1. The ADC conversion clock (which sets Tad) can be selected as Fosc/16, Fosc/8, Fosc/4 or INTOSC/4, where Fosc is the PIC's clock (whatever it may be, external or internal) and INTOSC is the internal 4 or 8 MHz oscillator.

It's clear why there are divisors of /16, /8 or /4 for Fosc: Tad has to fall within a certain range, and if you're running a fast clock, you have to divide it down more for the ADC. No problem. And it's clear why INTOSC has to be an option - if you're running a 32 kHz clock, Fosc/4 is too slow, so you have to use the internal RC oscillator to clock the ADC while the rest of the PIC runs at the slow 32 kHz rate. Fine.

My question is: why not always use INTOSC/4 as the ADC clock? Selecting INTOSC/4 will always work, right? So why have the Fosc/X options at all? If you're using, say, a 20 MHz crystal, Fosc/16 gives Tad = 0.8us, while an 8 MHz INTOSC/4 gives Tad = 0.5us, so it's not as if using a fast crystal makes your ADC conversion any faster. And if you use INTOSC for the ADC clock, the rest of the chip still chugs along at 20 MHz, so it's not as if you're slowing your processing down by using INTOSC for the ADC. So why would you not always choose INTOSC/4? What am I missing?

If not conversion speed, perhaps stability? But your code has to wait for /DONE anyway, so if the conversion time varies a little with temperature (maybe a few percent?), who cares? I don't get it...

2. Acquisition time.
The midrange reference manual has a whole section on calculating Tacq, and most examples out there include a small delay (say, 20us) after selecting the channel and before starting the conversion, to allow the holding capacitor to charge. However, there is no mention of this requirement in the 16F506 data sheet. The sample code does not include any acquisition delay, nor does the "A/D Converter Characteristics" table mention it - other than stating a maximum source impedance of 10k. Is Tacq no longer an issue? Is acquisition simply counted as part of the conversion period? It would appear so, from the data sheet...

Thanks in advance for any answers,

David