> > As far as I am concerned I don't object to that, but I try to point
> > out that this "functionally best" solution makes no sense when other
> > failure modes (with a comparable chance of occurring) are still
> > present.
>
> I don't agree with this, at least if I understand what you're saying.
>
> Let's say that I have each failure mode as a die of N sides, so that
> its probability of occurrence is that of the number "1" coming up on
> a throw.
>
> If I have 10 failure modes, then I throw 10 dice. If I can throw nine
> instead, then I've reduced my odds of seeing a "1". Do you not agree?

You have reduced your failure rate by 10%, if your estimates of the failure rates are good. Experience shows that failure rate estimates are often accurate only to ~ a power of 10, so even that 10% is unsure: if one of the other failure modes happens to have a ten times higher rate than you expect, your reduction shrinks to only a few percent (a rough numeric sketch is appended at the end of this message). But even if the reduction is 10%, this is IME useless unless it is part of an effort to also tackle the other risk factors with (estimated) comparable rates.

So my bottom line is: using pull-downs/ups instead of just configuring the pins as outputs is sensible (Russell: this is not the same as technically best!) only when all comparable risk factors are also addressed. At the very least this means series resistors on all switches (when connected to PIC pins) and between PIC pins and the outputs of other chips that have a significant drive capability (I think this will include all TTL-like chips, optocouplers, maybe also open-collector style busses like Dallas and I2C, etc. etc.). When all comparable issues are also addressed, the pull-up/down solution becomes (besides technically best) also practically sound. (A small sketch of the configure-as-outputs alternative is appended below.)

Note that when a failure mode with a significantly higher failure rate is present, all efforts at reducing lower-rate failure modes are a waste of energy until that higher-rate failure mode is eliminated or reduced. Remember the first (or maybe second) rule of code optimization: don't start optimizing unless you have measured what the largest contributions are. A programmer's guess in this respect is often wrong. IME the same thing applies to an engineer's estimate of failure rates (for a new design; for an old design the estimates of an experienced 'old hand' will often be dead accurate).

As a side note: I once had to find out why a design was (according to the production line's estimates) much more expensive, compared to other designs, than the designers expected. The way the production line estimated the cost (correctly or not, but that's another discussion, which was at that moment irrelevant) included a (relatively heavy) 'per component' and 'per component type' cost. These costs represent pick-and-place machine time, and operator time for loading a component reel. The design in question had a lot of resistors (all the same value) for pulling used and unused pins down, and a lot of different resistor values (one of each value, in an analog part of the circuit). The designers had never realised that both contributed significantly to the cost (at least, the cost as calculated by the production line). (A rough sketch of such a cost model is appended below.) I also had to do failure calculations. Using the default (add all failure rates) calculation, those pull-up/down resistors were a significant part of the overall failure rate! This is of course not realistic, but using anything but the default calculation had to be agreed on by the customer, which meant lots of paperwork and time (both of which also add to the cost...).
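To put some numbers on the 10%-versus-a-few-percent argument above, here is a rough sketch in C. The rates and the 10x factor are invented for illustration only, not measurements; with small, independent failure modes the rates roughly add, so the relative gain from eliminating one mode depends on how large the remaining modes really are.

#include <stdio.h>

int main(void)
{
    double r = 1.0;                 /* estimated rate of each of 10 failure modes */
    double total_est = 10.0 * r;    /* what you think the total is                */
    double after_est =  9.0 * r;    /* one mode eliminated                        */
    printf("estimated gain: %.1f%%\n",
           100.0 * (total_est - after_est) / total_est);    /* 10.0%              */

    /* now assume one of the other nine modes is really 10x your estimate */
    double total_real = 8.0 * r + 10.0 * r + r;
    double after_real = 8.0 * r + 10.0 * r;
    printf("real gain:      %.1f%%\n",
           100.0 * (total_real - after_real) / total_real); /* ~5%                */

    return 0;
}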
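For completeness, this is roughly what the "just configure the unused pins as outputs" alternative looks like in firmware. A minimal sketch, assuming a PIC18-style device with TRISx/LATx registers and the XC8 <xc.h> header; your part and register names may differ.

#include <xc.h>

/* Drive unused PORTB pins to a defined level from firmware.
   The alternative discussed above is to leave them as inputs and add
   external pull-up or pull-down resistors instead.                    */
void init_unused_pins(void)
{
    LATB  = 0x00;   /* set the latch low first, so there is no glitch  */
    TRISB = 0x00;   /* then make the pins outputs                      */
}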
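And a hedged sketch of the kind of assembly cost model described in the side note: a fixed cost per placed component (machine time) plus a fixed cost per distinct component type (loading a reel). All figures and counts are invented for illustration.

#include <stdio.h>

int main(void)
{
    double per_component = 0.02;   /* assumed cost per placement            */
    double per_type      = 1.50;   /* assumed cost per component type       */

    int pulldown_resistors = 40;   /* many identical pull-downs: one type   */
    int analog_values      = 12;   /* many different values, one each       */

    double extra = per_component * (pulldown_resistors + analog_values)
                 + per_type * (1 + analog_values);

    printf("extra assembly cost per board: %.2f\n", extra);
    return 0;
}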
Wouter van Ooijen
-- -------------------------------------------
Van Ooijen Technische Informatica: www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: www.voti.nl/hvu