PICkles: While writing some averaging code, I started realizing my grounding in the theory of how binary math works is very fuzzy. It seems like I worked through binary multiplication some 20 years ago and then promptly forgot how it worked. Could one of you kind folks let me know if this is on the right track?

One way to implement a running averaging algorithm is:

    (Old average) * 256 - (Old average) + (New data point)

This gives a result that is 256 times the actual average, which would be quite useful in my case. Why divide? Or, if I really need the divided result, just use the high byte of the result.

Multiplication by 256 is just shifting left 8 times, right? In this case a one-byte number shifted left 8 times becomes a two-byte number with the low byte being zero. No actual shifting required. If (Old average)*256 is stored in AVGHI,AVGLO, the formula becomes:

    AVGHI,AVGLO - AVGHI + (New data point)

It would be a teensy bit more accurate if I could add in a single digit based on rounding (AVGLO/256). This could be computed by comparing AVGLO to 128 decimal and setting 1 or 0.

No need to call a multiply routine, just double-precision addition and subtraction routines. Does this make sense?

Best Regards,

Lawrence Lile