> If I understand the question properly then I'm not sure that this
> problem can be solved accurately.

I agree, depending on how you interpret the question. As far as I can see,
you need to oversample to some extent, and by how much depends on both the
resolution of the result you want and the step size of this continually
moving window.

If the window size can be doubled each time, then one approach would be to
use a buffer of 1.5n elements, where n is the number of samples you want
to report. E.g. for a history of 60, the buffer would need to hold 90
samples.

Starting at buffer[0], sample at rate 1 until you reach the end, then
sample at rate 2 (half the rate of 1), overwriting the old data as you go.
Keep doing this until the end of the universe. You can stop at any time
and always have a valid set of results.

If the current buffer pointer is "i", then the first result to report is
buffer[i+1] + buffer[i+2] (if i is odd, offset by one element). When you
reach the end of the buffer reporting two at a time, wrap around to the
top, but increment only one at a time until you get back to the original
value of "i".

I think this would work. I haven't tried it.

Steve.

> For example, if I run the logger for 1 second then I expect to have
> 60 samples spaced 16.7 ms apart.
>
> If I run the logger for 1.4 seconds then I expect to have 60 samples
> spaced 23.3 ms apart.
>
> If I run the logger for 2 seconds then I expect to have 60 samples
> spaced 33.3 ms apart.
>
> etc.
>
> How could you maintain an even sampling rate against something that is
> not constant? Time, in this case.
>
> If the operator can turn the logger off at ANY given time, how could
> there be 60 samples waiting for you when the logging interval was
> never known?
>
> If I start off with a 60-sample time frame of 100 ms, what is the next
> time frame? 101 ms, 110 ms, 200.000054 ms, double the first?
> If I discard old samples that do not fit the current sample rate, they
> are lost forever, but I may need some of them to match a future sample
> rate, or I may not. It sounds like an exploding database may form here,
> with the excruciating task of sorting it out on the fly.
>
> Tony
>
> Just when I thought I knew it all,
> I learned that I didn't.
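The scheme above keeps the buffer spanning the whole run by halving the
effective sample rate each time the buffer fills. As a minimal sketch of
the same idea (a slightly simpler variant than the 1.5n in-place overwrite:
a 2n buffer compacted by averaging adjacent pairs, which also avoids the
odd/even pointer bookkeeping; class and method names are mine, not from
this thread):

```python
class DecimatingLogger:
    """Keep an evenly spaced history of a run of unknown length.

    The buffer holds up to 2*n accepted samples. When it fills,
    adjacent pairs are averaged down to n samples and the accept
    interval doubles, so the stored samples always span the whole
    run at a single, even spacing.
    """

    def __init__(self, n):
        self.n = n            # minimum number of samples to report
        self.samples = []     # evenly spaced history, len < 2*n
        self.stride = 1       # accept every `stride`-th raw sample
        self.count = 0        # raw samples seen since last accept

    def add(self, value):
        self.count += 1
        if self.count < self.stride:
            return            # skip: raw rate is now too fine
        self.count = 0
        self.samples.append(value)
        if len(self.samples) == 2 * self.n:
            # Compact: average each adjacent pair, halving the buffer
            # and doubling the spacing between stored samples.
            self.samples = [(a + b) / 2
                            for a, b in zip(self.samples[0::2],
                                            self.samples[1::2])]
            self.stride *= 2

    def history(self):
        """Evenly spaced samples covering the run so far."""
        return list(self.samples)
```

With n = 60, the logger can be stopped at ANY time and there are between
60 and 119 evenly spaced samples waiting (fewer only if the run was
shorter than 60 raw samples); picking exactly 60 of them is then a
trivial final resample.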