Hi Mike (Mike Keitz), in <19971105.200527.2638.13.mkeitz@juno.com> on
Nov 5 you wrote:

> On Wed, 5 Nov 1997 10:12:56 +0200 Eric van Es writes:
>
> >> To maintain a fixed size buffer containing a set of n equally
> >> spaced in time values that best represent the events over a time
> >> period that is continually increasing.
> >>
> >> To ensure there are always n values present (the device could be
> >> asked to dump its contents at any time) and that they faithfully
> >> represent the input time series means the n values must be
> >> "adjusted" as each new sample arrives.
>
> There are methods for "resampling", for example taking audio samples
> at 44.1 kHz from a CD and "adjusting" them so they are ready to
> record on digital tape at 32 kHz.  For the case of the new sample
> rate being exactly double or exactly half (or 3X, etc.), it's very
> simple.  But if the rates don't match well, then it gets complicated,
> and strange aliasing effects leading to loss of quality (beyond what
> just reducing the sample rate does) could be expected.
>
> So if you have 60 samples in the buffer and resample them at a rate
> of 59/60, generating 59 new samples, there will be room for one new
> sample in the buffer.  The resample process would have to be done
> for each new sample, but this would guarantee there would always be
> either 59 or 60 samples in the buffer.  If you can tolerate not
> having all 60 at any given time, then the resample could be done
> every 5 samples, and so on.  The computation would be really easy if
> you could accept a 1/2 (30/60) resample; this would halve the sample
> rate every 30 samples (after the first 60).  But the buffer may have
> as few as 30 samples if the process is stopped immediately after a
> resampling.
>
> I'm not familiar with exactly how resampling works.  I suppose one
> simple method would be to do "weighted interpolation", based on
> where the new sample points land among the existing ones.  For
> example, if a new sample point lands at 1.75 (3/4 of the way between
> existing samples 1 and 2), take 3 parts of sample 2 and 1 part of
> sample 1, and divide the result by 4.  This won't work very well,
> but it's a start.
>
> Techniques for resampling must be discussed heavily in the DSP
> literature, where it is needed in many situations.  Doubtless there
> are some web pages about it, or paper pages in the university
> library.

I don't have a solution for the above problem, but maybe I can add
some thoughts.
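Mike's "weighted interpolation" is plain linear interpolation, and it
is easy to sketch.  Here is a minimal C version of the 60-to-59
buffer squeeze from this thread (the function and constant names are
mine; take it as an illustration, not a polished routine):

#include <stdio.h>

#define OLD_LEN 60
#define NEW_LEN 59

/* Resample src[0..n_old-1] into dst[0..n_new-1] by linear
 * ("weighted") interpolation.  New sample i lands at position
 * i * (n_old - 1) / (n_new - 1) among the old samples; take the two
 * neighbours, each weighted by its proximity.  E.g. a point at 1.75
 * takes 1 part of src[1] and 3 parts of src[2], divided by 4, as in
 * Mike's example.                                                  */
static void resample_linear(const double *src, int n_old,
                            double *dst, int n_new)
{
    int i;
    for (i = 0; i < n_new; i++) {
        double pos  = (double)i * (n_old - 1) / (n_new - 1);
        int    k    = (int)pos;       /* left neighbour           */
        double frac = pos - k;        /* 0..1, weight of src[k+1] */
        if (k >= n_old - 1)           /* guard the right endpoint */
            dst[i] = src[n_old - 1];
        else
            dst[i] = (1.0 - frac) * src[k] + frac * src[k + 1];
    }
}

int main(void)
{
    double buf[OLD_LEN], out[NEW_LEN];
    int i;

    for (i = 0; i < OLD_LEN; i++)     /* dummy data: a ramp */
        buf[i] = (double)i;

    resample_linear(buf, OLD_LEN, out, NEW_LEN);
    /* out[0..58] spans the same time interval buf did; copying it
     * back leaves buf[59] free for the next incoming sample.     */
    for (i = 0; i < NEW_LEN; i++)
        printf("%g\n", out[i]);
    return 0;
}

As Mike says, this won't work very well: each pass also lowpass
filters the buffer slightly (linear interpolation is convolution with
a triangle kernel), so values that have been through many passes get
smeared out.  Why that is so, and what a proper resampler has to do
about it, is the imaging story below.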
The digital representation of an analog signal suffers from an effect
called IMAGING.  The spectrum -fs/2 to +fs/2 (with fs being the
sampling frequency, or 1 Hz in this thread) is repeated in each fs
interval.  This happens because there is an unlimited number of
waveforms that share the very same discrete values that you read from
the sampler.

Here's an analog signal spectrum.  Don't be disturbed by the negative
frequency side:

                    ^
                    |
                   /|\
                  / | \
                 /  |  \
                 |  |  |
                 |  |  |
 ----+----+----+----+----+----+----+----> frequency axis
                -B  0  B

(B is the bandwidth of the signal.)

Here's its digital representation after sampling it with sampling
frequency fs (> 2B):

                        ^
      .                 |                 .
     /|\               /|\               /|\
    / | \             / | \             / | \
   /  |  \           /  |  \           /  |  \
   |  |  |           |  |  |           |  |  |
   |  |  |           |  |  |           |  |  |
------+--------+--------+--+-----+--------+-------> frequency axis
     -fs     -fs/2      0  B   fs/2       fs

As you can see, those image spectra are spaced at fs intervals in the
frequency domain.  The analog signal above is bandlimited: its
bandwidth B is lower than half the sampling rate fs.  This fact
(2B < fs) is exactly what the Nyquist criterion demands.  If the
signal were not properly bandlimited (2B > fs), the image spectra
would _overlap_ each other and the signal of interest, as their width
grows with the signal width!  Overlapping of the image spectra and
the signal of interest is called ALIASING.

Once you have sampled the signal with fs too low, the aliasing noise
(overlapping spectra) cannot be removed anymore.  Frequency analysis
will lead to erroneous results and detect tone energy where there was
none in the original analog signal (i.e. find low frequency energy
when there were only high frequency components present - a 0.8 Hz
tone sampled at 1 Hz comes out looking like a 0.2 Hz tone).

The original poster probably does not care about this effect and will
just sample his data points in.  Doing otherwise would mean using an
analog lowpass filter with a 0.5 Hz cutoff (fs = 1 Hz, Bmax = fs/2 =
0.5 Hz).  I don't know if such a thing exists in hardware :-)  Note
that the filtering must be done in the ANALOG part of the circuit.

In the DSP literature there are methods to resample data without loss
of quality.  But these very much relate to the imaging effect.  If
the signal has been sampled with aliasing noise, they won't give the
expected result.

Resampling has to be done in integer steps.  In this thread's
scenario you would upsample by 59 and then downsample by 60.

Upsampling is called INTERPOLATION and is done by inserting an
appropriate number of zeros between the samples to create a new data
stream of sample rate fs_new (i.e. take one sample of the old signal,
then insert 58 zeros, then take the next old sample, then insert
another 58 zeros; with fs = 1 Hz this gives fs_new = 59 Hz).  The
zeros have two other effects besides raising the sample rate: they
lower the new signal's energy/time, and they induce noise - or, to be
more specific, they move fs_new high above the image spectra, so some
of the images are now below the Nyquist frequency (fs_new/2) and
therefore belong to the signal (bandwidth B_new = fs_new/2).  You can
remove the unwanted images by lowpass filtering the new signal.  This
filter is called an INTERPOLATION FILTER.  The filter will replace
the zeros with new sample values that are perfectly fine with the
Nyquist criterion.

After this has been done, you DECIMATE the signal by 60 - by picking
every 60th sample value from the stream to build a new one.  Wait, I
hear you say, wouldn't this cause aliasing noise?  The answer is yes.
Therefore decimation also requires a filter, the DECIMATION FILTER.
It limits the bandwidth of the signal to fs_final/2, so the new
spectrum is fine with the new sample rate (2B_final < fs_final).
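To tie the three steps together, here is a toy C sketch of the whole
interpolate / filter / decimate chain.  All the names in it are mine,
the single windowed-sinc lowpass (serving as interpolation and
decimation filter at once, since the decimation cutoff 0.5/60 is the
lower of the two) is a stock textbook design, and the tap count is a
guess - take it as an illustration of the structure, not a production
resampler:

#include <math.h>
#include <stdio.h>

#define L 59        /* upsampling factor                           */
#define M 60        /* decimation factor                           */
#define NTAPS 599   /* FIR length (odd); a guess, longer = sharper */

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Windowed-sinc (Hamming) lowpass; fc is the cutoff in
 * cycles/sample at the HIGH rate, here 0.5/M.                    */
static void make_lowpass(double *h, int ntaps, double fc)
{
    int n, mid = ntaps / 2;
    for (n = 0; n < ntaps; n++) {
        int k = n - mid;
        double sinc = (k == 0) ? 2.0 * fc
                               : sin(2.0 * M_PI * fc * k) / (M_PI * k);
        double w = 0.54 - 0.46 * cos(2.0 * M_PI * n / (ntaps - 1));
        h[n] = L * sinc * w;   /* gain L makes up for the energy
                                  the stuffed zeros took away      */
    }
}

/* Resample in[0..n_in-1] by the rational factor L/M = 59/60.
 * Conceptually: insert L-1 zeros after each sample (rate is now
 * L*fs), lowpass filter, keep every Mth result.                  */
static int resample(const double *in, int n_in, double *out)
{
    double h[NTAPS];
    int n_high = n_in * L;   /* length of the virtual high-rate stream */
    int t, n_out = 0;

    make_lowpass(h, NTAPS, 0.5 / M);

    for (t = 0; t < n_high; t += M) {    /* decimate: every Mth */
        double acc = 0.0;
        int j;
        for (j = 0; j < NTAPS; j++) {    /* convolve            */
            int i = t - j;
            if (i >= 0 && i % L == 0 && i / L < n_in)
                acc += h[j] * in[i / L]; /* nonzero taps only   */
        }
        out[n_out++] = acc;
    }
    return n_out;   /* 59 output samples per 60 input samples */
}

int main(void)
{
    enum { N_IN = 60 };
    double in[N_IN], out[N_IN];  /* out needs only 59 slots */
    int i, n;

    for (i = 0; i < N_IN; i++)   /* dummy data: a slow sine */
        in[i] = sin(2.0 * M_PI * 0.05 * i);

    n = resample(in, N_IN, out);
    printf("%d output samples\n", n);
    return 0;
}

Note the trick in the inner loop: the zero-stuffed 59-fold stream is
never actually stored, since only every 59th sample of it is nonzero;
the loop just skips the zeros.  A real implementation would go one
step further and split the filter into 59 polyphase branches so the
zeros are never even looked at.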