> I have no real objection to doing RMS, although I would like it better
> if the whole waveform were available to the processor, and I would
> certainly want some low pass filtering together with the RMS
> calculations. You seem to be skipping over that part.

It looks like the 50 Hz transformer does the low pass filtering quite
well on its own if we are talking about 1000 samples/s.

> My objection to your posts is not about RMS as a method, but your
> unscientific way of dismissing other methods. This includes things
> like saying something is a "statistics" approach, which is supposed
> to make it inherently better (or was that worse?) than some other
> approach,

Yes, I agree I should not have used the word "better" for that reason.

> and unfounded statements like a sine makes things less useful for
> some reason.

Yes, statements should be founded, but I thought this one was more or
less obvious, since you wrote "In the limit, you are left with only the
fundamental component of the distorted waveform, which will be a pure
sine." Imagine the device connected to a supply with a very distorted
sine (say, the square waveform of a back-up UPS). Would the device care
only about the fundamental component behind the square waveform? I
believe that in this case it may fail at a different amplitude of the
"fundamental component behind" than it would if it were a real sine of
that amplitude.

>> Basically, as far as I understand, your low pass filtering takes
>> into the filtering only a subset of all the measurements. RMS takes
>> them all. That's why, since the number of points is considerably
>> greater, at the same quantization period it yields greater accuracy.
>
> It's not as simple as that. First, low pass filtering does take into
> account many samples, with the most recent ones weighted more
> heavily. In general it is a weighted average. This can be seen two
> ways.
> One way is to follow the contribution of a single sample into the
> filter and as it decays away. Another is to think of the filter
> realized as a convolution. The filter kernel function is then the
> weighting function of the weighted average. These are all just
> different ways of looking at the same thing.

Yes, "low pass filtering does take into account many samples", but
since you are effectively judging the sine only from the subset of
samples near its very top, you are bound to a very high sampling rate.
Your idea to "low pass filter" the wave is fine in my opinion; it just
needs to be applied over the entire wave. I would adjust the idea
somewhat:

- do not figure out peak values; assume you know the amplitude of the
  fundamental component behind the wave;
- get samples and "low-pass" them. Add "virtual" points between
  samples. For each point (including samples), calculate the difference
  between the point and the fundamental component;
- get the average of those differences for each 1/4 of the wave;
- based on these 4 values, adjust the amplitude/phase/frequency of the
  imaginary fundamental component to be used in the next cycle.

The output values are:

- the amplitude of the imaginary fundamental component;
- the RMS of those differences calculated along the cycle.

The higher the RMS, the narrower the voltage operating range should be.

> Second, both methods have to apply filtering to make use of multiple
> samples. The instantaneous squared values don't tell you much useful
> on their own. These have to be low pass filtered to be useful.
> That's what the M in RMS implies.

The total of the instantaneous squared values over the cycle IS low
pass filtered by the very process of adding the values, isn't it? High
frequency distortions are unlikely after the transformer, I think. And
even if they are present, we would expect some rise in the total of the
squared values, and this corresponds to our idea of narrowing the
voltage operating range in the presence of distortions.
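The cycle-tracking scheme described above could be sketched roughly as
follows. This is only an illustration of the idea, not a tested design:
the function name, the gain value, and the particular quarter-cycle
correction heuristics are my own assumptions, and the "virtual points
between samples" (interpolation) step is left out for brevity.

```python
import math

def track_fundamental(samples, fs=1000.0, f0=50.0, amp=325.0,
                      phase=0.0, gain=0.05):
    """One cycle of the proposed scheme: compare each sample against an
    assumed fundamental sine, average the residuals over each quarter
    of the cycle, and use those four averages to nudge the assumed
    amplitude and phase for the next cycle.  Returns the adjusted
    amplitude, the adjusted phase, and the RMS of the residuals."""
    n = len(samples)  # one full cycle, e.g. 20 samples at 1 kHz / 50 Hz
    residuals = [v - amp * math.sin(2 * math.pi * f0 * i / fs + phase)
                 for i, v in enumerate(samples)]

    # average residual in each quarter of the cycle
    q = n // 4
    qa = [sum(residuals[k * q:(k + 1) * q]) / q for k in range(4)]

    # illustrative corrections: if the real amplitude is larger than
    # assumed, residuals are positive in the first half-cycle and
    # negative in the second; a phase lead makes them follow a cosine
    # pattern (+, -, -, +) across the four quarters.
    amp += gain * (qa[0] + qa[1] - qa[2] - qa[3]) / 2
    phase += gain * (qa[0] - qa[1] - qa[2] + qa[3]) / (2 * amp)

    rms_residual = math.sqrt(sum(r * r for r in residuals) / n)
    return amp, phase, rms_residual
```

Feeding it a clean 325 V-peak sine while assuming 320 V pulls the
assumed amplitude toward 325, and the residual RMS comes out near the
RMS of the 5 V amplitude mismatch, which is the quantity the text
proposes using to narrow the voltage operating range.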
--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist