ADC oversampling, filter output drifting (we use ADSP-21489)

Hello everyone,

It is sometimes desirable to have greater resolution than an ADC provides. This can be done via oversampling and averaging; if you google "Increasing ADC resolution by oversampling and averaging" you will find a variety of articles on the technique.

We have been using this technique and we like the resulting increased resolution, but we've noticed what amounts to "drift" in the filtered output. Our code is shown below:

SumOfAllSamples -= BufferOfSamples[SampleFilterIndex];      //Subtract the influence of the oldest sample
SumOfAllSamples += AveragingFilterInput;                    //Add in the influence of the newest sample
BufferOfSamples[SampleFilterIndex] = AveragingFilterInput;  //Put the newest sample in the list
AveragingFilterOutput = SumOfAllSamples * 0.25F;            //Calculate the average (in this case, average of 4 samples for +1 bit of resolution)

if(++SampleFilterIndex > 3) SampleFilterIndex = 0;          //Keep the index in check...

The code is computationally efficient, and by changing only the multiplier at the end and the number of samples in the buffer, it allows a running average over any number of samples (even millions if you want).

Because the code uses a circular buffer of samples together with a "running accumulated total" maintained by simple addition and subtraction, there should be no possibility of "drift". If this code were run on a fixed-point microprocessor, for example, no bits would ever be dropped or picked up, and the accumulated total (what we call "SumOfAllSamples" above) would remain valid forever. Yet we are seeing "SumOfAllSamples" drift over time, as though multiplications and divisions were involved that might drop or otherwise disturb the least significant bits of the floating point variable.

Our current theory is that the optimizer is transforming the straightforward code above into something less straightforward, perhaps folding the "0.25F" multiplication into the calculation of "SumOfAllSamples" itself, which would definitely influence the least significant bits if it is happening...

Do you have any idea why the output of this filter could be drifting over time? 

Thank you,