It is sometimes desirable to have greater resolution than is provided by an ADC. This can be done via oversampling and averaging. If you google "Increasing ADC resolution by oversampling and averaging" you will find a variety of articles about this.
We have been using this technique and we like the resulting increased resolution, but we've noticed what amounts to "drift" in the filtered output. Our code is shown below:
SumOfAllSamples -= BufferOfSamples[SampleFilterIndex]; //Subtract the influence of the oldest sample
SumOfAllSamples += AveragingFilterInput; //Add in the influence of the newest sample
BufferOfSamples[SampleFilterIndex] = AveragingFilterInput; //Put the newest sample in the list
AveragingFilterOutput = SumOfAllSamples * 0.25F; //Calculate the average (in this case, average of 4 samples for +1 bit of resolution)
if(++SampleFilterIndex > 3) SampleFilterIndex = 0; //Keep the index in check...
The code is computationally efficient, and by changing only the multiplier at the end and the number of samples in the buffer, it allows for a running average over any number of samples (even millions if you want).
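For illustration, here is the same filter written as a function parameterized on the buffer length. The name FILTER_LENGTH and the function wrapper are ours, added only for this sketch; the scale factor is written as a reciprocal so it stays a compile-time constant:

#define FILTER_LENGTH 64                    /* 4 in our case; any length works */

static float BufferOfSamples[FILTER_LENGTH];
static float SumOfAllSamples = 0.0F;
static int SampleFilterIndex = 0;

float RunningAverage(float AveragingFilterInput)
{
    SumOfAllSamples -= BufferOfSamples[SampleFilterIndex];     /* Subtract the oldest sample's influence */
    SumOfAllSamples += AveragingFilterInput;                   /* Add the newest sample's influence */
    BufferOfSamples[SampleFilterIndex] = AveragingFilterInput; /* Record the newest sample */
    if (++SampleFilterIndex >= FILTER_LENGTH) SampleFilterIndex = 0; /* Wrap the circular index */
    return SumOfAllSamples * (1.0F / FILTER_LENGTH);           /* Scale by the reciprocal of the length */
}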
Because the code uses a circular buffer of samples and maintains a "running accumulated total" using only simple addition and subtraction, there should be no possibility of "drift". For example, if this code were run on a fixed-point microprocessor, it is understandable that no bits would ever be dropped or picked up, and that the accumulated total (what we call "SumOfAllSamples" above) would always remain valid forever. But we are seeing "SumOfAllSamples" drift over time, as though there were multiplications and divisions involved which might drop or otherwise influence the least significant bits of the floating point variable.
Our current theory is that the optimizer is taking the straightforward code above and turning it into less straightforward code, perhaps folding the "0.25F" multiply into the calculation of "SumOfAllSamples" itself, which would definitely influence the least significant bits if it is happening.
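To make the drift reproducible away from the target, here is a sketch of a self-contained host-side harness; the 1000.0F offset, the iteration count, and the use of rand() are arbitrary choices for illustration only. It runs the same five lines and then recomputes the sum from scratch for comparison:

#include <stdio.h>
#include <stdlib.h>

#define NUM_SAMPLES 4

int main(void)
{
    float BufferOfSamples[NUM_SAMPLES] = {0};
    float SumOfAllSamples = 0.0F;
    float FreshSum = 0.0F;
    int SampleFilterIndex = 0;
    long i;
    int j;

    for (i = 0; i < 10000000L; i++)
    {
        /* Samples near 1000.0F so any rounding in the running total is visible */
        float AveragingFilterInput = 1000.0F + (float)rand() / (float)RAND_MAX;

        SumOfAllSamples -= BufferOfSamples[SampleFilterIndex];
        SumOfAllSamples += AveragingFilterInput;
        BufferOfSamples[SampleFilterIndex] = AveragingFilterInput;
        if (++SampleFilterIndex >= NUM_SAMPLES) SampleFilterIndex = 0;
    }

    for (j = 0; j < NUM_SAMPLES; j++) /* Recompute the sum from scratch */
        FreshSum += BufferOfSamples[j];

    /* Any nonzero difference is drift in the running total itself */
    printf("running = %.9g  fresh = %.9g  drift = %g\n",
           SumOfAllSamples, FreshSum, SumOfAllSamples - FreshSum);
    return 0;
}

A nonzero "drift" printed here would show the running total diverging from the true sum of the buffer contents, independent of any optimizer.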
Do you have any idea why the output of this filter could be drifting over time?
Is it using floating point? What does the assembly code look like there?
Yes, this is using the natural floating point capability of the processor. By the way, we also encapsulated these lines of code into their own function, and forced optimization to be turned off, but the problem still happens.
Regarding the assembly code, great question. We are using the latest version of VisualDSP++ (not that the latest version is very recent...). How do we see the assembly code?
If you're using VisualDSP++, you can set a breakpoint there, and when it hits the breakpoint, look at the assembly code in the "Disassembly" window. If you're using CCES, I think you can see it as mixed C/assembly (I'm not that familiar with CCES as I don't use it).
You could also go into the project settings for compile/assembly and check "save temporary files" - it should save the intermediate assembly source listing file that gets generated for your C file when you compile (.s extension) - look in there to see the assembly that it is generating for your C code.
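If it's easier from a command line, something like the following should produce the .s listing directly (the file name here is just an example; check the compiler manual for the exact switches):

cc21k -proc ADSP-21489 -S MyFilter.c

This compiles MyFilter.c to MyFilter.s without assembling it, so you can inspect the generated instructions.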
Are all your variables (SumOfAllSamples, AveragingFilterInput, etc.) declared as integers?
Hello MarcZ,

Apologies for the delay. In order to debug the issue further, could you please answer the queries below?

Can you elaborate on your setup? Which peripheral are you using to capture the ADC data? Are you facing any issue in receiving data from the ADC? Can you confirm whether the output of the ADC is correct? If yes, after the ADSP-21489 receives the ADC samples, what type of filter (FIR/IIR) are you implementing? Also, are you using an EZ-KIT or a custom board? Can you share a minimal project that replicates the issue on the ADSP-21489 EZ-KIT?

Regards,
Lalitha
Thanks for your thoughts; however, I don't see how this problem could be influenced by hardware of any kind. Just take a look at the code we presented, especially in the context of the comments, which are part of the code.
Just think about it. We take a number (which is a running total), subtract the influence of a sample that happened in the past, and add the influence of the latest sample, which we also record into the table. What's happening is that the running total is gaining and losing bits. To me, this could only happen due to "rounding errors".
Rounding errors are to be expected if we actually lose bits, for example as would happen with a division. But the part of the code that is losing the bits involves only subtraction and addition. There should never be any possibility of "losing bits" with simple addition and subtraction.
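If that assumption is wrong for single-precision floats, it should show up in a trivial round trip like the one below, which anyone can run on a host (the values are arbitrary, chosen only so the 24-bit mantissa comes into play):

#include <stdio.h>

int main(void)
{
    float Sample = 0.1F;                    /* not exactly representable in binary */
    float Sum = 1000.0F;
    float RoundTrip = (Sum + Sample) - Sum; /* add a sample, then remove it again */

    /* If addition and subtraction never lost bits, the error would print as 0 */
    printf("sample = %.9f  recovered = %.9f  error = %g\n",
           Sample, RoundTrip, RoundTrip - Sample);
    return 0;
}

A nonzero error here would mean the addition itself rounds, with no multiply or divide involved.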
It seems that the problem must be caused by how the compiler constructed the assembly code. It must be that the compiled code is allowed to lose bits due to its assembly construction (which we still have not had time to capture).
We tried putting the running average code into its own function, and completely disabling optimization over that function, but that did not influence the result in any way.
Thank you for your help :)