On page 3 of the ADXL206 data sheet, the "0 g bias repeatability" is listed as "+/- 10 mg." Sorry if this question is obvious, but could someone clearly define what that means? Is it the turn-on to turn-on repeatability of the sensor? Specifically: if I temperature calibrate/compensate an ADXL206 during one power-on for scale factor and bias errors, then power-cycle the ADXL206 and reuse my previous temperature compensation, could there be an additional error of up to +/- 10 mg even if that compensation was perfect? Thanks to anyone who can shed further light on this!
I have the same question as above, and I don't think this was ever answered by AD.
The data sheet gives a value of +/- 10 mg for 0 g bias repeatability, and I'm only interested in the range up to +/- 1 g. So my assumption is that this sensor only gives 1% accuracy even after calibration (i.e., 10 mg / 1 g x 100)?
I am sorry that this question was missed. In the past, we have struggled to maintain consistent coverage for this series of products. I believe that with our most recent organizational adjustments, this has been appropriately recognized and the product will have more consistent coverage in the future.
While I was not on this team, I can offer that this specification typically represents what we would expect for end-of-life drift, which would typically mean a lifetime of 10 years at +25°C (with de-rating for higher temperatures).
Bias drift is independent of the stimulus, so I would assume a linear model for the worst-case output:
acceleration x (328/312) + 10 mg.
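To make that model concrete, here is a minimal sketch of the worst-case calculation. It assumes the 328/312 ratio is the maximum-to-typical sensitivity ratio and takes the 10 mg bias repeatability figure from this thread; it is my interpretation, not an official ADI error model.

```python
def worst_case_output_g(accel_g: float) -> float:
    """Worst-case measured output (in g) per the linear model above:
    true acceleration scaled by the 328/312 sensitivity ratio, plus
    the 10 mg (0.010 g) bias repeatability term."""
    return accel_g * (328.0 / 312.0) + 0.010

# Worst-case error at a 1 g input is the output minus the true value:
true_g = 1.0
error_g = worst_case_output_g(true_g) - true_g  # roughly 61 mg
```

At 1 g this gives roughly 51 mg of sensitivity error plus the 10 mg bias term, which is consistent with the ~1% sensitivity figure discussed below once the initial sensitivity error is calibrated out.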
One clarification: assuming you calibrate the device to address the initial sensitivity error (1 - 328/312) and the temperature-dependent error (0.3%), we can offer some perspective on end-of-life drift. I am aware of a number of applications that used similar sensors in different packages, whose teams have reported that the end-of-life sensitivity drift (10 years, +25°C) will be in the region of 1%. Based on what I have observed, this is typically established with HTOL testing, then using Arrhenius relationships to project those drifts to an equivalent lifetime at +25°C. I suspect that similar thinking could be applied to the timeframe that could be supported at higher operating temperatures as well, but that is farther than I have been able to take such consideration.
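For readers unfamiliar with the Arrhenius projection mentioned above, here is a generic sketch of how HTOL stress hours are converted to equivalent use-condition hours. The activation energy of 0.7 eV is a commonly assumed placeholder, not an ADXL206-specific value, and the stress/use temperatures are illustrative only.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_stress_c: float, t_use_c: float,
                        ea_ev: float = 0.7) -> float:
    """Arrhenius acceleration factor between a stress temperature and a
    use temperature (both in °C), for an assumed activation energy Ea."""
    t_stress_k = t_stress_c + 273.15
    t_use_k = t_use_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV)
                    * (1.0 / t_use_k - 1.0 / t_stress_k))

# Example: project 1000 h of HTOL at +125°C to equivalent hours at +25°C.
af = acceleration_factor(125.0, 25.0)
equivalent_use_hours = 1000.0 * af
```

The drift measured over the stress duration is then attributed to `equivalent_use_hours` of operation at the use temperature, which is how a short HTOL run can stand in for a multi-year lifetime at +25°C.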
Does this help?
Many thanks for taking the time to answer my question.
If I understand correctly, I can calibrate the unit to remove the initial sensitivity error (and many others), but I still have a potential 10 mg end-of-life drift error, and that is typically a 10-year figure.
I think you were implying this "bias repeatability" is linear over time? Does that mean that for short missions (say 1 year) the post-calibration bias repeatability error is up to 1 mg?
Again, sorry for the gap in coverage on this product. Your interpretation is correct, with one exception: we would anticipate a degradation model that is closer to a square-root-of-time approximation. Hope that helps.
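The square-root-of-time model suggested above can be sketched as follows, scaling the 10 mg / 10-year end-of-life figure from this thread down to a shorter mission; the function name and parameters are illustrative, not from the data sheet.

```python
import math

def bias_drift_mg(mission_years: float, eol_drift_mg: float = 10.0,
                  eol_years: float = 10.0) -> float:
    """Projected bias drift (mg) for a given mission length, assuming
    drift grows with the square root of time and reaches eol_drift_mg
    at eol_years."""
    return eol_drift_mg * math.sqrt(mission_years / eol_years)

print(bias_drift_mg(1.0))  # ~3.2 mg for a 1-year mission
```

So under this model a 1-year mission would see up to about 3.2 mg of bias drift, rather than the 1 mg a linear model would predict.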