A reliability prediction for the die in any released Analog Devices product can be obtained at www.analog.com/ReliabilityData by entering the part name. A similar prediction can be generated for a process node, e.g. 0.18 µm. The predictions are given at the 60% and 90% confidence levels, based on an average operating temperature that you can enter.
Figure 1 - A reliability prediction using the tool on the Analog Devices web page for the AD7124 sigma-delta ADC
The above data is derived from high temperature operating life (HTOL) testing in the lab. The acceleration factor can be gauged using the Arrhenius formula, based on the expected average operating temperature in the application versus the temperature used during HTOL testing. Typical HTOL conditions are 1,000 or 2,000 hours at 125 °C. HTOL is one of a suite of tests carried out on every new Analog Devices product to back up the wafer fab process qualification. Other tests in the suite target early life failures (ELF), but HTOL is designed to estimate the failure rate along the flat bottom of the reliability bathtub curve and to show that nothing silly was introduced during the design. For instance, HTOL testing can detect systematic errors such as the use of 1.8 V capacitors in a 3.3 V domain. Anyway, I digress, so let's get back on topic.
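As a rough illustration of how the HTOL acceleration factor works, the Arrhenius formula can be sketched in a few lines of Python. The 0.7 eV activation energy and the 55 °C use temperature below are illustrative assumptions only; the appropriate activation energy depends on the dominant failure mechanism.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c: float, t_stress_c: float, ea_ev: float = 0.7) -> float:
    """Arrhenius acceleration factor between use and HTOL stress temperatures.

    ea_ev is an assumed activation energy; 0.7 eV is a common default,
    but the right value depends on the failure mechanism.
    """
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# 1,000 hours of HTOL at 125 degC is equivalent to roughly
# arrhenius_af(55, 125) * 1000 hours at an assumed 55 degC use temperature.
print(arrhenius_af(55, 125))  # roughly 78 with Ea = 0.7 eV
```

With these assumed values, each hour at 125 °C represents roughly 78 hours at 55 °C, which is why 1,000 to 2,000 hours of stress can stand in for many years of field operation.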
Depending on the use to which the data is being put, some clauses of IEC 61508 require the data at a 70% confidence level and others at a 90% confidence level.
For instance, see subclause 7.4.4.3.3 describing Route 2H and subclause 7.4.5, both discussed below.
Figure 2 - Route 2H requirement for field experience according to IEC 61508-2:2010 subclause 7.4.4.3.3
Also relevant are IEC 61508-2:2010's requirements for E/E/PE system implementation and subclause 7.4.5, entitled "Requirements for quantifying the effect of random hardware failures", which applies at the safety function level, taking architectures, human errors, diagnostics, diagnostic test intervals, etc. into account.
Figure 3 - Reliability prediction based on IEC 61508-2:2010 subclause 7.4.5
Interestingly, the data now only has to be available at a 70% confidence level. Since the ADI tool only generates data at the 60% and 90% confidence levels, you could be conservative and use the 90% confidence data, or you could do as the standard says and convert the data to the 70% level. Personally, I am not in favor of stacking margin on top of margin on top of margin, so I would say convert it. If the authors of the standard had wanted 90% confidence, they should have written that. I'm sure some will say the numbers are all only approximations anyway and that striving for too much accuracy is silly; but once again, the point of a metric is quantification. If it is to be done, it should be done as written in the standard, or the standard should be changed to remove the need. Once again, I digress, so back to the math.
Some sources state that the data in SN29500 and similar sources is actually at the 99% confidence level. If you feel like a challenge, you could argue the case with your external assessor and try reducing it to a 70% confidence level using the formulas described below.
Anyway, back to the math, which can easily be captured in a worksheet, giving you the freedom to make your own decisions and play with the numbers to see whether the extra effort makes a difference worth arguing about.
Figure 4 - Example from worksheet to convert from one confidence level to another
In cell A2 put the total number of operating hours.
In cell B2 put the number of observed failures during those operating hours.
Cell C2 is then calculated using the formula "=2*B2+2", which represents the degrees of freedom for a time-truncated test.
In cell D2 put the confidence level you require e.g. 70 or 90 to represent 70% or 90%.
Cell E2 is then calculated as "=1-D2/100", the corresponding significance level.
Cell F2 is then calculated as "=CHISQ.INV(D2/100,C2)", the chi-square value for the required confidence level and degrees of freedom.
Cell G2 is then calculated as "=2*A2/F2", which gives the lower bound on the MTBF in hours.
Cell H2 is then calculated as "=F2/(2*A2)*1e9", which gives the upper bound on the failure rate in FIT (failures per 10⁹ device-hours).
When you create your own spreadsheet, you can use the 70% and 90% data for products in the ADI database at www.analog.com/ReliabilityData to check you have implemented it correctly.
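If you prefer code to a spreadsheet, the same calculation can be sketched in Python. To keep the sketch self-contained it uses the closed-form chi-square CDF for even degrees of freedom (always the case here, since the degrees of freedom are 2r+2) instead of a statistics library; the 10⁶ device-hours and zero failures in the usage example are illustrative values, not ADI data.

```python
import math

def chi2_ppf_even(p: float, dof: int) -> float:
    """Left-tail inverse chi-square CDF for even degrees of freedom.

    For dof = 2k, P(X <= x) = 1 - exp(-x/2) * sum_{i<k} (x/2)**i / i!,
    so the quantile can be found by simple bisection.
    """
    k = dof // 2
    def cdf(x: float) -> float:
        s = sum((x / 2) ** i / math.factorial(i) for i in range(k))
        return 1.0 - math.exp(-x / 2) * s
    lo, hi = 0.0, 100.0 + 10.0 * dof
    for _ in range(200):
        mid = (lo + hi) / 2
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return lo

def fit_upper_bound(hours: float, failures: int, confidence_pct: float) -> float:
    """Upper-bound failure rate in FIT (failures per 1e9 device-hours)."""
    dof = 2 * failures + 2                            # cell C2
    chi2 = chi2_ppf_even(confidence_pct / 100, dof)   # cell F2
    return chi2 / (2 * hours) * 1e9                   # cell H2

# Illustrative values: 1e6 device-hours, zero observed failures.
print(round(fit_upper_bound(1e6, 0, 70)))  # 1204
print(round(fit_upper_bound(1e6, 0, 90)))  # 2303
```

As with the spreadsheet, you can check an implementation like this against the 70% and 90% figures in the ADI database.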
The influence of the required confidence level (60%, 90%, or 99%) wanes rapidly as the number of observed failures rises. A reliability database such as ADI's, where the number of observed failures is very low as a result of continuous improvement activities including analysis of customer returns, is therefore a worst case for sensitivity to the required confidence level. For example, using the values shown above in your own spreadsheet, changing from 70% confidence to 90% confidence increases the FIT from 1204 to 2303, a factor of 1.9. If the same exercise is done with 10 failures, the increase is only a factor of 1.2 as you go from 70% to 90%.
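This diminishing sensitivity can be checked directly: the 2T term cancels in the ratio, so the 90%-to-70% FIT ratio depends only on the two chi-square quantiles. A minimal sketch, again using the closed-form chi-square CDF for even degrees of freedom so no statistics library is needed:

```python
import math

def chi2_ppf_even(p: float, dof: int) -> float:
    # Inverse chi-square CDF for even dof via bisection on the
    # closed form P(X <= x) = 1 - exp(-x/2) * sum_{i<k} (x/2)**i / i!
    k = dof // 2
    def cdf(x: float) -> float:
        return 1.0 - math.exp(-x / 2) * sum(
            (x / 2) ** i / math.factorial(i) for i in range(k))
    lo, hi = 0.0, 100.0 + 10.0 * dof
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return lo

def confidence_ratio(failures: int) -> float:
    """Ratio of the 90% to the 70% upper-bound FIT for a given failure count."""
    dof = 2 * failures + 2
    return chi2_ppf_even(0.90, dof) / chi2_ppf_even(0.70, dof)

print(round(confidence_ratio(0), 2))   # 1.91 with zero failures
print(round(confidence_ratio(10), 2))  # about 1.24 with ten failures
```

With zero failures the ratio is exactly ln(0.1)/ln(0.3) ≈ 1.91; with ten failures it has already dropped to about 1.24.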
The tool can also be used to show that a new process, on which few products have yet been qualified, will give a very large FIT prediction because of the small number of accumulated device-hours.
If you want to learn more