
In my last blog I posed the question “What are Safety Integrity Levels?”.


According to IEC 61508, a safety integrity level is a “discrete level (one out of a possible four), corresponding to a range of safety integrity values, where….”. The full definition is not very useful as an introduction, so I have cut it short.


The abbreviation for Safety Integrity Level is SIL. A SIL is a way of quantifying the level of safety that is either expected or required. There are 4 levels, roughly an order of magnitude apart, so that for many process control applications a SIL 1 safety function gives a risk reduction of 10, SIL 2 of 100, SIL 3 of 1,000 and SIL 4 of 10,000.
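As a rough illustration, those nominal order-of-magnitude factors can be sketched in a few lines of Python. The lookup table and helper function below are my own illustration, not something taken from IEC 61508 itself:

```python
# Illustrative only: nominal risk-reduction factor per SIL, as described above.
SIL_RISK_REDUCTION = {1: 10, 2: 100, 3: 1_000, 4: 10_000}

def residual_risk(unmitigated_risk_per_year: float, sil: int) -> float:
    """Risk remaining after a safety function of the given SIL,
    assuming the nominal order-of-magnitude reduction is achieved."""
    return unmitigated_risk_per_year / SIL_RISK_REDUCTION[sil]

# A hazard expected 0.01 times per year, protected by a SIL 2 safety
# function, leaves a residual risk of roughly 1e-4 per year.
```

In other words, each step up in SIL buys (nominally) another factor of 10 in risk reduction.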


A hazard analysis is used to determine what safety functions are required, and a risk assessment then determines the required SIL. The risk assessment typically considers factors such as the number of people who might get hurt, the severity of the injuries and how often someone is exposed to the risk.



It should be remembered that a piece of equipment may be SIL certified as suitable for use in a safety function with a given SIL, but the SIL attaches to a safety function rather than to a piece of equipment. In fact, a single system can have many safety functions, and each of those safety functions could have a different SIL.


When designing a safety function, higher SILs require more measures to be taken to prevent the introduction of errors. This might include better requirements management, more design reviews, the use of coding standards or even the restricted use of certain language features such as pointers or interrupts.


Other safety standards have different forms of SIL:

  • Automotive has ASILs, which stands for Automotive Safety Integrity Levels; in order of increasing safety they are A, B, C and D
  • The machinery safety standard ISO 13849 has performance levels a, b, c, d and e
  • Avionics has design assurance levels E, D, C, B and A, where A offers the most safety and E the least


To me, the fact that there are 4 SILs also says that you can put a price on safety. Otherwise there would be only one level, namely SIL 4. However, if everything had to be developed to SIL 4, products would be so expensive that nobody could afford to buy or use them, which wouldn’t increase overall safety.


In the past, safety standards have had up to 7 levels. Today some people advocate combining SIL 1 with SIL 2, and SIL 3 with SIL 4, leaving just two safety levels. For now these people are in the minority, and to most experts four safety levels seems about right, especially for a basic safety standard such as IEC 61508.


The fact that the levels are an order of magnitude apart also says that when doing a functional safety analysis you shouldn’t be too fussy about getting the numbers correct to 3 decimal places.


Video of the day: (another tenuous link to the topic under discussion but if nothing else highlights the risks people will take and functional safety is meant to cover foreseeable misuse).


For next time: What are the 3 key requirements for a given SIL?



DSPs and audio decoding are critical elements in delivering the type of high-quality audio that today’s consumers expect. This is the first blog in a series examining the topic, starting with an overview of why DSPs are critical to audio design.


Nature renders an infinite number of audio channels in a truly open space for human ears to enjoy. Scientists and engineers have long tried to filter out the unwanted sound while capturing and reproducing the wanted audio in our living rooms. The phonograph, also known by the generic name gramophone, gave us single-channel audio just over 140 years ago. Over the years the technology graduated to stereo tapes and then multi-channel audio, with additional post-processing to reproduce audio with the highest possible accuracy and fidelity. However, recording audio with discrete microphones and replaying it through discrete speakers produced an unnatural listening experience, forcing technologists to increase the number of channels used for recording and playback. This posed yet another problem: recording, transporting and playing the huge amount of data generated by a larger number of audio channels. This new problem drove the invention of compression and decompression algorithms that preserve fidelity and dynamic range.


While compressing, or encoding, the recorded audio channels does not need to happen in real time, as it is accomplished in studios, the decompression, or decoding, must be done in real time. The advent of digital signal processing (DSP) chips enabled this. Audio from stationary sources such as concert halls, panel discussions, etc., can be handled easily with this method of transporting audio. But most audio that we experience every day is not stationary, be it cars, people walking, or a conversation we walk past. These sounds move through the space around us and are thus not easy to reproduce with discrete speakers without elaborate recording, mixing and playback techniques. Then came object audio, with a larger number of channels and virtualization, which further increased the complexity and horsepower required of DSPs for decoding and rendering.


In my next blog, I’ll take a closer look at the performance characteristics of modern DSPs and the importance of having DSPs certified to work with audio decoder IPs to make music for your customer’s ears.

You've clinked your glasses and made your resolutions, many of which I suspect you've already broken. Before 2017 becomes a speck in our rear view mirrors let's take one final look back.


EngineerZone is a thriving community of customers, partners, and ADI employees sharing ideas and finding solutions. Together we produced a great deal of interesting and informative content. Here are the 2017 highlights...



I recently participated in a panel discussion hosted by the Greater Boston Chamber of Commerce. The topic was “The Case for Reinvention: How to Push Past the Status Quo and Think Big for Long-Term Success.” Here are some excerpts from that discussion and my insights into how Analog Devices has maintained its focus on innovation for half a century.


On a different approach to corporate structure

While ADI is a relatively large corporation, in my experience, ADI is run as a “bottom-up” company. It functions as a federation of smaller operations with some top-level structure. That has given us the ability to stay pretty mobile and agile even as the company has gotten larger.


That has been important throughout the years, but it’s even more so now. The old formula was to try lots of things and be prepared to double down on the ones that work. Today, you don’t have the luxury of trying so many things. You have to be prepared to change the business model and look at the world differently. Maybe you’re not selling products; you’re starting to sell services or change what you’re doing. You have to be able to think differently and make adjustments to your vision.


Pictured from left to right: Audrey Asistio, anchor, NECN/NBC Boston; Maggie DeMont, SVP, corporate strategy, Houghton Mifflin Harcourt; Chris Froio, SVP/GM, Reebok America; Dave Robertson, ADI fellow and technology director; Rich Rovner, VP of marketing, MathWorks.


How to work with customers 

Most people agree on the importance of listening to their customers, but they don’t always know the best way to do it. You can gather a lot of input, but it can be so varied that it’s hard to find what to focus on. At ADI, we look for customers who are really trying to do something innovative in a certain space and work with them. We try to figure out what works, what doesn’t. Is what they’re trying to do something that others are trying to do?


At the same time, customers are often looking to you to educate them and lend your expertise so they don’t make mistakes. You have to talk to them about their problems, but you have to understand the critical problem that they haven’t even thought about yet. As an example, consider the interface to your smartphone. No one said, “Hey, I hate all these buttons on my phone,” but when somebody gave them a touchscreen, it became, “Wow, this is really cool, how did I live without it?” Getting rid of the buttons solved a problem that you probably didn’t even realize you had.


And you can’t talk to just one customer. One of the interesting things about ADI is the diversity of our end markets. As a semiconductor company, we’re in consumer applications, we’re in automotive applications, we’re in healthcare, we’re in industrial. We will work across these different businesses and technologies to pick up fundamental themes that tend to have longer time constants. We want to avoid getting trapped in a spot market that’s really hot for a minute, but then turns out to be a flash in the pan. The major trend is what we want to ride.


"...some of the most exciting work starts out in small places."

Maintaining a positive culture

Of the many things that make up an organization’s culture, authenticity is extremely important. One of the advantages of a “bottom-up” company is that it’s less dependent on the words of senior management. While leadership from the corporate center is certainly important to employees, what really matters are the local teams they’re in. What’s said and done there is the real proof of what the organization is all about.


Creating opportunity

When you’re growing at 20%, there are plenty of opportunities for employee growth. A significant challenge comes in creating a dynamic environment when that rate of growth slows. Where do the opportunities come from?


That’s where this idea of a federation of small groups has helped at ADI. A group may see about 15 percent turnover each year, and hopefully that’s not because people are leaving the company. If you hire people who thrive on solving interesting problems, they look for new challenges. So you often get people moving between groups. For employees, it helps keep their jobs from getting stale. And for the teams themselves, it does much the same thing. It means the team has 15 percent new faces, new perspectives, and new skillsets.


Larger companies vs. smaller ones

Size allows you to make certain portfolio bets. Back to the culture, employees are more comfortable with taking risks if they feel like the company can provide some overall stability. Being large helps that because you can encourage a startup mindset with the security of knowing failure won’t be lethal.


On the other hand, some of the most exciting work starts out in small places. So if your management says, “We don’t want to talk about anything that’s not going to be $100 million per year in revenue,” you’ll miss all the cool stuff. So again, I think part of how ADI tries to manage that is a federation approach where financially we are a $5.5 billion company, but a lot of strategic decisions are made at a much smaller level, where businesses are free to chase a $10 million opportunity that can turn into something big five years from now.


 About the author: Dave Robertson is a Fellow at Analog Devices (ADI) and the director of its high-speed converter group.


Safety Matters: Is it Safe?

Posted by Tom-M Employee Jan 16, 2018

In my last blog, I promised to answer the question “Is it safe?”.


Safety is freedom from unacceptable risk. The word unacceptable is very important here. Jumping out of a plane with a parachute is obviously risky, but the person involved has clearly decided it is an acceptable risk. Similarly, in all our lives we take risks, from drinking hot coffee to driving cars. Some reckless people even smoke.



When you go to work, your risk of dying should not be significantly raised, and your employer has a duty of care to make sure you are not harmed or injured. This harm might come about while working with, for instance, a machine or robot.


Acceptable risk for a healthy adult worker is often interpreted as a chance of dying of 1e-4 per year. If the public is at risk, you are deemed to have a higher duty to provide more safety, and the acceptable risk is a factor of 10 lower, at 1e-5 per year.
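Combining such a tolerable-risk target with nominal order-of-magnitude risk-reduction factors (10 per SIL step) gives a back-of-envelope way to derive a minimum SIL. The sketch below is my own simplification, not the formal IEC 61508 risk-graph or table method:

```python
# Back-of-envelope sketch (my own, simplified): the required risk-reduction
# factor (RRF) is the ratio of the unmitigated hazard frequency to the
# tolerable frequency; the minimum SIL is the first nominal band covering it.

def required_sil(unmitigated_per_year: float, tolerable_per_year: float) -> int:
    rrf = unmitigated_per_year / tolerable_per_year
    if rrf <= 1:
        return 0  # already tolerable; no safety function needed
    for sil, factor in ((1, 10), (2, 100), (3, 1_000), (4, 10_000)):
        if rrf <= factor * (1 + 1e-9):  # tolerance for floating-point ratios
            return sil
    raise ValueError("risk reduction beyond SIL 4; redesign the process instead")

# A hazard at 1e-2/year with workers exposed (tolerable 1e-4/year) needs a
# risk reduction of 100, i.e. SIL 2; with the public exposed (1e-5/year),
# the same hazard needs SIL 3.
```

Note how moving from worker to public exposure, a factor of 10, pushes the requirement up by exactly one SIL.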


A more technical discussion of the topic is given in The Safety Critical Systems Handbook section 2.1.1.


In general, your employer has 3 options and in order of priority they are

  1. Eliminate the risk
  2. Engineer a solution
  3. Warn and inform


There are many ways to eliminate a risk, such as changing the process so that a dangerous machine or chemical is not required, but they are often unpalatable due to cost or other reasons. Warning and informing can be done, as with the hot coffee mentioned above, but only as a last resort (see picture of a Canadian coffee cup here).


Functional safety is mostly about option 2, engineering a solution. For a dangerous machine, this might involve putting a sensor on the machine door so that if the door is opened the machine is stopped before your hand gets inside. In a future post I will describe more about machine safety, but for now I will just say that functional safety typically involves 3 components: 1) a sensor, 2) some logic to make a decision based on the sensor output, and 3) an actuator to take the system to the safe state.


The combination of the above 3 elements constitutes a safety function. That safety function has 3 key properties:


  1. A safety integrity level
  2. A safe state
  3. The time to achieve the safe state
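These three properties lend themselves to a small record type. A minimal Python sketch (the type and field names are my own, not from any standard, and the door-interlock values are purely illustrative):

```python
from dataclasses import dataclass

# Minimal sketch of the three key properties of a safety function.
@dataclass
class SafetyFunction:
    name: str
    sil: int                     # safety integrity level, 1..4
    safe_state: str              # e.g. "motor stopped"
    time_to_safe_state_s: float  # the safe state must be reached within this

# Hypothetical machine-door interlock, with made-up values:
door_interlock = SafetyFunction(
    name="machine door interlock",
    sil=2,
    safe_state="spindle motor stopped",
    time_to_safe_state_s=0.5,
)
```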


While health and safety deals with everyday safety, functional safety typically deals with the 1-in-10-year, 1-in-100-year or 1-in-1000-year accidents. Functional safety is having the confidence that a piece of equipment will carry out its safety-related task when required to do so. However, it is only part of the safety spectrum, which also includes electrical safety, mechanical safety, intrinsic safety and many other types.



The Video of the day is based on the Saw Stop system – see


These guys don’t make any claims for functional safety but I think it is a good illustration of a safety function – there is definitely a sensor to sense a human hand, there is definitely something to make the decision to stop the saw and thrash the motor and we can all see the actuator. The safe state is clearly to stop the saw and the time to achieve the safe state is before the saw cuts the hand.


For the next time -  What are safety integrity levels?


Note - for more on the order of priority in design see ISO 12100:2010 figure 1

Safety matters to everybody – even Homer Simpson. There are various forms of safety, including electrical safety, mechanical safety and intrinsic safety. A form of safety of particular importance to Analog Devices is functional safety. Functional safety relates to the confidence that a particular piece of equipment will carry out its safety-related task when asked to do so. For instance, when a machine door is opened, the many spinning motors inside should be stopped before the person can put their fingers in the motor. In my new blog series "Safety Matters" I will cover some initial introductory topics, but thereafter will range across topics such as gas sensors, robots, machines, PLC analog I/O cards, trains, software and all the other application areas that come up in my remit as the functional-safety-for-industrial guy. I might even throw in the odd bit about automotive, avionics or medical functional safety.


You might be thinking: what makes this guy an expert? Well, allow me to introduce myself; my name is Tom Meany. I joined Analog Devices over 30 years ago as the IT guy, a job I stuck for only about 18 months. Since then I have worked in the test department on parts such as sigma-delta ADCs, microcontrollers and power meter ICs. After that I worked as a project leader on similar parts, doing some light design work, before joining the automotive group, where I spent 9 years working on oil level sensors, air bag sensors and lead-acid battery monitor chips. For the last 5 years or more I have been the functional safety guy for our industrial products. It was while working in the automotive group on air bag sensors that I was first exposed to functional safety, along with the automotive demands for high quality, high reliability and extreme EMC robustness.


I am based in Limerick, Ireland, and I am the Irish expert on several IEC committees, including:

  • IEC 61508 hardware working group
  • IEC 61508 software working group
  • IEC 61800-5-2 working group 


I also have an interest in cyber security, since I discovered that a system can’t be safe unless it is also secure, and I am therefore an observing member of ISA 99, which along with IEC TC 65 works on the IEC 62443 specification.


I am a TUV Rheinland functional safety engineer in the area of machinery, and I have a certificate in Functional Safety and Reliability from Technis. I hold 8 US patents related to semiconductors, a degree in electronics and a master's in applied maths and computing.


In my personal life, I run a lot and have completed over 35 marathons and ultra-marathons. Long distance running gives you great time to ponder functional safety.


I like to find YouTube videos with a link to safety – no matter how tenuous the link – this one links both marathon running and safety -


In my next blog I will try to answer the question “Is it Safe?”


Bye for now...Tom

One of the benefits of my role as President and CEO of Analog Devices is traveling the world to meet with customers from a variety of industries and regions and hearing their perspective on the technological, business, and market challenges they face. Our customers produce the wide variety of electronic equipment that we all rely on for transportation, healthcare, communication, and many of the benefits of modern life. Our discussions typically focus both on their current needs to intelligently bridge the physical and digital worlds and on the innovation they want to enable in the future. I have drawn upon those conversations and other research to compile the following five technology macro-trends that I believe will have the greatest impact on business and society during 2018.


Artificial intelligence
Customers in every market segment are feverishly trying to understand the value of artificial intelligence and machine learning for their businesses, mirroring efforts of a decade ago to realize the benefits of digitalization. The focus on utilizing AI will accelerate in 2018, with the performance/affordability barrier continuing to be broken down and targeted applications of AI achieving financial and application-level impact in Industrial settings. For example, AI has progressed to a point where industrial robots are able to learn and adapt to new environments or unfamiliar objects without being specifically trained.


AI at the edge will begin transitioning from a novelty to a norm through innovation related to low-power processing, while intelligent edge computing will become a reality with context-enriched data and information driving smarter system partitioning between the edge and the Cloud.


Meanwhile the development of AI applications that rival human intelligence will remain squarely in the university research domain.


Autonomous/intelligent machines
Autonomous systems for cars, drones, and robots will continue to advance during 2018, but only to a certain point due to unresolved regulatory and technical issues. Nevertheless, over the coming months, we will continue to see progress in the adoption of autonomous systems through initiatives such as trial deployments of robo-taxis in limited areas. In particular, long-haul transportation such as trucking and trains will be among the applications that experience true advancement of autonomous functionality in the near term.


Driven by the continuing quest for productivity gains, the drive to add intelligence to machines will also accelerate factory automation/Industry 4.0 initiatives.  For example, advances in machine learning will significantly improve the ability of systems to provide valuable performance recommendations and predictions based on their own independent condition monitoring.


Ubiquitous wireless sensing networks and data
The combination of advanced materials, enhanced functionality, and MEMS is enabling breakthroughs in sensor form factors and cost, which will enable ubiquitous wireless sensor networks. Deployment of wireless mesh networking in IoT and Industrial applications will enable sensing capability to be added to existing systems without extensive rewiring.  However, end-to-end security from the sensor to the Cloud will be the gating requirement for Industrial customers to begin deploying Industrial IoT initiatives at scale.


The drive to make products and systems more intelligent will also increase the need to manage and analyze an ever-increasing flow of data. Data centers will require higher processing performance as the data load continues to increase, as well as advanced power management innovation to mitigate risk presented by high thermal levels in data center systems. We will also begin to see greater intelligence integrated at the edge node to begin to triage and tame the flow of data.


Machine-human interface
Mixed reality systems will continue to emerge and grow in popularity, with augmented reality and virtual reality ecosystems flourishing and stimulating innovation.  As the use of commercial AR/VR systems accelerates, costs will decrease and applicability will extend into spaces such as Industrial for off-site diagnostics and repair.

In addition, voice-as-user interface has now become an expectation but this technology continues to face limitations, especially in noisy environments. Gartner predicts that in 2018, 30% of our interactions with technology will be through "conversations" with smart machines, meaning that technology and service providers need to invest now to improve currently limited voice interfaces.


Heterogeneous manufacturing
With the costs of deep-submicron development skyrocketing and Moore’s Law facing increasing technology and economic headwinds, heterogeneous integration of multiple technologies in a package, on a laminate, or even on a single silicon substrate will increase. New business models will emerge to capitalize on heterogeneous manufacturing, enabling recombinant innovation for small-scale semiconductor industry players who cannot afford to invest in state-of-the art IC lithographies. For suppliers with greater scope and scale, the addition of signal processing algorithms to silicon will increase the value of their solutions.


It will be interesting to see where heterogeneous manufacturing and the other four technology macro-trends evolve over the coming year. It is often said that the best way to predict the future is to create it.  As semiconductor innovation will be the foundation for many of these emerging applications and analog technology will become even more critical in a data-hungry world, you can be sure that we at Analog Devices will be working diligently to make these predictions a reality in 2018.

Don’t Throw the Meter Out with the Bath Water - How to Extend Electricity Meter Lifetime

Electric utilities around the world leverage smart meters and Advanced Metering Infrastructure (AMI) to enable remote meter readings, remote connect/disconnect, demand/response, and other operational efficiencies. Utilities are under constant pressure to improve operational efficiency while mitigating rate hikes and improving customer service. While smart meters and AMI remove the need for in-field meter readers, expensive work crews still need to be scheduled to replace meters that are nearing the end of their useful lives. As it turns out, the vast majority of those replaced meters are still operational and would be for many more years. What if a meter’s lifetime could be extended so that it was only replaced just before its decline in accuracy?


Extending the useful life of meters yields a surprisingly high return. Consider a hypothetical utility that spends €100 ($119.77) on a meter and its installation. Assuming a 15-year starting useful life, extending the service life of a meter by just 2 years results in lifetime savings of €13.30 ($15.93) per meter. Extending the meter life by 3 years increases the savings to €20 ($23.95) per meter over its lifetime. If the current useful life of the meter is less than 15 years, or purchase and installation costs are higher than €100, the resulting savings are even more significant. In addition, extending the useful life of a meter improves customer service, as fewer service disruptions would be needed.
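Those figures can be reproduced by simply amortizing the purchase-and-installation cost over the meter's useful life. This is a deliberate simplification on my part; a real analysis would also discount future cash flows:

```python
# Simplified amortization model: the value of extending a meter's life is the
# annualized cost multiplied by the number of extra years in service.

def lifetime_savings(cost_eur: float, base_life_years: float,
                     extension_years: float) -> float:
    annualized_cost = cost_eur / base_life_years  # e.g. 100/15 ≈ €6.67/year
    return annualized_cost * extension_years

# A €100 meter over 15 years: a 2-year extension saves about €13.33 per
# meter, and a 3-year extension saves €20.
```

The small gap between €13.33 here and the €13.30 quoted above comes down to rounding in the quoted figure.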


Current best practices require that utilities or regulatory bodies determine the useful life of meters based on the statistical distribution of failures. Typically, this is done by reliability engineers using the Weibull distribution, whose failure rate over time forms the familiar bathtub curve depicted in Figure 1. Reliability engineers use these techniques to ensure that a meter’s measurement accuracy remains within class before it is replaced.

Historically, a lot of attention was devoted to reducing the impact of early-life failures. This is accomplished by improving the manufacturing process, environmental burn-in, and extensive testing. The wear-out region of the curve, which typically has a Gaussian distribution, is avoided by using conservative (three or more standard deviations) statistical methods to minimize the possibility of an out-of-spec device remaining in service. Furthermore, sample testing is required in many global regions to spot-check meter performance during deployment. However, the biggest drawback of these methods is that often well over 99% of meters removed from operation still perform within specifications. There has never been a cost-effective way to verify the measurement accuracy of each device.


Adopting non-invasive, real-time accuracy monitoring technology in smart meters will extend the useful life of the meter without increasing the risk of out-of-spec devices in the field. One example is mSure® technology from Analog Devices. mSure technology is integrated into an energy metering chip, which continuously monitors the accuracy of the entire measurement signal chain, including the current and voltage sensors in each meter. Meter accuracy is then communicated, along with energy consumption and other data, via AMI to the cloud. Having the complete picture of each meter’s accuracy enables the utility to make data-driven decisions on required meter replacements.



As with all new technologies, phasing-in real-time accuracy monitoring technology is a prudent approach. Existing field sampling protocols can be used to confirm effectiveness of the solution and to train predictive analytics. After a couple of years of data correlation, field sampling can be reduced or eliminated, realizing additional cost savings.


One question that often comes up is: what about all the other meter failure mechanisms, such as backup batteries, power supplies and LCDs (liquid crystal displays)? Modern meters already monitor the health of the battery and can report this parameter to the meter data management software. Failure of the power supply will result in the meter becoming unreachable over the data network, thus flagging the service department. And with remote meter reading, the LCD is no longer of high importance, and its failure will eventually be reported by the customer.


Energy measurement accuracy is the last mission-critical parameter that can now be effectively monitored enabling the extension of a smart meter’s useful life. With the deployment of non-invasive, real-time accuracy monitoring technologies, smart meters can deliver added value and increase a utility’s return on investment.


To learn more about mSure® technology from Analog Devices, please visit:

According to Northeast Group, energy suppliers are losing $96B each year to energy theft. To put that in context, it is roughly equivalent to the total goal for climate action financing by developing countries in 2020. Although often perceived as a developing nation issue, the problem is widespread and impacts every geographical region. Creative thieves use a variety of methods to siphon energy including direct line tapping, magnetic interference, and bypassing the electricity meter. Given the size of the problem, a variety of methods have been developed to detect theft attempts and inform the energy supplier so that appropriate action can be taken. So far, results have been unsatisfactory and energy theft continues to rise. How can we reverse this trend?


The root cause of the problem is that each tamper detection method has weaknesses. The insights and alerts generated are prone to error, leading to a lack of trust in the solution. They provide interesting views on the problem, but they don’t provide real-time actionable intelligence.


The most pervasive method in use today is pattern-based analytics, using machine learning to identify anomalies and profile tamper candidates. Meter-based historical and neighbor data are combined with other sources and mined for patterns that deviate from an expected norm. Anomalies can be priority-ranked and, in theory, offenders caught. In practice, though, this method tends to deliver a large number of false positives (i.e. results that are profiled as tampers but are actually not). For example, a homeowner goes on an extended work assignment, leaving the property unoccupied for a few months. Power consumption drops and a tamper-candidate alert is triggered, leading the energy provider to initiate an erroneous investigation, with a resulting waste of resources, a frustrated homeowner, and a damaged reputation. Another problem is that, by definition, the analysis relies on historical data and lags the actual theft.
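The false-positive mechanism is easy to reproduce with even the simplest anomaly rule. The toy sketch below is my own illustration, not any vendor's actual algorithm: flag a meter whose recent consumption falls far below its own history, and watch it fire on a vacant home:

```python
from statistics import mean, stdev

# Toy anomaly rule: flag a tamper candidate when recent consumption drops
# more than z_threshold standard deviations below the meter's historical mean.
def is_tamper_candidate(history_kwh, recent_kwh, z_threshold=3.0):
    mu, sigma = mean(history_kwh), stdev(history_kwh)
    return (mu - mean(recent_kwh)) / sigma > z_threshold

# Twelve months of normal usage, then the home stands empty for two months:
history = [410, 395, 420, 405, 398, 412, 401, 415, 399, 408, 403, 411]
vacation = [25, 30]
print(is_tamper_candidate(history, vacation))  # True: flagged, yet nothing was tampered
```

The rule fires exactly as designed; the data alone cannot distinguish a bypassed meter from an empty house, which is why historical-pattern methods need corroborating evidence before action is taken.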


Another common method of tamper detection is meter-hardware protection. Basic meters contain built-in detectors that are tripped by certain kinds of tamper attempts and then alert the energy supplier. Anecdotal feedback from utilities deploying these detectors indicates that, generally, such systems are over-sensitive and also prone to the false positive problem. In short, the alerts cannot be acted upon because in a high number of cases the alerts are triggered innocently.


Recent innovations have taken a more holistic, grid-intelligence or network-based approach. Energy consumption is measured at multiple points in the energy distribution chain, results are compared, and any differences are attributed to technical or non-technical (i.e. theft) losses. Such solutions show promise, but the granularity of the results is wide-ranging. It is simply not economical to measure consumption at all the network points needed to trace a theft to a specific end node.


All existing methods also suffer from one core flaw. While they can, to a greater or lesser extent, point to a potential tamper, they cannot reliably indicate the amount of energy stolen.


A new approach is needed, providing on-meter, continuous real-time monitoring with an associated analytics capability that can profile, quantify, and alert energy suppliers to tamper attempts. This approach must deliver consistent and reliable results that allow action to be taken with high confidence.



That is where mSure® comes in. mSure is an agent that resides in the smart meter and monitors what happens at the sensor used to detect energy consumption. Any change to the characteristics of the sensor that would be induced by an attempt to bypass or saturate the meter can be immediately detected. That enables mSure to send the energy provider a tamper alert and/or to activate a visual flag at the meter, which can act as a deterrent to potential tampering. As the impact of various direct tamper methods on the sensor can be profiled, the type of tamper can be recognized with high confidence and the number of false positives significantly reduced. In addition, by understanding and analyzing the change in characteristics, an estimate can be made of the amount of energy stolen, not just that a tamper event has occurred.


While a foolproof revenue protection solution will likely embrace a combination of methods, mSure provides the missing piece of the puzzle: a meter-level, real-time tamper detection capability that can be acted on with confidence. In short, actionable intelligence.


To learn more about mSure® technology from Analog Devices, please visit:

The current Information Age is both a blessing and a curse. While we now have access to vast knowledge banks anytime and anywhere, the majority of information produced today is garbage. Our institutions strain tirelessly to help us sift the meaningful information we care about from the deluge of junk surrounding it. The world of engineering is not immune. With dozens of competitors, thousands of products, and inestimable data quantifying the performance of those products, how is an engineer expected to find the right product quickly with any level of certainty?


This is a challenging problem that affects everyone, and we’ve decided to do something about it.  We would like to introduce Performance Gallery.


Click here to open Performance Gallery in a new window




We believe Performance Gallery provides customers with a quick and efficient interface to view data. Additionally, engineers within Analog Devices now have a streamlined mechanism for sharing data. Together, these two features open a powerful communication channel that lets us speak directly to customers in a timely manner. With this data-driven approach we can promote results from an experiment in a lab to the world in a matter of seconds. We can move up in abstraction and allow product-to-product comparisons. There are so many opportunities for innovation that we believe we are limited only by our imagination.


Let’s work through an example where Performance Gallery could help.


Problem: I have an AD4622-2, and I want to know what the common-mode rejection ratio is with a 1 MHz input signal.



Step 1 – Open Performance Gallery and select the AD4622-2:


Step 2 – Under the product name (top, left corner), click on the “Filters” button:


Step 3 – Select the Common-Mode Rejection Ratio field, and Close (you can hit “Close”, click outside, or hit escape):


Step 4 – Mouse over the curves, positioning the cursor at a Frequency value of 1 MHz. The trace reflects the curve value nearest the mouse cursor at the given x location:


Answer: In approximately three clicks we were able to get to the answer, somewhere around 50 dB. 


You can further filter the results by clicking on the “Lines” button, and selecting, say ±15 V:



Performance Gallery is not a be-all and end-all solution. It is not an oracle with infinite wisdom and discernment. It is not a simulator (that’s Virtual Eval). However, Performance Gallery is an engineer’s virtual art gallery – where the art is the data you want to see. And as you use it, imagine what it could do for you tomorrow.


Feel free to provide suggestions below. Don’t see a product you like? Make a request. Your voice carries more weight than you know.


design tools

If you never thought that Analog Devices would attend, let alone participate in, the National Ploughing Championship (NPC) in Screggan, Ireland, well, you’d better think again. The National Ploughing Championship is the single largest farm event in Western Europe. NPC is hosted over three days with more than 2 million square feet of exhibitor space. This year NPC hosted over 1,700 exhibitors and attracted 300,000-plus visitors. This competition and trade exhibition is just one of a series of farmer-centric events that ADI has participated in over the past 12 months.


At the National Ploughing Championship, Analog Devices (ADI) showcased the latest version of our crop monitoring solution to a broad audience of crop and livestock farmers. The solution, comprising air and soil monitoring sensors, demonstrated a complete environmental sensing capability that accurately measures the micro/nano climate conditions that affect crop growth. Most importantly, we showed how this data is delivered in an efficient, easy-to-use mobile app, making the information much easier to process and act on. The response from visitors to our booth was overwhelmingly positive. Everyone recognized the value of instrumentation in helping farmers make better, data-driven decisions.

The National Ploughing Championship provided valuable feedback, and confirmation of the types of solutions that farmers need to meet the challenge of feeding a growing population cost effectively and profitably. As a bonus, we fully immersed ourselves in the spirit of the event, including having our car extracted from the mud-covered field by one of the many tractors.


To learn more about our crop monitoring solution please go to



Reliable data is fundamental. Even when sitting in a crowded stadium watching U2? YEE HA I got the tickets! Now I need to tell all my loser friends how clever I was to get here. Connect to stadium Wi-Fi – not happening. I log onto Facebook...try to post. Still trying. On to Instagram. (I need to put my glasses on…to see what to do.) Snapchat? FORGET IT – my 20-year-old son will disown me!


I spend the entire concert trying to tell my “nonexistent” virtual friends what they are missing. By the time I am home, I’m obsessed with checking my likes and replies. A crowded or inadequate network is something we all know and don’t love. However, a failing network in a critical IoT system can be disastrous.


For IoT, a reliable network is key to success. The vast majority of connected objects will connect back to the cloud wirelessly using RF and microwave frequencies. The ability to operate reliably is especially challenging in an environment such as a factory where there is metal and concrete throughout the facility.


Reliable operation needs to encompass everything from low to high data rates, short- to long-range operating distances, and devices situated in hard-to-reach areas that only need to communicate when absolutely necessary. Take forest fire sensing: when a blaze is detected, notification needs to happen, and quickly. Some devices may therefore go months or years without communicating, while others will need to operate continuously across mission-critical secure networks. Many of these sensor nodes will also be self-powered through batteries or energy harvesters, so efficient operation is another key to success. The communication networks are critical to transporting intelligence from sensor to cloud across these differing requirements.


What is needed? Ideally, a technology that is low cost, low power, and low latency, but with the capability to scale a system with unrestricted sensor placement. One way to build a reliable network is to use alternate pathways and channels to overcome interference: if a signal faces potential interference, it simply moves to another channel rather than risk downtime.
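As a rough illustration (the channel numbers and the interference check below are hypothetical, not a specific protocol implementation), such channel hopping might look like:

```python
# Hypothetical sketch of channel hopping: stay on the current channel if it
# is clear, otherwise move to the first alternate that is free of interference.
CHANNELS = list(range(11, 27))  # e.g. the 16 channels used by IEEE 802.15.4

def pick_clear_channel(current, interfered):
    """Return `current` if clear, else the first clear alternate channel."""
    if current not in interfered:
        return current
    for ch in CHANNELS:
        if ch not in interfered:
            return ch
    raise RuntimeError("no clear channel available")
```

Real mesh protocols combine this kind of frequency agility with alternate routing paths, so a single blocked channel or node never takes the network down.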


And finally, maybe we need to look at the bright side of drop out in crowded social situations; perhaps we should try listening to the music instead of posting about every moment. (I don’t care where you are or how much fun you theoretically are having! I AM NOT JEALOUS!)


From Rant to Reality

To learn more about low power, secure wireless networks from ADI

Want to get a better feel for how a precision SAR ADC performs in a signal chain design before you have your PCB manufactured? Tired of having to spend time and redesign effort to get your precision data acquisition design just right? Then give our new AD7960 SAR LTspice model a try and tell us about your experience in the comments. The models, two test benches, and a user guide are attached to this blog in a zip file and may be downloaded below.

The PDF user guide contained in the zip file will lead you through the model, demonstrate how to run the two example test benches, and show the results you should expect. Please note this is a BETA model. As this is still a work in progress, you may find some bugs – but that’s what the comment section is for.

The first test bench simulates a full-scale input step to the ADC and driver circuit, similar to what you might see if you were multiplexing many channels into one ADC. The simulation results are shown below, along with waveforms of the input settling.

The second test bench shows the AC performance of the ADC and drive circuit and how to perform an FFT in LTspice. The input to the driver circuit and ADC in this case is a 14.6 kHz differential sine wave, and we perform 1024 conversions in order to do a 1024-point FFT. The FFT result is shown below.
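The same FFT step can be sketched outside LTspice. The sketch below assumes a 5 MSPS sample rate and places the input tone on an exact FFT bin (≈14.65 kHz, close to the 14.6 kHz used in the test bench), so the 1024-point FFT is coherently sampled and needs no window:

```python
import numpy as np

fs, n = 5.0e6, 1024                 # assumed sample rate; 1024-point record
k = 3                               # whole number of cycles -> coherent sampling
f_in = k * fs / n                   # ~14.65 kHz input tone
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_in * t)    # ideal differential sine at the ADC input
spectrum = np.abs(np.fft.rfft(x)) / (n / 2)   # normalize: full-scale tone -> 1.0
peak_bin = int(np.argmax(spectrum[1:])) + 1   # locate the tone, skipping dc
```

With coherent sampling, all the tone energy lands in bin k; in a real conversion record, the remaining bins would reveal the noise and distortion floor.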

Feel free to give the model and test benches a try as well as make changes to simulate different scenarios e.g. different driver amplifiers, RC combinations etc. As stated above the model is a BETA model. We welcome any and all feedback regarding what you like, don’t like or what you would like to see added. Please comment below this blog to share your thoughts on our new LTspice model for the AD7960 Precision SAR ADC.

Welcome back to the ADAQ798x ADC driver configuration blog series! Today, we’ll conclude this series with an overview of the Sallen-Key active low-pass filter topology for the ADAQ798x. This configuration is one of the simpler active filtering implementations, and allows the ADAQ798x to maximize performance even when interfacing with noisy input sources and sensors.


Sallen-Key Low-Pass Filter

The Sallen-Key topology can be used to configure the ADAQ798x’s ADC driver as an active, two-pole, low-pass filter. This configuration is relatively simple: the ADC driver is set in a non-inverting configuration, so the filter doesn’t directly impact its performance and bandwidth (see ADI’s Linear Circuit Design Handbook). The implementation of the low-pass filter requires two resistors (R1 and R2) and two capacitors (C1 and C2) to set the filter cut-off, and two optional resistors (Rf and Rg) to add signal gain:


The configuration can be thought of as cascading a -40 dB/decade filter followed by a gain stage:



The values of R1, R2, C1 and C2 determine the filter’s shape and response. For this blog post we’ll focus on a configuration where R1 = R2 and C1 = C2. This combination results in a filter with a Q factor of 0.5, and behaves similarly to two equivalent RC low-pass filters in series. The frequency response for this case is:


Assuming R1 = R2 = R and C1 = C2 = C, the filter corner frequency is given by:


At the corner frequency fc, the response of the filter is roughly -6 dB from its dc gain. The dc gain of the filter is given by the non-inverting gain relationship we saw in previous posts:


This configuration can reduce out-of-band noise from the signal source, sensor, or other analog front-end circuitry. If these pieces of the signal chain feature significantly more noise than the components included in the ADAQ798x, and the signal bandwidth is small compared to the Nyquist rate of the ADC, then using this configuration can help improve the system noise performance. The rms voltage noise from a source connected to the filter input (vn rms) is:


where ein is the noise spectral density from the input source, AV is the gain of the ADC driver (shown above), and fENBW is the effective noise bandwidth of the filter. This assumes that the active filter cutoff frequency is significantly lower than that of the ADAQ798x’s integrated RC filter (which will virtually always be the case). fENBW for the filter described above is simply:


The filter cutoff frequency can be selected near the maximum input frequency required for the application to maximize noise reduction. Let’s look at an example to see how this configuration can improve system noise performance.


For a system with an input noise spectral density of 500 nV/√Hz, and a signal gain of 1, what would the cutoff frequency (fc) need to be to make sure the input source contributes no more than 100 μV rms noise to the system? Solving for fc in the equation used above gives:


R1 = R2 = 1.2 kΩ and C1 = C2 = 2.7 nF achieve a filter cutoff close to this (~49 kHz).
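The arithmetic above can be verified with a short sketch (it assumes fENBW = (π/4)·fc for this Q = 0.5 two-pole response, consistent with the example’s result):

```python
import math

e_in, a_v = 500e-9, 1.0                  # 500 nV/rtHz source noise, unity gain
v_budget = 100e-6                        # 100 uV rms noise budget

f_enbw = (v_budget / (e_in * a_v)) ** 2  # required noise bandwidth: 40 kHz
fc_required = f_enbw / (math.pi / 4)     # required cutoff: ~50.9 kHz

r, c = 1.2e3, 2.7e-9                     # chosen component values
fc = 1 / (2 * math.pi * r * c)           # realized cutoff: ~49.1 kHz
v_n = e_in * a_v * math.sqrt((math.pi / 4) * fc)  # resulting rms noise
```

The realized cutoff sits just below the required one, so the source contributes slightly under the 100 μV rms budget.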


Closing Thoughts

Today, we looked at a simple implementation of an active, 2-pole low-pass filter using the ADAQ798x’s integrated ADC driver. This is one of many potential configurations that can be used for achieving active filtering with the ADAQ798x.


System noise performance can be further improved by combining active filtering with oversampling and decimation. Oversampling and decimation is a form of digital filtering, where a certain number of consecutive samples are averaged together to reduce out-of-band noise at the expense of signal bandwidth (see this article for more information).
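The averaging step can be sketched as follows (a generic illustration of the technique, not the ADAQ798x’s implementation):

```python
import numpy as np

# Average each group of m consecutive samples; uncorrelated noise falls by
# roughly sqrt(m), at the cost of an m-fold reduction in output data rate.
def decimate_by_averaging(samples, m):
    samples = np.asarray(samples, dtype=float)
    n = (len(samples) // m) * m          # drop any incomplete final group
    return samples[:n].reshape(-1, m).mean(axis=1)
```

For example, averaging groups of 16 samples trades a 16x reduction in bandwidth for about a 4x (12 dB) reduction in uncorrelated rms noise.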


One thing to keep in mind when designing an active filter is the flatness of the filter’s pass band. Many filters exhibit some deviation in the pass band, especially if they result in resonance or peaking at a certain frequency. When deciding on and designing an active filter topology, be aware of the application’s required gain flatness across the bandwidth of interest.


Thanks again for joining me for this last entry in our blog series about alternate configurations for the ADAQ798x’s integrated ADC driver! Hopefully you’re now equipped to begin taking advantage of the device’s flexible analog front-end for your application!


Have any questions? As always, feel free to ask in the comments section below!

Welcome back to the ADAQ798x ADC driver configuration blog series! In today’s post, we’re going to look at the difference amplifier configuration, another means of interfacing the ADAQ798x with bipolar input signals. This configuration can be used for bipolar signals with wide input voltage ranges and bandwidths. We’ll see how to select the required external components for any given input range and how they affect other specifications like input impedance, noise, and dc errors.


The Difference Amplifier

The ADC driver can be configured as a difference amplifier using four external resistors, shown below:


This configuration can be thought of as a superposition of the non-inverting and inverting configurations; the bipolar input signal is multiplied by the amplifier’s inverting gain, while the dc bias voltage (using VREF, for reasons discussed in previous posts) is multiplied by the non-inverting gain. CN-0393 utilizes this configuration to condition a ±10 V output swing from a PGIA (AD8251). The transfer function for this configuration is:


The first step is to find the appropriate ratio of Rf and Rg, which is determined by the ratio of the input amplitude (ΔvIN) to the full-scale range of the ADC (0 V to VREF):


Unlike with the non-inverting configurations we’ve discussed, the signal gain can be less than 1, so we don’t need to make any modifications (i.e. additional resistors) to attenuate input signals with amplitude larger than VREF. It’s worth noting that the signal does get inverted from the input to the output.


R1 and R2 are then used to attenuate VREF such that the output of the ADC driver is biased to the ADC midscale (VREF/2). The ratio of R1 and R2 is determined by the ratio of Rf and Rg:


The above also assumes that the design is utilizing VREF as the dc input voltage tied to R1.


After finding these ratios, we then need to select specific values for each of the resistors. There are a few considerations to make before we start blindly selecting components:


First, the value of Rf can affect the ADC driver’s stability. If Rf becomes too large, the noise gain frequency response will start peaking, and can become unstable (as described in MT-050). As we mentioned several posts ago in "Adding Gain for Unipolar Inputs", Rf should be limited to prevent this from occurring.


Also, as we saw in our previous post, "Attenuating Bipolar Inputs", larger resistors result in more system noise. This configuration is more susceptible to noise issues than the one we discussed last week, because the ADC driver’s noise gain will always be larger than 1. The Noise Considerations and Signal Settling section in the ADAQ7980/ADAQ7988 data sheet and the System Noise Analysis section in CN-0393 show how to quantify the system noise for this configuration.


Still another consideration is the resistors’ effect on the system offset error. The resistors interact with the ADC driver’s input bias current to create an offset error at its output, an effect that becomes more pronounced as their resistances increase. According to MT-038, to mitigate this effect, the parallel combination of R1 and R2 must equal that of Rf and Rg.


Let’s consider an example where vIN is ±1.25 V and VREF = 5 V. Using the equations above, we find that Rf must be 2×Rg, and R1 must be 5×R2. If we want to ensure that the input bias currents don’t create a system offset error, the parallel combinations R1||R2 and Rf||Rg must be equal as well, which gives R1 = 2×Rf. If we select Rf = 2 kΩ, for example, we need Rg = 1 kΩ, R1 = 4 kΩ, and R2 = 800 Ω.
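These values can be cross-checked with a short sketch (the helper below is illustrative; it applies the three conditions in turn: the gain ratio, the bias ratio, and the R1||R2 = Rf||Rg balance):

```python
def parallel(a, b):
    """Parallel combination of two resistances."""
    return a * b / (a + b)

v_ref, dv_in = 5.0, 2.5        # VREF = 5 V, vIN = +/-1.25 V (2.5 V span)
rg = 1e3                       # pick Rg = 1 kOhm
rf = (v_ref / dv_in) * rg      # gain ratio: Rf = 2*Rg = 2 kOhm
# Midscale bias requires R1 = 5*R2, so R1||R2 = 5*R2/6; the bias-current
# balance condition R1||R2 = Rf||Rg then fixes the absolute values.
r2 = 6 * parallel(rf, rg) / 5  # 800 Ohm
r1 = 5 * r2                    # 4 kOhm
```

The resulting network satisfies all three constraints simultaneously: signal gain of 2, output biased to VREF/2, and matched source resistance at both amplifier inputs.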


Closing Thoughts

The difference amplifier configuration is capable of interfacing the ADAQ798x with bipolar signals across a wide range of amplitudes and frequencies, and it is remarkably simple to design. There are a couple of other things to watch out for, though.

First, remember that some applications are concerned with achieving a high input impedance. With the other configurations we’ve discussed, this is possible by increasing the resistor values (and reducing the input bandwidth to take care of the additional noise). This configuration struggles to do this, however, as Rf and Rg can’t be so large that they impact ADC driver stability. The input impedance of this circuit is equal to that of an inverting amplifier:


To achieve an input impedance of 1 MΩ for example, Rg would need to be 1 MΩ, and Rf will likely be too large for the ADC driver to function correctly (at least when using common gains). The only practical way to increase the input impedance of the system would be to use another signal conditioning stage in front of the ADC driver.


The bright side is that since this configuration will likely feature smaller resistors, it is less likely to require extra filtering to compensate for resistor noise. The smaller values also make balancing the offsets created by the input bias current more practical, as R1 and R2 can easily be selected to balance the offset from Rf and Rg. These two qualities allow this configuration to achieve higher precision and higher signal bandwidths than the non-inverting configurations that could achieve higher input impedance.


Also worth mentioning is that this configuration can be used more easily in single-supply applications where the ADAQ798x’s negative supply is tied to ground. This is because the amplifier’s inputs are held at a constant dc voltage, and there’s less concern of violating the input common mode voltage specifications (shown in the ADAQ7980/ADAQ7988 data sheet).


Thanks again for joining me in this blog series! In our next and final entry, we’re going to look at an active filtering configuration for the ADAQ798x. Follow the EngineerZone Spotlight to be notified when the next addition to this series is available!


Have any questions? Feel free to ask in the comments section below!
