
IEC 61508 A Deep Dive

Posted by Tom-M Employee Mar 13, 2018

Last time I promised my next blog would feature a deep dive into IEC 61508, the main functional safety standard. And I keep my promises. However, this will be the last of my introductory blogs covering basic topics for a while. I am keen to move on to more exciting topics such as requirements for cobots, AI, networking and cyber security. So keep tuning in, because these topics will all be covered beginning with my next installment.


Obviously, as a semi-conductor manufacturer, I am going to concentrate on the semi-conductor functional safety requirements, but most of what is here should be more widely applicable. Also, given the nature of a blog, some poetic licence is taken to explain the concepts quickly.


The graphic below shows a path through the standard for a semi-conductor device. Within Analog Devices this flow is captured in our ADI61508 process.



The first task is to understand the environment. This includes not only the EMC environment and the average and extreme temperatures at which the circuitry is expected to operate, but also which standards and regulations apply.


Next comes the hazard analysis where the safety functions are identified. Typically, you will need a safety function to address each hazard unless the item can be redesigned to eliminate the hazard.


The third box is where the safety integrity requirements for each of the safety functions are determined. Typically, this is done based on the severity of the harm and the frequency at which that harm may occur.


The next three vertical boxes show the various ways to address the systematic requirements. Systematic failures are failures not caused by random events. Examples of systematic failures are insufficient EMC robustness, missing requirements, or something missed because of insufficient testing. Route 1S, based on meeting all the requirements in IEC 61508, is the most common option, but Route 2S, based on evidence of proven in use, is also possible. Route 3S is only an option for software and involves retrospectively doing all the paperwork and analyses you should have done in the first place. For an IC, IEC 61508-2:2010 Annex F shows a means to achieve Route 1S.


Then you have two options on how to meet the hardware integrity requirements. Route 1H allows a trade-off between diagnostic coverage and hardware fault tolerance (redundancy). For example, for SIL 3 you could use no redundancy but have an SFF (safe failure fraction – a measure of diagnostic coverage) of 99%, or an HFT (hardware fault tolerance) of 1 with 90% SFF in each channel. Route 2H is based on field experience and minimum levels of HFT.
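As a rough illustration of the Route 1H trade-off, the architectural constraints can be sketched in a few lines of code. This is a simplification that assumes a Type B (complex) element; check the actual tables in IEC 61508-2 before relying on anything like it.

```python
# Sketch of the IEC 61508-2 Route 1H architectural constraints for a
# Type B (complex) element: each extra band of SFF, or each extra unit
# of HFT, buys roughly one SIL. Illustrative only.

def max_sil_type_b(sff: float, hft: int) -> int:
    """Return the highest SIL (0 = none) claimable for a given SFF and HFT."""
    if sff < 0.60:
        base = 0          # <60% SFF with HFT 0: not allowed
    elif sff < 0.90:
        base = 1          # 60% to <90% SFF
    elif sff < 0.99:
        base = 2          # 90% to <99% SFF
    else:
        base = 3          # >=99% SFF
    return min(base + hft, 4)

# The two SIL 3 options mentioned in the text:
print(max_sil_type_b(0.99, 0))  # 99% SFF, no redundancy -> 3
print(max_sil_type_b(0.90, 1))  # 90% SFF per channel, HFT 1 -> 3
```

Note how either high diagnostic coverage on a single channel or moderate coverage on redundant channels reaches the same SIL, which is exactly the trade-off Route 1H permits.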


Next, if there is on-chip or off-chip redundancy, you need to consider CCF (common cause failures). CCF can easily defeat redundancy and is the most common way a redundant system is defeated. Annex E gives guidance on minimizing the risk of on-chip CCF where on-chip redundancy is used, through the use of isolation wells, on-chip separation, etc.


Now the PFH (probability of dangerous failure per hour) or PFD (probability of failure on demand) needs to be calculated. Depending on the SIL there will be maximum values for these metrics. Typically, an IC will be allocated only a fraction of that maximum.
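A minimal sketch of the PFH check might look like the following. The SIL bands are the standard high-demand PFH limits; the 10% budget fraction for the IC is my assumption for illustration, not a number from the standard.

```python
# Compare a computed PFH against the IEC 61508 high-demand limits,
# applying an assumed budget fraction for the IC (10% here is an
# illustrative assumption - the system designer decides the allocation).

PFH_LIMITS = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}  # max dangerous failures/hour

def pfh_meets_sil(pfh: float, sil: int, ic_fraction: float = 0.10) -> bool:
    """True if pfh fits within the fraction of the SIL budget allocated to the IC."""
    return pfh <= PFH_LIMITS[sil] * ic_fraction

# Example: an IC with a dangerous undetected failure rate of 5 FIT
# (5e-9 failures per hour).
lambda_du = 5e-9
print(pfh_meets_sil(lambda_du, sil=3))  # 5e-9 <= 1e-8 -> True
```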


"When the weight of the paperwork equals the weight of the plane

it is ready to fly."


Next, data communications need to be considered. Guidance says that perhaps 1% of the PFH budget should be allocated to interfaces. This might involve calculations based on the bit error rate of the transmission medium, the number of bits transferred per message, the number of messages per hour, and the Hamming distance of any CRC used to detect failures. (There will be a blog on this topic.)
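To give a feel for the kind of calculation involved, here is a pessimistic sketch: a CRC with Hamming distance d detects every pattern of fewer than d bit errors, so only messages with d or more errors can possibly slip through undetected. All the numeric figures (BER, message size, message rate, Hamming distance) are assumed example values, not from any particular protocol.

```python
# Rough residual-error-rate estimate for a CRC-protected link.
# P(exactly k bit errors in n bits) is binomial in the BER; a CRC with
# Hamming distance d catches all patterns of < d errors, so we bound the
# undetected-error probability by P(>= d errors). Pessimistic sketch only.
from math import comb

def undetected_message_prob(ber: float, n_bits: int, hamming_distance: int) -> float:
    """Upper bound on the probability that a message carries an undetected error."""
    return sum(comb(n_bits, k) * ber**k * (1 - ber)**(n_bits - k)
               for k in range(hamming_distance, n_bits + 1))

# Assumed example figures: BER of 1e-5, 128-bit messages, CRC with
# Hamming distance 4, 36000 messages per hour.
p_msg = undetected_message_prob(1e-5, 128, 4)
residual_rate_per_hour = p_msg * 36000
print(residual_rate_per_hour)
```

A number like this would then be compared against the slice of the PFH budget allocated to the interface.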


Perhaps the end is the wrong place to put this, but if you have on-chip diagnostics you need to consider what you want to do when the diagnostics discover an error. For a motor control application you may want to stop the power, but for other applications you need to know a lot about the final application. For instance, in a nuclear power station cooling application you probably want to keep the coolant flowing, but in a system carrying gas you might want to stop the gas flowing.


There are lots of other sub-tasks not shown above, such as configuration management, change management, gathering evidence of competence and independent assessment. And remember, documentation is key: if it is not written down, it didn’t happen. Not only must the product be safe, but you must be able to demonstrate the reasoning behind its safety. There is a saying in avionics that when the weight of the paperwork equals the weight of the plane, it is ready to fly.


Video of the day: shows some of the testing required before an airplane can fly – my understanding is that this test was done, in the dark, with half the exits blocked and nobody knows in advance which half – regardless of the size of the plane everybody must be off in less than 90 seconds – see


For the next time - The functional safety requirements for Robots, Cobots and Mobots.

A complex systems challenge needs a comprehensive systems solution. The IoT is a system. A system with far-reaching capabilities, opportunities and benefits. All of which come with significant complexity for those looking to harness its vast potential. The most successful results will come from a systems approach. Extreme IoT solutions call for the most precise and secure data under the widest range of conditions with low power processing at the very edge of the network to reliably connect to the cloud.


Extreme IoT is moving beyond the number of connected sensors into systems that may be moving (an “Internet of Moving Things”), or systems where, irrespective of domain partitioning, analytics can happen at the sensor edge or in the cloud. Time-based or “time stamped” data allows insights to be drawn, or an outcome to be predicted, from many sets of sensor data taken at the edge or in the cloud. But finally, when data is sent into the cloud, it is vital that it is done with the highest reliability.


Consider these examples from my previous blog, Welcome to the Extreme IoT: a sensor in the heart of the desert, another deep in the arctic or sensors on a moving robot in a factory full of radio interference. Just surviving and operating in those extreme settings is challenging.


Low Power Reliable Wireless Sensor Networks (WSNs)

Wireless IoT networks must meet the same requirements as wired networks, but the demands extend beyond reliable transmission in harsh conditions. Wireless networks must also deliver robust performance, security, and the lowest possible power consumption. While the radio is a critical building block, and many low power radio solutions are available, network protocol and architecture play a large role in determining the performance and power consumption of the full solution. And the needs may be very different depending on the application. These can range from large data sets transferred on a continuous basis to small amounts of information transferred on an “as needed” basis. But in all cases, the data needs to incorporate strong security, including encryption and authentication.


From Rant to Reality:

A solution that uses efficient sensor networks, chips, and pre-certified PCB modules with mesh networking software, can enable sensors to communicate in demanding industrial IoT environments. For example, ADI’s SmartMesh® wireless networking products use a time-synchronized channel-hopping protocol with built-in self-diagnostics to transmit data. Each wireless node has an on-board ARM Cortex-M3, which can be used as an edge processor. That way, only the necessary information is transmitted, reducing power consumption and cost. This specialized combination of reliable WSN communications, intelligent sensing, and low power consumption makes wireless solutions like SmartMesh well-suited for placement almost anywhere in demanding industrial environments.


For more Inside IoT blogs click here.

In my last Blog, I promised a discussion on the various functional safety standards. As someone once said about standards, the great thing about standards is “that there are so many to choose from”.


IEC 61508 is what is referred to as a level A, or basic, standard. It is meant to be general and not application-specific. From it are derived sector-specific standards such as ISO 26262 for automotive or IEC 62061 for machinery. These sector-specific standards are referred to as level B standards. The bottom tier of standards are level C standards, which apply to specific pieces of equipment.



There are also some standards, such as ISO 13849 or the avionics standards DO-254/DO-178C, which are not derived from IEC 61508, but if you look at the table of contents of any of these you will note that they cover the same areas and topics as IEC 61508. Some of these standards, such as ISO 13849, refer back to IEC 61508 for complex technology, or in the case of the medical standards for the detailed software techniques. Others, such as the robot safety standard ISO 10218-1, use SIL from IEC 61508 and PL from ISO 13849 to specify the safety integrity requirements.


Standards are published by various groups including ISO, IEC, ISA, IEEE, UL, CENELEC and many others. The ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission) are the two main international standards organizations, and the members of these groups are the main standards bodies within each country. For instance, in Ireland the member is the NSAI (National Standards Authority of Ireland). Each national standards body can then nominate experts to take part in drafting and reviewing the standards. The groups dealing with IEC 61508 are IEC TC 65/SC 65A/MT 61508-1/2 and IEC TC 65/SC 65A/MT 61508-3. These standards are meant to be developed by consensus and are therefore referred to as consensus standards. A criticism of this approach is that some people interpret the standards as being the minimum necessary, on the basis that this was “all the committee could agree on”. There is some merit in this criticism, in that compliance is the minimum you are required to do, and in many cases it is also the most you are “required” to do. If consensus cannot be reached then sometimes a standard is not published, but is instead issued as a technical specification.

Within a standard such as IEC 61508 some of the parts will be normative and some will be informative. Normative parts contain the actual requirements of the standard; informative parts give guidance on how to apply the normative parts.


The standards can be difficult to read and legalistic, as shown below, and I would advocate reading a good book on the topic if you want to get an overview. In a future blog, I will feature a functional safety book review. If you do insist on reading the standards themselves, they cost in the region of $250/Euro 250 per standard and can be bought directly from the IEC, ISO or your national standards body (note – IEC 61508 is in 7 parts and ISO 26262 is in 10, so buying all the parts will cost upwards of Euro 2000).



Most standards also include the idea of tailoring, whereby the standard needs to be interpreted depending on the task in hand and the non-relevant bits can be skipped. As Mike Miller, a functional safety expert, told us during a functional safety training course, “Functional safety should be common sense written down”. When tailoring a standard, you should record the reasons for your decisions as to why you are skipping bits. If you don’t write down your reasons, you could be accused of being negligent. If you do write down your reasons for not performing some of the required actions, then you are at worst stupid.


Sometimes the standards bodies cooperate, and a standard can have multiple names, such as ISO/IEC/IEEE 15288:2015 on systems and software engineering.


Complying with the standards is not normally legally necessary. However, it can be, and instruments like the Machinery Directive within the EU insist that all machines must be designed to the “state of the art”. Complying with IEC 61508 and ISO 13849 gives evidence that you followed a state-of-the-art development process. Complying with standards such as IEC 61508 can also be put forward as part of the defence case if a company is sued, as you have followed the state of the art.


Video of the Day: I normally try to pick an entertaining video as the video of the day, this one is a bit alarmist but gives an idea of the importance of complying with the necessary standards -


Next Time: The discussion will be a more detailed look at IEC 61508 and the life cycle it advocates.


Notes: For more on level A, B and C standards see ISO 12100


Enjoying the Safety Matters series? Tell us by liking the blog posts or commenting below. You may find more Safety Matters blogs here.

In a sport where victory and defeat are often separated by 1/100th of a second, it was surprising when both the German and Canadian two-man bobsled teams won gold medals at the PyeongChang 2018 Winter Olympics after finishing with exactly the same times. In fact, the top five teams were separated by just 0.13 seconds – roughly the blink of a human eye.


Precise measurement is absolutely essential for many Olympic events, including bobsled, skeleton, and luge. And luge pushes the measurement limits even further by scoring speed down to 1/1000 of a second.


We’re no strangers to precision measurement at Analog Devices. Or to doing so for Olympic sports. ADI associate design engineer Tom Westenburg was the Principal Engineer for the US Olympic Committee’s Sports Science division. He spent 18 years with the USOC before joining Linear Technology Corporation (LTC) and now ADI.


I had a chance to talk with Tom about some of his experience related to timing and scoring, as well as improving athletic performance.


You discovered a flaw with the timing systems used at many of the sliding tracks and came up with a solution. What was the flaw?


Bobsled and luge tracks use optical sensors at the start and finish. They use modulated light sources so they aren’t affected by changes in the surrounding lighting, such as when a cloud passes in front of the sun.


So these lights are looking for a specific modulation rate, say 100 Hz. But when the athlete breaks that beam, it really matters as to where that light is in terms of its flash cycle. As a result, you can end up with a random 10 millisecond error at the start and the finish. Now, you might think, “Well 10 milliseconds at the start and the finish will just cancel each other out.” But because they were random, an unlucky athlete could have them both work against him, adding 10ms to his time. Each light can have an error of 0ms to +10ms, so the maximum is 10ms not 20ms. And luge is a sport that’s measured down to the millisecond, so it needed to be much better than this to fairly judge every athlete.
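A quick simulation makes the effect easy to see: if each light independently adds a random 0 to 10 ms delay before registering the break, the net run-time error is the finish delay minus the start delay, which spans roughly -10 ms to +10 ms. This is just a sketch of the mechanism described above, assuming uniformly distributed delays.

```python
# Monte Carlo sketch of the modulated-timing-light error: each light
# adds a random 0..10 ms delay, and the recorded run time is off by
# (finish delay - start delay), so the error spans about -10..+10 ms.
import random

random.seed(1)
errors_ms = []
for _ in range(100_000):
    start_delay = random.uniform(0, 10)   # ms delay at the start light
    finish_delay = random.uniform(0, 10)  # ms delay at the finish light
    errors_ms.append(finish_delay - start_delay)

# The extremes approach -10 and +10 ms, never +/-20, since each light
# only ever delays (it cannot trigger early).
print(min(errors_ms), max(errors_ms))
```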


So how did you fix the issue?


We wanted to increase the modulation rate as high as we could. We found a few commercially available lights that would work, and did some lab testing. One had a modulation rate of 20 kHz and looked great in the lab, but it had too many false triggers on the track. Around ~9.4 kHz gave us the best overall performance. It was lower than I had planned, but it was still much better than the 200 to 700-Hz lights that most other tracks were using at that time.


As a side note, when a light beam is broken, typically three pulses must be missed before it counts as a valid break. The random part is when the athlete enters into the pulse cycle. The second and third pulses add a slight delay, which is equal at the start and finish, so it doesn’t affect accuracy.


Then after we had an acceptable timing light, I wanted to make sure it was accurate end-to-end. At this point I needed some expertise and some help. I got in touch with the time frequency expert at NIST and got access via satellite to the NIST Cesium Fountain atomic clock, one of the most accurate clocks in the world. We then built a system that had super high-speed beryllium shutters used for pulsing lasers in surgical applications. We had a set of shutters with a satellite receiver at the start and at the finish. These could be programmed to break the timing light beam with 100ns resolution. The error, including the shutters, was around 50-100us. Without the satellite setup it would have been difficult to accurately test a track that is almost a mile long. In the past, the system timer would be verified in a calibration lab, but not with the timing-lights and a mile of cabling attached. That is a lot easier than an end-to-end test. As far as I know the 2002 Salt Lake games were the only ones ever tested to this level.


You had an interesting experience with luger sliders taking advantage of the timing system, didn’t you?


Yes, that’s kind of how I got involved in all of this. Many of the older tracks were using retroreflective timing lights. The transmitter and receiver were on the same side of the track with a reflector on the other side. So the light from the transmitter would reflect back to the receiver. Once the beam was broken by the feet of a luge slider passing through, the timer would start.


As it turned out, some athletes had suits made of a highly reflective material and a matte black helmet. The suit would reflect the beam back to the receiver and the sensor would not record a break in the beam until the athlete’s head passed through. So the slider was getting basically a full body-length head start, which could be over 200ms (i.e. -250ms + 50ms = -200ms). In a sport won and lost in thousandths of seconds, this was a huge advantage.


Of course, they had to be completely flat on the luge for the helmet to trip the light, and they weren’t always doing that. So there’d be these weird instances when the timer never started, and that raised some eyebrows within the sport.


So they came to us and we looked into it, and with some time and head-scratching we figured out what was happening. Now, nearly all the tracks use a transmitter on one side and the receiver on the other.


You were also involved in helping athletes improve their performance as well.


Yes. One example was the U.S. bobsled team. Like luge, you’re looking for any way to shave a tenth of a second or more off the run. The start is very important and it can win or lose the race. We focused on how the two-man and four-man team members pushed and loaded into the sled. The goal was to get the sled going as fast as possible with a clean load going into the first timing light, which is where the timing of the run begins.


We had a real sled, but it was a dry-land sled with wheels instead of runners. We used photo-electric sensors on the wheels to measure distance and velocity, and strain-gauges in each of the push handles to measure force. In fact, we used AD626 amps to amplify the strain gauges.


An athlete’s excitement, especially at an event such as the Olympics, can cause him or her to push a bit longer than they should. If the first three athletes in a 4-man sled team do that and delay their load-in, it can cause the brakeman to have to run beyond the point where he/she is applying propulsive force to the sled. They then have to pull themselves into the sled. All of which can cause a poor load-in and slow the sled going into the first timing light.


Using that system, we could calculate where they were on the track and when they were loading. The system transmitted the sled data, mixed with live video of the athlete, to a coach’s laptop. We’d display a force profile on top of that and calculate other parameters which indicated the quality of the push and load. So teams knew how well they were pushing and loading. We wanted each team to have the optimal start burned into memory and not deviate from it. This real-time feedback enabled athletes to find that optimal point by making corrections while what they just did was fresh in their minds. Previously, it would take days to process the data, but by then it was hard for an athlete to remember what they did, so it would be almost useless in terms of making an effective correction.


The US four-man team is competing in a few days. Thanks for giving us a unique look at some of what goes on behind the scenes to enhance the precision of their performance.


You’re welcome.

As part of the evolution of the IoT, more information is needed than simply increasing the number of sensors in a system and measuring more modalities. I’ve previously spoken about clever partitioning of systems: breaking an IoT system into what needs to happen at the edge (sensor or gateway) and what happens in the cloud. Extreme IoT requires going beyond stationary, connected and sometimes dumb sensors, irrespective of how much data they produce. What if the target is mobile? The complexities of an Internet of Moving Things go beyond simple data collection into how to track and measure intelligently. And the information about “when” a measurement happens can be almost as important. Machine learning about what conditions signify can only happen if you can synchronize an event to a time-stamped set of data.


It’s only then that the real magic of IoT (moving data into value or wisdom) can be unlocked.


Time Sensitive Data in Industrial Ethernet

Two critical elements that come into play for industrial applications are the need for guaranteed “on-time” reliable data delivery and accurate time-stamped data for event sequencing and process analysis. When the data absolutely, positively has to be there at the right moment, deterministic networking can enable everything from motion control applications to process control and factory automation applications. Time-stamped data can be used in algorithms to reveal trends across the factory that deepen the value of the information.


At the 2017 IoT Solutions World Congress in Barcelona, several ideas were presented to show the value and readiness of Time-Sensitive Networks (TSN) and new IEEE standards to support real-time control and synchronization of machinery and processes. The vision was to enable flexible manufacturing for Industrial IoT and Industry 4.0 through deployment of open, standard deterministic networks in production facilities. Analog Devices was one of 17 partners behind an award-winning solution.


From Rant to Reality:

As the IoT evolves at such a rapid pace, solutions should be engineered with the flexibility to meet the current and future requirements. For example, ADI’s fido5000 Real-time Ethernet, Multi-protocol (REM) switch is designed for all of today's major Industrial Ethernet protocols and its configurable blocks will also make it easier to support future IEEE 802.1 TSN protocols.

In my last Blog, I posed the question “What are 3 key requirements for a safety integrity level?”.


A functional safety standard such as IEC 61508 runs to over 700 pages across 7 parts. However, the requirements can be summarized under 3 key requirements:


  • Requirement 1: Have good reliability
  • Requirement 2: Be fault tolerant (even though you have good reliability, failures will still happen) 
  • Requirement 3: Prevent design errors (not all system failures are due to hardware failure) 


Requirement 1: Most people would accept that while having good reliability doesn’t guarantee safety it is at least a good first step. Reliability is measured in FIT (failures per billion hours of operation). Reliability predictions can be based on field experience or predictions using systems such as IEC 62380, SN29500 or the FIDES guide. The allowed dangerous failure rate will depend on the SIL with 10000 FIT for SIL 1, 1000 for SIL 2, 100 for SIL 3 and 10 for SIL 4.


ADI publishes the die FIT for all released products on its website. The data is presented using a tool which allows the average operating temperature to be entered and gives the reliability predictions at the 60% and 90% confidence levels. The numbers presented below are based on accelerated life testing.



Most equipment suppliers are interested in reliability, but functional safety insists on it, with specific limits on the allowed probability of dangerous failure depending on the required safety level. It also offers means to enhance reliability using techniques such as derating and architectures such as MooN, which are topics for future blogs.


Requirement 2: If you accept that, no matter how good the reliability, the system will still fail, then ways to cope with failure include diagnostics and redundancy. Diagnostics detect that a failure has occurred and take the system to a safe state. Redundancy implies that there is more than one system capable of performing the safety action, so that even if one failure occurs there is another, redundant piece of equipment which will maintain safety. In IEC 61508 the diagnostic coverage figure of merit is the SFF (safe failure fraction). SFF gives credit for safe failures and detected dangerous failures. For SIL 1 a minimum SFF of 60% is required, for SIL 2 90% and for SIL 3 99%. It is allowed to trade off redundancy (HFT) for SFF, so that a SIL 2 safety function can be implemented with two channels each having 60% SFF. At the IC level, parts such as the AD7124 feature lots of diagnostics which can be used to detect both internal and system level failures. On-chip diagnostics include reference inputs such as 0 V, +/- full-scale and +/-20 mV, and state machines to detect internal bit flips. System level diagnostics include transducer burnout current sources.
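The SFF itself is a simple ratio: the fraction of all failures that are either safe or dangerous-but-detected. Here is a minimal sketch; the failure-rate split in the example is invented for illustration, not data for any real part.

```python
# SFF = (safe + dangerous detected) / total failure rate.
# The FIT split below is a hypothetical example, not real part data.

def safe_failure_fraction(lambda_s: float, lambda_dd: float, lambda_du: float) -> float:
    """SFF from safe, dangerous-detected and dangerous-undetected failure rates."""
    total = lambda_s + lambda_dd + lambda_du
    return (lambda_s + lambda_dd) / total

# Assumed split: 60 FIT safe, 35 FIT dangerous but caught by
# diagnostics, 5 FIT dangerous and undetected.
sff = safe_failure_fraction(60, 35, 5)
print(f"{sff:.0%}")  # 95% - above the 90% single-channel threshold for SIL 2
```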



Requirement 3: IEC 61508 refers to the measures taken to prevent the introduction of design errors as the systematic safety integrity of the item. These measures are necessary because, no matter how good your reliability and despite your built-in hardware fault tolerance, you must recognize that a system can fail to carry out its safety related task without any hardware failures. The causes of such failures might include missed or forgotten requirements, or improper verification or validation. Software coding errors are considered systematic errors because they are not caused by failures per se; typically the system is operating as designed. Harder to accept is that EMI (electromagnetic interference) failures are also considered systematic failures because, once again, the system hasn’t failed as such but rather was not built with enough robustness. Measures advocated by IEC 61508 to prevent the introduction of systematic errors include things like coding standards, design reviews, verification plans, safety plans, checklists, requirements management and many more.


Video of the day – (the excuse for including this video is that it vaguely relates to determining customer requirements).


For the next time - Name some functional safety standards.


Click here to read more Safety Matters blogs.


Welcome to the Extreme IoT

Posted by GrainneM Employee Feb 6, 2018

At its highest level, the Internet of Things is frequently mentioned in the same breath as the increasing number of connected sensors.


But as the IoT continues to evolve, so too does our understanding of what it will look like and how it will function.


As the number of sensors increases, so does the amount of information they gather. And all that data is booked for travel to the cloud, leaving the IoT awash in information and overburdened to translate it into insight.


There are other considerations, for instance, what about the power needed to transfer all this data? What if you’re putting garbage into the cloud – how can you expect to get insight from it? What if you need immediate action due to an out-of-bounds measurement or algorithm? What if you simply have to keep data local? What if the network fails?


Internet of Things (IoT) is Much More Than Connected Sensors

This growing complexity is changing the thinking in many IoT circles. Key analysts such as McKinsey suggest that as little as 1% of cloud data is actually used. Even massive cloud partners like Microsoft are switching their focus from the cloud in the center to the sensors at the edge. And the edge can often be an environment of extremes.


Consider a sensor in the heart of the desert, and another deep in the arctic. Or sensors on a moving robot in a factory full of radio interference. Just surviving and operating in those extreme settings is challenging. But what if the data being gathered is a complex waveform, or so large that it will take significant power to send it regularly to the cloud?


Extreme IoT applications typically require a systems level approach to designing the end-to-end application. Precise sensing and measurement under the harshest conditions, low power signal processing at the node, and reliable connectivity are three key pieces to getting the most from the Extreme IoT.

One of the phenomena of extreme IoT is that things just simply won’t stay still!


The Internet of Moving Things

High performance industrial sensors are enabling a shift from traditional mechanical, fixed-function, stationary devices to increasingly intelligent, autonomous, and mobile machines. Accurate motion tracking and location determination of the sensor node are becoming central to application success. Location information from the node will enable applications such as smart farms leveraging autonomous land and air vehicles to reduce costs and improve yields. While in hospital operating rooms, it will help precision-guided robotic arms provide the surgical precision needed to produce successful outcomes. In both scenarios, correction for outages or inaccuracies in the primary sensing/feedback loops that enable guidance and controls is critical for protecting machinery and lives.


From Rant to Reality:

A solution can be found in high-performance MEMS IMUs specifically engineered for extreme IoT applications, such as the ADIS1647x and ADIS1646x from Analog Devices (ADI). They are capable of supporting sub-degree pointing accuracy and precise geolocation, while also providing the necessary size and cost efficiencies. They reduce angular jitter, and provide primary guidance during outages or disruptions of other sensors to determine the system-state within complex applications. They advance what was once simple machine measurement into machine control, and further again towards true machine intelligence.


Enjoying this blog? Click here to read more Inside IoT blogs by Grainne Murphy.

In my last Blog I posed the question “What are Safety Integrity levels?”.


A safety integrity level according to IEC 61508 is “discrete level (one out of a possible four), corresponding to a range of safety integrity values, where….”. Actually, the definition is not very useful as an introduction so I cut it short.


The abbreviation for Safety Integrity level is SIL. A SIL is a way of quantifying the level of safety that is either expected or required. There are 4 levels and they are roughly an order of magnitude apart so that for many process control applications a SIL 1 safety function will give a risk reduction of 10, SIL 2 100, SIL 3 1000 and SIL 4 10000.


A hazard analysis as shown below is used to determine what safety functions are required and a risk assessment then determines the required SIL. The risk assessment typically considers things like the number of people who might get hurt, the severity of the injuries and how often someone is exposed to that risk.



It should be remembered that a piece of equipment may be SIL certified as being suitable for use in a safety function with a given SIL, but the SIL is attached to a safety function rather than to a piece of equipment. In fact, a single system can have many safety functions, and each of the safety functions could have a different SIL.


When designing a safety function, higher SILs require more measures to be taken to prevent the introduction of errors. This might include better requirements management, more design reviews, the use of coding standards or even the restricted use of certain language features such as pointers or interrupts.


Other safety standards have different forms of SIL:

  • Automotive has ASILs, which stands for Automotive Safety Integrity Level; in order of increasing safety they are A, B, C and D
  • The machinery safety standard ISO 13849 has performance levels a, b, c, d and e
  • Avionics has design assurance levels E, D, C, B and A, where A offers the most safety and E the least


To me, the fact that there are 4 SILs also says that you can put a price on safety. Otherwise there would be only one level, namely SIL 4. However, if everything had to be developed to SIL 4, products would be so expensive that nobody could afford to buy or use them, which would not increase overall safety.


In the past, safety standards have had up to 7 levels. Today some people advocate combining SIL 1 with SIL 2, and SIL 3 with SIL 4, leaving just two safety levels. For now these people are in the minority, and to most experts four levels seems about right, especially for a basic safety standard such as IEC 61508.


The fact that the levels are an order of magnitude apart also says that when doing a functional safety analysis you shouldn’t be too fussy about getting the numbers correct to 3 decimal places.


Video of the day: (another tenuous link to the topic under discussion but if nothing else highlights the risks people will take and functional safety is meant to cover foreseeable misuse).


For next time: What are the 3 key requirements for a given SIL?



DSPs and audio decoding are critical elements in delivering the type of high-quality audio that today's consumers expect. This is the first blog in a series examining that topic, starting with an overview of why DSPs are critical to audio design.


Nature renders an infinite number of audio channels in a truly open space for human ears to enjoy. Scientists and engineers have long tried to filter out the unwanted audio while capturing and reproducing the wanted audio in our living rooms. The phonograph, also known by the generic name gramophone, provided us with single-channel audio just over 140 years ago. Over the years the technology graduated to stereo tapes and multi-channel audio, with additional post processing to reproduce audio with the highest possible accuracy and fidelity. However, recording audio using discrete microphones and replaying it using discrete speakers produced an unnatural audio experience, forcing technologists to increase the number of channels for recording and playback. This posed yet another problem: recording, transporting and playing the huge amount of data used by a larger number of audio channels. That problem in turn forced scientists to invent compression/decompression algorithms that did not lose fidelity or dynamic range.


While compressing or encoding the recorded audio channels does not need to happen in real time, as it is done in studios, decompression or decoding must be done in real time. The advent of digital signal processing (DSP) chips enabled this. Audio from stationary sources such as concert halls or panel discussions can be handled easily with this method of transporting audio. But most audio that we experience every day is not stationary, be it cars, people walking, or a conversation we walk past. These sounds move in space around us and are thus not easy to reproduce with discrete speakers without elaborate recording, mixing and playback techniques. Then came object audio, with a larger number of channels and virtualization, which further increased the complexity and horsepower requirements on DSPs for decoding and rendering.
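The real-time constraint is easy to quantify: the decoder must finish each block of samples before the renderer needs the next one. A rough illustration (the block size and sample rate below are just examples, not tied to any particular codec):

```python
def decode_deadline_ms(block_samples, sample_rate_hz):
    """Time budget to decode one block of audio if playback is not to stall."""
    return 1000.0 * block_samples / sample_rate_hz

# A 256-sample block at 48 kHz must be decoded in about 5.33 ms, and that
# budget is shared across every channel or audio object being rendered.
print(round(decode_deadline_ms(256, 48000), 2))
```

This is why adding channels or objects translates so directly into DSP horsepower: the deadline stays fixed while the work per block grows.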


In my next blog, I’ll take a closer look at the performance characteristics of modern DSPs and the importance of having DSPs certified to work with audio decoder IPs to make music for your customer’s ears.

You've clinked your glasses and made your resolutions, many of which I suspect you've already broken. Before 2017 becomes a speck in our rear view mirrors let's take one final look back.


EngineerZone is a thriving community of customers, partners, and ADI employees sharing ideas and finding solutions. Together we produced a great deal of interesting and informative content. Here are the 2017 highlights...



I recently participated in a panel discussion hosted by the Greater Boston Chamber of Commerce. The topic was “The Case for Reinvention: How to Push Past the Status Quo and Think Big for Long-Term Success.” Here are some excerpts from that discussion and my insights into how Analog Devices has maintained its focus on innovation for half a century.


On a different approach to corporate structure

While ADI is a relatively large corporation, in my experience, ADI is run as a “bottoms up company.” It functions as a federation of smaller operations with some top-level structure. That has given us the ability to stay pretty mobile and agile even as the company has gotten larger.


That has been important throughout the years, but it’s even more so now. The old formula was to try lots of things and be prepared to double down on the ones that work. Today, you don’t have the luxury of trying so many things. You have to be prepared to change the business model and look at the world differently. Maybe you’re not selling products; you’re starting to sell services or change what you’re doing. You have to be able to think differently and make adjustments to your vision.


Pictured from left to right: Audrey Asistio, anchor NECN/NBC Boston; Maggie DeMont, SVP, corporate strategy, Houghton Mifflin Harcourt; Chris Froio, SVP/GM, Reebok America; Dave Robertson, ADI fellow and technology director; Rich Rovner VP of marketing, MathWorks.


How to work with customers 

Most people agree on the importance of listening to their customers, but they don’t always know the best way to do it. You can gather a lot of input, but it can be so varied that it’s hard to find what to focus on. At ADI, we look for customers who are really trying to do something innovative in a certain space and work with them. We try to figure out what works, what doesn’t. Is what they’re trying to do something that others are trying to do?


At the same time, customers are often looking to you to educate them and lend your expertise so they don’t make mistakes. You have to talk to them about their problems, but you have to understand the critical problem that they haven’t even thought about yet. As an example, consider the interface to your smartphone. No one said, “Hey, I hate all these buttons on my phone,” but when somebody gave them a touchscreen, it became, “Wow, this is really cool, how did I live without it?” Getting rid of the buttons solved a problem that you probably didn’t even realize you had.


And you can’t talk to just one customer. One of the interesting things about ADI is the diversity of our end markets. As a semiconductor company, we’re in consumer applications, we’re in automotive applications, we’re in healthcare, we’re in industrial. We will work across these different businesses and technologies to pick up fundamental themes that tend to have longer time constants. We want to avoid getting trapped in a spot market that’s really hot for a minute, but then turns out to be a flash in the pan. The major trend is what we want to ride.


"...some of the most exciting work starts out in small places."

Maintaining a positive culture

Of the many things that make up an organization's culture, authenticity is extremely important. One of the advantages of a "bottoms-up" company is that it is less dependent on the words of senior management. While leadership from the corporate center certainly is important to employees, what really matters are the local teams that they're in. What's said and done there is the real proof of what the organization is all about.


Creating opportunity

When you're growing at 20%, there are plenty of opportunities for employee growth. A significant challenge comes in creating a dynamic environment when that rate of growth slows. Where do the opportunities come from?


That’s where this idea of a federation of small groups has helped at ADI. A group may see about 15 percent turnover each year, and hopefully that’s not because people are leaving the company. If you hire people who thrive on solving interesting problems, they look for new challenges. So you often get people moving between groups. For employees, it helps keep their jobs from getting stale. And for the teams themselves, it does much the same thing. It means the team has 15 percent new faces, new perspectives, and new skillsets.


Larger companies vs. smaller ones

Size allows you to make certain portfolio bets. Back to the culture, employees are more comfortable with taking risks if they feel like the company can provide some overall stability. Being large helps that because you can encourage a startup mindset with the security of knowing failure won’t be lethal.


On the other hand, some of the most exciting work starts out in small places. So if your management says, "We don't want to talk about anything that's not going to be $100 million per year in revenue," you'll miss all the cool stuff. So again, I think part of how ADI tries to manage that is a federation approach: financially we are a $5.5 billion company, but a lot of strategic decisions are made at a much smaller level, where businesses are free to chase a $10 million opportunity that can turn into something big five years from now.


 About the author: Dave Robertson is a Fellow at Analog Devices (ADI) and the director of its high-speed converter group.


Safety Matters: Is it Safe?

Posted by Tom-M Employee Jan 16, 2018

In my last blog, I promised to answer the question “Is it safe?”.


Safety is freedom from unacceptable risk. The word unacceptable is very important here. Obviously jumping out of a plane with a parachute is risky but for the person involved they have obviously decided it is an acceptable risk. Similarly, in all our lives we take risks from drinking hot coffee to driving cars. Some reckless people even smoke.



When you go to work, your risk of dying should not be significantly raised, and your employer has a duty of care to make sure you are not harmed or injured. This harm may come about while working with, for instance, a machine or robot.


Acceptable risk for a healthy adult worker is often interpreted as a chance of dying of 1e-4/year. If the public is at risk, you are deemed to have a higher duty to provide more safety, and the acceptable risk is a factor of 10 lower, at 1e-5/year.
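Combined with an estimate of how often the unprotected hazard would cause harm, these targets tell you the risk reduction a safety function must deliver. A minimal sketch (the unmitigated hazard frequency below is hypothetical, chosen only to make the arithmetic visible):

```python
def required_risk_reduction(unmitigated_per_year, tolerable_per_year):
    """Factor by which a safety function must reduce the hazard frequency."""
    return unmitigated_per_year / tolerable_per_year

# A hazard estimated to cause a fatality once in 10 years, measured against
# the 1e-4/year target for a worker, needs a risk reduction of 1000.
print(round(required_risk_reduction(1e-1, 1e-4)))  # 1000
```

The required risk reduction is then what determines the SIL of the safety function, which is why the risk assessment comes before any design work.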


A more technical discussion of the topic is given in The Safety Critical Systems Handbook section 2.1.1.


In general, your employer has 3 options, and in order of priority they are:

  1. Eliminate the risk
  2. Engineer a solution
  3. Warn and inform


There are many ways to eliminate a risk, such as changing the process so that a dangerous machine or chemical is not required, but they are often unpalatable for cost or other reasons. Warning and informing, as on the famous hot-coffee cup labels, is only done as a last resort.


Functional Safety is mostly about option 2, engineering a solution. For a dangerous machine, this might involve putting a sensor on the machine door so that if the door is opened the machine is stopped before your hand gets inside. In a future post I will describe more about machine safety, but for now I will just say that typically a functional safety implementation involves 3 components: 1) a sensor, 2) some logic to make a decision based on the sensor output, and 3) an actuator to take the system to the safe state.


The combination of the above 3 elements constitutes a safety function. That safety function has 3 key properties


  1. A safety integrity level
  2. A safe state
  3. The time to achieve the safe state
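As a deliberately oversimplified illustration of how the three elements and three properties fit together, here is a toy interlock loop (every name and timing below is hypothetical, not from any real product or standard):

```python
import time

SAFE_STATE_DEADLINE_S = 0.5  # property 3: time within which the safe state must be reached

def read_door_sensor():
    """Element 1, the sensor: True if the guard door is open (stubbed here)."""
    return True

def stop_motor():
    """Element 3, the actuator: drives the machine to its safe state (stopped)."""
    print("motor stopped")

def safety_function():
    """Element 2, the logic: decides from the sensor whether to demand the safe state."""
    if read_door_sensor():
        start = time.monotonic()
        stop_motor()  # property 2: the safe state is "motor stopped"
        elapsed = time.monotonic() - start
        assert elapsed < SAFE_STATE_DEADLINE_S, "safe state reached too late"

safety_function()
```

The one property the code cannot show is the first, the safety integrity level: that is a claim about how dependably this loop works, established by the development process and analyses, not by the loop itself.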


While health and safety deals with everyday safety, functional safety typically deals with the 1-in-10-year, 1-in-100-year or 1-in-1000-year accidents. Functional safety is having the confidence that a piece of equipment will carry out its safety related task when required to do so. However, it is only part of the safety spectrum, which includes electrical safety, mechanical safety, intrinsic safety and many other types.



The video of the day is based on the SawStop system.


These guys don't make any claims of functional safety, but I think it is a good illustration of a safety function: there is definitely a sensor to sense a human hand, there is definitely something to make the decision to stop the saw and trash the motor, and we can all see the actuator. The safe state is clearly to stop the saw, and the time to achieve the safe state is before the saw cuts the hand.


For the next time -  What are safety integrity levels?


Note - for more on the order of priority in design see ISO 12100:2010 figure 1

Safety matters to everybody, even Homer Simpson. There are various forms of safety, including electrical safety, mechanical safety and intrinsic safety. A form of safety of particular importance to Analog Devices is Functional Safety. Functional Safety relates to the confidence that a particular piece of equipment will carry out its safety related task when asked to do so. For instance, when a machine door is opened, the many spinning motors inside should be stopped before the person can put their fingers in the motor. In my new blog series "Safety Matters" I will cover some initial introductory topics, but thereafter will range across topics such as gas sensors, robots, machines, PLC analog I/O cards, trains, software and all the other application areas that come up in my remit as the functional safety for industrial guy. I might even throw in the odd bit about automotive, avionics or medical functional safety.


You might be thinking, what makes this guy an expert? Well, allow me to introduce myself: my name is Tom Meany. I joined Analog Devices over 30 years ago as the IT guy, a job I stuck with for only about 18 months. Since then I have worked in the test department on parts such as sigma-delta ADCs, microcontrollers and power meter ICs. After that I worked as a project leader on similar parts, doing some light design work, before joining the automotive group, where I spent 9 years working on oil level sensors, air bag sensors and lead acid battery monitor chips. For the last 5 years or more I have been the functional safety guy for our industrial products. It was while working in the automotive group on air bag sensors that I was first exposed to functional safety, along with the automotive demands for high quality, high reliability and extreme EMC robustness.


I am based in Limerick Ireland and I am the Irish expert on several IEC committees including

  • IEC 61508 hardware working group
  • IEC 61508 software working group
  • IEC 61800-5-2 working group 


I also have an interest in cyber security, since I discovered that a system can't be safe unless it is also secure. I am therefore an observing member of ISA99, which along with IEC TC 65 works on the IEC 62443 specification.


I am a TUV Rheinland functional safety engineer in the area of machinery, and I have a certificate in Functional Safety and Reliability from Technis. I hold 8 US patents related to semiconductors, a degree in electronics and a masters in applied maths and computing.


In my personal life, I run a lot and have completed over 35 marathons and ultra-marathons. Long distance running gives you great time to ponder functional safety.


I like to find YouTube videos with a link to safety – no matter how tenuous the link – this one links both marathon running and safety -


In my next blog I will try to answer the question “Is it Safe?”


Bye for now...Tom

One of the benefits of my role as President and CEO of Analog Devices, is traveling the world to meet with customers from a variety of industries and regions and hearing their perspective on the technological, business, and market challenges they face. Our customers produce the wide variety of electronic equipment that we all rely on for transportation, healthcare, communication, and many of the benefits of modern life. Our discussions typically focus on both their current needs to intelligently bridge the physical and digital worlds as well as on the innovation they want to enable in the future. I have drawn upon those conversations and other research to compile the following five technology macro-trends that I believe will have the greatest impact on business and society during 2018.


Artificial intelligence
Customers in every market segment are feverishly trying to understand the value of artificial intelligence and machine learning for their businesses, mirroring efforts of a decade ago to realize the benefits of digitalization. The focus on utilizing AI will accelerate in 2018, with the performance/affordability barrier continuing to be broken down and targeted applications of AI achieving financial and application-level impact in Industrial settings. For example, AI has progressed to a point where industrial robots are able to learn and adapt to new environments or unfamiliar objects without being specifically trained.


AI at the edge will begin transitioning from a novelty to a norm through innovation related to low-power processing, while intelligent edge computing will become a reality with context-enriched data and information driving smarter system partitioning between the edge and the Cloud.


Meanwhile the development of AI applications that rival human intelligence will remain squarely in the university research domain.


Autonomous/intelligent machines
Autonomous systems for cars, drones, and robots will continue to advance during 2018, but only to a certain point due to unresolved regulatory and technical issues. Nevertheless, over the coming months, we will continue to see progress in the adoption of autonomous systems through initiatives such as trial deployments of robo-taxis in limited areas. In particular, long-haul transportation such as trucking and trains will be among the applications that experience true advancement of autonomous functionality in the near term.


Driven by the continuing quest for productivity gains, the drive to add intelligence to machines will also accelerate factory automation/Industry 4.0 initiatives.  For example, advances in machine learning will significantly improve the ability of systems to provide valuable performance recommendations and predictions based on their own independent condition monitoring.


Ubiquitous wireless sensing networks and data
The combination of advanced materials, enhanced functionality, and MEMS is enabling breakthroughs in sensor form factors and cost, which will enable ubiquitous wireless sensor networks. Deployment of wireless mesh networking in IoT and Industrial applications will enable sensing capability to be added to existing systems without extensive rewiring.  However, end-to-end security from the sensor to the Cloud will be the gating requirement for Industrial customers to begin deploying Industrial IoT initiatives at scale.


The drive to make products and systems more intelligent will also increase the need to manage and analyze an ever-increasing flow of data. Data centers will require higher processing performance as the data load continues to increase, as well as advanced power management innovation to mitigate risk presented by high thermal levels in data center systems. We will also begin to see greater intelligence integrated at the edge node to begin to triage and tame the flow of data.


Machine-human interface
Mixed reality systems will continue to emerge and grow in popularity, with augmented reality and virtual reality ecosystems flourishing and stimulating innovation.  As the use of commercial AR/VR systems accelerates, costs will decrease and applicability will extend into spaces such as Industrial for off-site diagnostics and repair.

In addition, voice-as-user interface has now become an expectation but this technology continues to face limitations, especially in noisy environments. Gartner predicts that in 2018, 30% of our interactions with technology will be through "conversations" with smart machines, meaning that technology and service providers need to invest now to improve currently limited voice interfaces.


Heterogeneous manufacturing
With the costs of deep-submicron development skyrocketing and Moore’s Law facing increasing technology and economic headwinds, heterogeneous integration of multiple technologies in a package, on a laminate, or even on a single silicon substrate will increase. New business models will emerge to capitalize on heterogeneous manufacturing, enabling recombinant innovation for small-scale semiconductor industry players who cannot afford to invest in state-of-the art IC lithographies. For suppliers with greater scope and scale, the addition of signal processing algorithms to silicon will increase the value of their solutions.


It will be interesting to see where heterogeneous manufacturing and the other four technology macro-trends evolve over the coming year. It is often said that the best way to predict the future is to create it.  As semiconductor innovation will be the foundation for many of these emerging applications and analog technology will become even more critical in a data-hungry world, you can be sure that we at Analog Devices will be working diligently to make these predictions a reality in 2018.

Don’t Throw the Meter Out with the Bath Water - How to Extend Electricity Meter Lifetime

Electric utilities around the world leverage smart meters and Advanced Metering Infrastructure (AMI) to enable remote meter readings, remote connect/disconnect, demand/response, and other operational efficiencies. Utilities are under constant pressure to improve operational efficiency while mitigating rate hikes and improving customer service. While smart meters and AMI remove the need for in-field meter readers, expensive work crews still need to be scheduled to replace meters that are nearing the end of their useful lives. As it turns out, the vast majority of replaced meters are still operational and would remain so for many more years. What if a meter's lifetime could be extended so that it was replaced only just before its accuracy declined?


Extending the useful life of meters yields a surprisingly high return. Consider a hypothetical utility that spends €100 ($119.77) on a meter and its installation. Assuming a 15-year starting useful life, extending the service life by just 2 years results in lifetime savings of €13.30 ($15.93) per meter. Extending the meter life by 3 years increases the savings to €20 ($23.95) per meter over its lifetime. If the current useful life of the meter is less than 15 years, or purchase and installation costs are higher than €100, the savings are even more significant. In addition, extending the useful life of a meter improves customer service, as fewer service disruptions are needed.
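The quoted savings follow from a simple annualized-cost argument: spreading the same purchase price over a longer life lowers the cost per year, and that saving accrues over the whole extended life. A sketch of that model (no discounting; this is my reconstruction, not necessarily the exact cost model behind the figures above):

```python
def lifetime_savings(cost, base_life_years, extra_years):
    """Savings from spreading one meter purchase over a longer life.

    Annualized cost drops from cost/base_life to cost/(base_life + extra),
    and the saving accrues over the extended life, which algebraically
    simplifies to cost * extra_years / base_life_years.
    """
    return cost * extra_years / base_life_years

print(round(lifetime_savings(100, 15, 2), 2))  # 13.33, matching the EUR 13.30 above
print(lifetime_savings(100, 15, 3))            # 20.0, matching the EUR 20 above
```

The same formula shows why a shorter starting life or a higher installed cost makes the savings larger: both raise the cost per year being avoided.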


Current best practices require that utilities or regulatory bodies determine the useful life of meters based on the statistical distribution of failures. Typically this is done by reliability engineers using the Weibull distribution, whose failure rate over time traces the familiar bathtub curve depicted in Figure 1. Reliability engineers use these techniques to ensure that a meter's measurement accuracy remains within class before it is replaced.

Historically, a lot of attention was devoted to reducing the impact of Early Life failures. This is accomplished by improving the manufacturing process, environmental burn-in, and extensive testing. The Wear Out region of the curve, which typically has a Gaussian distribution, is avoided by using conservative (three or more standard deviations) statistical methods to minimize the possibility of an out-of-spec device remaining in service. Furthermore, sample testing is required in many global regions to spot-check meter performance during deployment. However, the biggest drawback of these methods is that often well over 99% of meters removed from operation still perform within specifications. There has never been a cost-effective way to verify the measurement accuracy of each device.
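For readers curious where the bathtub shape comes from: the Weibull hazard rate h(t) = (β/η)(t/η)^(β-1) falls with time for shape β &lt; 1 (early-life failures), is constant for β = 1 (the flat useful-life region) and rises for β &gt; 1 (wear-out). A short sketch with purely illustrative parameters:

```python
def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# beta < 1: early life, hazard falls with time (infant mortality)
# beta == 1: useful life, hazard is constant at 1/eta
# beta > 1: wear-out, hazard rises with time
for beta, label in [(0.5, "early life"), (1.0, "useful life"), (3.0, "wear-out")]:
    rates = [round(weibull_hazard(t, beta, eta=15.0), 4) for t in (1.0, 10.0)]
    print(label, rates)
```

Superimposing the three regimes, each dominant at a different stage of a meter's life, produces the bathtub curve that the replacement statistics are drawn from.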


Adopting non-invasive, real-time accuracy monitoring technology in smart meters will extend the useful life of the meter without increasing the risk of out-of-spec devices in the field. One example is mSure® technology from Analog Devices. mSure technology is integrated into an energy metering chip and continuously monitors the accuracy of the entire measurement signal chain, including the current and voltage sensors in each meter. Meter accuracy is then communicated, along with energy consumption and other data, via AMI to the cloud. Having a complete picture of each meter's accuracy enables the utility to make data-driven decisions on required meter replacements.



As with all new technologies, phasing-in real-time accuracy monitoring technology is a prudent approach. Existing field sampling protocols can be used to confirm effectiveness of the solution and to train predictive analytics. After a couple of years of data correlation, field sampling can be reduced or eliminated, realizing additional cost savings.


One question that often comes up: what about all the other meter failure mechanisms, such as backup batteries, power supplies and LCDs (liquid crystal displays)? Modern meters already monitor the health of the battery and can report this parameter to the meter data management software. Failure of the power supply will result in the meter becoming unreachable over the data network, thus flagging the service department. And with remote meter reading, the LCD is no longer of high importance; its failure will eventually be reported by the customer.


Energy measurement accuracy is the last mission-critical parameter that can now be effectively monitored, enabling the extension of a smart meter's useful life. With the deployment of non-invasive, real-time accuracy monitoring technologies, smart meters can deliver added value and increase a utility's return on investment.


To learn more about mSure® technology from Analog Devices, please visit:
