Differential amplifier vs Instrumentation amplifier

I'm trying to understand the advantage of using a difference amplifier as opposed to an instrumentation amplifier. Difference amplifiers have the problem of loading the signal, and mismatched loading will convert common-mode voltage into a differential error. This won't happen with an instrumentation amp. The only things I can think of are that a diff amp can be faster, has a differential output, and is maybe less expensive? I wouldn't think there's that much difference, though. Why don't INAs have differential outputs?

Thanks in advance!

  • Hi jweaver,

    You forgot that many ADCs have differential inputs. And differential signal transmission has certain advantages, such as greater noise immunity. This is the domain of differential amplifiers.

    Regards,

    Kirill

  • Yes, I'm aware of that, but why not just use instrumentation amplifiers? That was my question. I'll go through the designer's guide mentioned by harrynsc.

  • Why don't INAs have differential outputs? They do. They are called difference amplifiers :) You seem to be fixated on redefining what an INA is. That definition is fixed: it is an amplifier with a differential input and a single-ended output. If it's something else, then it has a different name :)

    There are lots of applications where a differential output is of no use - many low-frequency data acquisition systems don't need a fully differential signal chain at all. Maintaining a fully differential signal chain adds a lot of cost in such applications, and it's hardly ever necessary. If you need good DC performance, you can modulate the entire signal chain from the transducer all the way into the ADC and demodulate it in software - it will be less sensitive to common-mode offset shifts than all but the best-in-class differential signal chains (see the sketch below).
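
    As an illustration of the modulate/demodulate idea, here's a minimal sketch of software synchronous demodulation, assuming a square-wave excitation whose phase is known to the software. All the numbers and names are made up for the example; Python is just a convenient calculator here.

    ```python
    import numpy as np

    # --- All numbers below are made up for illustration ---
    fs = 50_000      # ADC sample rate, Hz
    f_mod = 1_000    # square-wave excitation (modulation) frequency, Hz
    n = 10_000       # number of samples (0.2 s of data)

    t = np.arange(n) / fs
    # +/-1 square-wave carrier, phase-locked to the excitation we control
    carrier = np.where(np.sin(2 * np.pi * f_mod * t) >= 0, 1.0, -1.0)

    # Simulated front end: the slow quantity of interest rides on the carrier,
    # plus a large offset drift and some noise.
    signal = 0.5e-3                               # 0.5 mV, what we want to measure
    drift = 2e-3 * np.sin(2 * np.pi * 0.3 * t)    # slow offset drift, 4x the signal
    adc = signal * carrier + drift + 20e-6 * np.random.randn(n)

    # Synchronous demodulation: multiply by the known carrier, then low-pass
    # (a plain average here). The drift averages out; the signal does not.
    recovered = (adc * carrier).mean()
    print(f"recovered: {recovered * 1e3:.3f} mV (true value: 0.500 mV)")
    ```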

    No system component is perfect in isolation: everything depends on the system you're building. Instrumentation amplifiers have a single-ended output that floats on an externally provided reference level. This reference input typically couples directly to a resistor, and thus needs to be driven from a low-impedance source. And I do mean *very* low impedance - 1 Ω may well be too much, otherwise you're sacrificing CMRR. If all you need is such a low-impedance-referenced single-ended output, then an instrumentation amplifier is a good fit. But if, say, you want to shift the output level of the in-amp, you'll quickly find that most "buffered" voltage-output multichannel "trim" DACs (8-12 bits) either have too high a DC output impedance (5-40 Ω is quite typical) and thus degrade the in-amp's CMRR, or they have excellent output impedance (<0.1 Ω) but very high noise (>100 µV p-p, which is already more than an LSB in a 16-bit A/D system with a 5 V input span - see the quick calculation below).
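
    To put numbers on the two claims above, here's a quick back-of-envelope check. The 25 kΩ internal resistor value is an assumption (typical of many monolithic in-amps, but check your part's datasheet), and the CMRR expression is the standard one for a resistor-mismatched gain-of-1 difference stage.

    ```python
    import math

    # LSB size of a 16-bit converter with a 5 V input span
    lsb = 5.0 / 2**16
    print(f"LSB = {lsb * 1e6:.1f} uV")    # ~76.3 uV, so >100 uV p-p of DAC noise
                                          # already exceeds one LSB

    # CMRR limit imposed by source resistance driving the REF pin.
    # ASSUMPTION: the in-amp's output difference stage uses 25 kOhm resistors
    # (typical of many monolithic in-amps - check your part's datasheet).
    r_internal = 25e3
    r_source = 1.0                        # 1 Ohm in series with the REF pin
    mismatch = r_source / r_internal      # effective fractional resistor mismatch
    cmrr = (1 + 1) / (4 * mismatch)       # standard formula for a gain-of-1 stage
    print(f"CMRR limit ~ {20 * math.log10(cmrr):.0f} dB")   # ~82 dB: a real
                                          # degradation for a 100+ dB part
    ```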

    But to get a differential output, one approach that works well is to use a pair of instrumentation amplifiers connected to the input in anti-phase. For best matching, those need to be on the same silicon die, so something like the AD8222 comes to mind - there aren't all that many dual in-amps! Otherwise, you'd use just one in-amp and couple it to a differential driver. And of course you'd have to characterize the performance of this custom design yourself.
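
    To make the anti-phase pair concrete, here's the arithmetic sketched numerically; the gain and voltages are made-up example values.

    ```python
    # Anti-phase pair of in-amps (e.g. the two halves of an AD8222), numerically.
    G = 10.0                  # per-amp gain, set by the external gain resistor
    v_ref = 2.5               # shared reference level
    vp, vn = 1.003, 1.001     # example: 2 mV differential on a 1 V common mode

    out_a = G * (vp - vn) + v_ref   # in-amp A: inputs as-is
    out_b = G * (vn - vp) + v_ref   # in-amp B: inputs swapped (anti-phase)

    v_diff = out_a - out_b          # = 2*G*(vp - vn): differential gain doubles
    v_cm = (out_a + out_b) / 2      # = v_ref: output common mode sits at the reference
    print(v_diff, v_cm)             # ~0.04 and 2.5
    ```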

    A differential amplifier, on the other hand, has both a differential input and a differential output: it drives two output pins in anti-phase, centered around a common-mode reference voltage that it accepts as an input (or generates internally). The performance is characterized by the manufacturer, so for most applications you just check whether the specs match the requirements, and you're assured a good probability of success when using the part. Given the benefits of monolithic integration when targeting high-frequency performance, many integrated differential amplifiers have performance that would take serious design effort to duplicate using more "discrete" building blocks like stand-alone op-amps.
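
    In idealized form, the output behavior described above looks like this - nothing part-specific, just the textbook fully-differential-amplifier relations:

    ```python
    # Idealized fully differential amplifier: the two outputs swing in anti-phase
    # around the voltage applied at the output common-mode (VOCM) input.
    def fda_outputs(v_in_diff: float, gain: float, v_ocm: float):
        v_out_diff = gain * v_in_diff
        return v_ocm + v_out_diff / 2, v_ocm - v_out_diff / 2

    # Example: 0.1 V differential input, gain of 2, outputs centered at 2.5 V
    print(fda_outputs(0.1, 2.0, 2.5))   # (2.6, 2.4)
    ```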

    It is also not necessarily true that differential amplifiers "load down" the input signals. Many of them have high-impedance inputs.

    Switching gears: part cost is never to be looked at in isolation. The parts make up a system, and switching between amplifiers with single-ended and differential outputs will likely necessitate other far-reaching changes to the signal chain. So any cost analysis must take the entire system into account: you'll be comparing two alternative designs, each optimized to extract the needed performance from either an in-amp or a diff-amp. Sometimes this becomes a total redesign - you may end up changing the ADC, sometimes even the MCU - because there are often subtle interactions between part specifications, and you may wish to leverage them to your advantage or avoid some potential pitfalls. Whether a difference amplifier is "less" or "more" expensive also depends on how well it solves the problem compared to the alternatives. It may be a cheaper part that is a poorer match to the application and requires more expensive choices elsewhere in the system - or, vice versa, it may be a much better match that makes everything else much easier. Without knowing the application, it's impossible to tell.

    You presume that in-amps and diff-amps are typically alternative choices and thus can be compared "apples to apples": not usually. In most applications it's obvious whether an in-amp or a diff-amp is needed, and quite often when you need a diff-amp the in-amp doesn't even appear in the viable solution space (e.g. when driving differential-input ADCs).

    You also presume that "loading the signal" is universally undesirable: not so. Wideband signals often need to be properly terminated, and the fact that a difference amplifier helps establish such a termination is a positive, not a negative. You usually want the termination as close to the point of measurement (e.g. at the diff-amp's inputs) as possible.

    To summarize: there is no "advantage" of any particular part in vacuo, separately from the application. Whether there's an advantage to anything is determined solely by the design process of a particular system, no matter how simple or complicated that system may be. Sometimes low price is the best advantage - say you're designing a simple toy that will work just fine with the most basic, low-spec op-amp you can find. Sometimes dealing with the limitations of such a basic part costs so much engineering time, and makes a low-volume product so much more complicated, that the advantage is lost within a few hours of the initial design effort: you may save a lot of money by choosing a part that's 10-100x more expensive, since someone else will have borne the burden of designing it, characterizing it, and setting up its production and QC process! Those are the extremes, but it's all a continuum. In all cases, advantages and disadvantages can only be determined as they apply to a given application.