
**Introduction**

In my **previous blog**, we familiarized ourselves with some terms related to linearity and non-linearity that impact circuit behavior, especially when RF functions are involved. In this blog post, we will establish a step-by-step, theoretical approach to represent and quantify non-linearities, and show the mechanisms behind the direct consequences of non-ideal parts, such as harmonics and intermodulation products.

**Nonlinearity Causes Harmonics and Intermodulation (IMn)**

We begin by considering a general electronic function with signals x and y as the input and output, respectively, and A as the transfer function between them (i.e., the “gain” if the device is an amplifier). Referring back to the discussion of the resistor in the previous blog post, in all real-world devices the curve is not a nice straight line indicating that “y is proportional to x.” Instead, the curve is imperfect and becomes distorted when signals are large.

When x and y are small, the curve is close to a straight line but not 100% straight. Whether or not the designer realizes it, nonlinearities are present. When x and y are large, however, the nonlinearities are highly visible. In general, the device saturates; the output cannot correctly respond to any further increase in the input signal. This phenomenon is well illustrated by the 1 dB compression point, which marks the upper limit of the applicable signals (i.e., the dynamic range) (**Figure 2**).

*Figure 2. Nonlinearity versus ideal linear behavior.*
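As a quick numerical illustration of the compression point, the sketch below uses a simple, assumed memoryless cubic model y = g1·x − g3·x^3; for a sinewave of amplitude A, the fundamental at the output has amplitude g1·A − (3/4)·g3·A^3, so the large-signal gain droops as A grows, and the amplitude where that gain has dropped by 1 dB can be found in closed form. The coefficients g1 and g3 are arbitrary placeholders, not values from any real device.

```python
import numpy as np

# Hypothetical memoryless model: y = g1*x - g3*x^3 (coefficients are assumptions)
g1, g3 = 10.0, 1.0

# For a sinewave of amplitude A, the output fundamental has amplitude
# g1*A - (3/4)*g3*A^3, so the effective gain falls as A increases.
def fundamental_gain_db(A):
    return 20 * np.log10((g1 * A - 0.75 * g3 * A**3) / (g1 * A))

# Closed-form input amplitude where the gain of this cubic model is 1 dB down:
A_1dB = np.sqrt((1 - 10**(-1 / 20)) * g1 / (0.75 * g3))
print(round(fundamental_gain_db(A_1dB), 3))  # → -1.0
```

With these placeholder coefficients the gain compresses by exactly 1 dB at an input amplitude of about 1.2; beyond that point the output no longer tracks the input proportionally.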

Generally speaking, one can write:

**y = A_0 + A_1·x + A_2·x^2 + A_3·x^3 + … + A_i·x^i + …**  (Eq. 1)

(This is the Taylor series expansion of the transfer function A.)

For a pure, perfectly linear function, we want A_i = 0 for all i > 1.

Therefore: **y(linear) = A_0 + A_1·x**  (Eq. 2)

Unfortunately (engineers know that!), this is never entirely so; the terms in x^2, x^3, x^4, etc., are present as well. Their magnitudes depend on the strength of A_2, A_3, A_4, etc., and they are responsible for the deviation of the transfer function A away from the desired, perfectly proportional law.
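To see how small higher-order terms pull the response away from the proportional law, here is a minimal sketch evaluating a truncated Taylor model; the coefficients A_2 and A_3 are illustrative assumptions, and the deviation from the purely linear term grows quickly with the signal level.

```python
import numpy as np

# Truncated Taylor model y = A0 + A1*x + A2*x^2 + A3*x^3.
# The small A2/A3 values below are assumed, for illustration only.
A = [0.0, 10.0, 0.2, -0.05]          # [A0, A1, A2, A3]

def transfer(x):
    # Evaluate the polynomial sum A0 + A1*x + A2*x^2 + A3*x^3
    return sum(Ai * x**i for i, Ai in enumerate(A))

for x in (0.01, 0.1, 1.0):
    linear = A[0] + A[1] * x          # the ideal, proportional response
    error_pct = 100 * (transfer(x) - linear) / linear
    print(f"x={x:>5}: deviation from linear = {error_pct:+.3f}%")
```

Running this shows the deviation climbing from roughly 0.02% at x = 0.01 to about 1.5% at x = 1.0: the nonlinearity is always there, but it only becomes visible at large signal levels.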

Let’s assume now that we are in a sinusoidal world where x(t) is a sinewave signal. Here x(t) contains only one frequency, ω, and can be expressed in a very general sinewave form:

**x(t) = K·cos(ω·t + φ)**  (Eq. 3)

By expressing x(t) in its Euler form, we have:

**x(t) = (K/2)·(e^(j(ω·t+φ)) + e^(−j(ω·t+φ)))**

which is a sum of two complex conjugate terms.

We will keep only the first term of this sum in the discussion that follows (this simplifies the equation manipulations, since only the exponential term is needed in our demonstrations).

Keeping only the first term of the Euler form of x(t):

**x(t) = K·e^(j(ω·t+φ))**  (Eq. 4)
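If you want to convince yourself of the Euler decomposition numerically, this short check (with arbitrary values for K, ω, and φ) confirms that the real cosine signal equals the sum of the two complex conjugate exponentials:

```python
import numpy as np

# Numerical check that K*cos(w*t + phi) equals the sum of the two
# conjugate exponentials (K/2)*(e^{j(wt+phi)} + e^{-j(wt+phi)}).
# The amplitude, frequency, and phase below are arbitrary choices.
K, w, phi = 2.0, 2 * np.pi * 50.0, 0.3
t = np.linspace(0, 0.1, 1000)

x_real = K * np.cos(w * t + phi)
x_euler = 0.5 * K * (np.exp(1j * (w * t + phi)) + np.exp(-1j * (w * t + phi)))

print(np.allclose(x_real, x_euler.real))  # → True
```

The imaginary parts of the two exponentials cancel exactly, which is why keeping only one of them (as in Eq. 4) is a safe simplification for the frequency bookkeeping that follows.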

If device A is perfectly linear, then its response y is a proportional image of x:

**y(t) = A_0 + A_1·x(t)**  (Eq. 5)

**y(t) = A_0 + A_1·K·e^(j(ω·t+φ))**  (Eq. 6)

You see that y(t) contains the same, single frequency ω. We can draw an important conclusion from this:

*a perfectly linear function or device will never generate any other frequency by itself.*

The above observation allows us to extend the conclusion for x(t) containing multiple frequencies: the output response will contain only the frequencies entered at the input. And as a direct consequence, a band of frequencies entered at x(t) will remain as such at the output.
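This conclusion is easy to verify numerically. The sketch below pushes a single tone through a strictly linear y = A_0 + A_1·x and inspects the FFT of the output: the only occupied bins are DC (from the A_0 term) and the input frequency. The sampling setup and coefficient values are arbitrary choices for the demo.

```python
import numpy as np

# FFT check: a purely linear device y = A0 + A1*x driven by one sinewave
# produces energy only at DC (from A0) and at the input frequency.
fs, N = 1000.0, 1000                     # 1 s of data -> 1 Hz bin spacing
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 50.0 * t)         # single 50 Hz tone

A0, A1 = 1.0, 3.0                        # assumed coefficients
y = A0 + A1 * x                          # perfectly linear response

spectrum = np.abs(np.fft.rfft(y)) / N
active_bins = np.nonzero(spectrum > 1e-9)[0]
print(active_bins.tolist())  # → [0, 50] : only DC and the 50 Hz input
```

No bin other than DC and 50 Hz rises above numerical noise: the linear device has created nothing new.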

To summarize:

1. **With x containing two frequencies, ω_a and ω_b:**

It is easy to show that if the device is linear, y reproduces exactly the same two original frequencies, ω_a and ω_b:

**y(t) = A_0 + A_1·(K_a·e^(j(ω_a·t+φ_a)) + K_b·e^(j(ω_b·t+φ_b)))**  (Eq. 7)

There are no other frequencies generated!

2. **With x containing multiple frequencies: ω_a, ω_b, ω_c, ω_d, …, ω_n**

Again, if the device is linear, the output remains an undistorted image of x. The same original frequencies (no more and no fewer) are found in y.
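The same numerical check works with several tones at once (the tone frequencies below are an arbitrary example); the output spectrum of the linear device contains exactly the input frequencies and nothing else:

```python
import numpy as np

# FFT check with several tones: a linear device y = A0 + A1*x reproduces
# exactly the input frequencies. Tone list and coefficients are assumptions.
fs, N = 1000.0, 1000                     # 1 s of data -> 1 Hz bin spacing
t = np.arange(N) / fs
tones = [40.0, 90.0, 130.0, 210.0]       # "wa, wb, wc, wd" in Hz
x = sum(np.cos(2 * np.pi * f * t) for f in tones)

y = 1.0 + 3.0 * x                        # A0 + A1*x, strictly linear

spectrum = np.abs(np.fft.rfft(y)) / N
active = set(np.nonzero(spectrum > 1e-9)[0]) - {0}   # drop the A0 DC term
print(sorted(int(b) for b in active))  # → [40, 90, 130, 210]
```

No sums, differences, or multiples of the input tones appear; with a nonlinear device, as we will see, this picture changes dramatically.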

**Summary**

In this blog post, we have established a mathematical basis to represent and quantify the amount of non-linearity a device might contain. The important conclusion at this stage is that a perfectly linear device will neither generate nor destroy any frequency entered at its input. The signal is not distorted.

In the **next blog**, we will see how the mathematical expression of a nonlinear device behaves.