[ADALM-2000] Undesired voltage fluctuations before the DAC starts pushing out non-cyclic buffers (P.S.: ADALM analog input noise floor)

Dear Developers,

Earlier last month I asked a question regarding pushing non-cyclic buffers with the ADALM2000, and the fix for the DAC non-cyclic push has helped us a lot in a project we are currently working on. Although I still noticed some odd behavior in the DAC output voltage when observing the pushed data on an oscilloscope, I didn't raise a question immediately since it was not affecting our project. This week, however, as we try to move our ADALM-based communication system from baseband to the RF band, this DAC voltage issue has become unavoidable, and I would like to ask you for help!

The issue is that, although after the last fix for pushing non-cyclic buffers we are able to capture clean OOK signals on the oscilloscope, there are some unwanted voltage fluctuations after the analog output is enabled and before the first buffer (we call it a packet in our project) is sent, as you can see from the oscilloscope screenshot attached below. There are 3 undesired pulses at around 200 mV, and only after these three pulses do we begin to observe the actually pushed buffers at 400 mV. In fact, we have chosen not to calibrate the DAC output, since calibrating produces pulses of even higher voltage before the first buffer is sent.

This was previously not an issue, since we were sending 1 V amplitude OOK signals and triggering at 0.8 V (the pulse peaks, as you can see, are only 0.4 V). But now, as we transition to the RF band, we have to limit the analog output voltage to 0.35 V, and these pulses become a problem (e.g., causing unwanted triggering at the beginning of the data transmission). Is there any chance you could get rid of these pulses occurring before the first pushed buffer?

Thank you so much for your help! 

Steven

(P.S.) Another quick question: is it normal to observe about 20 mV of noise when the ADALM is connected to the computer's USB port while not running any programs? As we move our project to RF, we were wondering about the noise floor of the ADALM's analog input. After connecting the ADALM to the laptop, we hooked up the oscilloscope probes to the analog input of the ADALM. We observed noise of about 20 mV amplitude on the oscilloscope (without running any programs on the ADALM; it was only connected to the computer) and were wondering if this is expected. If so, I guess we will try amplifying our RF input voltage instead.

Thanks again!

Captured oscilloscope output showing 3 peaks of 200 mV before the non-cyclic OOK signals of 400 mV are pushed.

Below is the zoomed-in version of the oscilloscope screen:

The ADALM is connected to the laptop via USB, and analog input 1 and ground are connected to the oscilloscope.

Observed oscilloscope output; the noise voltage was much lower when the ADALM was not connected to the PC/powered on.
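A rough way to sanity-check that noise floor from the M2K itself, rather than with the external scope, would be to capture a block of samples on an open input and compute its RMS. The sketch below assumes the libm2k Python bindings; the IP URI, sample rate, range constant, and sample count are placeholder choices, and the exact AnalogIn call names (`enableChannel`, `setSampleRate`, `setRange`, `getSamples`) are assumptions about the API rather than verified signatures:

```python
import math

def rms(samples):
    # Root-mean-square of a capture with nothing driving the input
    # approximates the noise floor (mean is removed to ignore offset).
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

def measure_noise(uri="ip:192.168.3.2", n=4096):
    # libm2k is imported here so the rms() helper above can be used
    # without the library (and hardware) present.
    import libm2k
    ctx = libm2k.m2kOpen(uri)
    ain = ctx.getAnalogIn()
    ain.enableChannel(0, True)
    ain.setSampleRate(1000000)               # placeholder rate
    ain.setRange(0, libm2k.PLUS_MINUS_2_5V)  # assumed range constant
    data = ain.getSamples(n)[0]              # channel 0 samples, in volts
    libm2k.contextClose(ctx)
    return rms(data)
```

Comparing that figure against the scope reading would show how much of the 20 mV is the ADALM's own input noise versus pickup on the probes.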

    Analog Employees
    on Oct 17, 2019 10:09 AM

    Hello steven,

    I figured out where the pulses are coming from. Unfortunately, both the ADC and DAC calibration routines rely on the DAC outputs to calibrate, and because of hardware considerations we couldn't isolate the pin output from the internals during calibration. Therefore, both the ADC and DAC pins should be disconnected during calibration for best performance.

    I created this small test-script that shows calibration behavior:

    import numpy as np
    import libm2k
    import time
    
    force_calib=True
    
    ctx=libm2k.m2kOpen("ip:192.168.3.2")
    ain=ctx.getAnalogIn()
    aout=ctx.getAnalogOut()
    dig=ctx.getDigital()
    
    if force_calib:
        dig.setValueRaw(0,1)
        dig.setDirection(0,1)
        ctx.calibrateDAC()
        dig.setValueRaw(0,0)
        ctx.calibrateADC()
        dig.setValueRaw(0,1)
    
    
    aout.enableChannel(0,True)
    aout.setSampleRate(0,75000)
    x=np.linspace(-np.pi,np.pi,1024)
    buffer0=np.sin(x)
    aout.setCyclic(False)
    for i in range(3):
        aout.push(0,buffer0)
        time.sleep(0.3)
        
    dig.setValueRaw(0,0)
    libm2k.contextClose(ctx)
    

    The results can be seen in the following picture (sorry for bad quality):

    If I set force_calib to false, the signal looks nice and clean (no spikes); however, I'm not sure whether the ADC and DAC are calibrated:

    When calibrating, an internal parameter called calibscale is set. The calibscale defaults to 1.0 at power-up, but is rarely exactly 1.0 after calibration. This parameter is available for both channels of the ADC and the DAC. Therefore, we can use the following workaround, which only calibrates the device if it wasn't already calibrated. I know it's not ideal, but at least it will only produce the unwanted spikes at power-up, or whenever you decide calibration is necessary:

    import numpy as np
    import libm2k
    import time
    
    force_calib=False
    
    ctx=libm2k.m2kOpen("ip:192.168.3.2")
    ain=ctx.getAnalogIn()
    aout=ctx.getAnalogOut()
    dig=ctx.getDigital()
    
    
    calib=aout.getCalibscale(0) # calibration happens for both channels when calling calibrateDAC, so it's enough to verify just one channel
    print("AOut:"+str(calib))
    if calib == 1.0 or force_calib: # calib not changed by calibration process
        print("CALIBRATING DAC")
        dig.setValueRaw(0,1)
        dig.setDirection(0,1)
        ctx.calibrateDAC()
        dig.setValueRaw(0,0)
    else:
        print("DAC already calibrated")
    print("Aout:" + str(aout.getCalibscale(0)))
    
    calib=ain.getCalibscale(0)
    print("AIn:"+str(calib))
    if calib == 1.0 or force_calib: # calib not changed by calibration process
        print("CALIBRATING ADC")
        ctx.calibrateADC()
        dig.setValueRaw(0,1)
    else:
        print("ADC already calibrated")
            
    print("Ain:" + str(ain.getCalibscale(0)))
    
    
    aout.enableChannel(0,True)
    aout.setSampleRate(0,75000)
    x=np.linspace(-np.pi,np.pi,1024)
    buffer0=np.sin(x)
    aout.setCyclic(False)
    for i in range(3):
        aout.push(0,buffer0)
        time.sleep(0.3)
        
    dig.setValueRaw(0,0)
    libm2k.contextClose(ctx)

    Also, it is worth noting that calibrating the DAC will silently calibrate the ADC if it was not previously calibrated (in that session): https://github.com/analogdevicesinc/libm2k/blob/master/src/private/m2kcalibration_impl.cpp#L679.

    If it was, it will assume that the calibration values for the ADC are correct. If you want to recalibrate you will have to call both calibrateADC and calibrateDAC.

    I'm running a small script here to demonstrate the silent calibration of the ADC.

    Python 3.6.8 (default, Oct  7 2019, 12:59:55) 
    [GCC 8.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import libm2k
    >>> ctx=libm2k.m2kOpen("ip:192.168.3.2")
    >>> ain=ctx.getAnalogIn()
    >>> aout=ctx.getAnalogOut()
    >>> ain.getCalibscale(0)
    1.0
    >>> aout.getCalibscale(0)
    1.0
    >>> ctx.calibrateDAC()
    True
    >>> ain.getCalibscale(0)
    1.058411
    >>> aout.getCalibscale(0)
    0.78125
    >>> 
    

    -Adrian

  • "And because of hardware considerations, we couldn't isolate the pin output from the internals when calibrating. Therefore both ADC and DACs should be disconnected during calibration for best performance."

    Thank you so much, Adrian, for the very detailed analysis! I am able to obtain clean transmitted signals if neither the ADC nor the DAC is calibrated. (And the calibration can be done separately, before running our programs.)
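For anyone who finds this thread later, the "calibrate separately before running our programs" approach can be kept in a small standalone script, run once with the DAC outputs disconnected. This is only a sketch assembled from the calls that appear in Adrian's reply (m2kOpen, getCalibscale, calibrateADC, calibrateDAC, contextClose); the URI is the example address from this thread, and the `needs_calibration` helper is our own invention, encoding Adrian's observation that calibscale stays at 1.0 until a calibration has run:

```python
import sys

def needs_calibration(calibscale, force=False):
    # A calibscale of exactly 1.0 is the power-up default, so the
    # converter has most likely not been calibrated in this session.
    return force or calibscale == 1.0

def calibrate_once(uri="ip:192.168.3.2", force=False):
    # libm2k is imported here so the helper above can be used
    # without the library installed.
    import libm2k
    ctx = libm2k.m2kOpen(uri)
    ain = ctx.getAnalogIn()
    aout = ctx.getAnalogOut()
    # Per Adrian's note, calibrateDAC() also silently calibrates the
    # ADC if the ADC has not been calibrated yet in this session.
    if needs_calibration(aout.getCalibscale(0), force):
        ctx.calibrateDAC()
    if needs_calibration(ain.getCalibscale(0), force):
        ctx.calibrateADC()
    libm2k.contextClose(ctx)

if __name__ == "__main__":
    calibrate_once(force="--force" in sys.argv)
```

Running this once after power-up (outputs disconnected), and then starting the main transmission program without calibrating again, should keep the calibration spikes out of the actual data path.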