[ADALM-2000] Execution time of getSamples() and aout.push(), and consequences of changing the trigger level while the ADALM is running

Dear Developers,

As you may have noticed from my previous posts, we are building an on-off keying (OOK) communication system that uses two ADALMs as the transmitter and receiver. On the transmitter side, we read data, modulate it into OOK samples, and push them to the analog output with aout.push() (non-cyclic, buffer by buffer). On the receiver side, we use getSamples() with an appropriate trigger level to retrieve each buffer and demodulate the samples to recover the data.
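For context, here is roughly what our setup looks like, reduced to a minimal sketch with the libm2k Python bindings (the URIs, channel indices, trigger settings, and the stand-in OOK modulation below are placeholders, not our exact code):

```python
import numpy as np
import libm2k

BITS_PER_BUFFER = 511360
OVERSAMPLE = 6                       # 6 TX samples per bit

# --- Transmitter ADALM (URI is a placeholder) ---
tx = libm2k.m2kOpen("usb:1.2.5")
aout = tx.getAnalogOut()
aout.setSampleRate(0, 75000000)      # 75 MSa/s on the TX side
aout.enableChannel(0, True)
aout.setCyclic(False)                # non-cyclic: one buffer per push()

bits = np.random.randint(0, 2, BITS_PER_BUFFER)   # stand-in for real data
ook = np.repeat(bits, OVERSAMPLE).astype(float)   # naive baseband OOK
aout.push(0, ook.tolist())           # blocks until the buffer is consumed

# --- Receiver ADALM (URI is a placeholder) ---
rx = libm2k.m2kOpen("usb:1.2.6")
ain = rx.getAnalogIn()
ain.setSampleRate(100000000)         # 100 MSa/s on the RX side
ain.enableChannel(0, True)

trig = ain.getTrigger()
trig.setAnalogSource(0)                              # trigger on channel 1
trig.setAnalogMode(0, libm2k.ANALOG)
trig.setAnalogCondition(0, libm2k.RISING_EDGE_ANALOG)
trig.setAnalogLevel(0, 0.5)                          # placeholder level (V)

N = BITS_PER_BUFFER * OVERSAMPLE * 4 // 3            # 100/75 rate ratio
samples = ain.getSamples(N)[0]                       # channel 1 samples
```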

Although we have already streamed webcam video between the two ADALMs over either jumper wires (wired connection) or antennas and mixers (wireless connection), we are now trying to improve the overall throughput of the system, since it directly determines the quality of the video we can play.

At the beginning of development, the throughput was limited by our receiver's demodulation time. For example, each buffer holds 3,068,160 samples (511,360 bits × 6 oversampling factor), and aout.push() takes about 1.05 s to complete (it is blocking). On the receiver side, the ADALM's input sample rate is set to 100 MSa/s while the transmitter's output sample rate is 75 MSa/s, so we receive 4/3 times the original number of samples, and ain.getSamples() takes about 1.05 × 4/3 = 1.4 s to complete (sometimes longer).
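These timings come from simply wrapping the two blocking calls with a timer; continuing from the sketch above, something like:

```python
import time

t0 = time.perf_counter()
aout.push(0, ook.tolist())           # blocking push of one buffer
print("push:", time.perf_counter() - t0, "s")        # ~1.05 s in our setup

t0 = time.perf_counter()
samples = ain.getSamples(N)[0]
print("getSamples:", time.perf_counter() - t0, "s")  # ~1.05 * 4/3 = 1.4 s
```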

Initially, our receiver program first gets the samples with getSamples(), which takes about 1.4 s, and then demodulates them to retrieve the binary bits, which takes another 1.05 s (so about 2.45 s of processing time for each buffer sent/received).

During the past week, we implemented multiprocessing using Python's multiprocessing module, splitting getSamples() and demodulation into two separate processes. This works, and the total execution time is now about 1.8 s per buffer (1.4 s for getSamples() plus roughly 0.4 s of overhead for passing data between processes; the demodulation now runs in parallel).
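In rough outline, the split looks like this (a simplified sketch; the real code has proper error handling, the URI is a placeholder, and demodulate() here is a trivial stand-in for our actual demodulator):

```python
import multiprocessing as mp

def demodulate(samples):
    # trivial stand-in: threshold one sample per bit period
    # (8 RX samples per bit = 6 oversampling * 4/3 rate ratio)
    return [1 if s > 0.5 else 0 for s in samples[::8]]

def demod_worker(q):
    while True:
        samples = q.get()              # ~0.4 s/buffer IPC overhead in our tests
        if samples is None:
            break
        bits = demodulate(samples)     # overlaps with the next getSamples()

if __name__ == "__main__":
    import libm2k
    ctx = libm2k.m2kOpen("usb:1.2.6")  # placeholder URI
    ain = ctx.getAnalogIn()
    ain.enableChannel(0, True)

    q = mp.Queue()
    worker = mp.Process(target=demod_worker, args=(q,))
    worker.start()
    for _ in range(100):                      # however many buffers we stream
        q.put(ain.getSamples(4090880)[0])     # ~1.4 s each
    q.put(None)                               # sentinel: tell the worker to stop
    worker.join()
    libm2k.contextClose(ctx)
```

We suspect most of the 0.4 s overhead is the queue serializing the large sample list between processes.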

At this point, the only remaining idea we have for further increasing the system's throughput is to reduce the execution time of getSamples() itself.

So the question is: is there any way to decrease the execution time of getSamples() (the time to retrieve data from the kernel buffers), and what factors influence that execution time (assuming we are pushing buffers fast enough on the transmitter side)?

Also, what is the size of each kernel buffer when I call setKernelBuffersCount()? If I repeatedly call getSamples(N), does each kernel buffer contain N samples?
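To make the question concrete, the pattern I have in mind is (continuing from the sketch above):

```python
ain.setKernelBuffersCount(4)          # request 4 kernel buffers
while True:
    samples = ain.getSamples(N)[0]    # is each kernel buffer N samples long?
```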

The final question is about changing the trigger level while the program is running:

When we run our communication system over the wireless link, the signal amplitude drops as the receiver antenna moves away from the transmitter. I therefore programmed the receiver to automatically recalculate an appropriate trigger level and apply it with setAnalogLevel(). My concern is: whenever I change the trigger level, does it discard the samples currently sitting in the kernel buffers? I am only changing the trigger level; the N in getSamples(N) stays the same.
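Concretely, the retuning step looks like this (a sketch; the midpoint heuristic below is a placeholder for our actual level calculation):

```python
def retune_trigger(ain, samples):
    # placeholder heuristic: put the level midway between the observed
    # "on" and "off" amplitudes of the received OOK waveform
    level = 0.5 * (max(samples) + min(samples))
    ain.getTrigger().setAnalogLevel(0, level)   # only the level changes

samples = ain.getSamples(N)[0]
retune_trigger(ain, samples)   # does this invalidate what is in the kernel buffers?
```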

Thank you very much for your time and insights! We deeply appreciate your assistance!

Steven