As you may have noticed from my previous posts, we are currently building an on-off keying (OOK) modulation based communication system that uses two ADALMs as the transmitter and receiver. On the transmitter side, we read data, modulate it into OOK samples, and push the samples to the analog output using aout.push() (non-cyclic, buffer by buffer). On the receiver side, we use getSamples() with a proper trigger level to retrieve each buffer and demodulate the samples to recover the data.
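To make the transmitter side concrete, here is a minimal sketch of the bit-to-sample mapping (the function name, amplitude, and oversampling parameter are our own illustration, not the actual project code; in the real program the result would go to aout.push()):

```python
import numpy as np

def ook_modulate(bits, oversampling=6, amplitude=1.0):
    """Map each bit to `oversampling` samples: `amplitude` for a 1, zero for a 0."""
    symbols = np.asarray(bits, dtype=np.float64) * amplitude
    return np.repeat(symbols, oversampling)

# 4 bits -> 24 samples at oversampling factor 6
samples = ook_modulate([1, 0, 1, 1], oversampling=6)
```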
Although we were able to stream webcam video between the two ADALMs over either jumper wires (wired connection) or antennas and mixers (wireless connection), we are now trying to improve the overall throughput of the system, since it directly determines the quality of video we can play.
At the beginning of this stage of development, throughput was limited by our receiver's demodulation time. For example, each buffer contains 3068160 samples (511360 bits * 6 oversampling factor), and it takes about 1.05s for aout.push() to complete (the call is blocking). On the receiver side, our ADALM's input sampling rate is set to 100MHz and the output sampling rate to 75MHz, so we acquire 4/3 times the original number of samples, and ain.getSamples() takes about 1.05*4/3 = 1.4s to complete (sometimes longer).
Initially, our receiver program first acquired the samples with getSamples(), which takes 1.4s, and then demodulated them to retrieve the binary bits, which takes another 1.05s (so about 2.45s of processing time per buffer sent/received).
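For reference, the buffer arithmetic above can be checked in a few lines (the 1.05 s figure is our measured push time, not something these numbers predict):

```python
bits_per_buffer = 511360
oversampling = 6
tx_samples = bits_per_buffer * oversampling   # 3068160 samples per pushed buffer
rx_samples = tx_samples * 100 // 75           # 4/3x more: 100 MS/s in vs 75 MS/s out
push_time = 1.05                              # measured blocking time of aout.push()
get_time = push_time * 100 / 75               # ~1.4 s for ain.getSamples()
total = get_time + push_time                  # ~2.45 s sequential time per buffer
print(tx_samples, rx_samples, round(get_time, 2), round(total, 2))
```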
During the past week, we implemented multiprocessing using the Python multiprocessing module, making getSamples and demodulation two separate processes. This worked, and the total execution time is now about 1.8s (1.4s for getSamples plus 0.4s of overhead for passing data between processes; demodulation now runs in parallel).
At this point, the only remaining idea we have for further increasing throughput is to look at reducing the getSamples() execution time itself.
So the question is: is there any way to decrease the execution time of getSamples() (the time to retrieve data from the kernel buffers), and what factors influence that execution time (assuming we are pushing buffers fast enough on the transmitter side)?
Also, what is the size of each kernel buffer when I call setKernelBuffersCount()? If I constantly call getSamples(N), does that mean each kernel buffer contains N samples?
The final question is about changing the triggering level while the program is running:
When we run our communication system over the wireless link, the signal amplitude decreases as I move the receiver antenna away. Therefore, I programmed the receiver to automatically recalculate an appropriate trigger level and apply it with setAnalogLevel(). What I am concerned about is: when I change the trigger level, does it destroy all the samples currently in my kernel buffers? I am only changing the trigger level; the N in getSamples(N) stays the same.
Thank you very much for your time and insights! We deeply appreciate your assistance!
getSamples execution time:
I don't think it's possible to decrease the getSamples() time itself - however, one way to decrease acquisition time is to use the data in the format provided by the hardware. To do this you can use getSamplesRawInterleaved() - this method returns the raw data (not volts), with the channel samples interleaved: [ch1sample, ch2sample, ch1sample, ch2sample, ...]. If you are not looking for actual voltages but rather some amplitude value, this will cut down on processing time.
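Assuming the raw interleaved buffer ends up in a flat array, splitting it per channel is cheap with strided views. The array below is a fabricated stand-in, not real ADC data:

```python
import numpy as np

# Stand-in for a raw interleaved buffer: [ch1, ch2, ch1, ch2, ...]
raw = np.array([10, 20, 11, 21, 12, 22], dtype=np.int16)

ch1 = raw[0::2]   # every even index -> channel 1 raw samples
ch2 = raw[1::2]   # every odd index  -> channel 2 raw samples
print(ch1.tolist(), ch2.tolist())
```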
The kernel buffers:
The Tx buffers are separated, so the samples would never be interleaved there. If you want to cut down on processing and do the conversion yourself, you could use the pushRaw methods.
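A rough sketch of doing that conversion yourself before calling a pushRaw method (the scale factor and clipping range here are invented for illustration; the real volts-to-raw mapping depends on the device calibration):

```python
import numpy as np

def volts_to_raw(volts, scale=2048.0):
    """Hypothetical conversion: scale volts to signed 16-bit DAC codes."""
    codes = np.round(np.asarray(volts, dtype=np.float64) * scale)
    return np.clip(codes, -32768, 32767).astype(np.int16)

# The resulting codes would go to the pushRaw* family instead of aout.push()
raw = volts_to_raw([0.0, 0.5, 1.0])
print(raw.tolist())
```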
Yes - there is a trick done behind the scenes. Suppose the kernel buffers count is set to 10, getSamples(100) is called twice (with some processing in between), and then getSamples(150) is called:
On the first getSamples(100), IIO will generate 10 kernel buffers of 100 samples each, totaling 1000 samples. It will then start filling those buffers.
On the second call, libm2k will ask IIO to return the next available kernel buffer (it will not regenerate the kernel buffers, but rather use what is already allocated).
On the third call, libm2k notices that the number of samples is different. It cannot use the data in the already allocated kernel buffers, because they only come in 100-sample chunks. libm2k decides that the buffers are no good, so IIO will destroy the 100-sample buffers and create 150-sample buffers instead. Hope this answers your question regarding kernel buffers.
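The reallocation rule described above can be modeled in a few lines (a toy model for illustration only, not libm2k code):

```python
class KernelBufferModel:
    """Toy model of the kernel-buffer reallocation rule described above."""
    def __init__(self, count):
        self.count = count
        self.size = None            # no kernel buffers allocated yet
        self.reallocations = 0

    def get_samples(self, n):
        if self.size != n:          # first call, or a different N: (re)allocate
            self.size = n
            self.reallocations += 1
        return [0.0] * n            # stand-in for one acquired buffer

m = KernelBufferModel(10)
m.get_samples(100)   # allocates 10 buffers of 100 samples each
m.get_samples(100)   # reuses the already allocated buffers
m.get_samples(150)   # size changed: buffers destroyed and recreated at 150
print(m.reallocations)  # 2
```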
The triggering mechanism is disconnected from the buffer mechanism. When you change the trigger level, the samples are not destroyed; however, they might become outdated. Let's assume you have 10 kernel buffers, all filled with the trigger at level 0.5V. You change the trigger level to 1V and then run getSamples again.
getSamples will return the next kernel buffer, which was acquired with trigger level 0.5V. In fact, the next 10 buffers will all contain data acquired at 0.5V; only the 11th buffer will contain data triggered at 1V. This may or may not be the desired outcome. We created the flushBuffers() method for this exact use case. flushBuffers() destroys all the buffers and recreates them, so the next getSamples call brings in new data - triggered at the correct level.
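The difference flushBuffers() makes can be illustrated with a small stub. The method names mirror the libm2k Python API, but the class below only models the behavior described above; it is not the real binding:

```python
class AnalogInStub:
    """Stub modeling stale-vs-fresh kernel buffers around a trigger change."""
    def __init__(self, kernel_buffers, level):
        self.level = level
        self.pending = [level] * kernel_buffers  # buffers filled at the old level

    def setAnalogLevel(self, channel, level):
        self.level = level                       # does NOT touch pending buffers

    def flushBuffers(self):
        self.pending = []                        # destroy the stale buffers

    def getSamples(self, n):
        # returns the trigger level the returned buffer was actually filled at
        return self.pending.pop(0) if self.pending else self.level

ain = AnalogInStub(kernel_buffers=10, level=0.5)
ain.setAnalogLevel(0, 1.0)
stale = ain.getSamples(100)    # still 0.5 V data: an old buffer is returned
ain.flushBuffers()
fresh = ain.getSamples(100)    # 1.0 V data after the flush
print(stale, fresh)
```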
Thank you very much for the detailed explanation of the acquisition, buffering, and triggering mechanisms, Adrian! Yes, keeping the samples triggered at the previous trigger level is indeed the desired outcome in our case (since we want the video stream to stay continuous as we move our receiver away and lower the trigger level). I'll try out getSamplesRawInterleaved() and let you guys know if I have any other questions!
Thanks again for your help!
I have just verified that getSamplesRawInterleaved() can indeed significantly decrease the data acquisition time (from about 1.5s to about 0.5s in my case). However, I am not sure how to extract the raw samples from the returned object... I initially thought it would return a list just like getSamples(), but once I printed its type I realized it is something called a "SwigPyObject". I looked it up online and thought it might be something like the linked lists I learned about in C, so I tried something like the following:
data = ain.getSamplesRawInterleaved(N)
print("value: ", data.own)
data_1 = data.next
print("value: ", data_1)
data_2 = data_1.next
print("value: ", data_2)
Instead of getting an integer number, I got something like:
value:  <built-in method own of SwigPyObject object at 0x0000014B0599B510>
value:  <built-in method next of SwigPyObject object at 0x0000014B0599B510>
Traceback (most recent call last):
  File "OOK_receiver_rf.py", line 348, in <module>
    data_2 = data_1.next
AttributeError: 'builtin_function_or_method' object has no attribute 'next'
Could you please show me the right way to deal with this object and extract the raw samples (the integers) or the float samples?
Hi Steven,
We tested this and it looks like there is an issue with the Python bindings for some of the exposed library methods. Not all the C++ methods are correctly exposed in Python because of some return type differences. We are looking into an appropriate fix for this and we will come back with an answer and a fix as soon as possible.
Thank you,
-Alexandra
Is there any update on the Python bindings fix for the getSamplesRawInterleaved() method?
Thanks for your help!