This question is about our LTE baseband implementation running on a Zynq board (Xilinx) together with an AD-FMCOMMS5 board (Analog Devices).
We have implemented the complete LTE baseband in C, including frequency- and frame-offset correction, similar to your MATLAB LTE example found on GitHub. The implementation has been tested on a host PC, where it achieves a BER of zero, confirming that it is correct.
We now run the LTE baseband on the Zynq board using the AD-FMCOMMS5 board. Our C implementation works in 32-bit floating point, so the discrete-time signal produced by the OFDM modulator on our LTE Tx-side baseband consists of 32-bit floating-point values. To send these values through the RF board (AD9361), we have to convert them into a 12-bit fixed-point format.
We have observed that the smallest and largest values of our signal lie between about -3.5 and 3.4. We therefore chose a 12-bit fixed-point representation with 9 fractional bits (i.e. yyy.xxxxxxxxx), which can represent numbers between -4 and 4. Finally, we append 4 zero bits at the end of the 12-bit number, yielding a 16-bit word to send. As far as we know, this is compatible with the format processed by the AD-FMCOMMS5 board.
On the Rx side, we take the 16-bit word from the RF, discard the 4 most significant bits, and apply the dual operations to the remaining 12 bits: we convert the 12-bit numbers back to floating-point values and deliver them to our LTE Rx-side baseband.
However, this implementation yields a measured BER of 0.5. We are wondering what is going wrong, since all baseband components, including the frequency and frame-offset correction, are known to be correct.
We cannot find an answer ourselves and would appreciate your advice on how the floating-point values should be processed for the FMCOMMS5 board.