We have tried the 16-bit complex FFT -- both the one in the library and the one described on this forum.
Consider two different inputs: (1) DC = 0x7FFF, the max 16-bit value; (2) DC = 0x7FFD (you could use 0x7FFF * cos(f) instead, but that just shifts the position of the problem in the spectrum).
After performing the DFT via the FFT, these will grow to (1) DC = 0x7FFF * 2^8 (2) DC = 0x7FFD * 2^8 and overflow,
or else -- since the data type is fract16 -- the values will get scaled as the algorithm progresses, so that you get (1) DC = 0x7F * 2^8 (2) DC = 0x7FF * 2^8 -- i.e. the values get severely truncated.
We actually need a 1024 = 2^10 point DFT via the FFT to boost resolution in the frequency domain -- so we get very severe truncation;
i.e. we must do
signal >> 12 before using complex_fract16 -- if we want to use the 1024-point FFT -- and that is after doing signal = audio_input >> 8, since the A/D is 24-bit.
So (1) am I missing something, and the scaling, with its associated loss of accuracy, is not necessary because I should be doing XXXXX?
And (2) if I am missing something that would avoid the scaling -- what is the XXXX?
And (3) if I am not missing something -- then could something be put into the documentation / header files hinting at the loss of accuracy, especially with the 1024 example? The max value 0x7FED becomes 0x003F after scaling -- a loss of 12 bits of precision.
I think what I need is an efficient 32-bit FFT for the Blackfin -- so I am going to follow up at http://ez.analog.com/thread/1547
Another question -- why do the 16-bit library examples use software bit-reversing when hardware bit-reversing is available?
I know from trying to work out hardware bit-reversing in an earlier life that it does not work unless the data is aligned to the size of the data buffer, and I was not aware of any mention, when using those libraries, that that sort of data alignment is necessary.
The example from the forum web page definitely uses software bit-reversing -- does the library do the hardware bit-reversing on the index?