Hello, I'm working on an ADSP-21262 EZ-KIT, and I'm trying to set up audio output driven by single-word interrupts on a SPORT channel, based on the Talkthru (C) example (since I need to be reading DMA audio in concurrently).
I have the audio in working, the FFT-based analysis working, and some basic interrupt-driven output working, but it's extremely noisy.
But first, a basic question:
TCB_Block_A = (int) TCB_Block_D + 3 - OFFSET + PCI ;
I read the relevant section of the manual, but the assignment here is still eluding me. TCB_Block_A should end up holding the address of the next TCB block, and it has to be offset by 0x0008 0000 'to match the starting address of the processor's internal memory before it's used'. Why do you subtract this offset? Why is it necessary? Where should I look for a discussion of the memory structure and the reason for this? (I skimmed the Memory and I/O Processor sections and didn't see another mention of it.) I don't really understand the underlying mechanism driving it.
And just to double-check: referencing an array without an index evaluates to its address (a pointer to its first element), correct? I googled a little and didn't find the convention spelled out, so I wanted to confirm. It seems intuitive enough given what the code does.
My primary question, however, is about the single-word SP1 interrupt. Sorry for my ignorance; I'm just getting started with this and want to make sure I understand properly.
Currently, my code for interrupts is (cut and pasted from various files in the project):
    // SPORT1 control: transmit, OPMODE, 24-bit words, channel A enabled, buffer hang disabled
    *pSPCTL1 |= (SPTRAN | OPMODE | SLEN24 | SPEN_A | BHD);
    interrupt(SIG_SP1, OutputToDAC);

    void OutputToDAC(int sig)
    {
        // check DXS bits to see that the transmit buffer is empty
        if ((*pSPCTL1 & 0x80000000) == 0)
        {
            // phase in radians for a 100 Hz tone at 48 kHz
            float freq = (100 * 2 * 3.14159 * (count++)) / 48000.0;
            count %= 96000;
            // calc what to write to the buffer
            unsigned int send_data = (unsigned int)(600000 * sin(freq));
            // write it to the buffer
            *pTXSP1A = send_data;
        }
    }
Along with various amended headers, etc. This outputs a sine wave, as expected (with a discontinuity every 2 seconds, as you'd expect from the code, but I'm just trying to get things working for now). The waveform, however, seems a bit noisy (and looks it too, though my scope isn't the most reliable).
I was wondering if there might be a problem with interrupt priorities (since I'm still reading large blocks from the audio input via DMA and processing them in the other interrupt), or if there are any other reasons this method might not be totally robust, and what best practice is for single-word interrupt-driven transfers to the audio codec.
Primarily, though, I'm worried about the interrupt priority: the manual lists SP1 DMA as higher priority than SP0, but I'm no longer using DMA on that channel the way I am with SP0. Does that mean SP0 is now effectively higher priority, and if so, how do I raise the priority of my SP1 channel?
I'm also unsure about the clocking and framing: is the preexisting clocking/framing setup in the code alright for my purposes? As it stands, I believe my transmits are synced so that the interrupt fires after each complete transmit, loads the buffer, and the actual data goes out on the next clock edge. I'm fairly sure it's fine, but wanted to double-check with the experts.
Thanks for the help!