
Interrupt period jitter with ADSP21469

Hi

I hope one of you can help me with this.

I'm using a cyclic interrupt on the ADSP21469, and I monitor the ISR period and duration with a processor output flag.
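Roughly, the monitoring looks like this (a minimal sketch: the sysreg calls are from the VisualDSP++ runtime, and the FLG1/FLG1O bit names are placeholders that depend on the part's header):

      #include <sysreg.h>     /* sysreg_bit_set / sysreg_bit_clr       */
      #include <def21469.h>   /* FLAGS bit names (FLG1, FLG1O assumed) */

      void cyclic_isr(int sig_num)
      {
          sysreg_bit_set(sysreg_FLAGS, FLG1);   /* flag high at ISR entry */
          /* ... the 20us cyclic work ... */
          sysreg_bit_clr(sysreg_FLAGS, FLG1);   /* flag low at ISR exit   */
      }

      /* once at startup, configure the flag pin as an output: */
      /* sysreg_bit_set(sysreg_FLAGS, FLG1O);                  */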

The interrupt period is 20us, and the interrupt can be generated either by the DSP timer or by an FPGA connected to IRQ1.

In both cases the behaviour is the same: I see a jitter of about 1.1us on the ISR start.

In particular, I see that some ISR calls are delayed by this amount, while the next one is on time (so the average period is correct at 20us).

The anomaly gets worse when I execute my application's normal code, while it almost disappears if I place a while(1) idle loop in the main function.

The behaviour is the same with a normal interrupt or with a fast interrupt (use of secondary registers, with the ISR registered through the interruptf() function).
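For completeness, the two registration modes in question (a sketch using the signal.h API, with cyclic_isr as in the snippet above):

      #include <signal.h>

      void cyclic_isr(int sig_num);   /* defined as above */

      int main(void)
      {
          interrupt(SIG_IRQ1, cyclic_isr);     /* normal dispatch */
          /* interruptf(SIG_IRQ1, cyclic_isr);   fast dispatch with
             the secondary register set */
          for (;;) { /* application code */ }
      }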

I have the same code running on an ADSP21161, and I don't have this problem there.

  • Some extra information...

    The problem seems to arise (or simply gets a lot worse) whenever I access the DDR2 memory from non-ISR code. I don't perform long bursts: periodically reading a single variable in SDRAM is enough to greatly increase the jitter (see the sketch below).

    There seems to be a 1.3us latency between the last DDR2 access and the IRQ dispatch. I browsed the hardware anomaly list, but I couldn't find anything related to this.

    Some figures, with the ISR responding to the external IRQ1:

    - jitter of the IRQ1 signal: < 50ns (test IRQ period = 1ms)

    - jitter of the ISR entry point, i.e. the first ISR instruction: 1.3us

    - minimum latency between the IRQ1 edge and the ISR entry point: 840ns in normal ISR mode, 270ns with fast dispatch (interruptf). Ideally this latency should be constant; in practice, because of the jitter above, it increases by a random amount of up to 1.3us.
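    The minimal reproduction from main() is just this (a sketch: the section name is an assumption and must match whatever your LDF maps to external memory):

      /* variable placed in external DDR2/SDRAM via the LDF */
      section("seg_sdram") volatile int ext_var;

      int main(void)
      {
          for (;;) {
              int x = ext_var;   /* a single DDR2 read per loop is      */
              (void)x;           /* enough to greatly widen the jitter  */
          }
      }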

  • Hi Cristiano,

    As far as I understand, all interrupt sources are handled in the core clock (CCLK) domain. Some push/pop operations are performed on entry, and these are essentially memory store/load operations of one core clock cycle each. So it takes some CCLK cycles before entering the dispatcher, and similarly to return to the interrupted code. This is the source of the latency added before the code jumps into the ISR.
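    For scale: at the ADSP-21469's maximum 450 MHz core clock, one CCLK cycle is about 2.2ns, so each pushed or popped register costs only a few nanoseconds of latency.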

    Apart from this, depending on the code, extra latency can be added according to the priority assigned to the interrupt.

    You may also want to look at the compiler manual for reference; the links are given below:

    1. For CCES: http://www.analog.com/static/imported-files/software_manuals/cces_1-0-2_comp_sharc_man_rev.1.1.pdf
    2. For VDSP: http://www.analog.com/static/imported-files/software_manuals/50_21k_cc_mn_rev_1.5.pdf

    Please let me know in case you have any further queries/doubts.

    Thanks,

    Harshit

  • Hi all

    For anyone still interested, I have some useful information about this topic.

    More than two years after my original post, I had to face this problem again. Back then the 1us jitter was not really an issue, so I didn't bother with it; I didn't want to lose more time on it.

    This time I had new design requirements and really needed to get rid of it, so I spent some time analyzing it and eventually found the reason and a simple solution.

    Actually there is a bug in the code samples provided by AD.

    In the DDR2 initialization code of some samples (e.g. the Power-On Self Test for the 21469 EZ-KIT) I have:

      #define   DDR2TRFC   (0x20 << 26)

    but the shift of 26 is incorrect: the DDR2TRFC field is located at bit offset 21 in the DDR2RRC register. With the wrong shift, the value 0x20 sets only bit 31 (0x20 << 26 == 0x80000000), so the tRFC timing is left mis-programmed and a stray bit is set elsewhere in the register.

    I replaced the define with

      #define   DDR2TRFC   (0x20 << 21)

    and the jitter was reduced to the 100ns level.
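    For context, this define feeds the DDR2RRC write in the sample's init code. With the fix applied, that part looks roughly like this (a sketch only: the pDDR2RRC pointer macro follows ADI's usual cdef naming convention, and the remaining DDR2RRC fields are omitted, not the exact EZ-KIT values):

      #include <cdef21469.h>   /* pDDR2RRC pointer macro (assumed name) */

      #define DDR2TRFC   (0x20 << 21)   /* tRFC field at bit offset 21 */

      void init_ddr2_refresh(void)
      {
          /* the real sample ORs the other DDR2RRC fields in here too */
          *pDDR2RRC = DDR2TRFC;
      }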