What is the worst-case interrupt latency on the ADuC7020?
Is there any way of reducing this latency?
On the ARM7 microcontrollers - ADuC7020 included - the worst-case interrupt latency is up to 50 CPU clocks.
One way of reducing the interrupt latency is to split the ARM7 "load multiple" and "store multiple" opcodes - LDM/STM. Depending on the number of registers being transferred to or from memory, these opcodes can take a long time to complete, and if one of them is in progress when an IRQ becomes pending, the latency to servicing the interrupt increases dramatically.
In the RealView and IAR compilers, there is a compiler setting that allows these multiple store/load instructions to be split.
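To make that concrete, here is a hand-written sketch (not actual compiler output) of what splitting means. In RealView, if I remember correctly, the relevant armcc option is --split_ldm.

    ; A single LDM restoring many registers: one long instruction that
    ; cannot be interrupted part-way through on the ARM7TDMI.
            LDMFD   sp!, {r4-r11, pc}

    ; The split equivalent: each LDR completes in a few cycles, so a
    ; pending IRQ can be taken between the individual loads.
            LDR     r4,  [sp], #4
            LDR     r5,  [sp], #4
            LDR     r6,  [sp], #4
            LDR     r7,  [sp], #4
            LDR     r8,  [sp], #4
            LDR     r9,  [sp], #4
            LDR     r10, [sp], #4
            LDR     r11, [sp], #4
            LDR     pc,  [sp], #4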
Also, you can minimize latency by placing your handler, written in assembly, directly at the FIQ vector address. Attached is a RealView project that shows this for a simple example; scroll to line 260 of the file ADuC702x_R.s for further details.
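To illustrate the idea (a sketch in armasm syntax, not the attached project's actual code - the *_Addr labels stand for DCD'd handler addresses):

    Vectors
            LDR     pc, Reset_Addr      ; 0x00 Reset
            LDR     pc, Undef_Addr      ; 0x04 Undefined instruction
            LDR     pc, SWI_Addr        ; 0x08 Software interrupt
            LDR     pc, PAbt_Addr       ; 0x0C Prefetch abort
            LDR     pc, DAbt_Addr       ; 0x10 Data abort
            NOP                         ; 0x14 Reserved
            LDR     pc, IRQ_Addr        ; 0x18 IRQ
    FIQ_Handler                         ; 0x1C FIQ - the last vector, so the
                                        ; handler starts right here with no
                                        ; branch and no extra vector fetch
            ; clear the interrupt source and do the minimum work here
            SUBS    pc, lr, #4          ; return from FIQ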
Examining how the maximum latency figure is arrived at gives some additional hints on how to reduce it.
The following document on the ARM website gives the detailed breakdown.
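From memory (so treat the exact term names as my assumption), the worst case is the sum of four terms:

    worst-case latency = Tsyncmax + Tldm + Texc + Tfiq

where Tsyncmax is the time for the request to pass through the interrupt synchronizer, Texc the time for data abort entry and Tfiq the time for FIQ entry. Tldm dominates.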
The Tldm figure is the one that is of most interest in this discussion.
Tldm is the time for the longest instruction to complete; the longest instruction is an LDM that loads all the registers, including the PC. Tldm is 20 cycles in a zero wait state system.
On an ARM7TDMI, an instruction in the execute stage of the pipeline cannot be interrupted until it completes (not so on the Cortex-M3 - but more on that another day). This is why Tldm features in the calculation of the maximum interrupt latency.
The ARM7TDMI Technical Reference Manual has a section on instruction cycle timings that shows how this 20-cycle figure is arrived at: see Table 6.12, Load multiple registers instruction cycle operations, in that document.
The relevant row is the case of n registers (n > 1) including the PC, where n is 16 for the case that loads all the registers.
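Quoting the cycle classes from memory (S = sequential memory cycle, N = non-sequential memory cycle, I = internal cycle), an LDM whose register list includes the PC takes (n+1)S + 2N + 1I cycles, so for n = 16:

    (16 + 1)S + 2N + 1I = 17 + 2 + 1 = 20 cycles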
The article that I refer to above states that the maximum latency of an ARM7TDMI is 29 cycles. This number assumes a zero wait state system.
Therefore, for a one wait state system, you need to add one cycle for each cycle of this instruction that accesses memory. Nineteen of the twenty cycles are memory accesses (the remaining one is internal), so that is an additional 19 cycles, which gives you a figure of at least 48 cycles.
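Spelled out, under the assumption that one wait state simply doubles each memory access cycle:

    Tldm, zero wait states:  17S + 2N + 1I           = 20 cycles
    Tldm, one wait state:    (17 + 2) x 2 + 1        = 39 cycles
    worst-case latency:      29 + (39 - 20) = 29 + 19 = 48 cycles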
Running ARM code from ADuC7XXX flash at CD = 0 makes for a one wait state system.
Compiling ALL your code for THUMB reduces the interrupt latency. Why? Because the Thumb instruction set has no full 16-register LDM: its multiple-load instructions can transfer at most the eight low registers (plus the PC, in the case of POP), so the longest instruction completes in far fewer cycles.
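For example, the longest multiple transfers Thumb allows look like this (my sketch of the worst Thumb case):

    ; At most the eight low registers plus LR/PC can be transferred in
    ; one Thumb instruction - far fewer cycles than a 16-register LDM.
            PUSH    {r0-r7, lr}
            POP     {r0-r7, pc}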
If CD != 0, the core clock is divided down enough that the flash no longer needs a wait state, and the interrupt latency immediately comes back down to 29 cycles.
As MikeL stated, compilers generally provide an option to "Split Load/Store Multiples" into instructions that complete in fewer cycles, to reduce the interrupt latency.
I have not seen the size of that reduction quantified, though - only that it is less. If the option avoids LDM/STM instructions completely, then I guess you can work out the new latency by looking at the next longest instruction.