There is a difference in the way self-modifying code has to be written for the SHARC and SHARC+ cores, so if you are porting 'asm' code from a previous SHARC, it is worth taking care of this new restriction in the SHARC+ core.
The SHARC+ core has an 11-stage pipeline, 4 of which are fetch stages.
Say that the instruction at the present PC value 'a' is trying to modify the binary data at some address 'x' a little further ahead (a < x). Because of the pipeline, all the instructions in the range [a, a+11] have already been fetched.
Now if 'x' lies within a < x < a+11, we are trying to modify the memory location of an instruction that has already been fetched. In this case, the memory location will be updated with the new value, but execution happens with the old value that was fetched earlier.
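This fetch-ahead hazard can be illustrated with a toy simulation. The model below is not cycle-accurate and is not ADI code; the names `run`, `FETCH_AHEAD`, and the `("store", addr, op)` pseudo-opcode are all invented for illustration. The window size of 11 slots mirrors the 11-stage pipeline described above.

```python
# Toy model of a fetch-ahead pipeline (illustrative only, not cycle-accurate).
FETCH_AHEAD = 11  # mirrors the 11-stage SHARC+ pipeline described above

def run(program):
    """Execute a list of opcodes. A ('store', addr, new_op) pseudo-opcode
    writes new_op into 'program memory' at index addr."""
    executed, pc, prefetched = [], 0, {}
    while pc < len(program):
        # Model: everything in [pc, pc+FETCH_AHEAD] is already fetched.
        for i in range(pc, min(pc + FETCH_AHEAD + 1, len(program))):
            prefetched.setdefault(i, program[i])
        op = prefetched.pop(pc)
        if isinstance(op, tuple) and op[0] == "store":
            _, addr, new_op = op
            program[addr] = new_op  # memory IS updated...
            # ...but an already-fetched copy of that instruction is NOT refreshed
        else:
            executed.append(op)
        pc += 1
    return executed

# Target within the fetch window: the stale, already-fetched opcode runs.
near = run([("store", 5, "NEW")] + [f"op{i}" for i in range(1, 20)])
# Target beyond the fetch window: the freshly written opcode runs.
far = run([("store", 15, "NEW")] + [f"op{i}" for i in range(1, 20)])
```

Running this, `near` contains the old `op5` (the write to slot 5 lands after that slot was fetched), while `far` contains `NEW`, since slot 15 is fetched only after the store has completed.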
This means that, on the SHARC+ core, there is a minimum distance (in terms of memory locations) between the address of the 'modifying' instruction and the memory location being 'modified'.
There should also be a ‘SYNC’ instruction after a self-modifying instruction.
E.g., suppose we are executing from L1 Block 0 and one of the instructions writes into that same code region, thereby changing an opcode.
The expectation is that execution should now take place based on the newly written opcodes. For this to happen, there should be a 'SYNC' instruction followed by a certain number of 'NOP' instructions between the self-modifying instruction and the address being modified.
Refer to the ADSP-SC58x programming reference manual for the exact number of 'NOP' instructions required.
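The effect of the SYNC/NOP sequence can be sketched with a toy fetch-pipeline simulation. Everything here is invented for illustration: `run`, `FETCH_AHEAD`, the `("store", ...)` pseudo-opcode, and the modeling of "sync" as simply discarding the already-fetched instructions (the real SYNC instruction's semantics are defined in the programming reference manual, including the required NOP count, which this model does not capture).

```python
# Toy model: a "sync" pseudo-opcode forces already-fetched instructions
# to be discarded and refetched (illustrative only, not real SYNC semantics).
FETCH_AHEAD = 11

def run(program):
    executed, pc, prefetched = [], 0, {}
    while pc < len(program):
        for i in range(pc, min(pc + FETCH_AHEAD + 1, len(program))):
            prefetched.setdefault(i, program[i])
        op = prefetched.pop(pc)
        if isinstance(op, tuple) and op[0] == "store":
            program[op[1]] = op[2]  # memory updated; fetched copies go stale
        elif op == "sync":
            prefetched.clear()      # toy stand-in: drop stale fetched copies
        elif op != "nop":
            executed.append(op)
        pc += 1
    return executed

# Store to slot 6, then a "sync" plus NOP padding before the modified slot:
with_sync = run([("store", 6, "NEW"), "sync",
                 "nop", "nop", "nop", "nop", "old_op", "op7"])
# Same layout with the "sync" replaced by another NOP:
without_sync = run([("store", 6, "NEW"), "nop",
                    "nop", "nop", "nop", "nop", "old_op", "op7"])
```

In this model, `with_sync` executes the freshly written `NEW` opcode, while `without_sync` still executes the stale `old_op` that was fetched before the store.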
Also, when self-modifying code is used, the conflict cache, the BTB, and the L1 cache should be flushed and invalidated.
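The reason invalidation is needed can be shown with a minimal cache model. This is not an ADI API; the `ICache` class, its `fetch`/`invalidate` methods, and the word-granular "lines" are all simplifications invented for illustration.

```python
# Minimal cache model: a hit returns the cached copy, so a write to backing
# memory is invisible until the cache is invalidated (illustrative only).
class ICache:
    def __init__(self, mem):
        self.mem = mem      # backing "memory" (dict of addr -> opcode)
        self.lines = {}     # cached copies
    def fetch(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.mem[addr]  # fill on miss
        return self.lines[addr]                # hit: cached (possibly stale) copy
    def invalidate(self):
        self.lines.clear()  # discard all cached lines

mem = {0x1000: "old_opcode"}
cache = ICache(mem)
cache.fetch(0x1000)          # warms the cache with "old_opcode"
mem[0x1000] = "new_opcode"   # self-modifying write to backing memory
stale = cache.fetch(0x1000)  # still the stale cached copy
cache.invalidate()
fresh = cache.fetch(0x1000)  # refilled with the new opcode
```

After the write, `stale` is still `"old_opcode"`; only after `invalidate()` does a fetch return `"new_opcode"`, which is why the caches must be flushed and invalidated around self-modifying code.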