I'm architecting an FPGA interface to the ADV212 and have some general questions about using the chip in encode mode. We will have a microprocessor interface to initialize the ADV212; afterwards, however, all data and register accesses must be done by the FPGA. The ADV212 will run in HIPI mode and will encode 25 tiles per image, and between images there may be a long delay. Other details of our usage: 8-bit YCbCr, 256x256 tile size, 128 words per transfer.
This is my understanding of how the ADV212 functions (please correct me if I'm wrong):
1) The ADV212 is initialized (firmware loaded, registers programmed)
2) Flags/interrupts are cleared.
3) The ADV212 asserts DMA0 request.
4) DMA0 write requests are serviced by writing DMA bursts of pixel data to the device.
5) DMA1 read requests are serviced by reading DMA bursts of code data from the device.
6) Code data will be zero-padded to end on a burst boundary (128 words in our case).
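To make sure I'm reading the flow correctly, here is a small Python model of how I picture the per-frame handshake on the FPGA side. The function names (`fpga_dma0_write`, `pad_to_burst`, etc.) are my own placeholders for illustration, not ADV212 registers or API, and the burst/padding behavior is my assumption from step 6 above:

```python
BURST_WORDS = 128          # words per DMA transfer (our configuration)
TILE_PIXELS = 256 * 256    # 256x256 tile, 8-bit YCbCr

def pad_to_burst(code_words, burst=BURST_WORDS):
    """Zero-pad the code stream so it ends on a burst boundary (step 6).

    Hypothetical helper -- models my reading of the data sheet, not ADV212 API.
    """
    remainder = len(code_words) % burst
    if remainder:
        code_words = code_words + [0] * (burst - remainder)
    return code_words

def service_frame(pixel_words, encode):
    """Model one frame: DMA0 bursts of pixels in, DMA1 bursts of code data out."""
    # Step 4: service the DMA0 request by writing pixel data in bursts
    for i in range(0, len(pixel_words), BURST_WORDS):
        burst = pixel_words[i:i + BURST_WORDS]
        # fpga_dma0_write(burst)  # placeholder for the real bus transaction
    # Step 5: service DMA1 read requests; result ends on a burst boundary
    code = pad_to_burst(encode(pixel_words))
    assert len(code) % BURST_WORDS == 0
    return code
```

If this model is wrong anywhere (ordering of the DMA0/DMA1 requests, or the padding), please say so, since the FPGA state machine will be built around it.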
My understanding is that after one frame is completed, the ADV212 will assert a request on DMA channel 0 for the next frame. If this request is not serviced for a long time, will it be deasserted? Will SWIRQ0 need to be cleared?
The reason I'm asking is that there will be no microprocessor intervention while the encoder is running, only at startup. If there is a large delay between frames, will the FPGA need to reprogram any registers or clear any interrupts? That would require additional circuitry in the FPGA and will figure into the architecture of the design. Any help would be greatly appreciated!