This is further to this thread:
The question relates to an ADV212 working as an encoder, taking its input from the video bus. It is connected to an ADV7188 video decoder. The syncs are embedded (SAV/EAV), not separate sync signals.
In that thread the conclusion was that the field bit in your header is the F bit from the syncs. That is, for the compressed odd field (the first field in the frame) the code would be 0xFFFFFFF0, and for the even field (the second field in the frame) it would be 0xFFFFFFF1 (i.e. the opposite of what your documentation says).
The picture above shows this happening. The bottom trace is my input video; the top trace is an output from my control processor. The top trace goes low when my processor sees 0xFFFFFFF0 in the header of the file just written to the memory on my board (the file being a compressed version of the previous field), and goes high when it sees 0xFFFFFFF1. So that is fine: it matches the conclusion of the previous thread.
However, if I now remove and reapply the input video a few times, it sometimes gets it wrong and produces the opposite sequence (see picture below). Obviously two chips (plus my oscilloscope) are involved here, and I think I can trust a Tek scope to work out which field is which. That leaves the ADV212 and the ADV7188. So the question is: which chip is likely to be the problem, and what are my options for doing something about it?