The ADV212 data sheet says that it can handle 1080i60 (1920x1080 interlaced, 60 fields/s) when two devices are used together (one for Y and one for CbCr).
Based on my calculations, a 1080p frame (1920x1080 at 30 Hz) exceeds the part's maximum tile size, so it can't natively compress a progressive-scan 1920x1080 image as a single tile.
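The arithmetic behind that claim can be sketched as follows. This is only a sample-count comparison, not the ADV212's actual tile limit (check the data sheet for that number); the point is that a full progressive frame is twice the size of one interlaced field, while a half-frame tile is exactly field-sized.

```python
def luma_samples(width, height):
    """Number of luma samples in a width x height tile."""
    return width * height

progressive_frame = luma_samples(1920, 1080)  # full 1080p frame as one tile
interlaced_field = luma_samples(1920, 540)    # one 1080i field
half_frame_tile = luma_samples(1920, 540)     # top or bottom half of a 1080p frame

print(progressive_frame)  # 2073600
print(interlaced_field)   # 1036800
# A half-frame tile carries the same sample count as a field the part
# already handles, which is what makes the split-tile idea plausible.
```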
My question is: what's the best way to lossily compress a 1080p30 stream? The obvious approach is to split the big tile into a top tile and a bottom tile. A pair of ADV212s should have the throughput for this, since they can already handle an interlaced signal at 60 fields/s, which moves essentially the same number of pixels.
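The top/bottom split is just a row-range partition, which a minimal sketch makes concrete (frames modeled as lists of rows; nothing here is ADV212-specific):

```python
def split_top_bottom(frame):
    """Split a frame (a list of rows) into top and bottom tiles,
    one per codec device."""
    mid = len(frame) // 2
    return frame[:mid], frame[mid:]

# Dummy 1080-line frame: one row per line, tagged with its line number.
frame = [[y] * 1920 for y in range(1080)]
top, bottom = split_top_bottom(frame)
# Each tile is 1920x540 - the same dimensions as one 1080i field,
# so per-device throughput matches the interlaced case.
```

Note that each device sees a contiguous half of the image, so it can be streamed straight in with no reordering buffer.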
The issue: if you split the frame into top and bottom tiles and compress them lossily, will you get an ugly horizontal seam in the decoded image because the two tiles are quantized independently at the boundary? Or does the JPEG 2000 algorithm account for this and make the result look as if it had been compressed as a single tile (assuming identical compression settings)?
The slightly more complicated alternative would be to compress or buffer alternating lines of the incoming progressive video, essentially re-interlacing it, then recombine the two streams during decompression to produce a progressive image. The downside is that you have to buffer half of each frame during compression instead of streaming it straight into the part.
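The re-interlacing scheme amounts to de-interleaving the lines on the way in and interleaving them back on the way out. A toy sketch (rows stand in for video lines; the actual buffering would happen in hardware or a line FIFO):

```python
def deinterleave(frame):
    """Separate a progressive frame into even- and odd-line 'fields',
    one per codec device."""
    return frame[0::2], frame[1::2]

def reinterleave(even, odd):
    """Rebuild the progressive frame from the two decoded line streams."""
    frame = []
    for e, o in zip(even, odd):
        frame.append(e)
        frame.append(o)
    return frame

# Round-trip check with one stand-in value per line.
frame = list(range(1080))
even, odd = deinterleave(frame)
assert reinterleave(even, odd) == frame
```

The odd-line stream is what forces the buffering: its lines arrive interleaved with the even ones, so one device can't start on line 1 until line 0 has been routed elsewhere, and reassembly needs both halves in hand.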
My guess is that the image quality would be better this way, though, because any artifacts from using multiple tiles would be distributed across the image instead of concentrated along a single row of pixels through the middle.