
PPI buffer underrun with LCD under VDK


I would like to use the BF561 in combination with a camera sensor and an LCD.


I am using a BF561 EZ-KIT with a Micron sensor M09024 on PPI0 and an LCD on PPI1. I am using VDSP++ 5.0, version 6.
I started from the sensor-stream sample in the ADI multimedia examples and adapted the drivers for the LCD and the Micron sensor. This works perfectly in a plain main loop, and I can see the sensor video on the LCD output.
I have now ported this to a VDK project, since I will need the lwIP stack for Ethernet later. I used the default VDK template and have written only two threads so far: one initializes the Micron sensor, the other initializes the LCD driver.
In the first thread I do nothing but let the LCD read its output buffer via DMA from L3 SDRAM bank 2; a test image for the LCD output is prepared at that memory address. In the second thread I use the Micron sensor driver and let it capture via DMA into SDRAM bank 3. Both use independent callback functions.
The problem is the LCD driver. When I run both threads in my application, I always get the PPI status error UNDR_1 on the LCD PPI after about 5 seconds. This does not happen when I start both drivers in a single main loop without VDK, so I assume the hardware and driver implementation are OK. When I later put both drivers into one VDK thread, the issue is the same: I get a PPI underrun and the image is shifted after 5 seconds. When I stop the sensor driver in VDK and use only the LCD driver, the LCD display is fine.
Again, the processor itself is doing nothing but DMA loopback. I have only two threads starting the device drivers with independent callbacks, each looping the data back via DMA.
What could be the reason, when I use the DMA controller in loopback mode on both devices under VDK? I have also already given PPI1 (the LCD) a higher DMA priority, without any change.
Attached is a screenshot of what happens: the previously centered image on the LCD is shifted after 5 seconds, about 10 pixels left and 20 pixels down.
Any suggestion as to what is going wrong?
Thank you for your support!
  • Hi Thomas,

It seems that you could be running out of bandwidth. I am not sure of your application's bandwidth requirements (frame size, rate, etc.) or what speed you are running the core and system at, so before anything else I would get a theoretical estimate of your bandwidth requirements. Then I would refer to the following app-note:

There are many things to look at in the above app-note, and there could be several reasons why you are seeing the error, but I would first concentrate on:

    1. Turning on at least the instruction cache

    2. Using traffic control

    3. Enabling the CDPRIO bit. The details are discussed in the app-note.

It seems that adding VDK has increased the memory footprint and likely pushed some of your most frequently executed code/data into external memory, where your video buffers reside. This is probably causing excessive SDRAM page misses, resulting in underruns. You would likely have to remap your code and data sections to minimize external memory use. I would also point you to EE-301 for techniques on efficient data management in multimedia applications.

    Hope the above helps.



  • Hi Kaushal

    First of all many thanks for the response!

I have read EE-301 and EE-324. Just a few additional background details:

1. Video in is done by PPI0.

A 32-bit DMA transfer is already used.

Resolution = 752 x 480

PPI width: 16 bit

So the buffer size for the DMA input is (752 x 480 x 4) / 2 (the 4 because of the u32 data type).

The frame rate is 30 or 60 fps.

This means a data rate of ~42 MByte/s @ 60 fps or ~21 MByte/s @ 30 fps.

2. Video out is done via PPI1.

A 32-bit DMA transfer is already used.

Resolution 480 x 280; buffer size 480 x 280 x 4 / 2 (the 4 because of u32)

PPI size: 16-bit RGB565

Frame rate: 60 fps

This means a data rate of ~15.4 MByte/s @ 60 fps.

Total = ~57 MByte/s

    The cclk = 525 MHz

    The sclk = 131 MHz

    CLKIN = 30 MHz

The cache is enabled via the LDF preprocessing macro USE_CACHE, set in Project Options under the Link entry.

I am already using double buffering, with two complete LCD buffers in SDRAM0 bank 1 and SDRAM0 bank 3.

I am already using double buffering, with two complete sensor buffers in SDRAM0 bank 3 and SDRAM0 bank 2.

    Bank Placement is:

SDRAM0 bank 0: all program code

SDRAM0 bank 1: LCD output buffer[0], followed by data

SDRAM0 bank 2: Sensor input buffer[1], followed by data

SDRAM0 bank 3: LCD output buffer[1], followed by Sensor input buffer[0]

Note: when I use bank 0 for a frame buffer (e.g. for the sensor or the LCD), the problem gets worse and the image on the LCD skips frequently.

Any suggestion on how to improve the mapping is appreciated. I can also send the LDF file for core A if desired.

I will now check the traffic control registers and CDPRIO.



  • Hi Kaushal,

The problem is not completely solved yet. I have tried all suggestions so far, except that I am not sure what happens when I add the linker option USE_INSTRUCTION_CACHE to a standard LDF file created with VDK.

So far I have used the auto-generated VDK LDF file. This gives me all the output sections required for my project, e.g. sdram0_bank0, sdram0_bank1, sdram_shared, etc.

When I enable the USE_INSTRUCTION_CACHE option in the linker options, does this also have an effect on the standard VDK.ldf file? I have the impression it is ignored there, since I see no difference and the picture on the LCD is still shifted to the left after a while.

Regarding the PPI underrun/overrun (underrun on the LCD, overrun on the sensor), I have now generated my own LDF file using the LDF creation tool in the project options. I also require some more system heap, so I have added extra L3 memory for the system heap.

I also need to enlarge the system heap because I want to use the lwIP stack.

Thus the standard VDK template generated by VisualDSP++ cannot be used.


How do I add my output sections to my own LDF file, e.g. sdram0_bank0, sdram0_bank1, sdram0_bank2, sdram0_bank3, sdram0_shared?

I would also prefer to use the standard VDK LDF file. Which options do I need to enter in the linker options to enable USE_INSTRUCTION_CACHE and to use external memory for the system heap?




  • Hi Thomas,

You can enable the cache using the Project Wizard; see the attached screenshots for the steps to enable the instruction cache. This enables the cache in the start-up code itself.

Have you also looked at mapping things into L2 memory? It is 128 KB of shared memory, which could save you some trips to external memory. Since you are dealing with video, there are some interesting ideas in EE-301.

Also, you mentioned that you are using 32-bit DMA, but I am assuming you have also set the PACK_EN bit in the PPI control register.

Besides that, try using the data cache wherever you can. You can control the cacheable regions of your memory using the auto-generated CPLB files.

To map functions or data to custom sections, use the section qualifier at the definition, e.g.

section("section_name") void functionName() { ... }

or

section("section_name") int dataArrayName[row * column];


  • Hi Kaushal,


    Thanks for the support. I have the following settings now.


    1. I have enabled instruction cache.

    2. I have enabled the data cache.

3. PPI packing is not enabled yet. (It is also not an option, because we want to connect two sensors to one PPI and so we need the full 32 bit => 2 x 565.)

4. The CDPRIO bit is active (in my opinion it makes no difference).


VDK and SDRAM memory are initialized to full speed in a main.c code section before VDK is started.


System information, core A: cclk 525 MHz, sclk 131 MHz, vco 525 MHz.


The VDK clock frequency is set with VDK_SetClockFrequency(cclk/1000000);


Attached are some screenshots of my settings.

When I enable the data cache in my project, I get a cache-miss exception.


I will now try to place my image buffers for the LCD and the sensors in different memory banks again.

Note that I already have two memory sections each for the sensor and for the LCD (double buffering).


My question is: when I create a customized LDF file, it does not give me output sections like sdram0_bank1.

Do I need to edit the LDF file and add those sections manually in the LDF source, since in the Expert Linker the option to add a new memory section is grayed out?


Please advise on the cache-miss exception issue and on the memory map.


Hopefully this helps to eliminate the PPI buffer underrun on the LCD output and the PPI buffer overrun on the sensor input.


Thanks for your help




  • Hi Thomas,

For troubleshooting exceptions, I recommend going through EE-307; it will certainly help you minimize your debugging effort. I would also recommend going straight into the LDF source and creating your own custom input memory sections, if required. The output section names may be different when you create a customized LDF, but you can look into the LDF source to determine the correct names.

Also, have you tried mapping things into L2 and using traffic control?


"PPI packing is not enabled yet. (It is also not an option, because we want to connect two sensors to one PPI and so we need the full 32 bit => 2 x 565.)"

    Can you use packing at least on the output then?

    Hope this helps!



  • Hi Kaushal,


Sorry, maybe I did not explain myself correctly before.


On PPI0 we are using two sensors, 1x color and 1x BW (black and white), both with the same resolution of 752 x 480. So we need 2 x 8 bit => 16 bit on PPI0.


The DMA transfer is 32 bit, since this is recommended in the engineering note.


To the second PPI (PPI1) I have connected the LCD, 16-bit RGB565.


What is your recommendation for the PPI configuration register and for the 2D DMA transfer buffer configuration?


I also need to separate the PPI0 input buffer in order to get the frames of both sensors; I would use DMA in this case as well. Any ideas on how to better separate the two streams?

    Any recommendation on this?


What advantage would it bring to enable packing on PPI1?


    Thanks for the support

    Kind regards




  • ADI Support,

The problem with the LCD and the sensor seemed to be solved. But now I have added one IDMA transfer and the LCD problem came back: I always get a PPI buffer underrun. Do you have any suggestions?

The PPI FIFO is 16 bits deep. Is there any suggestion, explanation, or white paper on the DMA? It seems that the DMA stream is being interrupted.



  • Hi,

As a member of the VDK team, I can say that we are not aware of any reason why two PPIs could not be used in a VDK project. As a team we do not provide any examples beyond those that demonstrate how VDK works; any existing examples combining system services device drivers and VDK come from the System Services team. I believe that if there were an intrinsic issue here, the System Services team would have let us know, and together we would have tried to work out the best solution (or we would have documented it if there was no solution).

Hopefully somebody from System Services will be able to reply here.



Hi ADI Support,

We still have the problem with the LCD on our BF561 EZ-KIT board. The question is: why is there no VDK example from ADI with video in and LCD out? I have one working sample with video in and LCD out from the example projects, but it does not use VDK. Is this the reason why there is no VDK example using both PPIs on the BF561?

The EZ-KIT is equipped with two PPI ports, and the onboard video encoder and decoder are also connected to these PPIs, yet I could not find any example with VDK where both PPIs are used. I assume the data transfers/rates between the encoders/decoders and the core are almost the same, so from a performance standpoint there should be no difference.

Or is it not recommended to use VDK on both PPI ports? I have seen a lot of feedback in Blackfin newsgroups where developers and engineers have the same issue.

Is there an FAE who can help us solve this problem?