
About the Channel CPACK Utility Core IP output data in the ADRV9009 ZC706 reference design

Category: Software
Product Number: 0
Software Version: 21.2

Hello, I have a question about the Channel CPACK Utility Core IP output data in the ADRV9009 ZC706 reference design. I made a testbench for this IP in a new project, which contains only this IP core. From the scope image below (if it's clear enough), I think the output packed_fifo_wr_data seems wrong. Unlike its wiki description, it seems to be 0000 0000 0000 0000 4444 3333 2222 1111. Is this my error, or is it the correct output? My testbench simply drives the input data and asserts an enable signal.

I also tested with 3 channels enabled. I am not sure whether it is right or wrong; the output data is 01030201 02010302 03020103, where each single digit represents four identical values.

Thanks,

Cai

  • Hi  ,

    I looked into this issue and created a testbench for the CPack and UPack instances, since these are similar modules. I tested the modules in different configurations and checked the results by eye, and didn't find an issue with them yet. I still have to integrate a scoreboard to make sure that all of the data is transmitted properly and everything is aligned as it should be.

    You can find this testbench here: https://github.com/analogdevicesinc/testbenches/tree/util_pack. It is on a separate repository dedicated to IP and project verification. 

    A quick guide on how to use the testbenches repository, since we don't have documentation for it just yet (a shell sketch of these steps follows the list below):
    1: Clone the Testbenches repository inside the HDL repository (HDL will contain Testbenches)
    2: Go to util_pack
    3: Run this command: make MODE=gui (this will build the project, open the Vivado GUI, and run the simulation as well)
    4: Check the data and other things you're interested in
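
    Put together as shell commands, the steps above would look roughly like this (repository locations, branch and directory names are taken from this thread; adjust the paths to your own setup):

    git clone https://github.com/analogdevicesinc/hdl.git
    cd hdl
    git clone -b util_pack https://github.com/analogdevicesinc/testbenches.git
    cd testbenches/util_pack
    make MODE=gui    # builds the project, opens the Vivado GUI and runs the simulation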

    Mentions:
    You can change the configuration to whatever you want to test by going to the cfg folder under util_pack and editing the cfg1.txt file. This will give you other options on how you can set up CPack and UPack parameters for the testbench.
    Once you edit the file, you can just rerun make MODE=gui. This will rebuild the project only if the configuration file was modified. Otherwise, it'll just run the simulation. If you want to make sure you rebuild everything from scratch, so nothing is left behind, run make clean all MODE=gui. This will always clear the project and rebuild it. 
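
    For reference, the two invocations from above side by side:

    make MODE=gui              # rebuilds only if the configuration file changed, then runs the simulation
    make clean all MODE=gui    # always clears the project and rebuilds it from scratch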

    I'll be working on the scoreboard for these modules tomorrow and get back to you with an update. 

    Regards,
    -Istvan

  •  Could you help me fix this error? It seems like project-sim.mk has some errors. Or could you just show me the waves you got?

  • Hi Istvan,

    After I renamed the directory from "testbenches_util-pack" to "testbenches", it works. But then it stays on this page and gets stuck there.

    My computer also opens Vivado automatically, but it just stays on its home page and won't run the simulation.

    Thank you again for your help.

  • Hi  ,

    If you have Vivado open, please go to the Tcl Console and check the logs there to see what's happening. Sometimes Vivado doesn't throw any errors in a new window; it just doesn't run.

    Regards,
    -Istvan

  • Hi Istvan,

    the error is:

    I don't know which file this command comes from.

  • Hi  ,

    This file comes from the testbenches/scripts/ folder. The file that calls run_sim.tcl is project-sim.mk.

    As far as I can see, there is a part added to support Cygwin as well. I have used Linux up until this point and haven't tried it with Cygwin. I'll give it a go on my machine and see if I get a different error.

    Are you familiar with Makefiles and/or do you feel comfortable editing the file? Printing RUN_SIM_PATH would help us check whether the path is actually correct.
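
    If you'd like to try that, one minimal way to print it is to add a GNU make info line near where RUN_SIM_PATH is set in project-sim.mk (this is just a debugging sketch, not something already in the file):

    # temporary debug print; remove it once the path is confirmed
    $(info RUN_SIM_PATH is $(RUN_SIM_PATH))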

    Regards,
    -Istvan

  • Hi Istvan,

    I checked the path, and it is right. I still have no idea why the path loses the "/".

    Regards,

    -Lcy

  • Hi  ,

    This seems to be a makefile/Vivado issue. I have some issues running the testbench myself with Cygwin; I'll look into this part tomorrow.

    Let's try and minimize the potential issues and narrow down the problem. 

    What you could check is running a different testbench: try the dma_loopback testbench. If it fails the same way, the problem might come from the scripts; we updated them a while ago and didn't check whether they still work with Cygwin. If it runs, then it's a more difficult situation. Please also try changing the testbenches branch to 2021 and run the same dma_loopback test. If it runs on 2021 but not on the util_pack branch, then we know it's from the scripts, and we'll look into it.
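
    As a rough command-line sketch of these checks (directory and branch names are my assumptions based on this thread; the 2021 branch is referred to as tb_2021_r1 later in this conversation):

    cd hdl/testbenches/dma_loopback
    make MODE=gui                 # first try on the current util_pack branch
    cd ..
    git checkout tb_2021_r1       # then switch to the 2021 release branch
    cd dma_loopback
    make MODE=gui                 # and run the same test again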

    Regards,
    -Istvan

  • Hi Istvan,

    I checked the dma_loopback testbench from the util_pack branch, and it has the same error:

    # source ../../scripts/adi_env.tcl
    couldn't read file "../../scripts/adi_env.tcl": no such file or directory
        while executing
    "source ../../scripts/adi_env.tcl"
        (file "system_project.tcl" line 2)
    INFO: [Common 17-206] Exiting Vivado at Wed Apr 17 15:03:06 2024...

    I also tested the dma_loopback testbench from the tb_2021_r1 branch, and it works successfully.

    Thank you for your time.

    Regards,

    Lcy

  • Hi  ,

    This means that we have an issue with our testbenches for Cygwin users with the latest changes. If you have the option to run the testbench in Linux (not Cygwin), that will probably solve this issue for now. We'll look into this and prioritize working on a fix. Until then, it's rather difficult to suggest a different way to run a testbench. One option would be to cherry-pick commits from the main branch into the tb_2021_r1 branch, or to try and figure out what is causing this issue on the main branch.
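
    For the cherry-pick option, a minimal sketch (the commit hash is a placeholder; you would have to pick the actual commits from the main branch yourself):

    cd hdl/testbenches
    git checkout tb_2021_r1
    git cherry-pick <commit-from-main>    # repeat for each commit you want to carry over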

    Thank you for your patience, and sorry for making you a test engineer for us; it shouldn't be like this. 🙂

    I'm attaching a couple of screenshots of the testbench so you can check a few things on it while we're working on the fix. The configuration I used to run these tests: Channels=3, Samples=2, Width=8. What you see is the input and the output of the packers. If you'd like to check other configurations in the meantime, let me know, and I'll create a few more test scenarios and post the waveforms for you, so you can continue working on your own project.

    This is for the UPack:

    This is for the CPack:

    Please note that the scoreboard module that checks the functionality is not done just yet. 

    Regards,
    -Istvan

  • Hi Istvan,

    The screenshots helped me a lot. I also tried the "adrv9009" testbench from the tb_2021_r1 branch, but it doesn't compile successfully. I wonder if this is still a Cygwin problem? The errors are as follows.

    ERROR: [IP_Flow 19-3458] Validation failed for parameter 'NUM_MI(NUM_MI)' for BD Cell 'axi_cpu_interconnect/inst/ar_switchboard'. Value '20' is out of the range (1,16)
    ERROR: [Common 17-39] 'set_property' failed due to earlier errors.
    ERROR: [Ipptcl 7-5] XIT evaluation error: ERROR: [Common 17-39] 'set_property' failed due to earlier errors.
    
    ERROR: [Common 17-39] 'xit::source_ipfile' failed due to earlier errors.
    CRITICAL WARNING: [IP_Flow 19-1747] Failed to deliver file 'f:/Xilinx/Vivado/2021.1/data/ip/xilinx/smartconnect_v1_0/xit/update_contents.xit': ERROR: [Common 17-39] 'xit::source_ipfile' failed due to earlier errors.
    
    ERROR: [IP_Flow 19-167] Failed to deliver one or more file(s).
    ERROR: [IP_Flow 19-3541] IP Elaboration error: Failed to generate IP 'axi_cpu_interconnect'. Failed to generate 'Elaborate BD' outputs: Failed to elaborate IP.
    INFO: [xilinx.com:ip:smartconnect:1.0-1] test_harness_axi_mem_interconnect_0: SmartConnect test_harness_axi_mem_interconnect_0 is in High-performance Mode.
    ERROR: [IP_Flow 19-3458] Validation failed for parameter 'S_AWUSER_WIDTH(S_AWUSER_WIDTH)' for BD Cell 'axi_mem_interconnect/inst/s00_mmu'. Value '0' is out of the range (114,1024)
    ERROR: [Common 17-39] 'set_property' failed due to earlier errors.
    ERROR: [Common 17-39] 'set_property' failed due to earlier errors.
    CRITICAL WARNING: [IP_Flow 19-1747] Failed to deliver file 'f:/Xilinx/Vivado/2021.1/data/ip/xilinx/smartconnect_v1_0/xit/update_contents.xit': ERROR: [Common 17-39] 'set_property' failed due to earlier errors.
    
    ERROR: [IP_Flow 19-3541] IP Elaboration error: Failed to generate IP 'axi_mem_interconnect'. Failed to generate 'Elaborate BD' outputs: Failed to elaborate IP.
    CRITICAL WARNING: [xilinx.com:ip:smartconnect:1.0-1] rx_jesd_exerciser_inst_0_interconnect_0: The device(s) attached to /S00_AXI do not share a common clock domain with this smartconnect instance. Modify the clock domain values of the attached device(s) or re-customize this AXI SmartConnect instance to add a new clock pin and connect it to the same clock source of the IP attached to /S00_AXI to prevent further clock DRC violations.
    INFO: [xilinx.com:ip:smartconnect:1.0-1] rx_jesd_exerciser_inst_0_interconnect_0: SmartConnect rx_jesd_exerciser_inst_0_interconnect_0 is in Low-Area Mode.
    WARNING: [xilinx.com:ip:smartconnect:1.0-1] rx_jesd_exerciser_inst_0_interconnect_0: IP rx_jesd_exerciser_inst_0_interconnect_0 is configured in Low-area mode as all propagated traffic is low-bandwidth (AXI4LITE). SI S00_AXI has property HAS_BURST == 1. WRAP bursts are not supported in Low-area mode and will result in DECERR if received.
    WARNING: [xilinx.com:ip:smartconnect:1.0-1] rx_jesd_exerciser_inst_0_interconnect_0: If WRAP transactions are required then turn off Low-area mode using ADVANCED_PROPERTIES. Execute following: set_property CONFIG.ADVANCED_PROPERTIES {__experimental_features__ {disable_low_area_mode 1}} [get_bd_cells /rx_jesd_exerciser_inst_0_interconnect_0]
    CRITICAL WARNING: [xilinx.com:ip:smartconnect:1.0-1] tx_jesd_exerciser_inst_0_interconnect_0: The device(s) attached to /S00_AXI do not share a common clock domain with this smartconnect instance. Modify the clock domain values of the attached device(s) or re-customize this AXI SmartConnect instance to add a new clock pin and connect it to the same clock source of the IP attached to /S00_AXI to prevent further clock DRC violations.
    INFO: [xilinx.com:ip:smartconnect:1.0-1] tx_jesd_exerciser_inst_0_interconnect_0: SmartConnect tx_jesd_exerciser_inst_0_interconnect_0 is in Low-Area Mode.
    WARNING: [xilinx.com:ip:smartconnect:1.0-1] tx_jesd_exerciser_inst_0_interconnect_0: IP tx_jesd_exerciser_inst_0_interconnect_0 is configured in Low-area mode as all propagated traffic is low-bandwidth (AXI4LITE). SI S00_AXI has property HAS_BURST == 1. WRAP bursts are not supported in Low-area mode and will result in DECERR if received.
    WARNING: [xilinx.com:ip:smartconnect:1.0-1] tx_jesd_exerciser_inst_0_interconnect_0: If WRAP transactions are required then turn off Low-area mode using ADVANCED_PROPERTIES. Execute following: set_property CONFIG.ADVANCED_PROPERTIES {__experimental_features__ {disable_low_area_mode 1}} [get_bd_cells /tx_jesd_exerciser_inst_0_interconnect_0]
    CRITICAL WARNING: [xilinx.com:ip:smartconnect:1.0-1] tx_os_jesd_exerciser_inst_0_interconnect_0: The device(s) attached to /S00_AXI do not share a common clock domain with this smartconnect instance. Modify the clock domain values of the attached device(s) or re-customize this AXI SmartConnect instance to add a new clock pin and connect it to the same clock source of the IP attached to /S00_AXI to prevent further clock DRC violations.
    INFO: [xilinx.com:ip:smartconnect:1.0-1] tx_os_jesd_exerciser_inst_0_interconnect_0: SmartConnect tx_os_jesd_exerciser_inst_0_interconnect_0 is in Low-Area Mode.
    WARNING: [xilinx.com:ip:smartconnect:1.0-1] tx_os_jesd_exerciser_inst_0_interconnect_0: IP tx_os_jesd_exerciser_inst_0_interconnect_0 is configured in Low-area mode as all propagated traffic is low-bandwidth (AXI4LITE). SI S00_AXI has property HAS_BURST == 1. WRAP bursts are not supported in Low-area mode and will result in DECERR if received.
    WARNING: [xilinx.com:ip:smartconnect:1.0-1] tx_os_jesd_exerciser_inst_0_interconnect_0: If WRAP transactions are required then turn off Low-area mode using ADVANCED_PROPERTIES. Execute following: set_property CONFIG.ADVANCED_PROPERTIES {__experimental_features__ {disable_low_area_mode 1}} [get_bd_cells /tx_os_jesd_exerciser_inst_0_interconnect_0]
    validate_bd_design: Time (s): cpu = 00:00:13 ; elapsed = 00:00:13 . Memory (MB): peak = 1242.348 ; gain = 0.000
    ERROR: [Common 17-39] 'validate_bd_design' failed due to earlier errors.
    
        while executing
    "validate_bd_design"
        (procedure "adi_sim_project_xilinx" line 36)
        invoked from within
    "adi_sim_project_xilinx $project_name "xcvu9p-flga2104-2L-e""
        (file "system_project.tcl" line 20)
    INFO: [Common 17-206] Exiting Vivado at Thu Apr 18 09:06:22 2024...
    

    Regards,

    Lcy

  • Hi  ,

    Regarding the adrv9009 project, it has a known issue in system_project, where the block design fails to validate the SmartConnect. We know about this issue and are working with Xilinx engineers to figure out exactly what's going on with the project.

    The issue with Cygwin and the scripts is that the script tries to use the flock command, which we use when running make in parallel, and it seems that Cygwin does not have this command installed. I tried to add flock to Cygwin, but had issues with the installer. What you could do is go to the util_pack branch and edit line 88 to:

    $(CMD_PRE) $(M_VIVADO) $(RUN_SIM_PATH) -tclargs $(1) $(2) $(MODE) $(CMD_POST), \

    As well as lines 141-143 to:

    $(MAKE) -C $(dir $@) $(TARGET);

    This will practically remove the option to run simulations in parallel, which doesn't affect you anyway. I tried it on my Windows machine with Vivado 2021.1, HDL main, Testbenches util_pack, and version ignore enabled, and it works. We're working on finding solutions for this issue and seeing which one is best. See if this works with the util_pack testbench.
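
    Before editing anything, you can also quickly confirm from a Cygwin shell whether flock is actually missing (plain POSIX shell commands, nothing specific to our scripts):

    command -v flock && echo "flock is available" || echo "flock is missing"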

    Regards,
    -Istvan
