I'm using the Indoor Occupancy FileIO Example with the BLIP2. I ran it and was able to see the demo_indoors_curr.yuv video with the bounding boxes as shown in the documentation. Is it possible to modify which video it uses for the demo? Also, does that video have to be in the eyuv file extension or could it simply be in any yuv extension?
I'm primarily interested in evaluating the algorithm by running it offline on many prerecorded videos, labeling the data with some ground truth occupancy, and determining how well it performs. I was thinking of modifying the code in the track_main.c file and writing the blobs info to a .dat file, similar to the AD VisionSensor GUI. Otherwise, is there a simpler way to use the AD VisionSensor GUI to run the indoor occupancy algorithm on videos from a PC, rather than real time through the BLIP2 board?
Unfortunately, the ADVisionSensor GUI doesn't support file reads, so you will have to use the Indoor Occupancy FileIO Example for now.
The input file is currently specified in the example.cmd file, which can be found in your workspace at:
<cces_work_space>/examples/indooroccupancysensor.blackfin_1.0.0/IOS FileIO BLIP2/demo/framework/utils
The contents of the file are currently:
iosdet-indoors -i ../../../../Media/IndoorOccupancySensor/ios-demo-adi-320x240.eyuv -o demo_indoors -w 320 -h 240 -n 357 -R 25 -r 25 -f 0 ;
The -i switch gives the input file, and the -o switch gives the name of the output file. You can edit this line or add new lines to the example.cmd file to run different files. Details of the other switches are given in the document KT-2509_IOS_UsersGuide.pdf under section 3.2.6, Command Line switches for the File IO Application.
So using this example.cmd file you can run multiple files of different formats sequentially. The supported input formats are Y-only, YUV422, and YUV420, and the file extension can be anything, not necessarily .eyuv. I believe this is the best way to handle multiple input videos.
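For instance, a batch run over several recordings could look like the sketch below. The file names, output names, and frame counts (-n) here are placeholders; use the switches that match each recording's actual resolution and length:

```
iosdet-indoors -i ../../../../Media/IndoorOccupancySensor/my_recording_1.yuv -o results_1 -w 320 -h 240 -n 300 -R 25 -r 25 -f 0 ;
iosdet-indoors -i ../../../../Media/IndoorOccupancySensor/my_recording_2.yuv -o results_2 -w 320 -h 240 -n 450 -R 25 -r 25 -f 0 ;
```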
Regarding writing the blob info to a binary file, I would suggest doing this in the application file itself, ios_fileio.c. You can modify the DumpOutput function to write the displayed blobs to a file; the graphics display call will give you an idea of the format of the object that holds the blob information.
Hope that helps,
Thanks, that makes more sense. So for a batch-mode evaluation, I can add more files to the cmd file with the correct switches.
Also, does it matter whether I capture these short occupancy videos for evaluation via 1) the recording feature in the AD Vision Sensor GUI or 2) the Frame Capture Example?
I would recommend using the ADVisionSensor GUI for capturing the videos: since it uses the USB channel for writing the files, it will be much faster than the FrameCapture example. You can use the recording feature in the capture mode for recording the vectors.