In Simulation mode, CCES (running under the Eclipse IDE) runs our data-processing algorithms roughly 200 times slower than VisualDSP++. Both environments run the exact same source code, compiled under the respective toolchain. The next step is to load the code onto the target Blackfin DSP, where (going by what discussion-board threads suggest) I expect the CCES build to run more efficiently on the actual processor than the VisualDSP++ build. Both environments operate on equal-size blocks of N long values at a time. The code uses the native fract and long fract types for its fixed-point arithmetic routines, along with the built-in filtering functions.
Does anyone know whether this is simply a CCES issue, or are there settings that can bring its simulation performance more in line with execution under VisualDSP++?