I'd like to take a map file (generated by CCES in XML format) and determine, e.g. using a Python script, the total memory size used by objects (both code and data) originating from a specific library or a specific object file. (I am fine with ignoring any padding due to required object alignment.) I'd like to be able to do this for a variety of cores (e.g. SHARC+ in 8-bit mode, SHARC+ in 32-bit mode, Blackfin...).
I can generate map files, parse them, and traverse the resulting document tree. I can run through all the "symbols", pick those that originate from a specific "input" library or object file, and add up their sizes (found in the "address" field). However, I am seeing some discrepancies.
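For reference, here is a minimal sketch of the approach I'm describing, using the standard library's ElementTree. The element and attribute names (`SYMBOL`, `input`, `size`) are placeholders for illustration only, since the actual names in the CCES map XML schema differ:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def sum_sizes_by_input(xml_text):
    """Sum reported symbol sizes, grouped by originating input file/library.

    Assumes (hypothetically) that each symbol is a SYMBOL element with an
    'input' attribute naming its source object/library and a 'size'
    attribute holding the size as a decimal or hex string.
    """
    root = ET.fromstring(xml_text)
    totals = defaultdict(int)
    for sym in root.iter("SYMBOL"):       # assumed element name
        origin = sym.get("input")         # assumed: originating object/library
        size = int(sym.get("size"), 0)    # base 0 accepts "0x..." or decimal
        totals[origin] += size
    return dict(totals)
```

This naively adds up whatever number the map file reports, which is exactly where the unit question below comes in.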
For example, comparing the same app built for both 32-bit mode and 8-bit mode, sizes in the 8-bit version seem to be counted in (8-bit) bytes. Sizes in the 32-bit version, however, look inconsistent: I can clearly see symbols whose size is specified in (32-bit) words, but this does not seem to be the case for all symbols. I cannot discern which sizes are in bytes and which in words (i.e. there is no obvious relationship to segment name, memory type, etc.).
Just as further illustration:
-- The size of the 8-bit version is *not* (roughly) four times the size of the 32-bit version (in fact, the sum of object sizes is not that different between the two versions).
-- At least some objects' sizes are definitely counted in bytes in the 8-bit version and in words in the 32-bit version.
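If it turns out there is some per-symbol criterion that determines the unit, the fix on my side would be a simple normalization step. A hedged sketch, where the idea that the output section determines the unit, and the section names themselves, are pure assumptions on my part:

```python
# Hypothetical: sections whose symbol sizes are reported in 32-bit words.
# These names are invented for illustration; the real criterion (if any)
# is exactly what I'm trying to find out.
WORD_SIZED_SECTIONS = {"seg_pmco", "seg_dmda"}

def size_in_bytes(reported_size, section):
    """Normalize a reported symbol size to bytes."""
    if section in WORD_SIZED_SECTIONS:
        return reported_size * 4   # 32-bit word -> 4 bytes
    return reported_size           # already in bytes
```

With such a rule, the per-library totals from the 8-bit and 32-bit builds should come out roughly comparable; without it, summing raw sizes mixes units.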
Any help is much appreciated.