I was using bit fields in VDSP and for some reason the bits follow a big-endian format, where the first bits declared are the most significant bits of a word. Is this normal operation for this compiler?
Unfortunately, the C/C++ language gurus left the layout of bitfields up to the compiler maker. Even though Analog Devices uses the same little-endian format as Intel, which makes it easy to transfer data between the two systems without having to mess about with the data, Analog Devices unfortunately chose to populate bitfields in the opposite direction from the way Microsoft and GCC do on the Intel platform. This means that when sharing bitfield data declarations between the two, you have to put "#ifdef SHARC" around each bitfield declaration and fill it in the opposite direction (and maybe add padding).
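For illustration, a minimal sketch of the kind of mirrored declaration being described, reusing the thread's "SHARC" macro as a stand-in for whatever predefined macro identifies your target (the field names and widths are invented):

    struct status_word {
    #ifdef SHARC                  /* SHARC: bitfields fill from the MSB down */
        unsigned int mode  : 4;   /* ends up in bits 31..28 */
        unsigned int flags : 12;  /* bits 27..16 */
        unsigned int count : 16;  /* bits 15..0 */
    #else                         /* MSVC/GCC on x86: fill from the LSB up */
        unsigned int count : 16;  /* bits 15..0 */
        unsigned int flags : 12;  /* bits 27..16 */
        unsigned int mode  : 4;   /* bits 31..28 */
    #endif
    };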
Very annoying that the Analog Devices development crew didn't put a bit more thought into this. That horse is out of the barn already, so they can't change it now, except that maybe they could add a compiler flag to make bitfields fill in the opposite direction, which would be very handy.
Thanks for clarifying this. It would have been good if the folks at AD had clarified this in the compiler reference manual and online help, because it sent me on a wild goose chase for a while.
Bitfield layout is an implementation-defined aspect of the language. This has two implications: different compilers are free to lay bitfields out differently, and a given compiler's layout could, in principle, change from one release to the next.
This means that if you're transferring data between two different systems, you can't just use bitfields; you have to arrange the data in a manner that doesn't involve implementation-defined aspects of the compiler.
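For instance, here is a rough sketch of doing the packing explicitly with shifts and masks, so the wire format comes out the same under every compiler (the field names and widths are invented for illustration):

    #include <stdint.h>

    /* Pack three fields into a 32-bit word with an explicit layout:
       mode in bits 31..28, flags in bits 27..16, count in bits 15..0.
       No bitfields, so no implementation-defined ordering is involved. */
    static uint32_t pack_status(unsigned mode, unsigned flags, unsigned count)
    {
        return ((uint32_t)(mode  & 0xFu)   << 28)
             | ((uint32_t)(flags & 0xFFFu) << 16)
             |  (uint32_t)(count & 0xFFFFu);
    }

    static unsigned unpack_count(uint32_t word)
    {
        return (unsigned)(word & 0xFFFFu);  /* bits 15..0 on every compiler */
    }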
But surely you would arrange the compiler so that it was consistent from one version to another, or at least offer a compiler switch to specify which ordering to use. Why would the compiler writer not do this?
We do indeed endeavour to keep such details consistent from one release to another. Sometimes, changes do occur, however; we might change such a detail if it transpired that the ADI implementation was, in fact, in violation of the Standard, or deficient in some manner. For example, in the past we've changed the implementation of C++ exception handling so that it does not rely on code-reading (which is problematic for processors whose instruction memory has no data path).
It's not something we'd change lightly. I was merely pointing out that writing your application in a manner that relies on a particular implementation of an implementation-defined language feature can lead to problems, regardless of the origin of the compiler(s).
But your job as a hardware manufacturer and compiler maker is to make our job as software developers easier. It is an academic argument to say we shouldn't rely on bitfields because the compiler maker can lay them out in an arbitrary order. By saying that, you are effectively saying: don't use bitfields at all, go back to using #define macros for hardware descriptions, even though bitfields would make everything SO MUCH EASIER.
And the truth of the matter is that once the order of bitfields is set by a compiler maker, they will not change it, because doing so would break so many things unnecessarily.
When faced with choices like this, Analog Devices should choose an order that is compatible with the dominant order on PCs, because that is what anyone using Analog Devices processors is most likely to interface with. Because they didn't in this case, bitfields are completely useless for describing data-structure interfaces between a PC and an Analog Devices processor. Instead, when sharing data-interface description files between the PC and AD processors, we have to use #define macros and do the bit-bang stuff ourselves (something like the sketch below). Either that, or describe the bitfield in both forward and reverse order with #ifdefs around our bitfield definitions, which is fraught with errors.
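The #define alternative looks something like this sketch: a single shared header that both the PC and SHARC builds can use, because every bit position is spelled out explicitly (the register name, field names, and positions are invented for illustration):

    /* ctrl_reg.h - shared between the PC and SHARC builds */
    #define CTRL_MODE_SHIFT   28
    #define CTRL_MODE_MASK    (0xFu    << CTRL_MODE_SHIFT)
    #define CTRL_FLAGS_SHIFT  16
    #define CTRL_FLAGS_MASK   (0xFFFu  << CTRL_FLAGS_SHIFT)
    #define CTRL_COUNT_SHIFT  0
    #define CTRL_COUNT_MASK   (0xFFFFu << CTRL_COUNT_SHIFT)

    #define CTRL_GET_MODE(w)   (((w) & CTRL_MODE_MASK)  >> CTRL_MODE_SHIFT)
    #define CTRL_GET_COUNT(w)  (((w) & CTRL_COUNT_MASK) >> CTRL_COUNT_SHIFT)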
Not to mention it is completely unintuitive to go MSB->LSB with your bitfield order when your processor memory order is LSB->MSB. Exhibit A, the original poster.
I still think bitfield order (MSB->LSB or LSB->MSB) should be made into a compiler option by Analog Devices. Then everyone would be happy.
dsmtoday wrote: I still think bitfield order (MSB->LSB or LSB->MSB) should be made into a compiler option by Analog Devices. Then everyone would be happy.
Perhaps another #pragma would be of use, one which forces the ordering of the bit fields so that no confusion can arise.
But at a bare minimum, if the processor uses little-endian mode then the bit fields should use it as well, because what use would bit fields be from a hardware perspective if they didn't match the underlying memory model?
ADI's SHARC compiler is a big-endian compiler (the MSW of an unsigned long long is stored at the lowest-addressed word, for example), and it allocates bitfields from the MSB of a word. The reasons for this are historical; the decisions were made more than a decade ago.
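To make the word-order point concrete, a small sketch of how a 64-bit value splits across two 32-bit words under the layout just described (this illustrates the stated rule, assuming a 64-bit long long; it is not output from the actual tools):

    #include <stdint.h>
    #include <string.h>

    /* Per the post above: on SHARC the most significant 32-bit word of an
       unsigned long long sits at the lower address; on little-endian x86
       it is the other way around. Inspecting the halves shows the difference. */
    void show_word_order(void)
    {
        unsigned long long v = 0x1122334455667788ull;
        uint32_t w[2];
        memcpy(w, &v, sizeof w);
        /* SHARC (per this thread): w[0] == 0x11223344, w[1] == 0x55667788
           x86, little-endian:      w[0] == 0x55667788, w[1] == 0x11223344 */
    }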
If ADI had selected the same layout as the Intel PC for the compiler, there'd still frequently have to be some translation, since SHARC is a word-addressed machine. And for customers who were interfacing to processors that didn't share the same layout as the PC, there'd be still more translation required. I agree that, had different decisions been made back then, you could take some short-cuts now. You would still be relying on implementation-defined behaviour, however.
It's possible that we might add support for different layouts in future, but it's not likely. Such things are problematic at the boundaries between the parts of the application that use them and the parts that don't: if you don't build your whole application with that switch, including all libraries (or specify the pragma in all the right places), you get hard-to-diagnose data-corruption problems. And they still don't remove all the necessary translation issues (byte- vs. word-addressing, again).
This is a really OLD thread, but my 2 cents...
SHARC can't claim to be any endianness. The nominal native data type of the compiler/hardware is the long word (32-bit), and the native addressing mode is long-word. Endianness is meaningless in this context; it's only meaningful if you're trying to address a native long-word value using a native short or byte addressing reference, and SHARC does not do this. It may also be meaningful in the context of an external processor or peripheral having direct access to SHARC memory, in which case the endianness depends on how the device is hooked up.
Now, SHARC can make 16-, 32-, and 64-bit memory references through an address-space mapping, but this is more of a memory-space aliasing trick supported by the hardware. In this case, if you were to store two consecutive 16-bit values (b4 b3), (b2 b1) to consecutive 16-bit addresses and then dereference the result in the 32-bit address space, your value would read back as (b2 b1 b4 b3), which is neither big- nor little-endian!
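If you do end up having to serialize through that mixed ordering to a byte-oriented host, note that swapping the 16-bit halves restores a contiguous byte order. A minimal sketch (the function name is mine, and it assumes the half-word-swapped layout described above):

    #include <stdint.h>

    /* Swap the 16-bit halves of a 32-bit word, turning the half-word-swapped
       layout (b2 b1 b4 b3) described above back into contiguous (b4 b3 b2 b1). */
    static uint32_t swap_halves(uint32_t w)
    {
        return (uint32_t)(w << 16) | (w >> 16);
    }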
This is where Analog screwed up, IMHO. They should have kept the addressing scheme such that this sequence would preserve the original order, which would have made serialization over 32-bit interfaces easier. But I'm sure they had their reasons.
I agree that the bit-field packing order should be defined, even if it is not defined by the C language. After all, the whole reason you use C is that it allows you to do otherwise completely insane pointer dereferencing on incompatible data types to solve real-world data-exchange problems. But in this case I think the lesson is: don't rely on bit fields.