I'm using an ADXL362, reading the 16-bit data registers in the ±2g range.
The function I'm using to read the axis data registers:
void ADXL_ReadXYZ_16(uint16_t *x, uint16_t *y, uint16_t *z)
{
    uint8_t rawData[6];

    SPI_ReadRegister(0x0E, rawData, 6); /* burst read XDATA_L through ZDATA_H */
    *x = (uint16_t)((rawData[1] << 8) | rawData[0]);
    *y = (uint16_t)((rawData[3] << 8) | rawData[2]);
    *z = (uint16_t)((rawData[5] << 8) | rawData[4]);
}
Snippet for the conversion of the z-axis (the same procedure is used for the other two axes):
ADXL_ReadXYZ_16(&uint16_x, &uint16_y, &uint16_z);
/* Check sign */
if ((uint16_z & 0xF000) == 0xF000)
{
    /* 0xFB00 removes the extended sign bits & the sign bit at bit 11;
       scale factor 1000 from the datasheet */
    z = -1 * ((float)(uint16_z - 0xFB00)) / 1000.0;
}
else
{
    z = (float)uint16_z / 1000.0;
}
Typical readings with the sensor flat on the bench:
x : 0.01 y : 0.01 z : 1.35
With the sensor tilted in the +y direction:
x : 0.01 or -2.04 y : 1.01 z : 0.3
With the sensor tilted in the -y direction:
x : 0.02 y : -1.01 z : 0.35
With the sensor tilted in the +x direction:
x : 1.02 y : 0.01 z : 0.48
With the sensor tilted in the -x direction:
x : 1.10 y : -2.03 z : 0.34
So there appears to be a consistent offset in z of around 0.3, but I don't understand why I'm getting those -2.04 values on the x and y axes when I'm expecting something close to 0.
Can anyone shed any light on this for me?