I've got a few questions about calibrating VRMS on the 7816. I quite like the idea of using the VGAIN register to allow the device to report in real-world volts (actually in tenths of mVs), thereby saving the firmware from having to do any conversion on the number read from the VRMS register. I've experimented with this using an ideal input at 250V, then using equation 7 on p24 of the datasheet to program up the VGAIN register appropriately. The results are spectacular over an extremely wide range of input voltages... from about 1V to 255V I am getting superb actual voltage readings out of the device using that one VGAIN setting.
My understanding of VGAIN is that it can scale the raw reading anywhere from 0x up to 2x (i.e. plus or minus 100%).
AN-1152 states that:
"RMS calibration does not affect the performance of the active or reactive energy."
I'd like to understand that claim better. Changing VGAIN certainly impacts the energy readings, so presumably you should calibrate VGAIN (and IGAIN) before calibrating the energy readings. Looking at the functional block diagram, Fig 1 on p1 of the datasheet, VGAIN is applied fairly early in the chain. My concern is this: if I set VGAIN so that it reduces V (i.e. set it to a negative value), does that reduce the internal resolution available to the energy-measuring subsystem, or does that subsystem still see the full resolution of the pre-VGAIN ADC value?
To use the example numbers in AN-1152: imagine I get a raw reading of 3,400,000 at 220V. I can use VGAIN to take that reading anywhere from 0 up to 6,800,000. But if I want real-world numbers coming out of the device, then setting VGAIN so that it turns 3,400,000 into 2,200,000 seems a good choice. The device is then reporting in real-world tenths-of-mVs. If you do the maths, I think that works out at a VGAIN of -2,960,685, i.e. 0xD2D2D3 as a 24-bit two's-complement value (0xFFD2D2D3 when sign-extended to 32 bits).
If I do that, will it reduce the resolution that the energy measuring system has to work with?
If the answer is YES, then my backup plan is to use a positive VGAIN to scale 3,400,000 up to 4,400,000, so the device is then reporting in real-world units of twentieths-of-mVs. The firmware can then do a simple shift-right-one-bit to turn the 4,400,000 back into 2,200,000.
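The firmware side of that backup plan would be trivial — a sketch (function name is mine):

```c
#include <stdint.h>

/* Backup plan: VGAIN is set so VRMS reports in twentieths of mV
 * (3,400,000 raw scaled up to 4,400,000 at 220V). One right shift
 * turns that back into tenths of mV. */
static uint32_t vrms_to_tenths_of_mv(uint32_t vrms_twentieths)
{
    return vrms_twentieths >> 1; /* 0.05 mV units -> 0.1 mV units */
}
```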