I am a little confused about the INL spec. It is given in LSB, but the values are decimals (fractions of an LSB). How is this derived? Is it saying that (for example, in Figure 13) if I apply an input code of 64, the output can deviate from its ideal value by a little over 10% of one full step? So, if the step size is 20 mV, does the INL tell us that the output at that particular code could be off by a little over 2 mV?
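To check my own arithmetic, here is the conversion I have in mind, using hypothetical numbers (a 0.1 LSB INL reading and a 20 mV step size, neither taken from the actual datasheet):

```python
# Hypothetical example values, NOT from the datasheet:
lsb_volts = 0.020   # one LSB (step size) in volts, i.e. 20 mV
inl_lsb = 0.1       # INL value read off the plot at one code, in LSB

# Convert the INL spec from fractions of an LSB to volts:
# deviation of the actual output from the ideal transfer line at that code.
inl_volts = inl_lsb * lsb_volts
print(f"Max deviation at that code: {inl_volts * 1e3:.1f} mV")  # → 2.0 mV
```

Is that the right way to interpret a fractional-LSB INL number, or am I mixing it up with how step-size variation (DNL) is specified?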