piclist 2000\05\04\124420a >
Thread: [EE] 24-bit A/D. Are We in the Twilite Zone Here?
www.piclist.com/techref/io/atod.htm?key=a%2Fd
BY : jamesnewton@piclist.com

Correct me if I'm wrong, but the noise for the system that Scott is talking
about has to be added BEFORE the signal is "quantized" from analog to
digital. Afterward, the extra fractional information is already lost, so
shifting the A2D result one bit left and adding a random bit would not
recover it.
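A quick simulation of that point (my own sketch, not from the post): average
many readings of a constant 4.7 input through a 1-unit truncating A/D, with
the dither applied before versus after quantization. Only the
dither-before-quantize average recovers the fraction.

```python
# Sketch: dither must be added before quantization to be useful.
# Values (4.7 input, 0..1 unit noise) are my own, not from the post.
import math
import random

def average_reading(value, dither_before, samples=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        noise = rng.random()                     # uniform 0..1 units
        if dither_before:
            total += math.floor(value + noise)   # analog dither, then A/D
        else:
            total += math.floor(value) + noise   # A/D first, dither after
    return total / samples

print(average_reading(4.7, True))   # ~4.70: fraction recovered
print(average_reading(4.7, False))  # ~4.50: fraction gone, only noise mean
```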

Again, the only reason I can see for shifting one bit left and adding a
random bit would be A) in combination with averaging and B) to make the *2
into a *2 + 0.5, and I really don't see the need for B. As an example: let's
say you have an unknown input (XU) of 8.5 and are adding random noise (RN)
between 0 and 1. Your A2D only measures units of 1 (IN). Each reading gets
shifted left one (*2) and added to half of the running accumulator (+A),
which is then shifted right one (/2); if you had a floating point divide for
the last step, you could read the result (OUT) with an extra bit of
precision past the actual input resolution.

ANALOG  |         DIGITAL
XU      RN      IN      *2      +A      /2      OUT
                                0
8.5     0.5     9       18      18      9.00    4.5
8.5     1       9       18      27      13.00   6.5
8.5     0.5     9       18      31      15.00   7.5
8.5     0       8       16      31      15.00   7.5
8.5     0.5     9       18      33      16.00   8
8.5     1       9       18      34      17.00   8.5
8.5     0.5     9       18      35      17.00   8.5
8.5     0       8       16      33      16.00   8
8.5     0.5     9       18      34      17.00   8.5
8.5     1       9       18      35      17.00   8.5
8.5     0.5     9       18      35      17.00   8.5
9       0       9       18      35      17.00   8.5
9       0.5     9       18      35      17.00   8.5
9       1       10      20      37      18.00   9
9       0.5     9       18      36      18.00   9
9       0       9       18      36      18.00   9
9       0.5     9       18      36      18.00   9
9       1       10      20      38      19.00   9.5
9       0.5     9       18      37      18.00   9
9       0       9       18      36      18.00   9
9       0.5     9       18      36      18.00   9
9       1       10      20      38      19.00   9.5
4.7     0.5     5       10      29      14.00   7
4.7     0       4       8       22      11.00   5.5
4.7     0.5     5       10      21      10.00   5
4.7     1       5       10      20      10.00   5
4.7     0.5     5       10      20      10.00   5
4.7     0       4       8       18      9.00    4.5
4.7     0.5     5       10      19      9.00    4.5
4.7     1       5       10      19      9.00    4.5
4.7     0.5     5       10      19      9.00    4.5
4.7     0       4       8       17      8.00    4
4.7     0.5     5       10      18      9.00    4.5

Let me know if anyone wants the Excel spreadsheet used to make that.
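The table's arithmetic can be sketched in a few lines (my reconstruction of
the columns, not code from the post): each doubled reading is added to half
of the previous accumulator, +A = +A//2 + *2; the /2 column is +A//2; and
OUT is /2 divided by 2 in floating point, which is where the extra half-unit
step appears.

```python
# Reconstruction of the table: running average of dithered 1-unit A/D
# readings, accumulator seeded at 0 as in the table's first row.
import math

def simulate(analog_values, noise_values):
    acc = 0                                # the +A column, seeded at 0
    outs = []
    for xu, rn in zip(analog_values, noise_values):
        a2d = math.floor(xu + rn)          # IN: 1-unit A/D, dither added
        acc = acc // 2 + a2d * 2           # +A = old +A shifted right, + *2
        outs.append((acc // 2) / 2)        # OUT = (/2 column) / 2.0
    return outs

# First rows of the table: XU = 8.5, RN cycling 0.5, 1, 0.5, 0, ...
out = simulate([8.5] * 11, [0.5, 1, 0.5, 0] * 3)
print(out[0])   # 4.5, matching the table's first OUT row
print(out[-1])  # settles at 8.5, one bit finer than the 1-unit A/D
```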

Note that the averaging is absolutely necessary, and it has the effect of
reducing the maximum frequency that can accurately be sampled by a factor of
4 or so. You can get around that (at the cost of a momentary loss of some
accuracy) by tossing the average (overwriting it with the new reading) when
the input changes by more than 1 from its previous value. Better yet,
compute the delta between the last reading and this one, divide the old
average by it, and multiply the new value by it before averaging, so that
small changes have almost no effect but large changes quickly overcome the
average.
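The simpler of those two tweaks might look like this (my naming and
structure, not the post's): keep the same acc = acc//2 + 2*reading average,
but reseed the accumulator whenever the raw reading steps by more than 1
count, so a large input change is tracked immediately instead of over the
~4-sample settling time.

```python
# Sketch of the fast-settling tweak: reseed the average on a big step.

def update(state, reading):
    acc, last = state
    if last is not None and abs(reading - last) > 1:
        acc = reading * 4               # toss the stale average
    else:
        acc = acc // 2 + reading * 2    # normal running average
    return (acc, reading), (acc // 2) / 2   # new state, OUT

state = (0, None)
for r in [9, 9, 9, 9, 5, 5]:            # input drops suddenly from ~9 to ~5
    state, out = update(state, r)
print(out)  # already 5.0 right after the step, no long settling tail
```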

Now, if the random noise were between -0.5 and +0.5 and you wanted an exact
result rather than just a more accurate proportional result, you would need
to add 0.5 in somewhere, either by adding a 1 after the *2 every other time
or by just adding 0.5 to the OUT.

But there just isn't any reason to add digital noise to an A2D result (other
than for graphing).

Right?

---