I urgently need a way to output a 0..5 V DC voltage with at least 12-bit
resolution. Because the device is powered up and down frequently, and because
there are some fail-safe issues behind it, I want the DC value to be
non-volatile. Restrictions are, as always: SPACE, PRICE, AVAILABILITY.
The best idea I could think of so far is using an E-POT or, even better, an
E-DAC.
First question: does anybody know who makes good/inexpensive E-POTs/E-DACs?
A further problem is that the E-POTs/DACs I have found so far usually provide
8-bit resolution. This means I would have to use two of them, adding/combining
their outputs via a high-precision op-amp.
But the MAIN QUESTION is:
Is my way a good one at all, or does anybody know something completely
different that is probably BETTER / EASIER / CHEAPER or even more RELIABLE?
Many thanks in advance for spending some time thinking about it.
As I try to recall, there is a company, Frambus or something like that, that
makes a series of parts based on ferroelectric memory cell technology. In
addition to parts that look and feel like non-volatile SRAM, they also make an
8-bit HC373-type buffer lookalike. Use two of these and a standard D/A, and
you would have non-volatility.
> Good morning all,
>
> i urgently need a way to output a 0..5V dc voltage with at least 12 bit
> resolution. Because the device is powered up and down frequently and because
> there are some fail safe issues behind, i want the dc value to be non
> volatile. Restrictions are as always SPACE, PRICE, AVAIL.
>
> The best idea i could think of so far is using an E-POT or even better an
> E-DAC.
> First question is: Does anybody know who makes good/inexpensive
> E-POTs/E-DACs?
>
> A further problem is that the E-POTs/DACs i found so far usually provide 8
> bit resolution. This means i had to use two of them by adding/combining its
> outputs via a high precision op-amp.
> But the MAIN QUESTION however is:
>
> Is my way a good one at all, or does anybody know something completely
> different that is probably BETTER / EASIER / CHEAPER or even more RELIABLE.
>
> Much thanks in advance for spending some time to think about.
>
> Germain Morbe
>
> Combining the outputs will _not_ work. The MSBits DAC will not have the
> precision needed.
>
> Bob Ammerman
> RAm Systems
> (contract development of high performance, high function, low-level
> software)
Bob, you sound like you are doubtless right. Can you explain the reason to
me more precisely? Both DACs would deliver 5 V in 256 steps; I planned to
scale one of them down by 256 to get the LSB voltage, then add them up with
an op-amp. For the moment I don't see why this should not work. Please
comment.
> Chris, thanks, your tip may be worth a look.
>
> > Combining the outputs will _not_ work. The MSBits DAC will not have the
> > precision needed.
> >
> > Bob Ammerman
> > RAm Systems
> > (contract development of high performance, high function, low-level
> > software)
>
> Bob, you sound like you are doubtless right. Can you explain the reason to
> me more precisely? Both DACs would deliver 5 V in 256 steps; I planned to
> scale one of them down by 256 to get the LSB voltage, then add them up with
> an op-amp. For the moment I don't see why this should not work. Please
> comment.
>
Say each D/A has a maximum error of 0.5 bits. This would be 9.8 mV on
the "most significant" D/A (the one that is not divided by 256). The
"least significant" D/A, after division, has a full-scale range of 19.6
mV, so each bit is about 76 uV. The 0.5-bit error in the most significant
D/A corresponds to about 128 counts of the least significant D/A.
> Say each D/A has a maximum error of 0.5 bits. This would be 9.8 mV on
>the "most significant" D/A (the one that is not divided by 256. The
>"least significant" D/A, after division, has a full scale range of 19.6
>mV, so each bit is 76.89 uV. The 0.5 bit error in the most significant
>D/A corresponds to about 128 counts of the least significant D/A.
I guess you could make it work if you had a real 12-bit or better A/D
and read the output voltage. Use one D/A as coarse and the other as fine.
Messy, and you'd have to check that the stability was reasonable; it may
not be good enough to call it 12-bit.
Best regards,
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Spehro Pefhany --"it's the network..." "The Journey is the reward"
speff@interlog.com Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog Info for designers: http://www.speff.com
Contributions invited->The AVR-gcc FAQ is at: http://www.bluecollarlinux.com
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Chris, thanks your tip may be worth a look.
>
> > Combining the outputs will _not_ work. The MSBits DAC will not have the
> > precision needed.
> >
> > Bob Ammerman
> > RAm Systems
> > (contract development of high performance, high function, low-level
> > software)
>
> Bob, you sound like you are doubtless right. Can you explain the reason to
> me more precisely. Both DACs would deliver 5V in 256 steps, i planned to
> scale down one of them by 256 to get the LSB voltage then add them up with
> an op-amp. For the moment i dont see why this should not work. Please
> comment.
Sure.
You see, a D/A is not infinitely precise. In fact, it is typically
specified only to within 1/2 of the value of the least significant bit.
Let's take a concrete example:
Assume we have an 8-bit D/A whose output ranges from 0 to 2.55 volts (ie:
0.01 volts per step).
The code 00000000 is only guaranteed to generate an output in the range
0.000 to 0.005.
The code 00000001 is only guaranteed to generate an output in the range
0.005 to 0.015.
etc.
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
software)
Depending on speed there are a couple of ways that will give
simple linear D/A conversion.
1) PWM: slow response; 12 bits will need 16-bit control registers and a
   very stable power supply.
2) There is a novel PWM D/A that works surprisingly well, although 12
   bits might be pushing the envelope. Use a random number generator
   that is random over the desired digital range. To generate an analog
   output, generate a random number and compare it to the desired
   output number. If the random number is less than the desired number,
   output a 1, otherwise output a 0 to an output pin. Feed the output
   pin through a low-pass filter. Do this whenever possible in your
   code. Its advantage is that the code is asynchronous: call it while
   waiting for a keystroke, in a timer interrupt, or in an idle wait
   loop. I have used this in a DTMF dialer, generating a complex analog
   waveform within CCITT standards with only a non-critical resistor and
   a small filter cap.
> i urgently need a way to output a 0..5V dc voltage with at least 12 bit
> resolution. Because the device is powered up and down frequently and because
> there are some fail safe issues behind, i want the dc value to be non
> volatile. Restrictions are as always SPACE, PRICE, AVAIL.
>
>
> 2) There is a novel PWM D/A that works surprisingly well although 12
> bits might be pushing the envelop. Use a random number generator
> that is random over the desired digital range. To generate a analog
> output generate a random number and compare it to the desired
> output number. If the random number is less than the desired number
> output a 1 otherwise output a 0 to an output pin. Feed the output
> pin through a low pass filter. Do this whenever possible in your
> code. It's advantage is that the code is asynchronous , waiting
> for a keystroke in a timer interrupt or a idle wait loop. I have
> used this in a DTMF dialer generating an analog complex wave form
> within CCTIT standards with only a non critical resistor and
> small filter cap.
Now that is clever! The only thing you have to guarantee is that "enough"
samples are generated to satisfy Nyquist, but you don't need to worry about when
they occur. The sampling criterion has become a statistical problem.
> A further problem is that the E-POTs/DACs i found so far usually provide 8
> bit resolution. This means i had to use two of them by adding/combining its
> outputs via a high precision op-amp.
You have to be careful here. Most D/As are no better than +- 1/2 LSB. You
therefore can't just add the scaled outputs of two such 8 bit D/As to make a
useful 12 bit D/A. The combined D/A will still be +- 1/2 LSB of the top 8
bits, which means the extra low bits are meaningless.
> But the MAIN QUESTION however is:
>
> Is my way a good one at all, or does anybody know something completely
> different that is probably BETTER / EASIER / CHEAPER or even more RELIABLE.
What's wrong with the obvious solution of using a 12 bit D/A? Keep in mind that
you have to pay reasonable attention to offset voltages, drifts, and other
sources of error at this resolution.
> 2) There is a novel PWM D/A that works surprisingly well although 12
> bits might be pushing the envelop. Use a random number generator
> that is random over the desired digital range. To generate a analog
> output generate a random number and compare it to the desired
> output number. If the random number is less than the desired number
> output a 1 otherwise output a 0 to an output pin. Feed the output
> pin through a low pass filter. Do this whenever possible in your
> code. It's advantage is that the code is asynchronous , waiting
> for a keystroke in a timer interrupt or a idle wait loop. I have
> used this in a DTMF dialer generating an analog complex wave form
> within CCTIT standards with only a non critical resistor and
> small filter cap.
This is a long description for dithering with white noise. The problem with
this method is that there is no guarantee on the lower bounds of the
frequency content. It is therefore impossible to guarantee response time or
to design a filter that guarantees a minimum signal to noise ratio.
> I guess you could make it work if you had a real 12 bit or better A/D,
> and read the output voltage. Use one D/A as a coarse and the other as fine.
> Messy, and you'd have to check that the stability was reasonable, it may
> not be good enough to call it 12-bit.
I'm not recommending this as a solution here; this is more of an aside
discussion. I once built a 12 bit D/A out of "cheap" parts to prove a
point. Instead of the usual R - 2R resistor ladder, I decreased the 2R a
little so that the bits would overlap somewhat, but built a total of 16
bits. This meant there were 65536 levels scattered about over the desired
range in such a way that one was always within 1/2 LSB out of 12 at any
point along the range. I measured all 65536 levels with a computer
controlled precision voltmeter (I worked for HP at the time and we had these
things available in the lab), then had the computer pick the 4096 values that
were closest to the desired ones and saved them in a lookup table. Poof, 12
bit D/A as long as you used the lookup table to drive it. This thing was
useless over temperature because I used individual discrete resistors, but
it achieved its purpose of proving a point.
Olin Lathrop wrote:
>
> > 2) There is a novel PWM D/A that works surprisingly well although 12
> > bits might be pushing the envelop. Use a random number generator
> > that is random over the desired digital range. To generate a analog
> > output generate a random number and compare it to the desired
> > output number. If the random number is less than the desired number
> > output a 1 otherwise output a 0 to an output pin. Feed the output
> > pin through a low pass filter. Do this whenever possible in your
> > code. It's advantage is that the code is asynchronous , waiting
> > for a keystroke in a timer interrupt or a idle wait loop. I have
> > used this in a DTMF dialer generating an analog complex wave form
> > within CCTIT standards with only a non critical resistor and
> > small filter cap.
>
> This is a long description for dithering with white noise. The problem with
> this method is that there is no guarantee on the lower bounds of the
> frequency content. It is therefore impossible to guarantee response time or
> to design a filter that guarantees a minimum signal to noise ratio.
Points are well taken. Frequency response is a function of the update
period; tying at least one of the update sources to a fixed-period
recurring interrupt (timer) establishes the lower bound of the frequency
content. This was suggested in my post but not explained. Its primary
advantage is a simple D/A with a minimum of components.
This is one way of implementing a sigma-delta D/A converter, and it can work
quite well, provided...
A converter like this works great when it's generating an output of about
50% of full-scale. To produce 50%, it needs to generate a stream of ones
and zeroes that's 50% ones, and 50% zeroes. The way it's usually done with
a delta-sigma, in fact, guarantees that this bit stream will be a square
wave at the update rate.
In this case, the lowest frequency present is the update rate, and since the
whole thing is so simple, it's easy to make the update rate high enough that
it only takes a very simple reconstruction filter to make a nice clean
output signal. Generally, for outputs in the 25% to 75% range, a converter
like this will outperform a PWM in terms of usable resolution versus number
of poles of reconstruction filter.
But as the circuit is required to make an output nearer either the 0 or the
1 rail, it must produce an output stream that has mostly zeroes or mostly
ones. Eventually at the extreme, it must produce a stream of
all-zeroes-and-a-single-one or vice versa. For example, for 12 bits, to
produce the output of one lsb above zero, it must produce 4095 zeroes and 1
one, which is exactly the same waveform you will get from a PWM. The lowest
spurious frequency in this case is the update rate/4096, which means that to
get a clean output, you will need lots of poles of filtering, or the update
rate will have to be much, much higher than the bandwidth you need in your
output.
3) An 8-bit DAC working at a higher frequency. At each sample, calculate
   the error between the 12-bit input and the 8-bit output (the output
   is taken as the 8 higher bits of the input). The error should be
   calculated in relative units, 1 = full scale (it is simply the lower
   bits of the input). At the next sample, add this error to the input.
   This will compensate for the reduced resolution in the long run. It
   works as a kind of PWM, but doesn't need high oversampling, since you
   need to add only 4 bits of resolution. Also, adding a small amount of
   noise to the input will help to 'randomize' regular patterns at the
   output.
See also an excellent article on dithering and noise-shaping at
> Depending on speed there are a couple ways that will give
> simple linear D/A conversion.
> 1) PWM slow response 12 bits will need 16bit control registers and a
> very stable power supply.
> 2) There is a novel PWM D/A that works surprisingly well although 12
> bits might be pushing the envelop. Use a random number generator
> that is random over the desired digital range. To generate a analog
> output generate a random number and compare it to the desired
> output number. If the random number is less than the desired number
> output a 1 otherwise output a 0 to an output pin. Feed the output
> pin through a low pass filter. Do this whenever possible in your
> code. It's advantage is that the code is asynchronous , waiting
> for a keystroke in a timer interrupt or a idle wait loop. I have
> used this in a DTMF dialer generating an analog complex wave form
> within CCTIT standards with only a non critical resistor and
> small filter cap.
>> i urgently need a way to output a 0..5V dc voltage with at least 12 bit
>> resolution. Because the device is powered up and down frequently and because
>> there are some fail safe issues behind, i want the dc value to be non
>> volatile. Restrictions are as always SPACE, PRICE, AVAIL.
>>
>>
> Olin, why is it impossible to guarantee response time or design a
> filter for that? Could you please explain in more detail?
The way I understood it, you were dithering the input signal with a random
number, meaning white noise in signal processing terms. If your random
number generator is really good, then each output value has no correlation
to any previous output values and has an equal probability of being anywhere
in the output range. This means it contains all frequencies (up to what can
be represented by the sample rate) equally. In other words, you are
dithering with a signal that contains 1Hz, .1Hz, .01Hz, etc.
Let's look at this in the time domain instead of frequency. Assume your
desired duty cycle is 50% and your random numbers range from 0 to 1. There
is no guarantee how long a continuous string of random numbers might be
below 1/2. It gets increasingly unlikely for longer strings, but there is
no guaranteed upper limit.
Look at this another way. The reconstruction filter will remove frequencies
above some value, but the dither signal will always contain frequencies
below that value. Therefore, there will always be low frequency noise on
the reconstructed signal.
> > Olin, why is it impossible to guarantee response time or design a
> > filter for that? Could you please explain in more detail?
>
> The way I understood it, you were dithering the input signal with a random
> number, meaning white noise in signal processing terms. If your random
> number generator is really good, then each output value has no correlation
> to any previous output values and has an equal probability of being anywhere
> in the output range. This means it contains all frequencies (up to what can
> be represented by the sample rate) equally. In other words, you are
> dithering with a signal that contains 1Hz, .1Hz, .01Hz, etc.
>
> Let's look at this in the time domain instead of frequency. Assume your
> desired duty cycle is 50% and your random numbers range from 0 to 1. There
> is no guarantee how long a continuous string of random numbers might be
> below 1/2. It gets increasingly unlikely for longer strings, but there is
> no guaranteed upper limit.
Actually, there is a guarantee of the longest possible string of 0/1 -
implied by the width of the PRNG. But it is long enough to not be any real
help here!
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
software)
It also can be said that for each sample of the random generator there is
an equal probability of being either under 0.5 or over. And if we take a
sufficient number of samples, then half of them are under 0.5 and half
are over. The 'sufficient number' depends on the particular
implementation of the pseudo-random generator, but in practice I see
no more than 10 consecutive samples in one region (using the MATLAB rand
function for about 1000 samples), and that happens only about twice.
So it is probably safe to say that the response delay of a 1-bit DAC using
dithering is x samples on average. It can be measured if necessary.
Right, adding white noise to a signal results in increased noise over
the whole frequency range. And the noise power will increase equally
over the whole range even if the signal is quantized (quoting the
article I referenced in the previous message).
It is possible, though, to shape that 'noise floor', for example to push
it away from the low frequency range to the high frequency range, using
feedback. The simplest way is to remember the error during each 12-to-1-bit
conversion and subtract it from the next input. See Fig. 2 for what it will
look like. It almost completely removes noise at DC.
>> Olin, why is it impossible to guarantee response time or design a
>> filter for that? Could you please explain in more detail?
> The way I understood it, you were dithering the input signal with a random
> number, meaning white noise in signal processing terms. If your random
> number generator is really good, then each output value has no correlation
> to any prevous output values and has an equal probability of being anywhere
> in the output range. This means it contains all frequencies (up to what can
> be represented by the sample rate) equally. In other words, you are
> dithering with a signal that contains 1Hz, .1Hz, .01Hz, etc.
> Let's look at this in the time domain instead of frequency. Assume your
> desired duty cycle is 50% and your random numbers range from 0 to 1. There
> is no guarantee how long a continuous string of random numbers might be
> below 1/2. It gets increasingly unlikely for longer strings, but there is
> no guaranteed upper limit.
> Look at this another way. The reconstruction filter will remove frequencies
> above some value, but the dither signal will always contain frequencies
> below that value. Therefore, there will always be low frequency noise on
> the reconstructed signal.
>
> Thanks Olin, I can see your points.
>
> It also can be said that for each sample of random generator there is
> an equal probability of being either under 0.5 or over. And if we take a
> sufficient number of samples than half of them are under 0.5 and half
> are over. The 'sufficient number' depends on a particular
> implementation of a pseudo-random generator, but in practice I can see
> no more than 10 consecutive samples in one region (using a MATLAB rand
> function for about 1000 samples), and it happens just about two times.
>
> So it is probably safe to say that response delay of a 1 bit DAC using
> dithering is x samples in average. It can be measured if necessary.
>
> Right, adding white noise to a signal results in increased noise over
> the whole frequency range. And the noise power will increase equally
> over the whole range even if the signal is quantized (quoting the
> article I referenced in the previous message)
>
> It is possible though to shape that 'noise floor', for example push it
> away from the low frequency range to the high frequency range, using a
> feedback. The simplest way is to remember the error during each 12 to
> 1 bit conversion and subtract it from the next input. See Fig. 2 for
> how will it look like. It almost completely removes noise at DC.
>
> Quite usable technique in my opinion.
>
> Nikolai
Nikolai, Olin, would it be possible to use a sawtooth
(ramped) waveform as the reference instead of a random
waveform? This would give the same effect but is more
predictable, and if you know the PWM freq and the
sawtooth reference freq you could tune it to give
minimum error, or at least a reliable error?? :o)
-Roman
> Nikolai, Olin, would it be possible to use a sawtooth
> (ramped) waveform as the reference instead of a random
> waveform? This would give the same effect but is more
> predictable, and if you know the PWM freq and the
> sawtooth reference freq you could tune it to give
> minimum error, or at least a reliable error?? :o)
> -Roman
>
Roman,
I've got a suspicion: that would be conventional PWM, wouldn't it?
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
software)
>Roman,
>I've got a suspicion: that would be conventional PWM, wouldn't it?
Bob:-
To me, the unique feature of Walter's suggestion is that it didn't
have to be called with an exact time between calls. If you started
making it a sawtooth (a counter or whatever) then you have to start
worrying about beat frequencies between whatever is going on in
the software and the period of the sawtooth. Ugh.
As you say, if you call it regularly at fixed times, it's just a
software PWM.
Best regards,
Spehro Pefhany
> It also can be said that for each sample of random generator there is
> an equal probability of being either under 0.5 or over. And if we take a
> sufficient number of samples than half of them are under 0.5 and half
> are over. The 'sufficient number' depends on a particular
> implementation of a pseudo-random generator, but in practice I can see
> no more than 10 consecutive samples in one region (using a MATLAB rand
> function for about 1000 samples), and it happens just about two times.
With an ideal random number generator, you have about one chance in 1000 of
any 10 values being below 1/2.
> So it is probably safe to say that response delay of a 1 bit DAC using
> dithering is x samples in average. It can be measured if necessary.
Only if your random number generator has some known characteristics that
make it not purely random. If it is truly random, then measuring it is
useless because all future values have no correlation to previous values, by
definition.
> Right, adding white noise to a signal results in increased noise over
> the whole frequency range. And the noise power will increase equally
> over the whole range even if the signal is quantized (quoting the
> article I referenced in the previous message)
>
> It is possible though to shape that 'noise floor', for example push it
> away from the low frequency range to the high frequency range, using a
> feedback. The simplest way is to remember the error during each 12 to
> 1 bit conversion and subtract it from the next input. See Fig. 2 for
> how will it look like. It almost completely removes noise at DC.
Yes, there are various techniques that improve over dithering with a random
signal. The error diffusion technique you describe is a common one,
although it has its own artifacts. Whether the result is an improvement
depends on the desired characteristics of the output signal. In other
words, you can change the characteristics of the noise, but you can't get
rid of it. You have to know the application to decide which characteristics
are most and least objectionable.
Another point I didn't mention before is that an incoming signal of 1/2 is
the easy case. Imagine if the desired average value is 1/100. Only about 1
in 100 random values will be below that threshold, so any problems with
low frequency noise just got much worse.
> Quite usable technique in my opinion.
I'm not trying to belittle the technique, although I do believe that for
most applications fixed-period PWM has better characteristics. I think the
advantage of the dithering techniques is that they use fewer processor
resources if implemented in software.
I also wanted to point out the drawbacks because I got the impression that
some people on the list thought it was a novel and wonderful idea (probably
because they hadn't heard of it before) without having thought it through.
> Nikolai, Olin, would it be possible to use a sawtooth
> (ramped) waveform as the reference instead of a random
> waveform? This would give the same effect but is more
> predictable, and if you know the PWM freq and the
> sawtooth reference freq you could tune it to give
> minimum error, or at least a reliable error?? :o)
What you describe is fixed period PWM. Yes I agree that fixed period PWM
has better characteristics for most applications.
Bob Ammerman wrote:
>
> > Nikolai, Olin, would it be possible to use a sawtooth
> > (ramped) waveform as the reference instead of a random
> > waveform? This would give the same effect but is more
> > predictable, and if you know the PWM freq and the
> > sawtooth reference freq you could tune it to give
> > minimum error, or at least a reliable error?? :o)
> > -Roman
> >
>
> Roman,
>
> I've got a suspicion: that would be conventional PWM, wouldn't it?
>
Ha ha! I knew someone would say that! The original thread
was about using a random number and comparing against
the ref value, to give a functional PWM that had no real
timing constraints. You could go and do software
functions of any length and return to refresh the PWM
anytime with the same average DC value output.
I liked that idea a lot. :o)
Instead of comparing the ref value to a random signal,
compare it to a ramped sawtooth signal. This has the
same average DC value and expresses the full range of
values with equal weighting. So assuming you "refresh"
it at a freq different to the sawtooth it will give the
same average effect as the random waveform. And refresh
timing is non-critical, as long as refresh freq and the
sawtooth freq are different enough it should work fine.
In effect the "randomness" is provided by the difference
between the two frequencies and the differing delays
between refreshes. :o)
-Roman
> Instead of comparing the ref value to a random signal,
> compare it to a ramped sawtooth signal. This has the
> same average DC value and expresses the full range of
> values with equal weighting. So assuming you "refresh"
> it at a freq different to the sawtooth it will give the
> same average effect as the random waveform. And refresh
> timing is non-critical, as long as refresh freq and the
> sawtooth freq are different enough it should work fine.
This opens you up to aliasing problems in sampling the sawtooth waveform.
Imagine, for example, that you got back to PWM code every time when the
sawtooth was at its peak.
I think the most workable of these schemes we've discussed where you do the
PWM when you get around to it is the one that propagates the error from the
previous decision. There is no random or sawtooth generator, but you do
have to keep track of how long the PWM has been sitting at the previous
value and do a multiply to calculate the accumulated error.
>>Roman,
>>I've got a suspicion: that would be conventional PWM, wouldn't it?
> Bob:-
> To me, the unique feature of Walter's suggestion is that it didn't
> have to be called with an exact time between calls. If you started
> making it a sawtooth (a counter or whatever) then you have to start
> worrying about beat frequencies between whatever is going on in
> the software and the period of the sawtooth. Ugh.
> As you say, if you call it regularly at fixed times, it's just a
> software PWM.
> Best regards,
> Spehro Pefhany
There is a family of asynchronous D/A converter algorithms
that are useful when peak execution cycles are at a premium
and the average is still less than the maximum available
execution cycles.
The random reference algorithm described a few days ago
is the only one I know that is both asynchronous and doesn't
need an accurate time reference.
There are a couple of different ways that delta mod can be
used. A few days ago Nikolai Golovchenko clearly described
a technique that can be extended to be both asynchronous and
have extended precision. Nikolai's algorithm makes a
simple assumption that each sample error is integrated
over a normalized signal unit of time.
If the error is integrated accounting for time, then
each output sample can be asynchronous and the
errors can be compensated.
This idea can be extended and used in two ways.
First, the simplest D/A converter is a single bit
(loosely, delta mod). This reduces extended precision
to error management: more precision at increased
processing requirements.
Secondly, to have a stable D/A, most delta-mod
D/As assume the output is periodic. By
including time in the error calculations we can eliminate
the need for tightly periodic outputs. There is usually a
simple way this can be done. When we integrate
errors over a normalized unit of time we are actually multiplying
by 1. If we have a one-bit D/A, then the value we
integrate is 1 multiplied by the amount of elapsed time. The elapsed
time can be had by sampling a free-running timer and
calculating the delta time with a subtract.
The net result is an asynchronous software D/A using
very simple code.
Walter Banks
Nikolai Golovchenko wrote:
> An 8 bit DAC working at higher frequency. At each sample,
> calculate the error between 12 bit input and 8 bit output
> (output is taken as 8 higher bits of input). Error should
> be calculated in relative units, 1=full scale. (It is
> simply the lower bits of input). At the next sample, add
> this error to the input. This will compensate for reduced
> resolution in the long run. Works as some kind of PWM,
> but doesn't need high oversampling, since you need to
> add only 4 bits of resolution.
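Nikolai's scheme is easy to sandbox on a PC before committing it to a PIC. A quick simulation sketch (Python here, purely illustrative; the test values and update counts are arbitrary):

```python
def dac12_on_dac8(value12, updates):
    """Nikolai's error-feedback idea: write the top 8 bits of a
    12-bit value to an 8-bit DAC and carry the truncated low 4 bits
    forward as an error term added to the input at the next update.
    Returns the long-run average DAC output in 12-bit units."""
    error = 0
    total = 0
    for _ in range(updates):
        acc = value12 + error
        code8 = acc >> 4          # what the 8-bit DAC actually outputs
        error = acc & 0x0F        # low bits deferred to the next update
        total += code8 << 4       # DAC output expressed in 12-bit units
    return total / updates
```

Because each update's truncation error is folded back in at the next update, the error never accumulates beyond one LSB of the 8-bit code, and the filtered average recovers the full 12-bit value.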
Just a quick question regarding the PRNG D/A converter. If I needed
multiple D/A outputs, is anything stopping me from using just one PRNG, and
comparing against multiple reference values?
Nikolai Golovchenko wrote:
>
> Wow! I missed this feature at first. Just tweak the output when
> you have time, and who cares about PWM or saw-tooth waves! :)
>
> Nikolai
Yes, it appealed to me also: the ability to return to
the pwm routine at any time, refresh it, and still get
a fair pwm analogue output.
I suggested a sawtooth wave for a number of reasons. The
problem with true random numbers is that they are totally
unpredictable. When I worked as a games programmer,
*everything* used random numbers: chance of the player being
injured, chance of getting the goodies, etc.
No games programmer uses real randomness. You *absolutely
will* get long strings of good or bad numbers based only
on luck. Generating good pseudo-random numbers is an
art, but the basis of that art is ensuring an EQUAL
weighting of all samples. In other words, if you take 100
samples, about 10 should be under 10%, about 10 should
be from 10% to 20%, etc. This is vitally important
in the real world.
With a pwm dependent on the random number, it is quite
possible that in 1000 samples (1 second?) all the
random samples will be low. That demands a ridiculously long
RC time constant to try and filter it, with a chance of
scary errors when a lot of "bad" samples happen. Which
is a matter of pure luck! There will be times when 1000
sequential samples happen with NONE of them over 25%
value. This would be a pwm disaster. And it WILL happen.
Games programmers use a more "sawtooth" random approach,
where EVERY possible sample must occur in a short time
period, so in one second EVERY possible value occurs a
lot of times. Taking a random (pseudo-random) sample
somewhere in there will provide a much more usable random
value than a pure random number.
This occurred to me straight away when the random pwm
idea was mentioned. You really don't want proper randomness
or you introduce problems, hence the discussions here
over the last couple of days. Using a high frequency
sawtooth style wave, sampled at an irregular frequency or the
appropriate aliasing frequency, will give a more predictable
and much more reliable result than a true random waveform,
which could be high or low for an unlimited number of cycles
based purely upon luck.
If the sawtooth frequency is significantly higher than
the sampling frequency, the result will be very random
with excellent weighting. If the sawtooth is generated in
hardware, even better.
For a timing-insensitive pwm I really think this would be
a better method. Yes, you have to deal with aliasing, but
is that a bigger problem than the alternatives?? I don't
think so... :o)
-Roman
PS. The preferred way to generate good pseudo random
numbers is to generate numbers between 0 and 255 (for
example) and place them in a 256 entry lookup table where
there is only ONE of each number, but the positions
are random. Then you can read 256 random numbers
in sequence and have perfect weighting. As a games
programmer I coded many pseudo-random generators and
testers, and this is a good place to start. Maybe a
search of games programming sites would reveal some
useful info. People think they are dopey kids, but
they are just as good at what they do as any aerospace
programmer.
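For what it's worth, the table Roman describes is a one-liner to build with a Fisher-Yates shuffle. A hedged sketch (Python; the seed is arbitrary, and a real games PRNG would likely be hand-rolled rather than library code):

```python
import random

def make_shuffle_table(seed=42):
    # A 256-entry table holding each value 0..255 exactly once, in
    # random positions. Reading it sequentially gives random-looking
    # numbers with perfectly equal weighting over any full pass.
    table = list(range(256))
    random.Random(seed).shuffle(table)  # Fisher-Yates shuffle
    return table

table = make_shuffle_table()
```

If the table is re-read cyclically, any 256 consecutive reads form a rotation of the whole table and so contain every value exactly once, which is the equal-weighting property Roman is after.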
> Just a quick question regarding the PRNG D/A converter. If I needed
> multiple D/A outputs, is anything stopping me from using just one PRNG, and
> comparing against multiple reference values?
I can't see why not.
Let's do a little thought experiment:
Assume we built a device that used a PRNG to generate a DA.
Now assume it works. :-)
Now assume we made 20 of them.
Now assume we started them all at the same time.
All 20 would be using the same stream of PRNG values at the same time,
right?
All 20 would be working fine, right?
Now, what would be the difference if we just happened to have one more
powerful machine doing all 20 of them?
Nothing, right?
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
software)
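Bob's thought experiment also checks out numerically. A rough sketch (Python; the reference values and sample count are made up): one shared stream of random samples, with each channel doing its own compare.

```python
import random

def multi_channel_pwm(samples, references):
    # One shared sample stream drives several 1-bit outputs: each
    # channel compares the SAME sample against its own reference.
    # Returns the duty cycle each channel produced.
    counts = [0] * len(references)
    for s in samples:
        for ch, ref in enumerate(references):
            if s < ref:
                counts[ch] += 1
    return [c / len(samples) for c in counts]

rng = random.Random(7)
samples = [rng.randrange(256) for _ in range(20000)]
duties = multi_channel_pwm(samples, [64, 128, 192])
# each duty cycle tracks its own reference: roughly 0.25, 0.5, 0.75
```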
> PS. The preferred way to generate good pseudo random
> numbers is to generate numbers between 0 and 255 (for
> example) and place them in a 256 entry lookup table where
> there is only ONE of each number, but the positions
> are random. Then you can read 256 random numbers
> in sequence and have perfect weighting. As a games
> programmer I coded many pseudo-random generators and
> testers, and this is a good place to start. Maybe a
> search of games programming sites would reveal some
> useful info. People think they are dopey kids, but
> they are just as good at what they do as any aerospace
> programmer.
This is really unneeded.
A well designed PRNG (say, one based on a shift register with feedback)
will, _by its very design_, generate every possible code in its range.
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
software)
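For the record, the full-period property is easy to demonstrate with a small Galois LFSR. A sketch (Python; 0x11D, i.e. x^8+x^4+x^3+x^2+1, is one standard primitive polynomial choice, and other primitive polynomials work too):

```python
def lfsr_step(state):
    # One step of an 8-bit Galois LFSR with the primitive polynomial
    # x^8 + x^4 + x^3 + x^2 + 1 (0x11D). Equivalent to multiplying
    # by x in GF(2^8), so it cycles through every nonzero state.
    state <<= 1
    if state & 0x100:
        state ^= 0x11D
    return state

def full_period(seed=1):
    # Collect one full period of states starting from `seed`.
    seen, s = [], seed
    for _ in range(255):
        s = lfsr_step(s)
        seen.append(s)
    return seen
```

Starting from any nonzero seed, the 255 successive states are exactly the 255 nonzero 8-bit values, after which the sequence repeats: Bob's "every possible code in its range" guarantee (zero itself is the one excluded state).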
> -----Original Message-----
> From: Bob Ammerman [SMTP:.....RAMMERMANRemoveMEPRODIGY.NET]
> Sent: Thursday, February 08, 2001 11:41 AM
> To: RemoveMEPICLISTspamBeGoneMITVMA.MIT.EDU
> Subject: Re: [EE]: high res DA challenge
>
> > PS. The preferred way to generate good psuedo random
> > numbers is to generate numbers between 0 and 255 (for
> > eaxample) and place in a 256 unit lookup table where
> > there is only ONE of each number, but the postions
> > are random. Then you can read 256 random numbers
> > in sequence and have perfect weighting. As a games
> > programmer I coded many psuedo-random generators and
> > testers, and this is a good place to start. Maybe a
> > search of games programming sites would reveal some
> > useful info. People think they are dopey kids but
> > they are just as good at what they do as any aerospace
> > programmer.
>
> This is really unneeded.
>
> A well designed PRNG (say, one based on a shift register with feedback)
> will, _by its very design_, generate every possible code in its range.
>
And therefore you should be able to calculate the lowest frequency component
in your PWM by knowing the longest run of 1's or 0's in the PRBS, and the
longest time between comparisons in your loop. (He said, hopefully...)
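The hope is justified: an n-stage maximal-length sequence contains exactly one run of n ones and one run of n-1 zeros per period, and nothing longer. So with an 8-bit PRBS and a worst-case loop interval Tmax, the lowest frequency content is on the order of 1/(8*Tmax). A quick check of the run property (Python, illustrative; 0x11D is one standard primitive polynomial):

```python
def lfsr_bits(n=255):
    # One full period of the LSB stream from a maximal 8-bit Galois
    # LFSR (primitive polynomial 0x11D) -- an m-sequence.
    bits, s = [], 1
    for _ in range(n):
        s <<= 1
        if s & 0x100:
            s ^= 0x11D
        bits.append(s & 1)
    return bits

def longest_run(bits):
    # Longest run of identical bits, treating the period as cyclic.
    doubled = bits + bits
    best = cur = 1
    for a, b in zip(doubled, doubled[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return min(best, len(bits))
```

For this stream the longest run comes out at 8 and the ones/zeros balance is 128/127, both textbook m-sequence properties.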
Bob Ammerman wrote:
>
> > PS. The preferred way to generate good pseudo random
> > numbers is to generate numbers between 0 and 255 (for
> > example) and place them in a 256 entry lookup table where
> > there is only ONE of each number, but the positions
> > are random. Then you can read 256 random numbers
> > in sequence and have perfect weighting.
>
> This is really unneeded.
>
> > A well designed PRNG (say, one based on a shift register with feedback)
> > will, _by its very design_, generate every possible code in its range.
>
> Bob Ammerman
Hi Bob, maybe I misunderstand you, but I still think
the point was valid. If you generate a random number
every time you update the pwm and the number is TRULY
random, it could be a disaster. You could generate the
same number an unknown number of times in a row.
This is the real problem with truly random numbers.
A pseudo random system (like I suggested) cannot generate
the same number twice, nor can it generate the same
number at all until all possible numbers have been read.
This gives an equal weighting to all numbers IN ANY
GIVEN TIME PERIOD, which is never achieved with a true
random number. A truly random system could generate
any number at any time; it could generate the same number
X times in a row and then not again for millions of
generations. Very bad if it is controlling your pwm. :o)
-Roman
> Just a quick question regarding the PRNG D/A converter. If I needed
> multiple D/A outputs, is anything stopping me from using just one PRNG, and
> comparing against multiple reference values?
> -----Original Message-----
> From: Roman Black [SMTP:TakeThisOuTfastvidspamEZY.NET.AU]
> Sent: Thursday, February 08, 2001 2:28 PM
> To: PICLISTEraseMEMITVMA.MIT.EDU
> Subject: Re: [EE]: high res DA challenge
>
> Bob Ammerman wrote:
> >
> > > PS. The preferred way to generate good pseudo random
> > > numbers is to generate numbers between 0 and 255 (for
> > > example) and place them in a 256 entry lookup table where
> > > there is only ONE of each number, but the positions
> > > are random. Then you can read 256 random numbers
> > > in sequence and have perfect weighting.
> >
> > This is really unneeded.
> >
> > A well designed PRNG (say, one based on a shift register with feedback)
> > will, _by its very design_, generate every possible code in its range.
> >
> > Bob Ammerman
>
>
> Hi Bob, maybe I misunderstand you, but I still think
> the point was valid. If you generate a random number
> every time you update the pwm and the number is TRULY
> random, it could be a disaster. You could generate the
> same number an unknown number of times in a row.
> This is the real problem with truly random numbers.
>
> A pseudo random system (like I suggested) cannot generate
> the same number twice, nor can it generate the same
> number at all until all possible numbers have been read.
> This gives an equal weighting to all numbers IN ANY
> GIVEN TIME PERIOD, which is never achieved with a true
> random number. A truly random system could generate
> any number at any time; it could generate the same number
> X times in a row and then not again for millions of
> generations. Very bad if it is controlling your pwm. :o)
> -Roman
>
What you suggest is absolutely correct, but it is not exactly a major
problem, as generating such a truly random number with just a PIC would
be pretty much impossible. A shift register PRNG is fast, easy to
implement, and seems to have the ideal characteristics for this application.
> Bob Ammerman wrote:
> >
> > > PS. The preferred way to generate good pseudo random
> > > numbers is to generate numbers between 0 and 255 (for
> > > example) and place them in a 256 entry lookup table where
> > > there is only ONE of each number, but the positions
> > > are random. Then you can read 256 random numbers
> > > in sequence and have perfect weighting.
> >
> > This is really unneeded.
> >
> > A well designed PRNG (say, one based on a shift register with feedback)
> > will, _by its very design_, generate every possible code in its range.
> >
> > Bob Ammerman
>
>
> > Hi Bob, maybe I misunderstand you, but I still think
> > the point was valid. If you generate a random number
> > every time you update the pwm and the number is TRULY
> > random, it could be a disaster. You could generate the
> > same number an unknown number of times in a row.
> > This is the real problem with truly random numbers.
Roman,
As I said, we were dealing with a PRNG (where P means "pseudo"). These
algorithms, given the same starting point, always generate the same sequence
and always generate each number exactly once.
Actually, it is rather difficult to get a TRUE random number generator in a
computer. The only real way is to base it on some completely external
stimulus.
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
software)
> No games programmer uses real randomness. You *absolutely
> will* get long strings of good or bad numbers based only
> on luck. Generating good pseudo-random numbers is an
> art, but the basis of that art is ensuring an EQUAL
> weighting of all samples. In other words, if you take 100
> samples, about 10 should be under 10%. About 10 should
> be from 10% to 20%, etc etc. This is vitally important
> in the real world.
In other words, you want white noise that is high-pass filtered.
> With a pwm dependent on the random number, it is quite
> possible that in 1000 samples (1 second?) all the
> random samples will be low.
No, quite improbable actually. If "low" is defined as below 1/2, then the
probability of any one number being low is 1/2. The probability of any 1000
numbers all being low is 2**-1000, which is so extremely improbable as to be
"impossible" in a practical sense.
However, I do agree with the point you are trying to make, which I believe
is that there is no guaranteed lower frequency bound in a truly random
signal.
> Using a high frequency
> sawtooth style wave sampled at irregular frequency or the
> appropriate aliasing frequency will give a more predictable
> and much more reliable result than a true random waveform
> which should be high or low for an unlimited number of cycles
> based upon luck.
I disagree. Since you are sampling the sawtooth at random times, the result
will be a random value. If you don't sample at regular intervals, then you
will get aliasing, which could again lead to arbitrarily low frequency
content. Throwing in the sawtooth (or any other waveform that contains all
levels equally) obscures what is going on a bit, but you end up with the
same situation.
You could sample your sawtooth using a Poisson interval method with a
properly chosen upper period. That would eliminate all low frequencies
below a cutoff. The power that would go into aliases with regular sampling
turns into white noise above the cutoff frequency. However this method
requires sampling the sawtooth at precisely determined times, which defeats
the purpose.
> If the sawtooth frequency is significantly higher than
> the sampling frequency, the result will be very random
> with excellent weighting.
As mentioned before, this gets you nowhere. If you sample regularly you get
aliasing. If you sample randomly, you are back to a purely random number
generator. This "sawtooth" concept just doesn't work.
> For a timing-insensitive pwm I really think this would be
> a better method. Yes, you have to deal with aliasing, but
> is that a bigger problem than the alternatives??
Yes. At least with white noise the average power in the low frequencies is
bounded. With aliasing the upper bound on low frequency power is higher,
and the lowest possible frequency is still not bounded.
> PS. The preferred way to generate good pseudo random
> numbers is to generate numbers between 0 and 255 (for
> example) and place them in a 256 entry lookup table where
> there is only ONE of each number, but the positions
> are random. Then you can read 256 random numbers
> in sequence and have perfect weighting.
This is no different from the sawtooth concept. This is just another
example of a periodic signal that contains all levels equally. The only
difference is that the order of the samples has changed, which makes no
difference to the way we are discussing using the signal.
>A pseudo random system (like I suggested) cannot generate
>the same number twice, nor can it generate the same
>number at all until all possible numbers have been read.
>This gives an equal weighting to all numbers IN ANY
>GIVEN TIME PERIOD which is never achieved with a true
>random number. A truly random system could generate
>any number at any time, it could generate the same number
>X times in a row and then not again for millions of
I took it that he was proposing a software PRNG that you called each time you
wanted a random number, rather than a free running PRNG. This would do the same
as your table approach, and may fit into less code space (if you include the
space taken by the table), while executing nearly as fast. The only advantage
with the table is that you could influence the order of values a lot more easily,
which may be more suitable for gaming purposes.
> > Hi Bob, maybe I misunderstand you, but I still think
> > the point was valid. If you generate a random number
> > every time you update the pwm and the number is TRULY
> > random, it could be a disaster. You could generate the
> > same number an unknown number of times in a row.
> > This is the real problem with truly random numbers.
>
> Roman,
>
> As I said, we were dealing with a PRNG (where P means "pseudo"). These
> algorithms, given the same starting point, always generate the same sequence
> and always generate each number exactly once.
> Actually, it is rather difficult to get a TRUE random number generator in a
> computer. The only real way is to base it on some completely external
> stimulus.
Thanks for clearing that up Bob. So we are talking
about the same thing. :o)
-Roman
Hmm, interesting idea... I'm not sure that the correction for the sampling
period is going to be very easy. It involves a multiplication of error by
period. The error and period values can occupy several bits, so the
multiplication can take some time. There is also potential for round-off
errors as a result of the multiplication. Could be okay, I don't know.
Measuring the period is also an interesting problem. If it should be
precise to one cycle, then the easiest way is to use TMR0 without the
prescaler, and update the DAC every 255-x...255 cycles (x is the
deviation range within which the DAC should be updated). That leaves
little room for the conversion itself and other tasks. So a 16 bit timer
looks like a more appropriate choice here, as it doesn't require such
frequent updates. But a 16 bit timer requires two RAM accesses, which
can be interrupted, and also more computation than an 8 bit timer.
Another option is to stick with TMR0 but use the prescaler. The
update will need to be synchronized to its ticks, though. By adjusting
the prescaler you effectively select the required DAC update period. Some
time is lost waiting for the timer tick (the maximum delay is one
prescaler period). Looks like a better compromise. 8 bit multiplication
is fast and small...
In case of a 1 bit DAC with 8 bit input this would look something like:
;in - 8 bit input
;error - 8 bit error
;timer_old - previous timer sample
;temp - temporary
;wait for the timer tick, for example 1:2 prescaler
waitTick
movf TMR0, w ;if timer LSb changed
xorwf TMR0, w ;at this instruction then jump out
skpnz
goto waitTick ;it takes maximum two loops to sync
;read timer, remember it, and find period
movf timer_old, w
subwf TMR0, w ;w = TMR0 - timer_old
movwf temp ;temp holds period
addwf timer_old, f ;timer_old' = timer_old+TMR0-timer_old = TMR0
;multiply period by error and divide by 128. We assume here an ideal
;period of 128 timer ticks (multiplication by 1). The period can be
;less than 128, but NOT MORE (to prevent overflowing).
movf error, w
btfss temp, 0
clrf error
clrc
btfsc temp, 1
addwf error, f
rrf error, f
clrc
btfsc temp, 2
addwf error, f
rrf error, f
clrc
btfsc temp, 3
addwf error, f
rrf error, f
clrc
btfsc temp, 4
addwf error, f
rrf error, f
clrc
btfsc temp, 5
addwf error, f
rrf error, f
clrc
btfsc temp, 6
addwf error, f
rrf error, f
clrc
btfsc temp, 7
addwf error, f ;note we have an 8 bit result in error
;now add corrected error to input
movf in, w
addwf error, f ;here error is updated
;if carry set output 1, if cleared output 0
movf SHADOW_PORT, w
andlw BIT_MASK^0xFF ;clear bit
skpnc
iorlw BIT_MASK ;set bit
movwf port ;write to port
;done!
Not terribly complex, might even work. Anyone wanna test? :)
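The time-weighted idea is also easy to sanity-check in simulation before debugging any more assembly. A rough Python model (the full-scale value and the irregular update intervals are made up; this sketches the algorithm, not the PIC code above):

```python
import random

def async_one_bit_dac(target, dts, full_scale=256):
    # At each (irregular) update, integrate the error between the
    # target and the level the pin actually held since the last
    # update, weighted by the elapsed time dt. Drive the pin high
    # whenever the running error is positive.
    acc = 0.0
    out = 0
    high_time = total_time = 0.0
    for dt in dts:
        acc += (target - out * full_scale) * dt  # time-weighted error
        high_time += out * dt
        total_time += dt
        out = 1 if acc > 0 else 0                # new pin level
    return high_time / total_time                # average pin level, 0..1

# Updates at random intervals still average out to target/full_scale:
rng = random.Random(1)
dts = [rng.uniform(0.5, 2.0) for _ in range(20000)]
duty = async_one_bit_dac(100, dts)
```

The accumulated error stays bounded, so the average pin level tracks target/full_scale no matter how irregular the update intervals are, which is the asynchronous property Walter describes.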
> There is a family of asynchronous D/A converter algorithms
> that are useful when peak execution cycles are at a premium
> but the average load is still less than the maximum available
> execution cycles.
> The random reference algorithm described a few days ago
> is the only one I know that is both asynchronous and doesn't
> need an accurate time reference.
> There are a couple of different ways that delta mod can be
> used. A few days ago Nikolai Golovchenko clearly described
> a technique that can be extended to be both asynchronous and
> have extended precision. Nikolai's algorithm makes a
> simple assumption: that each sample error is integrated
> over a normalized signal unit of time.
> If the error is integrated accounting for elapsed time, then
> each output sample can be asynchronous and the
> errors can still be compensated.
> This idea can be extended and used in two ways.
> First, the simplest D/A converter is a single bit
> (loosely, delta mod). This reduces extended precision
> to error management: more precision at increased
> processing requirements.
> Secondly, to have a stable D/A, most delta-mod
> D/As assume the output is periodic. By
> including time in the error calculations we can eliminate
> the need for tightly periodic outputs. There is usually a
> simple way this can be done. When we integrate
> errors over a normalized unit of time we are actually multiplying
> by 1. If we have a one-bit D/A, then the value we
> integrate is 1 multiplied by the amount of elapsed time. The elapsed
> time can be had by sampling a free-running timer and
> calculating the delta time with a subtract.
> The net result is an asynchronous software D/A using
> very simple code.
> Walter Banks
> Nikolai Golovchenko wrote:
>> An 8 bit DAC working at higher frequency. At each sample,
>> calculate the error between 12 bit input and 8 bit output
>> (output is taken as 8 higher bits of input). Error should
>> be calculated in relative units, 1=full scale. (It is
>> simply the lower bits of input). At the next sample, add
>> this error to the input. This will compensate for reduced
>> resolution in the long run. Works as some kind of PWM,
>> but doesn't need high oversampling, since you need to
>> add only 4 bits of resolution.