Very cool! But it's not a true sine wave, it's a "parabolic wave," if that's a
real term. One segment of a triangle wave is a line. The algorithm is an
integrator, and the integral of a line like y = x is y = (x^2)/2 + c, a
parabola (let c be your DC offset). Maybe someone else can tell us what
the error magnitude amounts to when comparing the "parabolic wave" to a sine?
--- Jim Hartmann <spam_OUTJim_hartmannTakeThisOuTSILENTKNIGHT.COM>
wrote:
> <snip>
I was wondering who'd drag this [OT] first.
I did a quick back-of-the-notebook calculation and
discovered what should've been intuitive. If you're
familiar with time-to-frequency transformations like
Fourier or Laplace, then you'll see why I say this.
Let's look at the way we've been discussing how to
approximate sine waves and their corresponding
harmonics. First we started with square waves. We know
that the harmonic strengths fall off inversely with the
harmonic number (and for symmetrical square waves,
there are no even harmonics). This inverse relationship,
1/s or 1/(jw), is a direct result of taking the
transform of a step function. The additional factors,
such as the harmonic locations, are determined by the
periodicity of the waveform.
When we went to triangle waves, we noted that the
harmonic strengths diminished as the square of the
harmonic number, 1/s^2 or 1/(jw)^2. Again, this is a
direct result of taking the transform of a line.
Continuing with this line of reasoning, we should
(intuitively) suspect that a wave created from
parabolas would have harmonic strengths diminishing
with the cube of the harmonic number, 1/s^3 or
1/(jw)^3. Guess what?
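Guess what indeed. Here's a short NumPy sketch of my own: integrate one
period of a triangle wave with a cumulative sum to get the "parabolic
wave," then compare how fast the harmonics fall off in each:

```python
# Sketch: integrating a triangle wave gives the "parabolic wave"; its
# harmonics should fall off as 1/n^3 (vs 1/n^2 for the triangle itself).
import numpy as np

N = 4096
t = np.arange(N) / N
tri = 1.0 - 4.0 * np.abs(t - 0.5)          # one period of a triangle wave
tri -= tri.mean()                          # remove any DC offset

par = np.cumsum(tri)                       # discrete-time integrator
par -= par.mean()

spec_tri = np.abs(np.fft.rfft(tri))
spec_par = np.abs(np.fft.rfft(par))
print(spec_tri[1] / spec_tri[3])           # close to 9  = 3^2
print(spec_par[1] / spec_par[3])           # close to 27 = 3^3
```

The integrator divides each harmonic by its frequency, which is exactly
the extra factor of 1/n that takes 1/n^2 to 1/n^3.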
I suppose one could continue with this reasoning to
suppress the harmonics even further. But (especially on
the PIC) you'll reach a point of diminishing returns.
This whole subject, BTW, falls into the category of
polynomial approximation. So far we've been using
really simple polynomials. In fact, to tie in with an
earlier observation, so far these polynomials are like
FIR filters. If one wanted to create the analogue
(perhaps an inappropriate adjective) IIR filter, then
perhaps rational Pade' approximations would be of some
use.
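For the curious, here's what a rational approximation buys. This is the
known [3/2] Pade approximant of sin(x); the helper name is mine, for
illustration only. Its Taylor expansion matches the sine's through x^5,
at the cost of one extra divide:

```python
import math

# [3/2] Pade approximant of sin(x): (x - 7x^3/60) / (1 + x^2/20).
# Matches the Taylor series of sin through the x^5 term.
def sin_pade(x):
    return (x - 7.0 * x**3 / 60.0) / (1.0 + x**2 / 20.0)

for x in (0.5, 1.0, 1.5):
    print(x, sin_pade(x), math.sin(x))   # agrees within a few parts per thousand
```

On a PIC with no hardware divide this may not be a win, but it shows the
polynomial-to-rational step the post is hinting at.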
.lo
Along the lines of this thread: can someone please tell me the difference
between FIR and IIR filters? I am familiar with impulse response for
continuous-time systems, and with transfer functions, but I have not yet
covered IIR versus FIR in class, nor have I read any explanation of it yet,
so I am curious. Is it just that the impulse response is time-limited?
> <snip>
FIR filters use only input signal samples to compute the next output. The
difference equation for an FIR filter is:
y(n) = b0 * x(n) + b1 * x(n - 1) + b2 * x(n - 2) + ...,
where y(n) is the n'th sample of the output
and x(n) is the n'th sample of the input.
You can see from the FIR formula that its impulse response is indeed
time-limited, because the output is based on a finite number of input
samples: if the input becomes zero, the output will go to zero sooner or
later. The length of the FIR filter's buffer defines the frequency and
magnitude resolution that the filter can provide. For example, a low-pass
filter with a pass-band edge frequency of 100 Hz (0 dB) and a stop
frequency of 150 Hz (-60 dB) at 44100 Hz sampling requires a buffer of
about 1400 taps. FIR filters can also have an exactly linear phase
response.
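To make that difference equation concrete, here is a minimal sketch of a
direct-form FIR filter (plain Python; the names are mine):

```python
# Direct-form FIR: y(n) = b0*x(n) + b1*x(n-1) + b2*x(n-2) + ...
def fir_filter(b, x):
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:            # samples before the input started are zero
                acc += bk * x[n - k]
        y.append(acc)
    return y

# 4-tap moving average (all coefficients equal) -- a crude low-pass
out = fir_filter([0.25] * 4, [1.0] * 4 + [0.0] * 4)
print(out)   # [0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25, 0.0]
```

Note that once the input goes to zero, the output is guaranteed zero
within len(b) - 1 samples: that's the "finite" in FIR.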
An IIR filter's output depends on both input and output samples. IIR
filters resemble analog filters in this respect and are often synthesized
from an analog model. IIR filters generally require less memory, but they
are harder to implement: they have problems with stability and coefficient
resolution. A good idea is to break the whole filter into second-order
sections. The response of a practical IIR filter may actually be finite
because of the limited coefficient word length. Sometimes the response is
made infinite intentionally, for example to generate sine waves.
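A hedged sketch of one such second-order section (a "biquad", direct
form I). The coefficients below are illustrative only, chosen so the
poles sit inside the unit circle; they are not a designed filter:

```python
# One second-order IIR section (biquad, direct form I):
#   y(n) = b0*x(n) + b1*x(n-1) + b2*x(n-2) - a1*y(n-1) - a2*y(n-2)
def biquad(b0, b1, b2, a1, a2, x):
    y = []
    x1 = x2 = y1 = y2 = 0.0                # filter state (delay elements)
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

# Feed in a single impulse: the output keeps ringing long after the
# input has gone to zero -- the "infinite" in IIR.
impulse = [1.0] + [0.0] * 19
out = biquad(0.1, 0.2, 0.1, -1.5, 0.7, impulse)
print(out)
```

With the feedback terms a1 and a2 the output is a slowly decaying
oscillation, which is why stability and coefficient precision matter so
much more here than for an FIR filter.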
If you want more information on digital filters, check out the Texas
Instruments and Analog Devices web sites. They have lots of information on
this topic.
_
Thanks for that information; it was the best short-and-sweet
explanation of IIR and FIR filters I have ever heard.
I think too many people get lost in the theoretical mathematics which
derive the IIR and FIR behaviors from the similar continuous time
filters. Many university courses approach this from continuous time
circuits, then Fourier series, then Fourier transforms, then sampling,
then Z transforms, then digital filters. By that time many are
confused!
But you can also explain it "backwards": start by learning about
sampling, then talk of averaging samples (FIR, equal coefficients,
right?!), then of weighted averages (FIR, unequal coefficients), and so
on. Then go on to feedback and IIR techniques. This makes the
practical digital techniques clear. Of course, understanding the
design of a complex (or excellent) digital filter brings us back to
Z transforms some day. But it makes more sense to me when I
remember to start at sampled data and averaging!
------------
Barry King, Engineering Manager
NRG Systems "Measuring the Wind's Energy" http://www.nrgsystems.com
Phone: 802-482-2255
FAX: 802-482-2272
Absolutely true. Digital filters captured my imagination long before the
Control Theory course, and I started to learn how they work "backwards", as
you say, from sampling, ADC, DAC, etc. So I can say now it is really easier
to study digital signal processing from simple examples like averaging,
integration, and differentiation... Unfortunately, I haven't yet had a
chance to work with DSPs, so I am limited to simulation only. MATLAB is a
terrific tool for this.
By the way, I'm glad that you liked my very short explanation, because
sometimes I'm too short to be understood :o)
_