PICList Thread
'Data Logging'
1997\11\03@064100 by Jon

Hi

I am working on a final year project for my degree,
using a PIC controller as a data logger.  At the
moment a predefined amount of data is logged for any
given session time.  What I want to do is allow the
system to be run for any length of time and still have
the same amount of data logged (e.g. 60 points).

Does anybody have an algorithm, or know where to find
one, that dynamically junks data as time goes on, such
that I have a set of data points fairly equally spaced
in time?
---------------------------------------
TRESADERN J.M
tc801404@stmail.staffs.ac.uk

1997\11\04@205938 by Ron Kreymborg

This is an intriguing problem: maintain a fixed-size buffer containing
a set of n values, equally spaced in time, that best represent the events
over a time period that is continually increasing.

To ensure there are always n values present (the device could be asked to
dump its contents at any time) and that they faithfully represent the
input time series means the n values must be "adjusted" as each new sample
arrives. The adjustment would have less and less effect as it moves back
in time through the buffer. It also means the sample period must slowly
increase to match the elapsed time period, or else the samples are
accumulated for an increasing time and then a mean taken before being
added to the buffer. While the sample-period adjustment and the mechanics
of the buffer update are not hard, choosing the "adjustment" algorithm for
each value is.  I am exploring some interpolation and averaging methods,
but comparing the buffer contents against an actual n samples of the total
time series at the end of the simulation shows significant errors. Anyone
else got any ideas?

Ron


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ron Kreymborg                   Systems Administrator
Monash University               CRC for Southern Hemisphere Meteorology
Wellington Road
Clayton, VIC 3168               Phone     : 061-3-9905-9671
Australia                       Fax       : 061-3-9905-9689
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1997\11\05@150719 by Eric van Es

Ron Kreymborg wrote:

> This is an intriguing problem.

Correct!


Ok - in a flash of lightning I saw something that MIGHT be an idea at least.
Possibly it will be totally inadequate for this application.

You have 60 samples, right? Say one every second for starters. Now after,
say, 60 minutes you want a new set (I'm jumping some time here <G>). Take
the average of the first set of 60 and put that in position 1 (the oldest
value). The second set of 60 readings is averaged and put in position 2.
And so on, until you've got a set of 60 values again. Now the time between
each sample is up to 60 seconds from 1 second. How the heck you are going
to do this with the PIC will be _VERY_ interesting! You'll probably need a
lot of processing going on....
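
A factor-of-2 variant of this idea might look like the following C sketch:
whenever the buffer fills, average adjacent pairs in place and double the
accumulation period, so each slot always holds the mean of one equal-length
stretch of time. Untested, and the names are made up:

    #define N 60
    static int  buf[N];
    static int  used  = 0;      /* slots filled so far           */
    static long block = 1;      /* raw samples averaged per slot */
    static long acc   = 0;      /* running sum for current block */
    static long got   = 0;      /* raw samples in current block  */

    void log_sample(int s)
    {
        acc += s;
        if (++got < block)
            return;
        buf[used++] = (int)(acc / block);   /* store the block mean */
        acc = 0;
        got = 0;
        if (used == N) {        /* full: merge pairs, double the block */
            int i;
            for (i = 0; i < N / 2; i++)
                buf[i] = (buf[2 * i] + buf[2 * i + 1]) / 2;
            used   = N / 2;
            block *= 2;         /* each slot now spans twice the time */
        }
    }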

ELSE
if you just want new values for a set period - do something like the rrf
instruction. Take position2, move it into position1; pos3 into pos2, pos4
into pos3, etc.
Your timing will probably be tricky, and you'll need a buffer to temporarily
store new values while you shift the old ones through.
ELSE
You could write each value to the address of the oldest value, overwriting
it, so the next new value will overwrite the second-oldest value..... This
will take some thought to keep track of where your data is....
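
In C, that last scheme is just a circular buffer. A minimal sketch
(untested; names are made up):

    #define N 60
    static int buf[N];
    static int head = 0;            /* index of the oldest value */

    void put(int s)
    {
        buf[head] = s;              /* overwrite the oldest value  */
        head = (head + 1) % N;      /* its neighbour is now oldest */
    }

    /* To dump in time order: buf[head], buf[(head + 1) % N], ... */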

This is probably garbage, but maybe it serves as some inspiration...

Cheers!


Eric van Es               | Cape Town, South Africa
vanes@ilink.nis.za | http://www.nis.za/~vanes
LOOKING FOR TEMPORARY / HOLIDAY ACCOMMODATION?
http://www.nis.za/~vanes/accom.htm

1997\11\05@163024 by Scott Newell

>This is an intriguing problem. To maintain a fixed size buffer containing
>a set of n equally spaced in time values that best represent the events
>over a time period that is continually increasing.


Assume we start with a buffer of fixed size and a sample rate of 1 Hz.

Fill the buffer in order (0, 1, 2, ...).

When you hit the end of the buffer, switch to a sample rate of 1/2 Hz
(every other data point).  But this time, overwrite every other location
in the buffer (0, 2, 4, 6, ...).

When you hit the end of the buffer, switch to 1/3 Hz, and overwrite every
third location.

I'd probably set it up as a circular buffer, and increment modulo buffer
length at the wrap around.

Of course, the data won't be consecutive once you start overwriting, but I
think this scheme will keep the time between samples evenly spaced.
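
In C, the overwrite passes might look something like this (an untested
sketch; read_sample() and wait_seconds() stand in for whatever the logger
actually does):

    #define N 60
    static int buf[N];

    extern int  read_sample(void);
    extern void wait_seconds(int s);

    void run_logger(void)
    {
        int stride = 1;                     /* pass k samples at 1/k Hz */
        for (;;) {
            int i;
            for (i = 0; i < N; i += stride) {
                buf[i] = read_sample();     /* overwrite every stride-th slot */
                wait_seconds(stride);
            }
            stride++;                       /* next pass: slower and sparser */
        }
    }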


To keep the data in consecutive order looks to be a little tougher.  I
guess instead of overwriting, you could shift all the data points from the
insertion point to the end of the buffer forward one spot.  Then stick the
newest data at the end of the buffer.  But then to find the next insertion
point, you'll need to subtract the number of 'data shifts' since the last
buffer wrap around.  I think.  (Probably should have written up a little
C++ simulation before shooting off my mouth on this one...)


newell

1997\11\05@200042 by Mike Keitz

On Wed, 5 Nov 1997 10:12:56 +0200 Eric van Es <vanes@ILINK.NIS.ZA>
writes:

>> To maintain a fixed size buffer containing
>> a set of n equally spaced in time values that best represent the
>> events over a time period that is continually increasing.
>>
>> To ensure there are always n values present (the device could be
>> asked to dump its contents at any time) and that they faithfully
>> represent the input time series means the n values must be
>> "adjusted" as each new sample arrives.

There are methods for "resampling", for example taking audio samples at
44.1 kHz from a CD and "adjusting" them so they are ready to record on
digital tape at 32 kHz.  For the case of the new sample rate being
exactly double or exactly half (or 3X, etc.), it's very simple.  But if
the rates don't match well, then it gets complicated, and strange aliasing
effects leading to loss of quality (beyond what just reducing the sample
rate does) can be expected.

So if you have 60 samples in the buffer and resample them at a rate of
59/60, generating 59 new samples, there will be room for one new sample
in the buffer.  The resample process would have to be done for each new
sample, but this would guarantee there would always be  either 59 or 60
samples in the buffer.  If you can tolerate not having all 60 at any
given time, then the resample could be done every 5 samples and so on.
The computation would be real easy if you could accept a 1/2 (30/60)
resample; this would halve the sample rate every 30 samples (after the
first 60).  But the buffer may have as few as 30 samples if the process
is stopped immediately after a resampling.

I'm not familiar with exactly how resampling works.  I suppose one simple
method would be to do "weighted interpolation", based on where the new
sample points land among the existing ones.  For example, if a new sample
point lands at 1.75 (3/4 of the way between existing samples 1 and 2),
take 3 parts of sample 2 and 1 part of sample 1, and divide the result by
4.  This won't work very well, but it's a start.
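
That weighted interpolation is easy to try in C. A sketch (untested;
floating point for clarity - a PIC version would use scaled integers):

    /* Shrink n samples to m (m < n, m >= 2) over the same time span by
       linear interpolation between the two nearest old samples. */
    void resample(const int *in, int n, int *out, int m)
    {
        int j;
        for (j = 0; j < m; j++) {
            double p    = (double)j * (n - 1) / (m - 1); /* landing point */
            int    k    = (int)p;
            double frac = p - k;
            if (k >= n - 1)
                out[j] = in[n - 1];         /* endpoint maps exactly */
            else
                out[j] = (int)((1.0 - frac) * in[k]
                               + frac * in[k + 1] + 0.5);
        }
    }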

Techniques for resampling must be discussed heavily in DSP literature,
where it is needed in many situations.  Doubtless there are some web
pages about it, or paper pages in the university library.

1997\11\05@200850 by Bob Lunn



> Techniques for resampling must be discussed heavily in
> DSP literature, where it is needed in many situations.
> Doubtless there are some web pages about it, or paper
> pages in the university library.

    I seem to recall that these techniques are
    called 'warping' when applied to voice.

    This may be a useful term on which to search.

___Bob

1997\11\05@205828 by TONY NIXON 54964

If I understand the question properly then I'm not sure that this problem
can be solved accurately.

For example, if I run the logger for 1 second then I expect to have
60 samples spaced 16.7 ms apart.

If I run the logger for 1.4 seconds then I expect to have 60 samples
spaced 23.3 ms apart.

If I run the logger for 2 seconds then I expect to have 60 samples
spaced 33.3 ms apart.

etc

How could you maintain an even sampling rate relative to something that
is not constant - in this case, time?

If the operator can turn the logger off at ANY given time, how could
there be 60 samples waiting for you when the logging interval was
never known?

If I start off with a 60-sample time frame of 100 ms, what is the next
time frame?  101 ms, 110 ms, 200.000054 ms, double the first?

If I discard old samples that do not fit the current sample rate, they
are lost forever - but I may need some of them to match a future sample
rate, or I may not. It sounds like an exploding database may form here,
with the excruciating task of sorting it out on the fly.

Tony


Just when I thought I knew it all,
I learned that I didn't.

1997\11\05@224351 by Steve Baldwin

> If I understand the question properly then I'm not sure that this problem
> can be solved accurately.

I agree, depending on how you interpret the question.
As far as I can see, you need to be oversampling to some extent, and how
much would depend on both the resolution of the result you want and the
step size of this continually moving window.

If the window size can be upped by a factor of 2 each time, then one
approach would be to use a buffer of 1.5n elements, where n is the number
of samples you want to report.
e.g. for a history of 60, the buffer would need to record 90 samples.
Starting at buffer[0], sample at rate 1 until you get to the end and then
sample at rate 2 (half the rate of 1) with the results overwriting the old
data. Keep doing this until the end of the universe.

You can stop at any time and always have a valid set of results. If the
current buffer pointer is "i" then the first result to report is
buffer[i+1]+buffer[i+2] (if i is odd, it would have to be offset by one
member). When you reach the end of the buffer reporting 2 at a time, start
at the top but only increment 1 at a time until you get back to the
original value of "i".

I think this would work. I haven't tried it.

Steve.


1997\11\06@035718 by Andrew Warren

TRESADERN J.M <tc801404@stmail.staffs.ac.uk> wrote:

> I am working on a final year project for my degree, using a PIC
> controller as a data logger.  At the moment a predefined amount of
> data is logged for any given session time.  What I want to do it
> allow the system to be run for any length of time and still have the
> same amount of data logged (e.g 60 points).
>
> Does anybody have an algorithm or know where to find one, that
> dynamically junks data as the time goes on, such that I have a set
> of data with fairly equal spaces in time between them.

Jon:

Jennifer Wilson and I have come up with the following:

   1.  Set INTERVAL = 1 and take enough samples (at one sample
       every INTERVAL seconds) to fill your buffer.

   2.  Set N = 1.

   3.  Wait INTERVAL seconds, then take a sample.

   4.  Throw out the sample at position [N+1], then shift all the
       following samples left one position, so you end up with an
       empty spot at the end of your buffer.  Store your newest
       sample in that empty spot.

   5.  If N = 1 then INTERVAL = INTERVAL * 2.

   6.  N = N + 1.

   7.  If N > (buffer size / 2) then go to Step 2.  Otherwise, go
       to Step 3.

Using a 10-sample buffer for simplicity, this algorithm produces the
following:
                                                   Average Time
   Time             Buffer Contents                Between Samples
   ----    --------------------------------------- ----------------
     10      1   2   3   4   5   6   7   8   9  10      1
     11      1   3   4   5   6   7   8   9  10  11      1.111
     13      1   3   5   6   7   8   9  10  11  13      1.333
     15      1   3   5   7   8   9  10  11  13  15      1.555
     17      1   3   5   7   9  10  11  13  15  17      1.777
     19      1   3   5   7   9  11  13  15  17  19      2
     21      1   5   7   9  11  13  15  17  19  21      2.222
     25      1   5   9  11  13  15  17  19  21  25      2.666
     29      1   5   9  13  15  17  19  21  25  29      3.111
     33      1   5   9  13  17  19  21  25  29  33      3.555
     37      1   5   9  13  17  21  25  29  33  37      4

     etc...

With this algorithm:

   The buffer is always full.

   The first sample in the buffer is always the very first one you
   took.

   The last sample in the buffer is always the most-recent one you
   took.

   All the samples are in the correct order.

   The distance between samples is always as close as possible to
   equal.

   No extra storage is required.

   There's no "processing":  Each of the samples in your buffer is
   accurate; none are averaged or modified in any way.
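
In C, the even-buffer version comes out to just a few lines. This is an
untested sketch (the caller is assumed to wait INTERVAL seconds between
calls, per Step 3):

    #define SIZE 60                     /* must be even */
    static int  buf[SIZE];
    static int  count    = 0;           /* filled so far (Step 1)      */
    static long interval = 1;           /* seconds between samples     */
    static int  n        = 1;           /* decimation pointer (Step 2) */

    void store(int sample)              /* call every 'interval' seconds */
    {
        int i;
        if (count < SIZE) {             /* Step 1: fill the buffer */
            buf[count++] = sample;
            return;
        }
        for (i = n; i < SIZE - 1; i++)  /* Step 4: drop position N+1, */
            buf[i] = buf[i + 1];        /* shift left...              */
        buf[SIZE - 1] = sample;         /* ...append the new sample   */
        if (n == 1)                     /* Step 5: double the interval */
            interval *= 2;
        if (++n > SIZE / 2)             /* Steps 6-7: advance and wrap */
            n = 1;
    }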

-Andy

P.S.  The algorithm is for buffers with an even number of slots.
     If your buffer-size is odd, you need to modify the algorithm
     slightly, as follows:

         1.  Set INTERVAL = 1 and take enough samples (at one
             sample every INTERVAL seconds) to fill your buffer.

         2.  Set N = 1.  INTERVAL = INTERVAL * 2.

         3.  Wait INTERVAL seconds, then take a sample.

         4.  Throw out the sample at position [N+1], then shift all
             the following samples left one position, so you end up
             with an empty spot at the end of your buffer.  Store
             your newest sample in that empty spot.

         5.  N = N + 1.

         6.  If N > (buffer size / 2) then go to Step 2.
             Otherwise, go to Step 3.

P.P.S.  If anyone's interested, I have a QuickBASIC program that
       nicely illustrates the operation of the algorithm... Request
       it in PRIVATE e-mail and I'll send you a copy.

=== Andrew Warren - fastfwd@ix.netcom.com
=== Fast Forward Engineering - Vista, California
=== http://www.geocities.com/SiliconValley/2499

1997\11\06@114816 by Wayne Foletta

Hi Data Logger Coders:

I've been following the thread on the 60 sample evenly spaced - but my
question is "Why evenly spaced samples?". What's the basic reason for
sampling in this manner? If the intent is to reconstruct a signal
waveform, then by Nyquist theory, increasing the time between samples
reduces the reconstructable waveform bandwidth in direct proportion.
So what is the importance of even samples? Have I
missed a post on the basics? Who or what is the signal observer and what
is the signal source?

Mike Keitz is right on the fundamentals of simple resampling - linear
interpolation does work. You only need to use more complex DSP routines
if you want to account for sampling aperture (sinc 1/fs rolloff) and
sample resolution. If the human eye is the observer, 8 bits and linear
resampling is transparent. However, if it is the human ear, 8 bits and
linear resampling sounds good only if the original signal is oversampled
by a factor of 10 or more (CD playback chips use 16x or more with 6+
order elliptic DSP filters for 16-bit hi-fi).

PS: The term 'warping' when applied to voice or other sampled signals
usually refers to changing the time or pace of the signal without a
frequency shift or pitch change.

-Wayne


1997\11\06@182940 by Ron Kreymborg

It would be interesting to hear from Jon more about his application and
what he intends to sample.  My interest was simply in developing an
algorithm that gave evenly spaced samples at the highest resolution
possible for the period concerned, without any special data logger setup.
"Evenly spaced" because that was difficult to do but also because it
simplifies subsequent analysis (I am talking here about environment
variables). Andrew & Jennifer show a method where this requirement is
relaxed.

Ron


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ron Kreymborg                   Systems Administrator
Monash University               CRC for Southern Hemisphere Meteorology
Wellington Road
Clayton, VIC 3168               Phone     : 061-3-9905-9671
Australia                       Fax       : 061-3-9905-9689
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1997\11\10@175625 by Marc Heuler

Hi Mike (Mike Keitz), replying to your message
<19971105.200527.2638.13.mkeitz@juno.com> of Nov 5:

I don't have a solution for the above problem, but maybe I can add some
thoughts.

The digital representation of an analog signal suffers an effect called
IMAGING.  The spectrum -fs/2 to +fs/2 (with fs being the sampling frequency
or 1 Hz in this thread) is repeated in each fs interval.

This happens because there is an unlimited number of waveforms that
share the very same discrete values that you read from the sampler.

Here's an analog signal spectrum.  Don't get disturbed by the negative
frequency side:

                  ^
                  |
                 /|\
                / | \
               /  |  \
              |   |   |
              |   |   |
----+----+----+----+----+----+----+----> frequency axis
             -B   0   B

(B is the bandwidth of the signal).  Here's its digital representation
after sampling it with sampling frequency fs (>2B).

                  ^
        .         |         .
\       / \       /|\       / \       /
\     /   \     / | \     /   \     /
 \   /     \   /  |  \   /     \   /
  | |       | |   |   | |       | |
  | |       | |   |   | |       | |
----+----+----+----+---B+----+----+----> frequency axis
      -fs  -fs/2  0   fs/2  fs


As you can see, those image spectra are spaced at fs Hz intervals in the
frequency domain.

The analog signal above is bandlimited.  Its bandwidth B is lower than half
the sampling rate fs.  This fact (2B<fs) is called the Nyquist criterion.

If the signal had more bandwidth (2B>fs), the image spectra would _overlap_
each other and the signal of interest, as their width grows with the signal
width!  Overlapping of the image spectra and the signal of interest is
called ALIASING.

Once you have sampled the signal with fs too low, the aliasing noise
(overlapping spectra) cannot be removed anymore.  Frequency analysis will
lead to erroneous results and detect tone energy where there was none in
the original analog signal (i.e. find low-frequency energy when there were
only high-frequency components present).


The original poster probably does not care about this effect and just
samples his data points in.  Doing otherwise would mean using an analog
0.5Hz cutoff lowpass filter (fs=1Hz, Bmax=fs/2=0.5Hz).  I don't know if
such a thing exists in hardware :-) Note that the filtering must be done
in the ANALOG part of the circuit.


In the DSP literature there are methods to resample data without loss of
quality.  But these very much relate to the imaging effect.  If the signal
has been sampled with aliasing noise, they won't give the expected result.


Resampling has to be done in integer steps.  In this thread's scenario you
would upsample x59 and then downsample x60.  Upsampling is called
INTERPOLATION and is done by inserting an appropriate number of zeros
between the samples to create a new datastream of sample rate fs_new (i.e.
take one sample of the old signal, then insert 58 zeros, then take the next
old sample, and insert another 58 zeros).

The zeros have two other effects besides raising the sample rate:  They
lower the new signal's energy per unit time.  And they induce noise - or,
to be more specific, they move fs_new high above the image spectra, so
some of the images are now below the Nyquist frequency (fs_new/2) and
therefore belong to the signal (bandwidth B_new = fs_new/2).

You can remove the unwanted images by lowpass filtering the new signal.
This filter is called an INTERPOLATION FILTER.  The filter will replace the
zeros with new sample values that are perfectly fine with the Nyquist
criterion.


After this has been done, you DECIMATE the signal by 60 - by picking every
60th sample value from the stream to build a new one.  Wait, I hear you
say, wouldn't this cause aliasing noise?  The answer is yes.  Therefore
decimation also requires a filter, the DECIMATION FILTER.  It limits the
bandwidth of the signal to fs_final/2, so the new spectrum is fine with
the new sample rate (2B_final<fs_final).  This has to be done BEFORE
decimating, as aliasing noise can't be removed once it's there.


There's much room for optimization.  For example, you can combine the
interpolation filter and decimation filter into one.  You don't need to
calculate each of the x59 upsampled data points, as you would discard most
of them during decimation anyway.  Also, picking other frequencies (i.e.
with common divisors) can reduce CPU time.
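
As a C sketch of that last optimization (untested; h[] is assumed to be an
FIR lowpass serving as the combined interpolation/decimation filter): only
the output points that survive decimation are computed, and the zero-stuffed
taps are skipped, since only every L-th tap of the upsampled stream is a
real sample.

    #define L 59        /* interpolation (upsample) factor */
    #define M 60        /* decimation (downsample) factor  */

    /* One output point of an L/M rational resampler.  The zero-stuffed
       stream xup[] is never built: xup[i] equals x[i/L] when i is a
       multiple of L and zero otherwise, so only those taps contribute. */
    double resample_point(const double *x, int nx,
                          const double *h, int nh, int j)
    {
        double acc = 0.0;
        int k;
        for (k = 0; k < nh; k++) {
            int i = j * M - k;          /* tap index into virtual xup[] */
            if (i >= 0 && i % L == 0 && i / L < nx)
                acc += h[k] * x[i / L];
        }
        return acc * L;  /* make up for energy lost to zero-stuffing */
    }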


Now I have written a lot of theory but still can't answer the original
question.  If the signal had been correctly bandlimited when converting
from analog to digital, the above method would work.  You could then
correctly resample the data each time a new sample is acquired.

Note also that fs lowers as more and more data points are read (as the
fixed-length interval represents a larger amount of time).  No new data
from the sampler is allowed to violate the Nyquist rule, so the
anti-aliasing filter must have a variable cutoff frequency (one that
decreases as time goes by).

You could put this into a circuit by using a fixed high-speed (with
respect to your data point frequency) sampling rate with a fixed analog
filter, and digitally lowpass filtering the signal (with variable cutoff)
before forwarding it as a new sample to the variable-fs function.

I'm sure this would work just fine, but I think this requires more effort
and hardware parts than the original poster wants to invest.


Just my 2 Pfennig.


PS: The imaging problem is real and exists in all digital equipment. All
   DA-converters will either exhibit the images, or contain an analog
   low pass filter that attenuates energy above fs/2. This is called an
   ANTI-IMAGING FILTER or RECONSTRUCTION FILTER.

   Better DA-converters have this filter built-in. The others require you
   to live with the problem or add external filters (such as RC filters or
   one of those 8 pin DIL filter ICs).




PPS:  Another solution would be to allow only a limited time of operation.
     Then one could just take the intermediate sample points that belong
     to other data frames from the sampler directly.  This requires high
     sampling rate and lots of storage memory for long time of operation.

1997\11\10@183552 by Frank Richterkessing

Marc Heuler wrote (in addition to some really good stuff on digital
sampling):

> The original poster probably does not care about this effect and just
> samples his data points in.  Doing otherwise would mean using an analog
> 0.5Hz cutoff lowpass filter (fs=1Hz, Bmax=fs/2=0.5Hz).  I don't know if
> such a thing exists in hardware :-) Note that the filtering must be
> done in the ANALOG part of the circuit.
>
Actually Marc, such things do exist. I work with a bunch of Krohn-Hite
3988 two-channel programmable filters here that have a range of 0.003 Hz
to 1 MHz in lowpass mode, 0.003 Hz to 300 kHz in highpass. They're 8-pole
Butterworth/Bessel selectable. The filter sections themselves are pure
analog. In addition to input and output amp sections, they have four
quadratic filter sections that do the filtering work. They use DACs
controlling FETs as variable R's and relay selected capacitors and have
(in general) 3 digit frequency resolution. Extremely low noise. They
work quite well! For the history buffs, I also work with GenRad 1952As
which do (I think) about 0.1 Hz to 100 kHz, 6-pole Butterworth (I think).
Old enough to have Op-amps made from discrete germanium transistors!

-Frank


Frank Richterkessing
Experimental Methods Engineer

FRANK.RICHTERKESSING@APPL.GE.COM
