[PIC] RF pulse duration modulation encoding technique
I have a really neat idea that could save a lot of time and money on an RF
project (not to mention give a greater baud rate), but I don't know if it will work.
Lots of PICs, like the 16F877, have a feature where an interrupt can be
triggered when a signal on a pin goes high.
Pulse duration modulation is an RF modulation technique like Morse code,
except the info is encoded in how long the pulse is high or low.
Why don't we have an RF signal coming into an antenna, fed into a
rectifier, then into a low pass filter, then into a non-inverting
amplifier, so we get a 0-5 V DC signal. This signal would go
into a pin with an interrupt enabled, say RB0 for the PIC16F877.
The pulse is off most of the time and on for a short period of time. When
the pulse turns on, a timer in the pic starts counting up. The timer stops
and the value is recorded on the next pulse.
In this way all that you need to encode a number in a radio signal is two
pulses, rather than log2(n) number of bits (encoded in a modulation which
requires a DSP to decompose), where "n" is the maximum value of the
information. The info can be directly read by the PIC.
Is this feasible, impossible, already done, or missing a piece? I think it's worth a try.
I am not sure if I understood very well, but if you are counting from the
rising till the falling edge of the signal then it's called Pulse Width
Modulation (PWM), and if you have small spikes and you are counting the time
in between the spikes then it's called Pulse Position Modulation (PPM). Both
of them are widely used, however, for encoding analog data rather than
digital. For example, in radio controlled modeling you have many control
channels encoded in PWM, which are then encoded as PPM to transmit all channels
over the air, so each spike not only tells you the value of one control
channel but also marks the start of the next one. It's practically a
shift register decoding back to PWM.
I have made some devices for digitally filtering these signals, and all I
can say is that it is not so trivial to measure the very same length every
time. The slightest change in length, or just not catching the signal at
exactly the same moment (since you have a clock ticking, unlike an analog
circuit that responds almost instantly), means a counted value of 0xA5
becomes 0xA6 very easily. With a fully analog design everything is fine;
with digital capturing, however, you can hear a small noise from the servo
motor, which is annoying and consumes electricity. Your digital filter has
to take care of that.
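A minimal sketch of one such filter (just one of many possibilities, assuming three consecutive width samples are available) is a median-of-3, which rejects a single glitched sample like the 0xA5/0xA6 case above:

```c
#include <stdint.h>

/* Illustrative sketch: median of three consecutive pulse-width samples.
 * A single off-by-one or glitched capture is discarded rather than
 * passed on to the servo output. */
uint16_t median3(uint16_t a, uint16_t b, uint16_t c)
{
    uint16_t t;
    if (a > b) { t = a; a = b; b = t; }   /* sort a,b */
    if (b > c) { t = b; b = c; c = t; }   /* sort b,c */
    if (a > b) { t = a; a = b; b = t; }   /* sort a,b again */
    return b;                             /* middle value */
}
```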
Anyway, the standard pulse of such a radio system uses a 22 ms frame rate and
1-2 ms pulse width for each channel. I was using a 4 MHz FOSC and the 1 ms is
divided into 256 steps - which is quite low actually but enough for most
applications. Now if 1 byte takes 1 ms with such inaccuracy, I have some
doubts about the usage. If you think it over, 1 bit at 9600 bps takes 104 us;
this one takes 125... Maybe someone else has a different opinion?
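The arithmetic behind that comparison, as a quick sanity check (helper names are just for illustration):

```c
/* One 8-bit value per 1 ms pulse window works out to 125 us per bit of
 * information, versus roughly 104 us per bit for a plain 9600 bps
 * serial link. */
double pwm_us_per_bit(void)  { return 1000.0 / 8.0; }   /* 1 ms / 8 bits */
double uart_us_per_bit(void) { return 1e6 / 9600.0; }   /* ~104.2 us    */
```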
On Wed, Jun 25, 2008 at 12:36 AM, Zachary Noyes <znoyes@gmail.com> wrote:
This idea is feasible (as Tamas points out) and is done, but it is
very difficult to get the error rate down to as low as you can get
with binary modulation. That's why it tends to be used in analog comms
rather than digital.
This idea reminds me of one I had when I was about 13 years old. I
thought that we could send so much more data in a given bandwidth if
we would simply use analog voltage to indicate value. I also thought
that we could include "reference pulses" which were a known voltage to
compensate for changes in gain.
All of these ideas have the same problem, and that is that in a
real-world comms channel with noise, they become just as limited if
not more than the typical methods.
You should Google for Shannon-Hartley Theorem. This is a theorem in
information theory which tells you the maximum overall bitrate you can
transfer (after errors are corrected) through a communications channel
of a given bandwidth and signal to noise ratio, assuming additive
Gaussian noise. No matter what modulation scheme/error control coding
you use, you cannot do better than this limit. There are several
modern digital comms methods which get very close to this limit.
Most of the challenge in digital comms comes in dealing with
distortion (non-flat frequency response, delay which varies with
frequency and time, multipath, and nonlinear distortion) and
non-Gaussian noise (like spikes from lightning strikes or man-made
devices).
In general, bandwidth efficiency and noise rejection are at odds with
each other. In other words, complex schemes like multi-level phase
shift keying, quadrature amplitude modulation, etc. are good at
cramming as many bits per second into a small bandwidth, but require a
high signal to noise ratio to do so. Simple bandwidth-hogging schemes
like FSK (frequency shift keying) take up at least several Hertz of BW
per bit per second, but are more tolerant of lower signal to noise ratios.
Keeping total power fixed, and assuming that the noise is white so
that there is equal power per Hertz of bandwidth, then there comes a
point where increasing bandwidth doesn't buy you any more bits per
second. Spread spectrum modulation methods operate near this limit so
that they can achieve the greatest bit rate at the lowest signal level.
More complex digital modulation schemes are usually employed to deal
with the "challenging" items mentioned above, like multipath.
Multipath is when you have several paths which the signal can have
from TX to RX (like the direct path plus several reflections) and they
arrive out of phase with each other, distorting the signal. The lower
the symbol rate (the fewer changes per time in the signal), the less
effect multipath will have, so schemes which work well in multipath
often use parallel channels, like several different relatively slow
FSK channels close together which, combined, give a respectable bit rate.
It is interesting to note that all modulation schemes which approach
the Shannon-Hartley limit for bitrate per signal strength sound pretty
much like white noise (QAM, OFDM, and CDMA/DSSS are three prominent examples).
On Tue, Jun 24, 2008 at 7:36 PM, Zachary Noyes <znoyes@gmail.com> wrote:
Sean Breheny wrote:
> It is interesting to note that all modulation schemes which approach
> the Shannon-Hartley limit for bitrate per signal strength sound pretty
> much like white noise (QAM, OFDM, and CDMA/DSSS are three prominent examples).
Thank you very much for this explanation, Sean - I sometimes take a look
at (newer) modulation schemes, and it's good to know.
Zachary Noyes wrote:
> <long winded description of basic pulse width modulation snipped>
> In this way all that you need to encode a number in a radio signal is
> two pulses,
Actually one pulse according to your description. It looked like you were
encoding information in the width of each carrier burst.
> rather than log2(n) number of bits (encoded in a
> modulation which requires a DSP to decompose), where "n" is the
> maximum value of the information. The info can be directly read by
> the PIC.
> Is this feasible, impossible, already done, or missing a piece?
Yes, no, of course, perhaps.
I haven't looked at them in detail, but I think the RF signals for RC hobby
servos use exactly this scheme. This was all designed before anyone could
possibly put a computer at the receiving end. The pulse width was used by an
analog servo circuit to control the position of the motor. In this case,
the fact that this method has no guaranteed delivery, no error detection,
and low accuracy is partly irrelevant and partly made up for with
redundancy. A new value is sent about every 20 ms, so if one gets glitched
it won't have too much effect overall.
This kind of scheme works OK for transmitting single analog values where
close enough is close enough. It doesn't work well for transmitting
discrete digital information, like a credit card number. There you have to
get the right number. Being off by 1 isn't close enough.
Another drawback is lack of error detection. RF is always a noisy channel.
You should never assume any one packet makes it intact to the receiver. In
the case of discrete digital information, it is often important to know that
it was received correctly. For example, if you send a credit card number
you have to assume it sometimes gets garbled. In those cases it's
much better to know you didn't get anything usable and to ignore the whole
packet than to use the wrong credit card number. This is usually handled by
wrapping chunks of information in packets and adding a digital checksum to
each packet. (Credit card numbers actually contain some error detection
themselves, but that's not relevant to this discussion).
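As an illustrative sketch of the packet-checksum idea (a simple additive checksum; a real design would more likely use a CRC, which catches more error patterns):

```c
#include <stdint.h>
#include <stddef.h>

/* Two's-complement additive checksum: append this byte to a packet and
 * the receiver can verify that the sum of all bytes, checksum included,
 * comes out to zero.  Any single garbled byte makes the check fail. */
uint8_t checksum8(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return (uint8_t)(~sum + 1);   /* negate so the total wraps to 0 */
}
```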
Although you say this scheme is more efficient than sending the information
digitally, it's actually not for most cases. There will always be some
jitter on received demodulated edges due to channel noise. For sake of
simplicity, let's say that every edge is randomly received somewhere within
a 1 time unit window. The uncertainty of a received carrier pulse width is
therefore 2 time units (one for each edge). Let's say you wanted to
transmit a bunch of randomly chosen values with 8 bits resolution (1 part in
256). The maximum pulse width would need to be 512 time units for the
receiver to be able to distinguish 256 levels. If these are randomly chosen
values, then the carrier will be on for 256 time units per value on average.
Now let's compare this to a digital scheme, like Manchester encoding. One
way of looking at Manchester encoding is that data is sent in carrier bursts
(and gaps) of 1/2 bit or 1 bit time in length. Since we have a +-2 time
unit error on measuring the width of any pulse or gap, we need the pulses to
be 4 and 8 time units long to be able to distinguish between a long and a
short in the receiver. This makes each Manchester bit 8 time units long. Since
we are transmitting an 8 bit value, that requires 64 time units. The
carrier is on for 1/2 the time each bit, or 32 time units for an 8 bit value.
Even if we use twice that by adding a preamble for synchronization, we're
still at 1/4 the carrier time for sending just an 8 bit value compared to the
analog scheme.
For transmitting N bits of information, the analog scheme requires
transmitter power proportional to 2**N whereas for digital encoding it's
proportional to N. As you can see, even for a low value like N=8 the
digital scheme is already 4x more efficient.
Of course the two schemes aren't equivalent and have different properties. The
analog scheme fails gracefully if you are transmitting an analog value. The
digital scheme provides a means of knowing the data was received exactly
correctly, within some confidence probability, such as by adding a checksum.
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014. Gold level PIC consultants since 2000.