PICList Thread
'[PIC]: Is 632 a "magic number" for PICs ?'
2005\06\17@140822 by Roy J. Gromlich - PA

I am stumped by this one - any ideas will be appreciated.

We have a PIC-based device which is ALMOST working correctly, except for one little thing. We have an 18F452
- its UART is connected to a modem ( wired or radio) and
passes every byte through via SSP to a Maxim3100 UART
chip.
On the way through the PIC every byte is checked to see
if it is the beginning of a command string - if so the bytes
are diverted to a buffer to be parsed when the command terminates.  
In addition, every byte received by the Max3100 is passed back through the PIC and out the PICs UART.

The idea is a device allowing bi-directional communication
between a modem and an attached device, which will respond to command strings sent through the modem to tell it to do something and return results.

All of this works just fine - almost.  Any number of bytes are passed through from the Max3100 through the PIC
to the modem - I've tried up to 1,000,000.  
But in a stream of data from the modem through the PIC and out through the Max3100, every 632nd byte is dropped.
Always the 632nd byte - never more or less.  Doesn't depend
on baud rate - 2.4 K through 115 K makes no difference.
Not affected by data byte values - any byte is OK (except the Start-of-Command switch byte).
None of the data in either direction is being buffered on the PIC, so I don't see how there could be buffer rollover
issues.  
I'm not posting the code here yet, because I am hoping someone will recognize the # 632 and give me a clue as to where to look. However, if the conclusion is that you need the code, I will pull the serial I/O stuff out and post
it.  
Roy J. Gromlich



2005\06\17@142230 by Dave VanHorn


>
>I'm not posting the code here yet, because I am hoping
>someone will recognize the # 632 and give me a clue
>as to where to look. However, if the conclusion is that you
>need the code, I will pull the serial I/O stuff out and post
>it.

Sounds like maybe a buffer rollover is off by one.

2005\06\17@144001 by Jan-Erik Soderholm

Dave VanHorn wrote :

> Sounds like maybe a buffer rollover is off by one.

Even though Roy J. Gromlich - PA wrote :

> None of the data in either direction is being buffered on
> the PIC, so I don't see how there could be buffer rollover
> issues.

Still possible maybe...

Anyway,

Roy J. Gromlich - PA wrote :

> But in a stream of data from the modem through
> the PIC and out through the Max3100, every
> 632nd byte is dropped.

Is it possible to see where in the chain the byte
gets lost ? I mean, is it between the modem and the PIC,
or between the PIC and the MAX3100 ?
Or maybe even *before* the modem or *after* the
MAX3100 ? And also, what is there before the modem
and after the MAX3100 ?

Jan-Erik.



2005\06\17@144714 by Dave VanHorn


>
>Is it possible to see where in the chain the byte gets lost ? I
>mean, is it between the modem
>and the PIC, or between the PIC and the MAX3100 ?
>Or maybe even *before* the modem or *after* the MAX3100 ? And also,
>what is there before the modem
>and after the  MAX3100 ?

Maybe the routine takes 1/632'nd of a word time too long to execute,
and so misses a byte?
But, he said baud didn't matter either.

If there's a PC involved, I wouldn't trust it either, without a referee.
I have a nice serial ASCII terminal that I use for things like this.
It never lies, and it shows me hex values for the chars, which is
very convenient.


2005\06\17@145814 by Jan-Erik Soderholm

Dave VanHorn wrote :

> Maybe the routine takes 1/632'nd of a word time too long to execute,
> and so misses a byte?
> But, he said baud didn't matter either.

Should that matter at all on an async link, that
is re-synced on each start bit ?

Regards,
Jan-Erik.



2005\06\17@150917 by Dave Tweed

Roy J. Gromlich - PA wrote:
> All of this works just fine - almost.  Any number of bytes
> are passed through from the Max3100 through the PIC
> to the modem - i"ve tried up to 1,000,000.
>
> But in a stream of data from the modem through the PIC
> and out through the Max3100, every 632nd byte is dropped.
> Always the 632nd byte - never more or less.  Doesn't depend
> on baud rate - 2.4 K through 115 K makes no difference.
> Not affected by data byte values - any byte is OK (except the
> Start-of-Command switch byte).

These are both asynchronous interfaces that you're running at full tilt --
no gaps in the data, right?

Are they both clocked from the same master oscillator? Probably not.

If the transmit clock on the modem is 0.158% faster than the transmit
clock on the MAX3100, this is exactly the sort of symptom you'd see.
For every 632 bytes transmitted by the modem to the PIC, the MAX3100
is only managing to transmit 631 of them, and one gets dropped.

-- Dave Tweed
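
For concreteness, here is Dave's arithmetic as a small C program (an illustrative sketch, not code from the thread):

    #include <stdio.h>

    /* Illustrative only: relate a transmit-clock mismatch to the
       interval between dropped bytes when there is no buffering. */
    int main(void)
    {
        double mismatch = 0.00158;   /* 0.158% clock difference */
        printf("one byte lost every %.0f bytes\n", 1.0 / mismatch); /* ~633 */
        return 0;
    }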

2005\06\17@151628 by Bill & Pookie

Could it be that there is ever so slight a difference in the speeds of the
two ends?  Because the data is buffered one byte in the uarts, once you
start sending and keep loading another byte before the uart has finished
transmitting the previous byte, the receiver will fall a little behind.  And
after 632 bytes, the error has grown to where it is unacceptable.

A quick test may be to heat or cool the uarts crystal on each end and see if
the magic number changes.

If you do decide to pause the transmitting every so often, remember that you
will have to let both ends flush their uarts by pausing the transmitting
greater than one byte time.

Kinda like Lucy and Ethel on the candy conveyer belt.  Need to pause for
enough time for all the candy to make it to the end of the line and fall on
the floor every so often.
Bill

{Original Message removed}

2005\06\17@152327 by Dave VanHorn


>
>Should that matter at all on an async link,
>that is re-synced on each start bit ?

The UART is resynced, but the ISR wouldn't be.

2005\06\17@152443 by Dave VanHorn


>
>If the transmit clock on the modem is 0.158% faster than the transmit
>clock on the MAX3100, this is exactly the sort of symptom you'd see.
>For every 632 bytes transmitted by the modem to the PIC, the MAX3100
>is only managing to transmit 631 of them, and one gets dropped.

Not true for async data though. You only care how far you are apart
at the end of the byte.

2005\06\17@154543 by Dave Tweed

Dave VanHorn wrote:
> I wrote:
> > If the transmit clock on the modem is 0.158% faster than the transmit
> > clock on the MAX3100, this is exactly the sort of symptom you'd see.
> > For every 632 bytes transmitted by the modem to the PIC, the MAX3100
> > is only managing to transmit 631 of them, and one gets dropped.
>
> Not true for async data though.

Absolutely true for async data! I've seen this exact problem in telecom
interfaces I've worked on. Async interfaces are not a good match for
continuous data, because of this specific problem.

If the application actually requires continuous streams of data >632 bytes,
then the approach we took was to implement an async transmitter (in our
FPGA) that actually transmitted stop bits that were slightly short,
allowing the next byte to start a little early if necessary. This won't be
an option with the 3100, of course, so it will be necessary to give it a
crystal that's a little on the high side.

> You only care how far you are apart at the end of the byte.

That's true only for receiving. The problem here is that the MAX3100 is
taking slightly longer to retransmit the data compared to how fast the
modem is sending it to the PIC.

-- Dave Tweed

2005\06\17@160213 by Wouter van Ooijen

face picon face
> These are both asynchronous interfaces that you're running at
> full tilt --
> no gaps in the data, right?
>
> Are they both clocked from the same master oscillator? Probably not.
>
> If the transmit clock on the modem is 0.158% faster than the transmit
> clock on the MAX3100, this is exactly the sort of symptom you'd see.
> For every 632 bytes transmitted by the modem to the PIC, the MAX3100
> is only managing to transmit 631 of them, and one gets dropped.

The standard cure for this is to have the first sender send with two
stop bits, and the 'passthrough' device send with one stop bit. Or you
could cheat: the passthrough device sends at a 0.5% higher baudrate.

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu
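
Rough numbers behind the two-stop-bit cure, as a small C calculation (a sketch; the 8N1/8N2 framing is an assumption, not stated in the thread):

    #include <stdio.h>

    /* Illustrative: margin gained when the original sender uses two stop
       bits but the pass-through device resends with one (8 data bits). */
    int main(void)
    {
        double in_frame  = 1 + 8 + 2;   /* start + 8 data + 2 stop bits */
        double out_frame = 1 + 8 + 1;   /* repeater resends 8N1         */
        printf("margin per byte: %.0f%%\n",
               100.0 * (in_frame - out_frame) / out_frame);   /* 10% */
        return 0;
    }

A 10% per-byte margin is far more than any crystal tolerance, which is why this cure works.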



2005\06\17@160752 by Dave Tweed

face
flavicon
face
Bill & Pookie wrote:
> If you do decide to pause the transmitting every so often, remember
> that you will have to let both ends flush their uarts by pausing the
> transmitting greater than one byte time.

Not really. Pausing 0.2% of the time would be plenty. It doesn't really
matter how the pauses are distributed -- it could be implemented as a
0.02-bit pause after every byte, a one-bit pause every 50 bytes (500 bits),
or a one-byte pause every 500 bytes.

Many applications don't really need continuous data -- either the message
length is implicitly limited, or there's a high-level end-to-end protocol
that introduces pauses into the data stream flowing in each direction.

Also, this is the sort of thing for which flow control was invented. The
PIC needs to be able to hold off the modem if the MAX3100 isn't ready to
accept more data. This may require a certain amount of buffering in the
PIC after all.

-- Dave Tweed

2005\06\17@161324 by Dave VanHorn

flavicon
face

>
>Absolutely true for async data! I've seen this exact problem in telecom
>interfaces I've worked on. Async interfaces are not a good match for
>continuous data, because of this specific problem.

Ok, I see what you mean now. Sneaky.
This also explains why I've never seen it; everything I've worked on
has a higher "upstream" data rate than its input, so it wouldn't
logjam like that.

2005\06\17@163409 by olin piclist

Dave VanHorn wrote:
>> If the transmit clock on the modem is 0.158% faster than the transmit
>> clock on the MAX3100, this is exactly the sort of symptom you'd see.
>> For every 632 bytes transmitted by the modem to the PIC, the MAX3100
>> is only managing to transmit 631 of them, and one gets dropped.
>
> Not true for async data though. You only care how far you are apart
> at the end of the byte.

It's not a per byte timing issue.  The problem is the bytes are coming in a
little faster than they are going out.

As a simple example, let's say a PIC with two UARTS is simply copying bytes
from one to the other.  Both are operating at a nominal baud rate of 9600.
The original sender is transmitting bytes at the maximum rate, in other
words there is no dead time between bytes.  Now assume the sender's baud
clock is 1% faster than the PIC's.  A 1% mismatch still allows the PIC to
receive all the bytes OK, but it now receives 101 bytes for every 100 it can
send on.  When these extra bytes pile up beyond the PIC's ability to buffer
them, something is going to get lost.

The solution is to make sure the PIC can send faster than it can receive, or
implement some sort of flow control.


*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com
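
For reference, a minimal sketch of the copy loop Olin describes, written against the usual PIC18 USART registers (RCREG, TXREG, RCIF, TXIF, as defined by Microchip's XC8 headers); this illustrates where the loss happens and is not the OP's code:

    #include <xc.h>   /* assumes Microchip XC8 register definitions */

    /* A minimal sketch of the copy loop described above - not the OP's
       code.  The only buffering is the USART's own holding registers. */
    void passthrough(void)
    {
        for (;;) {
            while (!PIR1bits.RCIF)   /* wait for a received byte          */
                ;
            unsigned char b = RCREG;
            while (!PIR1bits.TXIF)   /* if bytes arrive ~1% faster than   */
                ;                    /* they leave, the receiver overruns */
            TXREG = b;               /* during this wait, roughly once    */
        }                            /* every 100 bytes, and one is lost  */
    }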

2005\06\17@164425 by Dave VanHorn


>
>It's not a per byte timing issue.  The problem is the bytes are coming in a
>little faster than they are going out.

Yes, I see it now.
I just hadn't thought about it in aeons, since "shovel out faster
than it comes in" has been a design rule for so long, I forgot the cause :)

2005\06\17@171141 by Dave Tweed

Whatever happened to the OP?

Anyway, one more thought: The specific error amount did ring a bell with
me. It may be pure coincidence, or the modem may be using a "trick" to
generate its interface clock.

Many UARTs take a clock of 1.8432 or 3.6864 MHz from which they generate
the standard baud rates. However, if you have a crystal of 4.000 or 16.000
MHz in your system, there's a way to digitally synthesize a reasonable
facsimile of the baud clock and avoid the cost of another crystal.

If you set up a 4-bit binary counter to divide by thirteen, one of the
counter bits pulses 3 times for every 13 input clocks and another one
pulses 6 times. 4.000 MHz * 6 / 13 = 1.846154 MHz and 16.000 MHz * 3 / 13
= 3.692308 MHz. Both of these values are 0.16026% (1/624) high relative
to the nominal values, which is remarkably close to 1/631 = 0.1585%. The
remaining difference could be explained by the crystal tolerances at both
ends.

I've often wondered why the PICs that have on-chip 4 MHz oscillators don't
have the 6/13 circuit built in as one option for the prescaler for the UART
timing.

-- Dave Tweed
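
The divider arithmetic checks out; a small C program to verify it (illustrative only):

    #include <stdio.h>

    /* Illustrative: verify the divide-by-13 baud clock synthesis. */
    int main(void)
    {
        double a = 4.000e6  * 6.0 / 13.0;   /* synthesized ~1.8432 MHz */
        double b = 16.000e6 * 3.0 / 13.0;   /* synthesized ~3.6864 MHz */
        printf("%.6f MHz, error %+.4f%%\n", a / 1e6,
               100.0 * (a / 1.8432e6 - 1.0));   /* +0.1603% = 1/624 */
        printf("%.6f MHz, error %+.4f%%\n", b / 1e6,
               100.0 * (b / 3.6864e6 - 1.0));
        return 0;
    }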

2005\06\17@174642 by Roy J. Gromlich - PA

In general to all of the good folks who responded to this query, let me say thank you.  I had been feeling around the edges of the in-rate vs out-rate issue but hadn't quite gotten there.

Actually, the PIC UART clock and the MAX3100 clock are from
the same crystal - I used an 18.432 MHz xtal for the PIC and "borrowed" it via a /10 chip down to a 1.8432 MHz clock into the MAX3100 (which is just about its max clock rate).  If you do the division on 18.432 MHz the baud rates should come out dead in the center of each rate's tolerance window. So I would have to think that the clocks are in dead synch with each other.

Also, I find in testing that ONLY the 115.2 rate gets through without dropping the byte - at least on the systems I have
tested so far. The LOWER baud rates are the ones dropping
bytes.  This is true on 3 different PCBs with many different
data sources & data sinks.

But I am going to try the flow-control idea, since it just sounds good. Unfortunately, most of the devices which plug into the Output side have no flow control capability, but the modem
supplying the data feed always does, so I will try pausing the data flow every 100-200 characters, for a few character-periods, to let the MAX3100 catch up.

We will see how this goes.

RJG

2005\06\17@175010 by Roy J. Gromlich - PA

As to what happened to me, I was dealing with a Sears repair man who
came to fix our clothes dryer, so didn't get back to read all of these replies until a short time ago.

As for the crystal issue, as I explained in the post I just sent, I am using an 18.432 MHz crystal on the PIC and dividing it down to 1.8432 MHz for the
MAX3100.

RJG
 {Original Message removed}

2005\06\17@175814 by Roy J. Gromlich - PA

Thanks for the info about the 6/13 circuit - I had never seen that one and
find it rather clever.
However, since I observe the same problem with PC serial ports into
the PIC PCB as well as wireless modems, I don't think this is a likely
explanation.  Good call though - I really mean that.

RJG

2005\06\17@180106 by Hopkins

The capacitor will cause a phase change and the divide chip may cause
some delay so that the two devices are not in exact synch.

Mind you there are tolerances to the baud rate that the devices should
handle.

_______________________________________

Roy
Tauranga
New Zealand
_______________________________________

{Original Message removed}

2005\06\17@181142 by Jan-Erik Soderholm

Roy J. Gromlich - PA wrote :

> Actually, the PIC UART clock and the MAX3100 clock are from
> the same crystal...
>
> So I would have to think that the clocks are in dead synch with each other

Well, maybe *those* two baud rate clocks are, but they
are not the only ones here. There must be a baud rate
clock in whatever is feeding the PIC UART also, and *that*
one might be a little too fast (still within single-byte async
spec limits, but that doesn't help if running full speed
with no pauses, as we've learned here)...

Jan-Erik.



2005\06\17@181626 by Maarten Hofman

Rochester, 17 juni 2005.

> As for the crystal issue, as I explained in the post I just sent, I am using an
> 18.432 MHz crystal on the PIC and dividing it down to 1.8432 MHz for the
> MAX3100.

Yes, but is this the same clock that the modem is using that is
sending you the data? (If the modem's clock is slightly faster, the
incoming stop bits will be slightly shorter, which will cause the PIC
to receive more bytes than it is sending, as far as I understood from
previous posts).

Greetings,
Maarten Hofman.

2005\06\17@204729 by Roy J. Gromlich - PA


Thank you Jan-Erik - that was the point that I was missing all along. While it is true that the PIC UART clock and the MAX3100 clock are the same (if possibly out of phase) it is the two off-board clocks that would cause the effect.

In fact, now that I think of it, the way I have been testing
for this problem virtually guarantees that I will see the byte loss somewhere in the stream. I have been using a PC with two serial ports to send test data into the PCB and also to capture the data coming out of the PCB. So in the case where the PC is sending data a tiny bit slower than the PIC can swallow it, no problem. But on the output side
the same PC baud rate clock makes the receiving port a
tiny bit slower than the MAX3100 can send, so eventually a byte gets dropped.

In normal operation the devices connected through the PCB
to the modem don't send or receive large blocks of data - the typical dialog consists of 20-30 bytes in each direction.
However, one device has a calibrate mode which sends large quantities of data - on the order of 1000s of bytes non-
stop. This is the device which demonstrated the problem to us - we had not seen it earlier.

So now I need to come up with a rational way to pause
the data coming into the PCB, to insure that everything gets sent out. There is no point in checking the Busy Bit of the
MAX3100 because the receiving device has no handshake
lines. I will just set the Pause event to occur every 'n' bytes,
where 'n' is greater than the longest normal message and
less than 632.

--- should be interesting.

RJG


 {Original Message removed}

2005\06\17@211754 by Jan-Erik Soderholm

Roy J. Gromlich - PA wrote :

> In fact, now that I think of it, the way I have been testing
> for this problem virtually guarantees that I will see the
> byte loss somewhere in the stream. I have been using a
> PC with two serial ports to send test data into the PCB and
> also to capture the data coming out of the PCB. So in the
> case where the PC is sending data a tiny bit slower than
> the PIC can swallow it, no problem. But on the output side
> the same PC baud rate clock makes the receiving port a
> tiny bit slower than the MAX3100 can send, so eventually
> a byte gets dropped.

Yes, but (there is always a "but", right ? :) )...

The PC can not get more data on its input port than
it has sent out on the output port, right ?
Where would that come from ?

The fact that the UART and the MAX3100 are running a little
faster doesn't matter. They will not be handling data at
100% of their capacity. The PIC and MAX3100 don't
*add* any data to what's coming from the PC, do they ?

Now, it's another issue if the UART gets its input from
something else than the PC, that is actually sending at
100% of the (PIC and MAX3100) baudrate, but slightly
faster than the PC baud rate clock...

Jan-Erik.




2005\06\17@221407 by William Chops Westfield

On Jun 17, 2005, at 5:47 PM, Roy J. Gromlich - PA wrote:

> So now I need to come up with a rational way to pause
> the data coming into the PCB...

Can you simply set the PC to generate 2 stop bits, while configuring
the PIC and MAX to only need one?  That should be plenty of pause,
and doesn't require any handshaking or SW modifications on either side.

BillW

2005\06\17@222814 by Roy J. Gromlich - PA

In most cases - where I have control of the modem/communications
device - the answer is Yes. But there are installations where I can't
change those settings, and many of them have been set to 1 stop bit from day one.

Also, at 115.2 K the PIC circuit needs 2 stop bits. Now there are no installations at present running at 115.2 K, but at some time in the
future there may be, and we need to have a solution for setting
up the system which works with existing devices.

I would like to come up with a fix which works in worst-case conditions
so our field people can walk in and upgrade an existing installation to remote monitoring and control with minimal changes to existing
hardware.
RJG
 {Original Message removed}

2005\06\17@222850 by Bill & Pookie

Whatever solution you use, may be best to implement it for both ways.   Else
you will have the same problem the other way if faster device is on other
end.

Bill

{Original Message removed}

2005\06\18@000716 by William Chops Westfield

On Jun 17, 2005, at 7:28 PM, Bill & Pookie wrote:

> Whatever solution you use, may be best to implement it for both ways.

Do you HAVE any sort of flow control available?  Pausing the transmitter
periodically is relatively easy, but you'll fall further behind with your
receiver.  You need the receiver to be able to pause the transmitter...

Can you operate transmit and receive at different speeds?  The first time I
did some streaming data conversion (6bit "baudot" to 8bit ascii), I made the
transmit speed 4x the receive speed to make sure I had plenty of space to
insert all those extra bits...

There used to be a hack in the cisco terminal server code that would
periodically pause the uart transmitters.  The theory was that I would
rather get my transmits synchronized so that each ISR event had more
than one transmitter to service... (but that's a different problem.)
IIRC, it had measurable benefits in the lab.  Whether it had any real
results in the field is a separate question...

BillW

2005\06\18@084243 by olin piclist

Roy J. Gromlich - PA wrote:
> I will just set the Pause event to occur every 'n' bytes,
> where 'n' is greater than the longest normal message and
> less than 632.

632 is just the failure level of the current unit in hand.  You should do an
analysis of the worst case mismatches and make sure gaps are inserted to
cover that.  I think it would also be a really good idea to provide at least
a few bytes of buffering.


*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com

2005\06\18@085806 by olin piclist

Roy J. Gromlich - PA wrote:
> Also, at 115.2 K the PIC circuit needs 2 stop bits.

Huh?  There is nothing in the PIC that requires 2 stop bits at 115.2Kbaud.

*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com

2005\06\18@101302 by Roy J. Gromlich - PA

That is exactly what I thought. I suspect the Interrupt procedure is too
long, but I haven't found a good way to tighten it for faster execution.
I have certain tasks to accomplish between bytes, and I can't find a good way to bypass any of them.
Certainly stuffing bytes into a buffer to be pulled out later would work
in most cases, but only if you can guarantee that the longest continuous
block of bytes won't overflow the buffer.  I can't - in the calibrate mode the attached device sends up to 1K bytes in a stream.
I am beginning to suspect the need for a faster system clock. If I use a 9.216 MHz crystal with the X4 PLL I can double the execution speed - hopefully that will take care of the 2 Stop Bit problem and maybe even the byte dropping problem.

Work in progress.

RJG
 {Original Message removed}

2005\06\18@101640 by Bill & Pookie

Not sure of what hardware changes you can make at this time, but....

Could you install a "bypass the PIC UART" function where PIC could monitor
data going through but not repeat the data?

Bill

{Original Message removed}

2005\06\18@111053 by Bill & Pookie

Do not think the problem is in the speed of execution of PIC code, but in
the speed of the PIC's UART.  The PIC's UART has to completely send the
previous byte (including the full stop bit) before it can start to send the
next byte.  If  you could speed up the PIC's UART clock a bit it might solve
the problem.

Like the "My bucket has a hole in it" model.  A tier of three buckets with a
hole in each.  Pour water into the top one and it goes through the middle
one to the bottom bucket.  The problem is that the middle bucket has a
smaller hole and it takes longer for the water to run out.  Therefore it
will fill up slowly and you will have "data overflow".

Bill

{Original Message removed}

2005\06\18@112110 by Dave VanHorn

At 09:12 AM 6/18/2005, Roy J. Gromlich - PA wrote:
>That is exactly what I thought. I suspect the Interrupt procedure is too
>long, but I haven't found a good way to tighten it for faster execution.
>I have certain tasks to accomplish between bytes, and I can't find a
>good way to bypass any of them.
>
>Certainly stuffing bytes into a buffer to be pulled out later would work
>in most cases, but only if you can guarantee that the longest continuous
>block of bytes won't overflow the buffer.  I can't - in the calibrate mode
>the attached device sends up to 1K bytes in a stream.

I almost hate to say it, but a mega AVR with 4k sram internal, and
roughly 4x speed on the same clock, would be looking pretty good to
me. Some of the megas offer two hardware uarts.

I did a serial mux a few years ago, on a much less capable AVR than
we have today, that ran eight Max3100 uarts at 4800 full duplex, and
funneled all that data packetized through its onboard uart at
115200. Upstream, it managed the flow of the data, taking the
incoming data from each source and making packets out of it, into the
output buffer, with command packets being "wedged" in front of the
queue of normal data.  Downstream, it parsed the incoming data,
acting on command packets and throwing data packets into the
appropriate output buffers.  Handshaking for the downstream ports was
handled in the command packets, basically like "Shut up on port 2" or
"go again on port 5". In it's spare time, it was taking DF bearings
at 7200 bearings/sec and converting them from polar to rectangular
coordinates, averaging them, and then converting the results back to
polar.  All that on an 8 MHz clock.  On average, IIRC, I had
about 300-ish instructions worth of time between ints.




2005\06\18@125921 by Roy J. Gromlich - PA

I had already thought of that, but it would require a PCB change to do properly. It is a bit more than monitoring, though, since when a
command string is being received it must be blocked from going out the serial port to the attached device.  So a couple of gates would be
needed in there - not a big deal, but requiring a layout change.

Roy
 {Original Message removed}

2005\06\18@133020 by Jan-Erik Soderholm

Roy J. Gromlich - PA wrote :

>> Could you install a "bypass the PIC UART" function where
>> PIC could monitor data going through but not repeat the data?
>
> I had already thought of that, but it would require a PCB change to
> do properly. It is a bit more than monitoring, though, since when a
> command string is being received it must be blocked from going out
> the serial port to the attached device.  So a couple of gates would be
> needed in there - not a big deal, but requiring a layout change.

How would the PIC have time to "close the gate" if it
had to evaluate the data first ? It had already been
passed through to the MAX at the time the gate
was closing, not ?

Jan-Erik.



2005\06\18@220342 by Roy J. Gromlich - PA


Hmmmmmmmmmm - that does present a bit of a problem, doesn't it?

I suppose I could insert an external shift register in the Rx data line between the PIC Rx input and the MAX232 driving the DB9 output connector, to delay the Rx data bits for a character interval or so,
but that is very inelegant.  I could simulate a shift register in the PIC - route the Rx line into a PIC pin which would be the input of a software SR, then come out of another pin which would go to the MAX232. Not
a happy choice, but it does at least eliminate an extra added chip.

I still think the present design should be more than fast enough - I
simply have to determine where my code is wasting all the time.
At the 18.432 MHz clock rate I am getting 217 nanosec instruction
time (on average).  Fixing this would be a lot easier if I knew exactly
what was causing the dropped byte, and if the possible dropped byte
is the reason the calibrate mode won't work through the pass-thru PCB.

I am starting to see the image of a good logic analyzer, or a digital
storage scope, in the future.  Too bad the owner probably won't
see it that way.

Roy
 {Original Message removed}

2005\06\18@221636 by Anthony Toft

> Certainly stuffing bytes into a buffer to be pulled out later would work
> in most cases, but only if you can guarantee that the longest continuous
> block of bytes won't overflow the buffer.  I can't - in the calibrate mode
> the attached device sends up to 1K bytes in a stream.

You can use a rolling buffer; as long as you don't get more than "buffer
length" behind, you are fine with any length stream. You can also use
those message blocks to catch up (as the message could go to a different
buffer)...


--
Anthony Toft
    I'm Anton, and I approve this message

2005\06\18@221853 by Dave VanHorn


>
>I am starting to see the image of a good logic analyzer, or a digital
>storage scope, in the future.  Too bad the owner probably won't
>see it that way.

An ANT-8 might not break the bank.  

2005\06\19@073615 by Dave Tweed

Roy J. Gromlich wrote:
> I suppose I could insert an external shift register ...

You're coming at this from entirely the wrong direction.

I've already pointed out that changing your PIC's crystal frequency (and
the clock to the MAX3100) upward by 0.2% to 0.5% would solve the problem
directly.

Also, don't overlook the power of implementing a FIFO for the data inside
the PIC -- for every byte of FIFO capacity you have, you multiply the
length of continuous data that you can handle without dropping a byte.
For example, if you implement a 16-byte FIFO, you'll be able to handle
a data block of ~600 * 16 = 9600 bytes without dropping any. From what
you've said about your application, this should be plenty. This requires
no hardware changes at all.

-- Dave Tweed
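
A minimal sketch of such a FIFO, as a power-of-two ring buffer in C (the names and the 16-byte depth are illustrative, not from the thread):

    #include <stdint.h>

    /* At a 0.158% mismatch, 16 bytes of slack rides out a continuous
       block of roughly 16 / 0.00158, i.e. about 10000 bytes - in line
       with the ~9600 figure above. */
    #define FIFO_SIZE 16

    static uint8_t fifo[FIFO_SIZE];
    static uint8_t head, tail;       /* head: next write, tail: next read */

    int fifo_put(uint8_t b)          /* call from the receive interrupt */
    {
        uint8_t next = (uint8_t)((head + 1) % FIFO_SIZE);
        if (next == tail)
            return 0;                /* full: drop or assert handshake */
        fifo[head] = b;
        head = next;
        return 1;
    }

    int fifo_get(uint8_t *b)         /* call from the transmit path */
    {
        if (head == tail)
            return 0;                /* empty */
        *b = fifo[tail];
        tail = (uint8_t)((tail + 1) % FIFO_SIZE);
        return 1;
    }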

2005\06\19@110042 by Bill & Pookie

I was talking with my brother about the number 632 and he remembers that
our uncle Wilbert had the same problem back in 32 while on the WPA.
Wilbert's solution was to have the device originating the data set to
transmit with 1 1/2 or 2 stop bits to slow things down.

Bill

{Original Message removed}

2005\06\19@164547 by William Chops Westfield

On Jun 19, 2005, at 4:36 AM, Dave Tweed wrote:

> I've already pointed out that changing your PIC's crystal frequency
> (and
> the clock to the MAX3100) upward by 0.2% to 0.5% would solve the
> problem

Doesn't it just move the problem somewhere else?

BillW

2005\06\19@175343 by Dave Tweed

William "Chops" Westfield <westfwspamspam_OUTmac.com> wrote:
> On Jun 19, 2005, at 4:36 AM, Dave Tweed wrote:
> > I've already pointed out that changing your PIC's crystal frequency (and
> > the clock to the MAX3100) upward by 0.2% to 0.5% would solve the problem
>
> Doesn't it just move the problem somewhere else?

You might think so, but no.

As long as we're just talking about two asynchronous links in each
direction with the PIC in the middle, this completely solves the problem.
The key is that the PIC needs to be able to transmit data to either device
slightly faster than the *other* device can send data to the PIC.

Now, if someone were to add another device and a third asynchronous link
to the chain, that would complicate matters quite a lot, and this solution
would be too simplistic. That's when you start thinking about hardware flow
control or clocked synchronous interfaces.

-- Dave Tweed

2005\06\19@182814 by William Chops Westfield


On Jun 19, 2005, at 2:53 PM, Dave Tweed wrote:

>> Doesn't it just move the problem somewhere else?
>

> As long as we're just talking about two asynchronous links in each
> direction with the PIC in the middle, this completely solves the problem.
> The key is that the PIC needs to be able to transmit data to either device
> slightly faster than the *other* device can send data to the PIC.
>
> Now, if someone were to add another device and a third asynchronous link

I was assuming the third async device that the PIC is transmitting TO.
Hmm.  And I was assuming that the third port would have the same issue,
which is wrong - the problem only occurs when the data disposal rate is
LOCKED to a speed lower than the receive rate.  For an actual endpoint
like a PC, the disposal rate is likely to be very fast indeed (if bursty),
so temporary speed mismatches are easily handled by the buffering in that
system...

So, I think I agree: diddle the clock speed of the PIC up a fraction, and
the problem will go away...

BillW

2005\06\19@222027 by Dmitriy Kiryashov

Two cents on where these equations are coming from.

4.0000 Mhz = 6400 * 625
3.6864 Mhz = 6400 * 576

625 to 576 ratio for precise conversion.

One can simplify and obtain two close approximations as:

1) 624 ( 13*48 ) vs 576 ( 12*48 ) giving +0.1603% error
( 13/1 & 12/2 derive from this )

2) 625 ( 25*25 ) vs 575 ( 23*25 ) giving -0.1736% error


WBR Dmitry.

PS. I rather say in the PIC world 12 is magic number :)





Dave Tweed wrote:
{Quote hidden}


2005\06\20@035240 by Alan B. Pearce

>But I am going to try the flow-control idea, since it just sounds
>good. Unfortunately, most of the devices which plug into the
>Output side have no flow control capability, but the modem
>supplying the data feed always does, so I will try pausing the
>data flow every 100-200 characters, for a few character-periods,
>to let the MAX3100 catch up.

I would suggest you use Olin's FIFO macros from his development environment
within the PIC as a buffer in the pass through software. You could add extra
macros to it to keep track of "high tide" and "low tide" points in the
buffer for doing the handshake. Would be quite easy to do this as getting at
the pointer byte is quite simple in a macro.

2005\06\20@121920 by Jan-Erik Soderholm

Roy J. Gromlich wrote :

> I changed my program for testing to
> simply echo whatever comes IN the PIC UART right back
> OUT of the PIC UART - I'm not involving the MAX3100
> at all. I am still dropping a byte roughly every 600+ bytes.

Taking the MAX3100 out of the picture doesn't change anything.
You are *still* getting bytes faster from the PC than you can send
them out from the PIC. Didn't the MAX run from the same
basic clock as the PIC, so they will basically have exactly the
same baudrate out, not ?

Try to slightly "miss-tune" your baudrate clock in the PIC
to be just a little fast (if you have a PIC with the EUSART
module with a 16 bit baud rate register, this might be
possible...) I don't know the MAX part, but maybe
its baudrate clock also can be set slightly fast ?

Jan-Erik.



2005\06\20@123522 by Mark Scoville

Hi Roy, I feel your pain. I haven't followed this thread very closely, are
you using an 18F chip? I had some similar problems on an 18F6585/6680
project. The PIC would miss receiving a character every 8 or 10 characters.
Eventually I had my code stripped down to just echoing out what was
received - the PIC was still dropping characters. Bottom line was I had to
set the Baud rate a little high to get things to work with one stop bit (As
Microchip suggests in the following link). Apparently there is some sort of
issue in the EUSART hardware regarding sample timing...

Before you rip your code apart much more take a look at this... Maybe this
is relevant for you - maybe not. But it is probably worth you looking at -
it's very interesting.

http://forum.microchip.com/tm.asp?m=36399

especially look at the post from FujiFlyer at the bottom... where it says...

"There is an issue with the EUSART in the sampling timing. This is
especially noticable when multiple bytes are sent in succession with no
spacing in between. The module seems to miss the start bit. One thing that
seems to help is to set the USART baud rate on the 18F6680 to run a bit
fast.
Sincerely,
Michael Karbowski
Microchip Technology"

My favorite part is where it says...

"Unfortunately, at this point the designers haven't fully characterized the
behavior, so the safest bet is to put some space between bytes if possible.
Sincerely,
Michael Karbowski
Microchip Technology"

That's reassuring, huh? This may prevent you some hair pulling - but if it
affects you it's not going to make you happy. As I said, this has affected
me on 18F6680's and 18F6585's and there is NO MENTION of it in the errata.

I hope this is of some help to you Roy.

-- Mark

> {Original Message removed}

2005\06\20@132329 by olin piclist

Roy J. Gromlich wrote:
> I changed my program for testing to
> simply echo whatever comes IN the PIC UART right back
> OUT of the PIC UART - I'm not involving the MAX3100
> at all. I am still dropping a byte roughly every 600+ bytes.
>
> I find that I need to set my sending device (PC serial port)
> to two stop bits in order to get reliable transmission

You have a PC producing simulated data out a COM port, that goes into the
PIC UART, gets sent right back out the PIC uart, and into another PC COM
port to verify the data stream?  If so, this is exactly what you would
expect if the PC UART clock was just a little faster than the PIC UART
clock.

> this is happening even at 9.6 KBaud, so something is very wrong
> here.

Baud rate has nothing to do with it.  It's a ratio thing.  PC sends 601
bytes for every 600 the PIC can send out.  One byte will get lost every 600.
The speed of the bytes doesn't matter.

2005\06\20@140724 by J. Gromlich

Mark Scoville:

Thank you very much for that interesting reference - it does appear
to be related to the problem I am having. It's nice to know good
old Microchip is aware of the problem - - - even if they aren't
doing anything about it.  You know, UARTs are rather old
established devices - how could they put one in there with a whole
new problem?

Luckily, bypassing most of the code to test this isn't that big an
issue. I go all the way back to pulling out the EPROM, erasing
the EPROM, reprogramming the EPROM and putting it back into
the socket (if the pins don't break off), so flash chips are a real
favorite of mine.

For Dave Tweed and Jan-Erik Soderholm:

You are both quite correct in your recommendation to crank up
the UART bit rate slightly.  I reduced by 1 the divisor value into
the baud rate generator and the PIC UART keeps up quite nicely
at 1 stop bit. All the input characters echo back out just fine.

This isn't a big jump in clock rate - the rate increases at all of the
baud rates appear to be under 1%. At 115.2 KBaud it jumps to
115.9 KBaud, while at 9.6 KBaud it goes up to 9.68 KBaud.
Amazingly, the UART still reads & echoes the correct characters.

Unfortunately, the MAX3100 doesn't have a programmable divisor,
you are just selecting divisors from an internal table. The result is
that while the Input data is now being read (and echoed) correctly
the data stream out through the MAX3100 is dropping LOTS of
characters - about every 10th one.

This is strange, because the MAX3100 appeared to be operating
correctly before. It isn't clear to me why it is doing this - it gets its
data via the Synchronous Serial Port, which knows nothing
about the bit clock in the PIC UART. Since I haven't changed
the PICs clock the data should be going into the MAX3100 at
just about the same rate it was going before.

re: the Microchip UART issue, I wonder if somewhere down the
road I'm going to find a similar glitch report on the MAXIM.

Making some slow progress here - - -

RJG

> {Original Message removed}
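
For reference, the divisor arithmetic behind Roy's numbers, assuming the PIC18 high-speed formula baud = Fosc / (16 * (SPBRG + 1)) with BRGH = 1 (a sketch; the 9600-baud case is the one that matches his figures exactly):

    #include <stdio.h>

    /* Illustrative: effect of reducing SPBRG by one at 18.432 MHz. */
    int main(void)
    {
        double fosc = 18.432e6;
        int spbrg = 119;                          /* exactly 9600 baud */
        double nominal = fosc / (16.0 * (spbrg + 1));
        double tweaked = fosc / (16.0 * spbrg);   /* SPBRG reduced by 1 */
        printf("nominal %.1f baud, tweaked %.1f baud (+%.2f%%)\n",
               nominal, tweaked, 100.0 * (tweaked / nominal - 1.0));
        return 0;                                 /* ~9680.7, +0.84% */
    }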

2005\06\20@140750 by Bill & Pookie

All is well.  You still see the original problem even though you have the
PIC echoing the data.  You say that by sending the data with 2 stop bits "it
works".  That is the fix for the problem.

First understand what the problem is.  The clocks for the UARTs are not at
the precise same speeds.  Maybe that is why they call it asynchronous?  The
PC is sending it's byte faster than the PIC can send its byte.  However
small this difference is, it does accumulate and causes the PIC to have a
byte to send before its transmit buffer is empty.  And this byte goes to
byte heaven.

Think of it this way.  If the PC was set to 1 stop bit and the PIC was set
to 1 1/2 stop bits, you would have the same problem with echoing the data,
only you would get an error long before 600 bytes.  Setting the PC to 2
stop bits would fix the problem.

The PIC and the MAX3100 make up a string of relays for the data.  They
should be set to 1 stop bit.  The devices hooked to either end of the relay
string, the device originating the data, should have more than one stop bit
set.  This will insure that any acceptable difference in UART clocks will
not accumulate.

Bill

{Original Message removed}

2005\06\20@150158 by olin piclist
Roy J. Gromlich wrote:
> This isn't a big jump in clock rate - the rate increases at all of the
> baud rates appears to be under 1%. At 115.2 KBaud it jumps to
> 115.9 KBaud, while at 9.6 KBaud it goes up to 9.68 KBaud.
> Amazingly, the UART still reads & echoes the correct characters.

Not amazing if you do the math.  Given 8N1 format there are 8.5 bits from
the start of the start bit to the center of the last data bit.  1% baud rate
error amounts to sampling the last bit 8.5% of a bit time from its true
center.  That's only 17% of the "guaranteed to fail" baud rate mismatch.  I
usually figure 1/4 bit time mismatch to be OK, although less is of course
better.  That's the threshold at which my UART_BAUD macro in STD.INS.ASPIC
at http://www.embedinc.com/pic starts complaining.
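
Olin's error budget, spelled out as arithmetic (illustrative):

    #include <stdio.h>

    /* Illustrative: how far off the last data-bit sample lands for a
       given baud mismatch, per the 8.5-bit figure for 8N1 framing. */
    int main(void)
    {
        double bits_to_last_sample = 8.5; /* start bit to center of bit 8 */
        double baud_error = 0.01;         /* 1% mismatch                  */
        double drift = bits_to_last_sample * baud_error;
        printf("last sample off by %.3f bit times (fails at 0.5)\n",
               drift);                    /* 0.085 = 17% of failure */
        return 0;
    }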

2005\06\20@170122 by Jan-Erik Soderholm

Roy J. Gromlich wrote :

> Mark Scoville:
>
> Thank you very much for that interesting reference - it does appear
> to be related to the problem I am having. It's nice to know good
> old Microchip is aware of the problem - - - even if they aren't
> doing anything about it.  You know, UARTs are rather old
> established devices - how could they put one in there with a whole
> new problem?

I have not read that Microchip forum thread, but from what's been
written about it here, I'm not *sure* that they talk about the same
problem you are seeing here.

Anyway...

> For Dave Tweed and Jan-Erik Soderholm:
>
> You are both quite correct in your recommendation to crank up
> the UART bit rate slightly.  I reduced by 1 the divisor value into
> the baud rate generator and the PIC UART keeps up quite nicely
> at 1 stop bit. All the input characters echo back out just fine.

Since your change in UART speed was larger than 1 in 600, which
was how much too slow your UART was.

> This isn't a big jump in clock rate - the rate increases at all of the
> baud rates appear to be under 1%. At 115.2 KBaud it jumps to
> 115.9 KBaud, while at 9.6 KBaud it goes up to 9.68 KBaud.
> Amazingly, the UART still reads & echoes the correct characters.

And 1% is about 6 times more than 1 in 600, right ?
So you got well "over the fence".

And a 1% (a little less actually, since you started on the low side)
"wrong" baudrate is well within the acceptable error for an 8-bit
character, so that's OK.

> Unfortunately, the MAX3100 doesn't have a programmable divisor,
> you are just selecting divisors from an internal table. The result is
> that while the Input data is now being read (and echoed) correctly
> the data stream out through the MAX3100 is dropping LOTS of
> characters - about every 10th one.

It would be nice to know if they are dropped before
or after the MAX3100.

> This is strange, because the MAX3100 appeared to be operating
> correctly before.

In what way "correctly" ? Were the dropped bytes between the PIC
and the MAX3100 ? I thought they were *after* the MAX3100. And
besides, *everything* (hardware wise) was probably working correctly;
they (UART and MAX) were just asked to do something they weren't
designed for.

> It isn't clear to me why it is doing this -
> it gets its data via the Synchronous Serial Port, which knows nothing
> about the bit clock in the PIC UART. Since I haven't changed
> the PIC's clock the data should be going into the MAX3100 at
> just about the same rate it was going before.
>
> re: the Microchip UART issue, I wonder if somewhere down the
> road I'm going to find a similar glitch report on the MAXIM.

Hm, I thought we were talking about some (quite expected) data
dropouts from a *continuous* running async communication line with
mis-matched baud rates, not ? Not some "glitch" in some hardware ?
(A "glitch" in the overall system design maybe... :-) )

Personally, I think that the only real solution is to implement a
FIFO buffer in the PIC that can even out any differences in speeds
over the timeframe with large data "bursts".

Best Regards,
Jan-Erik.




2005\06\20@195841 by Roy J. Gromlich - PA

I agree regarding the FIFO - I will put something in there to try in the near future.  I am still not certain how the FIFO is going to help - it appears to have been demonstrated that the PIC USART needs a bit more speed to keep up with the PC.  OK - I can change the crystal or tweak the caps to try pushing the clock rate up by 0.2 - 0.3% which is closer
to what I actually need.  But ultimately, if bytes are coming
into the PIC system faster than they can be written out, the
FIFO will fill (no matter how long it is) and drop bytes.  The FIFO can do a nice job of smoothing out variations in bursty
data, but ultimately we can't accept data faster than we can spit it out the other end.

At the moment, with the almost 1% overclocking of the UART, I am getting what I would call "proper" operation - ie. - all the bytes are received by the UART and echoed back out of the UART correctly. By "all" I mean I have been testing it with approximately 1 million bytes in a continuous unbroken stream.

In fact, at the moment, the MAX3100 is sending out those same
1 million bytes correctly at all baud rates from 2.4K through
115.2K.  And the same 1 million bytes of data goes IN through
the MAX3100 and OUT from the PIC UART (response channel) just as it was supposed to do.  So what's my problem?

The problem is that this is going to be an uncertain beast to install in the field, where it is going to be added to legacy systems with unknown characteristics.  To be used as intended it can't require tweaking in the field. I will be testing it tonight with 2 or 3 other older computers to see if the performance is the same.

But one thing at a time.  More later - - -

RJG

 {Original Message removed}

2005\06\20@202556 by William Chops Westfield


On Jun 20, 2005, at 4:58 PM, Roy J. Gromlich - PA wrote:

> I am still not certain how the FIFO is going to help;
> it appears to have been demonstrated that the PIC USART
> needs a bit more speed to keep up with the PC.

You're dropping a packet whenever the receive overrun speed, times
the number of bytes received, exceeds the buffering you have available
in the PIC.  In the current case, you have one byte of buffering, and
it looks like the receiver is running about 0.2% slower than the
transmitter (well within the allowed range), and .002 * 600 = 1.2, so
you lose your byte after about 600 received bytes.

Increasing the buffering will only help if there are pauses in the
received data 'eventually';  If you know that there will be some sort
of pause after 16k characters, you should be able to get away with
about 32 bytes of buffering...

BillW

2005\06\20@214551 by Jan-Erik Soderholm

Roy J. Gromlich - PA wrote :

> I agree regarding the FIFO - I will put something in there
> to try in the near future.  I am still not certain how the FIFO
> is going to help - it appears to have been demonstrated
> that the PIC USART needs a bit more speed to keep up with
> the PC.

If the "burst" goes on **forever**, there isn't a FIFO large
enough.

If the burst has *some* maximum length, a FIFO depth can
be calculated that will be enough.

> OK - I can change the crystal or tweak the caps to
> try pushing the clock rate up by 0.2 - 0.3% which is closer
> to what I actually need.  But ultimately, if bytes are coming
> into the PIC system faster than they can be written out, the
> FIFO will fill (no matter how long it is) and drop bytes.

But only if there isn't, ever, a pause in the stream.

> The FIFO can do a nice job of smoothing out variations in bursty
> data,

Which was what you talked about earlier.

> but ultimately we can't accept data faster than we
> can spit it out the other end.

Which is another scenario. Question is if there even *is* a solution
for *indefinitely* long *continuous* streams of bytes. Not
without syncing the baud rate clocks in *all* involved
equipment.

I don't really understand, what are your design rules ?
Are you designing for **indefinitely** long "bursts" of bytes ?

Earlier you talked about "several thousands", not ? And that
shouldn't be a major problem to smooth out in a FIFO.

Regards,
Jan-Erik.



2005\06\21@095143 by J. Gromlich

Jan-Erik Soderholm:

OK - let me back up and explain where & how this problem
originated.  There are monitor and display devices dating
back between 5 and 10 years installed all over the country.
They have minimal intelligence and were intended to be
accessed over dial-up analog modems at 4.8 Kbaud and
up. The command conventions and protocols were done
in-house by the designers, usually follow no standards
of any kind, and are proprietary - meant to be controlled
only by the designer/manufacturer's software.

We come along and add functionality (remote reading
of voltage, current, solar panel charging rate, temperature,
operating state - remote reset of devices) to the installed
devices by adding a PIC-based board between the modem
(which is already there) and the controlled device. We do
not have the option of modifying the existing installation -
except in very minor ways. We do not have access to the
internal hardware or software of the controlled device(s).  
The PIC board must be totally transparent to the existing
software, but be able to intercept commands from our
add-on software modules running on the control PC.

As bad as this sounds, it works quite well. Except, of course,
with one device, and only in its Calibration mode. When the
modem is connected directly to the device, and the calibrate
function is selected in the device's control program, a
great deal of data is exchanged - in some cases blocks of
1K - 2K bytes are exchanged.  I have deduced what some
of the device commands are, and how the communication
protocol works under normal conditions, but the calibrate
mode appears to be different and totally undocumented.
The data is not ASCII but apparently binary.

When the PIC device is installed between the modem and
the controlled device, and calibrate mode is invoked, the
dialog between controller and device proceeds normally
up to a point - then the device starts returning what I think
are error messages to every transmission from its control
program. The control program ultimately reports that it
is unable to communicate with the device and the session
fails.

Analyzing the communications logs for both normal and
failure modes I noticed that bytes were missing - not always
the same bytes, either. This led to using test files to try to
determine what was being dropped and why. This is where
I am stuck now.  As far as "design rules" the only one I
can state is "total transparency", and at the moment, in
this case,  that has failed.

RJG

>
> {Original Message removed}

2005\06\21@102041 by olin piclist

Roy J. Gromlich wrote:
> a great deal of data is exchanged - in some cases blocks of
> 1K - 2K bytes are exchanged.

OK, so your data is not continuous.  You apparently have an upper limit of
2K bytes before a pause.  Let's say your worst case speed mismatch between
the PIC UART and the other UARTS is .5%, then you need to buffer 1 byte for
every 200 in the worst case continuous block.  That means a 10 byte buffer
should in theory suffice for a 2K byte continuous block.  I would use a 16
or 32 byte FIFO just to be sure, assuming the PIC has the memory.

The other thing to do, as has been pointed out repeatedly, is to run the PIC
UART a little bit faster.  1% above the "correct" baud rate sounds like a
nice number.  You can do this with a slightly faster crystal, or a slightly
smaller baud rate divisor value.

This has all been hashed to death, and I even thought you said speeding up
the PIC UART worked.  What exactly is your remaining problem?


*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com
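
The sizing rule above, as arithmetic (illustrative):

    #include <stdio.h>

    /* Illustrative: FIFO depth needed for a bounded continuous block,
       given a worst-case baud mismatch (the 0.5% / 2K example above). */
    int main(void)
    {
        double mismatch = 0.005;   /* worst-case clock difference */
        double block    = 2048;    /* longest continuous block    */
        printf("FIFO depth needed: %.1f bytes\n",
               block * mismatch);  /* ~10 bytes */
        return 0;
    }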

2005\06\21@120456 by Jan-Erik Soderholm

Scott Dattalo wrote :

> Bit-banging a transmitter is very straightforward and you've got
> control over the exact baud rate. In fact, you could even servo
> your Tx baud rate based on your Rx baud rate. But, I'd definitely
> consider the FIFO/circular buffer approach first.

Actually, you could implement a FIFO, and
then servo the UART based on the "level"
in the FIFO. Just as Roy actually did
(but as a fixed change). This could be
used as an automatic adaptation to the
natural variations amongst the devices
"out there". With the new 16 bit baud rate
registers (EUSART), you can "fine-tune"
the baud rate quite well.

Roy J. Gromlich wrote :

> When the modem is connected directly
> to the device, and the calibrate
> function is selected in the device's control
> program, a great deal of data is exchanged
> - in some cases blocks of 1K - 2K bytes are
> exchanged.

OK. Fine. Yesterday we were talking about
more or less indefinitely long bursts.
1-2K long bursts are probably easier to handle.

Jan-Erik.
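
A sketch of that servo idea in C, assuming a part with the 16-bit SPBRGH:SPBRG pair Jan-Erik mentions (the 18F452 in this thread has only the 8-bit SPBRG); fifo_level(), the thresholds, and the step size are all made up for illustration:

    #include <xc.h>      /* assumes XC8 register definitions */
    #include <stdint.h>

    extern uint8_t fifo_level(void);   /* hypothetical: bytes now queued */
    #define FIFO_SIZE 16               /* hypothetical FIFO depth        */

    /* Nudge the baud rate generator based on how full the FIFO is:
       filling up -> smaller divisor (transmit faster), lots of slack
       -> larger divisor (ease off).  Call this periodically. */
    void baud_servo(void)
    {
        uint16_t brg = ((uint16_t)SPBRGH << 8) | SPBRG;
        uint8_t  level = fifo_level();

        if (level > (FIFO_SIZE * 3) / 4 && brg > 0)
            brg--;                     /* draining too slowly: speed up */
        else if (level < FIFO_SIZE / 4 && brg < 0xFFFF)
            brg++;                     /* plenty of slack: slow down    */

        SPBRGH = (uint8_t)(brg >> 8);
        SPBRG  = (uint8_t)brg;
    }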



2005\06\21@151733 by Wouter van Ooijen

> Actualy, you could implement a FIFO, and
> then servo the UART
> based on the "level"
> in the FIFO.

Flash of recognition: I once designed such a system using that
mechanism. It's purpose was to provide transparent asynch and X.21
(synchronous) links over an ATM network. Variable delay times made the
buffer size calculations very complex.

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu


2005\06\22@093403 by J. Gromlich

Again, thanks for all the advice.

I expect I will add a FIFO feeding the MAX3100
Slave UART, with a high-water threshold at 70%
or 80% to toggle the hardware handshake to the
source device (PC or modem) which is feeding the
PIC UART input, until the FIFO falls below the
low-water threshold at 20% or 30%. That should
break up even a "continuous" input stream into
manageable chunks.

However, I have detected another - possibly
interacting - problem which I need to resolve
first.

So back to the scope and the counter.

Roy
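
A sketch of that watermark scheme (pin and helper names are hypothetical; RTS_PIN stands for whichever port bit drives the modem's handshake line):

    #include <xc.h>
    #include <stdint.h>

    extern uint8_t fifo_level(void);   /* hypothetical: bytes now queued */
    #define FIFO_SIZE   64             /* hypothetical FIFO depth        */
    #define HIGH_WATER  ((FIFO_SIZE * 3) / 4)   /* ~75% full */
    #define LOW_WATER   (FIFO_SIZE / 4)         /* ~25% full */
    #define RTS_PIN     LATBbits.LATB0 /* hypothetical handshake output  */

    /* Hysteresis flow control: stop the modem above the high-water
       mark, restart it below the low-water mark, do nothing between. */
    void update_flow_control(void)
    {
        uint8_t level = fifo_level();
        if (level >= HIGH_WATER)
            RTS_PIN = 0;               /* ask the modem to pause */
        else if (level <= LOW_WATER)
            RTS_PIN = 1;               /* let data flow again    */
    }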

2005\06\22@111453 by olin piclist

Roy J. Gromlich wrote:
> I expect I will add a FIFO feeding the MAX3100
> Slave UART, with a high-water threshold at 70%
> or 80% to toggle the hardware handshake to the
> source device (PC or modem) which is feeding the
> PIC UART input, until the FIFO falls below the
> low-water threshold at 20% or 30%. That should
> break up even a "continuous" input stream into
> manageable chunks.

But I thought you said hardware flow control wasn't an option because the
sending device just sends what it sends without monitoring any handshake
lines?

2005\06\26@200907 by Roy J. Gromlich - PA

Greetings Again:

I'm back with another question.
(From far off the words "Oh No, Not Again" can be heard)

Increasing the baud rate on the PIC UART slightly totally eliminated my dropped bytes at all baud rates - that is good, but confusing. Confusing because the change did
not affect the baud rate of the MAX3100 Slave UART, so
it is still sending bytes out at the same rate it was before.
To me this appears to mean that I am taking them in faster
than before, but spitting them out at the same rate, so they should just pile up faster.
Also, it presents a potential for future problems because the 18F452 has only an 8-bit baud rate divider, so I can't
shift all of the rates by the same percentage.

I'm not going to chase that one around - I want a general fix which will raise both the PIC and the MAX3100 baud rate(s). Since the MAX3100 is getting its baud rate clock from the PIC (via a divide /10) the obvious answer is to
speed up the PIC clock by 1% or 2%.  The PIC crystal is 18.432 MHz - what I need are 18.616 MHz or 18.800 MHz -
those are +1% and +2% respectively.

Neither is a standard crystal - I can have them made to order for an extra fee, but before I do, does anyone know how to "pull" a crystal that far off its nominal frequency?
I can shift it 0.1% or 0.2%, but whole percentage points are a bit of a problem, to say nothing about what would happen to stability and starting ability if I could.

Any ideas will be appreciated.

Roy J. Gromlich

2005\06\26@205839 by olin piclist

Roy J. Gromlich - PA wrote:
> Increasing the baud rate on the PIC UART slightly totally
> eliminated my dropped bytes at all baud rates - that is
> good, but confusing.

I thought this was the expected result, and was discussed at great length.

> does anyone know
> how to "pull" a crystal that far off its nominal frequency?
> I can shift it 0.1% or 0.2%, but whole percentage points
> are a bit of a problem, to say nothing about what would
> happen to stability and starting ability if I could.

A long time ago we used a crystal based VCO in a video genlock circuit that
could go +-100ppm if I remember right (maybe +-200ppm).  But it was
specifically designed for that.  I wouldn't expect to be able to pull a
normal crystal by more than 10-20ppm, before running into trouble.  However,
1% (= 10,000 ppm) is waaaaaay out of line for a crystal.
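
For a rough feel for why, the standard pulling relation is
df/f = Cm / (2 * (C0 + CL)); plugging in typical textbook AT-cut values
(assumed here, not data for any particular part):

    /* Crystal pullability estimate: df/f = Cm / (2 * (C0 + CL)).
       The capacitances below are typical textbook values only. */
    #include <stdio.h>

    int main(void)
    {
        const double cm = 20e-15;   /* motional capacitance, ~20 fF */
        const double c0 = 5e-12;    /* shunt capacitance, ~5 pF     */
        const double cl = 20e-12;   /* load capacitance, ~20 pF     */

        double pull = cm / (2.0 * (c0 + cl));
        printf("pull sensitivity ~ %.0f ppm\n", pull * 1e6);
        return 0;
    }

That comes out around 400 ppm for the whole practical range of load
capacitance, which is why a few hundred ppm is reachable and 10,000 ppm
is not.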


*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com

2005\06\26@213220 by Roy J. Gromlich - PA

Thanks - that's what I thought from my early days as an amateur
radio operator.  I'll just have to break down and order the slightly
faster crystals for the production units.

RJG
 {Original Message removed}

2005\06\26@220141 by Richard Prosser

Can you use a completely different crystal frequency (with a
completely different UART divisor) to get your slightly faster
bitrate? Or would it require too many changes elsewhere (timers etc.)?
Or run an RC oscillator using the incoming bitstream as a reference
and set things up to output just slightly faster.

RP

On 27/06/05, Roy J. Gromlich - PA <rgromlich@pa.net> wrote:
> Thanks - that's what I thought from my early days as an amateur
> radio operator.  I'll just have to break down and order the slightly
> faster crystals for the production units.
>
> RJG
>  {Original Message removed}

2005\06\27@105056 by J. Gromlich


Yes, I can use a different frequency. In fact, using a 20.48 MHz
crystal, which is a standard part, and changing the divisors I
can get all of the baud rates to be +1% from where they are now.
I think that will do the job and is a whole lot cheaper than custom
crystals.

Of course I will have to change all of my time delays, since
20.48 MHz comes out to +11% for the PIC itself, and that
is too large an error to be ignored, as an error of +1% could
have been. Not a big deal, though.
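
The arithmetic behind the 20.48 MHz choice can be checked with a small
program - a sketch assuming the high-speed formula baud =
Fosc / (16 * (SPBRG + 1)):

    /* Checks that a 20.48 MHz crystal hits each standard rate +1%
       with an integer SPBRG value. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double fosc = 20480000.0;
        const int rates[] = { 9600, 19200, 38400, 57600, 115200 };

        for (int i = 0; i < 5; i++) {
            double target = rates[i] * 1.01;
            int spbrg = (int)lround(fosc / (16.0 * target)) - 1;
            double actual = fosc / (16.0 * (spbrg + 1));
            printf("%6d baud: SPBRG=%3d -> %8.1f (%+.2f%%)\n",
                   rates[i], spbrg, actual,
                   100.0 * (actual / rates[i] - 1.0));
        }
        return 0;
    }

Every rate lands at +1.01%, which is what makes 20.48 MHz (exactly
10/9 of 18.432 MHz) such a convenient substitute.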

RJG

>
> {Original Message removed}

2005\06\27@145715 by olin piclist

Roy J. Gromlich wrote:
> Of course I will have to change all of my time delays,

Not if the code is written right.  If not, now is a good time to get it in
shape.  Put the oscillator frequency as a single constant in an include
file.  When you need a delay, baud rate, or whatever, put that in the
include file too and derive everything from those numbers automatically.

My PIC development environment enforces part of this by requiring the
FREQ_OSC constant to be defined.  For an automatic way of calculating the
UART setup, see the UART_xxx macro in STD.INS.ASPIC at
http://www.embedinc.com/pic.  You specify the baud rate, and it uses
FREQ_OSC to derive the setup.

2005\06\27@160311 by J. Gromlich

Right you are, Olin.  I had already decided to do that when I make
this set of changes. With the usual 20:20 hindsight I should have
put all kinds of things in there as defined constants, and let the
Assembler insert the correct values when something changes.
However I didn't anticipate needing to fiddle with the baud rate
timing.

This, of course, makes one wonder what else I didn't think of that is
waiting to bite me in the ____.

RJG

> {Original Message removed}

2005\06\27@164635 by olin piclist

Roy J. Gromlich wrote:
> Right you are, Olin.  I had already decided to do that when I make
> this set of changes. With the usual 20:20 hindsight I should have
> put all kinds of things in there as defined constants, and let the
> Assembler insert the correct values when something changes.
> However I didn't anticipate needing to fiddle with the baud rate
> timing.
>
> This, of course, makes one wonder what else I didn't think of that
> is waiting to bite me in the ____.

Every time you type a constant into source code you should be asking
yourself whether it's something inherent to the algorithm (like masking in
the low nibble for example) or dependent on configuration.  If the latter,
at least it should be defined with an EQU at the top of the module or the
project include file.  The best approach is for human diddle constants to
only reflect genuine choices in as close to human terms as possible.
Anything used internally due to specifics of how the PIC works should be
derived from the human constants with assembler math.

As an example, doing a MOVLW 137 to get the baud rate generator value is the
worst possible approach.  Slightly better is to put BAUDDIV EQU 137 at the
top of the file.  The best approach is to define FREQ_OSC and BAUD, and have
the assembler calculate the baud rate divisor automatically.
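
A sketch of that progression in C preprocessor terms (the thread's
actual tool is an MPASM macro; the rounding trick here is an
assumption, not Olin's code):

    /* Worst:  a bare magic number buried in the code.
       Better: a named constant at the top of the file.
       Best:   derive it from the two genuinely human choices. */
    #define FREQ_OSC 20480000UL   /* oscillator frequency, Hz */
    #define BAUD     115200UL     /* desired baud rate        */

    /* High-speed mode: baud = FREQ_OSC / (16 * (SPBRG + 1)),
       rounded to the nearest divisor by adding half a step first. */
    #define BAUDDIV ((FREQ_OSC + 8UL * BAUD) / (16UL * BAUD) - 1UL)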

2005\06\27@203510 by Roy J. Gromlich - PA

Exactly what I have done with many other "constants" in the program -
and, in fact, in most programs I write, especially the ones which get
to the 3rd or 4th major re-write. Obviously a more disciplined
approach to laying out the program before starting to code would make
this easier to do. As every wise instructor I have ever had has told
me - and, ironically enough, as I have told my students when I have
been teaching.

Do as I say, not as I do.

RJG
 {Original Message removed}
