PICList Thread
'[EE] Idea for protocol'
2010\07\04@042416 by solarwind

I don't really know much about writing protocols or implementing them,
but I had the following idea. I don't really know if it's the "right"
way to go about it, so please comment on it.

I'm trying to design a multi-master protocol for RS485 networks
(specifically working on how the microcontroller will handle the data
at this point).

The MCU will have an ISR as follows, which will be called every time a
byte is received:

ISR() {
   if buffer not full
       push byte into a FIFO buffer
   start(processing_thread)
}

processing_thread() {
   if frame complete (if we have a full, complete frame in the buffer)
       process_frame() (process the frame, handle the message, whatever)
   stop(processing_thread)
}

Pastebin link for those who have trouble reading the above formatted
text: http://pastebin.com/Z8YRt69k


In this way, the ISR stays relatively free most of the time. It only
executes some short code when a byte is received. So, even if the
processing of the message data takes a long time (for whatever
reason), the ISR is free to continue pushing data into the buffer.
Meanwhile, the processing thread runs "in the background" to handle
the received data.
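
A minimal C sketch of this pattern (a power-of-two ring buffer; UART_RX_REG
and wake() are placeholders for whatever the actual MCU register and RTOS
call would be):

#define BUF_SIZE 64u                    /* power of two makes wrap a cheap AND */

static volatile unsigned char buf[BUF_SIZE];
static volatile unsigned char head;     /* written only by the ISR */
static volatile unsigned char tail;     /* written only by the processing thread */

void uart_rx_isr(void)                  /* called once per received byte */
{
    unsigned char next = (unsigned char)((head + 1u) & (BUF_SIZE - 1u));
    if (next != tail) {                 /* buffer not full */
        buf[head] = UART_RX_REG;        /* hypothetical receive register */
        head = next;
    }                                   /* else the byte is dropped */
    wake(processing_thread);            /* placeholder RTOS wake call */
}

int fifo_get(unsigned char *b)          /* called by the processing thread */
{
    if (tail == head)
        return 0;                       /* empty */
    *b = buf[tail];
    tail = (unsigned char)((tail + 1u) & (BUF_SIZE - 1u));
    return 1;
}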

I would imagine that using an interrupt (rather than consistently
polling for data) is far more efficient. Is this the way it's usually
done in the microcontroller world? Or have I totally missed something?

2010\07\04@093121 by Isaac Marino Bavaresco

On 4/7/2010 05:23, solarwind wrote:
{Quote hidden}

Your idea is fine. It all depends on how you implement it.

Are you still using FreeRTOS? Which MCU?

With FreeRTOS, if your RX interrupt has a higher priority than the
kernel you may have trouble. I implemented a system using this same
approach, but because I was using two UARTs at 115200bps I had to set
the RX ISRs in the high-priority interrupt (PIC18). In this situation,
you cannot call any of the FreeRTOS functions from the ISR, so what I
did was to set the processing task to a higher priority than all the
others and have it sleep for one tick if there is no data to process, so the
ISR doesn't need to wake it.

The processing task then wakes at the beginning of each tick, and if
there is no data to process it sleeps immediately, leaving the rest of
the tick for the other tasks to run. The resulting performance is very good.
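
A rough C sketch of such a task, assuming FreeRTOS (fifo_count() and
process_pending_bytes() are hypothetical application functions):

#include "FreeRTOS.h"
#include "task.h"

void vProcessingTask( void *pvParameters )
{
    for( ;; )
    {
        while( fifo_count() > 0 )       /* drain whatever the ISR queued */
            process_pending_bytes();
        vTaskDelay( 1 );                /* sleep one tick; the high-priority
                                           ISR never has to wake us */
    }
}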


Best regards,

Isaac



2010\07\04@151829 by Marechiare

> In this way, the ISR stays relatively free most of the
> time. It only executes some short code when a byte
> is received. So, even if the processing of the message
> data takes a long time (for whatever reason), the ISR
> is free to continue pushing data into the buffer.
> Meanwhile, the processing thread runs "in the
> background" to handle the received data.
>
> I would imagine that using an interrupt (rather than
> consistently polling for data) is far more efficient.
> Is this the way it's usually done in the microcontroller
> world? Or have I totally missed something?

I think you are right, and not only for the microcontroller world;
the concept holds true in other worlds too, in general software
development - the so-called n-layer architecture, for instance - and
even beyond software development, in my opinion... Some resource
catches the data in "real time", a next-level resource gathers and
processes the data later to produce a more meaningful result, and the
level above that handles the data further. That was a good point, I
must admit.

2010\07\04@172310 by William "Chops" Westfield


On Jul 4, 2010, at 1:23 AM, solarwind wrote:

> In this way, the ISR stays relatively free most of the time. It only
> executes some short code when a byte is received. So, even if the
> processing of the message data takes a long time (for whatever
> reason), the ISR is free to continue pushing data into the buffer.
> Meanwhile, the processing thread runs "in the background" to handle
> the received data.

Your ideas are fine and relatively standard for device drivers on  
pretty much any class of processor.  You can end up saving significant  
amounts of CPU time overall by adding a little bit more intelligence  
to the ISR; detecting end-of-packet before waking up the "processing  
thread", for example.   You use more cycles in the ISR, but the data  
is already in registers and such, and the processing wakes up a lot  
less.  (and one learns to appreciate protocol definitions that make it  
easy to do significant portions of processing in the ISR.  Terminating  
byte values avoided in the packet itself: good; byte count at start of  
packet: less good.)
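
A sketch of that smarter ISR (END_MARKER, UART_RX_REG and wake() are
illustrative placeholders, not from any particular protocol):

void uart_rx_isr(void)
{
    unsigned char b = UART_RX_REG;      /* hypothetical receive register */
    fifo_put(b);                        /* still buffer every byte */
    if (b == END_MARKER)                /* but only wake the thread on a
                                           complete packet */
        wake(processing_thread);
}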

It'll depend somewhat on what else is going on...  Multitasking  
kernels are fundamentally dependent on having a surplus of actual  
computing power...

BillW

2010\07\04@175820 by Isaac Marino Bavaresco

On 4/7/2010 18:23, William "Chops" Westfield wrote:
{Quote hidden}

For ASCII protocols it is easy to have start and end markers that can't
appear inside the packet itself, but ASCII protocols are less efficient
than binary protocols.

For machine-to-machine communication I prefer binary protocols, with a
start marker, length, payload and a checksum. I stipulate a maximum
delay between any two bytes of a packet (1 to 10ms usually), so if for
any reason the receiving state machine gets confused (because some data
is lost and a byte inside a packet matches the start marker, for
instance), it is easy to stop transmitting until the other side detects
this timeout and re-synchronizes. Usually this is accomplished
automatically, because the transmitter will wait for the response of the
previous packet before transmitting the next, and this wait is enough
for the other side to re-synchronize.


With protocols like this, it is more efficient if the ISR understands
the packet structure and assembles a complete packet before waking the
processing task. It just takes a very compact, simple and fast state
machine.
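
A sketch of such a state machine in C, for a <start><length><payload><checksum>
frame; the names and the one-byte additive checksum are illustrative
assumptions, not Isaac's actual code:

enum rx_state { WAIT_START, WAIT_LEN, IN_PAYLOAD, WAIT_CKSUM };
static enum rx_state state = WAIT_START;
static unsigned char frame[MAX_FRAME];  /* MAX_FRAME: application-defined */
static unsigned char len, pos, sum;

void rx_byte(unsigned char b)           /* called from the RX ISR per byte */
{
    switch (state) {
    case WAIT_START:
        if (b == START_MARKER) state = WAIT_LEN;
        break;
    case WAIT_LEN:
        len = b; pos = 0; sum = b;      /* checksum covers length + payload */
        state = (len > MAX_FRAME) ? WAIT_START
              : (len ? IN_PAYLOAD : WAIT_CKSUM);
        break;
    case IN_PAYLOAD:
        frame[pos++] = b; sum += b;
        if (pos == len) state = WAIT_CKSUM;
        break;
    case WAIT_CKSUM:
        if (b == sum)
            wake(processing_task);      /* complete, verified frame */
        state = WAIT_START;             /* good or bad, hunt for the next start */
        break;
    }
}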
I once wrote a system where the ISR replied with a low-level packet telling
that the packet arrived OK (it inserted a packet directly into the
transmission queue and enabled the TX interrupt), or that there was an
error and a retransmit was needed. The main task then replied with a second
packet, with the result of the processing of the original packet.


Best regards,

Isaac


2010\07\04@194503 by William "Chops" Westfield


On Jul 4, 2010, at 2:58 PM, Isaac Marino Bavaresco wrote:

> For ASCII protocols it is easy to have start and end markers that  
> can't
> appear inside the packet itself, but ASCII protocols are less  
> efficient
> than binary protocols.
>
> For machine-to-machine communication I prefer binary protocols, with a
> start marker, length, payload and a checksum.


Protocols with explicit "special" characters usually have a mechanism  
for "escaping" them if they appear inside the packet itself.

But you can also just allow the "end" character to occur inside the  
packet.  If it's not ACTUALLY the end (as determined by length, etc),  
you can still detect that by other mechanisms at either ISR or process  
level, and you're still cutting down the overall effort involved.  
Packet formats with checksums as the last bytes are rather sucky :-(

For example, one way to speed up PPP in some complex network  
topologies (eg over X.25 PAD intermediate network) is to send a  
"return" character after each packet.  It comes after the checksum has  
been parsed at the receiver, so it doesn't go in the packet, and it  
comes during a state when a "start" character is expected, making it  
particularly easy to ignore (you don't have a partially complete input  
packet that you have to figure out what to do with.)


> I stipulate a maximum delay between any two bytes of a packet

I hate protocols that do this.  It makes them very unreliable to  
"tunnel" across arbitrary comm technology (eg network protocol to  
dialout modem pool to public network to async server to destination.)  
"delays" are not preserved.  The less a protocol can depend on timing,  
the better. (You're still free to use timing to make things more  
efficient, of course.  After a delay between two bytes is a fine time  
to wake up process level code.)

BillW

2010\07\04@215256 by Isaac Marino Bavaresco

On 4/7/2010 20:45, William "Chops" Westfield wrote:
> On Jul 4, 2010, at 2:58 PM, Isaac Marino Bavaresco wrote:
>
>> For ASCII protocols it is easy to have start and end markers that  
>> can't
>> appear inside the packet itself, but ASCII protocols are less  
>> efficient
>> than binary protocols.
>>
>> For machine-to-machine communication I prefer binary protocols, with a
>> start marker, length, payload and a checksum.
>
> Protocols with explicit "special" characters usually have a mechanism  
> for "escaping" them if they appear inside the packet itself.

Escaping is bad when there are a lot of the special characters in the
payload.


> But you can also just allow the "end" character to occur inside the  
> packet.  If it's not ACTUALLY the end (as determined by length, etc),  

This is exactly what I do, except that I don't use an "end" character; I use a
"start" character and a length (usually one or two bytes, depending on
the maximum allowed packet length). The "start" character may appear
anywhere, as long as the receiver is synchronized with the data flow.

A problem may arise only if a "start" character is lost and a "start"
character is present inside the packet. Most probably the packet will be
rejected, but there is a small possibility that, once the receiver gets
unsynchronized and characters with a value equal to the "start" character
keep arriving inside packets, it may not be able to resynchronize
without a pause in the transmission to detect the timeout.


> you can still detect that by other mechanisms at either ISR or process  
> level, and you're still cutting down the overall effort involved.  
> Packet formats with checksums as the last bytes are rather sucky :-(

The checksum is there to ensure the received data is correct and the
receiver won't act based on bad data.

{Quote hidden}

This timeout I use only for point-to-point protocols (direct serial
connection). For other media (Ethernet, etc.) it is better to rely on
their own packetizing.
Even with this timeout, the protocol may survive tunneling; the timeout
is just to help re-synchronize the receiver, but it is not mandatory and
won't be needed if the error rate is very low.

Isaac


2010\07\05@020802 by Ruben Jönsson
{Quote hidden}

As others have said, you can handle the frame decoding in the ISR itself and
only call the processing_thread when you have a complete frame, in order to
increase efficiency.

Another thing I haven't seen mentioned here yet is the potential problem with
collisions, which has to be addressed since you are planning on making a
multi-master protocol. A collision will happen if two masters try to initiate
a message at roughly the same time. If you are planning to use standard RS485
transceiver chips, the masters themselves may not even notice the collision,
even if you read back the transmitted data, since they actively drive the line
to both polarities.

This can be solved with a bus that is only actively driven to one polarity and
passively pulled to the other. This is how a CAN bus works.

If you don't want a bus that is only actively driven to one polarity (it will
be more susceptible to noise in the passive state), there are other methods to
reduce the chance of a collision between masters to a low enough probability
that it can be acceptable. One such way is to wait a random time after the bus
has become quiet before the master starts to transmit the next frame. In the
unlikely event that two masters still transmit at the same time and don't
notice each other, the message will be messed up anyway and the checksum/CRC
will catch the faulty message. Of course, for this to work you will need some
sort of ACK mechanism so the master knows that the message has not been
received. Instead of using a random backoff time, you could use a time based
on the master's unique identifier address (which most likely exists anyway).
This could give masters with a lower address a higher priority.
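
A sketch of the address-based backoff in C; the helper names and timing
constants are made up for illustration:

void transmit_when_clear(unsigned char my_addr, const void *frame, int len)
{
    for (;;) {
        wait_until_bus_quiet(MIN_GAP_MS);   /* hypothetical: returns after the
                                               bus has been idle this long */
        delay_ms(my_addr * SLOT_MS);        /* lower address = shorter wait
                                               = higher priority */
        if (bus_is_quiet()) {               /* nobody started during our slot? */
            send_frame(frame, len);
            return;
        }                                   /* else lost arbitration; retry */
    }
}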

Good luck / Ruben
==============================
Ruben Jönsson
AB Liros Electronic
Box 9124, 200 39 Malmö, Sweden
TEL INT +46 40142078
FAX INT +46 40947388
ruben@pp.sbbs.se
==============================

2010\07\05@041306 by solarwind

2010/7/5 Ruben Jönsson <ruben@pp.sbbs.se>:
{Quote hidden}

Hi Ruben,

I had another thread a while ago specifically addressing issues with
multi-master protocols and collision detection/avoidance over RS485. I
just have to go find it now and re-read it.

I love these discussions. So much useful information.

2010\07\05@081611 by sergio masci



On Sun, 4 Jul 2010, William "Chops" Westfield wrote:

{Quote hidden}

I don't like putting this much protocol-dependent code in the ISR, as it
leads to many complications. What happens if the data length arrives
corrupted? What happens if there is a break in transmission - do you
implement timeouts, do you send a NAK? What happens if the CRC is wrong,
or if you see the start of the next packet before the end of the
current one? What about different packet formats?

How do you debug all this (as it's going on in an ISR) and push errors
through it to ensure you've caught all those gotchas in the "protocol"?

>
> It'll depend somewhat on what else is going on...  Multitasking  
> kernels are fundamentally dependent on having a surplus of actual  
> computing power...

And a very good way of having a surplus of actual computing power is not
to waste time in an ISR.

If you really want to reduce the load by not having the background task
waking up very frequently and processing each byte as it gets put into
your FIFO, you need a little more intelligence in your RTOS. You put your
packet-processing task to sleep for a long time (say, a data transmission
timeout period). You implement a high-water mark on your FIFO (say 50%),
and when the ISR detects that it has gone past this mark it wakes up the
packet-processing task. If the FIFO never reaches this mark (because the
transmitter has stopped sending), then the task gets woken up anyway because
of the timeout. The high-water mark can even be made variable, such that
the packet-processing task always gets woken up when the first few bytes
of a packet arrive. It then determines what type of packet it is receiving
and sets the high-water mark accordingly.
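
A sketch of that ISR-side check; fifo_count(), resume_task() and the mark
value are assumptions:

static unsigned char high_water = FIFO_SIZE / 2;   /* the packet task may
                                                      adjust this per packet type */

void uart_rx_isr(void)
{
    fifo_put(UART_RX_REG);              /* hypothetical receive register */
    if (fifo_count() >= high_water)
        resume_task(packet_task);       /* wake early: plenty buffered */
    /* otherwise the task's own sleep timeout catches a stalled sender */
}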

With an intelligent RTOS, tasks don't need to wake up very often to check
the environment; they can be woken up by an ISR (or another task), and while
they are asleep they consume NO CPU time. A good RTOS will not try to
resume each task that is sleeping. It will maintain a ready-to-run list
and simply cycle through the list (in priority order).

Some multitasking executives simply provide a "yield" facility. It is then
the responsibility of each task to keep putting itself back to sleep if it
is woken and has nothing to do. Processing a data packet this way will
waste CPU time, and I can see the rationale behind Bill's and Isaac's
suggestion. But if the RTOS does provide a proper suspend-and-resume
facility, then your overheads will drop dramatically and your software will
be greatly simplified.

Regards
Sergio Masci

2010\07\05@100128 by Olin Lathrop

solarwind wrote:
> I'm trying to design a multi master protocol for RS485 networks

Why RS485?  That's so 1990s.  Unless that is something specific to the
requirements (and it doesn't sound that way since you seem to be free to
design the protocol), you should seriously look at CAN.  CAN hardware is now
readily available in a variety of microcontrollers (18F4580 just to name
one).  All nodes on a CAN bus are equal, and the collision detection and
resolution is handled in the hardware transparently to the firmware.  In
other words, multi-master just works, whereas it's not trivial to do with
RS-485.

> The MCU will have an ISR as follows, which will be called every time a
> byte is received:
>
> ISR() {
>     if buffer not full
>         push byte into a FIFO buffer
>     start(processing_thread)
> }

Why start the receiving thread in the ISR?  That sounds more complicated
than necessary.  The logic above is also incomplete because you don't check
for and handle the case of the thread still running from last time.  And
then how can a thread reasonably handle a communication protocol if it is
run separately for each byte?  This makes no sense.

All the interrupt routine should do, if you use interrupts at all, is to get
the data and leave it lying around someplace the receiving thread can find
it, usually in a FIFO if the data is just a byte stream.  The receiving
thread runs all the time, but the call to get the next byte blocks until one
is available.  Now the receiving thread can do a computed GOTO on the opcode
byte and run a separate routine per command.  These command routines can get
parameter bytes as appropriate and don't jump back to the start of the main
command loop until their command has been processed.  This is essentially
using a state machine to process a state-dependent input stream, with the
PC being the state variable.  I find that sort of architecture works well
for handling asynchronous input streams.
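
A sketch of that receiving thread in C; get_byte() is assumed to block by
spinning on the FIFO and calling TASK_YIELD, and the command codes and
handlers are invented for illustration:

void receiving_thread(void)
{
    for (;;) {                          /* top of the main command loop */
        unsigned char opcode = get_byte();  /* blocks until a byte arrives */
        switch (opcode) {               /* the program counter is now the state */
        case CMD_SET: {                 /* hypothetical two-parameter command */
            unsigned char addr = get_byte();
            unsigned char val  = get_byte();
            do_set(addr, val);
            break;
        }
        case CMD_GET:                   /* hypothetical one-parameter command */
            send_byte(do_get(get_byte()));
            break;
        default:
            break;                      /* unknown opcode: ignore and resync */
        }
    }
}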

> I would imagine that using an interrupt (rather than consistently
> polling for data) is far more efficient.

Probably not, but efficiency isn't the issue.  Once you buy into the
separate receiving thread concept with blocking GET calls, those GET calls
can just as well check a hardware flag to see if there is new data as to
check the software flag that says there is at least one byte in the FIFO.
Things look the same to the receiving thread.  Note that while the GET call
appears to block to the receiving thread, "blocking" in this context really
means checking for new data in a tight loop that calls TASK_YIELD whenever
it doesn't find data.  This is how threads "block" in a cooperative
multi-tasking system.

I have done this sort of thing many times with PICs, and found it to work
well and the input processing to fall out rather naturally.

The issue with using interrupts for each piece of input data versus polling
the hardware by the receiving thread is not one of efficiency but of
responsiveness to the hardware.  Depending on what the system is doing,
there might be too much time between successive invocations of the receiving
thread such that the UART (or whatever your input hardware is) gets overrun.
Interrupt receiving followed by a FIFO is a bit more complicated and
therefore slightly less efficient, but allows averaging the processing power
over multiple input bytes to be applied to the stream.  Usually the problem
with the input overrunning the receiving thread is one of latency and
burstiness, not of overall cycles.  If you don't have enough cycles long
term, you're screwed anyway.

The FIFO allows you to go to lunch for several characters at a time, then
catch up by draining a whole bunch of characters from the FIFO in a burst
later.  Unless you are using a slow baud rate and know exactly what's going
on in the CPU, I would go with an interrupt-driven input FIFO.  Often just 8
bytes is enough to smooth out the burstiness of the thread scheduling, but
of course you'll have to do your own math to determine what you need.  For
example, if you think a thread may be held off for as long as 1 ms (quite a
long time actually) and input is via UART at 115.2 kbaud, then the FIFO needs
to be at least 12 bytes.  In that case 16 sounds like a nice round number.
16 bytes plus maybe another 2 for FIFO control isn't a big deal on most
PICs.
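
The sizing arithmetic written out as compile-time constants (10 bits per
character assumes one start and one stop bit):

#define BAUD        115200UL
#define BITS_PER_CH 10UL                /* start + 8 data + stop */
#define HOLDOFF_MS  1UL                 /* assumed worst-case thread latency */
/* bytes arriving during the holdoff: 115200 / 10 / 1000 = 11.52 -> 12 */
#define MIN_FIFO    ((BAUD * HOLDOFF_MS + BITS_PER_CH * 1000UL - 1UL) \
                     / (BITS_PER_CH * 1000UL))
#define FIFO_SIZE   16                  /* rounded up to a convenient size */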

If you are using CAN, you may well want to use the polled approach.  CAN
data comes in larger chunks called frames.  These frames have some header
information plus 0-8 data bytes.  The receiving thread waits for the next
whole frame to be received, processes it, then goes back to get the next
frame.  You could set up a FIFO of CAN frames, but so far I have not done
this in my CAN implementations.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2010\07\05@102057 by Olin Lathrop

Ruben Jönsson wrote:
> As others have said you can handle the frame decoding in the ISR
> itself and only call the processing_thread when you have a complete
> frame in order to increase efficiency.

I keep hearing this concept, but step back a bit folks, this is a small
system.  On a large system it may make sense to put things into buffer, then
pass around buffers of data.  However, eventually, somewhere, the bytes in
the buffer will get processed one by one.

The only reason the buffer concept is more efficient on a large system is
because of the overhead of getting into and out of the operating system and
kernel privilege level, so you wouldn't want to do this every byte.  You
wouldn't want to call Windows ReadFile for each byte from a TCP stream, for
example, because that would be many more cycles/byte and probably limit the
top speed you could receive.

However, on little systems with no OS this is the other way around.  The
layers are few and cheap, but memory is more of a premium, and it's
inefficient to "pass around" buffers.  Here is makes sense to have the
protocol stack (often just a single layer and a interrupt routine) handle a
byte at a time just like the application will eventually anyway.  If in rare
cases the application needs to deal with a group of bytes together (like
when converting a input ASCII number to binary for example), then it can
buffer the few bytes as it needs to when it needs to.

So on small systems, think byte at a time.  You may be surprised how much
simpler this makes both the low level input code and the app processing code
once you buy into this concept the whole way thru.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2010\07\05@102628 by Xiaofan Chen

On Mon, Jul 5, 2010 at 10:01 PM, Olin Lathrop <olin_piclist@embedinc.com> wrote:
> solarwind wrote:
>> I'm trying to design a multi master protocol for RS485 networks
>
> Why RS485?  That's so 1990s.  Unless that is something specific to the
> requirements (and it doesn't sound that way since you seem to be free to
> design the protocol), you should seriously look at CAN.

The only thing is that CAN is stuck at 1Mbps. With RS485, it can be
much faster. It can also be upgraded to M-LVDS to achieve much
faster speed. But I am not so sure about the OP's speed requirement.


--
Xiaofan http://sourceforge.net/projects/libusb-win32/

2010\07\05@110910 by Isaac Marino Bavaresco

On 5/7/2010 11:26, Xiaofan Chen wrote:
> On Mon, Jul 5, 2010 at 10:01 PM, Olin Lathrop <olin_piclist@embedinc.com> wrote:
>> solarwind wrote:
>>> I'm trying to design a multi master protocol for RS485 networks
>> Why RS485?  That's so 1990s.  Unless that is something specific to the
>> requirements (and it doesn't sound that way since you seem to be free to
>> design the protocol), you should seriously look at CAN.
> The only thing is that CAN is stuck at 1MHz. With RS485, it can be
> much faster. It can also be upgraded to M-LVDS to achieve much
> faster speed. But I am not so sure about the OP's speed requirement.


And don't forget the distance, usually up to 1200m @ 100kbps (others say
3000m, speed unknown).
It seems that 100m @ 1Mbps and 10m @ 10Mbps are feasible.


Isaac

2010\07\05@114208 by Isaac Marino Bavaresco

On 5/7/2010 11:01, Olin Lathrop wrote:
> Why start the receiving thread in the ISR?  That sounds more complicated
> than necessary.  The logic above is also incomplete because you don't check
> for and handle the case of the thread still running from last time.  And


Waking a thread that is already awake or even running is innocuous
(at least in FreeRTOS).


> then how can a thread reasonably handle a communication protocol if it is
> run separately for each byte?  This makes no sense.

The thread just sleeps when there are no more bytes available, waiting
for more.

{Quote hidden}

He is not using a state machine, he is running an RTOS (probably
FreeRTOS, which he has already used before).

The receiving routine just sits in a loop until a full packet is
received; the routine may sleep/yield as many times as it needs, then
perhaps return to a calling function with the packet in some buffer and
a status.
From the programmer's point of view, it is a simple loop and he can
abstract away the fact that the bytes may take some time to arrive; he just
calls a function "getbyte" and it returns either with the byte or a
timeout status. All waiting/sleeping/yielding is done inside this
"getbyte" function.

{Quote hidden}

The thread can also "sleep" for some time (which is usually the timeout);
then it won't waste even a single CPU cycle, and the ISR may wake
it when data is available.
It may wake either on timeout or because the ISR received some data,
and act accordingly.


{Quote hidden}

If the RTOS's tick is 1ms, the latency may be several ms depending on
the number of threads, unless the thread priority is raised above all
the others; then it runs at the beginning of each tick. In this case, a
"Sleep(1)" will allow the other threads to use the remainder of each tick.

The ISR may receive several bytes before the task gets a chance to run
(and will call "ResumeTask" this same number of times, but only the
first does anything). When the task effectively runs, there may be lots
of bytes to process; it then processes these bytes at once and sleeps
again (or processes the resulting packet).


Isaac


2010\07\05@120357 by Ruben Jönsson

> Ruben Jönsson wrote:
> > As others have said you can handle the frame decoding in the ISR
> > itself and only call the processing_thread when you have a complete
> > frame in order to increase efficiency.
>
> I keep hearing this concept, but step back a bit folks, this is a small
> system.  On a large system it may make sense to put things into buffer, then
> pass around buffers of data.  However, eventually, somewhere, the bytes in the
> buffer will get processed one by one.
>

But there are a lot of protocols which require that you have the whole frame
before you can do anything with the data. Everywhere you need to do a checksum
or CRC, for example. Before you know that the checksum/CRC is OK you can't do
anything other than put the data in a buffer. Many times you also have to
reply to a message/frame, and you don't know what to reply before you have
the requesting frame.

The checksumming and CRCing can easily be done on the fly by the ISR one byte
at a time though.
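
For example, a running CRC updated per byte in the ISR (the CRC-8
polynomial 0x07 is an arbitrary choice for illustration):

static unsigned char crc8_update(unsigned char crc, unsigned char b)
{
    unsigned char i;
    crc ^= b;
    for (i = 0; i < 8; i++)             /* bitwise CRC-8, polynomial 0x07 */
        crc = (crc & 0x80) ? (unsigned char)((crc << 1) ^ 0x07)
                           : (unsigned char)(crc << 1);
    return crc;
}

static unsigned char running_crc;

void uart_rx_isr(void)
{
    unsigned char b = UART_RX_REG;      /* hypothetical receive register */
    fifo_put(b);
    running_crc = crc8_update(running_crc, b);  /* no second pass needed */
}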

{Quote hidden}

On a small system it is usually one single array of memory that acts as the
buffer. You don't need to pass anything around since its location is always
known. Just set a flag to inform that the buffer needs to be
handled/interpreted at a higher level.

> protocol stack (often just a single layer and a interrupt routine) handle a byte
> at a time just like the application will eventually anyway.  If in rare cases
> the application needs to deal with a group of bytes together (like when
> converting a input ASCII number to binary for example), then it can buffer the
> few bytes as it needs to when it needs to.

But then you need the buffer anyway...

/Ruben
==============================
Ruben Jönsson
AB Liros Electronic
Box 9124, 200 39 Malmö, Sweden
TEL INT +46 40142078
FAX INT +46 40947388
ruben@pp.sbbs.se
==============================

2010\07\05@125255 by Olin Lathrop

Isaac Marino Bavaresco wrote:
> Awaking a thread that was already awoken or even running is innocuous
> (at least in FreeRTOS).

Perhaps, if you're already using something like that.  As I remember, his
pseudo code said something about "starting" a thread, not waking it.  Waking
it in this context makes some sense, but starting it doesn't.

{Quote hidden}

He didn't say that, but that has nothing to do with the concept of using a
state machine and the PC being the state variable.  Whether the thread spins
waiting on the next input by calling TASK_YIELD or goes to sleep and then is
awoken when there is more input is a minor low level detail invisible to the
app thread at that level.

> The thread can also "sleep" for some time (which is usually the timeout);
> then it won't waste even a single CPU cycle, and the ISR may wake
> it when data is available.
> It may wake either on timeout or because the ISR received some data,
> and act accordingly.

Yes, that is another way to do it, although more complex.  Whether that's
appropriate or not depends on the RTOS, the relative power of the CPU, etc.
Most likely either method is OK given that you've bought into an RTOS that
supports it and truly understand what is going on under the hood.

> If the RTOS's tick is 1ms, the latency may be several ms depending on
> the number of threads, unless the thread priority is raised above all
> the others, then it runs at the beginning of each tick.

You only get into this mess with an overly complicated (for most PIC
projects) RTOS in the first place.

The really simple cooperative multi-tasking system where each task calls
TASK_YIELD periodically works remarkably well for small embedded projects.
It's hardly a "RTOS", just a task switcher.  It may not cover all needs for
really complicated requirements, but is simple with low footprint.  There
are no "ticks", no thread priority, no sleeping and asynchronous awakening,
no mutexes, and of course none of the infrastructure to support any of this.

At first glance, it may seem wasteful to have a thread that has nothing to
do spin in a loop calling TASK_YIELD.  But think about it carefully and it's
actually kind of elegant.  The system is optimized for lightweight and fast
task swapping with no baggage.  The overhead to run a task and have it
quickly call TASK_YIELD is very little time, much much less than a
millisecond in the normal case.  Also keep in mind that the number of tasks
is small.  3 is a typical number for such small embedded systems with
dedicated purposes.

Most of the time nothing is going on and all the tasks are being cycled thru
rapidly because they are all calling TASK_YIELD quickly.  Then something
unusual occurs like a input byte is available in the input command stream
FIFO.  The thread spinning in a loop waiting for just that event will
discover that event quickly, handle it, and get back to calling TASK_YIELD.

Most purely CPU bound operations that don't require any new input complete
so quickly that a task can perform the whole operation before going back to
calling TASK_YIELD without undue latency imposed on the other threads.  And
even if something came up for thread 2 to handle while thread 1 was doing
some CPU bound processing, it will run quickly after thread 1 is done.  In
the end the CPU actually gets used quite efficiently.
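
A minimal sketch of such a task switcher: real ones swap stacks at
TASK_YIELD, but this run-to-completion version using function pointers
shows the round-robin control flow (the task names are hypothetical):

typedef void (*task_fn)(void);

/* Each task does a little work, or returns immediately (its "yield")
   when it finds nothing to do. */
static const task_fn tasks[] = { uart_task, control_task, display_task };
#define NTASKS (sizeof tasks / sizeof tasks[0])

void scheduler(void)                    /* never returns */
{
    for (;;)                            /* cycle the ready list forever */
    {
        unsigned i;
        for (i = 0; i < NTASKS; i++)
            tasks[i]();                 /* runs until the task yields (returns) */
    }
}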

Again, such a simple system can't solve all problems, but it does seem to
quite nicely solve the vast majority of real world jobs PICs are asked to
perform.  Keep it simple.  Most of the uses I see of fancy RTOSs are just
overkill because someone thought using an RTOS was cool or didn't think about
it real hard.  The underlying complexity doesn't come for free, and you get
into issues of juggling time slices, priorities, mutexes, and the like that
a simpler scheme avoids.  So while a fancy RTOS may appear to simplify
things at first glance, you actually have to know a lot more about what
you're doing to not get into trouble.  Unfortunately, fancy RTOSs tend to
hype their benefits to exactly those people who shouldn't be using them.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2010\07\05@130647 by Olin Lathrop

Ruben Jönsson wrote:
> But there are a lot of protocols which require that you have the
> whole frame
> before you can do anything with the data.

We're talking about a protocol that we can develop to fit well with our
embedded system.

> Everywhere you need to do a
> checksum
> or CRC for example.

Sometimes you're stuck with that.  Even then though, you likely don't need
to buffer the whole packet, only the payload part.

> Before you know that the checksum/crc is Ok you
> can't do
> anything than to put it in a buffer.

Sometimes, sometimes not.  Sometimes you can process the data covered by the
checksum but not act on it until it is verified by the checksum.  Sometimes
that requires holding processed data which is a lot like writing it to a
buffer, but that processed data may be smaller than the raw data.  Or the
raw data may cause different branches to be taken in parsing logic rather
than being real data that must be saved until it's known to be OK to use.

I'm not saying this always works, but you see things differently when you at
least try to look at things a byte at a time instead of just knee-jerk
putting it into a buffer.

> The checksumming and CRCing can easily be done on the fly by the ISR
> one byte at a time though.

Not necessarily so easily.  The ISR is very low level and it may be
inconvenient to pass it enough information from the higher levels so that it
knows the context of the data coming in to perform such processing.  The ISR
knowing too much about what is going on is a kludge alert red flag.
Sometimes you need to do that, but you'd better be real sure you have a good
reason.

> On a small system it is usually one single array of memory that acts
> as the buffer.

Sure, but note that this then prevents overlapped processing of one buffer while
receiving the next.

Again, all I'm saying is step back and think about byte at a time.  It's not
always the right answer, but more often than it gets used I think.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.
