PICList Thread
'[PIC] reinventing the flat tire RE: [PIC] Paging a'
2005\06\24@115636 by olin piclist

I changed this back to [PIC] tag since it does seem to be relevant to PICs
in useful ways.

phil B wrote:
> This is a somewhat false economy.  When the program
> remains small, it works well but when the program
> spills into a second bank or page or what ever, the
> cost of inter bank/page/whatever accesses is fairly
> high in program size, execution speed and especially
> complexity.

I don't know what you call "fairly high" of course, but I don't think
banking/paging adds that much to my 14-bit core programs that exceed 2K
words and use multiple banks.

Let's start with banking.  Most of the 14 bit banked PICs have global memory
in the last 16 locations of each bank.  I find that reserving these mostly
for temporary scratch values that get banged around a lot, and using banked
memory for persistent state works well.  My REG0 - REG12 "general registers"
are in this memory, and can therefore always be accessed without regard to
the bank setting.  That, together with good modularization and
semi-automation of bank setting with macros (see DBANKIF and related macros
in STD.INS.ASPIC at http://www.embedinc.com) make banking quite manageable
in my opinion.
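For readers who haven't worked with the 14-bit core, the scheme above can be modeled in a few lines of Python. This is a simplified sketch (real parts also mirror certain SFRs, and the function name is mine, not Microchip's): the effective data address glues the RP1:RP0 bank bits from STATUS onto the 7-bit address field in the instruction, except that the last 16 locations of each bank all resolve to the same physical cells, which is what makes the REG0 - REG12 scheme bank-independent.

```python
# Model of 14-bit core banked data addressing (e.g. 16F876/877).
# Simplification: real devices also mirror some SFRs across banks;
# here we only model the shared GPR window at 0x70-0x7F.

def effective_addr(rp1, rp0, f):
    """Full data address from the STATUS bank bits and the 7-bit
    address field of the instruction."""
    assert 0 <= f <= 0x7F
    bank = (rp1 << 1) | rp0
    if 0x70 <= f <= 0x7F:
        # Shared memory: the same cell regardless of bank setting,
        # so no BSF/BCF on RP0/RP1 is ever needed to reach it.
        return f
    return (bank << 7) | f
```

So `effective_addr(0, 1, 0x20)` lands in bank 1 at 0xA0, but `effective_addr(1, 1, 0x75)` still resolves to 0x75 — the bank bits are irrelevant for the shared window.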

The cost in program size is generally not that great.  I can't offer any
figures other than a general feeling right now.  An interesting test would
be to take a nearly full 8K 16F project and count the number of BSF/BCF
instructions to RP0/RP1 in the HEX file.  I haven't done this and I'm not
sure when I'll get around to it, but it would be interesting to have a
quantitative value.  I can supply a HEX file for a pretty crammed 16F876
project if anyone else wants to try this.  I'm guessing the overhead is
around 5%, and would be really surprised if it exceeded 10%, but of course
this is just speculation.
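The proposed experiment is easy to script. Here's a sketch in Python that counts the four BSF/BCF-on-RP0/RP1 opcodes in an Intel HEX file for a 14-bit core part. Caveats: it skips checksum verification, and it doesn't filter out config or EEPROM records at high addresses, so treat the result as approximate. The function and table names are mine.

```python
# 14-bit core encodings: BCF = 0x1000, BSF = 0x1400, bit number in
# bits <9:7>, register address in bits <6:0>.  STATUS = 0x03,
# RP0 = bit 5, RP1 = bit 6.
BANK_OPS = {
    0x1283: "BCF STATUS,RP0",
    0x1683: "BSF STATUS,RP0",
    0x1303: "BCF STATUS,RP1",
    0x1703: "BSF STATUS,RP1",
}

def count_bank_ops(hex_lines):
    """Return (bank_op_count, total_words) over Intel HEX data records.
    Program words are stored little-endian, two bytes per word."""
    bank, total = 0, 0
    for line in hex_lines:
        line = line.strip()
        if not line.startswith(":"):
            continue
        nbytes = int(line[1:3], 16)
        rectype = int(line[7:9], 16)
        if rectype != 0x00:          # only data records hold code
            continue
        data = bytes.fromhex(line[9:9 + 2 * nbytes])
        for i in range(0, len(data) - 1, 2):
            word = data[i] | (data[i + 1] << 8)
            total += 1
            if word in BANK_OPS:
                bank += 1
    return bank, total
```

Dividing `bank` by `total` gives the overhead fraction being speculated about (an upper-bound flavor of it, since some of those instructions would be needed anyway).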

Execution speed would be pretty much related to the increase in program size
as a first pass estimate.  It is probably a little less because the speed
critical accesses in loops and the like are likely to be optimized to
unbanked memory or with the bank setting outside the loop.

As for complexity, I totally disagree that it is "fairly high".  This can be
managed "fairly painlessly" in my opinion with good programming discipline
and facilities like the DBANKIF macro.

As for paging, I think this is even less of an issue.  Just comparing a
paged to an unpaged call can lead to the impression of enormous overhead
(up to 5 instructions versus 1).  However, proper modular design puts this
overhead at the boundaries between subsystems where the links are fewer and
the speed dependence is often not as great.  2K words is a large chunk of
logic you can stay within without dealing with paging.  As for complexity, I
don't see the argument at all.  With macros like GCALL and GJUMP, the
complexity presented to the programmer is virtually none.  Writing GCALL
instead of MCALL (see STD.INS.ASPIC again) is not complex by any common
definition.  About the only complexity is remembering to do it, but the
tools will find local calls that should be global calls as undefined symbols
or missing externals.
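For context on where the "up to 5 instructions versus 1" comes from: a 14-bit core CALL only carries an 11-bit address, and PC<12:11> come from PCLATH<4:3>, so a cross-page call must set up PCLATH first (and typically restore it afterward). A sketch of the target computation (function name mine):

```python
# How a 14-bit core CALL destination is formed: 11 bits from the
# instruction word, and the top two bits (PC<12:11>) from PCLATH<4:3>.
# Setting those PCLATH bits before the call, and restoring them after,
# is what can push a cross-page call to 5 instructions.

def call_target(pclath, k11):
    """Effective CALL destination from PCLATH and the 11-bit literal."""
    assert 0 <= k11 <= 0x7FF
    return (((pclath >> 3) & 0x3) << 11) | k11
```

For example, `call_target(0x00, 0x123)` is 0x0123 (page 0) while `call_target(0x08, 0x123)` is 0x0923 (page 1) — same instruction word, different PCLATH.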

> I sure do fuss with rp0 and rp1 a LOT,

You really shouldn't be fussing with it directly at all.  See DBANKIF.

> I bet an incredible amount of effort
> is spent on this issue, especially tracking down
> elusive bugs.

Banking bugs are frankly quite rare in my code, and I don't remember a
paging issue in years.  Yes, it is possible to write undisciplined code
where banking is a constant and error-prone issue.  So don't do that.  It's
not fair to indict an architecture because it's easy to write bad code.  You
should look at how easy it is to write good code with proper tools and
discipline.

> This false
> economy gets relearned every few years in the computer
> industry since the introduction of the first
> commercial computer.

<rant>This kind of attitude pisses me off.  Someone sitting on the sidelines
sees something that appears stupid and immediately declares it stupid
without further investigation.  If multiple experts have repeatedly and
independently come up with similar answers, then just maybe they know
something the casual observer doesn't.  This doesn't guarantee it isn't in
fact stupid, but one should not conclude that without examination of all the
tradeoffs and constraints that applied.</rant>

In this case you are only berating the negative aspects of address
segmentation without examining its advantages.  Everything is a tradeoff,
which is a corollary to "there is no free lunch".

> The manufacturer usually
> stresses upward compatibility and thus doesn't bite
> the bullet in making the architecture tuned for larger
> programs.

Again you are making an implicit assumption that changing the architecture
is inherently better without examining the tradeoffs.

In Microchip's case they do have multiple architectures, so I'm not sure
what you are complaining about.  You can make your own tradeoff with size,
cost, speed, segmentation of memory, and a whole host of other parameters.

> Intel got smart and junked
> the 286, introducing the flat and linear 80386.

Actually the 386 is an extension of the same architecture.  Segments were
expanded from 16 bit addresses to 32 bit addresses, making them infinitely
big for the usage at the time.  In a few years when 48 or 64 bit
architectures are common, are you going to complain that Intel didn't "bite
the bullet" and go to 64 bit segments in the mid 1980s when the 386 came
out?

> In the end,
> memory size was not at all a limiting cost so all that
> effort turned out to be wasted.

The key phrase is "in the end".  Back when the 8086 came out a few Kbytes
cost more than the processor.  When the 80386 came out a few Mbytes cost
more than the processor.  Now we're up to 1 Gbyte costing about the same as
the
processor.  It shouldn't be surprising that a 100,000:1 change in the
processor to memory cost ratio would lead to different architectures.

How much extra would you pay for a processor now that can address memory you
won't be able to get or afford for another 10 years?


*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com

2005\06\24@121231 by Dave VanHorn


>
>Let's start with banking.  Most of the 14 bit banked PICs have global memory
>in the last 16 locations of each bank.  I find that reserving these mostly
>for temporary scratch values that get banged around a lot, and using banked
>memory for persistent state works well.  My REG0 - REG12 "general registers"
>are in this memory, and can therefore always be accessed without regard to
>the bank setting.  That, together with good modularization and
>semi-automation of bank setting with macros (see DBANKIF and related macros
>in STD.INS.ASPIC at http://www.embedinc.com) make banking quite manageable
>in my opinion.

In the AVR, you get this with 32 general registers, and a completely
flat ram space.
You CAN dedicate registers as you desire, or not. Tables can start
anywhere, and be as long as needed.
The penalty we pay is 16 bit instructions vs shorter instructions.
It's a plus for the chip maker if they can work in smaller widths,
but that's really a non-issue for me the programmer.
Since the instructions execute in so few clock cycles (most 1 cycle,
some 2 cycle) and I don't have to spend any time or effort on
paging/banking issues, this is my personal favorite.


The Z8 takes this to the extreme, where ALL ram is registers, and any
pair of registers can be a 16 bit pointer.
AFAIK, they still have some significant restrictions in RAM size
though, because of this, and still have Clock/12 internal division.

Compromises.. Always you have to give up something somewhere.


IMHO, the actual advantages of paging/banking are minimal, and solely
an internal machine thing.
The rest is ways to make you work with the banking, since you have no
other choice in the pic line.

The tradeoffs of the internal machine issues show up in ways that are
more difficult to compare, like price per 10,000 parts, code
development time, HLL tool efficiency, and overall execution speed.

2005\06\24@133020 by John Ferrell

Bravo!
"Everything is a tradeoff, which is a corollary to there is no free lunch."
is an understatement.

I don't especially like bank switching or even Base + Displacement
addressing but when properly used it sure does speed up the hardware as
opposed to flat addressing. Compilers & operating systems lighten the load
on the programmer but cost in terms of efficiency. Paging and program
overlays(shudder) are even higher overhead. The ultimate solution is and
always has been a clever programmer! Just to confuse the issue, there exists
a mindset that says better programmers write more lines of code...

IBM used to have a processor that was used internally in control units that
allowed context switching between tasks by setting a single byte. The switch
bought a dedicated address space, registers and all. No save restore of
anything needed. Really neat for some special apps, but not for general use.




John Ferrell
http://DixieNC.US

----- Original Message -----
From: "Olin Lathrop" <olin_piclist@embedinc.com>
To: "Microcontroller discussion list - Public." <piclist@mit.edu>
Sent: Friday, June 24, 2005 11:57 AM
Subject: Re: [PIC] reinventing the flat tire RE: [PIC] Paging and Banking.
Any benefits?


>I changed this back to [PIC] tag since it does seem to be relevant to PICs
> in useful ways.
>

2005\06\24@135235 by Dave VanHorn


>
>IBM used to have a processor that was used internally in control
>units that allowed context switching between tasks by setting a
>single byte. The switch bought a dedicated address space, registers
>and all. No save restore of anything needed. Really neat for some
>special apps, but not for general use.

Zilog's Z-80 had the alternate register set.
In embedded systems, we dedicated those to the ISRs, to minimize latency.

2005\06\24@145315 by Harold Hallikainen


> IBM used to have a processor that was used internally in control units
> that allowed context switching between tasks by setting a single byte.
> The switch bought a dedicated address space, registers and all. No save
> restore of anything needed. Really neat for some special apps, but not
> for general use.

Not quite the same thing, but I once wrote some multitasking code where
there was a separate stack for each task (holding local variables, return
addresses, etc.). When a task was waiting for I/O, it'd call NextTask() in
the loop. This would push the return address on the current stack, move
the stack pointer to the next task, then do a return, dropping back into
the next task where we left off. This was not PIC, however.
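The stack-swapping trick above is machine-specific, but the cooperative pattern is easy to sketch in a high-level language. In this Python analog, generators stand in for the per-task stacks and yielding plays the role of NextTask(); names other than NextTask are mine:

```python
from collections import deque

def next_task():
    """Yield point: the cooperative analog of NextTask()."""
    yield

def task(name, steps, log):
    """A toy task that records its progress, yielding between steps."""
    for i in range(steps):
        log.append((name, i))
        yield from next_task()   # give the other tasks a turn

def run(tasks):
    """Round-robin scheduler: resume each task where it left off."""
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)
            ready.append(t)      # still running: back of the queue
        except StopIteration:
            pass                 # task finished

log = []
run([task("A", 2, log), task("B", 2, log)])
# log now shows the tasks interleaved: A, B, A, B
```

The generator's suspended frame plays the part of the saved stack: calling `next()` "does a return, dropping back into the next task where we left off," just as described.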

Harold

--
FCC Rules Updated Daily at http://www.hallikainen.com
