PICList Thread
'[PIC] automatic code banking(in an assembler or li'
2005\12\26@222823 by andrew kelley

Okay, okay, it may not be quite PIC, but it is targeted at a PIC.

How would automatic, intelligent banking for calls and gotos work on
multiple-code-page PICs (in a linker or assembler)? I mean, if you were to
have a piece of code at a page boundary that would be pushed onto the next
page by the addition of banking instructions, how would the assembler know
what to do for maximum efficiency?

--
Thanks,
andrew
(designing a compiler backend for a PIC16 target)

2005\12\27@003535 by Andrew Warren

andrew kelley wrote:

> How would automatic, intelligent banking for calls and gotos work on
> multiple-code-page PICs (in a linker or assembler)? I mean, if
> you were to have a piece of code at a page boundary that would be
> pushed onto the next page by the addition of banking
> instructions, how would the assembler know what to do for maximum
> efficiency?

Andrew:

The piclist.com list archive seems to be offline at the moment, so
I'll just copy an email I sent a few years ago...

------- Forwarded Message Follows ------

Steve Hardy wrote:

{Quote hidden}

Steve:

This is slightly off-topic, but there's a really simple algorithm
that solves the "branch/jump optimization" problem in linear time:  

   1.  Start with all branches set to the shortest size.

   2.  Put all the branches on a stack.

   3.  Pull a branch off the stack; if it's out of range, increase
   it to the next-larger size (some processors have 3 or more branch
   sizes) and put all the branches that SPAN the just-increased
   branch on the stack.

   4.  Repeat step 3 until everything's stable.

This algorithm was first shown to me by Dr. Cliff Click, who works in
Motorola's PowerPC compiler group.  As I said, it runs in linear time
and is guaranteed not to get stuck in an infinite loop.  
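A minimal sketch of that worklist loop, in Python (the Branch record, the
size table and the distance calculation here are just illustrative stand-ins,
not anything from a real assembler):

    from collections import namedtuple

    # Illustrative branch record: 'at' is the branch's own address,
    # 'target' is where it jumps to.  Addresses are in words.
    Branch = namedtuple("Branch", "at target")

    # Illustrative encodings: (length in words, maximum reach in words).
    SIZES = [(1, 127), (2, 2047), (3, 1 << 20)]

    def relax(branches):
        """Grow each branch to the smallest encoding that reaches its target."""
        size = {b: 0 for b in branches}      # 1. start every branch at the shortest size
        stack = list(branches)               # 2. put all the branches on the stack
        while stack:                         # 4. repeat until the stack is empty
            b = stack.pop()                  # 3. pull one off and check its range
            if abs(distance(b, branches, size)) > SIZES[size[b]][1] \
                    and size[b] + 1 < len(SIZES):
                size[b] += 1                 # out of range: next-larger encoding
                # re-check every branch that SPANS the one that just grew
                # (b spans itself, so it gets re-checked as well)
                stack.extend(o for o in branches
                             if min(o.at, o.target) <= b.at <= max(o.at, o.target))
        return size

    def distance(b, branches, size):
        # Illustrative: the gap between b and its target, widened by the extra
        # words of every branch between them that has already been grown.
        extra = sum(SIZES[size[o]][0] - SIZES[0][0]
                    for o in branches
                    if min(b.at, b.target) < o.at < max(b.at, b.target))
        return (b.target - b.at) + (extra if b.target > b.at else -extra)

Sizes only ever increase and are bounded, which is why the loop always
terminates.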

I can send you the rigorous mathematical proof if you're
interested... It depends on the fact that monotonic functions over
complete lattices have a unique minimal fixed point.  

-Andy  

------- End of Forwarded Message ------

=== Andrew Warren

2005\12\27@062224 by sergio masci


On Mon, 26 Dec 2005, andrew kelley wrote:

> Okay, okay, it may not be quite PIC, but it is targeted at a PIC.
>
> How would automatic, intelligent banking for calls and gotos work on
> multiple-code-page PICs (in a linker or assembler)? I mean, if you were to
> have a piece of code at a page boundary that would be pushed onto the next
> page by the addition of banking instructions, how would the assembler know
> what to do for maximum efficiency?

For maximum efficiency the assembler would need to profile the generated code
and determine where page select instructions NEED to be inserted.

This is effectively a simplified simulation of the generated code, looking
for all execution threads through each instruction. You don't need to know
the exact state of the PIC while you are simulating a path, just certain
components of it. When you have finished building the list of threads
through each instruction, you can determine whether any of the threads that
go through a call or goto have inconsistent values in PCLATH; if they do,
then you need to insert page select instructions before the call or goto.

Something to look out for is a call or goto preceded immediately by a
skip instruction such as btfss,
e.g.

        btfss        STATUS, C
        goto        lab1

In this situation the page select instructions need to be inserted before
the skip instruction, not between the skip and the goto (otherwise the skip
would step over the page select instead of the goto and change the
program's behaviour).
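A very stripped-down sketch of that kind of profiling pass, in Python (the
instruction layout, the per-instruction PCLATH sets and the skip list are all
illustrative; this is not how XCASM actually represents things):

    PAGE_BITS = 11                      # PIC16 code pages are 2K words
    SKIPS = ("btfss", "btfsc", "decfsz", "incfsz")

    def page(addr):
        return addr >> PAGE_BITS

    def pagesel_fixups(instrs, addr_of, pclath_in):
        """Find where page select instructions NEED to be inserted.

        instrs    - list of (opcode, operand) tuples after placement
        addr_of   - label -> word address
        pclath_in - instruction index -> set of page values PCLATH may hold
                    on entry, built by walking every execution thread
                    through each instruction (that walk is omitted here)
        """
        fixups = []
        for i, (opcode, operand) in enumerate(instrs):
            if opcode not in ("call", "goto"):
                continue
            target_page = page(addr_of[operand])
            if pclath_in[i] != {target_page}:   # some thread arrives with the wrong page
                at = i
                # If the jump sits in the shadow of a skip, the page select
                # must go in front of the skip, not between skip and jump.
                if i > 0 and instrs[i - 1][0] in SKIPS:
                    at = i - 1
                fixups.append((at, target_page))
        return fixups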

You may also want to take into account any "software return stack" that
you implement in your compiler. The XCASM assembler knows about the XCSB
long call mechanism and tracks long calls as well as normal hardware
(opcode) calls.

e.g.
       movlw        (fromhere >> 8) & 0xff
       movwf        retaddr+1
       movlw        fromhere & 0xff
       movwf        retaddr+0
       goto        func1                        ; long call, return addr is
                                       ; stored in retaddr
fromhere

       ...


func1        ...

       movf        retaddr+1,w
       movwf        PCLATH
       movf        retaddr+0,w
       movwf        PCL                        ; long call return

The XCASM assembler does what you are looking for and also manages bank
select for RAM access.

XCASM uses multiple passes (more than two) to achieve this. It also
profiles the code after each pass to determine whether the inserted
instructions have caused a problem. The only tricky part is matching
generated instructions in each pass, given that select instructions may be
inserted or deleted between passes.
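The outer loop of such a multi-pass scheme can be quite small; here is a
sketch in Python (assemble_once, pagesel_fixups and insert_page_selects are
placeholders for whatever your own backend does per pass):

    def assemble(source, max_passes=10):
        """Repeat place-then-profile until no new select instructions are needed.

        Each inserted instruction shifts the addresses of everything after it,
        which can create (or remove) further fix-ups, hence the loop.
        """
        for _ in range(max_passes):
            instrs, addr_of, pclath_in = assemble_once(source)      # place the code
            fixups = pagesel_fixups(instrs, addr_of, pclath_in)     # profile it
            if not fixups:
                return instrs                                       # stable: done
            source = insert_page_selects(source, fixups)            # patch and retry
        raise RuntimeError("page selects did not converge after %d passes" % max_passes)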

BTW when you embed inline assembler in XCSB source, the assembler looks
after RAM bank and code page select for you. You might consider doing this
as well.

Regards
Sergio Masci

http://www.xcprod.com/titan/XCSB - optimising PIC compiler
FREE for personal non-commercial use




2005\12\27@065657 by olin piclist

andrew kelley wrote:
> How would automatic, intelligent banking for calls and gotos work on
> multiple-code-page PICs (in a linker or assembler)? I mean, if you
> were to have a piece of code at a page boundary that would be pushed onto
> the next page by the addition of banking instructions, how would the
> assembler know what to do for maximum efficiency?

The assembler doesn't know where a piece of code will end up and therefore
can't adjust for it.  However, your problem isn't an issue in reality, since
the rest of your code likely can't tolerate page boundaries at arbitrary
locations anyway.  The best way to deal with this is to make each page a
separate linker memory region.  That way you are guaranteed that any code
section will be placed entirely on a single page.  Then keep PCLATH set to
the current code page, which means you can do local CALLs and GOTOs within
the same code section with single instructions.  PCLATH must be set right
before an external call and restored right after.
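In a compiler backend, that external-call sequence could be wrapped in a
small helper. Here is a sketch in Python that emits MPASM-style text (the
emit callback and the label names are assumptions of this sketch; pagesel is
the standard MPASM directive that loads PCLATH for its operand's page):

    def emit_far_call(emit, target, local_label):
        """Call a routine placed in another linker page.

        PCLATH is pointed at the callee's page right before the CALL and put
        back to the caller's page right after, so the surrounding local CALLs
        and GOTOs stay single instructions.
        """
        emit("        pagesel %s" % target)        # PCLATH -> callee's page
        emit("        call    %s" % target)
        emit("        pagesel %s" % local_label)   # PCLATH -> back to this page

    # e.g. lines = []; emit_far_call(lines.append, "uart_send", "main_loop")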


******************************************************************
Embed Inc, Littleton Massachusetts, (978) 742-9014.  #1 PIC
consultant in 2004 program year.  http://www.embedinc.com/products

2005\12\27@133943 by Dave Tweed
"Andrew Warren" <.....fastfwdKILLspamspam.....ix.netcom.com>
> This is slightly off-topic, but there's a really simple algorithm
> that solves the "branch/jump optimization" problem in linear time:  
>
>     1.  Start with all branches set to the shortest size.
>
>     2.  Put all the branches on a stack.
>
>     3.  Pull a branch off the stack; if it's out of range, increase
>     it to the next-larger size (some processors have 3 or more branch
>     sizes) and put all the branches that SPAN the just-increased
>     branch on the stack.
>
>     4.  Repeat step 3 until everything's stable.
>
> This algorithm was first shown to me by Dr. Cliff Click, who works in
> Motorola's PowerPC compiler group.  As I said, it runs in linear time
> and is guaranteed not to get stuck in an infinite loop.  

Driving it even further off the original topic (since this really has
nothing to do with the code paging problem on a PIC), it would seem that
Dr. Cliff needs to take another look.

In the worst case, this algorithm is O(N**2) on the number of branches,
since each iteration of step 3 has a hidden O(N) process embedded in it;
to wit: "... put all the branches that ..."

> I can send you the rigorous mathematical proof if you're
> interested... It depends on the fact that monotonic functions
> over  complete lattices have a unique minimal fixed point.  

Sounds impressive, but I'm not sure what it means or that it's even
relevant to the order-of-complexity question. It would seem that you
haven't given us a sufficiently detailed description of the algorithm.

-- Dave Tweed

2005\12\27@141554 by Wouter van Ooijen

> > This is slightly off-topic, but there's a really simple algorithm
> > that solves the "branch/jump optimization" problem in linear time:  

Besides Dave's remark about the hidden O(n) step, which makes this
algorithm O(n^2): this algorithm is for fixed-maximum-offset branches.
Paged branches as used on PICs are much more complex. And furthermore,
algorithms like this only deal with amending the branches as required,
not with optimal placement of the code fragments in memory to avoid
needing such amendments.

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu

