PICList Thread
'Interpreter Engine'
1998\12\03@142940 by Andy Kunz

I posted this yesterday, but no takers.  Maybe I should have said something
to get your ire up so you could flame me?  <G>

I need an interpreter for a 14-bit core which can be incorporated as part
of a time-critical program which already exists.

My main program bit-bangs data at 62.5 kbps, but on occasion I need to be
able to go off and execute an EEPROM-resident script, and it can never get
stuck in the script (that is, it needs to return to the caller after every
"instruction", even if it hasn't completed a logical operation).

A compiled BASIC or FORTH engine would be just the ticket, but I need one
which is smart enough to realize it's not the master of the chip, and which
can be linked into a HiTech C program, preferably.  (Actually, C source
would be optimal.)
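
Roughly the calling shape I'm after, as a sketch (every function named
here is hypothetical):

    extern void service_bitbang(void);   /* the 62.5 kbps bit-banger  */
    extern int  script_active(void);     /* EEPROM script pending?    */
    extern void script_step(void);       /* runs ONE "instruction"    */

    void main_loop(void)
    {
        for (;;) {
            service_bitbang();           /* hard real-time work first */
            if (script_active())
                script_step();           /* one step, then back here  */
        }
    }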

Something along the lines of Jack Crenshaw's Sweet that he's been
developing in ESP (Embedded Systems Programming).

Anybody have any experience with anything available?

Thanks!

Andy

==================================================================
Andy Kunz - Statistical Research, Inc. - Westfield, New Jersey USA
==================================================================

1998\12\03@164729 by paulb

Andy Kunz wrote:

> I posted this yesterday, but no takers.  Maybe I should have said
> something to get your ire up so you could flame me?  <G>

> A compiled BASIC or FORTH engine would be just the ticket, but I need
> one which is smart enough to realize it's not the master of the chip,
> and which can be linked into a HiTech C program, preferably.

 And you're surprised?

 The interpretive language is something many of us would like to do,
but few have done, and those who have (notably Parallax and Antti Lukats;
see http://www.dontronics.com/bs4.html ) don't fancy releasing the
code.  (Sigh!  As usual!)

 You ask in addition that it be embedded in "C" and run as a background
task.  And you bemoan the absence of a stampede?  I'll bet if you wrote
it GNU and offered it, *then* you'd see action! ;-)
--
 Cheers,
       Paul B.

1998\12\03@165354 by William Chops Westfield

A "nanoForth" would be just about right, depending on the complexity of the
"scripts" you have.  The Forth-like "execute" loop would only execute one
word per invocation, and then let your main loop do its stuff.  Since you'd
have control of what words were available, you could guarantee that they
are all short enough...  (Um, I think this is pretty much a standard Forth
with a modified main interpreter loop - probably ANY interpreter would work
if you have source access to the main interpreter loop...)
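
As a C sketch of that modified loop (all names invented for illustration;
it assumes byte tokens indexing a table of short primitives):

    /* One primitive per token; each primitive is a short C function.  */
    typedef void (*word_fn)(void);
    extern const word_fn dict[];                   /* token -> primitive */
    extern unsigned char get_script_byte(unsigned addr);  /* e.g. EEPROM */

    static unsigned ip;                            /* virtual instr. ptr */

    /* Execute exactly one word, then return to the caller's main loop. */
    void forth_step(void)
    {
        unsigned char token = get_script_byte(ip++);
        dict[token]();
    }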

I don't know of any PRODUCTS that allow this, though.

BillW

1998\12\04@042936 by Dr. Imre Bartfai

Hi,

If I had this problem, I'd try the C-FLEA from

http://www.dunfield.com

It is a C compiler for a virtual processor, which is also implemented for
the PIC. The compiled code can be placed in any type of memory. I have not
tried it myself; however, I'd be very glad if you shared your experiences
in case you take it up.

Imre

1998\12\04@054949 by Russell McMahon

picon face
They say that it steam-engines when it's steam-engine time.
I don't have a head of steam up yet, but I was thinking recently that
a dumbish interpreter would be a "nice" way of extending the memory
capacity of low-code-size CPUs. The only excuse I can think of for
doing this is that the CPU + EEPROM cost is much lower than an
alternative-capability CPU. The new flash PICs will hopefully remove
this justification.

I was thinking about what the minimum useful instruction set was that
could be executed by such a machine. Long ago there was an
interpreter called "CHIPOS" which ran initially, AFAIR, on the COSMAC
and later on the 6802 and no doubt other chips. It used 2-byte
tokens and had a virtual machine with a 4096-byte address space.

I have a 15-page article on the language (BYTE (!), December 1978, pp.
108-122: "An Easy Programming System" - Joseph Weisbecker, NJ). He
claims all sorts of advantages for this type of interpretive
language. Says it typically takes 512 bytes for a full language and
resultant code is 6 times as dense as BASIC (his claims). As I
recall, CHIPOS could achieve some remarkable results considering its
rudimentary nature. Something like this may be not too hard to write
(he says a new language takes him about a week :-)).
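
For reference, Weisbecker's system is the one usually known as CHIP-8,
and its 2-byte tokens decode along roughly these lines (a C sketch from
memory - the field layout is as I recall it, so check the article):

    extern unsigned char mem[4096];    /* the 4096-byte address space */
    static unsigned char v[16];        /* virtual machine registers   */
    static unsigned pc;                /* virtual program counter     */

    void chip8_step(void)
    {
        unsigned char hi = mem[pc], lo = mem[pc + 1];
        unsigned char op = hi >> 4;                /* 16 major operations */
        unsigned char vx = hi & 0x0F;              /* register field      */
        unsigned addr = ((unsigned)(hi & 0x0F) << 8) | lo;  /* 12 bits    */
        pc += 2;
        switch (op) {
        case 0x1: pc = addr;  break;   /* 1NNN: jump within the space */
        case 0x6: v[vx] = lo; break;   /* 6XNN: load constant into VX */
        /* ... one small case per major operation ... */
        }
    }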


   Russell McMahon

1998\12\04@075419 by Ivan Cenov

Hi,

You may want to make a procedure - something like
poller() - which is not part of the interpreter but is in the main program.
This poller will poll for the bits that arrive, probably store them in
a buffer, and set a flag that says they are available.
Normally the poller is called in the main loop of the main program.
After that, another part of the main program will handle the bits.
When in the interpreter, the implementation routines of the interpreter
commands must call poller() too. How frequently - as needed and as
possible.
The interpreter may execute one logical operation per call, etc.
And you may try to implement something like a cache of the EEPROM in RAM.
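
A rough sketch of that shape in C (every name here is invented for
illustration):

    static volatile unsigned char bits_available;

    extern int  bit_ready(void);        /* has a bit arrived?         */
    extern int  read_bit(void);
    extern void buffer_put(int bit);
    extern int  still_delaying(void);

    /* Lives in the main program, not in the interpreter. */
    void poller(void)
    {
        if (bit_ready()) {
            buffer_put(read_bit());     /* stash the arriving bit     */
            bits_available = 1;         /* flag it for the main loop  */
        }
    }

    /* An interpreter command implementation keeps calling it too. */
    void cmd_delay(void)
    {
        while (still_delaying())
            poller();                   /* bit stream stays serviced  */
    }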

Hope this helps.

Ivan Cenov
okto7@botev.ttm.bg ICQ: 17221948
www.geocities.com/SiliconValley/Network/9276/
Do what you have to do, and let happen what may.

1998\12\04@092330 by Andy Kunz

>> A compiled BASIC or FORTH engine would be just the ticket, but I need
>> one which is smart enough to realize it's not the master of the chip,
>> and which can be linked into a HiTech C program, preferably.

>  And you're surprised?

Yes, actually.  With as much traffic as this list generated on Forth a
while back I would have expected a lot more feedback.

>  The interpretive language is something many of us would like to do,
>but few have done, and those who have (notably Parallax and Antti Lukats;
>see http://www.dontronics.com/bs4.html ) don't fancy releasing the
>code.  (Sigh!  As usual!)

They don't have to release it publicly.  I don't think the customer would
mind paying at all for a license to integrate it into his product.  NDAs
etc. and licensing are wonderful tools for generating income (i.e.,
royalties) without doing anything.

>  You ask in addition that it be embedded in "C" and run as a background
>task.  And you bemoan the absence of a stampede?  I'll bet if you wrote
>it GNU and offered it, *then* you'd see action! ;-)

Actually, if I had time I would do that.

It would be the fourth language I've developed, but I don't have time to
port the existing stuff to a PIC.

Andy

==================================================================
Andy Kunz - Statistical Research, Inc. - Westfield, New Jersey USA
==================================================================

1998\12\05@084023 by Peter L. Peres

On Fri, 4 Dec 1998, Russell McMahon wrote:

> doing this is that the CPU + EEPROM cost is much lower than an
> alternative-capability CPU. The new flash PICs will hopefully remove

That is very correct afaik. Microcode was invented by people facing the
problem of extended functionality in limited silicon, and most present-day
CISC computers use microcode for most complex operations (silicon monsters
like the Pentium [tm] don't count).

> I was thinking about what the minimum useful instruction set was that
> could be executed by such a machine. Long ago there was an

 Further rummaging in my tangled neuron collection - the approximate
instruction set of that Z80-based thing I did:

  Five arithmetic: +, -, *, /, %
  Four logic: &, |, ^, !
  Three branch: GOTO, BRZ, BRC, with relative addresses only; a -1
    relative address is always translated to program start (absolute 0).
    No subroutines.
  Implicitly addressed I/O instructions for each of the I/O registers
    (i.e. IN0, IN1, ... OUT0, OUT1, ...).
  Stack manipulations: DUP, DROP (no OVER, I had a 2-deep stack ;),
    and CONST.

CONST was an implicit operator that placed a constant onto the stack. It
was a token that was followed by a constant value in the instruction
stream. Relative GOTOs were specified in bytes, so the constants had to be
calculated into jump vectors at compile time (it was a 2-pass compilation
using pen and paper ;). All tokens were 8 bits, all constants were 8 bits,
and all ops were on 8 bits. I only bring this up here because it's all 8
bits wide (like a PIC and an E^2, for example). I think that I had a bit in
each opcode to select whether the flags were to be affected or not, and I
think that I had increment/decrement instructions too.
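
In C terms an engine like that boils down to something like this (a
sketch only; the opcode names and values are invented):

    enum { OP_CONST, OP_ADD, OP_DUP, OP_DROP, OP_GOTO /* , ... */ };

    static unsigned char s0, s1;        /* the 2-deep stack            */
    static int pc;                      /* virtual program counter     */

    /* Execute one 8-bit token from the (EEPROM-resident) program.     */
    void vm_step(const unsigned char *prog)
    {
        unsigned char t = prog[pc++];
        switch (t) {
        case OP_CONST: s1 = s0; s0 = prog[pc++]; break; /* inline operand */
        case OP_ADD:   s0 = (unsigned char)(s1 + s0); break;
        case OP_DUP:   s1 = s0; break;
        case OP_DROP:  s0 = s1; break;
        case OP_GOTO: {                 /* relative, counted in bytes  */
            signed char rel = (signed char)prog[pc++];
            pc = (rel == -1) ? 0 : pc + rel;   /* -1 -> program start  */
            break;
        }
        /* BRZ, BRC, IN0/OUT0, etc. follow the same pattern */
        }
    }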

Anyway the hardware was on the evil side, using 1x Z80A, 1x LS04 (clock
osc and buffer), 1x 2764, 1x 8255, 1x 7805, 4 caps, a crystal, and 4
resistors ;)  An optional EEPROM was connected to 3 pins of the 8255
(SPI), and contained the program executed by the interpreter. I think
there was a diode or two in there too; I don't remember all the details.

> I have a 15-page article on the language (BYTE (!), December 1978, pp.
> 108-122: "An Easy Programming System" - Joseph Weisbecker, NJ). He

Oops. In 1978 I was 9 years old and was going to start being REALLY
interested in electronics the next year or so ;)

> claims all sorts of advantages for this type of interpretive
> language. Says it typically takes 512 bytes for a full language and
> resultant code is 6 times as dense as BASIC (his claims). As I
> recall, CHIPOS could achieve some remarkable results considering its
> rudimentary nature. Something like this may be not too hard to write
> (he says a new language takes him about a week :-)).

Actually, after you have been there a few times, it only takes a few
hours - especially if you can re-use pseudo-code and do not fall into the
pitfalls that you have 'tried out' previously, in other projects.

Someone (William ?) has suggested a 'full' FORTH machine. This won't work
on a PIC. A FORTH machine assumes two things that a PIC has not got: a
dictionary of keywords (size!) and a sizeable stack (registers!). So, a
FORTH machine running on a PIC will have to have the FORTH call interface
'compiled' into direct absolute (?) compiled word addresses, and will have
to coax the programmer into using a shallow stack by FORTH standards.
Also, the interpreter needs to be flat-threaded or direct-threaded so as
to compensate for the very few return stack slots available on the PIC.
This means that most FORTH words, which are usually written in FORTH
itself and interpreted, will have to be compiled to leave room for
application procedure calls.
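
What 'compiled to direct word addresses' could look like, sketched in C
(names invented; a real PIC implementation would be in assembler):

    typedef void (*prim)(void);

    extern void p_dup(void);            /* primitives, coded by hand   */
    extern void p_mul(void);

    /* A colon definition compiled straight to a list of primitive
       addresses: no dictionary lookup at run time, and a flat (not
       nested) inner loop, so one return-stack slot serves the word. */
    static const prim w_square[] = { p_dup, p_mul, 0 };  /* : SQUARE DUP * ; */

    void run_word(const prim *w)
    {
        while (*w)
            (*w++)();
    }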

Both the dynamic code extension problem (BUILDS/DOES FORTH paradigm) and
the limited operand stack size can be fixed by using a piece of EEPROM as
stack cache imho. I haven't done this, but I did most of the thinking, and
it should work (at horribly slow speed). Thus, the way to self-modifying
code for even the smallest PICs will be open ;) Let's see who comes up
with the 1st PIC virus. <g>

highly interested in the $SUBJ (minus viruses),

Peter

1998\12\05@115121 by Adriano De Minicis

There was an article in Circuit Cellar #93 (April 1998) about
a Stamp-like interpreted controller, named "Picaro", complete
with source code, that can be useful as a starting point.
It's based on a 16C56 and a 24LC16.
File "picaro_UPDATED!.ZIP" on http://www.circuitcellar.com

Adriano

1998\12\05@220201 by Andy Kunz

>CISC computers use microcode for most complex operations (silicon monsters
>like the Pentium [tm] don't count).

The Pentium is microcoded also, I believe.

Andy

==================================================================
 Andy Kunz - Montana Design - http://www.users.fast.net/~montana
==================================================================

1998\12\06@114519 by Peter L. Peres

On Sat, 5 Dec 1998, Andy Kunz wrote:

> >CISC computers use microcode for most complex operations (silicon monsters
> >like the Pentium [tm] don't count).
>
> The Pentium is microcoded also, I believe.

I think that they have pipelined and hardcoded so much in it that there is
no microcode left. Back to the future, kind of ;) How else could you need X
million transistors for a job that can be done with 100,000 (and has been,
before)? Microcode died when VHDL compilers entered the scene, imho.

It would be interesting to find out how powerful a RISC core would be if
it used the number of transistors and the power of a P2. I think Alphas
were there first, no? (At 600 MHz and all that.) Hrrmph. I am digressing.

Peter

1998\12\07@063614 by Wolfgang Willenbrink

Hi Adriano,
please help in finding the right web address: on which page exactly did
you find the PICARO project?

I've searched the (whole?!) Circuit Cellar Ink site without success in
finding a hint of this project!

Thanx in advance
Wolfgang

1998\12\07@165852 by paulb

Wolfgang Willenbrink wrote:

> On which page exactly did you find the PICARO project?

> I've searched the (whole?!) Circuit Cellar Ink site without success
> in finding a hint of this project!

Really?

ftp://ftp.circuitcellar.com/CCINK/1998/Issue_93/
--
 Cheers,
       Paul B.

1998\12\08@113254 by John Payson

> >CISC computers use microcode for most complex operations (silicon monsters
> >like the Pentium [tm] don't count).
>
> The Pentium is microcoded also, I believe.

|I think that they have pipelined and hardcoded so much in it that there is
|no microcode left. Back to the future, kind of ;) How else could you need X
|million transistors for a job that can be done with 100,000 (and has been,
|before)? Microcode died when VHDL compilers entered the scene, imho.

Parts of the Pentium and (99% certain) Pentium II are still microcoded;
while most instructions have explicit logic to implement them, a few of
the goofy ones (which few people would miss if they vanished tomorrow)
are performed in microcode because it's not worth the silicon to make
them go faster.  Ironically, I believe the XLAT instruction (which
performs MOV AL,[BX+AL]) is one of them, despite the fact that on the
original 8088 it was the fastest instruction to read a byte from memory
(and is now probably the slowest).

|It would be interesting to find out how powerful a RISC core would be if
|it used the number of transistors and the power of a P2. I think Alphas
|were there first, no? (At 600 MHz and all that.) Hrrmph. I am digressing.

Not just the number of transistors, but the quantity of R&D, etc. as
well.  After all, it's no easy task getting a micro to run two variable-
length instructions per clock cycle.  What's particularly interesting,
though, is to look at the driving factors in the RISC-vs-CISC debate and
to look at how the Pentium-plus chips handle code caching.

Generally, the most distinctive property of a RISC instruction set is
that all instructions are of uniform length.  If the bus width is an
integer multiple (1 is okay) of the instruction length, it's possible to
pipeline instructions several levels deep (allowing for rapid program
execution).  Unfortunately, the uniform-width instructions tend to be
rather wide, leading to a need for a wide memory bus and a large code
store.

Most CISC designs, by contrast, allow for variable-length instructions.
These are more complex to evaluate and perform, but allow code to be made
more compact than with RISC designs.  In micro systems where the speed of
executing code is limited by the rate at which it can be fetched from
memory, smaller code translates into faster execution.

In modern MPUs with a code cache, the rate of fetching code is no longer
a primary speed-limiting factor: once the code is cached, it may be
accessed again and again with almost no delay.  Unfortunately, as the time
to fetch instructions is diminished, the time to evaluate them becomes
dominant and cannot be so readily improved.

To get around this problem, many processors like the Pentium II, K6, etc.
use a cache that holds partially-decoded instructions.  Rather than try
to design a CPU core to evaluate two (or more) arbitrary-length x86
instructions at a time, the CPU is split into two parts:

- The instruction translation logic, which takes code from system RAM,
  munges it into a more RISC-like form, and stores it in the cache.

- The execution logic, which runs the RISC-like code stored in the cache.

Note that the instruction translation logic only has to run as fast as the
system's memory (or L2 cache), not as fast as the CPU's execution logic.
Note as well that in most software 90% of the time is spent running very
small chunks of the code.  As a consequence of these factors, the ability
to read and evaluate variable-length instructions quickly (which is very
hard) is less important than the ability to execute quickly the instructions
that have been cached.  Pre-decoding instructions allows for a fundamentally
simpler design than would be needed for effective real-time decoding.  The
one caveat is that this cache cannot hold as many instructions as an
"old-style" cache of the same size.  Still, it is quite amazing what Intel
(**and** its competitors) have managed to accomplish.

1998\12\08@122230 by Peter L. Peres

On Tue, 8 Dec 1998, John Payson wrote:

<very interesting explanation snipped>

> Note as well that in most software 90% of the time is spent running very
> small chunks of the code.  As a consequence of these factors, the ability

This is the one point where I have to disagree. An embedded system running
a predefined set of tasks will do as you say, but any 'general purpose'
computer that is multitasking won't. The 1M Pentium cache swaps actively
even with a small server running 20 HTTP daemons and 20 news daemons in
parallel. Multi-processor servers are popular for precisely that reason.

RISC designs built for this kind of job seem to fare MUCH better. They
have large blocks of registers that act as caches and 'rotate' as a block
with each task switch, as well as instruction caches. This reduces the
task-switching overhead very much. Some RISCs can switch tasks in one
clock cycle (!).  Or so I read.  They also have symmetrical instruction
sets that are more compiler-friendly than CISC, even if the generated code
is larger (in size). RISC code moves less data into/from registers and
achieves more useful operations per instruction than CISC (this should
make RISC code more compact ?!).

RISCs economize on the long instruction-decoding hardware, and they use
less silicon (and less power, and run faster) for the same core power. The
price is disk space, apparently, but it is not so bad. *This* is what I
meant: how powerful would a Pentium-technology device be if it used RISC
instead of CISC organization?

And, re: wide instruction words and RISC, see 'Merced' ;)

Now, how many PIC12C508s fit in a Pentium, based on the number of
transistors, and what is the resulting price difference per part,
including all required peripherals for a minimal system ;)

Peter

1998\12\08@141724 by Lee Jones

>>> CISC computers use microcode for most complex operations
>>> (silicon monsters like the Pentium [tm] don't count).

>> The Pentium is microcoded also, I believe.

>| I think that they have pipelined and hardcoded so much in it
>| that there is no microcode left.

> Parts of the Pentium and (99% certain) Pentium II are still microcoded;

> What's particularly interesting, though, is to look at the
> driving factors in the RISC-vs-CISC debate and to look at
> how the Pentium-plus chips handle code caching.

> Generally, the most distinctive property of a RISC instruction set is
> that all instructions are of uniform length.

> Unfortunately, the uniform-width instructions tend to be rather
> wide, leading to a need for a wide memory bus and a large code store.

A wide memory bus can also be used to improve performance of
both CISC and RISC based systems.  A wide, slow bus feed is
funneled down into a narrow, much faster bus for the CPU.  For
example, the Sun E450 server has a 1176-bit-wide memory array.

> Most CISC designs, by contrast, allow for variable-length instructions.
> These are more complex to evaluate and perform, but allow code to be made
> more compact than with RISC designs.

But as the Intel x86 architecture has moved from 16-bit to
32-bit, haven't all those nice, short instructions had prefix
bytes stuck in front so that the instruction executes in wide
mode (i.e. 32-bit operations)?

> Still, it is quite amazing what Intel (**and** its competitors)
> have managed to accomplish.

Absolutely agreed.  On-the-fly conversion from CISC to RISC
(for internal execution) is quite a feat.
                                               Lee Jones
