PICList Thread
'[PIC] C arithmetic conversion/integer promotion/et'
2009\02\10@235016 by Forrest W Christian

One thing which continually drives me up the wall is how to deal with
making the C compiler actually do math on integers with the right number
of bits, or in some cases, do floating-point math if necessary.

This is especially bad on the PIC processors where you typically have 8
(or 16) bit variables involved, or end up doing math using a floating
point constant, or similar.

Over the years I have learned typically what I need to do to make it
*work*, but I still don't really understand.  

Is there a reference somewhere which isn't written in computer-science
speak which explains what is really going on in a typical C compiler
when you do something like:

i=c*2.342;

Where i is 16 bit, and c is 8 bits... and how to ensure that the
compiler produces a 16 bit result, after doing floating point math
during a multiply?

i=c*c is also interesting at times, and so on.  

So, could anyone point me toward a reasonable explanation (and don't
tell me to go read the standard, as I've tried :)...

-forrest

2009\02\11@001344 by William "Chops" Westfield


On Feb 10, 2009, at 8:49 PM, Forrest W Christian wrote:

> Is there a reference somewhere which isn't written in computer-science
> speak which explains what is really going on in a typical C compiler
> when you do something like:
>
> i=c*2.342;
>
> Where i is 16 bit, and c is 8 bits... and how to ensure that the
> compiler produces a 16 bit result, after doing floating point math
> during a multiply?
>
> i=c*c is also interesting at times, and so on.
>
> So, could anyone point me toward a reasonable explanation (and don't
> tell me to go read the standard, as I've tried :)...

I don't know that there IS a reasonable explanation for this.  C
has its rules, but I'm not sure they're sensible.  So you shouldn't
DO this and let those rules take control, but instead, if you MUST
mix variable types like this, you should make all the conversions
completely explicit:

    i = (int16_t) ((float)c * 2.342);

BillW

2009\02\11@001409 by andrew kelley

On Tue, Feb 10, 2009 at 11:49 PM, Forrest W Christian
<spam_OUTforrestcTakeThisOuTspamimach.com> wrote:
<snip about math in C and types involved>

> i=c*2.342;
>
> Where i is 16 bit, and c is 8 bits... and how to ensure that the
> compiler produces a 16 bit result, after doing floating point math
> during a multiply?

Sure. Whenever you want to force an operation, cast it. Given the
following:

unsigned short i; unsigned char c;
i = (float)c * 2.342f;

The f suffix may not be strictly necessary, but it ensures the literal is
a float rather than a double.

> i=c*c is also interesting at times, and so on.

That should get the right answer, if the compiler follows the spec. (VC++
gives 625 for c=25 with the sizes above.)

> So, could anyone point me toward a reasonable explanation (and don't
> tell me to go read the standard, as I've tried :)...

Sometimes an error is issued, other times not.  In standard C, if integer
math involves a floating-point constant, the integer operands are promoted
and the operation is performed in floating point; the result is truncated
only when it is assigned back to an integer (some embedded compilers,
unless you cast, instead do the math with the truncated integer value of
the constant).  Expressions are evaluated following the usual order of
operations, and the operand types at each step determine how the math is
performed.

FEX:
short i; char c; float x;
eq1: i = c * (i * x);
eq2: i = (c * i) * x;
These will not always give the same result, because:

Eq1: i*x is evaluated first in floating point; c is then converted to
float, the multiply is done in float, and the result is converted to
short only on assignment.
Eq2: c*i is evaluated first in integer math (where it can overflow), then
that product is converted to float and multiplied by x; the result is
again converted to short on assignment.

In general, in a mixed operation the integer operand is converted to
float, and the float result is lost only when it is assigned to an
integer destination.  Some compilers will evaluate the above examples the
same, but casting i or c to float removes any doubt, so I usually force a
cast if not all variables are floating point.

> -forrest

I used shorts in the above example because on a PC a short is 2 bytes
and ints are typically 4 bytes.

--
Andrew

2009\02\11@021147 by Forrest W. Christian

William "Chops" Westfield wrote:
> I don't know that there IS a reasonable explanation for this. C has
> it's rules, but I'm not sure they're sensible. So you shouldn't DO
> this and let those rules take control, but instead, if you MUST mix
> variable types like this, you should make all the conversions
> completely explicit:
Which is basically what I end up doing.  Make sure that everything which
needs to be computed as a float is *cast* to be a float, and so on.

It's just that this feels so kludgy that I often feel like I'm doing
something wrong... or missing something obvious.

The one that has been biting me recently is shifting and masking 32-bit
values, where things like (i>>24) don't work but (i/16777216ll) does.
Fortunately the PIC compilers seem to be smart enough to convert a
power-of-2 division to a shift.

-forrest

2009\02\11@054955 by Michael Rigby-Jones



> -----Original Message-----
> From: .....piclist-bouncesKILLspamspam@spam@mit.edu [piclist-bouncesspamKILLspammit.edu]
> On Behalf Of Forrest W. Christian
{Quote hidden}
> Make sure that everything which
> needs to be computed as a float is *cast* to be a float, and so on.
>
> It's just that this feels so kludgy that I often feel like I'm doing
> something wrong... or missing something obvious.
>
> The one which has been biting me recently is shifting and masking 32 bit
> values, where things like (i>>24) doesn't work but (i/16777216ll) does.
> Fortunately the PIC compilers seem to be smart enough to convert a
> power-of-2 division to a shift.

If you are right shifting a signed variable then it's entirely possible
to get unexpected results, since the rules state that the way in which
the sign bit is handled is implementation defined.  Shifting an unsigned
variable should not do anything unexpected however.

Regards

Mike


2009\02\11@072112 by olin piclist

andrew kelley wrote:
> unsigned short i; unsigned char c;
> i=(float)c * 2.342f;

Ouch.  Is C really this stupid or did you mess up?  Allowing automatic
conversion from floating point to integer is really irresponsible of a
compiler.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\11@081848 by Byron Jeff

On Wed, Feb 11, 2009 at 07:22:45AM -0500, Olin Lathrop wrote:
> andrew kelley wrote:
> > unsigned short i; unsigned char c;
> > i=(float)c * 2.342f;
>
> Ouch.  Is C really this stupid or did you mess up?

Neither.

>  Allowing automatic conversion from floating point to integer is really
>  irresponsible of a compiler.

The C philosophy has always been that the programmer knows what they are
doing. If the programmer decides to do something stupid, then C is quite
happy to give that programmer enough rope to make a noose.

It's perfectly legal C to truncate a floating point result to integer. And
yes it will do it automatically.

I just dropped this piece of code into gcc and it compiled without
complaint. In fact it doesn't even complain if I turn all warnings on.

BAJ

2009\02\11@082517 by Gerhard Fiedler

Forrest W. Christian wrote:

> William "Chops" Westfield wrote:
>> I don't know that there IS a reasonable explanation for this. C has
>> it's rules, but I'm not sure they're sensible. So you shouldn't DO
>> this and let those rules take control, but instead, if you MUST mix
>> variable types like this, you should make all the conversions
>> completely explicit:
>
> Which is basically what I end up doing.  Make sure that everything
> which needs to be computed as a float is *cast* to be a float, and so
> on.

This is not a C problem per se, it's a type problem. When you want
specific typing, that's what you need to prescribe. That's just how it
is, in any language.

> It's just that this feels so kludgy that I often feel like I'm doing
> something wrong... or missing something obvious.

The maybe not so obvious that you're missing is fixed-point (which may
include 0 decimals) arithmetic. It is done entirely with integers and
appropriate scaling. A common "trick" used is to transform a
multiplication or division into a multiplication and a division by a
power of two (to avoid the costly division).

> i=c*2.342;
>
> Where i is 16 bit, and c is 8 bits

For example (all 16 bit integer)

i = ((int16)c * 75) / 32;

or

i = ((int16)c * 75) >> 5;

75/32 is 2.3438, which is good enough (maximum error is 0.446, for
c=255).

When you do something like this, it's generally important to use
parentheses, as you want to control exactly the sequence in which the
operations are executed.

This, for example, would not work well:

i = (int16)c * (75 / 32);

Another way to control the exact sequence (instead of using parentheses)
is this:

i = c;
i *= 75;
i >>= 5;

This has (almost) the same effect and the difference is mostly one of
coding style. (Doing it this way, you may avoid the creation of some
temporary, hidden variables for intermediate results -- or, if they are
needed, you create them explicitly yourself.)

The floating-point math routines are really big and slow, and I've done
quite some math in PICs without ever needing floating-point. Appropriate
scaling and occasionally using 32-bit integers always did the trick --
with much less code, and faster, too.


> The one which has been biting me recently is shifting and masking 32
> bit values, where things like (i>>24) doesn't work but (i/16777216ll)
> does.

If i is a (signed) integer and has a negative value, the behavior of the
right shift operator is "implementation defined" (which is their way of
saying RTFMM :). It may maintain the high bit (in which case the result
would be the same as of a division) or it may shift in zeroes (in which
case the result would be different from division).

> Fortunately the PIC compilers seem to be smart enough to convert a
> power-of-2 division to a shift.

Mostly.

Gerhard

2009\02\11@095755 by Walter Banks

> Fortunately the PIC compilers seem to be smart enough to convert a
> power-of-2 division to a shift.

It is a trade-off between fast code and rounding in the wrong direction.
We shift, but get called on it a few times a year.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com





2009\02\11@100140 by Walter Banks

Olin Lathrop wrote:

> > unsigned short i; unsigned char c;
> > i=(float)c * 2.342f;
>
> Ouch.  Is C really this stupid or did you mess up?  Allowing automatic
> conversion from floating point to integer is really irresponsible of a
> compiler.

Compilers do what they are told. Customers do amazing things with C
because they can. Somebody out there has an application reason why
this code is a good idea.

Regards,

2009\02\11@101209 by olin piclist

Walter Banks wrote:
>> Fortunately the PIC compilers seem to be smart enough to convert a
>> power-of-2 division to a shift.
>
> It is a trade-off between fast code and round in the wrong direction. We
> shift but get called on it a few times a year.

But isn't integer division specified as the integer quotient with the
remainder tossed?  If so, then there is no issue of rounding and an
arithmetic right shift should be identical to a divide by 2**N.



2009\02\11@101707 by olin piclist

Walter Banks wrote:
>>> unsigned short i; unsigned char c;
>>> i=(float)c * 2.342f;
>>
>> Ouch.  Is C really this stupid or did you mess up?  Allowing automatic
>> conversion from floating point to integer is really irresponsible of a
>> compiler.
>
> Compilers do what they were told. Customers do amazing things with C
> because they can. Somebody out there has a application reason why
> this code is a good idea

It's not the code adhering to the C definition (apparently) that I'm
complaining about, but the fact that this is legal in C in the first place.
Yes, the spec can define whether you always round or truncate, but I think
not forcing the programmer to explicitly say which he wants is bad language
design.

This wouldn't keep anyone from doing something less amazing with the
language.  You can still get exactly the same result.  Yet another reason
why C is such an awful language.



2009\02\11@105107 by Alan B. Pearce

>It's not the code adhering to the C definition (apparently) that
>I'm complaining about, but the fact that this is legal in C in
>the first place. Yes, the spec can define whether you always round
>or truncate, but I think not forcing the program to explicitly
>say which he wants is bad language design.
>
>This wouldn't keep anyone from doing something less amazing with the
>language.  You can still get exactly the same result.  Yet another
>reason why C is such a awful language.

I do seem to recall that a spacecraft went AWOL back in the 60's because
someone relied on the Fortran defaults for variable types, but someone else
had varied the default types somewhere else in the program.

2009\02\11@121742 by Paul Hutchinson

> -----Original Message-----
> From: .....piclist-bouncesKILLspamspam.....mit.edu On Behalf Of Byron Jeff
> Sent: Wednesday, February 11, 2009 8:19 AM
>
> On Wed, Feb 11, 2009 at 07:22:45AM -0500, Olin Lathrop wrote:
> > andrew kelley wrote:
> > > unsigned short i; unsigned char c;
> > > i=(float)c * 2.342f;
> >
<snip>
>
> It's perfectly legal C to truncate a floating point result to integer. And
> yes it will do it automatically.
>
> I just dropped this piece of code into gcc and it compiled without
> complaint. In fact it doesn't even complain if I turn all warnings on.

This is a good example of why C programmers should use a good static
checking lint in addition to having the compilers warning level at its
highest setting. A decent lint program will flag the line for using
automatic type conversion. To make the lint happy you change the line to:

i=(unsigned short)((float)c * 2.342f);

This explicit type casting of the code makes it obvious that you intend to
truncate the result and didn't just forget to pass the calculation through a
rounding function.

Paul Hutch

>
> BAJ

2009\02\11@122244 by olin piclist

Alan B. Pearce wrote:
> I do seem to recall that a spacecraft went AWOL back in the 60's because
> someone relied on the Fortran defaults for variable types, but someone
> else had varied the default types somewhere else in the program.

Yeah, Fortran default variable types were dangerous.  I always explicitly
declared every variable I used.

I can forgive the Fortran design a lot more than C.  Fortran was first, so
there was no experience with these kinds of constructs.  Sure they made some
mistakes in hindsight, but the first of anything is going to get a few
things wrong.  Actually in hindsight I'm amazed how much they got right.

On the other hand, by the time C was designed strong type checking had
already been tried and its advantages well understood.  C was designed by a
couple of irresponsible hackers.  That by itself is OK too.  The real
problem is that it gets used so widely, probably because there are so many
irresponsible programmers out there to whom C looks "easier" at first glance
because you can just write what you want.  Of course you way more than pay
for it in the end, but that gets ignored.



2009\02\11@122603 by olin piclist

face picon face
Paul Hutchinson wrote:
> i=(unsigned short)((float)c * 2.342f);
>
> This explicit type casting of the code makes it obvious that you intend
> to truncate the result and didn't just forget to pass the calculation
> through a rounding function.

This is better, but I still don't like it.  You are still allowing the
compiler to choose how a floating point value gets converted to an integer.
There is nothing above explicitly indicating truncation versus rounding.  I
would prefer some explicit syntax, like:

 i = trunc((float)c * 3.14);

or

 i = round((float)c * 3.14);



2009\02\11@125209 by Bob Blick

On Wed, 11 Feb 2009 12:24:05 -0500, "Olin Lathrop"
<EraseMEolin_piclistspam_OUTspamTakeThisOuTembedinc.com> said:

> couple of irresponsible hackers.  That by itself is OK too.  The real
> problem is that it gets used so widely, probably because there are so
> many
> irresponsible programmers out there to whom C looks "easier" at first
> glance
> because you can just write what you want.  Of course you way more than
> pay
> for it in the end, but that gets ignored.

Easier than what? Irresponsible programmers? I don't believe you. That
is total BS.

That's like saying that anyone with a gasoline powered automobile is an
irresponsible driver. I heard of a guy whose car is powered by a fuel he
makes himself with a handmade fractionator, the plans of which are
available for free on his website. Too bad it takes so much tinkering to
get it to work on different makes and models of automobiles.

It doesn't make everybody else irresponsible.

C is used because it is the one language that is available for virtually
every processor. Until and unless that changes, it will continue as the
most popular language.

Best regards,

Bob


2009\02\11@130720 by Walter Banks

Olin Lathrop wrote:

{Quote hidden}

Like most things, C's weaknesses are also its strengths. The same language
that allows developers to write truly terrible code gives other developers
the flexibility to accomplish great works. C, like assembler, requires
programmer discipline to use.

Regards,

2009\02\11@134011 by Walter Banks

Olin Lathrop wrote:

{Quote hidden}

This is true as long as the numbers are positive. Shifting to
divide with negative numbers is where the problem starts.
This assumes the right shift is an arithmetic shift; even
then there is a problem.

A couple of examples illustrate the issue:

-1 ASR 1 is -1, not the 0 required for /2
-2 ASR 1 is -1, correct
-3 ASR 1 is -2, not the -1 required for /2

We shift on unsigned ints but not signed ints in our code
generation. (I just checked)

Regards,

2009\02\11@134528 by Gerhard Fiedler

Olin Lathrop wrote:

> Walter Banks wrote:
>>> Fortunately the PIC compilers seem to be smart enough to convert a
>>> power-of-2 division to a shift.
>>
>> It is a trade-off between fast code and round in the wrong
>> direction. We shift but get called on it a few times a year.
>
> But isn't integer division specified as the integer quotient with the
> remainder tossed?  If so, then there is no issue of rounding and a
> arithmetic right shift should be identical to a divide by 2**N.

Shifting of integers (maintaining the sign bit) amounts to rounding down
(towards the smaller value), not tossing the remainder (-1 shifted right
by 1 is still -1, not 0).

FWIW, the C89 standard leaves it "implementation defined" which way it
is.

Gerhard

2009\02\11@135230 by Paul Hutchinson

> -----Original Message-----
> From: piclist-bouncesspamspam_OUTmit.edu On Behalf Of Olin Lathrop
> Sent: Wednesday, February 11, 2009 12:28 PM
>
> Paul Hutchinson wrote:
> > i=(unsigned short)((float)c * 2.342f);
> >
> > This explicit type casting of the code makes it obvious that you intend
> > to truncate the result and didn't just forget to pass the calculation
> > through a rounding function.
>
> This is better, but I still don't like it.  You are still allowing the
> compiler to chose how a floating point value gets converted to a integer.
> There is nothing above explicilty indicating truncation versus
> rounding.

Actually this does _not_ give a standards-compliant C compiler a choice:
the C standards are clear that when demoting via an explicit or implicit
cast, the result is always truncated (toward zero). This is not one of
those implementation-defined areas where a compliant compiler has a
choice.

IME, even C implementations that violate the standards in other ways get
this right; I have never seen one round instead of truncating. That is
likely because it is significantly easier to truncate than to round.

Paul Hutch

> I would prefer some explicit syntax, like:
>
>   i = trunc((float)c * 3.14);
>
> or
>
>   i = round((float)c * 3.14);

2009\02\11@140100 by olin piclist

Bob Blick wrote:
> C is used because it is the one language that is available for virtually
> every processor. Until and unless that changes, it will continue as the
> most popular language.

Right, but that is orthogonal to being an irresponsibly designed language.
I agree C is popular, but my point is that it gained popularity partly by
luck and association with Unix (which says nothing about the quality of the
language) and partly because it appealed to a too common breed of
irresponsible programmer.  Being popular is no proof of being good.



2009\02\11@140509 by Walter Banks

Olin Lathrop wrote:

{Quote hidden}

As a language, C was not meant to be a Fortran replacement. (You also
didn't say that.) C was meant to be a very low-level language, a kind of
asm shorthand that evolved. There are a lot of good languages around
that have the features you have indicated should be in C. There are
also a lot of languages whose roots are in C and whose feature sets
reflect application or target needs.

All of the C's for the Microchip PIC's are in this category including
ours with their support for separate address spaces and asymmetrical
I/O.

Regards,

2009\02\11@140547 by olin piclist

Walter Banks wrote:
> Like most things C weaknesses is also its strength's. The same language
> that allows developers to write truly terrible code gives other
> developers the flexibility to accomplish great works.

But other languages don't preclude that either.  They may only require
different, and in some cases a little more, syntax in return for less
chance of unintentional program behavior.

Show me something "great" you can do in C that you can't in Pascal, for
example.  I'm not talking about details of a few lines of code this way
versus that.  "Great" implies a fully functioning program.  Your premise is
that you can't do "great" things in other languages.  I think that's
nonsense, so let's see a few examples.



2009\02\11@141021 by Bill Clawson

Hi,

I saw the thread, and it got me searching the web.  I don't guarantee that
this will help, since I've just started reading it myself, but the
following PDF

http://www.ess.uci.edu/esmf/ibm_compiler_docs/sc094958.pdf

is an IBM C/C++ reference; chapter 6 of the document covers type promotion.

Best Regards,

Bill

--- On Tue, 2/10/09, Forrest W Christian <@spam@forrestcKILLspamspamimach.com> wrote:

From: Forrest W Christian <KILLspamforrestcKILLspamspamimach.com>
Subject: [PIC] C arithmetic conversion/integer promotion/etc.
To: "Microcontroller discussion list - Public." <RemoveMEpiclistTakeThisOuTspammit.edu>
Date: Tuesday, February 10, 2009, 8:49 PM


2009\02\11@141041 by olin piclist

Paul Hutchinson wrote:
> Actually this does _not_ give a standards compliant C compiler a
> choice, the C standards are clear, when demoting via an explicit or
> implicit cast the result is always truncated. This is not one of those
> implementation defined areas where a standards compliant C compiler has
> a choice.

That makes the statement unambiguous to the compiler.  My point is that it
ignores the human element.  While in theory everyone that writes in a
language should know all the details in the language definition, people are
going to make mistakes.  Obviously there is a tradeoff with how much you
clutter the code with stuff "everybody should know" from the standard, but
in this case I think having something that explicitly says truncate or round
to a casual observer would be a good thing.



2009\02\11@141129 by olin piclist

Gerhard Fiedler wrote:
> Shifting of integers (maintaining the sign bit) amounts to rounding down
> (towards the smaller value), not tossing the remainder (-1 shifted right
> by 1 is still -1, not 0).

Yes, I didn't think that thru all the way, as Walter also pointed out.


2009\02\11@141215 by Walter Banks

Olin Lathrop wrote:

{Quote hidden}

There are very few languages that provide very low-level access to the
processor and can be used on a wide variety of processors. Warts
and all, C does this.

C's failings have largely been addressed in the
last 20 years and no longer warrant the attention they once did.

Regards,

2009\02\11@142249 by olin piclist

Walter Banks wrote:
> All of the C's for the Microchip PIC's are in this category including
> ours with their support for separate address spaces and asymmetrical
> I/O.

I'm not picking on your implementation or anyone's in particular.  I regret
that C has caught on to the point it has such that folks like you feel
compelled to provide a reasonably C-compatible language as opposed to one
with much better constructs.  I would much rather see you free to innovate.
I understand the business reasons your compilers have to be C, but that is
not evidence of C being good, only popular.  Unfortunately C is well past
critical mass so that even if folks like you agreed with me (I don't know
whether you do or not), you couldn't make a business case for a well
designed language specifically targeted for embedded systems.

I've actually been playing around with the definition of such a language I'm
calling M for "embedded".  One of these days I'll implement a front end for
it in my source to source translator so everyone can play with it.
Unfortunately since this is a side project, there are a lot of other things
ahead of it.  It will likely have to wait a while, especially since my free
time is about zero these days.  For reasons I don't fully understand, we're
more swamped since about September than we've ever been.  "Spare" time is
used to catch up to keep customers from getting pissed off on some of the
projects that were neglected during "normal" time.



2009\02\11@150231 by Paul Hutchinson

> -----Original Message-----
> From: spamBeGonepiclist-bouncesspamBeGonespammit.edu On Behalf Of Olin Lathrop
> Sent: Wednesday, February 11, 2009 2:13 PM
>
> That makes the statement unambigous to the compiler.  My point is that it
> ignores the human element.  While in theory everyone that writes in a
> language should know all the details in the language definition,
> people are
> going to make mistakes.  Obviously there is a tradeoff with how much you
> clutter the code with stuff "everybody should know" from the standard, but
> in this case I think having something that explicitly says
> truncate or round to a casual observer would be a good thing.

I agree with your sentiment and would like to make it clear that I do not
think C is a language suitable for the casual observer/programmer on any
level, period.

To me, a C programmer who does not fully understand the rules of standard
C is equivalent to an analog circuit designer who does not fully understand
Ohm's Law and Kirchhoff's circuit laws. Both types would either be educated
or fired if they were in my workplace.

C requires attention to detail like assembly language but with the benefit
that the details are the same regardless of the processor target, assembly
requires attention to differing details for each target.

IMO, for the casual observer/programmer many other languages are appropriate
but not C.

Paul Hutch

2009\02\11@150556 by Walter Banks

Olin Lathrop wrote:

{Quote hidden}

I am not arguing that Pascal, with the kind of extensions that many C
compilers have, cannot do similar things.

> Your premise is
> that you can't to "great" things in other languages.  I think that's
> nonsense, so let's see a few examples.

This is NOT my premise. I use all kinds of languages selected for
what is appropriate for the application. In Byte Craft we regularly
use C, Delphi Pascal, and several functional languages. Last
weekend triggered by solarwind's quest for a calculator I coded some
approaches using the functional features of JavaScript as a proof
of concept test.

C's low-level processor access makes it well suited to many
embedded applications.


Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com




2009\02\11@152820 by Walter Banks



Olin Lathrop wrote:

> I regret that C has caught on to the point it has such that folks like you feel
> compelled to provide a reasonably C-compatible language as opposed to one
> with much better constructs.  I would much rather see you free to innovate.
> I understand the business reasons your compilers have to be C, but that is
> not evidence of C being good, only popular.  Unfortunately C is well past
> critical mass so that even if folks like you agreed with me (I don't know
> whether you do or not), you couldn't make a business case for a well
> designed language specifically targeted for embedded systems.

You just outlined why I spent about 2 months a year for 5 years working
on ISO C standards for embedded systems (ISO/IEC 18037). It is not
easy to design a low level language to meet a broad range of requirements
for embedded systems. The compromises that need to be addressed
are significant.

> I've actually been playing around with the definition of such a language I'm
> calling M for "embedded".  One of these days I'll implement a front end for
> it in my source to source translator so everyone can play with it.

Part of C's success is that it provides solutions to implementation problems.
Arguably there may be better solutions; there are clearly better
evolving solutions being incorporated into the C language
at WG14. C has addressed legitimate concerns of critics and
continues to evolve.

Make a detailed case for changes in the language. I know from experience
that changes can be incorporated in to new standards documents.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com


2009\02\11@154729 by olin piclist

Paul Hutchinson wrote:
> I agree with your sentiment and would like to make it clear that I do
> not think C is a language suitable for the casual observer/programmer
> on any level, period.

Exactly.  And with a better design it could be this without compromising the
good things you mentioned.

> To me a C programmer who does not fully understand the rules of
> standard C is equivalent to an analog circuit designer who does not
> fully understand Ohms Law and Kirchhoff's circuit laws. Both types
> would either be educated or fired if they where in my workplace.

That analogy is very weak.  The C spec has lots of little details.  I bet
many programmers that use C frequently would get a few of them wrong if you
gave them a test.



2009\02\11@155308 by olin piclist

Walter Banks wrote:
> Make a detailed case for changes in the language. I know from experience
> that changes can be incorporated in to new standards documents.

My objections to C are too fundamental for a few backward compatible changes
to be able to fix anything.  The loose type checking is something you can't
take out of C, for example.  Then there is the really bad concept that
everything is an expression, which leads to lots of trouble.  You can put
more and more lipstick on the pig, but you're still stuck with the pig
underneath.



2009\02\11@162049 by Walter Banks



Olin Lathrop wrote:

> Walter Banks wrote:
> > Make a detailed case for changes in the language. I know from experience
> > that changes can be incorporated in to new standards documents.
>
> My objections to C are too fundamental for a few backward compatible changes
> to be able to fix anything.  The loose type checking is something you can't
> take out of C, for example.

Type checking changed a lot in C99. How about a specific example.

> Then there is the really bad concept that
> everything is an expression, which leads to lots of trouble.

It also opens lots of doors for branch free code generation to eliminate
pipeline stalls in new processors.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com







2009\02\11@162228 by Larry Bradley

I heartily agree, Olin. I've been a programmer all my (rather long) life, and
programmed in a lot of languages, including several assemblers. As far as I
am concerned, C is just assembler with curly brackets.

I like a strongly-typed language, such as Pascal or, back in my IBM mainframe
days, PL/1.

There is no reason why a Pascal-like language can't be used for PIC or other
embedded programming. Even in C, everything out of the ordinary is done via
function calls.

I just downloaded the latest version of JAL, just to take a look at it again.
It is a Pascal-like language.

Larry


Original Message:

My objections to C are too fundamental for a few backward compatible changes
to be able to fix anything. The loose type checking is something you can't
take out of C, for example. Then there is the really bad concept that
everything is an expression, which leads to lots of trouble. You can put
more and more lipstick on the pig, but you're still stuck with the pig
underneath.



2009\02\11@163243 by Walter Banks



Larry Bradley wrote:

> I've been a programmer all my (rather long) life, and
> programmed in a lot of languages, including several assemblers. As far as I
> am concerned, C is just assembler with curly brackets.

It is an "assembler with curly brackets" that targets many processors.
It is a database of common code snippets that get applied where
appropriate.

It was never supposed to be a high-level language, just a good way to manage
processor resources and application data.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com



2009\02\11@175934 by olin piclist

Walter Banks wrote:
> Type checking changed a lot in C99. How about a specific example.

I'm probably not up on the latest C standards, but type checking used to be
so loose as to be virtually nonexistent.  There was no notion of a separate
character or boolean type, for example, so the compiler couldn't distinguish
these and find obvious screwups.  Even though there were enums, you could
use them as ordinary integers.  The distinction between an array and the
first element of the array was sometimes blurry.  These should be two
totally separate constructs.

>> Then there is the really bad concept that
>> everything is an expression, which leads to lots of trouble.
>
> It also opens lots of doors for branch free code generation to eliminate
> pipeline stalls in new processors.

I don't see what you mean.  It seems to me the compiler would have all
necessary information in either case.  For example, C code might be:

 if (i = j + k) l = 5;

The Pascal equivalent would have to be written as two statements:

 i := j + k;
 if i <> 0 then l := 5;

The difference is syntax only.  Both compilers know the same about what is
going on.  I'm assuming the Pascal compiler is smart enough to know it just
evaluated an expression and assigned the result to i, and is therefore able
to optimize by keeping the expression result around, not having to fetch it
from i in the second statement.

The Pascal syntax does require a little more bookkeeping by the compiler, but
there is nothing preventing it and the resulting machine code should be the
same.  The big advantage is that the Pascal syntax makes the programmer say
more explicitly what he intended and is less likely to be done by accident
when not intended.



2009\02\11@184826 by Richard Seriani, Sr.

Walter wrote about C, "It was never supposed to be a high-level language,
just a good way to manage processor resources and application data."

I have had an on-again/off-again relationship with C for many years and for
various reasons. However, I do know that K&R seem to agree with Walter.

As stated in the Preface of my 1978 copy of K&R, "C is a general-purpose
programming language which features economy of expression, modern control
flow and data structures, and a rich set of operators. C is not a "very high
level" language, nor a "big" one, and is not specialized to any particular
area of application. But its absence of restrictions and its generality make
it more convenient and effective for many tasks than supposedly more
powerful languages."

Granted, this is dated, but it seems to be as valid a statement today as it
was then, no matter how much some folks believe that C is, or should be,
something it was never intended to be.

Richard


{Original Message removed}

2009\02\11@204254 by Bob Ammerman


From: "Olin Lathrop" <olin_piclist@embedinc.com>
> Walter Banks wrote:
>>> Fortunately the PIC compilers seem to be smart enough to convert a
>>> power-of-2 division to a shift.
>>
>> It is a trade-off between fast code and round in the wrong direction. We
>> shift but get called on it a few times a year.
>
> But isn't integer division specified as the integer quotient with the
> remainder tossed?  If so, then there is no issue of rounding and an
> arithmetic right shift should be identical to a divide by 2**N.

IIRC, it breaks for negative numbers.

For example, -5 (decimal) is 0xFB as a signed 8-bit value.
Divide by 2 and you expect the answer -2,
but shifted (arithmetically) 1 place right it is 0xFD, which is -3.

--- Bob Ammerman
RAm Systems

2009\02\12@055322 by Alan B. Pearce

>For example, C code might be:
>
>  if (i = j + k) l = 5;
>
>The Pascal equivalent would have to be written as two statements:
>
>  i := j + k;
>  if i <> 0 then l := 5;

The guy that wrote the book that Microchip supplies, 'Beginners Guide to
Embedded C Programming' (Microchip stock number BK0003) takes the latter
approach, but reading through the book, it is clear that it is because of
his lack of understanding of C (other constructs he uses throughout the book
reinforce this view).

2009\02\12@063229 by Gerhard Fiedler

Olin Lathrop wrote:

> I regret that C has caught on to the point it has such that folks like
> you feel compelled to provide a reasonably C-compatible language as
> apposed to one with much better constructs.  I would much rather see
> you free to innovate. I understand the business reasons your
> compilers have to be C, but that is not evidence of C being good,
> only popular.  Unfortunately C is well past critical mass so that
> even if folks like you agreed with me (I don't know whether you do or
> not), you couldn't make a business case for a well designed language
> specifically targeted for embedded systems.

It is funny you should say so. This reminded me of our conversation
about the (collectively incredibly inefficient) state of affairs WRT
abandoning imperial units in mechanics in the USA and moving towards
metric units. I see a very obvious analogy between C and imperial units:
both have better alternatives, but both have a critical mass created by
more or less arbitrary historic events, and both create, so to speak,
"local optimums" that people stay in without moving to a better "local
optimum".

In that conversation about moving to metric you strongly defended the
individual's decision to stay in the "local optimum" they found once --
it seems just as strongly as you regret their same decision in the case
of C.


(I see one difference where the analogy ends: there is no way around
eventually "going metric", unless the world falls into a state of global
war and all trade with places outside the USA ceases. The direction and
the end state is clear; the only thing not clear is how long it will
take and how much money and effort collectively will be wasted on the
way. The fate of C and if, when and by what it will be replaced is not
clear at all. Which seems to indicate that the decision to stay in the
"local optimum" with imperial is a less understandable one than the
decision to stay in the "local optimum" with C.)

Gerhard

2009\02\12@064535 by PPA


Hi,

> We shift on unsigned ints but not signed ints in our code
> generation. (I just checked)

It's easy to add one if the value was odd before shifting (using carry)...


-----
Best regards,

Philippe.

http://www.pmpcomp.fr Pic Micro Pascal for all!

2009\02\12@073225 by Isaac Marino Bavaresco

Alan B. Pearce escreveu:
>> For example, C code might be:
>>
>>  if (i = j + k) l = 5;
>>    

A really good compiler will give a warning about the assignment inside
the "if", suggesting the use of parentheses around the expression to ensure
the programmer really wants the assignment and didn't just mistype an
intended "==" as "=".

>> The Pascal equivalent would have to be written as two statements:
>>
>>  i := j + k;
>>  if i <> 0 then l := 5;
>>    

I think the reason the C language authors chose such constructs
(assignments being expressions) is that the compilers were not
intelligent enough at the time to re-use the just-calculated expression.
This way, the programmer "helps" the compiler optimize the code.

> The guy that wrote the book that Microchip supplies, 'Beginners Guide to
> Embedded C Programming' (Microchip stock number BK0003) takes the latter
> approach, but reading through the book, it is clear that it is because of
> his lack of understanding of C (other constructs he uses throughout the book
> reinforce this view).
>  
I know many people who program in C but advise against using certain
constructs. They think these constructs make the logic hard to understand.
I agree with them to a certain degree but I like to write my code in a
concise manner.

This is one of the positive points of C language: if you don't like some
elements, don't use them, it is highly flexible and supports many coding
styles.


Regards,

Isaac
__________________________________________________
Faça ligações para outros computadores com o novo Yahoo! Messenger
http://br.beta.messenger.yahoo.com/

2009\02\12@080833 by Tamas Rudnai

> (I see one difference where the analogy ends: there is no way around
> eventually "going metric", unless the world falls into a state of global
> war and all trade with places outside the USA ceases.

It's not only the USA that uses imperial, though :-) So the critical mass
would also have to be reached here in Europe. The Republic of Ireland
successfully switched to metric a couple of years ago; however, certain
things are still imperial - the "most important" example is beer, which is
still measured in pints :-) Also, height is given in feet and inches, weight
in stones, and for used cars you read mileage in the ads.

Anyway, it would be really cool to be able to get people away from C and
its derivatives, but it would still need some 10-15 years to reach
critical mass.

Tamas


On Thu, Feb 12, 2009 at 11:32 AM, Gerhard Fiedler
<lists@connectionbrazil.com> wrote:

> (I see one difference where the analogy ends: there is no way around
> eventually "going metric", unless the world falls into a state of global
> war and all trade with places outside the USA ceases.
>



--
Rudonix DoubleSaver
http://www.rudonix.com

2009\02\12@081024 by olin piclist

Richard Seriani, Sr. wrote:
> Walter wrote about C, "It was never supposed to be a high level
> language just a good way to manage processor resources and
> application data."

That is again an orthogonal argument.  Regardless of what C was intended to
be, it is being applied way beyond its usefulness.  However, my main point
is that C is full of drawbacks that a better design would have avoided
without penalty.  There is no excuse for the irresponsible syntax and lack
of type checking.  You can still do all the same things with a better
language resulting in the same machine code.

> But its
> absence of restrictions and its generality make it more convenient
> and effective for many tasks than supposedly more powerful languages."

The "absence of restrictions" is exactly what I'm talking about.  These two
hackers saw that as a good thing so they didn't have to type a few extra
keystrokes and probably enjoyed writing impenetrable code with as few
characters as possible.  Grow up.

The restrictions I'm talking about are low level that force you to say what
you want more explicitly, not restrictions in things you can get the
language to do for you.  You can still write the same program that compiles
to the same machine code, but with a responsibly designed language the
result is much more readable and the inevitable human errors are more likely
to be caught at compile time and less likely to cause expensive runtime
bugs.



2009\02\12@083706 by olin piclist

Gerhard Fiedler wrote:
> It is funny you should say so. This reminded me of our conversation
> about the (collectively incredibly inefficient) state of affairs WRT
> abandoning imperial units in mechanics in the USA and moving towards
> metric units. I see a very obvious analogy between C and imperial
> units: both have better alternatives, but both have a critical mass
> created by more or less arbitrary historic events, and both create,
> so to speak, "local optimums" that people stay in without moving to a
> better "local optimum".

Geesh Gerhard, you just like arguing, don't you?

> In that conversation about moving to metric you strongly defended the
> individual's decision to stay in the "local optimum" they found once
> -- it seems just as strongly as you regret their same decision in the
> case of C.

I never said it was good that we were using the imperial system.  I was only
trying to explain why individuals don't see the need to switch to metric for
all things, mostly as a defense because you were basically saying they were
stupid for not switching.  I agree everyone would be better off if everyone
used a common system.  However, there are still good reasons for individuals
not to switch, and they are not stupid for not immediately switching.  This
is the part you never seemed to get.

In any case, this is a silly analogy.  Switching computer languages is
relatively easy.  I first learned Basic, then the binary codes of a
glorified calculator, then Fortran, then a short survey of several languages
including Algol, Lisp, and Snobol, then HP Pascal (which sucked), then
Apollo Pascal (which was a great language), then C, then a little Java, and
a very little JavaScript.

I think the real reasons C is so popular are:

1 - There was a whole generation of programmers that learned C as their
first language, got used to it, got used to writing code irresponsibly as a
result, and now anything else feels like it would be less comfortable and
more restrictive.  They overlook the many bad things because they are used
to them and their consequences are built into the work flow so that they
don't notice them and largely don't realize how much better things could be.

2 - Most programmers are bad and will gladly be irresponsible when given the
chance.  Such people actually like C because it doesn't get in their way as
they see it.

3 - Anything else is not what people know, so they naturally rebel against
the alternatives without really understanding them.  Don't underestimate
this powerful force of human nature.  People feel threatened by stuff they
don't know.



2009\02\12@084158 by olin piclist

Isaac Marino Bavaresco wrote:
> I agree with them to a certain degree but I like to write my code in a
> concise manner.

Number of characters used is irrelevant.  Readability is what is important.

> This is one of the positive points of C language: if you don't like
> some elements, don't use them,

Wrong.  The point is that you might use the ones you don't like by accident.
Then there is the issue of being handed someone else's code whose author
liked using the set of constructs you avoid.

Some constructs are just plain bad programming.  A good language never
allows them in the first place.



2009\02\12@100822 by Lee Jones

Gerhard Fiedler wrote:
>> [C then imperial units versus metric units] -- it seems just as
>> strongly as you regret their same decision in the case of C.

Olin Lathrop wrote:
> In any case, this is a silly analogy.  Switching computer languages
> is relatively easy.

Only if you're willing to abandon all your prior source code
base...  or rewrite it every time you switch languages...  or
write a source translator each time.

> I first learned Basic, then [...] Fortran [... et al]

> I think the real reasons C is so popular are:

You forgot one:

4) it's incredibly portable.  I've used C on machines with
  8-bit to 36-bit to 64-bit wide native word sizes.  Pascal
  has the contructs of a decent language but having a good
  Pascal compiler available on a new client's system(s) is
  _much_ more problematic.

> 1 - There was a whole generation of programmers that learned
>     C as their first language, got used to it, got used to
>     writing code irresponsibly

I first learned Fortran II, then assembler (for a BCD machine),
then BASIC, Algol, COBOL, C, multiple assembly languages, etc.

C was the first language I found that got one crucial element
right -- that every subroutine could return a value (part of
the general case in C where everything is an expression).  I
was incredibly frustrated that I could not do that in COBOL
(which was a major source of income at the time).

I like C.  One primary advantage is the portability.  I also
avoid certain syntax constructs because they make the source
difficult to read [ref: Knuth literate programming] and lead
to difficult-to-find errors.  Nothing's perfect -- live with it.

And most programmers will write code irresponsibly in whatever
language in which they are writing.

>     and largely don't realize how much better things could be.

Assuming the "better" compiler is available on all the target
systems in which you are trying to build projects.

> 3 - Anything else is not what people known, so they naturally
>     rebel against the alternatives without really understanding
>     them.  Don't underestimate this powerful force of human nature.
>     People feel threatened by stuff they don't know.

I absolutely agree.

But I see "stuff I don't know" as a challenge to learn something.
I love learning just for the hell of it.  When people tell me that
something can't be done -- and I then show them how to do it ...
friction sometimes results.  It used to amaze me that they didn't
want to learn anything new -- it's happened so often that as I've
aged, I now accept it with sadness & cynicism.


I also want to comment on your characterization of C's designers
as "hackers" in a very derogatory sense.  They extended the state
of the art _at_the_time_ and delivered a useful tool.  As the
computer industry expanded, that tool got widely deployed.

The industry is wildly different now and is _BIG BUSINESS_.  A
"better" language would now have to go through so many standards
bodies & industry committees before enough companies deployed it
so that it was ubiquitous & useful that (even if it were started
today) I doubt I would be alive to see it done.  Sad but true.

                                               Lee Jones

2009\02\12@110143 by Bob Ammerman

>> I think the real reasons C is so popular are:
>
> You forgot one:
>
> 4) it's incredibly portable.  I've used C on machines with
>   8-bit to 36-bit to 64-bit wide native word sizes.  Pascal
>   has the constructs of a decent language but having a good
>   Pascal compiler available on a new client's system(s) is
>   _much_ more problematic.

This is a bit of a circular argument. C is popular because it is portable
and it is portable because it is popular.

-- Bob Ammerman
RAm Systems

2009\02\12@110401 by olin piclist

Lee Jones wrote:
> You forgot one:
>
> 4) it's incredibly portable.  I've used C on machines with
>    8-bit to 36-bit to 64-bit wide native word sizes.  Pascal
>    has the constructs of a decent language but having a good
>    Pascal compiler available on a new client's system(s) is
>    _much_ more problematic.

But this is a result of already being popular, and can't be used to explain
how it got popular.  What you are really saying is that C has gained critical
mass so that it has to get used for reasons that have nothing to do with the
merits of the language itself.  I agree.

> C was the first language I found that got one crucial element
> right -- that every subroutine could return a value (part of
> the general case in C where everything is an expression).

This makes no sense.  Every language I've used has the ability to have a
subroutine return a value.  In Fortran and Pascal those types of subroutines
are called functions as opposed to subroutines (Fortran) and procedures
(Pascal).  If you want a function returning a value you can do that in
(most) any language.  Surely you have cases where you don't want a
subroutine returning a value.  In C you declare the return value VOID, in
Pascal you declare the routine PROCEDURE, in Fortran you declare it
SUBROUTINE.  I don't see how this is a C distinction.

> I
> was incredibly frustrated that I could not do that in COBOL
> (which was a major source of income at the time).

There are many other languages than C and COBOL.  I've never used COBOL so I
can't comment on it.

> I like C.  One primary advantage is the portability.

Again, this is only because it has become popular, not because it is good.

> And most programmers will write code irresponsibly in whatever
> language in which they are writing.

True, but C makes it particularly easy.

> I also want to comment on your characterization of C's designers
> as "hackers" in a very derogatory sense.  They extended the state
> of the art _at_the_time_

No they didn't.  Both Algol and Pascal predated C.  The concept of strong
type checking had already been tried and its advantages understood.  The
only reason for deliberately weakening type checking is that the designers
of C found it irritating.  The only reason they would think that is if they
didn't "get it" and were irresponsible programmers.

> The industry is wildly differnet now and is _BIG BUSINESS_.  A
> "better" language would now have to go through so many standards
> bodies & industry committees before enough companies deployed it
> so that it was ubiquitous & usefull that (even if it were started
> today) I doubt I would be alive to see it done.  Sad but true.

I agree.  It would take far more for a new compiled general purpose language
to gain popularity today than it did when C was developed.



2009\02\12@112746 by Wouter van Ooijen

> This is a bit of a circular argument. C is popular because it is portable
> and it is portable because it is popular.

True. The world is full of positive-feedback loops.

--

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu

2009\02\12@114036 by Larry Bradley

C was originally designed by two guys (Kernighan and Ritchie) at Bell Labs for
internal use. It was really assembler with curly braces - a very low-level,
let me get at the guts of the system language. *They* never intended it to be
as widespread as it is now. But it is, and all the improvements over the
years have still left it as a loose language.

As long as the users understand the issues, all is well. But I think a lot of
people approach C like they would Basic, for example - as a forgiving
language, one that does what you *intend* as opposed to one that does what
you *wrote*.

Personally, I hate it. I don't use it if I can avoid it - but there are a lot
of "C-like" languages out there, such as PHP, which I have to use for web
development stuff. I *hate* curly brackets. Give me a good old "Do Begin ...
End" any time :)

Larry

2009\02\12@114421 by Bob Ammerman

From: "Olin Lathrop" <olin_piclist@embedinc.com>

> Lee Jones wrote:
>>...snip...
>> I also want to comment on your characterization of C's designers
>> as "hackers" in a very derogatory sense.  They extended the state
>> of the art _at_the_time_
>
> No they didn't.  Both Algol and Pascal predated C.  The concept of strong
> type checking had already been tried and its advantages understood.  The
> only reason for deliberately weakening type checking is that the designers
> of C found it irritating.  The only reason they would think that is if
> they
> didn't "get it" and were irresponsible programmers.

While I agree that K&R didn't follow the state of the art when they designed
"C", I think a major reason for not doing strict type checking is that it
allowed them to make the compiler much simpler (for example treating any
non-zero value as true). Remember that "C" (like Unix) was originally
designed to run on "small" machines, for some truly interesting values of
"small".

-- Bob Ammerman
RAm Systems


2009\02\12@115001 by Alan B. Pearce

>> 4) it's incredibly portable.  I've used C on machines with
>>    8-bit to 36-bit to 64-bit wide native word sizes.  Pascal
>>    has the contructs of a decent language but having a good
>>    Pascal compiler available on a new client's system(s) is
>>    _much_ more problematic.
>
>But this is a result of already being popular, and can't be
>used to explain how it got popular.  What you are really saying
>is the C has gained critical mass so that it has to get used
>for reasons that have nothing to do with the merits of the
>language itself.  I agree.

Unfortunately economics got in the way of making one other language as
popular. I am thinking of what UCSD did with their tokenized Pascal
interpreter when the 8086 came along. There was a real chance to allow UCSD
Pascal code for the 8080 to be used on a new processor, by buying just the
16 bit interpreter, and they killed the market by insisting that you could
only get the interpreter by purchasing a full developer package costing
many US$. Up to that point Pascal was seen as an important language, with
much of the desirable strong typing that Olin wants.

The interest immediately dropped out of the market - and seemed to affect
all interest in Pascal as a result. Only Turbo Pascal seemed to blossom (out
of all the available Pascal packages of the time).

I wonder how many other potentially useful languages with the
characteristics Olin is suggesting died for similar reasons. A C compiler
was readily available - for the effort of typing it in (Small C as published
in Dr Dobbs Journal - still available from them on CD), originally designed
for the 8080, and ported to many other processors. If a similar project had
been done for more strongly typed languages, I wonder if they would have
survived.

2009\02\12@120311 by Wouter van Ooijen

> While I agree that K&R didn't follow the state of the art when they designed
> "C", I think a major reason for not doing strict type checking is that it
> allowed them to make the compiler much simpler (for example treating any
> non-zero value as true). Remember that "C" (like Unix) was originally
> designed to run on "small" machines, for some truly interesting values of
> "small".

So they did follow the state of the art, but for some different value of
"art" (that is, they optimized to a different goal than the
mainframe-oriented Algol, COBOL and Fortran people).

--

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu

2009\02\12@124523 by Ariel Rocholl

2009/2/12 Olin Lathrop <olin_piclist@embedinc.com>

> ... The
> only reason for deliberately weakening type checking is that the designers
> of C found it irritating.


Do you know this for a fact or is this a guess? I can think of reasons why
this may have happened different than that, and don't recall any text of K&R
stating this.


--
Ariel Rocholl
Madrid, Spain

2009\02\12@132053 by olin piclist

Ariel Rocholl wrote:
>> ... The
>> only reason for deliberately weakening type checking is that the
>> designers of C found it irritating.
>
> Do you know this for a fact or is this a guess? I can think of
> reasons why this may have happened different than that, and don't
> recall any text of K&R stating this.

It seems a logical conclusion given the evidence.  I do remember K or R
talking somewhere about a compiler not "getting in the way", which sounds
like what I mean.

Bob Ammerman pointed out that C was developed on a small machine and they
might not have had the room to do type checking.  Maybe there is some truth
there.  I do know other compilers existed for that machine.  I expect the
compiler itself wasn't pushing the machine to its limits.  Note that type
checking doesn't make the compiled code any bigger, which is usually the
issue on a small machine; it only requires a bit more work in the compiler
itself.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\12@134315 by William \Chops\ Westfield


On Feb 12, 2009, at 5:10 AM, Olin Lathrop wrote:

> The "absence of restrictions" is exactly what I'm talking about.

It's fascinating that the least "professionally regarded" languages  
have become the most successful (BASIC and C, right?)

Sometimes I think C succeeded solely on the basis of being able to do:
   myuart = (struct uartregs *)0xFFFF8C80;

The last time this discussion went around, you were advocating a  
Pascal that had been extended to support the various constructs  
required for embedded software, but I don't recall whether those  
extensions had been formalized into a standard specification or not?    
I'd certainly have trouble going out and BUYING Pascal for PIC, AVR,  
PC, Mac, ARM, freescale, MIPS, 8051, Renesas, Z80, etc.  Walter hits  
the nail on the head with "portability" arguments; it doesn't matter  
how wonderful a language might be if it's only implemented on a  
particular architecture (or a small number of architectures.)  (BASIC  
is pretty hopeless; I don't have any faith whatsoever that the  
features of one microcontroller basic will map easily onto another's.)

BillW

2009\02\12@140812 by William \Chops\ Westfield


>> 4) it's incredibly portable.  I've used C on machines with
>>  8-bit to 36-bit to 64-bit wide native word sizes.  Pascal
>>  has the constructs of a decent language but having a good
>>  Pascal compiler available on a new client's system(s) is
>>  _much_ more problematic.
>
> This is a bit of a circular argument. C is popular because it is  
> portable
> and it is portable because it is popular.

But...  It wasn't.  When I was in college (<1981), Pascal was much  
more popular than C.  I mean, C was a unix-only language for all  
practical purposes, and unix was HARD-TO-GET.

People were taught Fortran, PL/1, and APL (!) as general purpose  
languages.  The PDP11s ran DEC operating systems, and Pascal was the  
up-and-coming "teaching language."   Everyone wrote a pascal-like  
compiler in their compiler class.  The microcomputers all ran BASIC.  
There was *A* C compiler for CP/M, but you had to get it from one of  
those weird hole-in-the-wall companies.

When the IBM/PC came out (1983) it had Fortran and Pascal compilers  
long before there was a commercial C compiler.  In 1987 when cisco  
started up, commercial unix systems running on 68000 were just  
appearing, but the only C compiler around was based on "the portable C  
compiler", which was pretty sucky.  Borland C for the PC came out  
about the same time as the first versions of gcc for 68k.  Microsoft C  
probably showed up in about 86.

C was originally developed in 1972.  So that's something like *15  
years* that C, as a language, languished in relative obscurity while  
the best and the brightest put forth their alternative Excellent  
Languages: PL/1, Pascal, Common Lisp, Scheme, Smalltalk, Modula-2,  
APL, etc, etc, etc.  They taught the languages to students and they  
pushed them on their mainframes (I had some passing experience with  
the IBM "everything should be written in PL/1 instead of Fortran OR  
Cobol" agenda.)  And yet, today it appears that C won the war?  Why?  
I dunno.  But I think I can claim that it isn't entirely that C  
"succeeded."  All those other languages *FAILED*...  (There are  
probably important lessons about something in that.  But I don't think  
anyone learned them...)

BillW

2009\02\12@142924 by olin piclist

William "Chops" Westfield wrote:
> And yet, today it appears that C won the war?  Why?

I think part of it was that C tagged along with Unix.  Otherwise the "I
don't want the compiler telling me what to do" types had a lot of influence.
I don't remember type checking and careful programming being hot topics when
I was in school in the late 1970s.  A whole generation of programmers were
trained without software cleanliness and programming for maintainability
being much more than a mention.  And now it's too late.

> I dunno.  But I think I can claim that it isn't entirely that C
> "succeeded."  All those other languages *FAILED*...

Interesting point.  I guess C looks more appealing at first glance to
inexperienced programmers who are trying to get their homework done and keep
getting all those annoying compiler errors.  They're not mature enough to
realize that in the end not having the compiler tell you about such things
is a lot worse.

I remember in college a guy in the dorm came to me with a Fortran listing
that had more errors than source lines.  He said he'd been trying to figure
it out for hours.  In a few seconds I found that he misspelled INTEGER, the
effect of which rippled thru the code to cause most of the errors.  After I
pointed it out and he verified that was indeed the problem, he started
ranting about how you shouldn't have to declare variables at all.  I think
the problem is there are a lot more of that guy out there than responsible
programmers that understand a strict compiler helps them.

> (There are probably important lessons about something in that.
> But I don't think anyone learned them...)

The masses (of programmers) are asses?


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\12@151024 by Gerhard Fiedler

Olin Lathrop wrote:

> Geesh Gerhard, you just like arguing, don't you?

Umph... look who's talking :)


I never called anybody stupid. This is the part that you don't seem to
get.

> In any case, this is a silly analogy.  Switching computer languages is
> relatively easy.  

You may say so, but that doesn't make it so. The thing is... it is not,
or else you (for example) would have done it a long time ago. It is
really difficult to select a language that has decent compilers that
produce production-grade code and is supported on a majority of
processors and that isn't C.

Somebody working on multiple embedded platforms and thinking about
investing time and money in a language (and the associated environment)
probably would not choose Pascal -- even if that person really likes
Pascal. I know you didn't.

This is not about what would be best, but what /is/. And as much as you
dislike it, C is the most widely supported language for embedded
processors, just as mechanical parts measured in imperial units are the
most widely available ones (in the USA). Alternatives are available, but
I guess it is easier to buy metric mechanic parts in the USA than it is
to find a language that is not C and still is well (and professionally)
supported on a significant number of embedded processors.


> I think the real reasons C is so popular are:
>
> 1 - There was a whole generation of programmers that learned C as
> their first language, got used to it, got used to writing code
> irresponsibly as a result, and now anything else feels like it would
> be less comfortable and more restrictive.  

Are you sure about this? I think the majority of programmers that work
today never learned C. As Bill pointed out, it doesn't seem to have been
popular in US colleges either. Which generation is it that you refer to?

> 2 - Most programmers are bad and will gladly be irresponsible when
> given the chance.  Such people actually like C because it doesn't get
> in their way as they see it.

I think this is plain wrong. You like to say so because you don't like
C, but again this doesn't make it true. In order to get anything to work
in C you need to program with a certain discipline (as you point out in
other occasions when this suits your argument). IMO irresponsible
programmers don't like this; they like more forgiving languages better.
Nobody /likes/ to shoot themselves in the foot, and programming
irresponsibly in C invariably ends up as a shot in ones foot.

> 3 - Anything else is not what people know, so they naturally rebel
> against the alternatives without really understanding them.  Don't
> underestimate this powerful force of human nature.  People feel
> threatened by stuff they don't know.

Before C, TurboPascal was the big thing on the PC. AFAIR it was the
first IDE that was in broad use on the PC. (By "PC" here I mean the
Microsoft platform, with its large user base.) Only after the huge success
of TurboPascal and later Visual Basic did C++ become somewhat of a success
on the PC platform.
Resistance to change can't be the reason. (Compared to the alternatives,
much of what you dislike about C is also valid for C++, so the argument
is very similar.)

Gerhard

2009\02\12@153054 by M. Adam Davis

I'm sorry if this is duplicate - I don't have time to read the other
answers, but I wanted to make sure you had a few tools as you looked
for information:

The key term here is variable promotion or conversion.

In C, typically, variables are promoted to the greatest type in the
equation, where greatest type is defined as follows:

char --> short --> int --> long --> float --> double --> long double

This means that in your example two conversions should have happened.
The 8-bit value would be converted to floating point, the expression
would have been evaluated, and then the result converted to the type of
i.  However, in an embedded compiler the promotion is often different, and
you MUST consult your manual if you choose not to be explicit.  There
are generally good reasons to not follow the standard on 8 bit
processors, so look into it.

Implementations vary though, especially in the embedded world.
Casting is about the only thing you can do and expect it to come out
as desired if you are mixing types.

That being said, it's far better to avoid mixing types, and when
necessary you should always be explicit and cast everything.  If you
find you are constantly mixing types you may want to re-think your design.

This seems a pretty reasonable and straightforward explanation of
variable promotion:
http://icecube.wisc.edu/~dglo/c_class/promo_conv.html

-Adam

On Tue, Feb 10, 2009 at 11:49 PM, Forrest W Christian
<forrestc@imach.com> wrote:

2009\02\12@153137 by Nicola Perotto



Olin Lathrop wrote:
> The masses (of programmers) are asses?
>
>
>  
In my experience... yes!


2009\02\12@153140 by Walter Banks



Bob Ammerman wrote:


Historically C evolved out of Martin Richards's BCPL compiler and the
derivative B compiler. This was at a time that one of the hot language
research topics was simple effective machine independent languages.
The other active projects at the time were forth, macro processor work
for machine independent software ML1 STAGE2 FLUB M4 some of the
researcher were Doug McIlroy, Peter Brown, Bill Waite and Poole
who wrote STAGE2. There was a lot of other things going on at the same
time Calvin Moores wrote trac as a thesis and lost out by a few months
(to Doug McIlroy) the independent discovery of the power of recursive
macros. There was other processor independent recursive functional languages
lisp. LISP was ported to the PDP 1 as a proof of concept. (I have the
PDP1 source if anyone is interested)

In the processor independent low level languages C won because it was
a better implementation language in more cases than the rest. Ron Cain showed
that C could be ported simply to small processors.

Ritchie had a goal of creating an implementation language and did what
he set out to do. Ritchie was not an "irresponsible programmer".
For the last 20 years ISO has been standardizing C. It has been a full
debate.

Creating an implementation language to replace C will technically be a
significant task. ISO WG14 has about a 1000 internal papers that
document language design issues for C alone. Each of these represents
a design consideration that has to be addressed. Many of C's choices
can be argued and alternative choices selected.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com



2009\02\12@153457 by Walter Banks



Olin Lathrop wrote:

> No they didn't.  Both Algol and Pascal predated C.  The concept of strong
> type checking had already been tried and its advantages understood.  The
> only reason for deliberately weakening type checking is that the designers
> of C found it irritating.  The only reason they would think that is if they
> didn't "get it" and were irresponsible programmers.

Harsh words for fundamental researchers for an implementation language that
succeeded where many failed. DMR posts emails on several newsgroups;
ask him if that was why he made the choices he did.

w..
.

2009\02\12@153853 by sergio masci



On Thu, 12 Feb 2009, Olin Lathrop wrote:

> No they didn't.  Both Algol and Pascal predated C.  The concept of strong
> type checking had already been tried and its advantages understood.  The
> only reason for deliberately weakening type checking is that the designers
> of C found it irritating.  The only reason they would think that is if they
> didn't "get it" and were irresponsible programmers.
>

Actually, the story goes, 'C' came from 'B' which came from BCPL.

BCPL has no types. It only knows about integers.

'B' (again according to the stories) was an interpreted language - which
kind of explains why you'd want operators such as '+=' and '++'. Being
interpreted would also add a runtime overhead to type checking.

'C' (again according to the stories) was developed as a real compiler
because 'B' ran too slowly.

With all this in mind, it's understandable that 'C' started out life
without strong type checking.

So, apart from type checking what else would your ideal language have?


Regards
Sergio Masci

2009\02\12@154507 by olin piclist

sergio masci wrote:
> So, apart from type checking what else would your ideal language have?

"Other than that Mrs Lincoln, how did you like the play?"

I've already pointed out a few other things, like everything being an
expression that has a value.  I'm still waiting for clarification from
Walter about why this lets a compiler skip some branching instructions.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\12@155256 by Walter Banks



Olin Lathrop wrote:

> No they didn't.  Both Algol and Pascal predated C.  The concept of strong
> type checking had already been tried and its advantages understood.  The
> only reason for deliberately weakening type checking is that the designers
> of C found it irritating.  The only reason they would think that is if they
> didn't "get it" and were irresponsible programmers.

Their alternative was assembly language for implementation. They
were not trying to create an application language or a language
that was designed to compete with application languages.
Languages like processors should be selected for the suitability
to the application

> > The industry is wildly differnet now and is _BIG BUSINESS_.  A
> > "better" language would now have to go through so many standards
> > bodies & industry committees before enough companies deployed it
> > so that it was ubiquitous & usefull that (even if it were started
> > today) I doubt I would be alive to see it done.  Sad but true.
>
> I agree.  It would take far more for a new compiled general purpose language
> to gain popularity today than it did when C was developed.

Maybe not. It would take a lot to displace C but many successful
new languages have been introduced in the last few years. They share
many of the same characteristics of C, to fill a need that has not been
properly addressed before. One example is JavaScript, a powerful
interpreted functional language widely used in internet pages and
powerful enough to implement an optimizing compiler. Twenty lines
or so will implement a 4 function calculator with display graphics.

Another example is the block programming language that is described
in IEC61499 that makes a single application able to be distributed across
several processors.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com



2009\02\12@160551 by Forrest W. Christian

One thing which appears to be missing from this whole discussion is one
key point:

Languages are tools to get work done.   I want my chosen language to
help me get the work done as efficiently as possible, and of course
correctly.

I prefer that I can convey what I need to get done to the language
quickly and easily.   Often, strongly typed languages add a whole
additional layer of complexity that quite frankly isn't needed.  
Writing in Pascal versus C is a good example of what I am talking about,
although perhaps C took it too far toward simplicity and perhaps into
danger.   In pascal you define everything and doing anything weird adds
a whole layer of complexity just to convince the language that you do in
fact want to do what you are asking it to - "yes I do want to increment
the 'ascii value' of the character in the third position on the string
by one".

My favorite language to write in is Perl.   What is interesting is that
perl is as close to a typeless language as you can get, and it makes
well-defined decisions as to what you really want.   For instance if you
use an arithmetic operator on two variables it will add them as
numbers.   If one of the variables is a string, it will convert it to
the numeric value first.

If instead you wanted to concatenate two strings, you need to use the
concatenation operator.   Concatenate two numeric values?  Well you get
the result that you would if you convert each to a string and then
concatenate them.

And so on.

I am not convinced by the argument that a more rigid language produces
better code.  Java is arguably much more rigid than C, and I'll tell
you that there is *lots* of buggy Java out there.

My original post was sort of asking what C did to figure out how it was
going to produce the results it did, specifically when doing math on
mixed types (and for the record, I typically use all integers, and very
rarely have to mix types other than perhaps integers of different
lengths).   It's important that a language does the right thing, and
that the right thing is well defined.   I think some of Olin's argument
is that it is impossible to ensure that without being lots more verbose
toward the compiler.  My argument is that this is a bunch of bull,
although I agree that C perhaps could have been a bit better in some
regards as to either requiring additional verbosity, or doing things a
bit smarter at times.   That said, most modern compilers do a fairly
good job of telling you when they think something is weird.   "What do
you mean you want to put an integer in a pointer?" or so on.

-forrest

2009\02\12@161708 by David Harris

Walter Banks wrote:
>
> Olin Lathrop wrote:
>
>  
>> No they didn't.  Both Algol and Pascal predated C.  The concept of strong
>> type checking had already been tried and its advantages understood.  The
>> only reason for deliberately weakening type checking is that the designers
>> of C found it irritating.  The only reason they would think that is if they
>> didn't "get it" and were irresponsible programmers.
>>    
>
> Harsh words for fundamental researchers for an implementation language that
> succeeded where many failed. DMR posts emails on several newsgroups
> ask him if that was why he made the choices they did.
>
> w..
> .
>  
Yes, I agree, pretty harsh words.  Remember Pascal was developed on a
CDC 6000 in 1969-70, while C was developed on PDP7 and PDP11s from
1969-1973.  There are orders of magnitude of difference in size there,
and really overlap of timeframes, too.  Also, the authors were busy
developng Unix at the same time -- quite the accomplishment!  

David

2009\02\12@162834 by olin piclist

Forrest W. Christian wrote:
> Often, Strongly typed languages add a whole
> additional layer of complexity that quite frankly isn't needed.

See folks, this is exactly the kind of irresponsible and incorrect attitude
I've been talking about.  I really didn't expect anyone on the PICList to
advocate it though.

Yes, strong type checking might cost a few extra keystrokes in the
relatively rare cases where you want to do something unusual.  That's
a small fraction of the cost of the compiler not catching something you
didn't mean.

> "yes I do want to increment the 'ascii value' of the character in the
> third position on the string by one".

 str[3] := char(ord(str[3]) + 1);

Big deal.  If it weren't for good type checking this could be written

 str[3] := str[3] + 1;

That saves 11 characters, wow.  If you can't see how the 11 characters are
worth it in the rare case you really do want to do this compared to the
debug time when you accidentally mixed character and integer types by
mistyping a variable, grabbing the wrong array, or whatever, then frankly
you shouldn't be allowed near a keyboard.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\12@171206 by Forrest W. Christian

Olin Lathrop wrote:
> Forrest W. Christian wrote:
>  
>> Often, Strongly typed languages add a whole
>> additional layer of complexity that quite frankly isn't needed.
>>    
>
> See folks, this is exactly the kind of irresponsible and incorrect attitude
> I've been talking about.  I really didn't expect anyone on the PICList to
> advocate it though.
>  
Actually I think you completely miss the point.  If the compiler is 100%
aware of the type of the argument, it can do the correct thing.   The
programmer learns how to specifically tell the compiler exactly what is
wanted.

A major problem in C is that operators often have multiple, differing
functions, and C requires the programmer to carry around too much
information in their head, which leads to errors.   Is it a pointer or
an actual variable?  If it's a pointer, what structure is it pointing
to?   Was it even initialized?

One way to solve this is to strongly type the language, which leads to
situations where you spend all your time telling the compiler to convert
between this and this type and is a pain to program in.  I suspect that
is one of the reasons why C is so popular.   I think this is the stupid
way to solve it.

The other way is to move the intelligence into the compiler.   Fix the
operators so it is clear what is wanted.  Eliminate the necessity to
deal with type, since type becomes irrelevant.  Make it increasingly
difficult for the programmer to do something wrong - but not by forcing
the programmer to think about type at every stage of the game, but
instead to think about exactly what they want to do with the data they
have.  

If I attempt to multiply two strings together, the compiler can react in
three ways:

1) Perform the multiplication on the raw data of the string, that is the
ascii value - this is arguably what would happen in some cases in C,
since a string is just an array of int8's.
2) At compile time, realize that you can't multiply two strings and
throw a compile error.  This is the "pascal" way.
-or-
3) realize that strings need to be converted to numbers first before
they can be multiplied, so do the conversion, complete the
multiplication and save the result as a number.

I agree that #1 is the worst possible outcome.   You are arguing for
#2.  I would rather have #3.

-forrest

2009\02\12@180651 by olin piclist

Forrest W. Christian wrote:
> One way to solve this is to strongly type the language, which leads to
> situations where you spend all your time telling the compiler to
> convert between this and this type and is a pain to program in.

You are way overstating the issue.  Well thought out programs rarely need
unusual type conversions.  The extra syntax to say "yes I know what I'm
doing in this special case" is so minimal as to be irrelevant.  I don't have
any hard numbers, but I suspect it happens less than once in a thousand
lines.  I routinely write whole programs where it never comes up.  This
issue is just a smoke screen.


#3 has the drawback of not catching unintentional conversions.  It is good
to have a language where likely mistakes cause syntax errors instead of
unwanted automatic conversions.  Remember that maintenance is the big cost
of software, not the initial writing.  Adding a few inline functions or
whatever is trivial when first writing the code.  Chasing down runtime bugs
later when a change is made with a datatype a bit misunderstood is a lot
more expensive.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\12@185817 by Rolf

Forrest W. Christian wrote:
> Olin Lathrop wrote:
>  
>> Forrest W. Christian wrote:
>>  
>>    
>>> Often, Strongly typed languages add a whole
>>> additional layer of complexity that quite frankly isn't needed.
>>>    
>>>      
>> See folks, this is exactly the kind of irresponsible and incorrect attitude
>> I've been talking about.  I really didn't expect anyone on the PICList to
>> advocate it though.
>>  
>>    
> Actually I think you completely miss the point.  
>  
[snip]

The language that does that is called perl ;-)   (admittedly, it's
interpreted, not compiled).

Now, we could talk for years about the typing nature of perl, and still
not come to a resolution (B.T.W, I love perl for certain things...
there's nothing better for a quick hack).

Rolf

2009\02\12@193748 by Forrest W. Christian

I should first clarify one of the three options I mentioned in my
original email:

"1) Perform the multiplication on the raw data of the string, that is
the ascii value - this is arguably what would happen in some cases in C,
since a string is just an array of int8's.".

I think actually that C may do something even stranger...

char string1[30];
char string2[30];
int result;

main()
{
  result=string1*string2;
}

result will probably be the multiplication of the *address* of string1
and string2.   Which is almost certainly what you don't want.

But to respond to Rolf's reply:

Rolf wrote:
> The language that does that is called perl ;-)   (admittedly, it's
> interpreted, not compiled).
>
> Now, we could talk for years about the typing nature of perl, and still
> not come to a resolution (B.T.W, I love perl for certain things...
> theres nothing better for a quick hack).
Yep, I know about perl ...   And although typeless is probably not
accurate, strongly-typed may not be also.   I would probably say it's
more like "the type of the data in the variable (string, integer, float)
is irrelevant to the programmer in most cases".

-forrest

2009\02\12@222306 by Tony Smith

> > 1) Perform the multiplication on the raw data of the string, that is
> > the ascii value - this is arguably what would happen in some cases in
> > C,
> > since a string is just an array of int8's.
> > 2) At compile time, realize that you can't multiply two strings and
> > throw a compile error.  This is the "pascal" way.
> > -or-
> > 3) realize that strings need to be converted to numbers first before
> > they can be multiplied, so do the conversion, complete the
> > multiplication and save the result as a number.
> >
> > I agree that #1 is the worst possible outcome.   You are arguing for
> > #2.  I would rather have #3.
>
> #3 has the drawback of not catching unintentional conversions.  It is good
> to have a language where likely mistakes cause syntax errors instead of
> unwanted automatic conversions.  Remember that maintenance is the big cost
> of software, not the initial writing.  Adding a few inline functions or
> whatever is trivial when first writing the code.  Chasing down runtime bugs
> later when a change is made with a datatype a bit misunderstood is a lot
> more expensive.


I'll take #2 as well.  #3 leads you down the path where VB ended up, after
slowly drifting from #2 to #3.  Consider:

       X = 1 + "2"
       X = 1 & "2"

Without knowing what X is defined as, the result could be 3, "3", 12 or
"12".  Fun!

No thanks.

Actually, I could be wrong about the results, it's hard to remember exactly
what it does these days.  As I said, no thanks.

Tony


2009\02\13@005719 by Vis Naicker

I have always felt ... that C is a language of obscure bugs. It's been
hacked, it's been patched, but there are times that I will mean to do
something innocent and then get bitten. Of course, it will be my fault, and
hours will be lost trying to figure out what went wrong.

There are tools, and the compiler warnings that may point out some of my
shortcomings. I believe these shortcomings are due to the language itself.
The language is yesterday's tool and should take retirement in the embedded
world as well.

2009\02\13@011757 by William \Chops\ Westfield

face picon face

>>> 2) At compile time, realize that you can't multiply two strings and
>>> throw a compile error.  This is the "pascal" way.

Even C does not allow you to multiply pointers.  Unlike a lot of the
type checking that is relatively recent, I'm pretty sure that this
has always been the case... (Hmm.  cc5x accepted it.)

% cat >foo.c
char string1[30];
char string2[30];
int result;

main()
{
  result=string1*string2;
}
% gcc foo.c
foo.c: In function `main':
foo.c:7: error: invalid operands to binary *

2009\02\13@013905 by M. Adam Davis

face picon face
Isn't that true of any language?  If you aren't aware of the
intricacies of the language, then you're going to write code that you
can't expect to work the way you want it to.

C is just a tool, nothing more, nothing less.  For some jobs it's a
reasonable choice, and for other jobs it's not.

But to blame the hammer because you either bent the nail (you were
holding it wrong) or because you're trying to drive a screw with it is
shortsighted.

If you aren't proficient in C, then yes, you're going to bruise your
thumb.  Now replace the "C" with any other language.  Some languages
have shorter learning curves than C, and thus are more immediately
accessible to people.  If that's what you need, then that is a better
choice for you.  But what might be the best choice for you does not
make it a general case for everyone or every project.

But no one is forcing you to program in C.  You can program in
whatever language you prefer.  Keep in mind, though, that given the
ease with which a C compiler can be brought up on a new part, that new
microcontrollers are coming out all the time, and that as a low level
language it's suited for standalone projects (no OS or significant
drivers), it's unlikely that you'll see C supplanted.  If your
preferred language isn't available, you may be the one who will
have to port it.

-Adam

On Fri, Feb 13, 2009 at 12:56 AM, Vis Naicker <visn@wbs.co.za> wrote:
> I have always felt ... that C is a language of obscure bugs. It's been
> hacked, it's been patched, but there are times that I will mean to do
> something innocent and then get bitten. Of course, it will be my fault, and
> hours will be lost trying to figure out what went wrong.
>
> There are tools, and the compiler warnings that may point out some of my
> shortcomings. I believe these shortcomings are due to the language itself.
> The language is yesterday's tool and should take retirement in the embedded
> world as well.
>
> -

2009\02\13@041423 by Alan B. Pearce
face picon face
>After I pointed it out and he verified that was indeed the
>problem, he started ranting about how you shouldn't have
>to declare variables at all.

I guess he stuck with Basic !!! ;))))

2009\02\13@042629 by Alan B. Pearce

face picon face
>Also, the authors were busy developng Unix at the
>same time -- quite the accomplishment!


I always understood that the original C implementation was written as a tool
to enable the writing of Unix, rather like Donald Knuth designed and wrote
TeX when he was having difficulty typesetting his math book series.

2009\02\13@044326 by Gerhard Fiedler

picon face
Alan B. Pearce wrote:

>> After I pointed it out and he verified that was indeed the problem,
>> he started ranting about how you shouldn't have to declare variables
>> at all.
>
> I guess he stuck with Basic !!! ;))))

Or Python, or any number of languages... but not with C. If you can make
a point out of such stories, it is that such types don't get a C program
running -- not that they choose C.

Gerhard

2009\02\13@044725 by Gerhard Fiedler

picon face
Walter Banks wrote:

> Olin Lathrop wrote:
>
>> No they didn't.  Both Algol and Pascal predated C.  The concept of
>> strong type checking had already been tried and its advantages
>> understood.  The only reason for deliberately weakening type
>> checking is that the designers of C found it irritating.  The only
>> reason they would think that is if they didn't "get it" and were
>> irresponsible programmers.
>
> Harsh words for the fundamental researchers behind an implementation
> language that succeeded where many failed.  DMR posts email on
> several newsgroups; ask him if that was why he made the choices he
> did.

Did you just suggest bringing facts into the rant? That's almost a
sacrilege :)

Gerhard

2009\02\13@050737 by Gerhard Fiedler

picon face
Forrest W. Christian wrote:

> I think actually that C may do something even stranger...
>
> char string1[30];
> char string2[30];
> int result;
>
> main()
> {
>    result=string1*string2;
> }
>
> result will probably be the multiplication of the *address* of string1
> and string2.   Which is almost certainly what you don't want.

Unless you are talking about a bastard implementation (which then is not
really C), this is not allowed in C, and AFAIK has never been (or at
least not for a long time).

Gerhard

2009\02\13@051006 by Gerhard Fiedler

picon face
Olin Lathrop wrote:

> Forrest W. Christian wrote:
>> Often, Strongly typed languages add a whole additional layer of
>> complexity that quite frankly isn't needed.
>
> See folks, this is exactly the kind of irresponsible and incorrect
> attitude I've been talking about.  I really didn't expect anyone on
> the PICList to advocate it though.

Have you ever learned to program in a ducktyped language? Not all things
are the same as they were when Pascal and C were invented, and
programming languages are no exception.

Gerhard

2009\02\13@051855 by Gerhard Fiedler

picon face
M. Adam Davis wrote:

> That being said, it's far better to avoid mixing types, and when
> necessary you should always be explicit and cast everything.  If you
> find are constantly mixing types you may want to re-think your
> design.

In more complex calculations on small embedded systems, I find that
mixing types is quite common. I tend to choose the smallest required
type for a variable or a single calculation, and if some parts of the
complete calculation have to be in 32 bit, they will be, and some others
in 16 bit, and so on. Between the different variables and calculations,
conversions up and down are frequent.

Of course you could contend that if there is a calculation that has to
be in 32 bit, I should do all the related calculations also in 32 bit,
even if they don't have to be 32 bit. In a typical PC application, this
is what I would do -- but not generally in an embedded application on an
8 bit processor.

I agree that using explicit conversions is the way to go, though. FWIW,
any decent C compiler will issue a warning every time a conversion may
lose information. That is, an implicit conversion from any of the float
formats to any of the integer formats will cause a warning (loss in
range). So will an implicit conversion from a larger integer format to a
smaller integer format (loss in both range and significant digits). And
so will some of the conversions from one of the larger integer formats
to a float format (loss in number of significant digits).

So there isn't really a problem with the implicit conversions -- if
using a decent compiler. Or a lint program, if the compiler isn't decent
enough.
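As a concrete sketch of those warnings (assuming a gcc-style compiler, where the relevant flag is -Wconversion), both implicit conversions below would typically be flagged, and the final cast marks the narrowing as intentional:

```c
#include <stdint.h>

uint8_t narrow(uint16_t wide, float scale)
{
    uint8_t  truncated = wide;          /* implicit 16 -> 8 bits: possible loss of range */
    uint16_t scaled    = wide * scale;  /* implicit float -> integer: possible loss of range */

    /* The explicit cast documents that this narrowing is intended. */
    return (uint8_t)(scaled + truncated);
}
```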

Gerhard

2009\02\13@052359 by Wouter van Ooijen

face picon face
>>> After I pointed it out and he verified that was indeed the problem,
>>> he started ranting about how you shouldn't have to declare variables
>>> at all.
>> I guess he stuck with Basic !!! ;))))
>
> Or Python, or any number of languages... but not with C.

Languages exist that both
- do not require explicit declarations, yet
- are (compile-time!) type correct

Sadly, all such languages I know are in the functional area, which makes
them not easily suited to embedded work.

--

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu

2009\02\13@080557 by Walter Banks

picon face


> C is just a tool, nothing more, nothing less.  For some jobs it's a
> reasonable choice, and for other jobs it's not.
>
>  Keep in mind, though, that due to the
> ease which a C compiler is brought up to a new product, that new
> microcontrollers are coming out all the time, and that as a low level
> language it's suited for standalone projects (no OS or significant
> drivers) then it's unlikely that you'll be seeing C supplanted.

Both good points. As a language C has been flexible enough to
survive the active innovation of 30 years of processor architecture
changes.

WG14, the ISO body responsible for C, is supposed to document
standard practice (as in, there is already somebody doing it).  It
got ahead of itself in some areas of C99 by moving forward to
document what were going to be choices that compiler developers
were going to face.  Many people on WG14 are compiler developers,
and we agreed at the time that these changes would be reasonable.

This resulted in C99 being slow to gain widespread adoption.
Strictly documenting current practice generally means that new
features are functionally similar but not necessarily syntactically
similar across released compilers.  In Byte Craft compilers an
example of this is that size-specific variables support both the C99
naming conventions and Byte Craft's earlier naming conventions.

C99 brought many long needed changes, some small like // comments,
and some small but very significant to small-processor embedded
systems, like the "as if" rule that is now widely applied to
implementations: in most areas the results must be "as if" they were
computed as specified.  In the area of integer promotion in
calculations, tight optimized code can be generated with diverse
data types.
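The integer-promotion point can be sketched in a couple of lines (standard C; whether an 8-bit compiler actually emits a single byte-wide add is the implementation's business under the "as if" rule):

```c
#include <stdint.h>

uint8_t sum8(uint8_t a, uint8_t b)
{
    /* C's integer promotions widen both operands to int before the
       add, so a + b cannot overflow in the abstract machine.  Since
       only the low 8 bits survive the cast, the "as if" rule lets an
       8-bit target compute the whole thing with one byte addition. */
    return (uint8_t)(a + b);
}
```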

There have been downsides to having C be so popular.  It has meant
that instruction set design is in some broad way influenced by the
functions and features of C.  Wouter has periodically mentioned
functional languages, but that is one area where instruction set
design has not proceeded at the pace it could have.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com








2009\02\13@081248 by olin piclist

face picon face
M. Adam Davis wrote:
> Isn't that true of any language?  If you aren't aware of the
> intricacies of the language, then you're going to write code that you
> can't expect to work the way you want it to.

This is true to some extent.  The problem with C is that it is exceptionally
"intricate" as you put it.  There are a lot more gotchas due to its bad
design than there needed to be for a language intended to be used in the
same circumstances.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\13@081513 by olin piclist

face picon face
Gerhard Fiedler wrote:
> Have you ever learned to program in a ducktyped language?

I can't say as I have no idea what "ducktyped" means.

********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\13@083014 by Byron Jeff

flavicon
face
On Thu, Feb 12, 2009 at 02:29:51PM -0500, Olin Lathrop wrote:
> William Chops" Westfield" wrote:

[snip]

> > I dunno.  But I think I can claim that it isn't entirely that C
> > "succeeded."  All those other languages *FAILED*...
>
> Interesting point.  I guess C looks more appealing at first glance to
> inexperienced programmers who are trying to get their homework done and keep
> getting all those annoying compiler errors.  They're not mature enough to
> realize that in the end not having the compiler tell you about such things
> is a lot worse.

That can't be it. C is like playing the old Adventure game...

"You're in a twisty maze of passages, all alike..."

There's a lot of confusion about many C constructs. It's not a good place
for novice programmers to be.

For example with parameter passing. Everything is pass by value, except for
arrays which are pass by value but the address is passed (by definition).
However, structures are pass by value (for real) even though they can be
just as large as arrays (which is the nominal reason that the address of an
array is passed).

It's tough keeping all of this straight in your head. I finally settled on
a technique where I always declared a structure as a single element array
along with a pointer. For example:

typedef struct {
  double x,y;
} point_t[1], *point_p;

So declaring a point_t would allocate a structure but the name would refer
to the address of the struct.

> I remember in in college a guy in the dorm came to me with a Fortran listing
> that had more errors than source lines.  He said he'd been trying to figure
> it out for hours.  In a few seconds I found that he misspelled INTEGER, the
> effect of which rippled thru the code to cause most of the errors.  After I
> pointed it out and he verified that was indeed the problem, he started
> ranting about how you shouldn't have to declare variables at all.  I think
> the problem is there are a lot more of that guy out there than responsible
> programmers that understand a strict compiler helps them.

Of course if he got his way (which Perl and Python do allow even today) a
simple misspelling of a variable name can lead to disaster.

I always teach my students to address only the first error of a program
because of error cascade. I illustrate the point by taking a working
program and removing something like a semicolon near the top. It often goes
from no errors to dozens.

BAJ

2009\02\13@083810 by Bob Ammerman

picon face
{Quote hidden}

Actually, C will generate an error for this, since multiplication of two
pointers is not defined.

-- Bob Ammerman
RAm Systems

2009\02\13@085629 by olin piclist

face picon face
Byron Jeff wrote:
> I always teach my students to address only the first error of a program
> because of error cascade.

I do this too.  In my Pascal translator I went further in that it only
reports the first error then exits (with non-zero exit status of course).
That keeps clutter from obscuring the first error or scrolling it off the
screen.  After I fix the first error, other errors might be in different
locations anyway even if they are independent.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\13@094138 by Michael Rigby-Jones

picon face
> -----Original Message-----
> From: piclist-bounces@mit.edu [piclist-bounces@mit.edu] On Behalf
> Of Olin Lathrop
> Sent: 12 February 2009 23:07
> To: Microcontroller discussion list - Public.
> Subject: Re: [PIC] C arithmetic conversion/integer promotion/etc.
>
> Forrest W. Christian wrote:
> > One way to solve this is to strongly type the language, which leads
to
> > situations where you spend all your time telling the compiler to
> > convert between this and this type and is a pain to program in.
>
> You are way overstating the issue.  Well thought out programs rarely
> need unusual type conversions.  The extra syntax to say "yes I know
> what I'm doing in this special case" is so minimal as to be
> irrelevant.  I don't have any hard numbers, but I suspect it happens
> less than once in a thousand lines.  I routinely write whole programs
> where it never comes up.  This issue is just a smoke screen.

I'm betting you write programs for PCs, and possibly other large
machines, rather than using a compiler to generate PIC code.  IME type
conversions are far more frequently required on resource limited micros
such as a PIC; how often do you need to perform a calculation where the
result is smaller than the input values?  On a modern PC it doesn't
matter if you use a 32 bit integer to hold an 8 or 16 bit result, the
extra bytes of wasted memory are a drop in the proverbial ocean.  On a
PIC the situation is very different.
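A typical PIC-flavoured example of the point above: the inputs and the result are 8 bits, but the intermediate sum needs 9, so one operand is widened explicitly (average() is an illustrative name):

```c
#include <stdint.h>

uint8_t average(uint8_t a, uint8_t b)
{
    /* The cast documents that the sum must be carried in at least
       16 bits; the usual arithmetic conversions then widen the other
       operand to match.  The final cast narrows the result back to
       the 8 bits it is known to fit in. */
    return (uint8_t)(((uint16_t)a + b) / 2u);
}
```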

Mike

=======================================================================
This e-mail is intended for the person it is addressed to only. The
information contained in it may be confidential and/or protected by
law. If you are not the intended recipient of this message, you must
not make any use of this information, or copy or show it to any
person. Please contact us immediately to tell us that you have
received this e-mail, and return the original to us. Any use,
forwarding, printing or copying of this message is strictly prohibited.
No part of this message can be considered a request for goods or
services.
=======================================================================

2009\02\13@110747 by Tamas Rudnai

face picon face
> I do this too.  In my Pascal translator I went further in that it only
> reports the first error then exits

Is it also for lexical analysis or only for syntax checking?

Tamas


On Fri, Feb 13, 2009 at 1:58 PM, Olin Lathrop <olin_piclist@embedinc.com> wrote:

{Quote hidden}

> -

2009\02\13@114237 by Alex Harford

face picon face
On Fri, Feb 13, 2009 at 5:17 AM, Olin Lathrop <olin_piclist@embedinc.com> wrote:
> Gerhard Fiedler wrote:
>> Have you ever learned to program in a ducktyped language?
>
> I can't say as I have no idea what "ducktyped" means.

If it walks like a duck, and quacks like a duck, then it is a duck.
:-P  Translated to programming, this means that if it has a walk()
method and a quack() method, then it's a Duck object.

http://en.wikipedia.org/wiki/Duck_typing

2009\02\13@121021 by olin piclist

face picon face
Tamas Rudnai wrote:
>> I do this too.  In my Pascal translator I went further in that it only
>> reports the first error then exits
>
> Is it also for lexical analysis or only for syntax checking?

It is for any error.  Not only does this keep the error output uncluttered
with cascading errors, but it's also easier to write the translator that
way.  There is really no good way to proceed reasonably after an error.  The
only point would be to list all errors, which is what I don't want anyway.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\13@121623 by Tony Smith

flavicon
face
> >After I pointed it out and he verified that was indeed the
> >problem, he started ranting about how you shouldn't have
> >to declare variables at all.
>
> I guess he stuck with Basic !!! ;))))


Even Basic (at least from MS QuickBasic) has the option of forcing you to
declare your variables, and it's turned on by default.  Why any modern
language wouldn't do the same is a mystery to me.

Tony

2009\02\13@134240 by Gerhard Fiedler

picon face
Olin Lathrop wrote:

> Gerhard Fiedler wrote:
>> Have you ever learned to program in a ducktyped language?
>
> I can't say as I have no idea what "ducktyped" means.

Really? That's surprising. I thought that we were talking here about the
benefits of different approaches to typing, and that it was the rule
here that responsible, intelligent participants are expected to do a
minimum of homework before posting.

Anyway... here you go <http://en.wikipedia.org/wiki/Duck_typing>

Gerhard

2009\02\13@142607 by sergio masci

flavicon
face


On Thu, 12 Feb 2009, Olin Lathrop wrote:

> sergio masci wrote:
> > So, apart from type checking what else would your ideal language have?
>
> "Other than that Mrs Lincoln, how did you like the play?"
>
> I've already pointed out a few other things, like everything being an
> expression that has a value.

I don't like that either but seriously what else? This isn't a trick
question, I really would like to know. "I don't like this or that" is not a
valid answer; what in your opinion would be a great must-have ability?

Regards
Sergio Masci

2009\02\13@170138 by olin piclist

face picon face
sergio masci wrote:
>> I've already pointed out a few other things, like everything being an
>> expression that has a value.
>
> I don't like that either but seriously what else? This isn't a trick
> question I really would like to know. "I don't like this or that" is
> not a valid answer, what in you opinion would be a great must have
> ability?

I know I've listed various bad design choices of C already.  Loose type
checking and the fact that everything has a value and can therefore
syntactically be used as an expression are quite major issues.  Asking "what
else" after those are mentioned seems to miss the gravity of those points.
Those alone should be show stoppers.

Beyond those serious drawbacks there are a bunch of things I don't like
about C that are more preference issues.  In general C requires more
deciphering than reading compared to other languages.  While all the
information is there and everyone should know all the nuances of the C spec,
the reality is that people get tired, make mistakes, get confused, etc.  C
looks more like it was designed to exacerbate than to mitigate human issues.
Special characters should be kept to a minimum, with keywords used instead.
I would rather the programmer be forced to spell out individual steps than
encourage them to be combined on a single line.  After all, we don't pay for
software by the line, but by the time it takes to maintain it.

Then there are missing capabilities which make the programmer juggle things
in his mind that the compiler should be able to do without cost in output
code efficiency.  For example, C lacks set handling, pass by reference, and
array index ranges that start at other than 0, just to name a few that I
could list quickly.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\13@184659 by William \Chops\ Westfield

face picon face

On Feb 13, 2009, at 9:14 AM, Tony Smith wrote:

> Why any modern language...

We ought to be careful to distinguish complaints about modern versions  
of languages from complaints about the original versions of the  
languages.

I don't particularly remember anyone being wildly enthusiastic about  
C's lack of compile-time type checking, and in fact additional type  
checking has been one of the major additions to C as time has gone  
by.  Runtime checking was more controversial due to performance issues.

I DO remember other complaints; for a long time there was a trend for
compilers to work with a compiler-writer's "ideal abstraction" of a
machine, instead of the machine that actually existed.  This led to
things that would have been simple in assembly language becoming
unwieldy and nearly incomprehensible in the compiled language, which
of course should not have been the goal.  For instance, I remember
that the original Pascal completely lacked bitwise logical operations
on integers ("oh, you should use SETS for that, with Intersection and
Union and such!"), and wikipedia provides this lovely example of
implementing bitwise ops in Algol68: www.rosettacode.org/rosettacode/w/index.php?title=Bitwise_operations
(and modula 2 is interesting too.)

When *I* make a comment like "The compiler should stay out of the  
programmers way", I tend to mean that I shouldn't have to import an  
obscure class and implement a subroutine in order to "correctly"  
implement a single machine-level instruction.

BillW

2009\02\16@072258 by Michael Rigby-Jones

picon face


> -----Original Message-----
> From: piclist-bounces@mit.edu [piclist-bounces@mit.edu] On Behalf
> Of Olin Lathrop
> Sent: 13 February 2009 22:03
> To: Microcontroller discussion list - Public.
> Subject: Re: [PIC] C arithmetic conversion/integer promotion/etc.
>
> Then there are missing capabilities which make the programmer juggle
> things in his mind that the compiler should be able to do without cost
> in output code efficiency.  For example, C lacks set handling, pass by
> reference, and array index ranges that start at other than 0, just to
> name a few that I could list quickly.

To pass by reference you pass a pointer, how is this feature missing?  C
simply exposes more of what is happening under the hood than other
languages do, which is why it's not considered a HLL by most people.

Since accessing an array is simply applying an offset to an explicit
memory location (i.e. a pointer), having the first item at zero makes
complete sense.  Personally I don't like non-zero based arrays, they can
cause more confusion than they solve.

Regards

Mike


2009\02\16@081058 by olin piclist

face picon face
Michael Rigby-Jones wrote:
> To pass by reference you pass a pointer, how is this feature missing?

You just answered your own question.  It seems you are unaware of
alternatives, which is rather a problem when trying to discuss them.

> Since accessing an array is simply applying an offset to an explicit
> memory location (i.e. a pointer), having the first item at zero makes
> complete sense.

I get the strong impression you haven't used a language that has these
features.  The point of a computer language is to allow the program to be
described in human terms to make it faster, more comfortable, and less error
prone.  Certainly you must be aware that some times it makes more sense to
count from 1 instead of 0.  Allowing array bounds to reflect the problem
conditions more closely is useful in that it's one more piece of bookkeeping
the compiler can do for you that comes at no cost at run time.

You are so stuck in the C mindset you can't even see there is a C mindset
anymore.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\16@122603 by Gerhard Fiedler

picon face
Olin Lathrop wrote:

>> Since accessing an array is simply applying an offset to an explicit
>> memory location (i.e. a pointer), having the first item at zero
>> makes complete sense.
>
> I get the strong impression you haven't used a language that has
> these features.  The point of a computer language is to allow the
> program to be described in human terms to make it faster, more
> comfortable, and less error prone.  Certainly you must be aware that
> some times it makes more sense to count from 1 instead of 0.
> Allowing array bounds to reflect the problem conditions more closely
> is useful in that it's one more piece of bookkeeping the compiler can
> do for you that comes at no cost at run time.

It can really be a major pain to work in C (or C++) with arrays that are
by their (problem domain) nature 1-based (or any other number, but
1-based is more common than others).

You always have to decide between a few alternatives, which are not
really satisfying (at least not when working with C):

- Use a 0-based array and translate the index somewhere from the problem
domain's 1-based index into the program domain's 0-based index. The
problem here is that you often don't know whether, in your call
hierarchy, a certain function works with the 1-based problem domain
index or with the 0-based program domain index. One has to be very
careful with this, especially when debugging. Like when you enter some
data, which (in the problem domain) ends up meaning index 5. You really
shouldn't forget to look at member array[4] when you want to see the
associated data... :)

- Use a 0-based array and just don't use the first element. This is often
the quickest and safest workaround -- if you can spare the memory. Don't
forget to add asserts that make sure that no 0 index is used anywhere...
and don't forget to use an array that has a size that's one more than
the number of members you need (in the problem domain).

- In C++, you can create an array (or better vector) class that does the
index translation behind the scenes. Accomplishes almost the same as
Pascal-style arrays with their arbitrary indices. But this doesn't apply
to C, and you still have the issue with the shifted array locations when
looking at the data in a debugger.

Gerhard


2009\02\16@140047 by Michael Rigby-Jones

picon face


> -----Original Message-----
> From: piclist-bounces@mit.edu [piclist-bounces@mit.edu] On Behalf
> Of Olin Lathrop
> Sent: 16 February 2009 13:11
> To: Microcontroller discussion list - Public.
> Subject: Re: [PIC] C arithmetic conversion/integer promotion/etc.
>
> Michael Rigby-Jones wrote:
> > To pass by reference you pass a pointer, how is this feature
missing?
>
> You just answered your own question.  It seems you are unaware of
> alternatives, which is rather a problem when trying to discuss them.
>

You clearly feel that educating me to the alternatives is beneath you,
so instead you simply patronise me.  I've been on this list for probably
9 years or so, and in that time people have come, people have gone and
technology has certainly moved on.  One thing that hasn't changed in
this time is your attitude.  If social skills and engineering talent are
mutually exclusive, I'm very glad I'm not as brilliant as you Olin.  It
must be a very lonely place.


> > Since accessing an array is simply applying an offset to an explicit
> > memory location (i.e. a pointer), having the first item at zero
makes
> > complete sense.
>
> I get the strong impression you haven't used a language that has these
> features

Your impression is wrong (gasp! surely not?), I have used languages
which include arrays based at an arbitrary index, but as I say, I don't
find it a particularly helpful feature for the work I do.  Even modern
fully OO languages such as Python, Ruby and .NET omit this feature, so
it can't be very high on the priority list for most.

Perhaps you can give an example of a small embedded application (i.e.
one in which C is most often used) where a non-zero based array access
would significantly simplify the code?

Regards

Mike


2009\02\16@142310 by Mark Rages

On Mon, Feb 16, 2009 at 1:00 PM, Michael Rigby-Jones
<.....Michael.Rigby-Jonesspam_OUTspambookham.com> wrote:
>
> Your impression is wrong (gasp! surely not?), I have used languages
> which include arrays based at an arbitrary index, but as I say, I don't
> find it a particularly helpful feature for the work I do.  Even modern
> fully OO languages such as Python, Ruby and .NET omit this feature, so
> it can't be very high on the priority list for most.

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html

Regards,
Mark
markrages@gmail
--
Mark Rages, Engineer
Midwest Telecine LLC
TakeThisOuTmarkrages.....spamTakeThisOuTmidwesttelecine.com

2009\02\16@142901 by olin piclist

Michael Rigby-Jones wrote:
>>> To pass by reference you pass a pointer, how is this feature
>>> missing?
>>
>> You just answered your own question.  It seems you are unaware of
>> alternatives, which is rather a problem when trying to discuss them.
>
> You clearly feel that educating me to the alternatives is beneath you,

I thought it was clear, in that having to pass a pointer manually is exactly
what you are trying to avoid.  Wouldn't it be nice if the compiler took care
of this for you so that you don't have to remember to pass a pointer in the
call and dereference in the routine?  Real languages allow you to specify
pass by reference or pass by value with the compiler keeping track of the
details instead of you.  It's interesting that the first language (Fortran)
was originally pass by reference only.

>> I get the strong impression you haven't used a language that has
>> these features
>
> Your impression is wrong (gasp! surely not?), I have used languages
> which include arrays based at an arbitrary index, but as I say, I
> don't find it a particularly helpful feature for the work I do.

You can't see any value in that?  You've never run across a problem where
counting from 1 was the natural thing to do?


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\16@145146 by sergio masci



On Mon, 16 Feb 2009, Olin Lathrop wrote:

{Quote hidden}

Actually pass by reference is a very bad thing. Now I'm going to do an
"Olin" here and leave it for the reader to find out why.

Regards
Sergio Masci

2009\02\16@145406 by William \Chops\ Westfield

I found a brand new annoyance (or at least a surprise) with C.

Did you know that ( -10 > sizeof(myarray)) ?

Apparently sizeof() is unsigned and causes the signed comparand to
be "promoted" to unsigned as well.
It makes sense.  Sort of.  (certainly if you're using a test like that  
before accessing an array element, you want negative arguments to be  
rejected as well.)  But it was not what I was expecting!

BillW

2009\02\16@150304 by olin piclist

sergio masci wrote:
> Actually pass by reference is a very bad thing. Now I'm going to do an
> "Olin" here and leave it for the reader to find out why.

Which is just a smoke screen hiding the fact that there is no clear
agreement that it is bad and/or that you can't come up with any good reasons
against it.



2009\02\16@150511 by William \Chops\ Westfield


On Feb 16, 2009, at 11:00 AM, Michael Rigby-Jones wrote:

> Perhaps you can give an example of a small embedded application (i.e.
> one in which C is most often used) where a non-zero based array access
> would significantly simplify the code?

An interesting point is that Olin doesn't advocate HLLs for "small  
embedded applications"; he's a strong believer in assembly language  
for such apps.

Remembering that lends an interesting slant to the entire debate...

The intel/microsoft x86 assembler was "strongly typed" (at least for  
an assembler.)  It's been a long time, and I was a less mature  
programmer back then, but I recall finding it extremely obnoxious!

 :-)
BillW

2009\02\16@150953 by olin piclist

William "Chops" Westfield wrote:
> Did you know that ( -10 > sizeof(myarray)) ?
>
> Apparently sizeof() is unsigned and causes the signed compare-itand to
> be "promoted" to unsigned as well.
> It makes sense.

Does it really?  When a number is clearly negative such as the constant
"-10", then automatically converting it to another representation which then
has a different value sounds very bad.  Are you sure the compiler truly
understood that "-10" was a signed integer?  If so I would consider this
outright broken.

What would this do:

 int i;

 i = -10;
 if (i > sizeof(myarray)) ...

What if you added the suffix after -10 that explicitly says signed integer
(I can't remember which letter it is, maybe "L" for LONG or something)?



2009\02\16@151414 by Bob Ammerman

> Michael Rigby-Jones wrote:
>>>> To pass by reference you pass a pointer, how is this feature
>>>> missing?

Olin replied:

{Quote hidden}

And I opine:

One could make an argument that a calling sequence that doesn't distinguish
call-by-value from call-by-reference at the point of the call is dangerous.
You can't tell by looking at the call whether the args can be trashed by the
callee.

This is one case (of few) where IMHO C has an advantage over Pascal. In "C"
arguments that can be trashed start with a "&" (except for arrays and that
is indeed a problem). In Pascal you can't tell by looking at the call
whether the args are pass-by-value or pass-by-reference.

Note that C++'s reference types allow you to get the Pascal type behavior if
that is what you really want. I tend to use them liberally, so I guess I
like the Pascal way even in C++ :-)

PS: I remember confusing people in Fortran by doing something like this:

   SUBROUTINE XX( ARG )
   INTEGER ARG
   ARG = 2
   RETURN
   END

   SUBROUTINE YY
    CALL XX(1)
   I = 1

I ends up with the value 2 (and many other places where 1 is referred to
end up equal to 2!)

Terrible and deadly!

I believe most (all?) Pascal implementations complain loudly if you pass a
constant value as a call-by-reference argument, and C won't let you take the
address of a literal value (except of course for strings, which are really
arrays and can cause no end of trouble, but const helps, and yes all the
rules are confusing and yes that makes C hard to use).

Finally, and completely unrelated to argument passing, here is my favorite
crazy "C" ism (comments and indentation deliberately omitted to make it more
confusing :-)

// assume the following function exists to return non-zero if n is a prime
number
int checkPrime( int n );

// then this is a more efficient way to check for primes if you often call
it with small integers
int isPrime( int n )
{
switch ( n )
default: if (checkPrime(n))
case 2: case 3: case 5: case 7: case 11: case 13: return 1;
else
case 4: case 6: case 8: case 9: case 10: case 12: return 0;
}

Yuck!

-- Bob Ammerman
RAm Systems

2009\02\16@151733 by olin piclist

William "Chops" Westfield wrote:
> An interesting point is that Olin doesn't advocate HLLs for "small
> embedded applications"; he's a strong believer in assembly language
> for such apps.

That's too general a statement.  I am not against high level languages in
certain circumstances in small embedded systems, but unfortunately most
languages for such systems are really bad, usually because they are C-like.
I personally often enough do cram jobs where assembler is a necessity, and
have therefore put a lot of effort into an assembler toolchain.  As a result,
assembler is usually easier and more comfortable on PICs.

> The intel/microsoft x86 assembler was "strongly typed" (at least for
> an assembler.)  It's been a long time, and I was a less mature
> programmer back then, but I recall finding it extremely obnoxious!

I did use that assembler (MASM if I remember right) back in the mid 1990s.
My impression back then and still today is that it was the best assembler
I've ever run across, and I've used a few.  I'm not talking about the
instruction set or the processor, but the assembler itself.



2009\02\16@152438 by olin piclist

Bob Ammerman wrote:
> PS: I remember confusing people in Fortran by doing something like
> this:
>
>     SUBROUTINE XX( ARG )
>     INTEGER ARG
>     ARG = 2
>     RETURN
>     END
>
>     SUBROUTINE YY
>     XX(1)
>     I = 1
>
> I ends up with the value 2 (and many other places 1 is referred to
> end up equal to 2!!

Yes, I remember this too.  This only works on machines that don't have
write-protected memory regions or where the linker doesn't put constants
into those regions.

> I believe most (all?) Pascal implementations complain loudly if you
> pass a constant value as a call-by-reference argument,

In the version of Pascal I use, the arguments can be defined with certain
attributes, including IN and OUT.  What you posted above couldn't happen
because ARG of XX would have to be declared OUT to allow assigning to ARG
within the subroutine, and it would never let you pass a constant for an
argument declared as OUT.



2009\02\16@152708 by William \Chops\ Westfield


On Feb 16, 2009, at 2:35 PM, sergio masci wrote:

> Actually pass by reference is a very bad thing. Now I'm going to do an
> "Olin" here and leave it for the reader to find out why.

Pass-by-value is a very bad thing too, frequently.  A real efficiency  
killer.  What you usually want for arrays and structures is "pass by  
reference with compiler-enforced read/write protection as  
appropriate."  C mostly doesn't do that (or makes it hard to do?  
Casting to and from "const" ?)

I find the inconsistency in C between passing structures by values and  
arrays by reference to be slightly annoying, but I'm not convinced  
that either one is inherently better than the other.  Frankly, it  
doesn't come up very often, because almost all the structures I used  
are referenced by pointers anyway (foo->bar rather than foo.bar)  And  
type checking in C *has* come far enough to tell when you screw up and  
try to pass a structure where you should have passed a pointer to a  
structure...

BillW

2009\02\16@154923 by William \Chops\ Westfield


On Feb 16, 2009, at 12:09 PM, Olin Lathrop wrote:

>> Did you know that ( -10 > sizeof(myarray)) ?
>>
>> Apparently sizeof() is unsigned and causes the signed comparand to
>> be "promoted" to unsigned as well.
>> It makes sense.
>
> Does it really?  When a number is clearly negative such as the  
> constant
> "-10", then automatically converting it to another representation  
> which then
> has a different value sounds very bad.  Are you sure the compiler  
> truly
> understood that "-10" was a signed integer?

The actual example was related to my keycode decoding problem, where
I'm using (-keycode) to signify a key release event.  So I had:
       unsigned char decode(int keycode)
       {
           if (keycode > sizeof(keycodetable))
              return INVALID_KEYCODE;
           if (keycode < 0) {
               /* handle key up events for shift/etc */
           }

And it was not behaving as I expected.  However, a quick check shows  
that
    if (-10 > sizeof(p)) {
       printf("Compiler is weird");
    }
Does indeed print the message.

I think I would be more worried if the constant example behaved  
differently
than the example with a real variable.  Although one of the valid  
complaints against C *is* the way that an inexactly specified constant  
can change the way a statement works.  (or was that one of the  
original complaints - that C would automatically convert constants  
without warning you...  Except that's true of most languages.)

BillW

2009\02\16@160216 by William \Chops\ Westfield


On Feb 16, 2009, at 12:17 PM, Olin Lathrop wrote:

>
> I did use that assembler (MASM if I remember right) back in the mid  
> 1990s.
> My impression back then and still today is that it was the best  
> assembler
> I've ever run across, and I've used a few.

Yep.  That's the one.  It was pretty good; I think I liked DEC's  
Macro-10 a little bit better in some ways, and I didn't have time to  
find the ways MASM was better (symbols longer than 6 characters!)

I've been trying to duplicate "structured programming" macros from  
those days that rely on being able to have things like
       .set symbol = symbol+1
       mymacro(bz, BEG, symbol)
generate code like:
       bz BEG123
and it doesn't seem to be possible in either the PIC or AVR assemblers  
that are out there (nor the gnu assembler, for that matter.)  It was  
easy with Macro and MASM.
(This lets you write code like:
       cpi R15, 'A'
       %IF E
         ; code
       %ELSE
         ; more code
       %ENDIF

Which is convenient.  Has anyone managed to do that sort of thing with  
PIC or AVR assemblers? (without a separate pre-processor.))

BillW

2009\02\16@162640 by Tamas Rudnai

On Mon, Feb 16, 2009 at 9:01 PM, William Chops Westfield <TakeThisOuTwestfwKILLspamspamspammac.com>wrote:

{Quote hidden}

Yes, of course! You can define global or local variables in MPASM, you can
calculate with those, check their values and use them in ASM lines of
conditional directives including IF and WHILE. I use macros pretty
extensively to avoid duplicated code fragments and for other stuff as well,
like coding switch-case. The MPLAB Help / Assembly section describes these
capabilities pretty well.

You can do almost the same with AVR assembly with a bit of different syntax
of course.

Tamas


--
Rudonix DoubleSaver
http://www.rudonix.com

2009\02\16@162736 by Wouter van Ooijen

> Which is convenient.  Has anyone managed to do that sort of thing with  
> PIC or AVR assemblers? (without a separate pre-processor.))

The GNU assembler I use for ARM invokes the C preprocessor more-or-less
automatically. I think your IF would be no problem there.

IIRC did something like that years ago in MPASM. It was not easy, but
doable. Next I tried static-stack variable allocation, which turned out
to be too complex, and I ran into the symbol table size limit. So I
wrote (the old) Jal instead.

--

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu

2009\02\16@164708 by sergio masci



On Mon, 16 Feb 2009, William "Chops" Westfield wrote:

{Quote hidden}

XCASM lets you do:

       symbol        .set        symbol + 1
       mylabel        .set        LABEL("BEG"+symbol)

It lets you do this in your macro without a separate pre-processor. I
don't know if MPASM uses a separate pre-processor or not (if it does, then
does it let you do the equivalent inside a macro?)


Regards
Sergio Masci

2009\02\16@164721 by olin piclist

William "Chops" Westfield wrote:
> I've been trying to duplicate "structured programming" macros from
> those days that rely on being able to have things like
> .set symbol = symbol+1
> mymacro(bz, BEG, symbol)
> generate code like:
> bz BEG123
> and it doesn't seem to be possible in either the PIC or AVR assemblers
> that are out there (nor the gnu assembler, for that matter.)  It was
> easy with Macro and MASM.
> (This lets you write code like:
> cpi R15, 'A'
> %IF E
>   ; code
> %ELSE
>   ; more code
> %ENDIF

It's not clear to me what exactly you are trying to do and what part of that
you want the assembler to do for you.  For example, where did the 123 in
BEG123 come from?  How is MYMACRO supposed to know to add this 123 to BEG?

> Has anyone managed to do that sort of thing with
> PIC or AVR assemblers? (without a separate pre-processor.)

Why insist on no preprocessor?  My PREPIC preprocessor is freely available,
including the buildable source code for it.  I have recently implemented
macros that are used syntactically just like MPASM opcodes.  The difference
is that these execute full blown PREPIC subroutines which can perform much
more processing than MPASM macros.  Argument passing is also a lot more
flexible.  You can, for example, make new symbols from symbol snippets
passed as arguments.  You can also create persistant state that other macros
can access and even deallocate when done with it.  You can even perform
manipulation on the label name preceeding the macro name.  The label, if
any, is just a implicit macro argument.  PREPIC is described in detail at
http://www.embedinc.com/pic/prepic.txt.htm.

Explain in more detail what you want and maybe I can show PREPIC code to do
it.



2009\02\16@170345 by William \Chops\ Westfield


On Feb 16, 2009, at 1:25 PM, Tamas Rudnai wrote:

>> (This lets you write code like:
>>       cpi R15, 'A'
>>       %IF E
>>         ; code
>>       %ELSE
>>         ; more code
>>       %ENDIF
>>
>> Which is convenient.  Has anyone managed to do that sort of thing  
>> with
>> PIC or AVR assemblers? (without a separate pre-processor.))
>>
>
> Yes, of course! You can define global or local variables in MPASM,  
> you can
> calculate with those, check their values and use them in ASM lines of
> conditional directives including IF and WHILE.

I think you're missing that this is NOT compile-time IF or similar.  The  
above should assemble to something like:
       cpi R15, 'A'
       bne BEG023
         ;; code
       br  END023
    BEG023:
        ;; more code
    END023:

the main stumbling block seems to be the ability to concatenate a  
string-form macro argument (BEG) with the "stringification" of a  
numeric symbol, which is necessary to allow proper nesting of the  
structures.  I think.  The original macro-10 looks like this (with  
"\symbol" doing the stringification, and "a'b" doing concatenation):

{Quote hidden}

In MASM, "%" did stringification and "&" did concatenation:

{Quote hidden}

I can't find either stringification OR concatenation in either PIC or  
AVR assembler, at least not without mixing cpp-style and asm-style  
macro features, which is a bad idea and probably doesn't work.

BillW

2009\02\16@170531 by sergio masci



On Mon, 16 Feb 2009, William "Chops" Westfield wrote:

{Quote hidden}

To be clear, are you saying call by value is "a real efficiency killer"
for everything (including simple variables like "ints") or just for arrays
and structures?

I would argue that there are many situations where simply making a copy of
a structure on the argument stack can actually improve efficiency just as
the opposite can also be true :-)

>
> I find the inconsistency in C between passing structures by values and  
> arrays by reference to be slightly annoying,

Actually I find this incredibly annoying :-)

Regards
Sergio Masci

2009\02\16@173044 by Wouter van Ooijen

> I can't find either stringification OR concatenation in either PIC or  
> AVR assembler,

A line from my age-old WISP code:

 _set port#v(porta)_tris_value, H'FF'

The #v() is stringification, the concatenation is implicit. This was in
1997, maybe MPAM has changed. I recall that the WISP source was one of
the few (only?) programs that gpasm never got right, so I was a bit on
the edge then.

--

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu

2009\02\16@175222 by olin piclist

William "Chops" Westfield wrote:
> I think you're missing that this is NOT compile-time IF or similar.  The
> above should assemble to something like:
> cpi R15, 'A'
> bne BEG023
>           ;; code
> br  END023
>      BEG023:
> ;; more code
>      END023:

I'm still not following this.  What is CPI, R15, or A supposed to be?

> I can't find either stringification OR concatenation in either PIC or
> AVR assembler, at least not without mixing cpp-style and asm-style
> macro features, which is a bad idea and probably doesn't work.

In MPASM you can get the decimal string representation of a integer value
with the #v(integer_value) syntax.  Unfortunately MPASM doesn't have a
string or character data type, and can't do real string manipulation.  You
can do limited assembling of new symbol names by using #v() as a delimiter.
The result looks ugly and requires a gratuitous integer digit between the
two strings, but it can be acceptable if the new symbols are only
manipulated in other macros and not directly by the user.  For example:

newsim  macro  stra, strb, val
stra#v(0)strb equ val
       endm

       newsim abc, def, 27

This would define the symbol abc0def to have the value 27.

If NEWSIM were instead defined as a PREPIC macro, it could create the symbol
ABCDEF from the same macro invocation as above:

/macro newsim
[arg 1][arg 2] equ [arg3]
 /endm



2009\02\16@204834 by William \Chops\ Westfield


On Feb 16, 2009, at 2:29 PM, Wouter van Ooijen wrote:

> _set port#v(porta)_tris_value, H'FF'
>
> The #v() is stringification, the concatenation is implicit.

Oh.  Lookit that; it's even documented as being for exactly what I'd
want to do with it!

Now if only the AVR assembler had something similar!  I was actually  
trying to implement these macros for AVR just last week.

{Quote hidden}

"cpi R15, 'A'" is an AVR instruction (at least approximately.)
Compare contents of register 15 with the literal/immediate value 65.
Not really relevant to the macro discussion; just setting the flags up.
(the original implementation only had %IF, cause the original CPU was  
almost entirely skip-based for comparisons/etc.)  So the code looks  
like:
       <setflags>
       %IF <condition>
          code
          <more flags setting>
         %IF <othercondition>
             code 2
          %END
          code 3
       %ELSE
          code 4
       %END
It makes assembler "prettier" by hiding the labels and evil "goto"  
instructions that clutter up simpler common program structures...

BillW

2009\02\16@212949 by Bob Ammerman


----- Original Message -----
From: "Olin Lathrop" <.....olin_piclistspamRemoveMEembedinc.com>
To: "Microcontroller discussion list - Public." <RemoveMEpiclistspamspamBeGonemit.edu>
Sent: Monday, February 16, 2009 3:17 PM
Subject: Re: [PIC] C arithmetic conversion/integer promotion/etc.


> William "Chops" Westfield wrote:
>> The intel/microsoft x86 assembler was "strongly typed" (at least for
>> an assembler.)  It's been a long time, and I was a less mature
>> programmer back then, but I recall finding it extremely obnoxious!

Olin Responded:
> I did use that assembler (MASM if I remember right) back in the mid 1990s.
> My impression back then and still today is that it was the best assembler
> I've ever run accross, and I've used a few.  I'm not talking about the
> instruction set or the processor, but the assembler itself.

I Comment:
Actually the Borland assembler, TASM, was even better than the Microsoft
one. It understood things like a separate namespace for the members of each
structure, as well as built-in support for calling sequences, and a much
simpler and easier to use segmentation model.

-- Bob Ammerman
RAm Systems

2009\02\16@213454 by Bob Ammerman


----- Original Message -----
From: "William "Chops" Westfield" <spamBeGonewestfw@spam@spamspam_OUTmac.com>
To: "Microcontroller discussion list - Public." <TakeThisOuTpiclistspamspammit.edu>
Sent: Monday, February 16, 2009 3:26 PM
Subject: Re: [PIC] C arithmetic conversion/integer promotion/etc.


{Quote hidden}

Actually "C" does handle the read/write protection rather intelligently. You
can always pass a non-const value (i.e. a pointer-to-something) to a
const parameter (which basically says that the called routine will treat it
as read-only). But you can't pass a const value to a non-const parameter
(without casting). This is sensible because then the called routine could
write to something that the calling routine has promised not to write to.

-- Bob Ammerman
RAm Systems

2009\02\16@214720 by Lee Jones

old DEC Macro-10 assembler (on TOPS-10/TOPS-20 OS) thread...

I know exactly what BillW is getting at.  Been there, done
that, still have the manuals, and may have the source code.

{Quote hidden}

Macro-10 had impressive, generalized string manipulation that
allowed you to use the value of a symbol to create a string.
That string could then be used as a label (i.e. branch target).

It allowed building structured programming constructs, such as
IF - THEN - ELSE - ENDIF, using macros inside assembler programs.
For example, define a macro called IF which did a comparison &
branched to a label.  Define a second macro called THEN which
created the label used by the IF.  Define a third macro called
ELSE which defined a different label (alternate target for IF).
And a fourth macro for ENDIF which defined a label to be used
as the branch target.

It's easy to build if you only want to use one IF - THEN etc
construct in the entire program.  Mostly you want to be able to
have multiple IF - THEN blocks so each macro invocation has to
create new labels which are kept track of by numeric symbols.
The symbols & generated labels were only for the use of the
macros, so human readability did not matter.  (Same thing a
compiler does with internal jump points.)

A tricky part was keeping track of the symbols/labels so that
you could nest the IF statements.

Of course you put your macro definitions in an include file so
you could easy add them to anything you were writing.

I had a friend who was big on using these structured macros
in extensive programs written in Macro-10.  The details are
a bit hazy as it was 25-30 years ago.


At the time, we had an 8080-based microcontroller hanging off
of the big system's bus.  It was a home grown communications
controller doing serial I/O.  The 8080's program was written
in assembler using Intel mnemonics.

We did not have a cross-assembler.  An 8080-based "development
system" was way too expensive.  So the assembler for the Intel
chip was a large collection of macros written in Macro-10.  It
was a fully featured Intel 8080 assembler that depended on the
macro language in a 36-bit wide word assembler.

You used the DECsystem-10's macro assembler to assemble the
8080 source code, linked it using the big system linker, then
ran a small program to extract the 8080 binary executable.  It
may have been slightly cumbersome, but it _was_ cute.



> the main stumbling block seems to be the ability to concatenate a  
> string-form macro argument (BEG) with the "stringification" of a  
> numeric symbol, which is necessary to allow proper nesting of the  
> structures.

I didn't know about the #v() in MPASM.  It might allow building
something like this.

Nowadays, you can just use a compiler.

                                               Lee Jones

2009\02\17@023836 by Per Linne

I checked with my (Code Gear) Borland C++ Builder and to me it seems
that it is the sizeof() that "misbehaves", or whatever you want to call it...
I tried to cast sizeof and then it works as you'd expect.

if(-10>(int)sizeof(myarray))
 does not execute
else
 does execute

PerL

----- Original Message -----
From: "William "Chops" Westfield" <westfwEraseMEspammac.com>
To: "Microcontroller discussion list - Public." <RemoveMEpiclistEraseMEspamspam_OUTmit.edu>
Sent: Monday, February 16, 2009 9:49 PM
Subject: Re: [PIC] C arithmetic conversion/integer promotion/etc.


>
> On Feb 16, 2009, at 12:09 PM, Olin Lathrop wrote:
>
>>> Did you know that ( -10 > sizeof(myarray)) ?

2009\02\17@035827 by William \Chops\ Westfield


On Feb 16, 2009, at 11:22 PM, Per Linne wrote:

> I checked with my (Code Gear) Borland C++ Builder and to me it seems
> that it is the sizeof() that "missbehaves", or what you want to call  
> it...
> I tired to cast sizeof and then it works as you'd expect.
>
> if(-10>(int)sizeof(myarray))
>  does not execute
> else
>  does execute

Yes; that was my fix.  sizeof() returns size_t which got all official  
sometime relatively recently (especially since it's gotten potentially  
bigger than "int"), and it makes sense for size_t to be unsigned.  So  
I'm not sure it's "misbehaving."

I was just surprised that the unsigned type was "stronger" than the  
signed type; I would normally have expected both to be cast to signed  
ints.

Live and learn.

BillW

2009\02\17@041908 by Tamas Rudnai

On Tue, Feb 17, 2009 at 2:29 AM, Bob Ammerman <rammerman@verizon.net> wrote:

> Actually the Borland assembler, TASM, was even better than the microsoft
> one. It understood things like a separate namespace for the members of each
> structure as well as built in support for calling sequences, and a much
> simpler and easy to use segmentation module.
>

Good old days :-) I was using Turbo Pascal as a frame and external asm
modules and inline asm sections and the code was just flying :-)


--
Rudonix DoubleSaver
http://www.rudonix.com

2009\02\17@045253 by Gerhard Fiedler

William "Chops" Westfield wrote:

> I found a brand new annoyance (or at least a surprise) with C.
>
> Did you know that ( -10 > sizeof(myarray)) ?
>
> Apparently sizeof() is unsigned and causes the signed comparand
> to be "promoted" to unsigned as well.

Exactly. Mixing signed and unsigned is always a potential problem.

However, many compilers (and any decent compiler) would issue a warning
for such a comparison where signed and unsigned are mixed, exactly for
this reason.

Gerhard

2009\02\17@050332 by Gerhard Fiedler

William "Chops" Westfield wrote:

> I was just surprised that the unsigned type was "stronger" than the
> signed type; I would normally have expected both to be cast to signed
> ints.

That's because the unsigned range goes higher up. But with the integer
promotions, it's like with taxes: you really have to know the rules;
there is no "normally" :)

Gerhard

2009\02\17@072357 by olin piclist

>> The #v() is stringification, the concatenation is implicit.
>
> Oh.  Lookit that; it's even documented as being for exactly what I'd
> want to do with it!
>
> Now if only the AVR assembler had something similar!  I was actually
> trying to implement these macros for AVR just last week.

I don't know anything about AVR assembler, but if the first non-blank on a
line would never be "/" and it doesn't use brackets [] in its syntax, then
my preprocessor should work for it too.

{Quote hidden}

Dave Tweed has some Perl scripts that do something like this I think.  You
can also do this with my preprocessor a few different ways (the preprocessor
didn't have this capability when Dave created his Perl scripts).

Let's say you pass the condition as two arguments, a register and a bit in
the register that is set for the condition to be true.  You could get more
fancy, but this is good enough to illustrate how the macros would work.
Here is some PREPIC code off the top of my head.  It has not been tested,
not even syntax checked:

/var new labeln Integer = 0    ;init unique label generator number

      ...

//   Macro BLOCKIF reg, bit
//
//   Create a block IF structure.  The condition is true if bit BIT in
//   register REG is set.  The bank must be set for access to REG.
//
/macro blockif
 /var new iflabel String = [str "ifend" labeln] ;make label name
 /set labeln [+ labeln 1]     ;update unique label number for next time
      btfss   [arg 1], [arg 2] ;skip if the condition is true
      goto    [chars iflabel] ;condition is false, jump to end
 /endmac

/macro ifend                   ;ends a BLOCKIF
[chars iflabel]                ;false case jumps to here
 /del iflabel                 ;done with label, pop to previous
 /endmac

      ...

      banksel  myvar
      movf     myvar, w       ;get my special variable value
      xorlw    b'00110101'    ;compare it to the magic pattern
 blockif  status, z           ;is the magic pattern ?
      <code for is magic pattern case>
   ifend


Note how the /VAR NEW command creates a new version of a variable regardless
of any previous version.  This becomes the current version until it is
deleted.  Variables and other symbols are stackable in PREPIC.  Using this
feature requires nothing special to be done to allow BLOCKIF/IFEND blocks to
be nested.

In a real implementation you'd probably want to pass named conditions, then
expand them out to the appropriate registers, bits, and polarity in the
macro.  I'm just trying to illustrate the overall concept of using PREPIC
macros to get what you asked for.


********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.

2009\02\17@075138 by Tamas Rudnai

Actually I was wondering if M4 preprocessor could be any useful on PIC
development - is anybody using that (not only for PIC but for something else
perhaps?)

http://en.wikipedia.org/wiki/M4_(computer_language)

Thanks
Tamas

--
Rudonix DoubleSaver
http://www.rudonix.com

2009\02\17@080536 by sergio masci



On Tue, 17 Feb 2009, Gerhard Fiedler wrote:

{Quote hidden}

Actually XCSB goes that extra mm and compares signed and unsigned values
correctly i.e. a signed 16 bit int that is less than 0 is always less than
an unsigned 16 bit int AND an unsigned 16 bit int that is greater than
0x7fff is always greater than a signed 16 bit int. The extra code is
trivial and is only generated when comparing signed against unsigned.

Regards
Sergio Masci

2009\02\17@081021 by sergio masci



On Tue, 17 Feb 2009, Tamas Rudnai wrote:

> Actually I was wondering if M4 preprocessor could be any useful on PIC
> development - is anybody using that (not only for PIC but for something else
> perhaps?)
>
> http://en.wikipedia.org/wiki/M4_(computer_language)
>

Yes, I've used M4 a fair bit (to do some pretty wonderful stuff with) but
I've not used it at all with the PIC. It is very elegant in its
simplicity.

Actually I pointed out the possible use of M4 some months ago and
explicitly asked if Olin had seen or considered using it. I'm still
waiting for a reply on that one.

Regards
Sergio Masci

2009\02\17@081415 by Isaac Marino Bavaresco

Tamas Rudnai wrote:
> Actually I was wondering if M4 preprocessor could be any useful on PIC
> development - is anybody using that (not only for PIC but for something else
> perhaps?)
>
> http://en.wikipedia.org/wiki/M4_(computer_language)
>
> Thanks
> Tamas
>  

The thing I like most about MPASM is that its preprocessor is almost
100% compatible with C's preprocessor.
I often create header files that are included both in asm and C files.
This way it is easier to synchronize #defines across the project.

I even use comments, inside #if 0/#endif.
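A sketch of such a shared header (all names and values here are hypothetical). Restricting it to #define, #if, and comments keeps it digestible by both MPASM's preprocessor and a C compiler:

```c
/* project.h -- hypothetical header included from both .asm and .c files */
#ifndef PROJECT_H
#define PROJECT_H

#define LED_BIT       3        /* port bit driving the status LED */
#define TICKS_PER_MS  250      /* timer ticks per millisecond */

#if 0
Free-form notes are safe in here: both preprocessors discard
everything inside an #if 0/#endif block, even non-comment text.
#endif

#endif
```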

Regards,

Isaac

2009\02\17@082355 by olin piclist

sergio masci wrote:
> Actually I pointed out the possible use of M4 some months ago and
> explicitly asked if Olin had seen or considered using it. I'm still
> waiting for a reply on that one.

Sorry, I must have missed the question.  I have only vaguely heard of M4, so
I don't know what it can and can't do.  I do know what my preprocessor can
do and how to use it.



2009\02\17@095822 by sergio masci



On Mon, 16 Feb 2009, Lee Jones wrote:

{Quote hidden}

XCASM has a '.if_rt' directive that does all the above for you. It can be
nested and works with complex expressions.

e.g.

       ; this generates runtime code

       .if_rt        (fred * 3) > bert

       ... "true" block of assembly code

       .else

       ... "false" block of assembly code

       .endif

Also, if the conditional expression evaluates to a non-zero constant then
only the "true" block of assembly code is included in the runtime
executable (the "false" block being excluded). If it evaluates to the
constant 0 then only the "false" block is included (the "true" block
being excluded). If the conditional expression does not evaluate to a
constant then runtime code is generated to evaluate it at runtime.

In this way the assembler can actually produce optimised code which is
dependent on the actual address of a symbol determined at assembly time
(which is equivalent to link time in other systems because of the way the
XCASM works). This is not possible with a pre-processor.

Regards
Sergio Masci

2009\02\17@105339 by William \Chops\ Westfield


On Feb 17, 2009, at 4:50 AM, Tamas Rudnai wrote:

> Actually I was wondering if M4 preprocessor could be any useful on PIC
> development - is anybody using that (not only for PIC but for  
> something else
> perhaps?)

I've used M4 with C to build some complex command parsing tables.  I  
found it usable, but sort of "excessively incompatible with anything  
else."

I've been a bit puzzled that no one has packaged up one of the  
mainframe-generation assembler macro processors as a pre-processor  
(for C, if nothing else.)  As far as I can recall, they weren't THAT  
tied to the underlying assembler and mostly produced "text" (in fact,  
you had to take some care in two-pass assemblers that the macros
expanded equivalently in each pass...)

BillW

2009\02\17@130540 by Dave Tweed

Olin Lathrop wrote:
> Dave Tweed has some Perl scripts that do something like this I think.

I have two Perl scripts that I use for PIC assembly language. The first is
a generalized "structured assembly" preprocessor that handles a wide range
of processor architectures. It was first written for Analog Devices 21xx
series of DSP chips, and now supports 21xx, Blackfin, PIC14/16, PIC18,
dsPIC and the TI '55xx fixed-point DSP. It's easy to add new architectures
as I need them.

The second script is specific to Olin's environment for the PIC, and
performs some additional steps related to banking and formatting the code.

It's somewhere way down on my to-do list to write some documentation for
each of these scripts and make them available on the web, but I have no
idea of when I'll actually get around to doing that.

Tamas Rudnai wrote:
> Actually I was wondering if M4 preprocessor could be any useful on PIC
> development - is anybody using that (not only for PIC but for something
> else perhaps?)

Yes, I use m4 to do some preprocessing on VHDL code. I toyed around with
creating a Perl script for that, but decided that I could do what I needed
to do (copy the details of entity/component declarations) with m4.

It would be difficult to do what I do in my "structured assembly" script in
a generalized macro language like m4. I actually buffer the entire source
file and once I've identified all of the "basic blocks" in it, I emit a
new file that has all of the generated label names optimized for debugging,
in the sense that there is only one label at the beginning of each basic
block, and the generated names are numbered sequentially and related to any
existing labels in the original source file, such as function entry points.

For example, if the original code contains:

my_function:
       code block A
       code block A
       loop
           code block B
           if nz
               code block C
               code block C
               if c
                   code block D
                   code block D
                   break
               else
                   code block E
                   code block E
               endif
           else
               break
           endif
       forever
       code block F
       return

The generated code looks like this:

my_function:
        code block A
        code block A
my_function_1 unbank
        code block B
        bz     my_function_4
        code block C
        code block C
        bnc     my_function_2
        code block D
        code block D
        bra     my_function_6
        bra     my_function_3
my_function_2 unbank
        code block E
        code block E
my_function_3 unbank
        bra     my_function_5
my_function_4 unbank
        bra     my_function_6
my_function_5 unbank
        bra     my_function_1
my_function_6 unbank
        code block F
        return

This was done with the PIC18 architecture setting, which also puts in calls
to Olin's "unbank" macro at every generated label. But note that the labels
are numbered sequentially and use the same base part as the function label,
which makes them easy to find in the debugger or in an xref file.

Yes, I know there's unreachable code generated in the above example, and
jumps-to-jumps, but I'm not interested in automatically optimizing stuff
like that away. Usually, if it bothers me in a particular case, simple
changes to the original source file can usually take care of them, and
often make that file easier to read anyway. For example, the original
input above could be rewritten as:

my_function:
       code block A
       code block A
       loop
           code block B
           break z

           code block C
           code block C
           if c
               code block D
               code block D
               break
           endif

           code block E
           code block E
       forever
       code block F
       return

In which case, the preprocessor generates:

my_function:
        code block A
        code block A
my_function_1 unbank
        code block B
        bz     my_function_3

        code block C
        code block C
        bnc     my_function_2
        code block D
        code block D
        bra     my_function_3
my_function_2 unbank

        code block E
        code block E
        bra     my_function_1
my_function_3 unbank
        code block F
        return

Anyway, if someone wants to play around with this in its present state,
which means reading the Perl source code for the documentation (it's
reasonably well-commented), send me a message off-list to "dtweed at
acm dot org" and I'll forward you a copy. It's about 18KB, 600 lines.

--Dave

2009\02\17@132214 by Rolf

sergio masci wrote:
{Quote hidden}

And this, likewise, *could* be considered a flaw in the system.....

In the embedded world the additional cycles could be considered
expensive, and a C programmer 'who knows his stuff' may well be peeved
that the compiler is out-thinking him, and introducing issues. In fact,
a 'well versed' programmer (for better or worse) may in fact take
advantage of this in some other code, and then become perplexed when
compiled with XCSB.

One of the benefits of C is the low-level of the language, and the fact
that it does not make these sorts of decisions on behalf of the programmer.

If the argument is that a language can be strongly type-checked at
compile time without impacting run-time then I am interested in hearing
more. I believe that others have suggested that this is possible (Olin,
et. al.), though in my (not-so-embedded) experience I have used many
languages, and it is surprising how many times the typing of a value may
need to change to get the desired results, leading (in some languages) to
multiple casts, etc. *****

On the other hand, if the language type checks have a run-time impact
then I am concerned that it is entering an area where uncertainty may
cause issues.

Unlike Olin, I am of the opinion that any language can do anything, with
the condition that the compiler behaviour is well documented, and
consistent. Then, as a programmer, I can elect to use the language, and
make of it what I will (i.e. select the best language for the task at
hand with an informed decision). If the behaviour of the language is
inconsistent with the documentation, or has unpredictable results, then
the language is a liability, and regardless of the other benefits the
language may provide, I believe the language should be avoided.

Rolf


***** - I am an experienced Java programmer, and find it interesting
that the 'new' generics concept in Java which is supposed to 'solve' the
exact problem that we discuss here (compile-time checking of variable
typed-ness) introduces a different set of issues when faced with
inheritance. Also, in a strongly typed language (I presume Java is
strongly typed), they then implement 'auto-boxing' allowing the
programmer to intermix primitive and object versions of variables
haphazardly thus reducing typedness conventions (autoboxing allows the
programmer to treat int's and Integer's, long and Long, boolean and
Boolean etc. without having to convert between them - java will assume
the conversion is implied, and will do the conversion for you without
any warning, etc. unless you try to autobox a null Integer object to an
int primitive ... ).

2009\02\17@154949 by sergio masci



On Tue, 17 Feb 2009, Rolf wrote:

{Quote hidden}

Maybe you're under the misconception that the XCSB compiler is a C
compiler?

The whole point of not just jumping on the bandwagon and writing another
'C' compiler was to have the freedom to change things for the better.
Allowing programmers to compare signed and unsigned values correctly
is a way of eliminating some obscure bugs.

Look at it this way: if the compiler always generates the same extra code
for this special case (signed / unsigned) comparison then that's the code
that gets debugged - extra cycles and all. If on the other hand we simply
force the programmer to resort to casting signed to unsigned (or vice
versa) to perform his/her comparison then we allow a potential bug to
remain dormant and only manifest itself through very heavy testing.

>
> One of the benefits of C is the low-level of the language, and the fact
> that it does not make these sorts of decisions on behalf of the programmer.
>

This is just a side effect of the implementation of the compiler not the
original design goal. Look at recent posts concerning the implementation
of the right shift operator ">>". It can either preserve the sign or not
depending on how easy it is for the compiler to generate the specific
code.

If you compare a signed or unsigned int to a float in 'C' the compiler
will produce code which will give the mathematical result you expect. Why
should this be any different when you compare signed and unsigned ints?

> If the argument is that a language can be strongly type-checked at
> compile time without impacting run-time then I am interested in hearing
> more.

A runtime overhead is incured ONLY if the (XCSB) compiler detects that a
comparison is being made between signed and unsigned integers (8, 16 or 32
bits). There is no code generated to keep track of the type of an
expression as it is being evaluated at runtime. This is a compile time
only thing.

> I believe that others have suggested that this is possible (Olin,
> et. al.), though in my (not-so-embedded) experience I have used many
> languages, and it is surprising how many times the typing of a value may
> need to change to get the desired results, leading (in some langaues) to
> multiple casts, etc. *****
>
> On the other hand, if the language type checks have a run-time impact
> then I am concerned that it is entering an area where uncertainty may
> cause issues.

No, no runtime type checking in XCSB. The compiler doesn't generate a
tokenised program that needs to be interpreted. It generates a highly
optimised executable.

>
> Unlike Olin, I am of the opinion that any language can do anything, with
> the condition that the compiler behaviour is well documented, and
> consistent. Then, as a programmer, I can elect to use the language, and
> make of it what I will (i.e. select the best language for the task at
> hand with an informed decision). If the behaviour of the language is
> inconsistent with the documentation, or has unpredictable results, then
> the language is a liability, and regardless of the other benefits the
> language may provide, I believe the language should be avoided.

Well sure, if the documentation does not match the behaviour then there is
clearly either an error in the documentation or a bug in the compiler.
This does not mean that there is a fundamental problem with the language
just that there is a bug that needs to be fixed. Advocating that comparing
signed and unsigned integers be done correctly does not give rise to
unpredictable results. If anything it makes certain kinds of bugs in the
target code behave in a more predictable way.

Regards
Sergio Masci

2009\02\18@061900 by Gerhard Fiedler

sergio masci wrote:

>> In the embedded world the additional cycles could be considered
>> expensive, and a C programmer 'who knows his stuff' may well be
>> peeved that the compiler is out-thinking him, and introducing
>> issues. In fact, a 'well versed' programmer (for better or worse)
>> may in fact take advantage of this in some other code, and then
>> become perplexed when compiled with XCSB.
>
> Maybe you're under the misconception that the XCSB compiler is a C
> compiler?

I don't know for sure, but to me it didn't sound like this. While this
is a useful idea, it does introduce extra cycles that are wasted in all
cases where you want to compare signed and unsigned ints and know that
they are not outside the positive range of a signed int. Which is a
surprisingly frequent case.

> The whole point of not just jumping on the bandwagon and writing another
> 'C' compiler was to have the freedom to change things for the better.

Right. Better in some cases, worse in others (the introduced run-time
overhead). I think it was Walter Banks here who said that he can create
an equivalent C program for any assembly program, with not more
instructions than the assembly program.


> Allowing programmers to compare signed and unsigned values correctly
> is a way of eliminating some obscure bugs.

The warning I mentioned before almost equally eliminates these bugs. Of
course, if the values go out of the range that's adequate for the cast,
then the comparison is wrong. However, this goes for most integer
operations: add, subtract, multiply. Do you do extra range checking
here, too? I suppose not... because of the run-time overhead. So in a
way, the C behavior of integer comparison is a result of consistency
with other integer comparisons.

You abandoned a bit of consistency for a bit of usefulness in certain
cases. It's a different trade-off, but arguments can be made for either
side.

>> One of the benefits of C is the low-level of the language, and the
>> fact that it does not make these sorts of decisions on behalf of the
>> programmer.
>
> This is just a side effect of the implementation of the compiler not
> the original design goal.

Are you sure? I think that keeping C very low-level, sort of a portable
and better readable assembler, was an original design goal.

> Look at recent posts concerning the implementation of the right shift
> operator ">>". It can either preserve the sign or not depending on
> how easy it is for the compiler to generate the specific code.

Exactly... IMO this fits the original design goal of a very low-level
language.

> If you compare a signed or unsigned int to a float in 'C' the compiler
> will produce code which will give the mathematical result you expect.
> Why should this be any different when you compare signed and unsigned
> ints?

As I wrote above, there are two issues involved. One is consistency with
other integer operations (you of course say that your implementation
maintains consistency with other comparison operations), and the other
is the low-level nature: a C comparison is typically a single assembler
instruction.

Gerhard

2009\02\18@123956 by sergio masci



On Wed, 18 Feb 2009, Gerhard Fiedler wrote:

{Quote hidden}

If you really KNOW this and WANT to take advantage of it then XCSB won't
stand in your way. It will let you cast either operand to do exactly what
you want.

{Quote hidden}

No it doesn't. The result of these operations can "overflow" whereas the
result of a comparison cannot.

And yes XCSB is consistent between integer operations such as add, sub,
mult AND compare. Unlike 'C' which tends to just evaluate expressions,
XCSB is driven by what you intend to do with the result.

In XCSB if I add 2 8 bit numbers and store the result in a third 8 bit
variable then the compiler knows that I don't care about overflow and only
performs an 8 bit addition. If on the other hand I add 2 8 bit numbers and
store the result in a 16 bit variable then the compiler knows that I do
care about overflow and performs an optimised 16 bit addition.

> Do you do extra range checking
> here, too? I suppose not... because of the run-time overhead.

No I don't do extra checking here because the value of doing so is much
less for the programmer but I do generate code which is consistant with
how the result will be used.

> So in a
> way, the C behavior of integer comparison is a result of consistency
> with other integer comparisons.

'C' behavior just seems to be: evaluate an expression, then do something
with the result, comparison just being based on whether the result is
true or false. Often a good optimiser will make the generated code look
like 'C' is designed to do something intelligent in the case of a
comparison, but it is not.

>
> You abandoned a bit of consistency for a bit of usefulness in certain
> cases. It's a different trade-off, but arguments can be made for either
> side.

No it just looks like that from your point of view because you are not
aware of all the facts.

{Quote hidden}

I'm pretty sure. Don't forget 'C' only came about because its predecessor
('B') was too slow (being an interpreted language). You can't get much
further from a portable assembler than an interpreter. Also if your goal
really were to produce a portable assembler you would not be using a
dynamic stack to pass arguments between functions since there is an
overhead in setting these up and using them within the called function.

>
> > Look at recent posts concerning the implementation of the right shift
> > operator ">>". It can either preserve the sign or not depending on
> > how easy it is for the compiler to generate the specific code.
>
> Exactly... IMO this fits the original design goal of a very low-level
> language.

I really cannot accept this. The greatest justification of all used
by almost every 'C' programmer is: "portability"

If the right shift operator ">>" is implementation dependent for a signed
int then the behaviour of a program may or may not be the same if I
compile a 'C' program using different 'C' compilers, let alone different
target machines. Where does that leave portability?

No, I contend that 'C' is different things to different people :-)

{Quote hidden}

Actually it is typically at least 2 assembler instructions :-)

Friendly Regards
Sergio
