PICList thread: '[PIC:] Floating Point in PIC'
2004\08\18@024653 by William Chops Westfield
On Aug 17, 2004, at 8:58 PM, Wan Zulhelmi Wan Ahmad Kamar wrote:
> How do I represent floating point numbers in PIC?
The PIC families do not have any native floating point support at all.
That means you can pick whatever representation is convenient to your
requirements for precision, range, and performance. And then you get
to write all the code to do appropriate operations on that format.
(alternately, you find someone else's floating point library that is
close enough to your requirements and use it. Perhaps simply by using
the floating point support built into a compiler...)
> If I use floating point, are all the instructions for
> add, subtract, and compare still applicable?
Not directly. Every floating point operation will become a function
call of some sort. Internally, there will be the normal add/etc
instructions, of course, but combined in rather complex ways...
A lot of the time, when people think they need floating point support,
what they really need is just fractions, which can be dealt with MUCH
more efficiently and simply. Sadly, fixed-point non-integer math isn't
taught much anymore...
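BillW's point about fractions can be sketched like this (a hypothetical Q8.8 example in Python, not PIC code; the names are mine): each value is stored as an integer equal to the real value times 256, so only the integer add, multiply, and shift operations a PIC already has are needed.

```python
# Hypothetical sketch of Q8.8 fixed point (8 integer bits, 8 fraction bits).
# Each value is stored as round(real_value * 256); only integer ops needed.

def to_q88(x):
    """Convert a real number to its Q8.8 integer representation."""
    return round(x * 256)

def q88_mul(a, b):
    """Multiply two Q8.8 values; the raw product has 16 fraction
    bits, so shift right by 8 to renormalize back to Q8.8."""
    return (a * b) >> 8

a = to_q88(1.5)               # 384 (0x0180)
b = to_q88(0.25)              # 64  (0x0040)
print(q88_mul(a, b) / 256.0)  # 0.375
```

Addition and subtraction need no renormalizing at all, which is why fractions handled this way are so much cheaper than floating point on a PIC.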
Personally, if I were to find an application that required floating
point, it would be time for me to shell out the $$$ for a C or BASIC
compiler with known, well-behaving float code, and then just write the
code in the high level language and hope it fit. (alternately, it'd be
time to consider another processor where 'fitting' wouldn't be so much
of an issue.)
BillW

http://www.piclist.com hint: To leave the PICList
piclist-unsubscribe-request@mitvma.mit.edu
2004\08\18@084252 by Olin Lathrop
part 1 2670 bytes content-type: text/plain; (decoded 7bit)
> The PIC families do not have any native floating point support at all.
> That means you can pick whatever representation is convenient to your
> requirements for precision, range, and performance. And then you get
> to write all the code to do appropriate operations on that format.
> (alternately, you find someone else's floating point library that is
> close enough to your requirements and use it. Perhaps simply by using
> the floating point support built into a compiler...)
I somehow missed the original post of this thread. I created my own 24 bit
PIC floating point routines a long time ago, and use them in the relatively
rare cases when floating point is really necessary. Most simple measurement
and scaling can be done with fixed point. One application where I find
floating point very useful is in PID controllers. There the dynamic range
of values can be very high, making it impossible to find a fixed point
format that is small enough in bytes.
Since PIC computations generally work on real world data, 24 bit floating
point with its 16 mantissa bits seems a good fit. At 3 bytes/value they
generally require less storage than fixed point, especially when the range
of numbers can vary widely. And 16 bit precision is several bits more than
what most measurements are good for, so you can do a bunch of computations
without the roundoff error becoming an issue.
I have attached the comments that describe my 24 bit floating point format.
Programs HFP and FPH convert between decimal floating point and the
hexadecimal 24 bit PIC floating point format. These can be useful during
debugging. Both these programs are included in the PIC development tools
available at http://www.embedinc.com/pic/dload.htm.
> Personally, if I were to find an application that required floating
> point, it would be time for me to shell out the $$$ for a C or basic
> compiler with known, well-behaving float code, and then just write the
> code in the high level language and hope it fit. (alternately, it'd
> be time to consider another processor where 'fitting' wouldn't be so
> much of an issue.)
Compiler floating point is notorious for code bloat. Why not just use a few
handcrafted floating point routines and call them from the assembly code?
I've got one application doing two nested PID control loops and a lot of
other stuff on a 16F876. It's got less than 100 words of program memory
left, and I seriously doubt that it would have fit if it were compiled code.

part 2 2575 bytes content-type: text/plain; (decoded quoted-printable)
; 24 bit floating point format:
;
; 24 bits are used to describe a floating point value using 1 sign bit,
; 7 exponent bits, and 16 mantissa bits as follows:
;
;  |          byte 2           |        byte 1         |       byte 0      |
;  | 23 | 22 21 20 19 18 17 16 | 15 14 13 12 11 10 9 8 |  7 6 5 4 3 2 1 0  |
;  |  S |         EXP          |                    MANT                   |
;
; S - Sign bit. 0 for positive or zero value, 1 for negative value.
;
; EXP - Exponent. The overall floating point value is the mantissa
; value times 2 ** (EXP - 64) when EXP is in the range from 1 to 127.
; The special EXP value of 0 is only used when the overall floating
; point value is 0.0.
;
; MANT - Mantissa. Except for the special case when the overall
; floating point value is 0, the mantissa represents a fixed point
; value such that 1 <= mantissa < 2. This means the integer part of
; the mantissa is always 1. Since this integer part is always the
; same, it is not stored. The MANT field contains the 16 most
; significant fraction bits of the mantissa value. Therefore
; MANT = (mantissa - 1) * 65536. An overall floating point value of
; 0.0 is indicated by EXP = 0. In that case MANT is reserved, and
; should be 0.
;
; Consider the following examples:
;
; 0.0 --> 000000h
;
; S = 0 (positive or zero)
; EXP = 0 (special case for 0.0)
; MANT = 0 (special case for 0.0)
;
; 1.0 --> 400000h
;
; S = 0 (positive or zero)
; exponent = 0, EXP = 64 --> 40h
; mantissa = 1.0, MANT = 0
;
; -3.141593 --> C19220h
;
; S = 1 (negative)
; exponent = 1, EXP = 65 --> 41h
; mantissa = 1.570797, MANT = 37,408 --> 9220h
;
; Unless otherwise specified, overflow and underflow values are silently
; clipped to the maximum magnitude (7FFFFF for positive, FFFFFF for negative)
; and zero, respectively.
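As a cross-check of the format description, here is a small Python sketch (the function names are mine, not from Olin's HFP/FPH tools) that converts between Python floats and this 24-bit representation, reproducing the worked examples above:

```python
import math

def encode_fp24(x):
    """Encode a float into the 24-bit format: 1 sign bit, 7-bit
    excess-64 exponent, 16 stored fraction bits of a hidden-1 mantissa."""
    if x == 0.0:
        return 0                      # special case: EXP = 0 means 0.0
    s = 1 if x < 0 else 0
    m = abs(x)
    e = math.floor(math.log2(m))      # mantissa = m / 2**e is in [1, 2)
    mant = round((m / 2.0 ** e - 1.0) * 65536)
    if mant == 65536:                 # rounding carried into the hidden 1
        mant = 0
        e += 1
    exp = e + 64
    if not 1 <= exp <= 127:
        raise OverflowError("value outside 24-bit float range")
    return (s << 23) | (exp << 16) | mant

def decode_fp24(w):
    """Decode a 24-bit word back to a Python float."""
    exp = (w >> 16) & 0x7F
    if exp == 0:
        return 0.0
    val = (1.0 + (w & 0xFFFF) / 65536.0) * 2.0 ** (exp - 64)
    return -val if w & 0x800000 else val

print(hex(encode_fp24(1.0)))         # 0x400000
print(hex(encode_fp24(-3.141593)))   # 0xc19220
```

Running this reproduces the 400000h and C19220h examples from the comments, which is a quick sanity check when debugging values by hand.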

2004\08\18@103344 by John J. McDonough

----- Original Message -----
From: "Olin Lathrop" <olin_piclist@EMBEDINC.COM>
Subject: Re: [PIC:] Floating Point in PIC
> in the relatively rare cases when floating point is really necessary.
I think I might take that a step farther. For control, floating point is
usually a bad thing, and with PICs, our objective is almost always control.
Back so many years ago I used to think you needed floating point to do
complex control calculations. Since then, I have automated dozens of
chemical plants, not simple little unit ops, but complex, world-scale plants
with hundreds or thousands of I/O's. Sure, most of the time, the
controllers are simple PI flow controllers. But I have done plenty of
complex feedforward controllers involving distillation calculations and
reaction kinetics. I've implemented predictive models of complex,
multistage plants inside the control system. Not once did I resort to
floating point.
One obvious problem with floating point is that it is large and slow. Maybe
more to the point for control, most FP algorithms tend to be not very
deterministic, which complicates control.
But the most important point is this .... if you intend to control
something, you need to understand it. In complex controllers there may be
many variables with varying degrees of significance. Real world sensors
rarely have a range of more than about eight or ten bits, although it is
often more convenient to carry calculations out to a little more precision
than that. Understanding the implications of the rangeability of the various
measurements is key to making your control work. If you need to resort to
floating point, then you don't understand the problem well enough.
That being said, I do believe in never saying never. Somewhere out there
exists a control problem where you actually do need floating point. But
newcomers to the sport should recognize that this isn't a 1% or even 0.1%
sort of occurrence. It is an extremely rare occurrence that very few
implementors will ever see.
McD

2004\08\18@114843 by Olin Lathrop
John J. McDonough wrote:
> I think I might take that a step farther. For control, floating
> point is usually a bad thing, and with PICs, our objective is almost
> always control.
This may be true in your particular experience, but there are many things
PICs do beyond control, and floating point has its uses. Your statement
sounds more like a religious conviction or an attitude problem than a
rational conclusion. The main deciding factor between fixed point and
floating point is the dynamic range and precision required of the data.
For example, let's say values are measured to 10 bits and you want to keep
intermediate calculations good to 14 bits so that computational noise
doesn't accumulate to a significant level. That means you can have a
dynamic range of 32 bits - 14 bits = 18 bits, about 260K:1, if using 32 bit fixed
point. That's no problem for directly measured values, but may not be so
easy to arrange for intermediate values especially if divide, square, or
square root operations are required. It may also require different implied
scale factors for the various numbers. There is nothing wrong with that
mathematically, but it is harder to program and more error prone.
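The "different implied scale factors" problem can be illustrated with a toy Python sketch (hypothetical, not from any PIC library): squaring a value with 14 implied fraction bits produces a raw product with 28 fraction bits, and the programmer must shift back to the intended scale by hand at every such step.

```python
# Hypothetical Q2.14 example: values carry an implied scale of 2**-14.
F = 14                          # fraction bits of the format

def q_square(a):
    """Square a Q2.14 value; the raw product has 2*F fraction bits,
    so shift right by F to return to the implied 2**-14 scale."""
    return (a * a) >> F

x = round(1.25 * (1 << F))      # 1.25 in Q2.14 -> 20480
print(q_square(x) / (1 << F))   # 1.5625
```

Every multiply, divide, or square root changes the implied scale, and forgetting one shift silently corrupts the result, which is exactly the error-prone bookkeeping being described.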
For low to mid volume products, the decreased engineering, testing, and
debugging cost is often worth the disadvantages of floating point.
> Not once did I resort to floating point.
That hardly proves the problem could not have been solved effectively using
floating point.
> One obvious problem with floating point is that it is large and slow.
This is a common misconception. Again, depending on the dynamic range and
accuracy, floating point can easily take less space. Floating point gives
up a fixed number of bits (usually 8) in return for a guarantee that the
remaining bits will all be significant over a very wide dynamic range. This
means that in many cases 24 bit floating point is sufficient, but would
require 32 bits or more and a lot of care to use fixed point.
As for speed, some operations like addition and subtraction are usually
slower in floating point due to the need to prenormalize. However,
multiplication and division are usually faster because there are fewer bits
that need to be crunched.
> Maybe more to the point for control, most FP algorithms tend to be
> not very deterministic, which complicates control.
This is just plain silly. Of course floating point is deterministic. There
is one and only one answer for each case, just like fixed point computation.
The difference I think you are alluding to is the nature of the
computational noise. Floating point tends to introduce noise at a roughly
constant signal to noise ratio, whereas fixed point introduces noise at a
roughly constant amplitude. Neither one is inherently better, but each
characteristic needs to be considered in the overall design. Generally both
schemes deal with it by guaranteeing that the computational noise is small
compared to measurement noise and then ignoring it. This is a valid
approach, and works with both schemes.
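The contrast being drawn can be demonstrated numerically (a rough Python sketch, not PIC arithmetic; the rounding helpers are mine): truncating to a fixed number of fraction bits bounds the absolute error, while truncating to a fixed number of mantissa bits bounds the relative error.

```python
import math

def fixed_round(x, frac_bits=16):
    """Round to 16 fraction bits: absolute error is at most 2**-17."""
    return round(x * 2 ** frac_bits) / 2 ** frac_bits

def float_round(x, mant_bits=16):
    """Round to a 16-bit mantissa: relative error is near 2**-17
    regardless of the value's magnitude."""
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (e - mant_bits)
    return round(x / scale) * scale

for v in (1.0000001, 1000.00001):
    print(v, abs(fixed_round(v) - v), abs(float_round(v) - v) / v)
```

For the small value both schemes do well; for the large one, fixed point keeps the same absolute noise floor while floating point keeps the same relative one, which is the design trade-off to weigh against the measurement noise.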
> But the most important point is this .... if you intend to control
> something, you need to understand it. In complex controllers there
> may be many variables with varying degrees of significance. Real
> world sensors rarely have a range of more than about eight or ten
> bits, although it is often more convenient to carry calculations out
> to a little more precision than that. Understanding the implications
> of the rangeability of the various measurements is key to making your
> control work. If you need to resort to floating point, then you
> don't understand the problem well enough.
So all designers of systems using floating point were stupid and lazy? This
is clearly ridiculous.
To summarize, the advantages of floating point are:
1 - Easier implementation. No need to determine the scale factors for
various fixed point numbers.
2 - Autoranging, which makes them attractive when values can range widely.
3 - Usually fewer total bits required for the same minimum guaranteed
precision unless the dynamic range of the values is rather small.
4 - Usually faster for multiply and divide.
The disadvantages are:
1 - Slower addition and subtraction.
2 - Math routines require more code space.
3 - No data storage savings if the dynamic range of values is known to be
limited.
4 - Floating point math routines are more difficult to implement than
fixed point ones.
*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com

2004\08\18@114844 by Spehro Pefhany

At 10:34 AM 8/18/2004 -0400, you wrote:
>That being said, I do believe in never saying never. Somewhere out there
>exists a control problem where you actually do need floating point. But
>newcomers to the sport should recognize that this isn't a 1% or even 0.1%
>sort of occurrence. It is an extremely rare occurrence that very few
>implementors will ever see.
>
>McD
I agree with this. I'd also caution people to be careful with using
crapola resolution floating point as a panacea that would help them
avoid having to understand what is going on. I've seen VISIBLE problems
with using 5 decimal digit math in a typical 10-bit real accuracy
control system. It all depends on what is actually going on in the math,
such as the condition numbers of matrices.
Best regards,
Spehro Pefhany "it's the network..." "The Journey is the reward"
speff@interlog.com Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog Info for designers: http://www.speff.com

2004\08\18@165323 by William Chops Westfield
On Aug 18, 2004, at 8:48 AM, Olin Lathrop wrote:
>
>> But the most important point is this .... if you intend to control
>> something, you need to understand it.
I think, before using floating point on a system without LOTS of extra
precision (i.e. 64/80-bit "doubles" on PCs), you might also need to
review your numerical analysis textbooks so that you have a good
understanding of how errors collect, even with floating point...
I remember what a rude awakening it was when APL on my DEC wouldn't do
my physics homework. It didn't have the range to do 1/hbar^2. Grr.
BillW
