Hi everyone!
I have a little question: what sort of floating-point numbers should be used under what circumstances?
I know only a little about MMX, SSE and those things. I thought of FPU floats and also of fixed-point numbers.
Please give your comments, pros and cons.
Which are commonly used? Which are the fastest? What hardware is needed for each representation?
The possibilities I know:
- fixed point: n bits integer, n bits fraction. Could be 32-32 bits.
- FPU types
Greets, gábor
Fixed point is done using integer instructions, think of it as an integer divided by 2^n.
The reason to use fixed point is speed: if your number is within a range that'll fit within a total of 32 bits, it'll be very fast because you just use integer instructions. The final conversion is done with a shift (or an add and a shift if you want to round).
The whole advantage of floating point is that its range is dynamic: you sacrifice precision for range or vice versa. Most of the time when you deal with very large numbers you don't care about smaller values (the national debt of the USA is over $7.5 trillion; you don't care so much about the cents). It is in fact the dynamic range that makes the hardware more expensive (in terms of gates and timing)!
Mirno
Quote from: gabor on May 17, 2005, 10:55:06 AM
Hi everyone!
I have a little question: what sort of float numbers should be used under what circumstances?
I know only a few about MMX and SSE and those things. I thought of FPU floats and also of fixed point numbers.
Please give your comments, pro and kontra.
Which are commonly used? Which are the fastest? What hardware is needed for each representations?
The possibilities I know:
- fix point: n bytes int, n bytes fraction. Could be 32-32 bits.
- FPU types
Greets, gábor

Fixed point math was really popular when the Pentium first came out because its floating-point unit blew chunks (as well as earlier Intel processors not being that fast at floating point). If I remember right, the original Doom used fixed-point math because of that. Nowadays floating point is a lot faster, so it's rare that you want to sacrifice accuracy for a slight increase in speed. I prefer using scalar SSE and scalar SSE2 over the standard FPU. On my P4 it's faster, and it's also easier to program. That's why I posted that one tutorial on using it to do FPU stuff.
Hey Gabor :)
If you need maximum precision with the equivalent of up to 19 significant decimal digits and/or a range spanning 10^-4932 to 10^+4932, only the FPU can presently provide you with it.
Raymond
Hi ya folks!
I did a little research on this topic and I found that, as Mark posted, there is no particular reason not to use the FPU and float numbers...
So for me the winner is float.
Could someone give the range for the REAL4, REAL8 and REAL10 numbers? Are there any other floats for the FPU?
Greets, gábor
If you intend to use the FPU but know little or nothing about it, you may want to have a look at the following:
http://www.ray.masmcode.com/index.html#fputut
Raymond
LoL shameless self-promotion! (no insult intended)
I think it is better to use REAL8 and SSE2 because the FPU will probably be phased out in the next decade or two.
:))) The FPU will phase out in the next decade? Don't be so pessimistic! I am positive that in the next few decades the whole computing joke led by Intel will die out. It is really a joke that they are doing everything to keep that old architecture alive. Pumping up the clock frequency and shrinking the circuits will definitely not be enough in the next few years. Now they come with dual processors, maybe later three CPUs, or like in the PS3 a CPU, a GPU and another 5-6 processors integrated... And we the end users are forced to buy them because of the slow and resource-wasting software created nowadays. I don't want to name a company, but in opposition to its name they are creating huge, say, macro soft-wares...
Okay I made all my complaints :))
Viva parallel processing, viva RISC architecture and viva neuron modelling architectures!
:))
BTW: thanks for the post, very useful!
Greets, gábor
Quote from: gabor on May 19, 2005, 08:56:47 AM
:))) The FPU will phase out in the next decade? Don't be so pessimistic!
If you are pessimistic, you are never disappointed, and sometimes pleasantly surprised!
Look how long it took DOS to disappear. Even now, a decade after the first 32-bit operating system (Win95), CPUs still start up in real mode.
But DOS hasn't disappeared. And I expect the ability to run FPU code to persist for at least 20 more years, for the same reason that real mode is still supported. Breaking a mountain of still-useful software would be bad business.
Quote from: MichaelW on May 19, 2005, 11:02:10 AM
Breaking a mountain of still-useful software would be bad business.
No one uses DOS anymore. It disappeared with Win2000 and WinME.
It is hardly a mountain, if anything.
I think we could argue endlessly about such things, but the main problem is that MS-DOS and Windows have too much influence on the computer business, and to stay backwards compatible (I don't really see why this is such an important rule) the newest versions still suffer from the design and implementation problems made in the very first versions.
However, a float is still a float, and when the hardware or OS changes, only we low-level programmers suffer. But please don't start to develop in Java, I mean only in Java... In my opinion it is always good to know how things are really working.
Greets, gábor
Quote from: AeroASM on May 19, 2005, 01:23:49 PM
Quote from: MichaelW on May 19, 2005, 11:02:10 AM
Breaking a mountain of still-useful software would be bad business.
No one uses DOS anymore. It disappeared with Win2000 and WinME.
It is hardly a mountain, if anything.
I meant actual DOS, as in MS-DOS version 6.22 and below, not Windows 95/98 MS-DOS mode.
And there is a mountain of software in use that contains FPU code.
I could have sworn that Win95/98 DOS mode was DOS.
Anyway, I am not saying that we should get rid of the FPU and break all the existing software; I am merely saying that because SSE(2/3) is faster and better, it needs to become the standard instead of the FPU. As you say, Intel cannot take it out until developers stop using the FPU and start using SSE. So we should use SSE instead of the FPU, to help this along.
One thing to keep in mind is that 64-bit Windows does not allow FPU or MMX code in 64-bit mode, only SSE/SSE2/SSE3. I think that was a dumb decision; it's going to break a lot of code, but there must have been a good reason for it. It isn't a hardware limitation, just an OS limitation. From what I have read, 64-bit Linux allows FPU and MMX code in 64-bit mode. ::)
Quote from: Greg on May 19, 2005, 06:21:26 PM
One thing to keep in mind is that 64-bit Windows does not allow FPU or MMX code in 64-bit mode, only SSE/SSE2/SSE3. Which I think was a dumb decision.
It had to be done at some time, and better sooner rather than later, so that developers do not attempt to write 64-bit FPU code.
I would say AeroASM is right. There are times when all the old and obsolete stuff must be thrown away. (I'm not saying that the FPU is an old technology. BTW, I would suggest totally different things to be cancelled.)
Okay this topic is becoming a topic in the Soap Box.
And yes, I finally chose to use the REAL4 type, but I am also considering 64-bit fixed point: 32-bit int, 32-bit fraction is quite comfortable. What da ya think?
Greets, gábor
Quote: but I am also considering to use 64bit fix point: 32bit int 32bit fraction is quite comfortable. What da ya think?
That would be fine with 32-bit registers if you limit yourself to additions and subtractions. However, you can forget multiplications (except with integers), and divisions (unless the quotient would be less than 1), with 32-bit registers.
Raymond
Now I'm in trouble again. I used REAL4 numbers and then I bumped into this:
REAL4number REAL4 0.7
Text db 40 dup(0)
fld REAL4 PTR [REAL4number]
invoke FpuFLtoA, 0, 12, ADDR Text, SRC1_FPU or SRC2_DIMM
PrintString Text
The result was: Text = 0.699999988079!!! Why is that so?
Should I use REAL8 or REAL10? My problem is that I need to store plenty of real numbers and I would like to use the smallest possible format.
The representation range is about 1.0E-6 to 1.0E+6. It should fit into a REAL8, shouldn't it?
I used the FPULIB and the DEBUG includes and libraries.
Gabor, the methodology by which floating-point operations calculate intrinsically produces a small error. This error is smallest around 0 and gets bigger with the exponent. This is normal FP behavior. Do a Google search for "exponent-mantissa math" if you need an explanation of how this works. More bits produce a wider value range and better overall accuracy, but there will still be a small error. If you want precise integer and fractional data, maybe your idea of 32 bits integer and 32 bits fraction would be better suited. (I think you said that earlier.) :wink
Remember to round! (You might be able to keep the floating-point version if you round from, say, the thousandths position. :)
Quote: The result was: Text = 0.699999988079!!! Why is that so?
In order to fully understand why that is so, you must first be familiar with the floating point data format. I would suggest you look at the following:
http://www.ray.masmcode.com/tutorial/fpuchap2.htm#floats
Then, look at a further explanation given with the description of the "fld" instruction at:
http://www.ray.masmcode.com/tutorial/fpuchap4.htm#fld
If you then need further clarification, I will definitely try to provide it. :clap:
Raymond
Raymond, Mark thanks!
I've read some of those documents, and I now have a clearer view. Now I know that those precision problems arose because I switched the rounding to truncation. I used truncation because I wanted to transfer the integer part and the fraction part separately into 2 dwords... I found my error there, and so I am free of the error of 0.6999 instead of 0.7.
However, can you confirm: if I want to use real numbers in the range 1.0E-6 .. 1.0E+6, so that 1 000 000.000 001 should be a valid number, then I should not use REAL4?
Greets, Gábor