I haven't done much assembly with floating point numbers. Am I correct to assume that to output a floating point number, the IEEE standard bit pattern must be converted to a string and passed to an API function, and that to input such a number, a string must be read in and converted to the IEEE standard bit pattern, then stored into memory backwards (least significant digits first) or loaded in the proper order into a general purpose register? Thanks for any information! :bg
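To illustrate the "backwards" storage mentioned above: it is just the x86's little-endian byte order. A small C sketch (assuming an x86 target; the value 1.5f is purely illustrative) that dumps the bytes of a float as it sits in memory:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 1.5f;                      /* IEEE-754 bit pattern 0x3FC00000 */
    unsigned char bytes[sizeof f];

    memcpy(bytes, &f, sizeof f);         /* copy the raw bit pattern */
    for (size_t i = 0; i < sizeof f; i++)
        printf("%02X ", bytes[i]);       /* on x86 prints: 00 00 C0 3F */
    printf("\n");
    return 0;
}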
Almost entirely right. For output in a format readable by humans, the bit pattern does need to be converted and rendered as an ASCII string.
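As a rough illustration of what that conversion involves, here is a minimal C sketch (not the fpulib method) that renders a double as a fixed-point decimal string. It deliberately ignores rounding, NaN/infinity and round-trip accuracy, all of which a real converter must handle:

#include <math.h>

/* Render value as "-iii.fff" with frac_digits fractional digits. */
void float_to_string(double value, char *out, int frac_digits)
{
    if (value < 0.0) { *out++ = '-'; value = -value; }

    double ipart = floor(value);
    double fpart = value - ipart;

    /* integer part: peel off digits from the least significant end */
    char tmp[32];
    int n = 0;
    do {
        tmp[n++] = '0' + (int)fmod(ipart, 10.0);
        ipart = floor(ipart / 10.0);
    } while (ipart > 0.0 && n < 31);
    while (n > 0) *out++ = tmp[--n];     /* write them back most significant first */

    /* fractional part: repeatedly multiply by 10 and take the carry digit */
    *out++ = '.';
    for (int i = 0; i < frac_digits; i++) {
        fpart *= 10.0;
        int digit = (int)fpart;
        *out++ = '0' + digit;
        fpart -= digit;
    }
    *out = '\0';
}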
For input from an ASCII string, that string must be converted to the proper format but does not necessarily need to be stored in memory if the string is input from the user; the FPU is required for the conversion and the value could be processed immediately while residing on the FPU. If you initialize a float in the .data section of your app, the assembler converts the string to the prescribed format and it will be stored in memory.
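In the same spirit, a minimal C sketch of the input direction (plain decimal strings only; no exponent notation, overflow checks or correct rounding):

#include <ctype.h>

/* Parse a string such as "-123.45" into a double. */
double string_to_float(const char *s)
{
    double value = 0.0, scale = 1.0;
    int negative = 0;

    if (*s == '-') { negative = 1; s++; }
    else if (*s == '+') s++;

    /* digits before the decimal point */
    while (isdigit((unsigned char)*s))
        value = value * 10.0 + (*s++ - '0');

    /* digits after the decimal point */
    if (*s == '.') {
        s++;
        while (isdigit((unsigned char)*s)) {
            value = value * 10.0 + (*s++ - '0');
            scale *= 10.0;
        }
    }
    return negative ? -value / scale : value / scale;
}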
The fpulib package contains the source code for the two functions that convert between ASCII and float formats. If you look at that source code, you will notice how complex such conversions are. If you want to learn more about the IEEE format, some description is available at:
http://www.ray.masmcode.com/tutorial/fpuchap2.htm
Thanks for the tips! Can anyone give pseudo code for the algorithm to convert floating point numbers back and forth, from memory to string and from string to memory/register? I'm looking in the second volume of Knuth's series, but haven't found anything on this particular algorithm yet (lots of good floating point information can be found in that book, though!). Thanks for any info! :U