
Division Algorithm

Started by cman, January 15, 2005, 12:01:20 AM


cman

Say one were to compose a module to handle numbers with a large degree of accuracy by setting aside an array of characters (bytes) for each of the whole-number and decimal parts of the number. For instance, if we have the number 123456789.123456789 and the two arrays:

 

      whole    db    20  dup ( ? )
      decimal  db    20  dup ( ? )

 


so that the very last location in the whole array holds the last digit, 9, and the 11th position holds the first digit of the whole part, 1. The decimal array is set up in the same fashion. What would be an efficient way to divide two such numbers? For instance, if we have:




  largeNumber struct
     
      whole    db    20  dup ( ? )
      decimal  db    20  dup ( ? )
 
  largeNumber ends

;how do we implement this:

largeNumberDivide proc ptrLHSLongNumber:DWORD, ptrRHSLongNumber:DWORD

............
largeNumberDivide endp
     
;this function will return another "largeNumber" struct with the result of division




Any ideas? Thanks...

tenkey

The "decimal" or, more correctly, the fraction should immediately follow "whole", with the first digit in the first position of "decimal". Pad both ends with zeroes.

What I used to do with longhand division was to list the multiples of the divisor, d, from 2*d to 9*d. Then it was a simple matter of comparison to find the right quotient digit.

You get the greatest number of significant digits by left-justifying the number, so that the first nonzero digit is in the very first position. You need to keep track of where the decimal point is located with another number. This is pretty close to how floating point is implemented in hardware (although, these days, it is all binary based).
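The multiples-table idea above can be sketched in C for the simple case where the divisor fits in a machine word (the function name and the single-word restriction are my assumptions, not tenkey's code): precompute 1*d through 9*d once, then each quotient digit falls out of plain comparisons.

```c
#include <string.h>

/* Divide the decimal digit string num (most significant digit first) by d,
   writing the quotient digits into quot (same length, plus a terminator);
   returns the final remainder.  Assumes 2 <= d and d fits in an int. */
int divide_by_multiples(const char *num, int d, char *quot)
{
    int multiples[10];                  /* multiples[k] == k * d           */
    for (int k = 0; k < 10; k++)
        multiples[k] = k * d;

    int r = 0;
    size_t n = strlen(num);
    for (size_t i = 0; i < n; i++) {
        r = r * 10 + (num[i] - '0');    /* bring down the next digit       */
        int q = 9;
        while (multiples[q] > r)        /* comparison finds the digit      */
            q--;
        r -= multiples[q];
        quot[i] = (char)('0' + q);
    }
    quot[n] = '\0';
    return r;                           /* remainder, 0 <= r < d           */
}
```

For a multi-digit divisor the multiples themselves become big numbers and the comparison becomes a digit-array comparison, but the structure of the loop is the same.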
A programming language is low level when its programs require attention to the irrelevant.
Alan Perlis, Epigram #8

cman

OK, thanks! I wanted to keep the number shifted toward the right end of the array (last digit in the last space of the array), since this simplifies the addition of two "largeNumbers", but your layout seems easier for division, as that algorithm inspects the front of the dividend rather than starting at the back of the number. Are there any "tricks" to doing this sort of division? I've looked through some of my algorithm books, in particular Algorithmics: Theory & Practice by Gilles Brassard and Paul Bratley (neat book, by the way  :bg ), but only long-number multiplication is treated. Where might I look for such algorithms? I've noticed that the "common way" of doing something usually does not translate to the most efficient computer algorithm (translating the usual method of longhand division to computer code seems inefficient, in that a large number of tests will be needed to find partial quotients). Combine this with the fact that these functions could be used over and over in certain computations, and it seems that more efficient methods might be required. Just curious (and I would like to write good code  :U ). Thanks again!

tenkey

Well, that's what you get for using decimal notation, ASCII, BCD, or otherwise.

And your format is incorrect for adding two bignums: with the fraction right-justified, (0.1 + 0.11) comes out as 0.12 instead of 0.21.

For repetitive division by the same divisor d, the usual optimization is to calculate the reciprocal 1/d, and then use multiplication.

With binary notation, division is vastly simplified: only one test after each subtraction. There is even a "non-restoring" algorithm that skips the restore step entirely, doing just one add or subtract per quotient bit.

And then there is always the use of lookup tables to speed up code.
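The reciprocal trick mentioned above can be sketched in C (the type and function names are mine, not from this thread): precompute m = floor(2^32 / d) once, then replace each hardware DIV with a multiply, a shift, and at most one correction step. The estimate is provably floor(n/d) or one less, so a single test fixes it up.

```c
#include <stdint.h>

typedef struct {
    uint32_t d;     /* the invariant divisor              */
    uint32_t m;     /* precomputed floor(2^32 / d)        */
} recip32;

recip32 recip_init(uint32_t d)        /* assumes d >= 2 */
{
    recip32 r = { d, (uint32_t)(0x100000000ULL / d) };
    return r;
}

uint32_t recip_div(uint32_t n, recip32 r)
{
    /* Estimate is floor(n/d) or floor(n/d) - 1; one test fixes it up. */
    uint32_t q = (uint32_t)(((uint64_t)n * r.m) >> 32);
    if ((uint64_t)(q + 1) * r.d <= n)
        q++;
    return q;
}
```

Production-quality "division by invariant integers" uses a refined magic number that needs no fixup at all, but this version shows the principle.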

cman

Thanks for your time and information!  :bg  Could you explain how I could use "binary notation" to solve this problem? I'm not quite certain what you're getting at. If this problem is more easily solved by translating it to a simpler problem, I'm all for it! (Reminds me of using "change of variables" in multivariable calculus, or linear transformations in geometry!) I assume this involves using the binary representation of the number in some way. Can you give an example? Thanks......

tenkey

And here I thought you knew all about binary numbers, at least the integers or whole numbers.

A side question is do you want decimal accuracy of fractions? Most decimal fractions do not have an exact equivalent binary fraction. You will run into unexpected roundoff errors when converting to binary.

If you want decimal accuracy of fractions, then I suggest you use a higher number base than 10. A DWORD can hold a signed value up to 2G, so a base of 10^9 will give you decimal accuracy and allow you to use fast binary operations. You get a little bit of data compression: 9 digits in 4 bytes, instead of 8 digits in 4 bytes (packed BCD). It should be faster to process 30 bits at a time instead of 4.
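The base-10^9 representation can be sketched in C (limb order and function name are my assumptions): each 32-bit limb holds nine decimal digits, so a carry propagates once per nine digits instead of once per digit, and the per-limb arithmetic is ordinary binary arithmetic.

```c
#include <stdint.h>

#define BASE 1000000000u   /* 10^9: nine decimal digits per limb */

/* sum = a + b over n limbs, least significant limb first;
   returns the final carry (0 or 1). */
uint32_t big_add(const uint32_t *a, const uint32_t *b, uint32_t *sum, int n)
{
    uint32_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;
        carry  = (t >= BASE);               /* at most one carry out */
        sum[i] = (uint32_t)(t - (carry ? BASE : 0));
    }
    return carry;
}
```

Converting a limb back to its nine decimal digits is a single hardware division per limb, which is where the speed over digit-at-a-time ASCII arithmetic comes from.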

The other possibility is to scale all your decimals to integer form, convert to binary, and then use binary arithmetic exclusively.

If you want to see how others do it, search for "bignum".

Binary division - restore algorithm

27 / 5 = 5.4

101 ) 11011.0000 ( 101.0110...
    - 101
     ----
     00011 positive, q = 1
     - 101
      ----
      1110 negative, q = 0
     + 101 add back (restore)
      ----
      00111
      - 101
       ----
       0010 0 positive, q = 1
       - 10 1
        -----
        111 1 negative, q = 0
       + 10 1 restore
        -----
        010 00
        - 1 01
         -----
         00 110 positive, q = 1
         -  101
          -----
          0 0010 positive, q = 1
          -  101
            ----
            1101 negative, q = 0
           + 101 restore
            ----
            0010
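The worked example above can be written as a restoring-division loop in C (a sketch under my own naming; the thread gives no code): shift the remainder left one bit at a time, try the subtraction, and add the divisor back when the result goes negative. Pre-shifting the dividend by `frac` bits makes the low bits of the quotient the binary fraction digits, just as in the longhand layout.

```c
#include <stdint.h>

/* Returns the quotient of (n << frac) / d, i.e. n/d in fixed point with
   `frac` fraction bits.  Assumes d != 0 and n << frac does not overflow. */
uint32_t restoring_div(uint32_t n, uint32_t d, int frac)
{
    uint64_t num = (uint64_t)n << frac;
    uint64_t rem = 0;
    uint32_t q = 0;
    for (int i = 32 + frac - 1; i >= 0; i--) {
        rem = (rem << 1) | ((num >> i) & 1);   /* bring down next bit */
        rem -= d;                              /* trial subtraction   */
        if ((int64_t)rem < 0) {
            rem += d;                          /* negative: restore   */
            q = (q << 1);                      /* quotient bit 0      */
        } else {
            q = (q << 1) | 1;                  /* quotient bit 1      */
        }
    }
    return q;
}
```

With n = 27, d = 5, frac = 4 this produces 0b1010110 = 101.0110 in binary fixed point, matching the longhand work above (27/5 = 5.375 to four fraction bits).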

cman

Thanks tenkey, but I already know this information. What I wanted to know was how I could apply it to my problem. In college we had an assignment in C where large numbers were abstracted by creating a class that used arrays of chars to store numbers beyond the size of those able to be contained by the built-in types. If we had the number 123456789123456789, it was put in the internal char array of the class this way:

   0 1 2 3 4 5 6 7 8 ......  ( array indexes )
  ____________________
   1 2 3 4 5 6 7 8 9 ....... ( number stored at the address of the index )


I'd like to implement a large-number module in assembly language, and I need an algorithm to divide two numbers that are stored digit by digit in arrays. I was curious whether you had an efficient algorithm for doing the division while the digits are in the array, and about how the binary representation of the number might hasten this task. Thank you for your time.  :toothy

tenkey

I don't have any special algorithms to speed up division, just general optimizations. Any efficient process that allows you to process more than one decimal digit at a time is a help. That's why I suggest a number base of 10^9. Unless you can beat the speed of hardware division and multiplication through some alternate means, the large number base seems the way to go. With ASM, you can keep the double-wide product and dividend in registers, which might not happen with an HLL like C (unless, of course, you have access to inline assembly). It's been a while since I've looked at Knuth's The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, where I got as far as the multiplication optimizations. If there is a "rapid convergence" algorithm similar in spirit to Newton iteration, I don't remember running across it.
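For what it's worth, a rapid-convergence scheme of exactly that spirit does exist: Newton iteration on f(x) = 1/x - d refines a reciprocal estimate with x <- x * (2 - d * x), roughly doubling the number of correct digits per step. Bignum libraries run this same recurrence at increasing precision to divide via the reciprocal; the following is only a double-precision sketch of the convergence, not bignum code.

```c
/* One Newton step per iteration; converges quadratically to 1/d
   provided the initial guess x0 lies in (0, 2/d). */
double newton_recip(double d, double x0, int steps)
{
    double x = x0;
    for (int i = 0; i < steps; i++)
        x = x * (2.0 - d * x);
    return x;
}
```

Starting from x0 = 0.1 for d = 5, the relative error squares each step (0.5, 0.25, 0.0625, ...), so a handful of iterations reaches full double precision.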