Why is it necessary to sign extend before performing signed integer division?

Started by fantasia, February 03, 2012, 11:31:09 AM


fantasia

Hello!

I am trying to get my head around the mechanics of signed integer division.

My understanding is that before I use the IDIV instruction, I should sign extend the dividend into the high order byte, word or dword, as appropriate. 

I have learned how to perform long division in base 2, 10 and 16, so I can calculate on paper what the result of using IDIV should be. (On paper: I note the signs of the dividend and divisor, convert both so they are positive, perform the division, and convert the sign of the quotient if required.) But that's on paper!  What I don't understand is why the sign extension is required in the first place.

May I ask somebody to explain why the sign extension is required, possibly including an example long-division calculation?  I hope this isn't asking too much.

Any advice will be greatly appreciated.

Thanks for reading.

F.

MichaelW

For a 32-bit divisor the dividend goes into EDX:EAX. If the value in EAX is positive, you can zero-extend EAX into EDX, or simply set EDX to zero, and EDX:EAX will then contain a 64-bit representation of the value in EAX. But if you do the same when the value in EAX is negative, then the value in EDX:EAX will be some positive number, obviously not a 64-bit representation of the value in EAX.
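For illustration, a minimal sketch (the values here are only examples) of setting up EDX correctly before IDIV; CDQ sign-extends EAX into EDX, and CWD/CBW are the 16-bit and 8-bit equivalents:

    mov  eax, -7        ; dividend, 0FFFFFFF9h
    cdq                 ; sign-extend EAX into EDX, so EDX:EAX = -7
    mov  ecx, 2         ; divisor
    idiv ecx            ; EAX = -3 (quotient), EDX = -1 (remainder)

If EAX had been positive, xor edx, edx would have produced the same EDX:EAX value, which is why simply zeroing EDX is fine in that case.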
eschew obfuscation

raymond

And the computer does not do the division with the negative number(s). Those are converted to positive numbers, just as you do with pencil and paper, and the quotient is later converted back to its proper sign.
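As a worked example of that fix-up (the numbers are arbitrary), which matches what IDIV actually returns:

    -100 / 7:  |-100| = 100,  100 / 7 = 14 remainder 2
               signs of dividend and divisor differ, so quotient = -14
               remainder takes the sign of the dividend, so remainder = -2
               check: (-14)*7 + (-2) = -100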
When you assume something, you risk being wrong half the time
http://www.ray.masmcode.com

clive

Because the sign is removed before the division occurs, and then reapplied to the result. If you fail to sign extend correctly, the division will be performed on a large positive number instead.
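A quick sketch of that failure mode (values picked just for illustration): zeroing EDX instead of sign extending makes the 64-bit dividend the unsigned value 4294967289 rather than -7:

    mov  eax, -7        ; EAX = 0FFFFFFF9h
    xor  edx, edx       ; wrong for a negative EAX: EDX:EAX = 4294967289
    mov  ecx, 2
    idiv ecx            ; EAX = 2147483644, EDX = 1, instead of -3 remainder -1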
It could be a random act of randomness. Those happen a lot as well.

vanjast

Correct me if I'm wrong here.. (too lazy to re-search)

As far as I remember, division uses an 'early out' algorithm in microcode, such that it subtracts multiples of the divisor and then 2's complement is applied to the result.
Sign extending ensures that the correct sign is carried into the result ???
:eek

clive

Quote from: vanjast
Correct me if I'm wrong here.. (too lazy to re-search)

Well, certainly the classical methods would identify when they had consumed/processed all the divisor bits, and would cycle one bit at a time. I think the current hardware implementations are a little more complex; you trade silicon for speed.
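For illustration, a rough sketch of the classical one-bit-at-a-time shift-and-subtract method mentioned above; this is only a software model of the idea, not what the hardware actually runs, and it ignores the corner case where the divisor has its top bit set:

    ; unsigned 32-bit / 32-bit division by shift-and-subtract
    ; in:  EAX = dividend, EBX = divisor (non-zero)
    ; out: EAX = quotient, EDX = remainder
        xor  edx, edx           ; partial remainder = 0
        mov  ecx, 32            ; one pass per dividend bit
    divloop:
        shl  eax, 1             ; next dividend bit -> CF, quotient bit 0 cleared
        rcl  edx, 1             ; shift it into the partial remainder
        cmp  edx, ebx
        jb   nosub
        sub  edx, ebx           ; divisor fits, subtract it
        inc  eax                ; and record a 1 in the quotient
    nosub:
        dec  ecx
        jnz  divloop

A signed version would strip the signs first and patch the signs of the quotient and remainder afterwards, which is exactly the fix-up raymond described.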

http://www.intel.com/technology/itj/2008/v12i3/3-paper/8-radix.htm

http://ec2-122-248-210-243.ap-southeast-1.compute.amazonaws.com/mediawiki/index.php/Binary_division

The ARM Cortex-M3 has a hardware divider capable of 32-bit operations in 2-12 cycles.

With the wrong signing, you'll get the wrong answer more slowly.
It could be a random act of randomness. Those happen a lot as well.

fantasia

Hi everyone!

I was almost sure that the division was performed with positive numbers, so thanks for clearing that up. 

May I ask: how did you come to acquire this knowledge? Is there an online resource that explains all this?  I suppose if my on-paper method predicts the results correctly I should be happy, but I'm sure I'm not alone in this forum as being someone who just wants to know why.

Thanks everyone.

clive

Quote from: fantasia
May I ask: how did you come to acquire this knowledge? Is there an online resource that explains all this?  I suppose if my on-paper method predicts the results correctly I should be happy, but I'm sure I'm not alone in this forum as being someone who just wants to know why.

I guess you have to study, see how others do it, and work the math from first principles. Some thirty years ago, your average microprocessor didn't have multiply and divide instructions, so you had to code them from scratch; later ones had limited precision, so you learned how to extend that precision. Assembler books from the late 1970s to early 1980s might be quite instructive.

Pick up some books on binary logic, and hardware gate level implementation of counters and adders.

Personally I don't think there is a single source for this information, or a trivial manner of retaining it, it takes multiple perspectives, some interpretation, and experimentation.

A lot of us here are quite old, and have been doing this for a long time.

http://6502.org/source/integers/32muldiv.htm
http://map.grauw.nl/articles/mult_div_shifts.php
It could be a random act of randomness. Those happen a lot as well.

fantasia

Hi Clive,

Thanks for the advice.

I have been studying several books on logic for the past year, and I'm doing an Open University Maths degree in my spare time, so I'm comfortable with the numbers  :U.

What I haven't done yet (but keep meaning to) is read about combinational/sequential logic in great detail.  If this is what's required to understand the need for the sign extension, then great, I'll relax until I get round to it.

It's always worth checking with you guys if there is a quick answer.

Thanks again.