Overloading... well, sort of

Started by Merrick, December 21, 2006, 11:50:08 PM


Merrick

A project I'm working on right now is the generalization of Simpson's Rule to two independent variables, for integrating volumes under surfaces. In practice the problem is straightforward, but I have a real issue with generality that I don't know how to solve (though it may turn out to be trivial). I was hoping to get some input.

In a nutshell, Simpson's Rule is like a lot of other numerical methods: you evaluate a given function at a number of points, turn a prescribed crank with the results, and get your answer. But here's the rub: how do you make the function which the numerical method must call flexible? That is, I can easily write code to call a function like Cos(a)*Cos(b) and integrate all day. But what if I want to change it to Cos(a)*Sin(b), or Tan(a)*Log(b), etc.? And what if I want to do this multiple times in the same program for a given run without having a bunch of repetitive code to call all of the different functions? (I know, you're going to say macros, but then it technically IS a bunch of repetitive code; I just don't *have* to look at it.)

At first I was a bit stumped as to the proper approach. My initial guess is that I would want to have the numerical method as a separate module. It would be called only with the parameters it explicitly needs: the limits of integration (a0, aN, b0, bN) and the integration step sizes (aDelta, bDelta... or number of steps). It would then be the numerical method's task to determine all of the function evaluations it needs, call them, manipulate the values, and return the final result. Here's where the numerical method needs to know which function to call. And how can one routine be written so that it can call more than one function with the same code?

After thinking about it, the only answer I can come up with is something akin to overloading. In addition to the integration limits and step sizes, you could send information on which function to call and the parameters that function needs. So, for instance, if you were integrating under a 2D Gaussian you would have to send the pre-exponential term, the two means, and the two standard deviations. One very quickly realizes that the type and number of parameters passed is going to be variable in the long run. We need a way to handle this.

My guess was that each function would get its own text name and that the variables for that specific function would be sent as a STRUCT. The generic "FUNCTION" then knows by the text name what function is being called and what the structure of the passed parameters must be... then the required calculation can be performed. (It's *sort of* like overloading).

So, first question is, is this completely way off base?
Second, has something like this already been done, and if so could someone point me at some examples?
Finally, is there a more obvious or simple approach to the problem I'm overlooking?

Thanks for any input...

Ultrano

You are heading in the correct direction. For each 2D function have a structure with: pointer to the (callback) 2D function, its text name (optional), and any additional parameters. The base-structure would be:


Base2DFunc struct
pFunc dd ? ; pointer to the callback-function
resolutionX dd ? ; number of samples to calculate on X-axis
resolutionY dd ?
Xmin real4 ? ; the lower boundary of the X-span
Xmax real4 ?
Ymin real4 ?
Ymax real4 ?

X real4 ? ; current X, for which to compute the result
Y real4 ? ; current Y,..
Base2DFunc ends



And, to define a function that calculates the volume of (sin(x)+t0)*cos(y), where t0 is an arbitrary extra parameter, compose this:


; out = (sin(x)+t0)*cos(y)

SinXplusT_CosY struct
base Base2DFunc <>  ; MUST be the first member in the struct
t0 real4 ?
SinXplusT_CosY ends


SinXplusT_CosY_func::   ; expects ecx = pointer to a SinXplusT_CosY structure
fld [ecx].SinXplusT_CosY.base.X
fsin
fadd [ecx].SinXplusT_CosY.t0
fld [ecx].SinXplusT_CosY.base.Y
fcos
fmul
; result is in ST(0)
retn




Before calling the double-integrator, you just need to set up a structure of the SinXplusT_CosY type.
In your double-integrator, on each iteration, change the .X and .Y values, and call the .pFunc callback.

This way, you can define as many custom 2D functions as you like - all handling structures, based on the Base2DFunc struct.
Please use a smaller graphic in your signature.

Ultrano

P.S. In the example above, "t0" is a constant value, which the user will provide via GUI or whatever.

Since you'll need a way to describe the 2D function to the GUI of your app, you need to provide one more struct:


FuncInfo struct
pName dd ?
pFunc dd ?
NumConstants dd ?
pConstantsNames dd ?
FuncInfo ends



A definition-struct for the SinXplusT_CosY would be:

.code
Func1_Info FuncInfo < CTEXT("(sin(x)+t0)*cos(y)") , SinXplusT_CosY_func, 1, Func1_constNames>
Func1_constNames:
dd CTEXT("t0 - some constant")


And at some place in your code, maintain a list of addresses of FuncInfo descriptions of your new functions:

All2DFuncs dd Func1_Info,Func2_Info,Func3_Info
Num2DFuncs dd 3



This way, you have full flexibility in choosing what function to integrate and what extra parameters to pass, and it's all easily expandable (you can add more funcs quickly), with a simple way of starting calculations via GUI or command line.

This is all the basics of making plug-ins for software.

Merrick

Thanks, Ultrano. That's a really big help!

Tedd

Alternatively, pass just two arguments to your magic function -- a pointer to the function it should use (pFunc), and a pointer to its arguments (pArgs).
The magic function then calls pFunc with pArgs, and since Func knows what parameters it's expecting, there's no need to fuss around telling it, as long as the caller of magic-func has put the args in place. This way, your magic function differs only by which pFunc you pass in and blindly uses that: no figuring out, no meddling, just clean and simple. (Whether you put the args onto the stack or into a buffer is up to you; you just point to them anyway.)
(By the by, overloading is implemented by compilers by generating the same code, with the necessary slight variations, for each overloaded type.)


Quick example..
.586
.model flat, stdcall
option casemap:none
include windows.inc
include kernel32.inc
includelib kernel32.lib

;***************************************************************************************************

magic_stuff proto pFunc:DWORD,pArgs:DWORD

my_add2 proto pArgs:DWORD       ;valA:DWORD,valB:DWORD
my_sub2 proto pArgs:DWORD       ;valA:DWORD,valB:DWORD
my_muldiv proto pArgs:DWORD     ;valA:DWORD,valB:DWORD,valC:DWORD

;***************************************************************************************************

.code
start:
    push 5      ;valB
    push 6      ;valA
    invoke magic_stuff, OFFSET my_add2,esp          ;6+5
    add esp,(4*2)   ;clean up the 2 args


    push 5      ;valB
    push 6      ;valA
    invoke magic_stuff, OFFSET my_sub2,esp          ;6-5
    add esp,(4*2)   ;clean up the 2 args


    push 5      ;valC
    push 6      ;valB
    push 7      ;valA
    invoke magic_stuff, OFFSET my_muldiv,esp        ;7*6/5
    add esp,(4*3)   ;clean up the 3 args

    invoke ExitProcess, NULL

;***************************************************************************************************

magic_stuff proc pFunc:DWORD,pArgs:DWORD
    ;..do some stuff, whatever..
    push pArgs
    call DWORD PTR [pFunc]
    ;..something else with the return value?..
    ret
magic_stuff endp

;***************************************************************************************************

my_add2 proc pArgs:DWORD        ;A+B
    push ebx
    mov ebx,pArgs
    mov eax,[ebx+4*0]   ;valA
    add eax,[ebx+4*1]   ;valB
    pop ebx
    ret
my_add2 endp

my_sub2 proc pArgs:DWORD        ;A-B
    push ebx
    mov ebx,pArgs
    mov eax,[ebx+4*0]   ;valA
    sub eax,[ebx+4*1]   ;valB
    pop ebx
    ret
my_sub2 endp

my_muldiv proc pArgs:DWORD      ;A*B/C
    push ebx
    mov ebx,pArgs
    mov eax,[ebx+4*0]   ;valA
    mov edx,[ebx+4*1]   ;valB
    mov ecx,[ebx+4*2]   ;valC
    mul edx
    div ecx
    pop ebx
    ret
my_muldiv endp

end start

(You could probably add a macro for handling pushing of the arguments/cleaning-up.)

Additional thought: magic_stuff could be given an extra arg of type VARARG, which would allow you to push the sub-func's args in the same invoke-ation, and the stack would be cleaned up accordingly (doing this turns it into a C call-style function, so the "add esp,???" is generated for you.)
No snowflake in an avalanche feels responsible.

Merrick

Nevermind... I just took a better look at your code, Tedd, and it pretty much makes my follow-up question moot!

Tedd...
Yes, that was the second part of my question, about actual overloading. We obviously don't have it with MASM, but there are definitely times when I find it useful. It seems like it could be kludged with something like a function call with two parameters: an argument indicating which "signature" to expect, and a pointer to the actual function parameters. The same, of course, could be accomplished by writing several different functions: e.g., func_int, func_real4, func_real8, func_ascii, etc., and a thousand different variations like that. It just seems a little more organized to gather all of that code under one overarching func which gets as one argument "int", "real4", etc.
But, again, just out of curiosity, does anyone see a better way to do that?

Thanks.

P.S. I know there has been quite a bit of work on MASM OOP, but I really haven't had the time to give that a real look. Does anyone know if any work has been done there along these lines?

raymond

I don't know if it could be of any help to you but it may be more food for thought.
When I prepared my library to compute the various functions of complex numbers, I also prepared a test app which could be used to visualize the results of any input with any of the functions of the library. That test app, complete with source code and resource file, is included in the ZLIB package which you can download from:

http://www.ray.masmcode.com/complex.html

Raymond
When you assume something, you risk being wrong half the time
http://www.ray.masmcode.com

Merrick

Raymond,

Thanks. I'll take a look at this.

Merry Christmas, everyone.

Merrick

Raymond... a couple of quick things:

You might mention on your complex numbers page that the angle is referenced with respect to the x-axis in the counter-clockwise direction to help avoid confusion. Also, most programming languages only have an ATAN or ARCTAN function that returns results between -pi/2 and +pi/2, which is of course totally inadequate for the purposes of complex number math. You might mention in your brief tutorial that one needs to take the appropriate quadrant of the complex coordinate into account (and the ATAN2 or ARCTAN2 function supplied with some languages).

Very useful work. Thanks for pointing to it!

raymond

Merrick, thanks for your interest and comments. You are one of the very few who have ever returned any comment on that complex subject.

I will definitely add that comment about the angle measurement relative to the x-axis for polar coordinates.

I do agree that the ATAN of real numbers can only return a value between -pi/2 and +pi/2 (REGARDLESS of the programming language used). However, the arctan of complex numbers is not computed in the same manner as the arctan of real numbers. If you look at the source code of the ZAtan function, you will find the formula used to compute such a function. If you take the TAN of small cartesian coordinates in any of the quadrants with the test app, and apply the ATAN function to the result, you get back the original data.

Also look at the source code for the other trigonometric functions on complex numbers; they can be quite different than what you may expect and possibly explain some of the "weird" results you would get with the test app.

Raymond

Merrick

Hi Ray,

I haven't had a chance to take a look at any of the code yet, but by "weird" results do you mean, for instance, that sin(z) would return a result other than [Exp(i z) - Exp(-i z)]/(2 i) ?

raymond

My apology Merrick for misinterpreting your comment about the ATAN. While lying in bed last night, I realized that you must have been referring to the conversion from cartesian to polar coordinates. I will also expand a bit more about it.

What I meant by "weird" results was that if you enter cartesian coordinates such that either the real or imaginary (or both) portions exceed the value of pi/2, the trig functions consider such input as radians. Applying the inverse ARC function to the result will certainly not return the initial input.

If you look at the source code, the equation used to compute sin(z), with z = a+bi, is:
sin(a+bi) = sin(a)*cosh(b) + i*cos(a)*sinh(b)

And the equation used to compute the sinh(b) portion is:
sinh(b) = [e^(b)-e^(-b)]/2

While the equation used to compute sinh(z) is:
sinh(z)  = 1/2 (e^(z) - e^(-z)), where e^z is computed quite differently than the e^b used to compute the sine.

Raymond