RT'ing in 3D

Started by vanjast, June 29, 2009, 12:05:12 PM


vanjast

While going through all the tuts and methods, I get the impression that a lot of the algorithms are geared for a 2D flat screen with depth perception.

To me it seems like they missed the 3D thing: for example, the bounding box becomes more computationally intensive than using a bounding sphere. A sphere needs no 3D-to-2D view frustum translation, and each screen pixel's (X,Y) increment angle is a function of the 3D view's spherical coordinates. To me this is so obvious, but after reading all those theses, etc., I don't see it mentioned (maybe I'm blind).  :green2

I'll carry on reading more stuff.
:8)


NightWare

Hmm, I don't follow this section closely, because I don't think ray tracing has a near future in terms of speed (and I'm not even talking about reflection/refraction here...). But the fact that I don't believe it will be useful soon doesn't mean it can't be interesting.

Quote from: vanjast on June 29, 2009, 12:05:12 PM
While going through all the tuts and methods, I get the impression that a lot of the algorithms are geared for a 2D flat screen with depth perception.
To me it seems like they missed the 3D thing
2D + 1D = 3D. You can use many techniques to represent it, but the basis is the same.

Quote from: vanjast on June 29, 2009, 12:05:12 PM
where for example the bounding box becomes more computationally intensive than using a bounding sphere. A sphere needs no 3D to 2D view frustrum translation
Yes, it is needed (unless you don't need to position/texture it). Your sphere approach is the one used in video games for lighting, because speed is essential there (and consistency isn't needed). Anyway, spheres are faster here because of pseudo-optimizations, but when reflection/refraction is added, you will see how it becomes slower...

Quote from: vanjast on June 29, 2009, 12:05:12 PM
and each screen pixel (X,Y) increment angle is a function of the 3D view spherical coordinates.
Also known as ray tracing... so? It's just a technique (different from the projections used in video games).

vanjast

Let me try to say it another way  :green2

1) We're RT'ing in 3D space.
2) We have to either rotate (transform) all objects, or, even faster, just rotate/transform our viewpoint.
3) All objects, each contained within a 'bounding sphere' (not a cube or square), are transformed around their individual origins. The bounding sphere improves the early-out algorithm.
4) The pixel (X,Y) increment values (hence the individual rays) are calculated from the view transformation.
5) Only from here do the individual primary (and secondary) ray calculations start, still using the bounding sphere through the different ray layers while recursing through 2) - 4).

To me it seems that the RT algorithms are OK for a transformed view (5), whereas above I'm trying to implement virtually the same thing, but with a non-transformed view. Hence the sphere boundary, which is the same in 3D or 2D and requires no transformations, which would help speed up the early-out thingy. By how much I'm not sure, but I don't think it's going to be slower than the box/cube method.
:wink

NightWare

Hmm, I'm not sure what you're trying to say, but:
- the sphere is for the calculations (a variant is possible, but it's just a variant... because you need a point of reference for your calcs),
- the cube exists to quickly eliminate what's outside the limits (depth, left, right, etc.).
That's why you must use a transformed view (2D + depth): not only to have a point of reference in your world, but also to quickly eliminate, for example, what's behind the point of view (all negative z). Otherwise, how are you going to quickly determine whether work has to be done or not? If you don't transform your point of view, how are you going to tell the valid negative z values from the wrong ones? You can't test every point/polygon; that's why it's done beforehand with the transformed view, to make things easy afterwards.

EDIT: OK, now I see what you mean. In video games we use the z (depth) buffer to avoid recalculating (most) things; I suppose that's what you're talking about. But ray tracing is just another rendering method, one where other points have an influence on your (currently calculated) point; it's not exactly the same as in video games. Combining the two methods could well give a good result, though, so yes, you can try...

vanjast

I'm gearing this up for Xmos  :U

vanjast

To add to the above... and really my 'sort of ambition':

I would like to see a working version of a flight simulator... my way.
By 'my way' I mean the 'real thing', the way it's supposed to be.
Xmos and RT are vehicles to achieve such objectives...
Xmos provides a very cheap way of achieving RT... etc.
:8) :bg ::)