GPU vs. CPU - Hardware vs. Software Renderer - Your thoughts!

Started by SubEvil, November 11, 2005, 09:09:18 AM

Previous topic - Next topic

SubEvil

Hi Guys,

I would like to get your ideas, thoughts, opinions and, most importantly... experiences (if possible). Primarily for education, I'm interested in writing my own software-based renderer (and raytracer), using the SGI and Mesa implementations of OpenGL as a starting point (I already have a fast 3D math library), with more advanced and specific rotate and translate functions etc. Taking into account all the recent advances in architecture, how fast do you think a software renderer would be compared to a hardware renderer?

Two reasons I'm considering this project:

1) Since the GPU has only 32-bit floating-point registers, I could really use 64-bit floats, especially in the Z-buffer (for my personal project).
2) OpenGL's selection mode is rather "primitive". One issue I have with it is that when you want to "select" something, you have to switch from "rendering" to "selection" mode and re-render your entire scene. I would like the ability to store an (object) pointer with every pixel rendered in the frustum.
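The item-buffer idea in point 2, combined with the 64-bit Z-buffer from point 1, could be sketched on the CPU like this (a minimal illustrative sketch; all names are made up, not from any real library):

```c
#include <assert.h>
#include <stddef.h>

#define W 4
#define H 4

/* A framebuffer that pairs a 64-bit depth value with an object pointer
   per pixel, so picking is a simple array lookup instead of a
   re-render in a separate selection mode. */
typedef struct {
    double depth[W * H];       /* 64-bit z-buffer */
    const void *object[W * H]; /* which object produced this pixel */
} IdBuffer;

void idbuf_clear(IdBuffer *b) {
    for (int i = 0; i < W * H; ++i) {
        b->depth[i] = 1.0; /* far plane in normalized depth */
        b->object[i] = NULL;
    }
}

/* Standard depth test, but we also record the owning object. */
void idbuf_plot(IdBuffer *b, int x, int y, double z, const void *obj) {
    int i = y * W + x;
    if (z < b->depth[i]) {
        b->depth[i] = z;
        b->object[i] = obj;
    }
}

const void *idbuf_pick(const IdBuffer *b, int x, int y) {
    return b->object[y * W + x];
}
```

Picking under the mouse cursor then reduces to one `idbuf_pick(buf, mx, my)` call after the normal render pass.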

So, how close or far apart are 3D math functions, texture mapping, lighting, shadows, fog etc. between the GPU and the CPU? Do we have exact figures, e.g. "the GPU is 20% faster at floating-point multiplications"?

hitchhikr

Go for the GPU if you want speed and don't care too much about compatibility with older cards. The newest pixel shaders (3.0) are very suitable for that kind of stuff (amongst other things), as they now have loop statements integrated; AFAIK they're only available on GeForce 6600/6800 based graphics cards so far.

Another possibility is using the CPU with sub-sampling (this technique can also be adapted to the GPU).
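A minimal sketch of what sub-sampling might look like on the CPU: shade only every second pixel of a scanline and interpolate the rest. The `shade` function here is just a stand-in for an expensive per-pixel routine (raytracing, complex lighting, etc.):

```c
#include <assert.h>

#define W 8

/* Stand-in for an expensive per-pixel shading routine. */
static int shade(int x) { return x * 10; }

/* Shade every second pixel, then fill the gaps by averaging the
   two shaded neighbors (or copying the left one at the row's end). */
void render_subsampled(int *row, int w) {
    for (int x = 0; x < w; x += 2)
        row[x] = shade(x);
    for (int x = 1; x < w; x += 2)
        row[x] = (x + 1 < w) ? (row[x - 1] + row[x + 1]) / 2
                             : row[x - 1];
}
```

This halves the number of expensive shading calls per row at the cost of blurring high-frequency detail, which is likely why sub-sampled demos can look soft or smeared.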

Or a mix of the two; the GPU can't be beaten when it comes to filling and working on big arrays anyway, as that's one of its primary purposes.

Also, newer cards have floating-point textures, which can be used to transfer floating-point data computed on the GPU back into main memory (RGBA textures can be used to transfer integers).
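The integer-transfer trick could look roughly like this on the CPU side: the raw bits of a 32-bit float are split across the four 8-bit channels of an RGBA texel and reassembled after readback (a sketch only; the big-endian channel order is my assumption, not something the post specifies):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Split a float's raw 32 bits across four 8-bit RGBA channels. */
void float_to_rgba(float f, uint8_t rgba[4]) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* type-pun safely via memcpy */
    rgba[0] = (uint8_t)(bits >> 24);
    rgba[1] = (uint8_t)(bits >> 16);
    rgba[2] = (uint8_t)(bits >> 8);
    rgba[3] = (uint8_t)bits;
}

/* Reassemble the float from the four channels read back from the GPU. */
float rgba_to_float(const uint8_t rgba[4]) {
    uint32_t bits = ((uint32_t)rgba[0] << 24) | ((uint32_t)rgba[1] << 16)
                  | ((uint32_t)rgba[2] << 8)  |  (uint32_t)rgba[3];
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

The round trip is exact because no channel is ever filtered or clamped as a color; the bytes are treated as opaque bits end to end.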

Realtime raytracing on GPUs has already been done (even with the new PS 3.0); search the web.

u

"Do we have exact figures, e.g. the GPU is 20% faster in floating point" - that really depends on which GPU you compare against which CPU.
Although I'm still not experienced enough, I think you should write a software renderer (CPU) first, then use fixed-pipeline rendering on the GPU, and finally pixel shaders.

Just a few NBs:
Good (optimized) software rasterizers are hell to write.
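To give a taste of what the core of a software rasterizer looks like before any optimization, here is a minimal half-space triangle fill (an unoptimized sketch; a real rasterizer adds sub-pixel precision, incremental edge stepping, perspective-correct interpolation, and much more):

```c
#include <assert.h>

/* Edge function: positive when (px,py) lies on the inside of the
   directed edge (ax,ay)->(bx,by) for a counter-clockwise triangle. */
static int edge(int ax, int ay, int bx, int by, int px, int py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* Fill a counter-clockwise triangle into a w*h coverage mask by
   brute-force testing every pixel against all three edges. */
void rasterize(unsigned char *mask, int w, int h,
               int x0, int y0, int x1, int y1, int x2, int y2) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int inside = edge(x0, y0, x1, y1, x, y) >= 0
                      && edge(x1, y1, x2, y2, x, y) >= 0
                      && edge(x2, y2, x0, y0, x, y) >= 0;
            mask[y * w + x] = (unsigned char)inside;
        }
}
```

Even this toy version hides a design decision (the `>= 0` test double-counts pixels on shared edges; production rasterizers use a fill convention to avoid that), which hints at why the optimized versions are "hell to write".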

Btw, here's what HL2/CSS gamers have as hardware: http://www.steampowered.com/status/survey.html

Subsampling - I saw that Heaven7 demo. Either the textures were bad or it was subsampling's fault, but the demo looked horrible.

I used to favor raytracing (and expected a raytracing video card to appear), with shadows, reflections, focus and other effects as the primary arguments,
but rasterizers are already much better for realtime:
Find the only 4 things that show this image isn't taken from real-life: :)
http://img.gamespot.com/gamespot/images/2005/276/928101_20051003_screen005.jpg (an xbox360 game)


Anyway, it's good to try all techniques - you want to be educated, after all :)
Please use a smaller graphic in your signature.