
OpenGL shader, GPU programming

Started by JFG, December 20, 2005, 06:05:26 AM


JFG

I am trying to decide between writing an unconventional 3D engine that depends solely on the CPU and highly optimized assembly code, using everything up through the SSE2 extensions, or one that offloads the bulk of the work to the GPU, accomplishing roughly the same effect with OpenGL shading language source code. With that in mind, I would appreciate it if those savvy with the OpenGL shading language could answer a few questions:

1) How is OpenGL shader source code compiled?

2) Can data about a whole scene be accessed by shader code, or is the data scope limited to just that data that applies to a particular primitive being processed?

3) Is there some way to circumvent default OpenGL implementation functions and replace them with custom ones coded for the GPU using a target GPU's own assembly language?

4) How might I be able to write shaders in a target GPU's assembly language?

daydreamer

Quote from: JFG on December 20, 2005, 06:05:26 AM
1) How is OpenGL shader source code compiled?

2) Can data about a whole scene be accessed by shader code, or is the data scope limited to just that data that applies to a particular primitive being processed?

3) Is there some way to circumvent default OpenGL implementation functions and replace them with custom ones coded for the GPU using a target GPU's own assembly language?

4) How might I be able to write shaders in a target GPU's assembly language?
1: Ask someone else.
2: In a shader you can emulate a 1D, 2D or 3D array with a texture: write the whole scene's data into the texture and read it back, or store anything you want there, for example precalculated LUTs to increase speed (see the sketch at the end of this post). And just as you have different opcodes across the 486, Pentium, PII, PIII, PIV, AMD and so on, you also have pixel shader models 1.1, 1.4, 2.0 and 3.0: the first ones are completely without the ability to make conditional jumps, which you can do in 3.0, and they are also restricted to 128 opcodes; I think newer ATI hardware supports unlimited length.
3: Research OpenGL extensions.
4: nvasm. I have it, and it gives me an error message even when I just try to compile a bare newbie test file.

I suggest you download and experiment with the Cg SDK. And unless you know your matrix and lighting math, you will be pretty lost trying to accomplish anything useful.
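For answer 2, here is a minimal, untested sketch of what such a lookup looks like on the GLSL side; the shader is just plain text that you later hand to the driver (the szFrag and sceneData names are placeholders I made up):

   .data
   ; GLSL fragment shader kept as an ordinary string: the sceneData
   ; texture acts as a 2D array / LUT that the CPU filled beforehand
   szFrag db "uniform sampler2D sceneData;",10
          db "void main() {",10
          db "    vec4 v = texture2D(sceneData, gl_TexCoord[0].st);",10
          db "    gl_FragColor = v;",10
          db "}",0

On the CPU side the scene data goes up with an ordinary glTexImage2D call, so the "array" is filled exactly like any texture.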


zooba

Quote from: JFG on December 20, 2005, 06:05:26 AM
3) Is there some way to circumvent default OpenGL implementation functions and replace them with custom ones coded for the GPU using a target GPU's own assembly language?

AFAIK, most GPU vendors do this anyway. For example, nVidia chips use 'nvoglnt.dll' for their OpenGL implementation, reached by redirection from 'opengl32.dll'.
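A hedged sketch of how a program reaches those vendor-supplied entry points through the extension mechanism (glCompileShaderARB is a real GL_ARB_shader_objects function; the szFn and pCompile labels are just placeholders):

   .data
   szFn       db "glCompileShaderARB",0
   .data?
   pCompile   dd ?
   .code
   ; the vendor driver exports extension entry points;
   ; you fetch them by name at run time
   invoke   wglGetProcAddress,ADDR szFn
   mov      pCompile,eax
   .if eax == NULL
      ; this driver does not expose the function
   .endif
   ; later, call it like any stdcall function pointer:
   ; push shaderHandle
   ; call pCompile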

Kashif

How can I make part of my texture transparent in OpenGL?

Siekmanski

To make part of a texture transparent in OpenGL:

   invoke   glBlendFunc,GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA   ; weight colors by source alpha
   invoke   glEnable,GL_BLEND   ; enable blending

If you want an example of how to do this, let me know, or have a look at:

http://www.masm32.com/board/index.php?topic=2522.0
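For context, a rough sketch of the surrounding calls, assuming the texture was uploaded as RGBA so there is an alpha channel to blend with (hTexture, pPixels and the 256x256 size are placeholders, not from the linked example):

   invoke   glBindTexture,GL_TEXTURE_2D,hTexture
   invoke   glTexImage2D,GL_TEXTURE_2D,0,GL_RGBA,256,256,0,GL_RGBA,GL_UNSIGNED_BYTE,pPixels   ; pixels carry alpha
   invoke   glBlendFunc,GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA
   invoke   glEnable,GL_BLEND
   ; draw; texels whose alpha is 0 come out fully transparent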

Farabi

A CPU-based engine is very slow, though I don't know how it fares if you use only integer div and mul.
A member here created a CPU-based game engine, and when drawing without textures it only reaches 47 FPS at 50% CPU usage; for comparison, the GPU can draw the same scene at 32 FPS with 0% CPU usage, and I bet it could draw it without textures at up to 120 FPS, still with 0% CPU usage. Texture rotation alone consumes a lot of CPU if you do it on the FPU, and in a 3D world you will also need to resize the textures.

A modern GPU can do billions of floating-point operations without breaking a sweat.
Those who have knowledge of the universe can control the world with a microprocessor.
http://www.wix.com/farabio/firstpage

"Etos siperi elegi"

Farabi

Anyway, speaking of nVidia: I was once doing raytracing on a friend's computer, and it blew out the fan. But maybe that was just my bad luck.
nVidia is still the best, 30 times faster than my current graphics card.