feasibility of a new(?) kind of object in graphics engines

Cesiumlifejacket
Posts: 2
Joined: Tue Nov 01, 2011 4:01 am UTC

feasibility of a new(?) kind of object in graphics engines

Postby Cesiumlifejacket » Tue Jun 05, 2012 4:12 am UTC

Hey forum citizens, I had this idea for a new kind of 3D object to use in video game graphics engines, but I'm not sure of the best way to implement it, or if it would even be practical, and I thought you people might be able to help. You read xkcd; you're smart, right?

My idea was that some 3D objects in video games might be more efficiently defined by a mathematical function than by a set of polygons. A sphere and a cylinder are two simple examples. Such an object would be defined in-game by the location of its origin, the rotation of its coordinates relative to the map's coordinates, and a mathematical function describing the surface of the object relative to its own origin and coordinate system. The benefits of such an object are that it would take less memory to describe than its polygon-based counterpart, and that no matter how close the camera came, the object would appear perfectly smooth, with no ugly polygons poking out.
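
To make that concrete, here's a rough sketch of the kind of data structure I'm picturing (just toy C++ I made up for illustration, not real engine code; all the names are invented):

#include <functional>

struct Vec3 { float x, y, z; };

// A "function object": a position, an orientation, and an implicit surface
// f(p) = 0 described in the object's own coordinate system.
// Points with f(p) < 0 are inside the object, f(p) > 0 are outside.
struct FunctionObject {
    Vec3 origin;                         // location of the object's origin in the map
    float rotation[3][3];                // orientation relative to the map's coordinates
    std::function<float(Vec3)> surface;  // the surface function, in local coordinates
};

// A sphere of radius r: x^2 + y^2 + z^2 - r^2 = 0
FunctionObject makeSphere(Vec3 origin, float r) {
    return { origin,
             {{1,0,0},{0,1,0},{0,0,1}},   // identity rotation
             [r](Vec3 p) { return p.x*p.x + p.y*p.y + p.z*p.z - r*r; } };
}

// An infinite cylinder of radius r around the local z axis: x^2 + y^2 - r^2 = 0
FunctionObject makeCylinder(Vec3 origin, float r) {
    return { origin,
             {{1,0,0},{0,1,0},{0,0,1}},
             [r](Vec3 p) { return p.x*p.x + p.y*p.y - r*r; } };
}

The point is that a handful of floats plus a formula replaces the thousands of triangles you'd need to make a sphere look smooth up close.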

(If you're too lazy to read past this point, you can stop here and tell me what you think of this idea; any feedback is appreciated.)

From here I have one idea about how to actually implement this, but with no decent understanding of the video game rendering process, I don't know whether my idea is at all compatible with modern engines, or if it's even practical from a resources standpoint.

My idea goes something like this: for every pixel rendered, calculate where a ray sent out from the virtual camera first intersects an object. Then calculate the surface normal of the intersected object at the point where the ray hits it. Calculate what kind of light that point receives (how bright and what color) with more ray tracing, calculate what light the point reflects, and paint the pixel that color. There's obviously more to it, but I don't want to bore you with the details.
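
If it helps, here's roughly what the per-pixel step would look like (a toy C++ sketch that only knows about one sphere and one directional light; every name here is made up, and rayDir and lightDir are assumed to be unit vectors):

#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v) { float len = std::sqrt(dot(v, v)); return v * (1.0f / len); }

// Ray vs. sphere (center c, radius r): solve |o + t*d - c|^2 = r^2 for t.
// Returns the nearest positive t, or -1 if the ray misses. Assumes d is a unit vector.
float hitSphere(Vec3 o, Vec3 d, Vec3 c, float r) {
    Vec3 oc = o - c;
    float b = dot(oc, d);
    float disc = b*b - (dot(oc, oc) - r*r);
    if (disc < 0) return -1;
    float t = -b - std::sqrt(disc);
    return t > 0 ? t : -1;
}

// For one pixel: cast a ray, find the hit point, take the surface normal there,
// and shade with simple diffuse (Lambert) lighting from one light direction.
// lightDir is a unit vector pointing from the surface toward the light.
float shadePixel(Vec3 camPos, Vec3 rayDir, Vec3 center, float radius, Vec3 lightDir) {
    float t = hitSphere(camPos, rayDir, center, radius);
    if (t < 0) return 0.0f;                    // ray missed: background
    Vec3 hit = camPos + rayDir * t;            // where the ray first intersects the sphere
    Vec3 normal = normalize(hit - center);     // surface normal at the hit point
    float brightness = dot(normal, lightDir);  // how much light the point receives
    return brightness > 0 ? brightness : 0.0f;
}

For a sphere the normal is just the hit point minus the center; for a general f(p) = 0 surface you'd take the gradient of f at the hit point, which is the part I'm least sure how to do cheaply.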

So what do you think, xkcd? Could a computer perform such calculations about 2 million times (roughly one ray per pixel at 1920×1080) every 30th of a second? Is this a plausible idea worth pursuing?

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI

Re: feasibility of a new(?) kind of object in graphics engines

Postby EvanED » Tue Jun 05, 2012 4:29 am UTC

There are automatic tessellation strategies already; I don't know much about them. (See, e.g., Nvidia's explanation.) This was actually one of the notable features new to DirectX 11.
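
As I understand it, the rough idea is that the GPU evaluates a parametric surface at a bunch of (u,v) values each frame and emits triangles. A CPU-side sketch of the same idea (just for illustration; this is not what DX11 shader code actually looks like):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Tessellate a parametric surface p(u,v) into an n-by-n grid of quads
// (two triangles each). 'Surface' is any callable taking (u, v) in [0,1] to a Vec3.
template <typename Surface>
void tessellate(Surface p, int n,
                std::vector<Vec3>& verts, std::vector<int>& indices) {
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= n; ++j)
            verts.push_back(p(float(i) / n, float(j) / n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            int a = i * (n + 1) + j, b = a + 1, c = a + n + 1, d = c + 1;
            indices.insert(indices.end(), {a, b, c, b, d, c});  // two triangles per cell
        }
}

// Example: a unit sphere as a parametric surface.
Vec3 spherePoint(float u, float v) {
    float theta = u * 6.2831853f, phi = v * 3.14159265f;
    return { std::sin(phi) * std::cos(theta),
             std::sin(phi) * std::sin(theta),
             std::cos(phi) };
}

The DX11 pipeline does roughly this on the GPU, with the subdivision amount chosen per patch (e.g. based on distance to the camera), so nearby objects get more triangles and stay smooth.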

It's not doing quite what you want, but it's sorta somewhat closeish.

PhoenixEnigma
Posts: 2303
Joined: Fri Sep 18, 2009 3:11 am UTC
Location: Sasquatchawan, Canada

Re: feasibility of a new(?) kind of object in graphics engines

Postby PhoenixEnigma » Tue Jun 05, 2012 4:45 am UTC

Something along those lines is already used in 3D modeling: take a look at NURBS and Bézier patches. Generally they're more accurate but slower to render, so you'll see them in modeling packages but not in games.
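
To give a flavour of what a Bézier patch is: a 4x4 grid of control points defines a curved surface, and you get a point on that surface by repeated linear interpolation (de Casteljau's algorithm). A rough C++ sketch, not anything out of a real package:

struct Vec3 { float x, y, z; };

Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Evaluate a cubic Bezier curve (4 control points) at t by de Casteljau's algorithm.
Vec3 bezierCurve(const Vec3 c[4], float t) {
    Vec3 a = lerp(c[0], c[1], t), b = lerp(c[1], c[2], t), d = lerp(c[2], c[3], t);
    Vec3 e = lerp(a, b, t), f = lerp(b, d, t);
    return lerp(e, f, t);
}

// Evaluate a bicubic Bezier patch (4x4 control points) at (u, v):
// reduce each row of control points to a point at u, then treat those
// four points as a curve and evaluate it at v.
Vec3 bezierPatch(const Vec3 cp[4][4], float u, float v) {
    Vec3 column[4];
    for (int i = 0; i < 4; ++i)
        column[i] = bezierCurve(cp[i], u);
    return bezierCurve(column, v);
}

Sixteen control points describe the whole curved patch, and it stays smooth no matter how close you zoom, which is the property you're after; the catch is that to render it in real time you still end up tessellating it into triangles at some point, which is roughly where the DX11 stuff EvanED mentioned comes in.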

It also sounds like you're thinking of ray-traced graphics, which is something people have been working on for a while. It's again slower but more accurate, and shows up in CAD/CAM/animation sorts of areas rather than in games (though I have seen a few stabs at that... we're probably still a few years from the point where high-end computers can handle it in real time).
"Optimism, pessimism, fuck that; we're going to make it happen. As God is my bloody witness, I'm hell-bent on making it work." -Elon Musk
Shivahn wrote:I am a motherfucking sorceror.

User avatar
Sc4Freak
Posts: 673
Joined: Thu Jul 12, 2007 4:50 am UTC
Location: Redmond, Washington

Re: feasibility of a new(?) kind of object in graphics engines

Postby Sc4Freak » Thu Jun 07, 2012 5:00 am UTC

Yes, what you describe is essentially raytracing. People have been working on it for decades, but compared to traditional rasterization it has never been able to produce equal image quality for the same processing power. If you have unlimited processing power and no real-time constraints, raytracing produces obviously superior graphics. But given a limited processing budget, I have yet to see a raytracing engine produce superior results compared to a traditional renderer.

