Graphics Software

GeForce3: Real-time RenderMan?

b0ris writes "This review of the NVIDIA GeForce3 at The Tech Report does a nice job explaining how the GF3 chip can create advanced graphics effects in real time. The author raises the prospect of having real-time Final Fantasy or Shrek-style animation on the desktop in a consumer graphics card. The examples from the GF3 he uses to back it up are almost convincing, even if it isn't quite there yet. Will render farms go the way of the dodo?" Well, I'm all for dreaming, but it's gonna be a few years before the GeForce8 can do RenderMan in real time; when we get there, though, Final Fantasy 21 is gonna rule.
  • by Anonymous Coward
    Yes, they had a realtime Luxo demo, but it was nowhere near the quality of the original. For one, the resolution is much lower. Pixar renders their frames at, I believe, at least 3000x3000 resolution to match that of 35mm film, and at a color depth of something like 16 bits per channel. The GeForce demo was likely at maybe 1024x768 and 8 bits per channel. Also, the GeForce demo was probably programmed using a lower-quality shading model because of limitations of the GPU.
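
    For a rough sense of the gap, here's a quick back-of-the-envelope comparison in Python (the figures are the guesses above, not confirmed Pixar specs):

    film_pixels = 3000 * 3000           # assumed film-resolution render
    demo_pixels = 1024 * 768            # assumed GeForce demo resolution
    film_bytes = film_pixels * 3 * 2    # 3 channels at 16 bits (2 bytes) each
    demo_bytes = demo_pixels * 3 * 1    # 3 channels at 8 bits (1 byte) each
    print(round(film_bytes / demo_bytes, 1))   # ~22.9x more raw data per frame

    That is only pixel data; it says nothing about the shading-model gap.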
  • You forgot to compound the Moore's Law computation (forgetting for a second that its application is questionable).

    2->4->8->16->32->64->128

    between 6 and 7 years until Pixar-quality rendering (by your transistor requirements) can happen in real time on a commodity card. Factor in the fact that most users are going to be rendering at computer screen resolutions, and 5 years is still a safe bet.

    No, it's not time to unscrew your case, but it IS time for game and software companies to pay attention.
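
    For what it's worth, the compounding is easy to sketch (assuming the 128x transistor gap above; the doubling period is the usual Moore's Law hand-wave, not a measured figure):

    import math

    gap_factor = 128                       # assumed shortfall from the post above
    doublings = math.log2(gap_factor)      # 7 doublings needed

    for months_per_doubling in (12, 18):   # ~12 months matches the 6-7 year figure; 18 is the textbook rate
        years = doublings * months_per_doubling / 12
        print(months_per_doubling, round(years, 1))   # 12 -> 7.0 years, 18 -> 10.5 years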
  • by Anonymous Coward
    "Most of what we perceive to be "unrealistic" lies in the modelling of animation and movement -- physical dynamics and interactions such as collisions, deformations, effectsm natural pheonomena like wind, human locomotion, etc. -- it is sitll raw CPU speed here..." Actually, its not raw CPU power, anymore. Thats what a vertex shader is. A "shader" is a device that can give every pixel its own behaviour. Grass is a great example. Right now if a developer wanted to add grass, they would have to map every situation they wanted to be in the game. For instance, if they wanted the grass to be blowing in the wind, they would have to make an animation for that and add specific times for when that should happen. Let's say that a developer wanted to make the grass bend if stepped on. The developer would have to make an animation of the grass bedning down. Then they would have some command "IF Character on grass THEN play animation.avi". The problem with this is that if I step on the grass from the right side, and the animation was made of the grass bending to the right, then it would look stupid. If I step on a patch of grass, in real life the grass would bend the opposite way (the grass would not bend into me). A developer could add another animation, one for the grass bending one way and one for the grass bending another. But this would be time consuming and would still only work for two directions - what If I walked on the grass from the north or south side? Shaders fix this problem. They give the grass its own behaviour. The GPU's pixel shaders will determine what happened to the grass, and using the commands written by the developer (since they are programmable), will make the grass bend at the exact right angle. Pixels will no longer do what they have been assigned to do before hand by developers. They will have their own behaviour. They will react uniquely to every situation. This includes "deformations, the effects natural pheonomena like wind, human locomotion, etc." That is exactly what the vertex shaders are for, and it takes a load off the CPU.
  • by Anonymous Coward on Tuesday June 26, 2001 @12:51PM (#126975)
    > Will render farms go the way of the dodo?

    When a video card has the power of a render farm, then people will simply make a render farm using those cards.

    This will always be the case, until the rendering abilities of a card become indistinguishable from reality, and can render twice as fast.



    All your race are belong to Gus.
  • Eh... so what if it's off topic. It was related to the post that I was replying to. The poster had an email address at Big Idea Productions, hence the VeggieTales. It's amazing how seriously people take Slashdot. Relax.

  • I believe that Luxo Jr. is about 20 years old, so Duff is still right. :)

  • [disclaimer against redundancy disclaimer]

    It takes a LOT more than polygon-pushing power to make a realistic image. The Geforce 3 (and the OpenGL or D3D which drives it) cannot do motion blur (REAL distributed motion blur, not accumulation), accurate reflection or refraction, shaders of arbitrary complexity, or any scene management and geometry generation operations.
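
    To make the accumulation-vs-distributed distinction concrete, here's a rough sketch (plain Python; render() and render_pixel() are hypothetical stand-ins). Accumulation averages a few whole frames at fixed shutter times, while distributed motion blur jitters the sample times per pixel across the shutter interval:

    import random

    def accumulation_blur(render, t0, t1, passes=4):
        # Average a handful of full frames taken at evenly spaced shutter times.
        frames = [render(t0 + (t1 - t0) * i / (passes - 1)) for i in range(passes)]
        return [sum(px) / passes for px in zip(*frames)]

    def distributed_blur(render_pixel, width, t0, t1, samples=8):
        # Each pixel samples the scene at its own randomly jittered shutter times.
        image = []
        for x in range(width):
            times = [random.uniform(t0, t1) for _ in range(samples)]
            image.append(sum(render_pixel(x, t) for t in times) / samples)
        return image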

  • ATI gives out source code for its Radeon drivers. NVidia does not. NVidia's chips are 3 generations ahead of ATI's.

    Now, 5 years after the hoopla, one screenshot of a 320x240 chameleon that looks like a movie, and 500 layoffs later, let's all say it in unison: "who gives a fuck if NVidia doesn't release any source code!"

  • You are correct... In my haste of work and other things... I didn't really go through what I was meaning to say.
    Most of what people think of for CG work is raytraced... this isn't always the case, nor is it what I was really going to talk about.
    Real time raytrace isn't going to happen for a long time...
    But... for animation, under Maya, it takes longer to go through all the math with the deformers and the skeleton solvers and all that than it does to display. So sometimes you can only get 1 fps.
    Games use a much different approach, where everything is approximated to some extent.
    It's true that they have IK solvers and are more realistic than in the past, but the dependency is not like what you would have in some animated scene files. What happens for secondary effects, such as hair moving, is all based upon the character's movements. This is not really foreseen... and only a few animators would want that level of control over a character under most situations.

    I also agree with you on the direction for 3D cards and movie CG. It will get better and better. And render times will still be the same, because artists always want to add more realism or quality. As speed goes up, so does complexity, but render times stay about the same. That's what I call job security.
  • I agree.
    I was typing faster than thinking... not rare for me.
    Everything you state is correct.
    What I was intending to say is that what the graphics card is doing is different than what renderfarm rendering is doing. I don't consider it really rendering unless it's from some render package such as Maya, PRMan, Mental Ray or the like.
    I have had to do some raytraced rendering before. I know the difference. My fingers must not have.
    Thanks for pointing this out so others are not as confused as my fingers.
  • Not if you were using depthmap shadows and decent textures with high enough anti-aliasing.
    For the image quality of a render, even from Maya, you will have to wait a while... or you are doing some really simple stuff.
    The lighting that the GF3 does does not compare to what Maya renders. There are a couple levels of complexity in difference.
  • Why would somebody do that? The implication is that the card is doing render quality in real time. If this is the case, no farm is needed. Just run the stuff through and hit record, or transfer it in whatever way you want.
    Some software does use the video hardware to render. It grabs the frame off the buffer and saves it to disk. Slower than real time, faster than a full-blown software render.
  • People always mention this.
    It would take a lot of development time and cooperation from the software companies to support low-bandwidth render systems.
    The scene files that we are working with are 10s of MB, and they reference other files of similar sizes. You may pull across 100 MB of proprietary scene files (which means encrypted to the users) and then the system determines what to render. It may take 30+ minutes a frame, while either creating a scad of misc files or eating up memory (such as shadow map files, motion blur files...), and then assemble all of them to make a 2-3 MB image to upload.
    The average user's home machine would only be a waste to studios. The bandwidth would kill us. Legal would kill us for letting proprietary data out. Your system would be smoked while rendering... or it would take a long, long time.
    All the transfer time of the scene files and the textures would take longer than the render.
    We keep a nice fat backbone to the renderfarm for a reason. No sense in having 200+ procs waiting on data.
    We do use software that allows us to use the users desktop, but this is over a LAN and not a WAN... and that makes a big difference.
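
    Some rough numbers on why the transfer dominates (the link speeds are illustrative assumptions; the 100 MB and 30-minute figures are from above):

    scene_mb = 100        # proprietary scene + referenced files (from above)
    render_min = 30       # per-frame render time quoted above
    image_mb = 2.5        # finished frame to upload

    for name, mbit_per_s in (("56k modem", 0.056), ("home DSL", 1.0), ("farm backbone", 100.0)):
        transfer_min = (scene_mb + image_mb) * 8 / mbit_per_s / 60
        print(name, round(transfer_min, 1), "min transfer vs", render_min, "min render")
    # 56k modem: ~244 min, home DSL: ~14 min, farm backbone: well under a minute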
  • by tolldog ( 1571 ) on Tuesday June 26, 2001 @12:56PM (#126985) Homepage Journal
    Most of what we see with "realistic rendering" on desktop boxes is OpenGL / Direct3D based. This isn't really rendering... well, it's not raytraced.
    It's true that they are getting close and blurring the line between rendering and desktop 3D, but for all practical purposes there is a difference.
    I just hope rendering never goes away... I need this job!
    Another difference is that game movement is not nearly as complex as cinematic animation. Most game movement is pre-defined movements triggered by something. A lot of secondary animation, and even some primary animation, is done by a complicated set of equations. It all depends on the package, but sometimes with these solvers on you might get 1 fps when viewing the animation. Until issues like that are fixed, you will not be able to generate stuff like that on the fly.

  • by Rendus ( 2430 )
    -sigh-

    The eye can detect above 120, depending on the person. My threshold is around 80 or so; anything above that adds little to the gameplay, other than making the framerate less likely to dip below what I notice.

    What makes 24-30fps acceptable in film and TV is motion blurring. Search the archives for the arguments, as I don't feel like getting into it again.
  • "yes from 3.x to 12.x in a year"

    That's called marketing

    Vermifax

    ...he specifically talks about how Nvidia will repeat this claim with every generation.

    Vermifax
  • by Vermifax ( 3687 ) on Tuesday June 26, 2001 @12:56PM (#126989)
    What Pixar thinks of NVidia. The quote from NVIDIA's website:

    Achieving Pixar-level animation in real-time has been an industry dream for years. With twice the performance of the GeForce 256 and per-pixel shading technology, the GeForce2 GTS is a major step toward achieving that goal.

    -Jen-Hsun Huang, President of NVIDIA Corp.

    Here [google.com] is what Tom Duff from Pixar thinks about that:

    These guys just have no idea what goes into `Pixar-level animation.' (That's not quite fair, their engineers do, they come and visit all the time. But their managers and marketing monkeys haven't a clue, or possibly just think that you don't.)

    `Pixar-level animation' runs about 8 hundred thousand times slower than real-time on our renderfarm cpus. (I'm guessing. There's about 1000 cpus in the renderfarm and I guess we could produce all the frames in TS2 in about 50 days of renderfarm time. That comes to 1.2 million cpu hours for a 1.5 hour movie. That lags real time by a factor of 800,000.)

    Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What's the chance that we wouldn't put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn't NVIDIA tried to give us a carton of these things? -- think of the publicity milage they could get out of it!

    Don't forget that the scene descriptions of TS2 frames average between 500MB and 1GB. The data rate required to read the data in real time is at least 96Gb/sec. Think your AGP port can do that? Think again. 96 Gb/sec means that if they clock data in at 250 MHz, they need a bus 384 bits wide [this is typo. 384 _bytes_ wide!]. NBL!

    At Moore's Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do the frames we do today in real time. And 20 years from now, Pixar won't be even remotely interested in TS2-level images, and I'll be retired, sitting on the front porch and picking my banjo, laughing at the same press release, recycled by NVIDIA's heirs and assigns.
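
    Duff's arithmetic is easy to reproduce (using his own estimates, of course, not measured figures):

    cpus = 1000                        # renderfarm size (Duff's estimate)
    days = 50                          # farm time to render all of TS2
    movie_hours = 1.5

    cpu_hours = cpus * days * 24       # 1,200,000 cpu-hours
    print(cpu_hours / movie_hours)     # 800,000x slower than real time

    frame_mb = 500                     # scene description per frame, low end
    print(frame_mb * 24 * 8 / 1000)    # 96 Gb/sec, matching the quote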



    Vermifax

  • You're absolutely right, I should have thought about it a little more before blabbing that I don't notice interference.

    But anyway, you don't really want a QED renderer. Imagine rendering a soap bubble. In order to get an accurate shifting-rainbow effect, you'd have to model extremely subtle air currents and the thickness of the film in micrometers. It's far more efficient to take your usual surface, apply a time-varying texture to it, and tweak it until it looks accurately like a soap film.

    The computation required to accurately render QED is absurd. Instead it would be better to have a class of objects in your renderer (diffractors, thin films, etc) that can simulate diffraction. But don't expect those to behave nicely when they interact with non-diffracting objects, the computing required would just be too huge. If you could do that, it's time to start coding The Matrix.
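
    A crude illustration of the "fake it with a time-varying texture" idea (pure invention for this post, no real optics anywhere): cycle a hue from film thickness, view angle, and time instead of simulating interference:

    import colorsys

    def fake_soap_color(thickness_um, view_angle_deg, t):
        # Cheap soap-film look: map thickness/angle/time to a shifting rainbow hue.
        hue = (thickness_um * 2.0 + view_angle_deg / 90.0 + t * 0.1) % 1.0
        return colorsys.hsv_to_rgb(hue, 0.4, 1.0)   # pale, shifting pastel

    print(fake_soap_color(0.3, 45.0, 0.0))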

    --Bob

  • Bwwhaahahahahahaha!!!

    You're kidding, right?

    How often, in everyday life, do you notice diffraction and interference? I never do. Consider also that the objects which cause diffraction are the same order of magnitude in size as the wavelength of light (i.e. 10^-7 m), which, BTW, is far smaller than you can see. Now imagine you're going to keep track of polygons/voxels 10^-7 m in size, for a room that's 10m by 10m by 3m. That's 10*10*3/(10^-7)^3 =~ 3*10^23 voxels to keep track of. Forget it. There are far better ways to simulate diffraction, if you really wanted it.
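
    The voxel count is easy to check:

    room_volume = 10 * 10 * 3      # m^3
    voxel_size = 1e-7              # m, on the order of a wavelength of light
    print(f"{room_volume / voxel_size**3:.1e}")    # 3.0e+23 -- hopeless to track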

    What I have seen, that's really cool, is Relativistic [fourmilab.ch] ray [man.ac.uk] tracing [mu.oz.au]. Do that, Nsuck^H^H^H^Hvidia!

    --Bob

  • 1) They already have enough power to render the movies, that's why they're already out in theaters.

    2) The people who MAKE movies are a different group of people than those who SHOW movies.

    3) Seti@home has to do a ton of redundant work, because people turn seti@home off in the middle of a block and never turn it on again, kids download a block and then try to upload spoofed workfiles to crank their work-completed stats, and other garbage that the studios just wouldn't tolerate well.

    Consider this: Would the Seti project buy a server farm to perform this work if they could afford it? Or do you think they'd go through all of this crap, simply because they enjoy dealing with crap more than doing science?
  • Dude. Both of those quotes are referring to the GeForce 2 GTS. The Tom Duff post to comp.graphics.rendering.renderman is over a year old. The review being posted is about the GeForce 3.

    Yes, much of what Tom Duff said probably still holds true, but let's try to quote material that is actually referring to the subject at hand, mm'kay?



    --
  • by Osty ( 16825 )

    The reason a film looks different on a TV monitor vs. in the theater is that the film is translated from 24fps to 30fps (usually by frame doubling, but I don't know the process well enough to describe it -- search the web. I've seen a good review of progressive scan DVD players that gave a good background to the whole film->video conversion process). And as far as films go, many theaters actually use projectors with shutters that open two or three times per frame, rather than once. It doesn't change the fact that there are still only 24 frames in a second, but it does make it seem a bit smoother (making it seem as if there are really 48 or 72 frames, though those are doubled or tripled).
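
    The usual film-to-NTSC trick is "3:2 pulldown": alternate film frames are held for three and two interlaced fields, turning 24 frames into 60 fields (30 interlaced "frames") per second. A minimal sketch:

    def three_two_pulldown(film_frames):
        # Map 24 fps film frames onto 60 interlaced fields per second.
        fields = []
        for i, frame in enumerate(film_frames):
            fields.extend([frame] * (3 if i % 2 == 0 else 2))   # hold for 3, then 2 fields
        return fields

    fields = three_two_pulldown(list(range(24)))   # one second of film
    print(len(fields))                             # 60 fields -> 30 interlaced frames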

  • by Osty ( 16825 )

    I just want true 30fps in theaters! Maybe in an all digital theater... they do shoot some movies on HD video cameras, don't they? Still haven't been in one of them newfangled digital theaters. Seattle has crappy theaters.

    Don't I know it! There's no real cutting-edge theaters out here (IMAX doesn't count, as that's not really cutting-edge), though one would expect at least something, what with being a big technology center. Oh well. I guess I'll continue to be happy watching movies at the Bella Botega or Cineplex Odeon (two eastside theaters with stadium seating). As far as better filming processes, the best I've heard of is 32fps. Of course, I'm assuming these get translated down for play in "normal" theaters, because to do otherwise would require a massive layout of cash by theater owners. And we all know that they don't make much money off of anything but the concessions :).

  • That's not the point. The point is that we can do the oldest stuff that Pixar did (the lamps) in realtime now. Nobody thinks we can do Shrek in realtime now. So that's a 10-year lag from prerender to realtime. The question is, will we be able to do Shrek in realtime in another ten years?
  • The reason you think you've never seen photo-realistic CG is because when it's photo-realistic you can't tell that it's CG :) "Special effects" aren't the only computer graphics in movies nowadays; in a lot of movies the set that the movie is filmed on isn't actually what's seen in the movie - buildings are added (in a LOT of movies, many of which you wouldn't even think would have CG at all), people are added (for example, in the edited version of "Eyes Wide Shut", CG people were added to block out penetration and appease the US's puritan hangups. The problem was, the people were completely static and it was obvious that they weren't real), atmospheric effects are added, etc.

    I don't remember what movie it was, but I read an article on the making of some movie (set in the 1800's, one of those cheesy romantic dramas, released about 2 years ago) and they showed the original filmed scene where you could see scaffolding, cameras, and lights. Then they showed the final result, which was a fully convincing 1800's-era scene. Most of the buildings and background people were created in 3D Studio MAX and rendered with Mental Ray, and you just can't tell. It was truly impressive. The buildings moved perfectly with the camera angle, the CG people walked and moved perfectly (there were no closeups of them, which removed the hardest part - facial modelling. The human eye is very good at picking up inconsistencies, especially in objects we observe every day, such as human facial emotions. It was very impressive nonetheless).

    Of course, convincing facial modelling isn't impossible - look at this picture [raph.com] by Asier Hernaez Laviña, which was modelled and rendered in 3D Studio MAX. Not video, but it's an amazing technical achievement and is almost indistinguishable from a photograph.
    --
  • The BeOS GeForce driver doesn't even support hardware 3D yet...
    --
  • No direct OpenGL support (which is what I meant to type).
    --
  • Seeing some of the discussions about being able to render FF8 in real time using the 3D models is a bit silly. A movie is not like a game. These GeForce cards are designed more for doing 'unpredictable' realtime motion. A movie is all predetermined motion. For example a movie could be compressed into the 'visible surface meshes' and textures required to render the scenes. The model database would only be required to generate meshes that could later be rendered in realtime. Two different things IMHO
  • >>he is wrong about 99% of movies being
    >>rendered with renderman.

    Actually, I'm not. Other than Antz/Shrek from PDI (which has its own in-house renderer), I'd say 95% (conservative estimate) of the feature-quality CGI put out is done with PRMan.

    Mental Ray - yeah, it's been used for a few things (e.g. Flubber) when you absolutely HAVE to use raytracing. But other than that, no way. It can't swallow the type of scenes that PRMan can handle, the memory requirements are FAR too high, and its motion blur is weak at best in comparison. There have been plenty of post houses that have tried to use it, but once you start to throw large scenes at it that require quality anti-aliasing, motion blur, and HUGE geometry databases, it just falls apart. MR3.0 will tackle *some* of these problems, but the fact that it's a raytracer gives it some inherent limitations on what it can handle.

    And this isn't from somebody who hates Mental Ray -- I worked on it at Softimage for 3+ years. It's a good renderer, but it's no PRMan.

  • >>The shader language can raytrace.

    Correct. And PRMan returns BLACK whenever you call trace(). Therefore PRMan doesn't raytrace - ever.

    You can hook it up to another renderer (e.g. BMRT, RenderDotC) to handle the trace() calls, but that raises its own issues.

  • by furiousgeorge ( 30912 ) on Tuesday June 26, 2001 @01:11PM (#127003)
    >> This isn't realy rendering, well... its
    >>not raytraced.

    That makes zero sense.

    'Real' rendering is not the same as raytracing. Raytracing is one type of approach for simulating light propagation. It's not the be-all and end-all; it has its own serious problems.

    Go see a movie. 99% of the CGI that you will see in feature films is done with PRMan (Pixar's implementation of the RenderMan standard). PRMan doesn't raytrace. Ever.

    Raytracing has its place. So do a lot of other approaches. Open your mind.......

  • by Shotgun ( 30919 ) on Tuesday June 26, 2001 @12:50PM (#127004)
    everyone else to the punch.


    Sarcasm mode on:
    Will computers continue to get faster? Will we someday have lightbulbs in every room of the house? Will everyone who wants one be able to afford an automobile one day?

    Well, it'll be a few years before we're able to play color video games on our personal computers, but when we do the arcade games will really rock!!
    Sarcasm mode off:

    Really? What kind of senseless 'wow-computers-are-getting-faster' comment is this? The article actually makes sense and is interesting. It explains how computers are getting faster. It's the silly, so-called 'editorializing' that's stoopid.

  • No, I actually own a copy. I don't use it for more than playing, but I got a student edition so it wasn't too expensive. I saw the 6.5 upgrade but didn't realize what new features were in the package. I guess I'll be upgrading when I get some time... :)

  • by eric2hill ( 33085 ) <eric@ i j ack.net> on Tuesday June 26, 2001 @01:02PM (#127006) Homepage
    I have a copy of Lightwave 3D and it supports OpenGL for realtime previews of animations. It uses its own internal renderer (or screamernet) to do the final rendering, but loading a scene in layout and hitting the realtime preview looks pretty neat on a GF2. I'd really like to see if some of the additional textures and lighting capabilities will be supported in LW6 in the future...

  • Actually, you are both right. The textures are critical for the look, but many of the best effects use motion capture (look at FF) because non-mocap motion is too damn difficult to get right on things we "know".

    By "things we know" I mean human motion and things that we see every day and notice subconciously. Dinosaurs and spaceships are easy to fake since most people only see those in the movies -- and that is Hollywood motion, not reality anyway.

    If you notice, it isn't that uncommon to see a rendered STILL that is indistinguishable from reality. However, rendered MOTION is still a bitch.

    -chill
    --
    Charles E. Hill
  • Raytracing is only necessary for reflection and refraction -- which can be faked pretty damn well now.

    Other shading methods (radiosity for proper lighting) are used elsewhere.

    Real-time rendering CAN be achieved by using the proper methods and not just throwing the entire ball of wax at any scene.

    The idea is SMART rendering: Z-culling (so you only render pixels that affect the scene); polygon reduction (so you don't bother with a 10,000-poly item that is so far away in the frame it is a single pixel); variable mapping (using environment maps for reflections when appropriate, like fly-throughs where there are only "background" objects).

    Think Hollywood set -- build (and shoot) only what the camera will see, nothing else.
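
    One of those tricks (distance-based polygon reduction) is simple to sketch; the thresholds and mesh names here are made up for the example:

    def pick_level_of_detail(distance, lods=((10.0, "10,000-poly mesh"),
                                             (50.0, "1,000-poly mesh"),
                                             (200.0, "50-poly mesh"))):
        # Choose a cheaper model the farther the object is from the camera.
        for max_dist, mesh in lods:
            if distance <= max_dist:
                return mesh
        return "single billboard sprite"   # far enough away to be a few pixels

    print(pick_level_of_detail(5.0))       # 10,000-poly mesh
    print(pick_level_of_detail(500.0))     # single billboard sprite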


    --
    Charles E. Hill
  • I'm sure the GeForce8 will knock my socks off, but I'll be damned if I'll be able to find space for the cooling unit. A garage might be able to handle it...
  • Well, I'm all for dreaming, but its gonna be a few years before the GeForce8 can do renderman in real time, but when we get there, Final Fantasy 21 is gonna rule.

    My dream is that GeForce8 can make it unnecessary to discuss the quality of the game in the same sentence we discuss the quality of the graphics. For years now, we have seen one product after another try to top the preceding generation in terms of delivering beauty and graphic heat -- and yet it has been a long time since games have really done, IMHO, a great job of delivering fun.

    This is not to say that twitch isn't fun -- or that pretty isn't interesting. It's just to say that I'm not sure that more photorealism equates to great gaming.
  • Actually, the biggest problem with 60Hz is the fact that you get a low-frequency beat with the light output from 60Hz fluorescent or incandescent lights. I've noticed, having moved from Australia (50Hz lighting) to the USA (60Hz lighting), that a 60Hz refresh is a LOT worse here than in Australia.
  • by throx ( 42621 ) on Tuesday June 26, 2001 @04:16PM (#127012) Homepage
    This is completely false. Nyquist doesn't apply to a synchronous transfer.

    The electron gun scan rate is CONTROLLED by the video card, so the frame rate coming out of the card is constant. The RAMDAC accesses memory at a constant rate, determined entirely by this refresh rate. Frames are generated into the back buffer and flipped into the front buffer once the entire frame is generated.

    Nothing is actually "sampled" in the chain from frame generation to displaying the image (unless we want to talk about pixels rather than frames).

    This means you need 30fps generated by the card to get 30fps displayed on the screen - not the 120fps you are suggesting!!
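
    The point is easy to sketch: the card generates into a back buffer and flips on the refresh it is driving, so nothing along the way is "sampled". A toy simulation (the names here are invented, not any real API):

    import time

    REFRESH_HZ = 60                          # rate the card's RAMDAC scans out frames

    def render_loop(render_frame, seconds=1.0):
        # Render into a back buffer, then flip on the (simulated) vertical refresh.
        frame_period = 1.0 / REFRESH_HZ
        shown = 0
        deadline = time.monotonic()
        end = deadline + seconds
        while deadline < end:
            back_buffer = render_frame()     # generate the next frame off-screen
            deadline += frame_period
            time.sleep(max(0.0, deadline - time.monotonic()))
            front_buffer = back_buffer       # "flip": the whole frame is shown at once
            shown += 1
        return shown                         # ~60 frames shown for 60 generated

    print(render_loop(lambda: object()))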
  • by throx ( 42621 ) on Tuesday June 26, 2001 @01:28PM (#127013) Homepage
    You got the subject right and then proceeded to throw it all away in the body of your article.

    Rendering is not raytracing. Rendering (in terms of 3D) tends to be an all-encompassing term which covers the conversion of the model (ie bytes that describe a scene) into the image (ie bytes that depict a scene). Raytracing is simply one tool at the disposal of the rendering engine.

    Raytracing isn't even the best you can do, as it can't cater for atmospheric effects and diffusion of light through a scene.

    A commercial renderer (LW, Maya, 3DSMax) will use a lot of different methods to generate the final scene. Some objects will use simple renders like you'd find on a Voodoo 1 chip; others will use complex raytraced algorithms that can't be done in 3D hardware yet.

    The GF3 with its pixel and vertex shaders is just one step closer to what Pixar and ILM managed to achieve in the 80s. The problem is that Pixar and ILM are just getting better and better every day. There is no way a GF3 would ever be able to produce something like Shrek in real time, and by the time a GF* does, it will pale in comparison to what is coming out of the movie studios.

    Many games now have some rather complex IK effects to get more realistic motion. The Halo engine produces some fairly impressive physical effects - look at the way the jeep drives sometime; it is quite realistic. Games have a distinct disadvantage compared to cinematic animation at this point - in a movie you know exactly what is going to happen, and you can write special exceptions where needed, even altering the vertices by hand if necessary. In a game EVERYTHING has to be either anticipated or computed in real time. No wonder things are still a little forced.

    3D cards are getting there. Most can put out plenty of FPS when required (remember, cinema renders are only 24fps - below what most gamers consider even passable). It's really about getting the polygon count up and the parallel processing power up now. Given that most 3D cores have MORE processing power than the CPU in your machine, it's hardly surprising that the processing load is steadily moving from the CPU to the 3D card.

    Who knows where the future is going, but I'll assure you that 3D engines are just going to get better and better, and movies are certainly going to improve to the point where you won't be able to tell an animated film from the real thing.
  • I tend to agree with you: amazement at advancements in technology gets old after a while. For example, Slashdot just posted an article on IBM's super-fast transistor. So WHAT? Transistors have been constantly getting faster for decades. But graphics are different. There is a distinct milestone, an endpoint, which is the exact duplication of visual reality. I will continue to be awed by computer graphics advances until a CG human indistinguishable from a real person is walking around on my screen.

    LS
  • It seems to me a little ironic that someone attacking a misuse of the word "ironic" is really just paraphrasing some article he read after the release of Alanis Morrisette's song.
  • Disclaimer: I work for DotC.

    RenderDotC doesn't raytrace either. You might be thinking of Mirage-3D, the author of which, the great Timm Dapper, also works for DotC.

  • The reason that 24 FPS is acceptable in a movie while not acceptable in a game is because in a game, that 24 FPS is an average. You sometimes get better, you sometimes get worse. In a movie, 24 FPS means that you get exactly 24 frames every single second of the entire movie. The reason that 24 FPS is sucky in a game is because it means that when the animation gets complex, you get substantially less than 24 frames in a second.

    The eye can't even detect anything above 30 FPS or so.

  • You are detecting intermittent dips in the framerate.

    Cells in the retina have a recovery time of ~30 milliseconds. Do the math.

    (If "80 FPS" seems choppy to you, it is because this is an average. The framerate only has to dip below 30 or so for a hundred milliseconds or so to be detectable.)

    Here's a link [utexas.edu] from google

  • Okay... just one small thing... rain on your wedding day is ironic... considering it's supposed to be good luck...
  • > But I think we're still a ways off from being able to do away with the repetitive textures that dominate roads or brick walls in videogames.

    Exactly. Why? Because we use textures as a form of compression. Computers just don't have enough memory and bandwidth to allocate a unique texture for EVERY surface. (Light maps push this boundary, though, as can be noted in Quake with its light map cache.)

    The reason textures even "work" to begin with is that from a distance, a surface looks pretty much "flat". But at the microscopic level (atoms) the "surface" is extremely hilly. In the real world, *ALL* those micro details ADD UP when that object is lit. And that is why the textures in any game stand out like a sore thumb. It's not the "textures" themselves that are the problem. It's the surface roughness and lighting that we are CRUDELY approximating (for real-time rendering). Bringing this back on topic, that's why off-line rendering farms can look SO much better and more realistic. They have the time to do all the expensive math calcs needed for realistic lighting (i.e. ray-tracing).

    > It didn't appear flat, but rather bumpy
    That's why bump-mapping is so badly needed in today's games. It fakes the atomic "roughness" of a surface.

    I'll dig up a link to that Quake 1 client (with source) that added bump-mapping later today. The cool part was that you could adjust the level of bumpiness. A textured brick with a little bit of bump-mapping looked WAY better and started to look like a real brick (with indents.)
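
    Bump mapping in a nutshell: perturb the surface normal by the height map's local slope before lighting, so a flat texture shades as if it were rough. A rough sketch (made up for illustration, not taken from any engine):

    def bumped_diffuse(height, x, y, light_dir, bump_scale=1.0):
        # Perturb a flat surface normal by the height map's gradient, then do
        # simple Lambertian (N.L) shading. 'height' is a 2D list of floats in [0, 1].
        dhdx = (height[y][x + 1] - height[y][x - 1]) * 0.5 * bump_scale
        dhdy = (height[y + 1][x] - height[y - 1][x]) * 0.5 * bump_scale
        nx, ny, nz = -dhdx, -dhdy, 1.0                   # perturbed normal (unnormalised)
        length = (nx * nx + ny * ny + nz * nz) ** 0.5
        lx, ly, lz = light_dir                           # assumed to be unit length
        return max(0.0, (nx * lx + ny * ly + nz * lz) / length)

    heights = [[0.0, 0.25, 0.5],
               [0.0, 0.25, 0.5],
               [0.0, 0.25, 0.5]]                         # a gentle slope rising to the right
    print(round(bumped_diffuse(heights, 1, 1, (0.0, 0.0, 1.0)), 3))   # ~0.97: the slope dims it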
  • > In a game EVERYTHING has to be either anticipated, or computed in real time.

    Frame-based animation is an example of the former.

    Skeletal-based animation (and motion blending, a la Granny) is an example of the latter.
  • >> . Pixar renders their frames at a color depth of something like 16 bits per channel.
    > The fill rate on the Geforce series is reasonably high. The color depth is 32 bits

    For the GeForce, 32 bits per pixel is only 8 bits per channel, and can leave bad banding and Mach-band artifacts with overlays.

    16 bits per channel is 64 bits per pixel (ARGB). Unfortunately it will be a while before consumer cards even start thinking of supporting it.
  • by epeus ( 84683 ) on Tuesday June 26, 2001 @01:20PM (#127023) Homepage Journal
    The GeForce 3 demo at MacWorld was Luxo junior rendered in real time, so Pixar quality animation is possible, for a sufficiently early value of Pixar...
  • I personally loved the older ones, although the one considered the best of the series never got an American release: the real FF III, the Japanese one. Play that one and you'll understand.

    Yeah, one of my housemates brought a copy of FF II back from his house. I went nuts on it for like 6 hours, saved the game, quit - next day, gone. Play the start again for maybe an hour, save, quit, gone again. Game is fried. Too bad, the first few hours were a lot of fun... :)

    (I don't see what the problem is with DBZ-style hair, but then again I do spend about 2 hours a day watching it).

  • by Ryu2 ( 89645 ) on Tuesday June 26, 2001 @12:42PM (#127025) Homepage Journal
    In realistic graphical simulations, rendering is only a small part of the equation. Most of what we perceive to be "unrealistic" lies in the modelling of animation and movement -- physical dynamics and interactions such as collisions, deformations, the effects of natural phenomena like wind, human locomotion, etc. We never think of these things consciously, but perceptually, if anything is out of place, we immediately notice something's amiss.

    Here, even the most advanced renderer won't help much if you're talking about real-time interactive stuff -- it is still raw CPU speed here...

  • The theory goes: The fact that none of the situations described in the song are ironic, is itself ironic. :)
  • ...when the GeForce2 GTS first came out. nVidia's marketroids said it was "a major step toward achieving [Pixar-level animation]." Pixar's Tom Duff commented:

    "Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What's the chance that we wouldn't put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn't NVIDIA tried to give us a carton of these things? -- think of the publicity milage they could get out of it.

    "At Moore's Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do the frames we do today in real time. And 20 years from now, Pixar won't be even remotely interested in TS2-level images, and I'll be retired, sitting on the front porch and picking my banjo, laughing at the same press release, recycled by NVIDIA's heirs and assigns."

    Source [fgnonline.com]

    Some of the stuff the GeForce3 can do is great, but let's calm down. Move along...nothing to see here...

    -brennan

  • It's a 10(?)-minute-long realtime-rendered video.

    www.theproduct.de [theproduct.de]

    It's really amazing, and it would seem that what they were describing in the article is already here, but maybe I'm not quite clear on what they meant.

    Malcolm solves his problems with a chainsaw,
  • You are somewhat correct, but not quite there. Since modern 3D games are more complex, the game may lag a little now and then, and therefore we experience more skipping in motion than in simple 2D platform games. But this is not the whole truth (as in: why do the Amiga and TV consoles facilitate "perfect motion"?).

    Searching for "human eye framerate" on Google provided this link [www.ping.be]. A very good point raised here is that the screen is turned on and off many times a second. This makes us much more perceptible to refresh rates above 30 Hz. Especially on TVs and monitor pictures with higher intensities, where white colour is the brightest. If you don't believe me, adjust your monitor refresh rate to 60 Hz and notice the difference. Compared to 100 Hz, I notice the blinking extremely well. Hell, I even notice it a little when switching from 100 Hz to 85 Hz. However, if you use a lower refresh rate, your "eyes" adjust after a while. Especially using darker colours on the screen makes it easier. This might be a synchronisation problem, and that we start synchronizing with the lower refresh rate after a while. However, we DO notice extremely well when comparing, and working on a lower refresh rate may give you more headaches!

    Notice the difference between refresh rate and framerate. IMHO refresh rate has everything to do with how "smooth" a motion you can have. With lower refresh rates, it's much easier to create completely "smooth" scrolling (we perceive the motion as continuous), but we might notice the blinking of the screen. This is why games on TV can look PERFECTLY smooth, but "horrible" on a high-Hz monitor. The more Hz you have, the higher a STABLE framerate you need to get the same effect. So if you want smoother motion in games, I recommend learning to play at a lower monitor refresh rate. Really! Your head may throb, but it's smoooth ;-)

    All in all, I think of the problem as in two parts:

    A) A synchronisation problem between refresh rate and framerate. (Which is really the same as your conclusion) Sometimes, a frame can take longer than a refresh and people will notice.

    B) A synchronisation problem between the eye and the refresh rate of the monitor and its intensity (remember colours are frequency too!). Remember that the human eye isn't built for watching rapidly blinking objects.

    They don't pay me, so I won't clarify much more than this. ;-)

    - Steeltoe
  • They don't pay me, so I won't clarify much more than this. ;-)

    Actually, I lied (they still don't pay me though). Modern 3D games usually use more time to draw a frame than just one vertical refresh on the monitor, so you notice lag in motion in most new 3D games anyway. The higher the number of refreshes used, the more noticeable "jaggy motion" you get (depending on refresh rate). You won't notice anything in between, except occasional skips (by chance) now and then. Try playing an older 3D game. With a low enough detail level and resolution, you should be able to push the limit so that everything is calculated in the vertical refresh period and get "perfect motion" (Doom, for instance). If this fails, try adjusting the refresh rate of the monitor down.

    The VR period is when the beam on the monitor moves from the lower right- to the upper left corner and the screen is blanked. If you want "perfect motion", this short time is all you got to draw the next frame. That is why it is easier to have "perfect motion" at lower frequencies. Actually, the motion is not perfect at all since there is no motion(!). However, the brain is fooled into seeing perfect motion. If you just skip one refresh, that's enough to notice a small lag in motion (depending on refresh rate).

    In reality though, you have a little more than the vertical refresh. As the beam goes drawing down the screen, if you manage to stay ahead of it (drawing the scene from top-to-bottom), the player can't notice it. I know this from experience. This does not of course apply if you are using double-buffering. It's harder to have "perfect motion" with double buffering, since you need to synchronize with the VR in order to set the screen address every refresh. I believe most modern games draw directly on the screen nowadays since the GPUs are so fast, so this might not be a problem anymore.

    So all in all. It doesn't matter how high you can push your fps. As long as you don't synchronize properly with the monitor refresh rate, you'll not get "perfect motion". The refresh rate is what is fooling the brain in the first place. A higher framerate than the refresh rate is meaningless. Humans DO recognize the difference between objects blinking 30-120 Hz with high difference in intensities. Humans are NOT simple math and simple science.

    - Steeltoe
  • I'm not sure I follow you. What is creating this low-frequency beat? The frequency of the light and the frequency of the blinking?

    - Steeltoe
  • by ChristianBaekkelund ( 99069 ) <draco AT mit DOT edu> on Tuesday June 26, 2001 @01:13PM (#127032) Homepage
    1) Tom Duff sounds on the money with regards to the technical misconceptions...but an even bigger ever elusive problem: 2) "Pixar-level" animation in the end is not about polygon count, it's about COUNTLESS man-hours spent modelling, lighting, and animating....no card can ever replace that.
  • PRMan doesn't raytrace. Ever.

    You are wrong. The shader language can raytrace.

    Using BMRT [bmrt.org] together with PRMan, it can ray trace, and many people use it. Like in Hollow Man [imdb.com], for instance.

    Here is a gallery [exluna.com], which includes Hollow Man. The call looks like this :

    color trace (point from, vector dir)

    Traces a ray from position from in the direction of vector dir. The return value is the incoming light from that direction.

    Source [exluna.com]

  • B8 00 4C CD 21

    What's so frightening about terminating a program?

  • PRMAN CAN RAYTRACE USING BMRT AS A TRACER.

    BMRT Raytracing Howto [exluna.com]

    You people are amazing. You don't even bother to look at my link, and you tell me I'm wrong.

  • PRMAN CAN RAYTRACE USING BMRT AS A TRACER.

    BMRT Raytracing Howto [exluna.com]

    You can hook up PRMan and BMRT together, using BMRT to do the trace() calls. This is in fact a semi-supported function that Pixar gives to people. When you get PRMan, they'll happily give you BMRT, as well. Many things Pixar does use BMRT. BMRT is good. Don't diss BMRT. When people use PRMan, BMRT is a natural thing to include in many cases!

  • To draw an analogy :

    If I'm in Word, and I add an Excel spreadsheet to it - and then I print it - is that Word printing a spreadsheet? By my definition, yes - by your definition, no.

    By my definition, Word can use Excel as a spreadsheet renderer. By your definition, apparently, just because Excel is not built in to Word, it means that Word is incapable of printing spreadsheets.

    They're telling me, "There's no way for Word to print a spreadsheet," and I'm saying they're wrong. You're also saying I'm wrong. But I'm not, I'm right - and the page I pointed to shows how it can be done. It's not easy, and there are problems, but it can be done. The actual facts are on my side.

    Your definition might be more technically correct (since "we don't say that program 1 is performing the specific task"), but mine is certainly more useful, since my point was that program 1 is able to perform a specific task by communicating with another program. Many programs are incapable of communicating with other programs in such a manner, and that makes PRMan pretty cool, in my book.

    By the way, it's "English," not "english."

  • Well, I guess I live in the real world, where people make their arguments using English-language sentences, not echo and sed.

    After executing your commands, I am left with the following statement from you :

    prman can't do radiosity via an SL trace without BMRT.

    Now, I will apply the English language suggestion for good writing, "Don't never use double negatives," after which, your statement becomes :

    prman can do radiosity via an SL trace with BMRT.

    This is shockingly like my original statment :

    PRMAN CAN RAYTRACE USING BMRT AS A TRACER.

    And I guess I agree with myself. So, then I can only laugh, when I read your insulting statement :

    If you still think that's prman doing raytracing then I suggest you take a remedial english class.

    Because I just proved that you "think that's prman doing raytracing"! So, why exactly did you need to insult me twice in your post, in order to agree with me?

  • This will be my last post in this thread.

    Thanks - having the last word is kind of fun.

    You might consider getting some help for that persecution complex.

    If this is a back-handed apology, I accept. If it's merely another insult, you might want to consider taking some aggression management classes. I think it's somewhat childish of you to move an argument of fact into a name-calling bout, suggesting that I don't live in the real world, don't know how to use the English language, and now that I need therapy and have a "widdle head."

    Semantic debates are sometimes enthralling, because you can twist words to make facts lie - but they don't really further understanding. Your definitions of "program" and "call" are well stated, and I believe I understand them, but I don't believe that they really help you.

    Word can call Excel to do spreadsheets, but that doesn't make it a spreadsheet program. PRman can call BMRT to do raytracing, but that doesn't make it a raytracing program. Eudora can call PGP to do encryption, but that doesn't make it an encryption program. The JVM can call methods in a class file to give you the long-distance phone calling capabilities in DialPad, but that doesn't make the JVM a long-distance phone calling program. Quake III : Team Arena can call jpeg library functions to load jpeg images, but that doesn't make it a jpeg image loading program. Internet Explorer can call Hotmail to send email, but that doesn't make Internet Explorer an email program. PRMan can call the shader language to do shading, but that doesn't make it a shading program.

    Utility is an interesting thing. I can use a butter knife to turn screws, and I will agree with you that my ability to use a butter knife in that manner doesn't somehow turn it into a "screwdriver," in the traditional sense. But, if your definition of a screwdriver is merely that it is an implement with which one may turn a screw, it becomes a pretty hazy line. If you ask me for a screwdriver, and I hand you a butter knife, I'll laugh at myself, right along with you. It's silly, it's not what you asked for, but it'll do the job. You have to agree that if a demolitions expert is trying to defuse a nuclear bomb without his tools, the clock says 26 seconds, and he asks me for a screwdriver, if I hand him a butter knife - I've saved the day!

    Most effects houses have a hard time staying in the black, and most use PRMan. If an effects house is down to the wire, and their client demands that a certain effect needs to look more real, and the only way to pull it off is by having PRMan call BMRT to do raytracing, they'd rather use my definitions of "program" and "call" than yours.

    I love your last paragraph - I think you should use it the next time you're on Jerry Springer.

  • Go find a theatre playing something in ShowScan format - that's 60 fps, and looks lovely. Even the grain is reduced, at that framerate.

    Not many features released in ShowScan, I'll agree. But you occasionally see it turning up in amusement rides & Vegas "experiences".

  • Um, are you on crack? Square has made no such decision. The Final Fantasy franchise is way too profitable to axe. Where did you hear this? I'd be interested to know your source. Or were you just assuming XI would be the last one due to the fact that it would be online? Sakaguchi has stated in interviews, if I'm not mistaken, that future FFs will not necessarily be online, implying that there will be more after XI. I see no reason for Square to cut off the largest, healthiest branch of their game development.
  • Stores around here are kinda slow as well. But Pricewatch always has the new stuff as soon as it comes out. You have to pay for shipping and all that, but if you want the latest, there you are.
  • I've had no problems with a GeForce 3 card from ELSA. ELSA is partly owned by NVidia, and aims primarily at professional users. They have a number of older cards with price points above $1K. Surprisingly, their GeForce 3 card isn't more expensive than ones from game-oriented manufacturers. And ELSA provides a six year warranty.

    I'm using Win2K SP1, and the driver for the GEForce 3 works quite well. All the graphics features work. The chameleon demo is indeed impressive. Considering that NVidia just started shipping the Win2K driver, I'm quite impressed.

    I have more fans in my systems than most users go for. Nothing is overclocked. GeForce 3 boards have heat sinks on the RAM and a fan on the graphics chip. This board is pushing the limits of what's possible with current semiconductor technology. Power and cooling should be sized accordingly. Just shoving this into some low-end PC with a minimal power supply and fan may not work.

  • Or try something like this [indiana.edu]. As the Pixar dood mentioned, even if the card could *render* it in realtime, there's a whole lot more than just rendering. That pic there (my first, so don't expect much ;) takes several times longer just to PARSE than it does to render. And it's absolute crap compared to the crazy stuff Pixar does.

    However, I'd say we ARE about advanced enough to do crap like this [indiana.edu] in realtime... ;) (No goat links, I swear!!)

    However, there is some hope... I remember reading in a great book about 3D games (Black Art of Macintosh Game Programming) that raytrace-quality realtime games would be (according to the author's math) about 20 years away. Interestingly, that's exactly what the Pixar guy predicted, and that book was printed in 1996. My observations: today, Pixar does far more than simple raytracing. It's radiosity up the wazoo, for example (I assume ;). So to me, this suggests that ~20 years from when the book was published, we will be able to have realtime raytracing of 1996 quality. Still not too shabby. BUT. There are gazillions of optimizations you can make in realtime games that you can't make in raytracing. Here's how I see it: we can improve the algorithms a few powers of ten, efficiency-wise. (Don't say we can't, you'd be very wrong.) We can speed up our processors a few powers of ten. I think we're getting there faster than these guys are suggesting, just as long as we don't aim for the moving target of Today's Pixar Production$. (As he points out, there will never be a day when the realtime graphics are as good as the prerendered ones, simply because the big companies have the cash to throw at it to make it look better.) Anyway, sorry this was so long. Great stuff ahead, though. :)

  • If you note, the original post was correct in its use of "ironic" except for the quoting of the Alanis Morrisette (sic) song in the subject. The singer is Canadian, however, and whether or not that fits your definition of "American" is up to you.

    It is slightly ironic that the same people who one day were saying digital art is still art are the next day saying that animation on the level of a movie that took thousands of man-hours to create can be generated by a computer. Thus stripping away the art value of the movie (or at least the animation in the movie).

    It does piss me off when people misuse words, especially words that are very nuanced and clever. But there you go. People are stupid.

    BTW, good examples.

  • Not only that, but it's not like "slow" renderers are going to be standing still while nVidia advances their technology. There is a heckuva lot of room for advancement before we can generate photorealistic images of any arbitrary environment. Wake me up when there's a GeForce that can do stuff like this [artoolkit.org] in real-time (and don't forget the motion blur!), and then we can compare state of the art again.
  • Sure, there is incredible potential in the GeForce 3 and programmable stuff like it, but computer graphics in movies has too many extra things going on for it to realistically be done in hardware. If there were specialized hardware for all the nuances like raytracing, inverse kinematics, procedural textures, volumetrics, radiosity, etc., then maybe it could be done in realtime, but I don't think we're going to be there any time soon. Rendering is very slow because for the most part it can't take advantage of special hardware, because the quality just isn't there. Anti-aliasing, motion blur, and good depth of field take time, as do physics simulations, heavy subdivisions, and complex shaders. It's coming, but not for at least 4 more years, and that can only happen with high-end rendering tools like RenderMan and Mental Ray being written to take advantage of a specific card, which isn't going to happen unless there are standards in place.
  • I think his point was that you can't just have a solution in a video game. How is it saved? Either you use a mesh sequence, which is space-consuming and inflexible, or you do the simulation in realtime. I am not talking about a falling brick that can be done with keyframes; I am talking about water simulation and particle dynamics.
  • Renderman doesn't do radiosity. Different techniques are used other than raytracing for realtime games and probably will continue to be for some time.
  • Well, I have no doubt that they could render an orange tree in better than realtime.

    After all, that orange tree would have taken years to grow.

  • 35mm film is less than 3k x 3k. If this link [softimage.com] is right anyway.

  • I never liked FF. Big Dragonball style hair, people riding these weird chickens... silly big swords... that's all I ever saw. Well, a friend of mine was playing FF8 and his character had to dress up like a woman to go do something. I guess that's more interesting than a big chicken. I'd rather watch Record of Lodoss War or some other old classic.

    The FF movie looks nothing like the video games I have seen, thank goodness. I hope the characters in the movie aren't breeding those giant chicken things...

    - Someone Confused by the FF Hype

  • Video transfer uses something called "3:2 pulldown"; I used to read up on that stuff when I collected LDs. Pretty tricky actually turning a 24fps non-interlaced data format into a 30fps interlaced stream... but no matter how the frames are sliced and diced, a film only has 24 frames per second of data to offer.

    I just want true 30fps in theaters! Maybe in an all digital theater... they do shoot some movies on HD video cameras, don't they? Still haven't been in one of them newfangled digital theaters. Seattle has crappy theaters.
  • He's 5 years out, Luxo Jr. was made in 1986.
  • by _ganja_ ( 179968 ) on Tuesday June 26, 2001 @03:33PM (#127070) Homepage
    Well, this post is going to be off topic, but it's something that I need to get off my chest. The way the word "ironic" gets used, especially by Americans, is complete crap. Take the title of your post: that is not one bit ironic, it's just unlucky. In fact, if you listen to the song that you took the title from, everything in the lyrics is just unlucky. E.g. "Like a traffic jam when you're already late" is not one bit ironic; it would only be ironic if you were a town planner and got caught in a traffic jam on your way to a meeting to discuss the traffic problem.

    "10,000 spoons when all you want is a knife", how is that ironic? It would only be ironic if later you discovered that a spoon would have done just as well for say, opening a can of paint.

  • If you read the RenderMan Interface Spec. (V 3.2) the entry for trace() reads as follows:

    "
    color trace( point P, point R )

    trace
    returns the incident light reaching a point P from a given direction R. If a particular implementation does not support the Ray Tracing capability, and cannot compute the incident light arriving from an arbitrary direction, trace will return 0 (black). "

    So, you can call trace() in prman, but it's not going to do you any good.

    That said, it is possible to write a ray tracer in the shading language! This has been done, in fact, by an insane person named Katsuaki Hiramitsu [http]. This shader, however, does not use the trace() call. The trick lies in actually defining the objects you're going to do ray tracing on in the shader, along with your own version of trace(), which is, by necessity, intimately bound to the type of object you've defined (a rough sketch of the idea follows this comment).

    So, saying the shading language can ray trace is like saying you can keep yourself alive for a while by eating selected portions of your own body. It's possible, but certainly pessimal.
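
    To make the trick concrete, here is a hypothetical transcription of the idea into plain Python rather than the shading language. The one-sphere scene, the names, and the flat shading are all invented; Hiramitsu's actual shader is presumably far more elaborate:

    import math

    SCENE = [  # the geometry lives inside the "shader" itself
        {"center": (0.0, 0.0, 5.0), "radius": 1.0, "color": (1.0, 0.2, 0.2)},
    ]

    def my_trace(origin, direction):
        """Return the color seen along a ray, or black if nothing is hit --
        i.e. a hand-rolled replacement for the built-in trace()."""
        ox, oy, oz = origin
        dx, dy, dz = direction
        for sphere in SCENE:
            cx, cy, cz = sphere["center"]
            # Solve |o + t*d - c|^2 = r^2 for the nearest positive t.
            lx, ly, lz = ox - cx, oy - cy, oz - cz
            a = dx * dx + dy * dy + dz * dz
            b = 2.0 * (lx * dx + ly * dy + lz * dz)
            c = lx * lx + ly * ly + lz * lz - sphere["radius"] ** 2
            disc = b * b - 4.0 * a * c
            if disc >= 0.0:
                t = (-b - math.sqrt(disc)) / (2.0 * a)
                if t > 0.0:
                    return sphere["color"]
        return (0.0, 0.0, 0.0)   # which is all prman's built-in trace() gives you anyway

    print(my_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # hits the sphere -> (1.0, 0.2, 0.2)
    print(my_trace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))   # misses -> black
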

  • by Junior J. Junior III ( 192702 ) on Wednesday June 27, 2001 @04:54AM (#127081) Homepage

    I still play Ms. Pac Man, but I hardly ever play games from just five years ago.

    Graphics are cool and all, but they're essentially just pornographic. Not in the sexual sense, but pretty graphics just sit there vacuously to amuse your eyes. As has been said long and loud, game developers should strive to focus at least as much on gameplay as they do on making their graphics cutting edge. Give the user an elegant interface, something fun to interact with, something new, and something challenging.

  • Big Brother Bill will be very happy. I will support the Ingsoc-MSDN party by upgrading all the MSN telescreens from the Dell-Compaq-IBM-HP-Sun corporation, and I can play some Big Brother sponsored games with it. I know Dr. Gates will run pretty well with the new card after shooting all the 3D GPL viruses. A game of Find That Communist Counter-Revolutionary will go well too. I need to have the face of Goldstein, oops, I mean Linus memorized for this game.

    Remember: Freedom is slavery, War is peace, and Ignorance is strength!

    Now I need to stop goofing around here on the Slashdot Ingsoc-MSDN party news and go back to work studying the 11th edition of Newspeak by MSN Expedia. I keep hearing people here on Slashdot speaking in Oldspeak.

    You people all need to learn how to excel (smart tag link to the Microsoft Office homepage) at what you do, to learn and explore (smart tag link to the Internet Explorer site) your Newspeak party language. With free (link to how free you are with HailStorm/.NET) enterprise and innovation to lead the market, great Microsoft can actively access (link to MS Access) all the information we need. We need an active innovator to actively explore and actively lead the market, and they ask that we all support the revolution with our activation subscriptions.

    See, that wasn't hard. You all need to speak Newspeak and only use these adjectives: innovation, lead, explore, access, active, word, excel. This will make thoughtcrime impossible. Less is better. For something doubleplusuninnovative like Linux you should not use the word bad or sucks. You are all required to use the words above with the -un extension. If something is really innovative you need to say doubleplusinnovative; if it's really bad, doubleplusuninnovative. Everything non-Big Brother is just plusuninnovative. So remember, it's not GNU/Linux but doubleplusuninnovative GNU/Linux. Now let's hear you all respect Big Brother. After my Newspeak lesson I will play some video games and render Linus doing doubleplusuninnovative things to scare people, so that Bill Gates can actively explore my record and I can be considered a loyal member who doesn't doublethink.

  • Uh, since when has nvidia had unstable drivers? They're using a unified driver architecture, meaning that the same drivers they made for the original GeForce two years ago can now be used with the GeForce3 (although the drivers might need to be tweaked a little for the different architecture). This gives Nvidia over 2 years to tweak their drivers for maximum stability and performance. The only reason you're complaining, I'm assuming, is because you are using the leaked beta drivers that Nvidia never authorized. Well, guess what? They aren't supposed to be out! So if you care about stability that much, just get yourself the official drivers and stop bashing the company for something it has done right :P

    Well, I had to throw my TNT2 card into the trash because it had unstable drivers. I recently built a low-end machine for my wife to play EverQuest, and since EQ doesn't need a high-end graphics card I purchased a TNT2. About 50% of the time that she zoned or started the game, her comp would get a fatal exception 0E. In fact, it would get this exact exception:

    A fatal exception 0E has occurred at 0028:C0006EB2 in VXD VMM (01)+ 00005EB2

    After days of playing with it... buying new memory, trying everything I could, I finally found out this is a problem the TNT2 card has been having for years. I would say the drivers are relatively unstable. I bought a Voodoo5 card for the machine and it hasn't crashed since. If you go to this link [techadvice.com] you'll see that the only workaround I could find (changing the AGP aperture size and disabling video caching did not work) was to TURN OFF HARDWARE ACCELERATION. ROFL! Why buy a 3D card at all if you have to turn off the hardware acceleration to get it to work properly? Can't even run EQ with the acceleration turned off. And just to prove that the problem is not EQ's... I had the same problem when running Half-Life on the card.

    It saddens me that Nvidia is quickly gaining a monopoly on the graphics industry, because I truly do not want to purchase another card from this company after they knowingly let the old TNT2 cards' driver problems go unfixed.
  • Really? I have a TNT2 (Diamond TNT2... I forget the revision number...) in my machine. I know it's old; I'm waiting for Creative to release a GeForce3. But I've had my Diamond Viper 770 forever (and so have all of my friends) and I (them too :-) have never had _any_, repeat _ANY_, problems with it. Be it using the drivers that came with the card, or the latest Detonator drivers I downloaded from Nvidia (for Linux or win32). Either your board was defective (did you try to exchange it?) or you shoved it into a PCI slot somehow. Was that Voodoo5 you bought PCI? If it was, have you ever tried another card in your AGP slot? That could be fried. Also, check your BIOS: do you only have 64 megs of RAM, and have your AGP aperture size set to 64 megs? The TNT2 chipset was one of the best chipsets in my mind. I'm still running a Viper 770 on my windows boxen at home, and my linux box here at work.


    Heheheh.. Yeah, the Voodoo5 is AGP. I don't think it's the TNT2 chipset; the way the specific card vendor (IOMAGIC) integrated the chipset into the card just didn't work with the drivers, i.e. the TNT2 chipset works fine, but the IOMAGIC card has problems with the drivers. I agree that it is probably a problem with the IOMAGIC card, and not Nvidia, but the people who purchased IOMAGIC cards need support too!

    If you see my name up there, I develop device drivers for PCI cards, so uhh.. I think I can tell the difference between an AGP slot and a PCI slot.. heheheheh.. funny to think about trying to fit a keyed AGP card into a PCI slot.. I'm certain the pin locations are different, so if you did manage to get it in I don't think you would see ANY graphics.. :)

    The machine had 256 megs of RAM, and I tried all the various AGP aperture size settings with no success.. the only way I could get the card to work reliably was to turn off hardware acceleration in Windows altogether, which wasn't an acceptable fix for me.

    As for the card, I've seen several IOMAGIC cards have the same problems (3 or 4 different cards that people have had) in some 3D games (it seems EQ and Half-Life are the worst). Again, I think the chipset is probably okay, but the drivers didn't seem to work with IOMAGIC's specific implementation of the card.

    As to the person who mentioned that it was funny that I got a Voodoo5.. Well I had only built the machine for 1 purpose, and that was to play Everquest. I purchased the Voodoo card AFTER Nvidia had purchased 3dfx, but I got it because I knew that card had very few issues with EQ, compared to the number of issues I saw people were having with the GeForce2 cards at the time.
  • It sounds to me like you had something else going on, or you had a bad TNT2. I've played quite a lot of EQ over the past year on a TNT2 without a single crash; before that I was using a PCI TNT without a crash as well.

    I would guess you either had a heating problem, old drivers lying around in the windows directory, or some motherboard/card interaction. nVidia has probably the best driver support out of all the consumer card companies, compared to say 3dfx who couldn't even be bothered supporting my Voodoo2 properly a little over a year after it was out and never did get a real OpenGL implementation going.


    Well.. I'm 99% sure that it was a driver problem, because if you follow my link lots of people are having the problem, and I have seen it happen on several different cards on different machines. All of the cards were from one vendor though, IOMAGIC. Now, that aside, you guys have definitely made me MUCH less worried about my decision to purchase a GeForce3 card for the new machine I am building. I have honestly been very worried about purchasing any Nvidia chipsets since I saw the problem. I know that lots of people HAVEN'T had the problem, but the problem definitely does exist...

  • I think the intended benefit is in rendering the graphical representation of an already-calculated model. Sure, creating the models, wireframes, animations, etc. is the most time-consuming process, but rendering the textures, reflections, transparencies, etc. isn't cheap either. That's where the GF3 comes in.

    The hard(er) part is getting the models and animations to look "right." The easy(ier) part should be rendering the textures for those models. Right now they're both expensive. The GF3 should lower the cost of the latter.
  • "1) Tom Duff sounds on the money with regards to the technical misconceptions...but an even bigger ever elusive problem: 2) 'Pixar-level' animation in the end is not about polygon count, it's about COUNTLESS man-hours spent modelling, lighting, and animating....no card can ever replace that."

    I totally disagree. Take a look at the Quake movies that were made. Before Quake came out, how many thousands of man-hours would it have taken to render and animate those movies "the old-fashioned way"? A shitload! Then Quake came out, and you could get semi-realistic 3D graphics, and people could "act out" the scenes with rudimentary tools.

    Skip ahead 5-10 years, where the CPU power and "acting" tools available are much more sophisticated. Are you still going to claim that "pixar-level animation" cannot be done with a good 3D model artist, a scene artist, and an electronic actor?
  • It seems to me that we will never get to the point of realistic rendering in games, because getting to Pixar-level animation takes so much art and detail before the rendering stage even begins.

    If we do get game companies trying to produce games with "realistically rendered" graphics, won't they need budgets of 100 million for each game to develop all of the data (detail, world, etc.) that the hardware will operate on? Then we'll be walking into the software store laying down $5,000 for a game instead of $50.
  • Most of this stuff can continue to use Newtonian optics. Even Bragg diffraction, now that I think about it. So you're right that degenerate modelling will do us for quite some time, and most of the canon.

    But the first person who wants to model proper rainbows, sun-dogs, or coatings...

    The really hard part about QED isn't the iterations. It's defining the integration regime in the first place. I haven't looked in a couple of years, but I bet even the best Feynman diagram tools still can't work without heuristic input.

    --Blair
    "Luxo, Jr. always wanted to grow up to be an electron microscope."
  • Damn cool, but it's not QED yet.

    It's all based on waves (classical theory). And it seems to be a surface phenomenon only, and dependent only on the geometric surface description.

    Real QED would include interactions of photons with the subatomic particles of the atoms within the body of the material.

    Diffraction and thin-foil effects are too-simple examples, with classical analogues. Phase-conjugate mirrors or simulacral holograms; now there you have to have QED.

    This isn't to take away from Stam's work. It's gorgeous. The idea of walking into a bar with a double-barrelled shotgun and blowing away the pseudo-retro Wurlitzer with the wave-rendered CDs rotating on top, wave-rendered shards of CD spinning through space...

    The idea of finding a secret because of its slight change in lustre vs its surroundings when the overhead lights dim and an accent spotlight becomes dominant...

    The idea of being able to tell painted plastic from painted metal and painted wood, or black dirt from gunpowder and incinerated-demon charcoal...

    Someone get nVidia on the horn.

    --Blair
  • by blair1q ( 305137 ) on Tuesday June 26, 2001 @01:15PM (#127121) Journal
    Is anyone working on a Quantum Electrodynamic model of raytracing? Diffraction gratings would be cool. It would improve other things too, like hair, thin films, etc.

    --Blair
  • by Salieri ( 308060 ) on Tuesday June 26, 2001 @01:30PM (#127124)
    I think it's a little ironic that today we talk about bringing Shrek and Final Fantasy to the desktop when just yesterday a slew of 4's and 5's affirmed that, beneath raw power, there is art in computer graphics.

    Believe me, there is a lot of artistic skill that goes into making animation like that, from storyboarding to complicated modeling and animating to directorial talent and writing ability.

    Just because Avid-style editing has been brought to the desktop, doesn't mean what you see on iFilm is as good as what you see in theaters. Most of the time it isn't. It's all about the talent, not the tools.

    Case in point: Robert Rodriguez [amazon.com], who scraped together only $7,000 to become one of Hollywood's hot young directors. For those who don't know about him, his latest film was the hit Spy Kids.
  • The problem is that even if we can have 100k polys per frame, it's going to suck to see your marines run straight into gunfire, or just stand and watch while their friends get slaughtered because their fuzzy logic is too fuzzy to realize they could help. Those bastards are always out of reaction range, even if the fight is just a few meters away. Frankly, AI has been neglected for too long. I know AI can be a pain in the ass to program, but I'd like someone to make a licensable AI engine, just like the GFX engines these days...
  • I am not sure about real-time rendering for flicks like Shrek and Final Fantasy. But real-time rendering for stuff that is less detailed is definitely here. I've worked with Maya quite a bit, and I remember hating the way I had to wait ages for it to render my movies (8 hours meant that I would leave it to render overnight). Now, what I was working on was not cutting edge, and my use of lighting and textures was quite simple. I'm pretty sure that the GeForce3 would render something like that in real time with no problems at all. Real-time rendering is here and I'm happy that I won't have to wait a day before I can see the result of my work. This means a massive improvement in productivity and ease of use... at least for me.
  • You mentioned something specific in your post that I just wanted to comment on quickly, specifically the reference to what makes something realistic or unrealistic: "physical dynamics and interactions such as collisions, deformations, the effects of natural phenomena like wind, human locomotion, etc."

    This basically proves your point, but just to throw my own slant on it, having worked on small 3D rendering projects myself: it's never been the graphics hardware that has made anything I've worked on more realistic, and only rarely things implemented in code (deformations, etc.); it's been texture detail.

    A quick explanation, I feel, is in order before I leave work. First and foremost, the most important aspect of any 3D environment is its actual construction, in this case, polygons. But to the end user, a bunch of polygons will look like just that: a bunch of polygons. With smart texturing (explanation of THAT in another minute) these polygons can actually begin to LOOK like something.

    Now, smart texturing. This is the tough part, I think, but also the most rewarding. I define smart texturing as using textures which simulate real-life features. Not just a brick wall where all the bricks look the same, but a brick wall where some of the bricks are broken, some are off color, some are missing, and NEVER in any repeating pattern. The last part is the most important, but rarely done these days (one way to get there is sketched after this comment). Repetition of textures is usually the first thing that triggers a message in the brain that says, "ooh, right, this is a videogame."

    I first realized this a few years back when I was driving to work one morning. I was looking at the road ahead of me and was more or less studying the blacktop, trying to figure out why I had never seen a realistic-looking road in a videogame before. I realized that it was because most roads in games consist of very simple textures: black surface, yellow and white lines. But the road I saw in front of me was drastically different: it wasn't just black, but a multitude of colors. It didn't appear flat, but rather bumpy and "textured." It was covered in skid marks, and there were signs that accidents had taken place there before. The surface wasn't just some asphalt; it told a story. Humans had been there, damaged it, and that's what made it interesting to look at that day.

    I think we're still a ways off from being able to do away with the repetitive textures that dominate roads and brick walls in videogames. But next time you're texture mapping, experiment with details in the environment that show humans have been there: wear and tear. It's what makes it cool to see bullet holes in the walls and blood splattered all over the floor when you play Quake 3.

    Just some thoughts. Yeah, I rambled. But work is over now. Mission accomplished.

    -NeoTomba
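
    One cheap way to get that kind of variation without hand-painting every brick is to drive each tile's appearance from a hash of its grid position, so the wall never repeats yet never changes between frames. A toy sketch in Python; the thresholds and tint range are invented for the example:

    import hashlib

    def brick_variation(row, col):
        """Map a brick's grid position to a stable pseudo-random value in [0, 1)."""
        digest = hashlib.md5(f"{row},{col}".encode()).digest()
        return int.from_bytes(digest[:4], "big") / 2**32

    def brick_appearance(row, col):
        v = brick_variation(row, col)
        return {
            "tint": round(0.55 + 0.25 * v, 2),   # vary the brick brightness a little
            "chipped": v > 0.92,                 # roughly 8% of bricks are chipped
            "missing": v < 0.02,                 # roughly 2% are missing entirely
        }

    for row in range(2):
        print([brick_appearance(row, col) for col in range(4)])
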
