The evolution of PhysX - Addendum
I got a bunch of questions about my last series of blog posts, so I thought I’d add a quick note here - at the risk of confusing people even more.
The figures I posted are for the CPU part of PhysX only. This does not concern or affect the GPU parts of PhysX in any way. Those things are orthogonal. If we optimize the CPU parts and get a 10X speedup, it does not mean your GPU will suddenly provide 10X less value, because the GPU is running other parts of PhysX anyway - not the rigid bodies, and not the raycasts/sweeps.
Only a few features are GPU-accelerated, e.g. cloth or particles, mainly because they are the ones that map well to GPUs, and the ones for which GPUs provide a real speedup factor.
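To make the per-feature, opt-in nature of this concrete, here is a minimal sketch of what enabling GPU particles looks like against the PhysX 3.x API. The helper function, its parameters and the fallback comment are my own illustration, not something from the posts, and the foundation/scene setup is assumed to happen elsewhere.

// Minimal sketch: GPU execution in PhysX 3.x is requested per object,
// not globally. The scene is assumed to have been created with a
// CUDA-based GPU dispatcher (PxSceneDesc::gpuDispatcher) when an
// NVIDIA card is present.
#include "PxPhysicsAPI.h"

using namespace physx;

PxParticleSystem* createParticles(PxPhysics& physics, PxScene& scene,
                                  PxU32 maxParticles, bool wantGPU)
{
    PxParticleSystem* ps = physics.createParticleSystem(maxParticles);
    if(!ps)
        return NULL;

    // Opt this particle system into GPU simulation. My understanding is
    // that without a CUDA device the simulation simply falls back to the
    // CPU - the rigid bodies and scene queries are unaffected either way.
    if(wantGPU)
        ps->setParticleBaseFlag(PxParticleBaseFlag::eGPU, true);

    scene.addActor(*ps);
    return ps;
}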
Now as shown in the recent “destruction video” I posted, people here are also working on GPU-optimized rigid bodies. This new module is called “GRB”, and it is currently not part of PhysX. But it does provide a speedup compared to our current CPU solution. In other words, it is still faster than PhysX 3.3 on CPU. You might have a hard time believing it, but people are trying to be fair and honest here. One of our tasks is to optimize the CPU rigid bodies as much as we possibly can, just to make sure that the GPU rigid bodies do provide some actual benefit and speedups. If you don’t do that, you release your GPU solution, it turns out to be slower than a CPU solution, and you look like a fool. Like AGEIA. We are not making that mistake again. The CPU solution is here as a reality check for ourselves. I suppose we could just use Bullet or Havok for this, but… well… we think we can do better.
Meanwhile, it is correct that the features that do work on GPU are currently only working on NVIDIA cards, simply because they are implemented using CUDA. There are obvious political and technical reasons for this. It should be pretty clear that at the end of the day, NVIDIA would like you to choose one of their GPUs. If you are actually complaining about that, then there is little discussion left to have. Of course they want to sell their products, like every other company in the world. And of course they are going to use their own technology, CUDA, to do so. To me this is pretty much the same as what we had in the past with D3D caps. Some cards supported cubemaps, or PN-triangles, or whatever, and some didn’t. GPU PhysX is the same. It’s just an extra cap supported by some cards, and not by others. Complaining about this is silly to me. It would be like complaining that ATI didn’t make any effort to make PN-triangles work on NVIDIA cards. Seriously, what?
The deal is simple. NVIDIA gives you a free, efficient, robust physics engine. In exchange, if possible, you add some extra GPU effects to give people an incentive to buy NVIDIA cards. Fair enough, right? I don’t see what the fuss is all about.
----
Anyway, the usual disclaimer applies here: I’m not a spokesperson for NVIDIA, what I write are my own thoughts about it, and for all I know I may be completely wrong about their intentions. What I know for a fact though, is that most of the stuff I read online about PhysX is just completely, insanely wrong.
I’ve been optimizing rigid body simulation in NovodeX/PhysX for a long time now, and there’s no big conspiracy behind it. Again, all those engines are free and publicly available, so I invite you to run your own experiments, do your own benchmarks, and see for yourselves. We really have nothing to hide.
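If you feel like doing that, the benchmark loop itself is trivial. Here is a sketch against the public PhysX 3.x scene API; the frame count, timestep and timing code are just my own illustration, and what you put in the scene is up to you.

// Trivial timing harness for an already-populated PxScene. Run the
// same scene contents across engines or versions and compare.
#include <chrono>
#include <cstdio>
#include "PxPhysicsAPI.h"

void benchmark(physx::PxScene& scene, int nbFrames)
{
    const physx::PxReal dt = 1.0f / 60.0f;

    const auto start = std::chrono::high_resolution_clock::now();
    for(int i = 0; i < nbFrames; i++)
    {
        scene.simulate(dt);        // start one simulation step
        scene.fetchResults(true);  // block until the step completes
    }
    const auto end = std::chrono::high_resolution_clock::now();

    const double ms = std::chrono::duration<double, std::milli>(end - start).count();
    printf("Average frame time: %f ms\n", ms / double(nbFrames));
}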