April 6th, 2016
Speaking of PhysX joints and ropes… it can do that.
April 6th, 2016
After GDC 2016 some people asked me about this video:
https://www.youtube.com/watch?v=ezTOYSms9us
I didn’t know this engine, but some googling revealed that it has been around since 2014 at least:
http://forum.unity3d.com/threads/ape-advanced-physics-engine-for-robust-joints-and-powerful-motors.259889/
First: welcome! Physics is fun and one more competitor is always healthy for end users. Also, your engine looks very nice so far.
Second: unfortunately your claims are slightly misleading and perhaps a bit unfair.
It is certainly correct that the PhysX joints are not perfect. It is certainly correct that PhysX does not handle high mass ratios very well. But that is just a side effect of using iterative solvers instead of the real thing. You can use PEEL to verify that Havok, Bullet, Newton, etc, all suffer from the same issue.
But this was not a random design decision, or something that we did not expect. We all started with “perfect” solvers a while ago (NovodeX 1.0 for example). They would solve everything by-the-book and behave much better in the presence of large mass ratios. Unfortunately they were also very slow, and the customers didn’t care about accuracy. They cared about performance and memory usage. To this day, most of the customer requests and feedback we get are still about exactly that: it’s never fast enough, and it’s always using too much memory. On the other hand, games get away with inaccurate solutions and imperfect solvers all the time, because they don’t use complex physics. Iterative solvers work fine for ragdolls, and most games don’t use more complex physics than that. So customers don’t want to pay the price for a proper solver when a cheaper one does the job just fine.
Now, when that solver is indeed not enough, we usually have dedicated solutions to specific problems. For example characters will use a character controller module, and vehicles will use a dedicated vehicle library. Contrary to what I read at least twice on different forums, recreating a car using rigid bodies connected by joints will not give you the “most realistic” vehicle, far from it. If nothing else, tires are not rigid bodies at all. If you want the most realistic driving behavior, you need a dedicated simulation just for the tire model (as they did in Project Cars for example). Using raycasts is not a problem per se, because the contact information they give you is in fact pretty much the same as what a rigid body cylinder would give you: contact point(s) and normal(s).

Contrary to what people claim, PhysX is perfectly capable of simulating a “monster truck”. In fact, we were the first ones simulating a monster truck with rigid bodies connected by joints, back in 2002 with NovodeX 2.0. (And we also did the tank-with-hinge-joints in Rocket, remember?) But we eventually dropped that approach because it is too crude, it doesn’t give you enough control over the driving behavior, and ultimately it does not look realistic enough. The current PhysX vehicle library is way more advanced, with models for the gearbox, clutch, suspension, wheels, anti-roll bars, and so on. It is not easy to use and we don’t have a good demo/sample for it, but the resulting cars are quite fun and pleasant to drive - much more than the NovodeX monster truck ever was. I’m not saying that as the PhysX guy, I’m saying that as the guy who logged hundreds of hours in the Forza games.
It is the same for joints. Most game physics engines have an extra dedicated solution for articulated systems, because they are perfectly aware that regular joints won’t work well there. Thus if you are trying to do an articulated character, there is a dedicated solution in PhysX called, well, “articulations”. There are equivalent solutions in Havok/Bullet/etc. Somebody pointed this out in the forum thread above but it was ignored, maybe because it didn’t fit the desired narrative.
I am not saying that the current PhysX articulations are a perfect solution to all problems (they certainly have their limitations as well), but if you are not even trying them then you are comparing apples to oranges. Just to prove my point I went ahead and recreated one of the scenes in PEEL (I might try other ones later). The forum thread says:
“If you tried this kind of setup with PhysX you should have known that PhysX can’t sustain this sort of load and complexity.”
This is wrong. It works just fine, as long as you use articulations:
To be fair and to give you the benefit of the doubt, it is true that Unity does not expose articulations to its users, so this was probably not possible to try there.
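For readers who want to try this themselves, creating a chain as an articulation looks roughly like the sketch below. This is written from memory against the PhysX 3.x API, not taken from PEEL’s source; the exact names and signatures should be double-checked against the SDK headers, and the `physics`, `scene` and `material` objects are assumed to already exist:

```
// Sketch: building a rope/chain as a PhysX 3.x articulation.
PxArticulation* articulation = physics->createArticulation();

const PxU32 nbLinks = 16;
PxArticulationLink* parent = NULL;
for(PxU32 i=0; i<nbLinks; i++)
{
	const PxTransform pose(PxVec3(float(i), 10.0f, 0.0f));
	PxArticulationLink* link = articulation->createLink(parent, pose);
	PxRigidActorExt::createExclusiveShape(*link, PxCapsuleGeometry(0.1f, 0.5f), *material);
	PxRigidBodyExt::updateMassAndInertia(*link, 1.0f);

	if(parent)
	{
		// The inbound joint connects the link to its parent link.
		PxArticulationJoint* joint = link->getInboundJoint();
		joint->setParentPose(PxTransform(PxVec3(0.5f, 0.0f, 0.0f)));
		joint->setChildPose(PxTransform(PxVec3(-0.5f, 0.0f, 0.0f)));
	}
	parent = link;
}
scene->addArticulation(*articulation);
```

The key difference with regular joints is that the whole chain is solved together by a dedicated articulation solver, instead of link-by-link by the iterative one.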
However, even with regular joints, you can get much better results than what got presented. For example here is a short list of things to try to improve ropes:
Yes, I realize that some people will consider this “cheating”. Well, game physics is a lot about cheating. Which brings me back nicely to what I was saying first: welcome! There is certainly room here for new engines that favor exactness over performance.
PhysX 3.4 continuous optimization
April 1st, 2016
Conventional wisdom says that you should use a profiler, find the bottleneck, optimize the bottleneck, repeat until the profile is flat.
Conventional wisdom says that you should not waste time with minor optimizations that won’t make a dent in the framerate. The gains must be “significant” to justify the time and effort spent optimizing.
Oh fuck off now. Conventional wisdom is stupid.
In terms of optimization I usually say that “everything matters”. All the minor insignificant gains eventually add up to something very valuable in the end. I’ve seen it on the ST. I’ve seen it on PC. I’ve seen it (a lot) on consoles.
And I just saw it again today. Here’s what 3 months of insignificant optimizations look like when you ignore conventional wisdom and keep doing them anyway. First changelist was at the end of last year, last changelist was last week. Scene is “ConvexGalore2” in PEEL (a bunch of convexes falling in a pile).
Granted: there is in fact a “significant” optimization in there that accounts for 1/3 of the gains. But the remaining 2/3 are from supposedly insignificant ones.
And that’s not even an April Fools’ joke.
March 17th, 2016
I already mentioned GRB on this blog (e.g. in the posts about PEEL). Here is some news about it fresh from GDC.
Pointer or reference: no difference?
March 11th, 2016
There was some discussion recently on Twitter or something, with people claiming that there is no difference between a pointer and a reference for the compiler. Well, it’s “mostly true”, but in some critical cases it’s very wrong.
In fact, it is so wrong that in the past I was forced to remove all references from some classes in our codebase, because they broke our binary serialization system.
I’m not going to spoil it for you, it’s more fun if you just run the following test first with a pointer then with a reference, and see what happens.
Surprised, like I was? You should not be: it is perfectly normal, expected and documented behavior. The compiler does exactly what it should be doing.
But it shows that pointers and references are not always “the same” w.r.t. generated code, nope.
Physics benchmarks for dummies
May 3rd, 2015
(This is a copy of PEEL’s User Manual’s Appendix A. I am re-posting it here since people rarely bother reading docs anyway)
Benchmarking on PC is a black art. Benchmarking physics engines is even harder. Use the following notes to avoid the most basic mistakes.
Use the proper power options.
This is typically found in Control Panel => System and security => Power Options. Select the “High performance” power plan. Running benchmarks with the “Balanced” or “Power saver” plans produces unreliable results.
Close all programs except PEEL. Unplug the internet.
Do not let programs like Outlook, Winamp, antivirus software, etc, run in the background. They can start random tasks at random times that will interfere with your benchmarks.
Ideally, start the Task Manager and kill all unnecessary processes. There are too many of them to list here, but with some experience you should be able to tell which ones can be killed, and which ones are worth killing.
It is of course very tedious to do this each time. So ideally you would take a radical step and use a dedicated PC with a fresh Windows installation and no internet connection. That is exactly what I do, and PEEL’s benchmark results at home are a lot more stable than PEEL’s benchmark results at work. Even when I do unplug the internet cable on the work PC…
Be aware of each engine’s “empty” operating overhead.
In theory, when you run a physics update on an empty scene, all engines should take the same amount of time, i.e. no time at all since there is nothing to do.
In practice, of course, this is not the case. PEEL’s first test scene measures this operating cost.
Avoid benchmarks with just one object.
As a consequence, avoid running benchmarks with just a few objects or even a single object. The simulation time for just one object is likely to be lower than the engine’s empty operating overhead, because the main internal algorithms are usually a lot more optimized than the glue code that connects them all together. Thus, such benchmarks actually measure this operating overhead more than anything else. While it is an interesting thing to measure, it does not reflect the engines’ performance in real cases: the empty overhead is a constant time cost which is going to be lost in the noise of an actual game.
Thus, for example, it would be very wrong to run a benchmark with a single object and conclude that “engine A is faster than engine B” based on such results.
Try small scenes and large scenes.
Not all engines scale well. Some engines may be faster with small scenes, but collapse completely with large scenes – because large scenes have a tendency to expose O(N^2) parts of an engine.
Traditionally it is wise to “optimize for the worst case”, so benchmarks involving large scenes tend to have a higher weight than those involving small scenes. Note that “small” and “large” are vague terms on purpose: a large scene in a game today might be considered a small scene in a game tomorrow. And at the end of the day, if it is fast enough for your game, it does not matter that an engine does not scale beyond that. It may matter for your next game though.
The point is: here again it is difficult to conclude from a limited set of benchmarks that “engine A is faster than engine B”. You may have to refine your conclusions on a case-by-case basis.
Be aware of sleeping.
Virtually all physics engines have “sleeping” algorithms in place to disable work on non-moving, sleeping objects.
While the performance of an engine simulating sleeping objects is important, it is usually not the thing benchmarks should focus on. In the spirit of optimizing the worst case again, what matters more is the engine’s performance when all these objects wake up: they must do so without killing the game’s framerate.
Thus, PEEL typically disables sleeping algorithms entirely in its benchmarks, in order to capture the engines’ ‘real’ performance figures. Unfortunately some physics engines may not let users disable these sleeping mechanisms, and benchmarks can appear biased as a result – giving an unfair advantage to the engines that put all objects to sleep.
Obviously, concluding that engine A (with sleeping objects) is faster than engine B (with non-sleeping objects) is foolish. Keep your eyes open for this in your experiments and benchmarks.
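For reference, here is roughly what disabling sleeping looks like in two engines. This is from memory; check the headers of the version you are actually using:

```
// PhysX 3.x: a zero threshold means the body never goes to sleep.
rigidDynamic->setSleepThreshold(0.0f);

// Bullet: opt the body out of deactivation entirely.
rigidBody->setActivationState(DISABLE_DEACTIVATION);
```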
Be aware of solver iteration counts.
Most physics engines have a fast iterative solver that uses a default number of iterations. That default value may be different in each engine. For fair comparisons, make sure compared engines use the same number of iterations.
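For reference again, here is roughly where that knob lives in two engines (from memory; check the respective headers, and note that the defaults may change between versions):

```
// PhysX 3.x: per-body iteration counts
// (defaults to 4 position iterations / 1 velocity iteration)
rigidDynamic->setSolverIterationCounts(8, 2);

// Bullet: global iteration count on the solver info (defaults to 10)
dynamicsWorld->getSolverInfo().m_numIterations = 8;
```

Note also that the knobs are not equivalent: one is per-body, the other is global, which is yet another thing making “fair” comparisons difficult.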
Alternatively, tweak the number of iterations in each engine until they all use roughly the same amount of time, then check which one produces the best simulation quality for the same CPU budget.
If a complex scene e.g. with joints does not work well by default in engine A, but works well with engine B, think about increasing the number of iterations for engine A. It might make it work while still remaining cheaper overall than engine B. And so on.
Comparing how engines behave out-of-the-box, with their default values, is only the tip of the iceberg.
Artificial benchmarks are not an actual game.
What works in the lab does not always work in the field. A good result in an artificial benchmark may not translate to a similarly good result in the final game. Good results in artificial benchmarks are just hints and good signs, not definitive conclusions. Take the results with the proverbial grain of salt.
Benchmarks are often artificial because they capture situations that would not actually happen in a game. At the same time, situations that would actually happen in a game often aren’t complicated enough to expose significant differences between engine A and engine B, or they are too complicated to recreate in a benchmark environment.
Similarly, physics usually only takes a fraction of the game’s frame. Thus, if engine A is “2X faster” than engine B in benchmarks, it does not mean that using engine A will make your game 2X faster overall. If your physics budget is 5% of the frame, even if you switch to an incredible physics engine that takes absolutely no time, you still only save 5% of the game’s frame. Thus, it might actually be reasonable and acceptable to switch to a slower engine if it offers other benefits otherwise (better support, open source, etc).
Benchmarks are never “done”.
There is always some possible scenario that you missed. There is always a case that you did not cover. There is maybe a different way to use the engine that you did not think about. There is always the possibility that an engine shining in all available benchmarks performs poorly in some other cases that were not captured.
There are more than 300 tests in PEEL, and still it only scratches the surface of what supported physics engines can do. Already though, in the limited set of available tests, no single engine always ends up “fastest”. Sometimes engine A wins. Sometimes engine B wins.
April 7th, 2015
Version 1.01 has been released. Download link.
Release notes:
* April 2015: v1.01 - the Bullet single-triangle-mesh issue
- the Bullet plugin was crashing or behaving oddly in all scenes featuring the “single triangle” mesh. This has been fixed. The reason was that the triangle’s data was not persistent (contrary to what happens for other meshes), and since Bullet does not copy the data, bad things happened. It looks like all the other engines copy the data, since they were working fine. Thanks to Erwin Coumans for figuring out the root of the problem.
- Opcode2 plugins will not crash anymore in raycast scenes without meshes (they won’t do anything though).
April 4th, 2015
I am very happy to announce the first public release of PEEL - the Physics Engine Evaluation Lab.
I briefly mentioned it on this blog already, here.
Source code is included for the main program and most of the PINT plugins. That way you can create your own test scenes and check that everything is done correctly, and benchmarks are not biased.
Pre-compiled binaries for most of the plugins are provided, for convenience. Some of the binaries (in particular the Havok plugins) have been removed, since it is unclear to me whether I can distribute them or not. On the other hand some plugins are currently only available as binaries (Opcode2, ICE physics…).
Please refer to PEEL’s user manual and release notes for more information.
Have fun!
(As usual, the bitcoin tip jar is here if you like what you see.)
March 7th, 2015
According to the Internet, I am one of the most legendary scene coders but I write unreadable code. LOL.
I totally need to put that on a business card.
PhysX is Open Source (EDIT: or is it?)
March 5th, 2015
https://developer.nvidia.com/content/latest-physx-source-code-now-available-free-github
Note that contrary to what the post says, this is only the second best version (3.3.3). We are currently working on 3.4, which already contains significant changes and significant speedups (for example it includes Opcode2-based mesh collision structures, which provide faster raycast, overlap and sweep queries). I think we will eventually open source 3.4 too, when it is released.
EDIT:
I’ve been reading the internet and receiving private emails after that. Apparently what I wrote is technically wrong: it is not “Open Source” because it does not have a proper open source license, it comes with a EULA, etc.
I notice now that both NVIDIA’s press release (above) and EPIC’s (here) are carefully worded. They actually never say “Open Source”, or even “open source”. They just say things like:
“NVIDIA opens PhysX code”
“PhysX source code now available free”
“The PhysX SDK is available free with full source code”
The weird thing, then, is that many internet posts make the same mistake as I did, and present the news as if PhysX was indeed “Open Source”:
http://techreport.com/news/27910/nvidia-physx-joins-the-open-source-party
http://www.dvhardware.net/article62067.html
http://forums.guru3d.com/showthread.php?p=5024001
https://forum.teksyndicate.com/t/physx-made-open-source/75101
http://hardforum.com/showthread.php?t=1854357
(etc, etc)
Why is everybody making this mistake, if indeed none of the official press releases actually said that?
I’ll tell you why.
That’s because the distinction between “NVIDIA opens PhysX source” and “PhysX is open source” is so subtle that only pedantic morons (sorry: misguided souls) would be bold enough to complain about it when given something for free.
Give them a finger, they’ll take the whole hand, and slap you with it.
I have the feeling this is the only industry where people are so insane and out of touch with reality. You’re given a free Porsche, and then you complain that it is not “really free” because you still need to respect the “strings attached” traffic code. Hello? Just say “thank you”, enjoy what you’re given, or go buy a Ferrari if you don’t like Porsches. Jeeez.