Archive for the 'Programming' Category

The prophet programmer

Friday, September 9th, 2011

It seems there is one in every team. He is living by the book, following the rules to the letter. He considers himself bright and smart because he always knows the latest trends, the latest official Right Way to write things according to the C++ Standard. He follows it religiously, even the new rules not implemented yet by any compiler. And he looks down on you if you do not write “proper” code. He is the prophet. The Book is right. You must follow the rules.

His zealotry has no limits. He will conscientiously rewrite your “illegal” C++ when you are not looking. For your own good of course.

He will go and meticulously replace all your “i++” with “++i” in your vanilla integer for-loops.

He will jump through incredible hoops to get rid of a lonely “goto”, used to skip a large block of code and jump to your function’s epilogue.

He will use unreadable, cryptic, unbelievable templates to replace your simple #define, because “#define is bad”.

He will tell you with a straight face that he went ahead and replaced all the “NULL” in the codebase with “0” or “nullptr”, because “NULL is bad C++”.

He filled his head with many of those mantras, and he is obsessed with them. They are the rules. They must be followed.

Well, my dear prophet programmer, I have news for you: you are not bright. You are not smart. You are not clever. You’re a fucking robot. It does not take a genius to blindly follow recipes from your cookbook. You are a brain-washed moron doing a machine’s job. If you blindly follow the Standard, you end up with standard code, which by definition anybody can write.

The best programmers are not the ones blindly following anything. They are exactly the opposite of you. The best programmers are the ones who know when rules should be bent, when boundaries should be broken, and when envelopes should be pushed. The best programmers are the ones who, constantly, on a case by case basis, a hundred times a day, stop for a moment and think about how to best solve a problem. They are not the ones turning off their brain to follow a recipe. They are not the ones trying to fit a preconceived solution (design pattern?) to everything. If a preconceived solution solves your problem, it was probably not really a problem worth solving - that is, it is such a common and tired issue that anybody can look up a standard answer in a book. How does solving such a thing in such a way make you “smart”?

The best programmers are creative. They have a big imagination, and they are not afraid to use it. They borrow techniques from one field and apply them successfully to an apparently unrelated field, discovering subtle links and connections between them in the process. They are never satisfied with the status quo.

The best programmers, the heroes, the top coders, like Nick of TCB did with the sync-scrolling eons ago, are the ones who invent new techniques to solve problems that nobody solved before them. By definition they are not standard. They are the very opposite of what you preach.

Multi-SAP redux

Saturday, August 27th, 2011

Some years ago I wrote a bit about the Multi-SAP approach in my SAP document. I think I also said I would release some code, but never did. Erwin Coumans tried to implement one version in Bullet, but it’s still marked as “experimental” some years later, and it looks a bit like everybody gave up on that technique - including us in PhysX 3.0.

Well I didn’t.

I still think it’s a very nice method, and very easy to implement once you have a regular sweep-and-prune working. So I’m restarting this effort, refreshing my old prototype, adding some new tricks learnt since last time, etc.

As a teaser, and to show non-believers that yes, it rocks, I quickly added it to my old CDTestFramework, as found in Bullet 2.78. I’ll let you guys test it on your respective machines, but at least here at home, it beats the alternative algorithms in Bullet’s default scene.

msap_demo

Stay tuned…

Clear / reset dynamic arrays

Thursday, August 25th, 2011

The typical STL-vector-style dynamic array usually has 2 functions to “clear the array”. One releases objects and deallocates the memory (size and capacity become 0), one simply releases objects and sets the size to 0 without actually deallocating the memory (capacity remains the same).

Let’s pretend the function names are “clear” and “reset”, and let’s pretend that “reset” is the version deallocating the memory, while “clear” does not.

“reset” is good to save memory and make sure you don’t get arrays that grow out of control, never actually releasing huge amounts of memory that they may not need anymore.

“clear” is good to limit the number of array resizes when the same array is reused over and over, say from one frame to the next.
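To make the two flavors concrete, here is a minimal sketch of such a dynamic array (the class and member names are made up for illustration, not taken from any particular codebase):

```cpp
#include <cstdlib>

// Minimal dynamic array sketch illustrating the two "clear the array"
// flavors described above.
class Array
{
public:
    Array() : mData(nullptr), mSize(0), mCapacity(0) {}
    ~Array() { std::free(mData); }

    void pushBack(int value)
    {
        if(mSize == mCapacity)
        {
            mCapacity = mCapacity ? mCapacity*2 : 4;
            mData = static_cast<int*>(std::realloc(mData, mCapacity*sizeof(int)));
        }
        mData[mSize++] = value;
    }

    // "clear": size goes back to 0 but the allocation is kept,
    // so refilling the array next frame does not reallocate.
    void clear() { mSize = 0; }

    // "reset": size and capacity both go to 0 and the memory is released.
    void reset()
    {
        std::free(mData);
        mData = nullptr;
        mSize = mCapacity = 0;
    }

    int size()     const { return mSize; }
    int capacity() const { return mCapacity; }

private:
    int* mData;
    int  mSize;
    int  mCapacity;
};
```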

But of course the truth is that both are actually bad. “reset” is bad because it makes you resize and reallocate stuff over and over again. “clear” is bad because it will waste memory at some point (Murphy’s Law being what it is, one of those never-deallocated arrays WILL grow out of control even if you think “it never happens”).

However a lot of programmers still continue to use dynamic arrays, and still continue to use either “clear” or “reset” depending on what they think is the best option in each particular case.

I would argue that in most cases except maybe the most trivial ones, one can’t really tell ahead of time whether “reset” or “clear” should be used. And thus, maybe we should not.

Instead, just create a small wrapper doing the proper thing depending on how many objects are actually contained in the array:

void Array::resetOrClear()
{
    const int s = size();
    const int c = capacity();
    if(s > c/2)
        clear();    // more than half the capacity was used: keep the memory
    else
        reset();    // less than half was used: release the memory
}

In other words:

  • if we used more than half the capacity, it’s likely that the memory is not wasted and we may need a similar amount next frame => keep the memory => “clear”
  • if we used less than half the capacity, maybe we’re an array that grew for some time during a spike but now the memory isn’t used anymore => discard the memory => “reset”

You don’t need to pause and think for each array whether to call “clear” or “reset”, just call “resetOrClear” all the time, and it will likely make the right decision, limiting both the number of resizes and the amount of wasted memory.

Or better, of course: don’t use bloody dynamic arrays.

SAP code bugfix

Monday, March 14th, 2011

This is embarrassing, but there was a bug in the previously posted SAP code. Fetch the new version here:

http://www.codercorner.com/Code/SweepAndPrune3.rar

Thanks to Olli-Pekka Räsänen for reporting it!

Don’t trust the compiler

Wednesday, February 9th, 2011

How clean your source code looks is not as important as how clean your generated code looks.

When supporting multiple platforms, your code not only has to work on the weakest machine. It also has to be compiled by the weakest compiler.

Don’t trust the compiler. No, it will not magically optimize your crap.

Faster convex-convex SAT: internal objects

Monday, January 24th, 2011

I noticed that I had a fairly low number of posts in the “physics” category, for somebody working in physics & collision detection all day long. So let me try to increase the ratio a bit. Last time we talked about partial hulls, today I’ll present another optimization for the same problem.
The penetration depth between two convex polytopes can easily be computed using separating-axis tests. But the large number of edge-edge tests often makes this process very slow, and other algorithms such as EPA or Xenocollide are thus often chosen over SAT to compute the MTD between two convexes. There are however a large number of tricks to optimize the SAT version. One of the most efficient ones is to use simple “internal objects”, i.e. simple objects such as a sphere or a box embedded in the convex.
Recall that for each candidate axis we do something like:
bool testSepAxis(const btVector3& axis, const MyConvex& hull0, const MyConvex& hull1, float& dmin, btVector3& mtd)
{
    // Project hull 0 on candidate axis
    float Min0, Max0;
    hull0.Project(axis, Min0, Max0);

    // Project hull 1 on candidate axis
    float Min1, Max1;
    hull1.Project(axis, Min1, Max1);

    // If the projected intervals don’t overlap, the candidate axis is a separating axis.
    // In that case the hulls don’t overlap, we can early exit and report non-penetration.
    if(Max0<Min1 || Max1<Min0)
        return false;

    // Else compute penetration depth (PD) = how much the intervals overlap
    const float d0 = Max0 - Min1;
    const float d1 = Max1 - Min0;
    const float depth = d0<d1 ? d0 : d1;

    // Then keep track of minimal PD. The axis for which the PD is minimal is the MTD.
    if(depth<dmin)
    {
        dmin = depth;
        mtd = axis;
    }
    return true;
}
So we’re looking for the minimal overlap, and the MTD is the axis for which the overlap is minimal. The idea, then, is to first project the simpler, internal objects on the candidate axis, and use the overlap results to skip the real hull projections if we can.
Say our internal objects are spheres (it could also be boxes, but for the sake of the argument let’s use spheres). Let hullOverlap(V) be the overlap value for the hulls on a given axis V, and sphereOverlap(V) be the one for the spheres on the same axis. Since spheres are embedded in the hulls, it should be clear that we always have:
hullOverlap(V) >= sphereOverlap(V)  (1)   (see Figure 1)
Now let minHullOverlap be our current best result, i.e. our current minimal penetration depth computed from real hull projections (not sphere projections). It should be obvious that thanks to (1) we have:
sphereOverlap(V) > minHullOverlap  => hullOverlap(V) > minHullOverlap   (2)
In other words, if sphereOverlap(V) is already bigger than our current best result, hullOverlap(V) will be bigger as well, and thus there is no need to compute it.
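As a sketch of how this early-out can be wired in (all types and names below are hypothetical, and candidate axes are assumed to be unit length):

```cpp
struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Internal sphere embedded in a convex hull.
struct InternalSphere { Vec3 center; float radius; };

// Projecting a sphere on a unit axis is a single dot product:
// the projected interval is [d - r, d + r] where d = center . axis.
// Returns the overlap of the two projected intervals (negative => separated).
inline float sphereOverlap(const Vec3& axis,
                           const InternalSphere& s0,
                           const InternalSphere& s1)
{
    const float d0 = dot(s0.center, axis);
    const float d1 = dot(s1.center, axis);
    const float min0 = d0 - s0.radius, max0 = d0 + s0.radius;
    const float min1 = d1 - s1.radius, max1 = d1 + s1.radius;
    const float a = max0 - min1;
    const float b = max1 - min0;
    return a < b ? a : b;
}

// The early-out from (2): if the spheres' overlap already exceeds the
// current minimal hull overlap, the hulls' overlap will too, so the
// expensive hull projections can be skipped for this axis.
inline bool canSkipHullProjections(const Vec3& axis,
                                   const InternalSphere& s0,
                                   const InternalSphere& s1,
                                   float minHullOverlap)
{
    return sphereOverlap(axis, s0, s1) > minHullOverlap;
}
```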

Internal objects

Figure 1
This is interesting since, of course, projecting a sphere is a lot faster than projecting a convex hull. Projecting a sphere is a single dot product, while projecting a hull is one dot product per vertex (and with a sphere we don’t even need to access the vertex memory).
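For comparison, a hull projection like the one called from testSepAxis looks roughly like this (a sketch with hypothetical names; the loop touches every vertex and its memory):

```cpp
#include <cfloat>

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// One dot product per vertex: computes the [min, max] interval of the
// hull's vertices projected on the candidate axis.
void projectHull(const Vec3& axis, const Vec3* verts, int nbVerts,
                 float& outMin, float& outMax)
{
    outMin = FLT_MAX;
    outMax = -FLT_MAX;
    for(int i=0; i<nbVerts; i++)
    {
        const float d = dot(verts[i], axis);
        if(d < outMin) outMin = d;
        if(d > outMax) outMax = d;
    }
}
```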
As Figure 1 shows, the embedded spheres don’t need to actually touch for this test to be useful - only the spheres’ projections on the candidate axis. In practice, this happens very often, and that single optimization sometimes culls away 90% of the actual hull projections. You can try out this demo for a proof-of-concept.
It is also a good idea to test the MTD from the previous frame first, to initialize minHullOverlap to a small value right from the start - discarding candidate axes as early as possible.
Finally a note about the internal shapes one should choose. A sphere might not be a great fit for long thin rods, in which case an embedded box might be better. But it might be easier to just use both, and at runtime choose whatever internal shape gives the best results. This is what I do in the above demo.

A bold statement

Friday, January 7th, 2011

Let me make a bold statement:

All functions taking a “const String&” parameter as input are bad.

…where String is either your own custom string class, or std::string, whatever. This is bad, of course, because it forces your users to have a String around. If they get the string from an external source (typically as “const char*”) you just forced them to create a temporary String just to call the function (or worse, the temporary will get created without them even noticing). Since you pass it as a const, you will not modify the text and there is no reason to force people to use a String. The proper API just takes a “const char*” parameter.
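As a sketch of the difference (countVowels is a made-up function, not from any real API):

```cpp
#include <cstring>
#include <string>

// The "bad" signature forces callers holding a raw const char* to
// construct a temporary std::string just to call the function:
//
//     size_t countVowels(const std::string& text);
//
// The const char* version serves both kinds of callers with no temporary:
size_t countVowels(const char* text)
{
    size_t n = 0;
    for(const char* p = text; *p; ++p)
        if(std::strchr("aeiouAEIOU", *p))
            n++;
    return n;
}
```

A caller holding a std::string just passes .c_str(); a caller holding a raw const char* pays nothing.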

(Yeah, yeah, I know, it ignores Unicode stuff and loses some minor optimizations when the String class caches the length of the text, whatever, you get the idea of the bold statement. And also, any const function in the String class is stupid, as an old classic GOTW entry showed eons ago).

Finally the truth!

Friday, January 7th, 2011

Just noticed this from http://89.151.96.106/forums/viewtopic.php?f=1&t=54587&start=25

There’s two topics haunting me on this forum:
This one, and a topic where I managed to write that OPCODE was messy code! :)
Pierre Terdiman commented on that, of course. :wink:
That was years ago, but I clearly remember it.
Looking at OPCODE now, of course I realise that it’s actually remarkably clean code! :oops:

All you people who complained about the coding style in OPCODE - you know who you are -, take that! :)

Konoko HD

Wednesday, November 3rd, 2010

The guys at Oni Central did it again… It takes a lot to impress me but I’m constantly baffled by what those people manage to pull off. Anyway this time one of them created a gorgeous HD version of Konoko. Check it out in the screenshots below! (click on thumbnails to load a larger image). In those pics you can also see some of the levels/rooms I’ve been working on recently.

Lockers room and fight in “depot 03”:

The new Xeno-biology lab:

Running lariat in the xeno-lab:

The new “machine room”:

Pistol disarm in the corridors:

Misc throws & weapons poses:

“Best comments”…

Sunday, September 26th, 2010

Thanks for the link, Benoit :)

http://stackoverflow.com/questions/184618/what-is-the-best-comment-in-source-code-you-have-ever-encountered
