While this may not help with the explanation to the suits, here's a good way for a programmer to understand it:
Writing a compiler that can emit perfectly parallelized ASM for an arbitrary program on a server with an arbitrary number of CPUs runs into the same kind of undecidability as the halting problem. There are no general rules for deciding which two statements can safely execute in parallel and which can't, because that requires knowing which statements depend on each other, and that is very hard for a human (let alone a computer) to determine. (Functional languages have it easier if they enforce a no-side-effects rule. For instance, Scheme is theoretically easier to parallelize than Perl, because more of its code is side-effect-free and therefore doesn't depend on anything else.)
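To make the side-effects point concrete, here's a minimal Python sketch (the function names `record` and `square` are just illustrations, not from any particular codebase). The side-effecting version mutates shared state, so its calls have hidden dependencies; the pure version's calls are independent, so they can be handed to a thread pool without changing the result:

```python
from concurrent.futures import ThreadPoolExecutor

# Side-effecting version: every call appends to shared state, so the
# statements depend on each other; running them in parallel would make
# the order of `totals` nondeterministic.
totals = []
def record(x):
    totals.append(x * x)

# Pure version: no shared state, so the calls are independent and are
# free to run in any order, or all at once.
def square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    # Safe to parallelize because square has no side effects;
    # pool.map still returns results in input order.
    results = list(pool.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```

The compiler (or programmer) can prove `square`'s calls independent just by noting it touches only its argument; proving the same about `record` requires reasoning about every other use of `totals`, which is exactly the hard dependency analysis described above.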
If you think about it, that's a good metaphor for project management: the PM is essentially trying to parallelize the project's tasks. Past a certain point the tasks can't be parallelized any further, and any additional resources go unused.
My criteria for good software:
- Does it work?
- Can someone else come in, make a change, and be reasonably certain no bugs were introduced?