No such thing as a small change
PerlMonks |
The only useful measures of productivity in any field are:
You seem to be more interested in the first measure - correctly finished tasks per unit of time. In programming, there is a corollary measurement that I allude to in my sig - the difficulty of changing a worker's product. If you produce code that works perfectly twice as fast, but it takes 10x longer to make a change, you have cost the company more money in maintenance than you saved in production. And, given that most applications spend 80% of their life cycle in maintenance, this can be a very significant criterion.

And then there's the more ephemeral stuff. Let's say we have a team member who doesn't produce a lot of code, and what they do produce isn't very good. But they have a very good understanding of the application's architecture and have been instrumental in avoiding a number of pitfalls. What value does that person bring to the team? Personally, I like having people like that on board. But how do you measure "pitfalls avoided through lunch discussions"?

Ultimately, it boils down to the fact that programming isn't engineering - it's sculpting or music composition. While you can have production quotas in the arts1, that leads to a very stagnant output with little innovation. And we can see this in our field. Take a look at the innovators of programming theory and how they tend to work. Then look at the corporate drones and how they tend to work.

Putting metrics on programming output isn't a bad thing. But it's not clear how one measures maintainability, code quality (not kwalitee), and various other "unmeasurables." And, frankly, code coverage is probably more important than the number of tests. I may only need 30 tests to cover these 600 lines of code, but I may need 600 tests to cover these 30 lines over here.

The rest of your proposed metrics fall into the same category. ("Number of lines made reusable" - that's called refactoring, and it's something everyone should do on a continuous basis!) As for "average time to fix" ...
that kind of metric scares the bejeezus out of me. It assumes that your programmers are slackers who only intend to mooch off the company. Bugs will take however long they take to find. Fixing a bug is almost always very quick, once you've isolated the problem and generated a repeatable testcase. Managers who say "This bug has been open for 3 weeks. Fix it already!" don't understand that creating a repeatable testcase can take 99% of the time needed to actually fix a bug.
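The coverage-versus-test-count point can be sketched in a few lines (Python here for brevity; the function names are hypothetical, made up for illustration). Straight-line code gets full coverage from a single test, while short-but-branchy code needs a test per branch combination you care about:

```python
def shipping_label(length: int) -> str:
    # Straight-line code: one test executes every line (100% coverage).
    size = length * 2
    return f"parcel-{size}"

def shipping_rate(weight: int, express: bool, insured: bool) -> int:
    # Branchy code: far fewer lines, but each `if` doubles the input
    # space, so covering the behavior takes several tests.
    rate = 5
    if weight > 10:
        rate += 3
    if express:
        rate *= 2
    if insured:
        rate += 1
    return rate

# One test fully covers shipping_label...
assert shipping_label(4) == "parcel-8"

# ...but shipping_rate needs a test per branch to reach the same coverage:
assert shipping_rate(1, False, False) == 5    # no branch taken
assert shipping_rate(11, False, False) == 8   # heavy parcel
assert shipping_rate(1, True, False) == 10    # express doubles the rate
assert shipping_rate(1, False, True) == 6     # insurance surcharge
```

Counting tests would score the second function as "better tested" four-to-one, even though both end up equally covered - which is why coverage is the more honest metric.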
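The "repeatable testcase" workflow can be sketched as a regression test (a minimal example with hypothetical names; the bug and the fix are invented for illustration). The slow part is discovering which input triggers the failure; once it's captured in a test, the fix is quick and stays verified:

```python
def total_quantity(csv_line: str) -> int:
    # Hypothetical one-line fix: before the `if f` filter, input with a
    # trailing comma ("3,4,") produced an empty field and int("") raised
    # ValueError. Finding that the crash came from such input was the
    # 99%; the filter itself took a minute to write.
    return sum(int(f) for f in csv_line.split(",") if f)

# The repeatable testcase, distilled from the failing input:
assert total_quantity("3,4,") == 7   # the input that used to crash
assert total_quantity("3,4") == 7    # the ordinary case still works
```

Once a test like this exists, "average time to fix" measures almost nothing but the isolation work that already happened.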
My criteria for good software:
In reply to Re: Measuring programmer quality
by dragonchild