
merge request checklist

by mandog (Curate)
on Oct 03, 2009 at 17:21 UTC ( [id://799033] : perlquestion )

mandog has asked for the wisdom of the Perl Monks concerning the following question:

There has been quite a lot of good discussion on code review, but I haven't seen anything specific on keeping the perfect from being the enemy of the good. Is the checklist below worth the effort of following it?

PURPOSE: establish policies/process for merging
         branches into main line of development.

Before a branch gets merged or rejected, either:

	two members of mvhub-dev vote 'approve' and
	zero members vote 'disapprove',

or:

	Fearless Leader makes an arbitrary and capricious decision.

At each stage of the process, all questions should be answered 'yes'.

Answering 'no' or "I don't know" means it isn't OK enough to proceed.

Before proposing a merge:

	Do all the tests pass?

	Does this merge make trunk better in any way?

	Does this merge NOT make trunk worse in ANY way?

	Does the merge have only one purpose?

	Do I accept responsibility for this merge?
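The first question, "Do all the tests pass?", can be checked mechanically before a merge is even proposed. A minimal, self-contained sketch using the core TAP::Harness module (the throwaway test file below is an assumption so the sketch runs anywhere; a real project would run its own t/*.t files):

```perl
#!/usr/bin/perl
# Run a test suite and report whether it is OK to propose the merge.
use strict;
use warnings;
use TAP::Harness;
use File::Temp qw(tempfile);

# Create a throwaway passing test so the sketch is self-contained;
# a real project would glob its own t/*.t files instead.
my ( $fh, $file ) = tempfile( SUFFIX => '.t', UNLINK => 1 );
print $fh "use Test::More tests => 1;\nok( 1, 'placeholder' );\n";
close $fh;

my $harness    = TAP::Harness->new( { verbosity => -2 } );
my $aggregator = $harness->runtests($file);

print $aggregator->all_passed
    ? "All tests pass - OK to propose the merge.\n"
    : "Failures remain - do not propose the merge yet.\n";
```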

Before approving a merge:
	Can I say 'yes' to all the pre-merge proposal questions above?

	Do I understand all the changes?

	Is the code OK with app-mvhub/doc/policy/code_format.txt?

	Is the code/design as clean as reasonably possible?

	Does the merge request meet its stated goal?

Before abstaining:

	If two other people vote 'approve', will no harm be done?

Before voting 're-submit' or 'disapprove' on a merge:

	I am NOT disapproving because related improvements remain to be made.

EXAMPLES:

Bob submits:

	+ for ( my $i = 1 ; $i < $param_no + 1 ; $i++ ) {
	+     $param = $test_href->{ 'param' . $i };
	+     do_something_with($param);
	+ }

Betty votes 're-submit' because there is a cleaner way to write the code:

	foreach my $key ( keys %$test_href ) {
	    do_something_with( $test_href->{$key} );
	}

Sue submits a merge request with a new .t file to check that all SQL keywords are capitalized.

Bob votes 'approve'. He thinks the test should also check that each keyword is on a line of its own, but that improvement can be made in a separate merge request.

Ann votes 're-submit': 'INSERT' is a keyword, and Sue's .t file doesn't catch it.

Joe submits a merge request that:

	1) updates the checklist for moving to production
	2) applies the approved perltidyrc to a .pm file

Swetha votes 'disapprove': Joe should have submitted two separate merge requests.
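Sue's test might look something like the sketch below (Test::More is core Perl; the file name, keyword list, and sample statement are assumptions for illustration, and the list deliberately omits 'INSERT' to show the hole Ann spotted):

```perl
# t/sql_keyword_case.t - fail if a known SQL keyword appears in
# lower case. The keyword list and sample statement are hypothetical;
# note the list omits 'INSERT', which is the gap Ann votes
# 're-submit' over.
use strict;
use warnings;
use Test::More;

my @keywords = qw(select from where update delete);   # no 'insert'!

# In a real .t file this would be read from the codebase's SQL.
my $sql = "SELECT name FROM users WHERE id = 1";

for my $kw (@keywords) {
    # /\b...\b/ without /i matches only the lower-case spelling.
    unlike( $sql, qr/\b\Q$kw\E\b/, "'$kw' never appears in lower case" );
}

done_testing();
```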

Replies are listed 'Best First'.
Re: merge request checklist
by ELISHEVA (Prior) on Oct 04, 2009 at 07:52 UTC

    I have some qualms about the one-purpose criterion. Other than that, the questions are good. The main problem is: how do you know you have the right answer? I am not convinced voting is the way to determine this.

    The merges you have discussed above are all fairly trivial changes to a codebase - patches and tests for standard compliance. But there are many other sorts of merges that often have widespread effects: pre-release renaming of poorly named public API methods and restructuring needed to facilitate the next round of enhancements are just two that come to mind. Unless you define the term "single purpose" in a very broad manner, it may not be so easy to classify these merges as single purpose. At the very least it is a matter of debate that two voters can easily disagree on. Your voting process can quickly turn into "private good" being the enemy of "collective good" unless there are clear and commonly held definitions of what is meant by single purpose.

    The same problem can happen with tests. If one person feels that capitalization of SQL keywords is an important standard but another feels it is a matter of style, then your voting process could end up with a stalemate or turn very political as one person trades votes with another in favor of their favorite standards. Most good managers are reluctant to get involved in team dynamics unless the project is going terribly off-track so the "arbitrary and capricious decision of a manager" may not be as much of a check as you think. Furthermore, in my own management experience, if I've defined a process and that process needs my arbitrary intervention on a regular basis then I tend to consider that process broken.

    But the biggest problem is actually answering the questions. The benefit of small, focused patches is fairly easy to assess. Larger-scale changes often require significant analysis before one can be certain that the net impact will be positive. If everyone on the team had to spend their time gathering the information to vote properly, that too could stalemate a project. People with deadlines to meet would defer their vote in favor of their own work. Those more willing to vote quickly may not fully understand the amount of analysis needed to assess the change.

    A final problem is with the testing criterion. One important category of tests is regression tests. However, there is a fundamental chicken-and-egg problem with using them to assess the validity of a complex merge on a production codebase. If you run the merge on a pre-merge branch, you aren't necessarily running it on the same code as trunk+merge. If you do the merge first and then roll back when there are problems, you need a way to ensure that team members don't accidentally download the trial merge before it has been fully tested. Any merge process also needs a way to deal with this problem.

    In my own work, I prefer a merge process that relies on complementary skills. Rather than decide by majority vote, each merge has to pass approval by four types of team experts: someone with expertise in the overall architecture, someone with expertise in team coding standards, someone with expertise in testing and of course, the person who created the change set in the first place. On a large team where people specialize these will in fact be four separate people. On smaller teams a single person may take on two or more of these roles.

    In my opinion, a much better check on the perfect being the enemy of the good is the reality of the marketplace. The line between good enough and perfectionism never exists in isolation from the market. Some applications need more stability than others. On the other hand, no one benefits if the project is so delayed that people lose their jobs, or if the value of your ESOP or options is nil because the company is so slow to market that the share price is heading into the darkness of Sheol. On open source projects, money isn't always the motive; exposure and mindshare may be its replacement. Helping the team connect the dots between technical decisions, pride of work, and business implications (or whatever alternate market definition of success is in play) is one of the most important jobs of the manager. When it is done well, it is one of the best possible controls on perfectionism because it is organic and reality based.

    Best, beth

    Update: added some reflections on how to control perfectionism.