PerlMonks

Winning people over to better development practises

by simon.proctor (Vicar)
on Mar 13, 2006 at 17:21 UTC ( [id://536312] )

Hi,

I've decided to make one last-ditch attempt at getting co-workers to apply better practises at work. To date, I'm the only one who writes test suites and does any form of defensive programming. I've tried to win people over to better styles through informal conversations, but most have confessed that they don't see the point of it all.

However, I have finally managed (after years!) to get a weekly development meeting going, with agendas etc. So I'd like to give it another shot. I'm aiming to do a mini presentation on the whole thing. To help with that, I figured it would be useful to have a general script that I can print and hand out (for people to read later). I'd take this longer prose and summarise it into key bullet points, and hopefully encourage people to read the fuller thing (or, failing that, give me a script to work to). I also want to produce a simple project that starts as a ball of mud and then has tests and defensive programming applied.

I've got a wordy first draft which borrows heavily from books and sites such as refactoring.com and Code Complete. I'm not bothered about that too much, but I would appreciate feedback on what to add and how to construct the demo app. Also, if anyone has done anything like this before and has tips to share, I'd be grateful!

I'll just re-iterate though - this is a wordy first draft!

Design, Implementation and Refactoring

Design is considered a 'wicked problem'. In many cases, producing a design means solving the problem twice: once (even if only in part) to produce the design, and again to prove that the solution works. In fact, it may be that only once the problem is solved do side problems emerge, their existence unknown until the original problem had been worked on.

Whatever the process, a design is produced via a combination of heuristic judgements, best guesses and assumptions. Because of this, many mistakes are made during design. In fact, a good solution and a sloppy one may differ only in one or two key decisions, or perhaps in choosing the right tradeoffs.

Because of this, good designs evolve through meetings, discussions and experience. In some cases, they also improve through partial implementation (hence the 'wicked problem' moniker).

A design, to stand a chance of working, should also restrict possibilities. Because time and resources are not infinite, the goal of the design should be to simplify the problem into a form acceptable for implementation. Not all processes for this are the same. Each new problem introduces an entirely new set of variables. Failure to recognise this can result in the wrong technique, tool or process being applied.

The success of the implementation can be measured in different ways. Glibly, it can be measured as 'it does what we want', after which the solution is left to rot until the next problem occurs. Many projects fail due to poor management, poor requirements and so on, but equally many (especially software projects) fail due to complexity.

Managing complexity is a key factor in ensuring success. If a solution is too complex (either by design or through evolution), it becomes increasingly difficult, and eventually impossible, to maintain. This is a major source of cost and resource overhead.

Complexity can arise in simple ways, for example:
  • A complex solution to a simple problem
  • A simple, incorrect solution to a complex problem
  • An inappropriate, complex solution to a complex problem
Managing this allows many design considerations to become much more straightforward.

Characteristics of a good design:
  • Minimal complexity
  • Ease of maintenance
  • Loose coupling
  • Extensibility
  • Reusability
  • High fan-in (heavy use of low-level utility routines; software design)
  • Low-to-medium fan-out (software design)
  • Portable
  • Lean
  • Layered (predominantly software design)
  • Standard techniques
Over time, common solutions to common problems emerge. These common solutions are reasonably abstract: general enough to be widely applicable, but specific enough that they can be recognised when applied to a solution. Application of common solutions (or patterns) can achieve many of the above characteristics. Unfortunately, they are not always applied (or, worse, are applied incorrectly), for the reasons already stated.

It is often the case that the first implementation (or even the first few of many) is not easily maintainable, simple or reusable. Rather than stay stuck in the design process, it can be, and often is, more advantageous to take a pragmatic approach. As stated, it can be impossible to solve a problem satisfactorily without first solving it.

Here, the best approach is to make the best decisions possible at the time, then re-examine the problem and solution for signs of good and/or bad design. With the experience of the implementation, improvements are easier to spot and mistakes easier to find. This process is called refactoring. Refactoring can include shifting to patterns (or between patterns) where applicable, removing now-unneeded functionality, or reducing complexity.

By refactoring, we examine what we thought we knew, what we tried and what actually happened and try to make it
  • simpler
  • easier to maintain
  • reusable if possible
  • easier to understand
Even achieving only one of these can be critical to the long term success of a project.

While implementing, analysing and refactoring a solution to a problem, it is important to be able to prove your solution works as promised. Critically, it is also important to be able to prove what happens to your solution when things don't go to plan. Understanding your corner cases and the limits of your inputs and outputs will test your assumptions. It will also allow you to plan for and mitigate unforeseen circumstances (at least as much as possible).

In software design, software testing is used to provide this proof. By tying the development of tests directly to the implementation of the software solution, the solution is built in parallel with the tests that prove it works. Aside from the benefits above, this testing also provides:
  • a test of the design at a low level (how it works, couples, simplicity etc)
  • proof that changing one part of the system hasn't broken another part of the same system
Accepting that software must evolve as requirements change, and that the complexity of the solution changes with them, mandates that software testing be included in the production of any solution right from the outset. This testing allows the assumptions to be checked and rechecked even before higher-level testing is considered. After all, if the software doesn't work as advertised, there is no point arranging usability and acceptance testing. If you don't know how your software will fail, there is no point putting it up for client review.
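In Perl, this parallel development of code and tests is usually done with Test::More. A minimal sketch (the add function here is a hypothetical example, standing in for whatever the real module provides):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 3;

# Hypothetical function under test -- in real code this would live
# in its own module and be use()d here.
sub add {
    my ($x, $y) = @_;
    return $x + $y;
}

# Each assumption about the code is written down as a test,
# so it is re-checked automatically on every run.
is( add(2, 2),  4, 'adds two positive numbers' );
is( add(-1, 1), 0, 'handles negative numbers'  );
is( add(0, 0),  0, 'handles zero'              );
```

Run it directly with perl, or via prove, and the assumptions are re-verified after every change.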

In producing your implementation, it is also important to code defensively. Rather than assume that resources are available (say), you test that your assumptions are true before working with them. By doing so, you produce a simple first candidate area for your software tests. If you are working with a resource and it 'goes away', you can produce a test that re-enacts this scenario. Once this test is written, you work with your code until all the problems are fixed, handled appropriately or documented. Through this simple process of defensive programming coupled with testing, the reliability of the solution should dramatically increase.
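A small sketch of what this looks like in Perl, assuming a hypothetical read_config routine (the function and filename are illustrative only):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Defensive version: test the assumption that the resource is
# available before working with it, instead of assuming success.
sub read_config {
    my ($file) = @_;

    # Assumptions checked explicitly, with clear errors on failure.
    die "Config file '$file' does not exist\n"   unless -e $file;
    die "Config file '$file' is not readable\n"  unless -r $file;

    open my $fh, '<', $file
        or die "Cannot open '$file': $!\n";

    my @lines = <$fh>;
    close $fh;
    return \@lines;
}

# Because the failure modes are explicit, a test can re-enact the
# 'resource goes away' scenario:
#   eval { read_config('/no/such/file') };
#   like( $@, qr/does not exist/, 'missing file reported cleanly' );
```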

Testing can also form part of the installation process. Deployment of the solution needs proof that it is installed and operating correctly. As a suite of tests and benchmarks has already been produced, what better method of proving the deployment is ready for acceptance testing?
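For a Perl distribution, the conventional build sequence already does this (assuming a standard ExtUtils::MakeMaker layout; the commands below are the usual toolchain, not anything project-specific):

```shell
# Build, then run the full test suite before installing;
# 'make install' is only reached if every test passes.
perl Makefile.PL
make
make test && make install

# The same suite can be re-run on the deployed machine at any time:
prove -l t/
```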

Summary
  • Examine the problem simply
  • Worry about only what you need to implement
  • Implement it as best you can
  • Re-examine the solution and the problem
  • Repeat
This can be helped with defensive programming, pragmatic design, refactoring to common solutions where appropriate and testing at all stages of implementation.

Replies are listed 'Best First'.
Re: Winning people over to better development practises
by dragonchild (Archbishop) on Mar 13, 2006 at 17:56 UTC
    You're being waaaaay too complicated, and you missed the point. The point behind all these practices has nothing to do with the initial product. It's all about managing change.

    If you never had to change your program, does it really matter how well it's written? Do you really need the test suite? ... No, not really.

    Here's a very simple way to get someone to understand the power of test suites:

    • Grab a copy of CGI.pm off of CPAN
    • Ask a coworker to add a small, but non-trivial, feature.
    • If your coworker isn't Lincoln Stein, they will go goggle-eyed
    • Offer the test suite and ask them if having it makes them feel better about making the change.
    • Don't say another word.

    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
      Yeah, I figured I was. However, I wanted to get everything down first and see if I was missing the point or was simply off the mark by a few points.

      Your method of demonstrating test suites is a good idea but sadly would not work where I am. My co-workers would simply make the change and then shotgun-debug until it worked. It wouldn't even be a concern except when explaining why it's taking so long to make the change.

      I see your point about change but not about the test suite. After all, in our environment we need to prove it will work and it won't crash. But still - that is just my opinion :).

      Thanks!
        Your method of demonstrating test suites is a good idea but sadly would not work where I am. My co-workers would simply make the change and then shotgun-debug until it worked. It wouldn't even be a concern except when explaining why it's taking so long to make the change.

        Take a look at the source code for CGI.pm. You can see it here. Then, tell me that again. :-)

        Seriously, this method has worked in the past. It's kinda my big gun when nothing else works.


        My criteria for good software:
        1. Does it work?
        2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

        The point of a test suite is that it proves the code works. If the code is not working with a particular value (for example), then you write a test to prove your point, and then they will have to fix it. Once you have a test, you will always keep it, because it validates that any code changes they implement still work.

        The way you are doing things, without a test suite, you only have two things that say your code works: your co-worker says that it works, and your program does not crash. How do you know that it works correctly? Suppose your program puts a NULL value into the database instead of ''. Your program may not crash, but that does not mean it is doing the right thing.

        Think of it another way, if CPAN did not have any tests, do you think that Perl would be anywhere near where it is today? There would be no way of verifying if there was a bug with a particular version of Perl, or Linux, or a module. Your program would keep crashing and you would have no easy way to determine where you have a bug.
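The NULL-versus-'' point above is exactly the kind of assumption a small test pins down. A sketch with Test::More (the normalise_value helper is hypothetical, standing in for whatever routine prepares values for the database):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical helper: the program should store '' rather than
# an undefined (NULL) value.
sub normalise_value {
    my ($v) = @_;
    return defined $v ? $v : '';
}

# Without these tests, the program 'does not crash' either way --
# only the tests distinguish right from wrong behaviour.
is( normalise_value(undef), '',    'undef becomes empty string, not NULL' );
is( normalise_value('abc'), 'abc', 'defined values pass through'          );
```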

Re: Winning people over to better development practises
by brian_d_foy (Abbot) on Mar 13, 2006 at 18:59 UTC

    Win people over by showing results. If your way improves their lives, they'll be interested. You have to show them what's in it for them, and not everyone is in it for the purity of experience or artistic beauty. :)

    Here are some things that programmers might care about and what will appeal to them. Tell them how agile programming benefits them personally and you'll probably have an easier time convincing them. Peer pressure through public praise (or shame) can work too.

    • Spending less time debugging
    • Having fewer meetings (for good or bad)
    • See results quickly
    • Dealing with fewer business people or customers
    • Working on their own ideas (instead of overtime on a bug hunt)
    • More time coding (less analysis, design, etc)
    • et cetera

    What you really need is agreement at all levels, including management, about how the programmers should be working. If you don't have the culture in place, you're always going to be fighting a losing battle.

    For the management folks, you'll have to appeal to the things they care about. Maybe you should invite them to some of the meetings. :)

    • Higher productivity
    • Fewer bugs reported by customers
    • Higher sales
    • Lower payroll
    • Reputation in the marketplace
    • et cetera

    As far as design goes, I don't think it has much to do with communication or meetings (or even experience). Lots of bad designs come out of processes that have all three of those. Although it sucks to admit it, some people can design and some people can't. Good design comes from recruiting good designers. I see a lot of people who think that just because they are smart programmers that they are equally smart at everything else. It just ain't so.

    --
    brian d foy <brian@stonehenge.com>
    Subscribe to The Perl Review
Re: Winning people over to better development practises (TDD)
by eyepopslikeamosquito (Archbishop) on Mar 13, 2006 at 21:19 UTC

    When I gave a lunchtime talk evangelizing TDD (Test Driven Development), I summarised the benefits as follows:

    • Improved interfaces and design. Especially beneficial when writing new code. Writing a test first forces you to focus on interface - from the point of view of the user. Hard to test code is often hard to use. Simpler interfaces are easier to test. Functions that are encapsulated and easy to test are easy to reuse. Components that are easy to mock are usually more flexible/extensible. Testing components in isolation ensures they can be understood in isolation and promotes low coupling/high cohesion. Implementing only what is required to pass your tests helps prevent over-engineering.
    • Easier Maintenance. Regression tests are a safety net when making bug fixes. No tested component can break accidentally. No fixed bugs can recur. Essential when refactoring.
    • Improved Technical Documentation. Well-written tests are a precise, up-to-date form of technical documentation. Especially beneficial to new developers familiarising themselves with a codebase.
    • Debugging. Spend less time in crack-pipe debugging sessions. When you find a bug, add a new test before you start debugging (see practice no. 9 at Ten Essential Development Practices).
    • Automation. Easy to test code is easy to script.
    • Improved Reliability and Security. How does the code handle bad input?
    • Easier to verify the component with memory checking and other tools.
    • Improved Estimation. You've finished when all your tests pass. Your true rate of progress is more visible to others.
    • Improved Bug Reports. When a bug comes in, write a new test for it and refer to the test from the bug report.
    • Improved test coverage. If tests aren't written early, they tend never to get written. Without the discipline of TDD, developers tend to move on to the next task before completing the tests for the current one.
    • Psychological. Instant and positive feedback; especially important during long development projects.
    • Reduce time spent in System Testing. The cost of investigating a test failure is much lower for unit tests than for complex black box system tests. Compared to end-to-end tests, unit tests are: fast, reliable, isolate failures (easy to find root cause of failure). See also Test Pyramid.

    Note that the first point above is the most important.

    Update: Some further points added later from Effective Automated Testing:

    • It is easier/cheaper to write automated tests for systems that were designed with testability in mind in the first place.
    • Interfaces Matter. Make them: consistent, easy to use correctly, hard to use incorrectly, easy to read/maintain/extend, clearly documented, appropriate to audience, testable in isolation. For more detail see On Interfaces and APIs.
    • Dependency Injection is perhaps the most important design pattern in making code easier to test.
    • Mock Objects are frequently useful and are broader than unit tests - for example, a mock server written in Perl (e.g. a mock SMTP server) to simulate errors, delays, and so on.
    • Consider ease of support and diagnosing test failures during design.
    • Single step, automated build and test are a pre-requisite for continuous integration and continuous delivery.
    • Automation is essential for tests that cannot be done manually: performance, reliability, stress/load testing, for example.
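    The dependency-injection point above can be sketched in a few lines of Perl. The class names here are hypothetical, purely for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A notifier whose transport is injected by the caller rather than
# hard-coded, so tests can pass in a mock instead of a real
# SMTP connection.
package Notifier;
sub new {
    my ($class, %args) = @_;
    my $self = { transport => $args{transport} };  # injected dependency
    return bless $self, $class;
}
sub notify {
    my ($self, $msg) = @_;
    return $self->{transport}->send($msg);
}

# A mock transport that records messages instead of sending them.
package MockTransport;
sub new  { bless { sent => [] }, shift }
sub send { my ($self, $msg) = @_; push @{ $self->{sent} }, $msg; return 1 }
sub sent { @{ $_[0]{sent} } }

package main;

my $mock = MockTransport->new;
my $n    = Notifier->new( transport => $mock );
$n->notify('deploy finished');
print scalar( $mock->sent ), "\n";   # prints 1: one message recorded
```

    Because the transport is passed in, the Notifier can be tested in complete isolation from the network.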

    The talk was well-received and did change both development practices and management awareness. I also illustrated each point with specific examples from our workplace (e.g. a developer refactoring without a test suite causing a rush of new bug reports from customers).

    See Also

    Updated 19-Mar-2006: Improved wording. Also note that many of the ideas for these bullet points were derived from chromatic and Schwern's excellent Test::Tutorial talk. See also Unit testing -- module, book, and website suggestions wanted. Updated 23-Aug-2018: Minor improvements to wording (keep in sync with Effective Automated Testing). See also You've gotta have tests! by talexb. July-2019: Added See also section.

      Thanks, that's a good list to start with.

      I wondered, did you find it difficult to handle some people in winning them over to this? How did that change over time?

        I wondered, did you find it difficult to handle some people in winning them over to this?

        For the people in my team, I had no difficulties whatsoever because I could sit with them and show them how to do it (as adrianh notes below: "show not tell").

        For people in other teams, it depended on their interest and aptitude: some really surprised me by writing excellent unit tests without any prodding at all; others didn't seem to get it; others complained that they didn't have the time. Nobody said it was a stupid idea, the most common reason for not doing it was "I have a hard deadline and I just don't have the time right now, maybe I'll try it on my next project".

        How did that change over time?

        It's been over a year now, and TDD is growing slowly but steadily as the early adopters spread the word and show others how to do it. There is a certain percentage of programmers (maybe more than half) who don't read or study anything outside of work; the only way to reach them is to sit with them and show them how to do it.

Re: Winning people over to better development practises
by perrin (Chancellor) on Mar 13, 2006 at 19:11 UTC
    Just make it policy. If people who have never tested don't have to test, they will generally not try it, since it's bound to result in at least a temporary slowdown in work and they don't want to get blamed for that. You have to make it officially okay for them to "waste" time on writing tests before they are likely to try it. (To be honest, it's resulted in a permanent slowdown in my work, but has also reduced the number of bugs and made it easier to make changes later, so I think that's a fair trade.)
Re: Winning people over to better development practises
by samizdat (Vicar) on Mar 13, 2006 at 19:30 UTC
    While I agree with all that's in your manifesto, it's way too pompous. Spend more time thinking of small, concrete steps that will show obvious benefit.

    dragonchild and brian_d_foy both had some excellent comments. I suspect most of your programmer peers have not worked together on projects. Besides testing, spending the time to define APIs is critical. In my view, the more top level review a project gets, the better and quicker its success. And no, I don't consider management "top level", although they can often be included in another buzzphrase that's important, "Customer-centered solution".

    Even if you just improve your team-based development practices, you'll be better off. Add to that a clear and complete specification, and you'll be worldbeaters!

    Don Wilde
    "There's more than one level to any answer."
Re: Winning people over to better development practises
by Anonymous Monk on Mar 13, 2006 at 19:21 UTC
    Your approach is way too subtle! I recommend the "Sledgehammer Method" to co-worker management.

    If your co-workers refuse to learn, hit them repeatedly with a very large sledgehammer until they do learn. A few will not make the grade, but HR can replace them with more suitable replacement workers.

    If HR fails to produce suitable replacement workers, the "Sledgehammer Method" can also be applied to improve the quality of HR managers! It's foolproof!!!

    Disclaimer: Use of the "Sledgehammer Method" may be illegal in some jurisdictions. Consult your local listings...

      As Al Capone is alleged to have said, "Kind words and a gun will get you farther than kind words alone".

Re: Winning people over to better development practises
by adrianh (Chancellor) on Mar 14, 2006 at 12:54 UTC
    So I'd like to give it another shot. I'm aiming to do a mini presentation on the whole thing.

    Of course, this will only work if the developers want to change. It's been my experience that presentations on best practice only have any effect when developers realise that they have a problem and want to fix it.

    One of the best ways to get people to realise this, in my experience anyway, is to show, not tell. When you work with somebody else, do the "right thing". Your co-worker sees that this results in an easier time for all involved and then wants to learn how to do it too. Then you can give the presentation, because that's the point at which your co-worker wants to learn.

Taoist promotion of good development practices
by Anonymous Monk on Mar 14, 2006 at 04:38 UTC

    To win people over to good development practices, projects must be run such that when people make major development practice errors, they will as soon and as clearly as possible see the *negative results* of their mistakes. And so that when people get things right, they will as soon as possible see the good effects *on the project* of getting things right.

    A good test department could help with this, by sending developers prompt evidence that their crappy development style is preventing the developers from properly debugging their code.

    Or people could be expected to give copies of their code to peer developers for review, and watch as their peer developers say "WTF? Your documentation is not clear to me at all, and your code design makes your code nearly impossible to document because it's such a mess." Hopefully the peer developers won't get caught in a "yes the emperor's new clothes are beautiful indeed" type effect. Or an "I'll praise your crappy code if you praise my crappy code" effect, in which case the problem is not so much that you need to promote better development style as that your coworkers are crapheads :-), or just don't know any better (yet).

    BUT if it's impossible to show the developers the problems their bad development style is causing, maybe it's not bad development style. :-)

    For example, some people think that "defensive programming" means writing your code so that when things start to get messed up inside your program's/library's/module's running state, the code tries to cover the problems up for as long as possible. This is bad for debugging. You want to know about bugs in your program as early as possible. If people think "covering up bugs until too late" is what "defensive programming" means, you will never win them over to it, or you will be sorry if you do win them over. :-/

    Last but not least, you might say, "Gee, I hate when I have a humongous big pile of code and I have to debug all of it at once. That's really a big hassle. I like to make a little piece of code that's easy to test, and gradually work up, so that I don't ever have to put up with a humongous ball of mystery bugs, i.e. MONDO CRAPPO."

    Then the developers (yourself included, you might not know as much about the best way to develop as you think you do) can learn from their own mistakes early and often!

    The trick is not to say "RESPECT MAH AUTHORITY! FOLLOW MAH COMMANDMENTS AND ABSOLUTELY BELIEVE ANYTHING I SAY!" but rather to set things up so that people see for themselves what are better vs. worse practices. Even if they might discover that the books you are reading are partly wrong. :-)

    Chris in Tucson, Iconoclast (Why not join tucson.pm? Oh, you're probably not IN Tucson, nevermind.)

Node Type: perlmeditation [id://536312]
Approved by Tanktalus
Front-paged by brian_d_foy