The dangers of perfection, and why you should stick with good enough

by redhotpenguin (Deacon)
on Mar 11, 2008 at 07:56 UTC

Monks,

In engineering school I was taught the 'cost, quality, time - pick two' axiom. It has been a kind of guide: whenever something is wrong, I can identify some part of that axiom that is out of balance.

As a quick review, here are short descriptions of each metric:

  • Cost - the number of engineers on the project; a fairly objective and measurable quantity
  • Time - a temporal quantity measured in hours, days, etc.; again, a measurable quantity
  • Quality - highly subjective; this metric is the troublemaker

I am not here to write about cost or time. I am here to write about quality, and in particular 'perfect' quality. To illustrate my point, however, let me define some normalized boundary values for cost and time (despite the fact that I am not here to write about those).

  • Cost:
    • Free
    • Unaffordable
  • Time:
    • Now
    • Forever

Both cost and time have equivalent boundary values of zero and infinity. What happens when we try to define those boundaries for quality?

  • Quality:
    • Broken
    • Perfect

Let's break it down a bit further. There is technical quality, and there is quality associated with meeting the business requirements.

Technical quality has some quantifiable attributes. Don't repeat yourself; make sure your code compiles: those are easy. Creating coding standards is a bit more difficult, since there is more wiggle room. Trying to achieve technical perfection is insanity. You end up with a bunch of wasted time and developers who spend their time arguing over whitespace and other nuances instead of writing the code which keeps everyone employed.
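One way out of the whitespace wars is to let a tool hold the standard instead of the developers. As a hedged sketch (the file path and severity level here are arbitrary examples, not recommendations), the CPAN module Perl::Critic can mechanically check code against an agreed policy set:

    use strict;
    use warnings;
    use Perl::Critic;

    # Severity runs from 5 (gentle: only the most serious policies)
    # down to 1 (brutal: everything). Pick a level once, as a team.
    my $critic = Perl::Critic->new( -severity => 4 );

    # 'lib/MyApp.pm' is a made-up example path.
    my @violations = $critic->critique('lib/MyApp.pm');

    # Each violation stringifies to a description with file and line.
    print @violations;

Once the policy is agreed, arguments about nuances move out of code review and into a one-time configuration decision.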

Business quality also has some definite attributes; they are most often referred to as features. There is another saying I remember from engineering school - 'Shoot the engineer and ship the product'. Loosely translated, it means you don't want to try to meet all of your feature requirements. Why? Number one, your stakeholders will see that you have done this and happily hand you more to deliver, while you haven't yet shaken all the bugs out of what is already being shipped. Number two, you will never ship your product. Stuff always takes longer to ship than expected, and trying to make it perfect will make it take infinitely longer.

So we don't want to ship something broken, and we don't want to ship something perfect. We want to ship something that is good enough. One way to interpret good enough is: not broken, shipped sometime soonish, and at a reasonable cost. It has taken me a while to understand this, but I think I'm getting there. Great products are never perfect; they are good enough.


Replies are listed 'Best First'.
Re: The dangers of perfection, and why you should stick with good enough
by BrowserUk (Patriarch) on Mar 11, 2008 at 09:44 UTC

    As you identify, the problem child is quality. Not just achieving quality, but getting agreement on what constitutes quality. Even getting an honest answer from individuals about what they see as the requirements for achieving quality is impossible, because people will say what they think is expected, regardless of whether they have personally found it useful, simply because no one wants to be seen as the cowboy.

    Put 10 SEs in a room and tell them not to come out until they have agreed upon a definition of software quality, and you have a pretty good definition of that old engineering tool, a long weight. Unless of course it's Friday afternoon, in which case they'll be done by 4.30.

    You do not achieve quality through process

    All you achieve is compliance with process procedures. And the diversion of energies away from producing the code. The arguments will be about the process; and how to achieve the process; and how to measure compliance with the process; and the generation, dissemination and interpretation of reports detailing that compliance. Or not.

    And training courses detailing the process and its procedures. And regular meetings to review the procedures and the state of compliance. And a department charged with measuring that compliance and producing graphs to show it.

    How do you achieve quality?

    Yes. That's a question and I do not pretend to have the answer. For a start, there are as many answers as there are projects. And what constitutes quality for a given project depends upon many, many things: lifetime, risk, cost-benefits, audience, the price the market will bear, etc.

    But even if you had hard and detailed numbers defining all those things (were that possible), there is still another factor: what is the required quality? Customers, be they internal or external, professional or public, do not expect the same levels of quality for everything.

    We do not expect the same levels of fit'n'finish of the interior trim in a commercial vehicle as we do a private car. Or of a £7k micro-mini, as in a £40k luxury saloon. However, the brakes had better damn well work regardless of the cost of the vehicle. And it's probably best if the bonnet doesn't fly open when you're on the motorway.

    In software terms that might mean that your home page is graphics heavy and carries information about special offers. But once the customer starts filling in forms to make their purchase, they're probably not interested in seeing fancy graphics and those same special offers on every darn page. The important thing here is to ensure that when they make a mistake on the forms, and they will, they can back up to correct it and not get dumped back to the first form of 17 and lose all the input they have already typed.
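    In Perl terms, the 'don't lose their input' half of that is cheap to get right. Here is a minimal sketch using the classic CGI.pm of the era, whose form fields are "sticky" by default: on a validation error the form is re-displayed with the submitted values intact. The field names and the validation rule are invented for illustration.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use CGI;

        my $q = CGI->new;

        # Validate only if the form was actually submitted.
        my @errors;
        if ( $q->param('submitted') ) {
            push @errors, 'That email address looks wrong'
                unless ( $q->param('email') || '' ) =~ /\@/;
        }

        # CGI.pm fields are sticky: textfield() re-displays whatever
        # the user submitted, so a failed validation costs them nothing.
        print $q->header('text/html'),
              $q->start_html('Checkout'),
              ( map { $q->p( $q->escapeHTML($_) ) } @errors ),
              $q->start_form,
              'Name: ',  $q->textfield( -name => 'name' ),  $q->br,
              'Email: ', $q->textfield( -name => 'email' ), $q->br,
              $q->hidden( -name => 'submitted', -default => 1 ),
              $q->submit('Continue'),
              $q->end_form,
              $q->end_html;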

    Processes tend to apply the same criteria to all projects and all parts of a project. And that's a recipe for generating the most procedurally compliant white elephant in history.

    The first order of business is to make the software work. Now. Today. Tomorrow. Next week. But not 3 months or 6 months from now. Once the software "works", you can review it. Its functionality. The source code. The backup, maintenance and testing. You can highlight the weaknesses, prioritise them and fix them.

    It doesn't matter how elegant, how well documented, structured, versioned, tested or reviewed your code is, until it works, your quality is 0, nil, zip, nada, non-existent.

    So, make it work. First and fast. And then review it thoroughly from every aspect. Source code layout. External interfaces. Internal interfaces. Testing. Build processes. Effectiveness. Efficiency. In no particular order.

    Then draw up a list of things that need to be improved. Put that list in order of importance. Work your way through that list as far as you can within the timescales available to you. Make sure that your program continues to work after each change. If it doesn't, back out the change and concentrate your resources on redoing it, even at the expense of suspending other lower priority items on your list until you get it right.
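    That "continues to work after each change" check can be mechanised cheaply. A minimal sketch using the standard Test::More; MyApp and basic_transaction() are hypothetical stand-ins for whatever "it works" means on your project:

        # t/00-still-works.t -- the smoke test to run before anything else
        use strict;
        use warnings;
        use Test::More tests => 2;

        use_ok('MyApp');
        ok( MyApp->new->basic_transaction, 'the core path still works' );

    Run prove -l t/ after every change; a red result means back the change out before touching anything else.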

    None of this implies abandoning all good practices. You'd have a hard time persuading experienced programmers to abandon those practices that they have personally seen benefit from over the years anyway. The experienced ones will follow their own set of best practices. They may not be the same set of best practices, but they will work; for them. Where conflicts arise between individual practices, in most cases they will be resolved without management or procedural intervention, but in the final instance a manager listens to the pros and cons of both sides and makes a decision. Less experienced programmers are mentored by the more experienced. They work to their mentor's standards until they've developed a set of their own that they can justify.

    Make it work first. Everything else is just window dressing until it does.

    Processes and procedures can be helpful, and cost effective, in getting projects to work. And in established, successful shops/teams, the established processes and procedures, born of evolution and necessity, are the oil upon which the work flows. But if the process becomes king, and it all too frequently does, especially with imposed or adopted, theoretically wonderful processes, then abandon hope all ye who enter there. Because once the arguments become about whether the process is being followed rather than about whether the goals are being achieved, you might as well pack your bags and go home.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      The first order of business is to make the software work. Now. Today. Tomorrow. Next week. But not 3 months or 6 months from now. Once the software "works", you can review it. Its functionality. The source code. The backup, maintenance and testing. You can highlight the weaknesses, prioritise them and fix them.
      This sounds like a process (and a best practice as well). Doesn't it?

        Hm. I don't think it does, as it suggests a goal or set of goals--not how to achieve them.

        But even if it could be formalised into something approaching a process or best practice, I would still rail against its imposition, despite being the one who suggested it. Especially on existing, productive teams that currently use other methods.

        Any methodology is better than no methodology. And, at their best, all methodologies--waterfall, TDD, RAD, use cases, SSADM et al.--have their merits when applied properly to particular projects. The problems arise when a methodology is seen to be successful for one project and is mandated for all subsequent projects without recourse to logic and common sense on the ground. Once formalised, and without the considered application of human intelligence on a case by case basis, they can become millstones around the necks of the people to whom they are mandated.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re: The dangers of perfection, and why you should stick with good enough
by moritz (Cardinal) on Mar 11, 2008 at 08:32 UTC
    You've got some valid points there, but I don't quite understand the link between technical quality and coding standards.

    Of course coding standards are important for making code readable, but if, for example, a program is perfectly readable and virtually bug-free yet has trailing whitespace on some lines, I'd still call it "perfect".

    So you should start thinking about which quality measurements are important to you. Coding standards shouldn't be the no. 1 priority.

    BTW there are counterexamples to your "don't make it perfect" statement. One that comes to mind is Donald Knuth's TeX program, which is virtually bug-free and thus nearly perfect. It has incredibly low maintenance costs because only a few bugs have been found so far.

    So you can come quite close to perfection, and if you want your software to persist for 20 years, and keep bugfixing affordable, try to get as close as you can.

Re: The dangers of perfection, and why you should stick with good enough
by samizdat (Vicar) on Mar 11, 2008 at 13:24 UTC
    Since we don't want The Business to 'shoot the engineers', we'd better be part of the solution. :) I think you've raised some great points here, as has BrowserUK. I agree with them, so the question becomes, 'how do we get not broken, sometime soonish, and reasonable cost?'

    I've been exposed to an Agile Programming seminar recently, and I think there's a lot to be said for some of their arguments. You have to be willing to rethink 'not broken', though, and that's hard to get an engineer to do. We want to spend lots of cycles perfecting underlying mechanisms, but Agile says 'release early, release often'. The seminar presenters suggested that there's more value in mocking up an end-to-end transaction for one use case, and getting customer buy-in on it, than in crafting a perfect back end.

    By using such a methodology, we hand the responsibility for deciding 'good enough' back to The Business, where it belongs. As you iterate through more and more use cases, you get instant feedback on what 'not broken' means, so you end up crafting a better end-to-end product. The Business, in turn, gets to draw the line on 'sometime soonish' and 'reasonable cost' without ending up with crap.

    As you can gather, I'm sold on this concept. I have been on dozens of projects -- and run some myself -- where engineering of internals, or programming methodology, took precedence over pleasing customers and responding to their feedback. All too often, the temptation is to get lost in details because they're fun and they're safe. Unfortunately, without constant pressure to prove 'not broken', we often end up with smoothly functioning but worthless pieces when the business runs out of patience or money.

    We engineers need to grow into having more business sense. It's less and less possible to be a subject matter expert in a programming language and have that be enough. In truth, the 'business sense' I'm talking about is not that different from 'common sense', but the sad fact is that few have that, either.

    One piece of advice I heard recently was 'spend the company's money and your time as though it is your money and your time'. Ultimately, it is.

    Don Wilde
    "There's more than one level to any answer."

      One piece of advice I heard recently was 'spend the company's money and your time as though it is your money and your time'.

      I've seen this work spectacularly badly at universities, where every effort is made to hoard the income from projects for 'rainy days', to the detriment of the project. It was very similar to the way the project leader ran his own finances (he lived like a pauper on a strong salary). This was really bad in two ways. His group constantly ran cost over-runs, because equipment failed after he refused to service or upgrade it, and because specialist tasks were done in-house rather than contracted to the specialist (usually the statistician). This also often breached the contract with the agency that gave him the grant money for his project! All the funds were for a specific project, not for something else and not for profit (which is how 'savings' have to be categorised here). Worst of all, he never spent those savings: the internal account he stashed them in kept getting (legitimately) skimmed to cover costs in other projects (company accounts aren't bank accounts, after all).

      Money, especially a client's money for a project, is to be spent, not saved.1


      1 This of course excludes the money marked as 'profit', which should be taken out of the project manager's hands immediately; I'm talking about the money that was marked down as 'costs'. Never ever bloody ever run under budget.

        I certainly can't disagree with you, Bloodrage, that there are situations where anal-retentives in the wrong place screw things up. I'm hoping that most engineers I know won't be so stupid as to act like a bean-smasher.

        What I meant by that advice was that we should have the attitude that we have to make each dollar and each hour of our time count. That kind of individual sense of responsibility is what makes a free market work and it's also the way to keep a big corporation from descending into bureaucratic ossification. The best way to reduce operating expenses is to get development done as rapidly and effectively as possible. Sometimes that means spending more to get enough tools to do the job right, other times it means spending more design time on a whiteboard before spending money.

        Don Wilde
        "There's more than one level to any answer."
Re: The dangers of perfection, and why you should stick with good enough
by Herkum (Parson) on Mar 11, 2008 at 15:52 UTC

    One of the early major proponents of quality was a man named Philip Crosby; you can find a quick overview here. One of the toughest tasks in production is quality and defining it. Here is an excellent overview of quality management:

    Four Absolutes of Quality Management

    1. Quality is defined as conformance to requirements, not as 'goodness' or 'elegance'.
    2. The system for causing quality is prevention, not appraisal.
    3. The performance standard must be Zero Defects, not "that's close enough".
    4. The measurement of quality is the Price of Nonconformance, not indices.

    As far as IT goes, trying to write code to requirements can be tough. You are often working with people who do not understand what type of product they want or how to write requirements. On the other hand, you will have programmers that don't understand enough about the business to fill in the gaps that the rest of the business may know.

    Another area that a number of people miss is No. 2: prevention leads to quality, not appraisal. What I mean by this is error handling when it comes to getting your data from somewhere else. A good example is programmers who don't use placeholders when doing a SQL query, but instead just stick in raw values from the client. The result is any number of applications that are vulnerable to SQL injection attacks.
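    To make the placeholder point concrete, here is a minimal DBI sketch; the DSN, table and column names are invented for illustration:

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:SQLite:dbname=app.db', '', '',
                                { RaiseError => 1 } );

        my $name = $ARGV[0];    # untrusted input from the client

        # Vulnerable: the raw value becomes part of the SQL text, so
        # an input like  x' OR '1'='1  rewrites the query:
        #   "SELECT id FROM users WHERE name = '$name'"

        # Prevention: a placeholder keeps the value out of the SQL
        # entirely; the driver passes it as data, never as syntax.
        my $rows = $dbh->selectall_arrayref(
            'SELECT id FROM users WHERE name = ?', undef, $name );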

    You are correct, quality is the toughest one, but it is not just a developer issue; it is a management issue as well. Quality often requires sitting down and doing some real work, and to be honest most people are reluctant to do that, especially when they think they can pass that work off onto the programmer because "they're smart, they will figure it out."

      Quality is defined as conformance to requirements, not as 'goodness' or 'elegance'.

      Overall, I think that's the right approach. However, I've seen it most often as a conformance to static requirements, which pleases no one.

      Ultimately quality means "The stakeholders are happy with the results of their investment."

        I much prefer the "The stakeholders are happy with the results of their investment" definition rather than tying quality to requirements.

        My experience is that humans seem to be notoriously and consistently bad at producing 'requirements'. 'Good requirements' should completely define the thing to be delivered...but it seems that we are not so insightful or complete in our understanding of what is needed to make them truly 'complete.' There always seem to be 'things forgotten' or 'things misunderstood' by the requirements producers.

        Hence it seems, all too frequently, that things that exactly match requirements are still not what the stakeholder, as chromatic noted, wanted. Though I have seen too many stakeholders reluctantly/begrudgingly decide (often, it seems, in the interest of keeping cost and schedule to a minimum) that they're happy enough to go ahead and accept the product. I'd be hard pressed to consider such an outcome as having delivered a 'quality product.'

        The weak (in my opinion) linkage between 'quality' and 'requirements' is one of the reasons that my teams have so much trouble delivering satisfactory systems even though we've thoroughly tested that every single requirement is proven to work. So we've gone to broader testing to try to ensure that what we've come to call 'essential services' (which are end-to-end functional capabilities/services as defined/requested by the stakeholders...somewhat similar to Use Cases) are provided correctly and consistent with what the stakeholder expects. It is one of the key elements of our movement towards defining 'quality': a stakeholder-expectations-centric strategy.

        ack Albuquerque, NM

        A major source of conflict is between users who are unable to describe their wants and programmers who are unable to understand users' needs. While static requirements are not very good, they can create some stability in a project.

        In one position I had, the requirements were literally changing every day. It was impossible to make any progress because we were always going back and fixing the code we had just worked on. Requirements finally had to be written down, as it was the only way developers were able to make any progress. The users still kept changing their requirements, but at least the developers had a fall-back point for why things were done a certain way.

        The moral of the story, "There are some people who cannot figure out what will make themselves happy".

Re: The dangers of perfection, and why you should stick with good enough
by ack (Deacon) on Mar 11, 2008 at 19:10 UTC

    In my work experience as a systems engineer we constantly face the same 'formula'...except that we typically define it somewhat differently:

    Cost, Schedule, Technical_content

    Rather than redhotpenguin's

    Cost, Schedule, Quality

    Both formulations work and both have many of the same challenges. If you substitute Technical_content in place of Quality in redhotpenguin's post, the post would speak to my situation, too. In fact, as I think about it, I think that Technical_content and Quality are closely enough related that the two are, in many ways, just two sides of the same coin...maybe that's why redhotpenguin's writing so closely reflects my own thoughts.

    I have been puzzling over a variation on redhotpenguin's dilemma, which is this:

    Imagine that the triumvirate of Cost, Schedule and Technical_content formed an abstract equation of the form a*Cost x b*Schedule = c*Technical_content, where '*' is a normal multiplier but 'x' is an abstract notion of 'combines with'...though 'x' could also be a normal mathematical multiplier if we chose a, b and c correctly and had quantified versions of the three variables Cost, Schedule and Technical_content.

    The question that we've been faced with is: how could we 'change the results' so that for the same Technical_content and Schedule, we get lower Cost?

    It seems, in my experience, that many folks don't see that to change the Cost (for example) while keeping the Schedule and Technical_content the same, you have to change the model...i.e., so to speak, change the coefficients a, b and c. A toy numeric reading of this is sketched below.
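    Here is that toy reading in Perl, treating 'x' as plain multiplication; every number in it is invented purely for illustration:

        use strict;
        use warnings;

        # Toy reading of a*Cost x b*Schedule = c*Technical_content,
        # solved for Cost with Schedule and Technical_content fixed.
        my ( $a, $b, $c ) = ( 1, 1, 1 );             # the organization's "model"
        my ( $schedule, $content ) = ( 12, 100 );    # held fixed

        my $cost = ( $c * $content ) / ( $a * $b * $schedule );
        printf "cost under the old model: %.2f\n", $cost;    # 8.33

        # Management wants the same content and schedule at half the
        # cost. The only lever left is the coefficients -- that is,
        # the organization itself.
        $a = 2;    # say, organizational change doubles cost-effectiveness
        $cost = ( $c * $content ) / ( $a * $b * $schedule );
        printf "cost under the new model: %.2f\n", $cost;    # 4.17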

    What makes the model? From my perspective, the model is the impact of all the policies, practices, processes, procedures, engineering & business acumen, etc. of an organization or other undertaking. To change the relationship of Cost, Schedule, and Technical_content, the model has to change...and that means the organization or undertaking has to change.

    In my experience, I regularly come up against management, customers, or stakeholders who demand (sometimes they even try to rule by edict) that "Technical_content and Schedule will not be sacrificed, but we're going to deliver at 50% of the cost." It never happens...unless the model is changed...and that means organizational changes.

    In a previous thread on Testing (The dangers of perfection, and why you should stick with good enough) I presented an argument for 'doing more thinking and less heaping on of non-value-added testing' which is part of trying to change our organization...i.e., changing the 'model'. So far, it has worked and worked very well. It seems that as one redefines one's organizational processes, procedures, strategies, (including testing strategies...and as redhotpenguin states, quality strategies) then the model (i.e., its coefficients) can change.

    But once the model is instantiated, the relationship between the variables is fixed...unless and until the organization or undertaking is, again, changed.

    Whether one uses the variables of redhotpenguin (i.e., Cost, Schedule, Quality) or the ones we've been working with (i.e., Cost, Schedule, Technical_content), the issues which redhotpenguin brings up and discusses are still the same...and form a nexus of engineering challenges.

    Thanks, redhotpenguin...love this node. It strikes very, very 'close to home' for me.

    ack Albuquerque, NM
Re: The dangers of perfection, and why you should stick with good enough
by eyepopslikeamosquito (Archbishop) on Mar 11, 2008 at 23:26 UTC

    Others have commented on "process" issues related to quality; I'd like to focus on "people" issues, the human aspect. My view is that people have a bigger impact on quality than process; that a team of low quality developers will produce a low quality product, no matter what process they use. A key quality issue then is how to attract, identify, inspire and retain quality people. (Google, for one, spend a lot of time and money on recruiting, endeavouring to hire only high quality people).

    Chapter 4 "Quality - If Time Permits" of Peopleware is worth a read. Quoting this review:

    Philip Crosby wrote in his book "Quality Is Free" that letting the builder set a satisfying quality standard of his own will result in a productivity gain sufficient to offset the cost of improved quality...

    A policy of "Quality - If Time Permits" will assure that no quality at all sneaks into the product. Hewlett Packard is a company that makes a cult of quality, reaping high productivity due to high, builder-set quality standards. Their sense of quality identification increases job satisfaction resulting in one of the lowest turnover figures in the industry.

    Power of Veto - Hitachi Software and parts of Fujitsu give project teams an effective power of veto over delivery of what they believe to be a not-yet-ready product.

    Joel Spolsky tries to strike a sensible middle-ground between "sales-driven" and "developer-driven" companies by creating a Development Abstraction Layer:

    You've got your typical company started by ex-software salesmen, where everything is Sales Sales Sales and we all exist to drive more sales. These companies can be identified in the wild because they build version 1.0 of the software (somehow) and then completely lose interest in developing new software. Their development team is starved or nonexistent because it never occurred to anyone to build version 2.0... all that management knows how to do is drive more sales.

    On the other extreme you have typical software companies built by ex-programmers. These companies are harder to find because in most circumstances they keep quietly to themselves, polishing code in a garret somewhere, which nobody ever finds, and so they fade quietly into oblivion right after the Great Ruby Rewrite, their earth-changing refactoring-code code somehow unappreciated by The People.

    Both of these companies can easily be wiped out by a company that's driven by programmers and organized to put programmers in the driver's seat, but which have an excellent abstraction that does all the hard work to convert code into products below the decks.

Re: The dangers of perfection, and why you should stick with good enough
by mr_mischief (Monsignor) on Mar 11, 2008 at 17:59 UTC
    I've found that the number of features is often a red herring in the quality of software. I wrote much more about that in Document-centric vs. Workflow-centric design, where I expounded on the differences between process-centric projects and document- or object-centric projects.

    The central idea of my root meditation is that making your features make sense together is more important, and often far more important, than the number of identifiable independent features.

    A sensible task flow is important in a generic menu-driven application centered around a document object. It's more important in designing and developing software for more specific work flows. Indeed, it's often the most important factor. It's my opinion that in many cases the process should be the central theme in the design of software, and my rationale I hope is made clear in my other node.

Re: The dangers of perfection, and why you should stick with good enough
by Bloodrage (Monk) on Mar 11, 2008 at 22:04 UTC

    The OP seems to be on the edge of a particular issue that really rips my nighty, that is, non-compliance. Some work environments have standards. Some of these are loose, ad-hoc, in-house standards that they've worked out themselves over the years and that are pretty much held by oral tradition; others are terribly strict standards imposed by outside agencies that take up several dozen shelf-inches of documentation; and of course there is a vast array of in-betweens.

    There is a tendency for some (usually those who are new to the environment and have 'more experience' in the field of work, just not with the standard) to pull the "It works. Bugger the standard. I'm done" routine, and because it works and saves time, this is usually ignored. This seems to be fine in the short term. Then change happens: staff turn over, something breaks, a system begins to behave strangely, you're audited by your standards compliance agency, or worst of all you're audited by the government agency that makes you work to that standard.

    This is where non-compliance blows away whatever savings the short-term "It's good enough, dammit" argument produced: several hundred hours of unchargeable work1 to patch up whatever hole is in your project. Workplace standards are usually there for a reason; sometimes it's a one in a million risk, but sometimes even that is unacceptable. The thing to consider here is that properly compliant work barely gets a second glance, and auditors are canny bastards who instinctively home in on the project with the most spectacular compliance issues.

    I suppose what I'm describing is the other side of this argument: When is 'good enough' not good enough. I mean, is 'it works' good enough for Air Traffic Control, Medical Systems, Utility Networks, Food and Drug Testing, Your Bank's Web site, or Space Flight Control Systems?


    1 In a rather special incident our Director had to countersign several hundred data record documents (and initial and date the corrections) and have them countersigned by our QA, the client's QA, and the government representative, because a particular employee had refused to sign them as a waste of time. It had to be the Director because all the other members of that project were no longer employed by the company. Not a programming example, but it has analogies. In this case failure to patch the problem would have left our company and all its employees open to charges of fraud. Nice.

      Seems to me that there is a big difference between a statutory requirement, where risk to life and limb is involved, and a set of coding standards (preferences). Non-compliance with a statutory requirement could never be deemed "good enough".

      I mean, is 'it works' good enough for ...

      ...a list of mission critical software that represents maybe 2% of the software written. Even then, if your definition of "it works" is "meets all the project's requirements", which by definition includes any applicable statutory requirements, then yes: it works is good enough.

      A piece of (say) game software is unlikely to result in death or injury if it occasionally accumulates enough floating point error to cause a game-piece spacecraft to attempt a high speed rendezvous with the moon. Space flight control software could. Failure to distinguish between the quality requirements of the two could be disastrous for the company developing the software either way.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        A piece of (say) game software is unlikely to result in death or injury

        Actually, I have discussed this specific scenario. My flat mate (a 'flat' is a house in New Zealand sublet by a bunch of students) wrote flight sim software for a company that ran high-end arcade games. Coding standards were... loose (in that they were what the lead programmer said they were). The simulators were a cross between your standard in-cockpit arcade cabinet and a tumble dryer, using gravity and rotation to emulate the sensation of zooming around in space. The part of the project that caused a lot of contention between the Boss ("Get it done now, cheap"), the Engineer ("ye canne break the laws of physics"), and the programmer ("you want what to do what now?") was this: if the computer, which controlled everything, crashed, how could you program it (i.e. the crashed computer) to enter a safe mode that would stop the simulator and open the doors? The Boss insisted that the computer could do it, just write the code dammit. The programmer said it couldn't, as that's an aspect of the Halting Problem, and you needed a second computer. The engineer said they needed an independent redundant system with a big red button.

        Eventually a micro-controller was installed, but the whole argument was referred to as the discussion about "Doors that shouldn't eat people". In this case the required standard is obvious, often when working to standards (or statutory requirements) the reasons can be obscured.


        ahh no! my non sequitur powers!

        I think the reason the OP's comments got me riled up is that it's the 'what does it matter when it saves time and money' kind of attitude1 that can make working in these 2% situations very difficult. I think what I really wanted to communicate is that you cannot always assume 'it works' is good enough. If your working environment has standards set higher than that, you're quite likely to be contracted to work to them, not under them, and you are obliged to do so.

        I suspect it's residual bitterness from having to spend late nights at work double-checking data sheets against data inputs (that have already been checked twice yet still have 5% error rates). Eventually I did most of the key-punch work myself because I had the fastest data entry speed with the lowest error rate. People look at you like you're doing voodoo when you can do data entry on the keypad without using the mouse to navigate cells or look at the screen.


        1 I'm not belittling his opinion. In situations where you've got to ship the product on a deadline, you've got to do what you've got to do and it is the right way to be thinking. I'm just putting up the counter PoV.

Re: The dangers of perfection, and why you should stick with good enough
by talexb (Chancellor) on Mar 13, 2008 at 15:42 UTC

    This raises some good points -- I'm sure it must drive business owners around the bend, the way software's so imaginary yet plays such a vital part in the success or failure of some businesses.

    What is Quality?

    For me, quality, as it applies to software, is a measure of how little trouble the software's going to cause, where trouble is defined as some combination of the time and money needed to fix it.

    Why is it important?

    Quality is important because as the quality of the code decreases, the time and money required to improve it, or even keep it up to date, increases -- this is technical debt. And in addition to time and money, there is also the opportunity cost -- if a customer wants a feature in 30 days, but that can't be done with the existing staffing levels because they're too busy bailing water, the company loses out on the opportunity to make another sale.

    Quality is free -- but it is not a gift

    The title of this section is the key quote that I remember from The Art of Quality. Building a quality software system is a mindset, an approach to the craftsmanship of writing lines of stuff and getting them to successfully run a business.

    A quick and dirty solution is something that needs to be revisited as soon as possible. If not, it quickly becomes permanent, just One More Quirk in the established complexity that we drag around.

    Writing code needs to start with quality in mind -- for me, this includes Test Driven Development, so that the developer can prove that the code really does what it is supposed to do.
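    A minimal sketch of what that proof can look like with the standard Test::More; the Cart module and its interface are hypothetical, and in the TDD spirit the test is written first, failing until the code makes it pass:

        # t/cart.t -- written before Cart.pm exists; it fails until
        # the code catches up. Cart and its interface are made up.
        use strict;
        use warnings;
        use Test::More tests => 3;

        use_ok('Cart');

        my $cart = Cart->new;
        $cart->add_item( 'widget', 2 );
        is( $cart->count('widget'), 2, 'two widgets after first add' );

        $cart->add_item( 'widget', 1 );
        is( $cart->count('widget'), 3, 'adds accumulate' );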

    Alex / talexb / Toronto

    "Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

Re: The dangers of perfection, and why you should stick with good enough
by Erez (Priest) on Mar 11, 2008 at 19:10 UTC

    I was taught the 'cost, quality, time - pick two' axiom.

    I know it as "cheap, good, fast - pick any two", which places Cost as close as can be to the bottom while not reaching "free"; Time likewise aims for Now, but isn't quite there.

    With these scales in mind, I believe the idea is that you should reach the "it's not broken" stage, rather than "as close as can be to perfect".

    Update: fast, not free, thanks Roy Johnson

    Software speaks in tongues of man.
    Stop saying 'script'. Stop saying 'line-noise'.
    We have nothing to lose but our metaphors.

Re: The dangers of perfection, and why you should stick with good enough
by sundialsvc4 (Abbot) on Mar 14, 2008 at 15:28 UTC

    Scott Adams has mined a rich mother-lode of irony for Dilbert in the endless “conflict” between project-management and engineering. (He consistently takes the engineer's point-of-view in his strip, while being a business management consultant himself.)

    Both objectives have to be met at the same time:   the product has to be “definitely good enough,” and it has to hit the market and make money there, all at the same time and in a game where there's just no room for second place.

    The need for all of these ingredients becomes most-obvious when it is also the most-painful, namely, when any one of these pieces is dysfunctional. Once again, Scott Adams created an on-screen character as a (highly dysfunctional) archetype of this-or-that role.

    How do you know if your team is “dysfunctional”? Walk through your workplace on a Sunday when no one's there. Count the number of Dilbert cartoons you see posted in the hallways, on the terminals and cubes. Notice carefully what each one is saying, because it's a bellwether as well as a silent, socially-acceptable form of protest.

Re: The dangers of perfection, and why you should stick with good enough
by Gavin (Archbishop) on Mar 12, 2008 at 13:43 UTC

    When is "good enough" good enough?

    Some thoughts on Index Numbers (Oct 1952) by A.J.H. Morrell, M.A., mathematician and statistician.

    “Perfect accuracy is unattainable - and unnecessary. Some of you may have heard of the foreman, an ex-sergeant-major, who was walking round the plant, finding fault. He came to one operative, picked up some gadget he had just made and said, “Is this right?” “It’s near enough,” replied the operative. “I don’t want it near enough,” snapped the foreman, “I want it right.” “Very well,” said the operative, “it is right.” The foreman examined it carefully and measured it, then grunted, “That’s near enough.””

    I think this holds true for many situations today; the problem is knowing when “near enough” is near enough, as at present there is no ISO standard for “near enough”.

Re: The dangers of perfection, and why you should stick with good enough
by hesco (Deacon) on Mar 28, 2008 at 05:53 UTC
    I wrote some good enough code just this morning; or rather, I adapted some previously working code to work again, at least well enough, against a moving API. I had not used that script in six or eight months, and when I broke it out again to get some work done, I found that the web service I had written it against had changed its user interface.

    So I had about 850 records to process. After a couple of hours at it, the script was working again, at least well enough. I got distracted by other projects in front of me (upgrading an Asterisk server and debugging new issues introduced by the upgrade). When I turned back to the console running my script against the web service, I found it had finished processing my 850 records. And the results it produced showed that it gave me useful data for over a third of those records. A quick GROUP BY query on my result table showed that the next enhancement I was about to dive into (which would easily have consumed a good bit of the day I wound up devoting to my upgrade / debug work) would at best have let me try to process only an additional 5% of the total records.
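    A sketch of that kind of GROUP BY sanity check; the DSN, table and column names here are invented, not the actual schema:

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:SQLite:dbname=results.db', '', '',
                                { RaiseError => 1 } );

        # How many records landed in each state? Cheap evidence for
        # deciding whether more 'perfecting' is worth the labor.
        my $rows = $dbh->selectall_arrayref(
            'SELECT status, COUNT(*) FROM results GROUP BY status' );

        printf "%-20s %5d\n", @$_ for @$rows;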

    Instead of spending another four hours writing the code to 'perfect' my script, I spent 10 minutes writing the boss, showing her the results of my GROUP BY query and explaining that, given the always limited resources we work with, the additional investment of my labor was probably not worth the trouble and expense for another 40 records.

    I suspect she trusted me on that conclusion. She did not write back insisting on 'perfection'. She also had other priorities for my time and energy.

    -- Hugh

    if( $lal && $lol ) { $life++; }
