
Software Design Resources

by Anonymous Monk
on Aug 22, 2003 at 01:59 UTC ( [id://285637]=perlmeditation )

I'm interested in learning more about good software design practices and quality assurance. Anything on theoretical advances in proving programs would be appreciated too (math isn't a problem).

I've read Code Complete, Writing Secure Code, The Pragmatic Programmer, Professional Software Development: Shorter Schedules, Higher Quality Products, More Successful Projects, Enhanced Careers (long enough title?), about four on extreme programming (blech!), and a few others I've forgotten.

The common problem I've found with all these books is that they lack substance: they focus almost entirely on metaphors and don't provide anything that can be directly applied. What I'm looking for are books with working code and real examples of extremely high-quality projects and their design, not the fluff that is commonly recommended.

I've also found that online forums (present forum excluded ;-)) and newsgroups aren't worth the time it takes to sort through the noise, and even then there's rarely anything worth reading.

While I've produced some fairly solid software (at least I think it's solid...), I highly doubt that I'm at, or even close to, the top of quality development/assurance practice. Can anyone shed some light on the practices that allow software to be used in applications where any failure is simply not acceptable? Thank you for your responses.

Replies are listed 'Best First'.
Re: Software Design Resources
by BrowserUk (Patriarch) on Aug 22, 2003 at 06:47 UTC

    In the mid-eighties, I was briefly involved in the development of an IBM internal language called PLS86. A descendant of the PL/I language that IBM started developing in the late 50s and early 60s, it was targeted at the x86 platform, and one of its major design goals was to allow the production of "mathematically provably correct programs". The language was briefly considered for use in the development of IBM's then top-secret, and as-yet unnamed, OS/2. (So inventive. Not!) Anyway, the language was dropped (for use in developing OS/2), not least because IBM's then-partners in this development, MS, refused to use it, and so C was used instead.

    I was friends with one of the guys who did considerable work on the "mathematical proof" part of the language. I don't recall all of the details (probably because I didn't really understand much of it), but at some point in the development of the language, which went ahead anyway, the provability requirement was quietly dropped.

    The reason came down to the simple statement: It wasn't possible!

    Some of the bits I remember that lie behind the conclusion are:

    Even though it was a totally pre-compiled language, without the dynamic features of some other languages (eg. eval in Perl, INTERPRET in REXX, etc.), unless the program is entirely statically linked there is no way to ensure that you will not get failures due to the unavailability, or incorrect runtime linking, of dynamically bound code segments.

    In order to prove the program is correct, it becomes necessary to add extra code to verify the inputs to every part of an algorithm, and more code to verify the output. However, there are many problems with this.

    • The first set of problems is that the only effective ways of doing this for any possible set of inputs and outcomes are:
      • to duplicate the entire algorithm using an alternative set of instructions.

        This is the method used by some life-critical projects. For example, the software used for airliner fly-by-wire systems is developed from the same specs by three different software teams working in complete isolation (clean-room environments) and targeting three different (types of) processors. A fourth processor then supplies the same input to each of the three and compares their outputs.

        If one processor reaches a different conclusion from the other two, its result is ignored. If it consistently produces different results, that processor is flagged as broken. The idea is that, by using three different types of processor, no two are likely to be affected in the same way by a hardware, microcode or software bug (think Pentium floating-point bug!).

        By having the teams that develop each set of software work in isolation, it is less likely that they will make the same (wrong) assumptions. So long as two of the three processors arrive at the same result at any given step of the process, it's quite likely that they are correct.

        Whilst this method works for the most part, it is hugely expensive, and so not cost-effective for most types of software. (A toy sketch of this voting scheme appears a little further down.)

      • to have a lookup table that maps inputs to outputs.

        Even allowing for the fact that the storage required to maintain these tables would be huge for anything other than the most trivial of projects, there is still the possibility that the inputs can be corrupted before they reach the process. Memory (RAM or DISK) failure, sensor failure, human input error etc.

        Beyond even that, the resultants in the lookup tables would have to be generated. For anything beyond the trivial this would need to be done using software.

        Catch-22, how do you ensure that that program is correct?

    • The second set of problems is that the additional code added to verify algorithmic correctness:
      • is itself software--and so subject to bugs and failures.

        Catch-22 again.

      • adds more code to the project, thereby increasing complexity--and complexity and volume are the prime multipliers for bug rates.

        Moreover, all the additional processing costs hugely in terms of both development and execution time.

        People have attempted, and still do attempt, to address this by making the test/verification code conditional. That is to say, they add the tests and verification code in such a way that it can be 'turned off' in the production environment. The problem with that is that removing the code has a measurable, but sometimes unpredictable, effect on the overall algorithm. This is effectively the same as the observer effect in quantum physics--you cannot measure something without affecting it.
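
        One common Perl shape of that "turn it off in production" idea is a compile-time constant guard, sketched below. The average() function, its checks, and the VERIFY_BUILD switch are all hypothetical, purely to illustrate how the shipped build ends up running code paths the verified build never ran.

            #!/usr/bin/perl
            use strict;
            use warnings;

            # When VERIFY is the constant 0, Perl drops the guarded blocks at
            # compile time, so the production build really is different code
            # from the verified build -- which is exactly the concern above.
            use constant VERIFY => $ENV{VERIFY_BUILD} ? 1 : 0;

            sub average {
                my (@values) = @_;

                if (VERIFY) {    # input checks, compiled away in production
                    die "average() needs at least one value\n" unless @values;
                    die "average() wants plain numbers\n"
                        if grep { !/^-?\d+(?:\.\d+)?$/ } @values;
                }

                my $sum = 0;
                $sum += $_ for @values;
                my $avg = $sum / @values;

                if (VERIFY) {    # output check, also compiled away
                    my ($min) = sort { $a <=> $b } @values;
                    my ($max) = sort { $b <=> $a } @values;
                    warn "average() result outside the input range?!\n"
                        if $avg < $min or $avg > $max;
                }

                return $avg;
            }

            print "average of 2, 4, 9 is ", average(2, 4, 9), "\n";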

    That isn't the complete story, but it's all I can remember for now.
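
    As promised a little further up, here is a toy Perl sketch of the three-channel voting scheme from the fly-by-wire example. The three "implementations" and the tolerance are invented; a real system votes on the outputs of separate programs running on separate processors, not on subroutine returns.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Three independently written "implementations" of the same spec
        # (stand-ins for three clean-room teams' programs).
        my @channels = (
            sub { my $x = shift; return $x * $x },                               # team A
            sub { my $x = shift; return $x ** 2 },                               # team B
            sub { my $x = shift; return $x <= 0 ? $x * $x : exp(2 * log($x)) },  # team C
        );

        # The "fourth processor": feed the same input to every channel and vote.
        sub vote {
            my ($input)   = @_;
            my @results   = map { $_->($input) } @channels;
            my $tolerance = 1e-9;

            # Accept any value at least two of the three channels agree on; a
            # channel that consistently disagrees would be flagged as broken.
            for my $i (0 .. $#results) {
                my $agree = grep { abs($_ - $results[$i]) <= $tolerance } @results;
                return $results[$i] if $agree >= 2;
            }
            die "no majority among: @results\n";
        }

        print "voted result for input 3: ", vote(3), "\n";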

    What this means is that, for any real-world, costed project, it becomes necessary to define a level of reliability that is 'acceptable', and then design your development and testing criteria to achieve that.

    Testing is good, but it is not a silver bullet for bugs. It is impossible to achieve 100% test coverage. The first major project I was involved in testing (the GUI component of OS/2) had over 700 APIs, with an average of 4 parameters per API and some with as many as 13! Many of those parameters are themselves (pointers to) structures which can consist of as many as 40+ discrete fields, and each field can (for example) be 32 individual booleans; or integer values with ranges of +-2**31 or 0 to 2**32; or floats ranging from roughly 1e-308 to 1e+308.

    Do the math! Even without considering the effects of interdependencies between the APIs--you can't draw a line until you've obtained a presentation space, you can't obtain a presentation space until you've obtained a device context, and so on--the numbers become astronomical. Testing everything simply isn't possible within the projected life span of the earth, never mind that of human developers :)
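
    For a rough feel of the scale, here is a back-of-envelope calculation. The figures are the rough ones quoted above (700 APIs, ~4 parameters each, each parameter treated as a full 32-bit value); they are illustrative, not the project's real test matrix.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Math::BigInt;

        my $apis             = Math::BigInt->new(700);
        my $params_per_api   = 4;
        my $values_per_param = Math::BigInt->new(2)->bpow(32);   # one 32-bit parameter

        # Input combinations for one "average" API, ignoring interdependencies.
        my $per_api = $values_per_param->copy->bpow($params_per_api);   # 2**128, ~3.4e38
        my $total   = $per_api->copy->bmul($apis);                      # ~2.4e41

        print "combinations per API: ", $per_api->bstr, "\n";
        print "combinations overall: ", $total->bstr,   "\n";

        # Even at a wildly optimistic billion test cases per second, exhaustive
        # coverage would take on the order of 10**24 years.
        my $seconds = $total->copy->bdiv(1_000_000_000);
        my $years   = $seconds->copy->bdiv(60 * 60 * 24 * 365);
        print "years at 1e9 tests/sec: ", $years->bstr, "\n";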

    Given an experienced programmer developing yet another version of some project that s/he has done several of before, they will probably be able to make some educated guesses about where the edge cases are, and thereby reduce the set of tests to a manageable size. However, it was proved fairly comprehensively (to my satisfaction anyway), by work done by another part of IBM around the same time, that even experienced developers make bad guesses as to where the edge cases are when they move to projects that are even slightly different from those they have done before. In fact, that particular team showed that they make worse-than-random choices! And the more experienced they were (in similar but different project types), the worse their guesses were.

    The upshot was that we ended up developing a random testcase generator. The logic went like this: if you have a few computers generating random but valid sequences of API calls, along with other code to test the resultant programs, then you can use statistics--the number of programs generated versus the number of them that failed--to determine the rate at which bugs are being found. By re-running every test case generated, both good and bad, each time a new build was released, you get an effective regression test. You also get a statistical measure of the rate at which earlier bugs are being fixed, which of them are re-appearing, and so on. You can also break the statistics down by component, developer, day of the week, etc. This allows you to target your efforts where they are of greatest benefit.
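
    To make that concrete, here is a heavily simplified sketch of what such a generator might look like. The API names and ordering constraints are hypothetical; the point is that the dependencies (device context before presentation space, and so on) are respected while everything else is randomised.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Hypothetical API set: each call lists the calls that must already
        # have been made before it is valid.
        my %requires = (
            OpenDeviceContext       => [],
            CreatePresentationSpace => ['OpenDeviceContext'],
            DrawLine                => ['CreatePresentationSpace'],
            DrawText                => ['CreatePresentationSpace'],
            ClosePresentationSpace  => ['CreatePresentationSpace'],
        );

        # Build one random but *valid* sequence of $length calls.
        sub random_sequence {
            my ($length) = @_;
            my (%called, @sequence);
            while (@sequence < $length) {
                # Candidates are calls whose prerequisites have all been made.
                my @legal = grep {
                    my $api = $_;
                    !grep { !$called{$_} } @{ $requires{$api} };
                } keys %requires;
                my $pick = $legal[ rand @legal ];
                push @sequence, $pick;
                $called{$pick} = 1;
            }
            return @sequence;
        }

        # Each generated case would be run against the build, kept forever,
        # and re-run as a regression suite on every subsequent build.
        for my $case (1 .. 3) {
            print "case $case: ", join(' -> ', random_sequence(6)), "\n";
        }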

    The effect of this was amazing and salutary. There had been many, many test cases written prior to the RTG being available. Within a month it was shown that all of the test cases produced by programmers/testers targeting their efforts according to their (or their superiors') best judgment had covered less than 15% of the total APIs (with 10% having been duplicated over and over) and 5% of the possible parameter combinations, and had found less than 1% of the possible bugs.

    Don't ask me how they arrived at this last statistic--it was way too much voodoo for me--but I can tell you that within two months they were beginning to predict, with amazing accuracy, the number of new bugs that would be found, old bugs that would be cured, and old bugs that would re-surface in any given week (with twice-daily builds). It was right around this time that the project got moved from the UK to the US and I never heard much more about it.

    You might find this article interesting: Coming: Failsafe software. The only way the software industry is going to move out of the metaphorical iron age--or maybe bronze age--is when we start using computers to assist us in our work.

    It's a strange thing. If you describe a problem in almost any field of endeavour to a programmer, he will nearly always suggest a way that he could improve or solve that problem using a computer--except when the field of endeavour is his own!


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
    If I understand your problem, I can solve it! Of course, the same can be said for you.

      Within a month it was shown that all of the test cases produced by programmers/testers targeting their efforts according to their (or their superiors') best judgment had covered less than 15% of the total APIs (with 10% having been duplicated over and over) and 5% of the possible parameter combinations, and had found less than 1% of the possible bugs.

      According to the limited description, the repeated 10% "duplication" was probably due to "best judgment" bias. If random sampling were used instead, it's almost impossible to have such duplication across testers and over time (though you will still have duplication "locally").

      "Best judgement" bias is like trying to estimate the total number of Perl programmers in the world by asking only people on Perl websites what language they use.

      The "1% of the possible bugs" estimate was possibly derived from something like the catch-and-release method I mentioned in another reply in this thread.

      The unprobabilistic approach of "best judgement" and the probabilistic estimate of "1% of the possible bugs" do seem strange together, however.

      *     *     *     *     *     *

      In case "probability" sounds like voodoo, consider this: in a population of 100, if all are aged 25, what sample size do you need to come up with a 100%-confidence estimate of the population's average age? One, of course.

      If 90 of them are aged 30 and 10 aged 20 (average 29), a random sample of size one gives you an "average" of 30 90% of the time. Pretty good "estimate", actually, considering you don't have to ask all 100 of them.

      The worst case (in terms of sample size needed) is a 50/50 split.

      So a population of one million, all aged 25, needs only a sample of size one to get a good (in this case perfect) estimate, whereas a population of 30--10 aged 30, 10 aged 20, 10 aged 5--needs a larger sample size.

      The moral: the quality of a statistical estimate is affected by the heterogeneity of the population, not its size. It's very counterintuitive to many people, I know.
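
      A quick way to get a feel for that moral is to simulate it. Here is a small Perl sketch--the populations and the "within one year" criterion are made up--comparing a homogeneous population, the 90/10 split above, and the worst-case 50/50 split.

          #!/usr/bin/perl
          use strict;
          use warnings;
          use List::Util qw(sum);

          # Three toy populations of 100 people each.
          my %population = (
              'all 25'      => [ (25) x 100 ],              # true mean 25
              '90/10 split' => [ (30) x 90, (20) x 10 ],    # true mean 29
              '50/50 split' => [ (30) x 50, (20) x 50 ],    # true mean 25
          );

          # Draw a single-person random sample many times and see how often
          # that "estimate" lands within one year of the true mean.
          for my $name ('all 25', '90/10 split', '50/50 split') {
              my $pop   = $population{$name};
              my $truth = sum(@$pop) / @$pop;
              my $hits  = grep { abs($pop->[ rand @$pop ] - $truth) <= 1 } 1 .. 10_000;
              printf "%-12s true mean %.0f, single sample within +/-1: %5.1f%%\n",
                  $name, $truth, 100 * $hits / 10_000;
          }

      The homogeneous population is estimated perfectly from one observation, the 90/10 split is right about 90% of the time, and the 50/50 split never is--which is the sense in which heterogeneity, not size, drives the sample you need.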

        Sorry, I don't think I made that bit very clear. The statistics I gave were for the coverage achieved by the teams of test-case writers prior to the introduction of the Random Testcase Generator. That is to say, there were a bunch of coders charged with the task of sitting down and writing programs to exercise given subsets of the APIs. They used their 'best judgement' to write the programs such that they covered the edge cases of each individual function and combination of functions.

        The 15% was a purely mathematical count of the APIs exercised, derived by simply grep'ing and counting them in the assembled test suite.

        The 10% duplication meant that, of the 15% that had actually been exercised, two-thirds had been exercised in more than one test case. For some parts of the API set this is inevitable--you can't do anything when testing a GUI API set without having called CreateWindow(), for example--but this did not explain all the duplication.

        Much of it came down to the fact that, given any two programmers with similar experience, their best judgement, based on their prior experience, will lead them to similar conclusions about what needs testing. Hence, they will tend towards testing similar things. Even though they are each assigned a particular set of APIs to test, it's inevitable that there will be some overlap. Given a team of nearly 100 programmers from different backgrounds, you would think that their range of experience would lead to fairly wide coverage, but it doesn't happen that way. They all tend to concentrate their efforts on similar clusters of "suspect" APIs. Worse, they all tend to assume that some APIs are not necessary to test, for similar reasons.

        As for the 1% of possible bugs: the bit that I consider tantamount to voodoo is the determination of the number of possible bugs. In order to state that "only 1% had been found", it is necessary to know both how many were found and how many could have been found. How do you begin to determine how many there could be?

        I fully understand the mechanism whereby it is possible to estimate how many bugs will be found on the basis of how many have been found, projecting that forward, once the test cases are being produced randomly. This is fairly simple population sampling, standard-deviation stuff. You only need to know that the sample is a truly random selection from the total population; you don't need to know the total population size.

        But to conclude that 1% of the possible bugs had been discovered by a set of test cases--when the previous two statistics went solely to prove that their generation was anything but random--starting from the deterministic count of those that had been found, means that they had to have determined, or at least estimated to some degree of accuracy, the total possible bug count.

        I have a good degree of faith in the guys doing the work, and I was treated to nearly four hours of explanation of the methodology involved. The program that produced that statistic ran on a top-of-the-range S/370 quad-processor system and consumed prodigious amounts of CPU time. The datasets were not very large.

        It involved an iterative process of refining a complex polynomial with an unbounded number of terms until it approximated the discovery rates and coverage that had been determined by counting. Once the polynomial in question had been refined until it closely approximated the real-world statistics it was developed to model, it was then iterated forward to project to a point where no more bugs would be discovered. In real time this would have amounted to decades or maybe centuries. Once that point was reached, they had the estimate of the number of bugs that could be discovered, and it was this figure that was used to calculate the 1% figure.

        Believe me, this went way beyond the graduate-level statistics with which I was familiar at that time, though I have since forgotten much of it.

        I'm going to stick to my guns and say that this was the deepest statistical voodoo that I have any wish to know about:)
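
        (For what it's worth, the general shape of that idea--fit a curve to the observed discovery rate and run it forward until it flattens out--can be sketched in a few lines of Perl. The weekly counts and the simple exponential model below are invented purely for illustration; the real method was clearly far more elaborate.)

            #!/usr/bin/perl
            use strict;
            use warnings;

            # Invented data: cumulative bugs found after each week of random testing.
            my @found = (120, 210, 280, 330, 370, 400);

            # Toy model: cumulative discoveries follow N(t) = T * (1 - exp(-k*t)),
            # so the asymptote T is the estimated total number of findable bugs.
            # A crude grid search stands in for the real (much fancier) fitting.
            my ($best_T, $best_k, $best_err) = (0, 0, 9e99);
            for (my $T = 400; $T <= 2000; $T += 10) {
                for (my $k = 0.05; $k <= 1.0; $k += 0.01) {
                    my $err = 0;
                    for my $t (1 .. @found) {
                        my $model = $T * (1 - exp(-$k * $t));
                        $err += ($model - $found[$t - 1]) ** 2;
                    }
                    ($best_T, $best_k, $best_err) = ($T, $k, $err) if $err < $best_err;
                }
            }

            printf "fitted total findable bugs: %d (k = %.2f)\n", $best_T, $best_k;
            printf "the %d found so far are about %.1f%% of that total\n",
                $found[-1], 100 * $found[-1] / $best_T;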


        Examine what is said, not who speaks.
        "Efficiency is intelligent laziness." -David Dunham
        "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
        If I understand your problem, I can solve it! Of course, the same can be said for you.

Re: Software Design Resources
by graff (Chancellor) on Aug 22, 2003 at 03:27 UTC
    allow software to be used in applications where any failure is simply not acceptable

    That's a helluva concept... I would take the condition "failure is not acceptable" to mean something like "completion/closure of the software project is not expected within your lifetime". ;^) Keep in mind that the "final release" of an application is not when all the bugs are fixed. It's when people stop using the application (so the remaining bugs that surely still exist will never be found, let alone fixed).

    The causes of software failure constitute an open set. They include not only things that are arguably wrong in the code design, but also varieties of data or operating conditions that no programmer could anticipate, no matter how careful or thorough the design and testing.

    Apart from that, the evolutionary facts of life, as applied to hardware, OSes, programming languages and user needs, preclude the possibility of stasis in any application. Code quality is like internet security--an ongoing process rather than a finite state to be achieved.

    Your complaints about metaphors and lack of substance are well founded, I'm sure. But how can you expect substance, "working code", stuff that can be "directly applied", etc., in the context of talking about programming in general? Either you spout generalities or you delve into the details of a specific app (where the app usually requires some amount of app-specific QA/QC)--or else you try to make some general point using some particular example, which turns out to be irrelevant to the majority of readers, so they can't "directly apply" it.

    In order to learn the best way to write code, and in order to improve your skill as a programmer, you have to write code, fix code, review and critique code, and be involved enough in each particular application domain to know what the code should be doing -- i.e. to know what users of the code need to get done. This is not a matter of having a formula or cookbook for writing code that can't break; it's a matter of being able to figure out what to do when it does break. Because it will break.

    (I know, I'm ignoring some areas of "general" software design where it really pays to learn from books, documentation, release notes, etc -- things like "how to write C code so that you don't allow input buffer overflows", "how to write cgi code so that you don't expose your server to malicious or accidental damage", and so on. But I would argue that these should be addressed in an application-specific way -- it seems unproductive to try speaking or learning about them "in general theory".)

      Thank you for your reply.

      where any failure is simply not acceptable
      ...
      That's a helluva concept

      Yes it is. However, many such situations exist today where a single bug will completely destroy a company's reputation and open them up to severe liability. A single bug in certain medical systems could easily cause many deaths. A bug in flight control software can cause (and has caused) a hundred million+ dollar mission to end in failure.

      But how can you expect substance, "working code", stuff that can be "directly applied", etc, in the context of talking about programming in general?

      Obviously you'll have to narrow the field a bit to get to the level I'm requesting, but narrowing out Brainfuck doesn't exactly eliminate a good portion of your audience. Pseudo-code, class diagrams, etc. can be applied equally to Perl, Java, Python, Ruby, C++, and almost any language in popular use today.

      The causes of software failure constitute an open set. They include not only things that are arguably wrong in the code design, but also varieties of data or operating conditions that no programmer could anticipate, no matter how careful or thorough the design and testing.

      Actually, I disagree. Provided that the supporting systems' interfaces are accurately and fully specified, and that they do not fail, the responsibility rests entirely on your code. If everyone follows that strategy then you will have a flawless system (this is obviously a lot harder at the hardware level). The task is not to complete the entire product perfectly, but to complete your software's functionality perfectly.

      In order to learn the best way to write code, and in order to improve your skill as a programmer, you have to write code, fix code, review and critique code, and be involved enough in each particular application domain to know what the code should be doing

      Yes, but what happens after that? Simply continuing to learn a fix here and there will not result in the reliability required for such systems.

        Pseudo-code, class diagrams, etc can be equally applied to Perl, Java, Python, Ruby, C++, and almost any language in popular use today.

        ... The task is not to complete the entire product perfectly, but to complete your software's functionality perfectly.

        Good points -- but it actually looks to me like the thrust here should be how to write specifications that are appropriate, adequate, bullet-proof and idiot-proof, and also how to make sure that these specs are readily translatable into the chosen programming language, complete with all necessary testing protocols -- i.e. given that the spec says "input comes from X, consisting of N bytes, etc.", the test has to say "here's what is supposed to happen when input comes from anything other than X, and/or does not consist of N bytes, and/or etc."
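
        As a concrete (if contrived) version of that spec clause, here is roughly what the matching tests might look like with Test::More. The read_record() function and its 16-byte requirement are hypothetical--stand-ins for "input comes from X, consisting of N bytes".

            #!/usr/bin/perl
            use strict;
            use warnings;
            use Test::More tests => 4;

            # Hypothetical function under test: the spec says it accepts exactly
            # 16 bytes and returns a hashref, and must die usefully on anything else.
            sub read_record {
                my ($data) = @_;
                my $len = defined $data ? length $data : 0;
                die "read_record: expected 16 bytes, got $len\n" unless $len == 16;
                my ($id, $name) = unpack 'N A12', $data;
                return { id => $id, name => $name };
            }

            # What is supposed to happen when input *is* what the spec says...
            my $good = pack 'N A12', 42, 'widget';
            is_deeply( read_record($good), { id => 42, name => 'widget' },
                       'valid 16-byte record is parsed' );

            # ...and what is supposed to happen when it is not.
            eval { read_record('short') };
            like( $@, qr/expected 16 bytes/, 'short input is rejected' );

            eval { read_record($good . 'extra bytes!') };
            like( $@, qr/expected 16 bytes/, 'overlong input is rejected' );

            eval { read_record(undef) };
            like( $@, qr/expected 16 bytes/, 'undef input is rejected' );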

        Most of the perl core documentation (and much of the extra module documentation) that I've seen has a lot of the properties that one would want for proper/robust software specifications -- the reader is told what inputs are needed or allowable for a given function/method, what it returns when used properly, and what happens when not used properly. And this is typically done using clear, simple language, not overburdened with nonce acronyms, jargon or "technical legalese", and yet not at all vague, either. I would hope that critical-impact projects would model the development of specs on such examples.

        Admittedly, the theory (or practice) of robust software design in critical-impact apps is not an area where I should try to assert any sort of personal expertise. I'll shut up now.

        I think that learning Perl, beyond reading some books, is not so difficult, since the code is nearly always open for you to read.

        If I were you, I would choose a subject that you have worked on for a long time (expert by direct experience) and, after finding the best script of this kind developed by someone that you respect, try to modify it and improve it.

        That is an active way of developing skills and style.

        You may be reluctant to change the author's style, and since the concept comes from another mind, you are going to be forced to understand that author's way of thinking.

        That is a universe to discover for your own improvement. And it is free. Just try to improve the code and then publish it, mentioning the improvements and the original author.

        Finally, even if you didn't learn a lot by re-writing the code, you are going to learn a lot from the critiques, or from the people that love your improvements :)

Re: Software Design Resources
by cleverett (Friar) on Aug 22, 2003 at 06:26 UTC

    Go work for someone like Rockwell Collins doing avionics software. Mostly what they do is test, test, test, and at the end of a long day of testing, they test some more. I know, because some of my best buddies scored jobs there, and now they do Ada in front of green screens.

    I suppose that's one way to make a living.

    Proving programs correct most probably sucks for anything non-trivial. What are you going to prove? That it follows the specs? Broken as designed would devolve to "proven worthless". So it comes back to writing good specs. Good bedside material.

    Finally, don't sell metaphors short ... metaphors are a well tested mechanism for transmitting knowledge between human beings.

Re: Software Design Resources
by chunlou (Curate) on Aug 22, 2003 at 07:03 UTC

    It seems like you're operating at an extremely low level of fault allowance--more like building an airplane than a buddy website.

    Instead of trying to write a book, I guess I'll start off with some somewhat randomly selected thoughts.

    In order to know the "quality" of your code, it's good to know how many bugs there are, which of course is unknown. Finding how many bugs are in your code is like finding how many fish of a certain species are in a particular area of an ocean. Rarely can we find out by exhaustive counting; we need a probabilistic approach instead--say, a catch-and-release approach (language borrowed from fish counting).

    Example (a rewording of something I posted elsewhere a while back): your code accepts various combinations of input. Some bugs are to be found by entering those inputs (whereas some others by load testing, etc.). Normally, all possible such combinations are too vast for testing them all to be practical, timely and productive. Instead we randomly select, say, two subsets from all such possible combinations for Tester One and Tester Two to test.

    Let T be the total unknown number of possible bugs associated with all combinations.
    Let A be the number of bugs found by Tester One.
    Let B be the number of bugs found by Tester Two.
    Let C be the number of bugs found by both Tester One and Two.
    

    Hence (let P(X) be probability of X)

    P(A and B) = P(C)          (by definition)
    P(A)P(B)   = P(C)          (independence assumption)

       A     B     C
      --- * --- = ---
       T     T     T

           A*B
      T = -----
            C

    That means the fewer bugs both Tester One and Tester Two find in common, the more likely it is that there are still a large number of unknown bugs yet to be found. Or: the more common bugs found by both Tester One and Tester Two, the more likely it is that they have found most of the bugs. The idea can be visualized with a Venn diagram:

       +----------------------------------+
       |                                  |
       |    +------------+                |
       |    |            |        T       |
       |    |    A       |                |
       |    |            |                |
       |    |     +------|-------+        |
       |    |     |  C   |       |        |
       |    +-----|------+       |        |
       |          |          B   |        |
       |          |              |        |
       |          +--------------+        |
       |                                  |
       +----------------------------------+
    

    Since A and B are only random subsets of all possible combinations, they are not going to detect all possible unknown bugs (associated with data-input).

    The key lies in C, the common area. If you look at the Venn diagram and imagine squeezing the superset T smaller and smaller, it becomes less likely that C stays small--A and B must tend to overlap.

    The whole point is to estimate the total number of bugs (again, associated with data input, not everything else, such as workload) without having to go through exhaustive testing.

    Of course, that simple estimate probably won't be statistically very valid, since bugs are not independent. But it still gives a good conceptual insight--if a bunch of independent testers tend not to find common bugs, there are probably still plenty of bugs out there.
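
    The T = A*B/C estimate above is the classic capture-and-release (Lincoln-Petersen) calculation, and it fits in a few lines of Perl. The bug IDs below are invented, just to show the arithmetic.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Invented results: the bug IDs found independently by each tester.
        my @tester_one = qw(B01 B02 B03 B05 B08 B09 B12 B14);
        my @tester_two = qw(B02 B04 B05 B07 B09 B11 B13 B14 B15 B17);

        # C = bugs found by *both* testers.
        my %seen   = map { $_ => 1 } @tester_one;
        my @common = grep { $seen{$_} } @tester_two;

        my ($A, $B, $C) = (scalar @tester_one, scalar @tester_two, scalar @common);
        die "no overlap -- too little data to estimate\n" unless $C;

        # Capture-and-release estimate of the total bug population: T = A*B/C.
        my $T = $A * $B / $C;

        printf "tester one: %d, tester two: %d, in common: %d\n", $A, $B, $C;
        printf "estimated total bugs: %.0f, so roughly %.0f still unfound\n",
            $T, $T - ($A + $B - $C);

    With those made-up numbers the two testers found 14 distinct bugs between them, the estimate is about 20 in total, so roughly half a dozen are still lurking--and the smaller the overlap, the wider that gap becomes.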


    _________________
    NOTE: It's not testing results that are most important but the process (as learning). The results are only as good as what you learn from them and how you understand the problems better and differently.

    It is the same meta-process with programming. A programmer is considered "good" not just because he writes code that works, but because he writes code that actually solves the problem pertaining to the users, not the problem the programmer finds interesting and feels like solving. He doesn't stop just because "it works."

    If the testing process only helps you answer the what (such as how many bugs) but not the why, the process is flawed.

    Flawed in what sense? The what (such as benchmarking) only tells you where you are; the why helps you predict and plan. If someone doesn't learn something new (both what and why) from a round of testing, the testing is pointless, no matter what fancy techniques were used or statistics derived. (Repeated testing without being able to pinpoint and solve anything is a symptom. Blaming the testing that fails you is like blaming your car for driving you to the wrong destination.)

      Interesting approach. I considered a similar statistical method earlier, but the problem is: what if neither tester finds a bug? Mind you, if that's the case after a month of testing, the code would probably be satisfactory for most purposes :)

        Did that happen?

        If 90% of the sidewalk is covered by potholes, what are the odds of walking on it "randomly" for a month without stepping into a pothole?


        _____________
        To be fair, how you "randomly" select something for testing or sampling purposes can be rather technical, but the basic concept remains.

Re: Software Design Resources
by adrianh (Chancellor) on Aug 22, 2003 at 09:17 UTC
    Anything on theoretical advances in proving programs would be appreciated too (math isn't a problem).

    If you're into formal proofs you should take a look at things like the Z language. However, these systems are a lot less useful than many people imagine. Even with Z, the task of proving that a program does what you think it does is hard, and the problem of ensuring that your real-world requirements match your Z code is still non-trivial. All they do is move the problem up a level.

    The common problem with all these books I've found is that they lack substance, seeming to focus almost entirely on metaphors and don't provide anything that can be directly applied.

    Personally I'd take another look at XP. I'm not sure what books you've been reading, but XP is all about directly applicable rules and practices. Rules and practices that I have found very effective at increasing code quality.

    Can anyone shed some light on the practices that allow software to be used in applications where any failure is simply not acceptable? Thank you for your responses.

    If you really mean "not acceptable" then it's all about large amounts of requirements tracking, testing and process. For example, see this article on the on-board shuttle group at Lockheed Martin--the people who write the software that runs the space shuttle.

Re: Software Design Resources
by Anonymous Monk on Aug 22, 2003 at 07:18 UTC

      Good stuff.

      It reminds me of some recurring negative experiences nonetheless--not with the material, but with the implementation.

      One, people like to toy with "best practices" if it's someone else who is going to practise them. That is, there's often a lack of walk-the-talk leadership.

      Two, fad-diet syndrome. One month it's CMM. Another, it's TQM. And later, it's UML. Nothing gets learnt, nothing gets done.

      Three, if a skill (a small unit of a larger set, or the whole thing) needs more than, say, three pages to explain to an overworked programmer, nothing will be learned.

      In order for any practice to be learnt and practised, it's pretty much mandatory to break the whole material down into pieces as small and self-contained as possible, introduce only one small piece at a time (or risk confusion), and integrate the learning into ongoing projects so that people can apply what they have just learnt to their work as soon as possible--not to a situation that may or may not happen.

      Everyone will hesitate to spend the whole night studying something that doesn't help him meet the deadline that was due yesterday.

Re: Software Design Resources
by johndageek (Hermit) on Aug 22, 2003 at 19:58 UTC
    Good questions right up to "where any failure is simply not acceptable?"

    This moves us from the difficult to the excruciatingly improbable (or impossible if I can use the word loosely).

    No fluff intended at this point.
    Assume that all of the problem definition has been done well, and that the code is written and tested by the coder to validate that it meets the specifications.

    List all possible uses.
    List all possible failure points.
    List all the uses you have not thought of.
    List the failure points you have not considered.

    Test the list above.

    Now a few guarantees need to be in place.

    All uses of said software will be run on the exact same hardware, OS, and supporting environment it was tested on. (Please note: the hardware, OS and environment will never fail in any way. No guarantee needed, since it will not fail.)

    Let us leave the ridiculous and splash our faces with a bit of reality. What say we make the specification attainable and personal: what procedure would you put in place to test software that, if it fails, means you (or, if you are a parent, your child) will die a slow, horrible death (other than old age)?

    Neat, tidy question, with definite consequences that are high enough to put most people on their toes. Now, where do we start?

    Define the requirements, the environment of use, who the users will be, the budget, the time constraints, who else can be brought in to test, and how long we can leave the software in the field with real users before the test period is said to end. Will death due to failure be enforced if the flaw is not in the software but due to hardware or environment?

    Now let's hash some of this around.

    Hardware - make it as redundant as possible. Power - back-up generators and batteries. Software - uh oh, two options here:
    1) Do we make it simple, bare-bones, and as easy as possible to spot potential errors in?
    OR
    2) Do we build the software to run across multiple hardware platforms, running validations across the platforms to check that all are in sync, and allowing the majority to rule in case of a difference in responses? Coding all statements to handle unforeseen values? The list can go on.

    Your question is not a programming question but a philosophical one, because life is fraught with failures. In what situation would "any failure simply not be acceptable"? People die all the time for stupid reasons, so that will not do. Any life activity has risks, both objective and subjective. To remove all risk of failure from a life, one must remove the life. To write a program with all possibility of failure removed is to not have a program.

    I know, this is a bunch of metaphorical fluff that cannot be directly applied, but it is a question worth some thought, because it stretches the bounds of how we could scope a project, and may help us in our attempts at quality.

    Enjoy
    dageek

    Excellence is our realm, perfection is God's

      Excellence is our realm, perfection is God's

      How much does this God person charge per hour? ;-)

        Everything.

        As the story goes: the scientists were excited--they had created life! A challenge was issued to God, and God accepted. The scientists and God met at the agreed-upon place. The contest began; God scooped up a handful of dust, and the scientists reached for a handful of dust. God paused and said, "Hold on there--get your own dust."

        Enjoy!
        John :)

Re: Software Design Resources
by mattr (Curate) on Aug 23, 2003 at 11:49 UTC
    Building fault-tolerant systems is a whole field; you are going to need to do a lot of study. It certainly touches on development methodologies such as XP, and on practices such as well-defined requirements and unit testing, redundant hardware and power, watchdog processes, and well-defined error modes (with a catch-all, well-defined way of crashing that won't kill your system)--and on hiring professionals. You still aren't going to be able to build something for aviation or a power plant with that, though. For that you are going to need to make another quantum leap: it costs a lot of money and painstaking attention to make things that good. You might consider studying embedded-device development paradigms too.

    You may get some good responses here, but I also recommend going to slashdot.org, where this sort of thing has been asked lots of times in the past (search for it). Or if you can't find it, post an Ask Slashdot. You will get a lot of replies.

Re: Software Design Resources
by zby (Vicar) on Aug 22, 2003 at 09:43 UTC
    There is no algorithm for determining that a given program is correct (or even that it ever halts). It is mathematically proven that you cannot have a general procedure directly applicable in all cases. All you can do is try, and hope that your case is one of the special cases where you have a chance of finding a correctness proof. There is no methodology.
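
    The classic argument behind that claim can even be sketched in Perl: assume a general halts() oracle existed, and the following self-referential program would contradict whatever it answered, so no such oracle can exist. (The halts() stub below is hypothetical--that is the whole point.)

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Suppose, for contradiction, someone hands us a perfect oracle:
        #   halts($code, $input) -- true iff running $code on $input ever stops.
        # This stub only pretends; no such general oracle can exist.
        sub halts { die "no general halts() can exist -- that is the point\n" }

        # The troublemaker: given its own source, it does the opposite of
        # whatever the oracle predicts about running it on itself.
        sub troublemaker {
            my ($own_source) = @_;
            if ( halts($own_source, $own_source) ) {
                while (1) { }    # oracle said "halts" -> loop forever
            }
            return;              # oracle said "loops" -> halt immediately
        }

        # Whatever the oracle answered about troublemaker run on itself, the
        # program would do the opposite, so the oracle cannot be correct.
        eval { troublemaker('troublemaker source goes here') };
        print $@ if $@;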

    Of course, I have concentrated here on just part of the whole development process.

Re: Software Design Resources
by dorko (Prior) on Aug 22, 2003 at 21:22 UTC
    I'm interested in learning more about good software design practices and quality assurance.
    It's about the process. They Write the Right Stuff details the way NASA's on-board shuttle group writes code.

    Block quoth the article:

    This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program--each 420,000 lines long--had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
    This article would lead one to believe SEI CMM Level 5 is what you are looking for. Interesting statistics about the CMM certification process can be found here.

    Cheers,

    Brent
    -- Yeah, I'm a Delt.

      This software is bug-free

      This is, of course, the reporter saying this, not the developers. They do occasionally get bugs. They also have a relentless process to find them, track down why those bugs slipped through the net, and aim to prevent the same class of bug from ever occurring again.

      They also have a very large budget.

        Correct. As I recall from the article, their developers find only 85% of the bugs in their code reviews; the testing people find the remaining 14.9%+ through very thorough testing. So the bugs are created--they're just resolved before the product ships.

Re: Software Design Resources
by artist (Parson) on Aug 22, 2003 at 03:54 UTC
    It's a wonderful opportunity for you to write something that these readings don't provide.

    We all will be interested.

    artist

      Perhaps, but I find it difficult to believe that what I've read exemplifies the very best software development practices out there.
