
Projects where people can die

by cog (Parson)
on Sep 07, 2006 at 19:16 UTC

Suppose you're given a project in which a failure can mean the loss of human lives.

Suppose you want Perl to be the language in that project.

What would be your cautions regarding the choice of Perl?

How would you go about using CPAN modules?

Replies are listed 'Best First'.
Re: Projects where people can die
by BrowserUk (Patriarch) on Sep 07, 2006 at 20:26 UTC

    I would not use Perl. Nor any language that cannot be compiled to machine code and run from ROM. Hard disks can suffer dropouts.

    In at least one machine code, a one-bit change in an opcode could change a common conditional JUMP instruction into a HALT instruction.

    A single bit change in an ASCII '0' makes it an ASCII '1' and vice versa. In Perl, that could turn

    while( 1 ) { ## Critical code }

    into

    while( 0 ) { ## Critical code that never gets run. }

    With compiled code written to ROM (PROM or EEPROM, perhaps), preferably with ECC and an embedded CRC check, testing can validate the final binary image. There are simply too many possibilities for accidental or malicious change to the source code of non-compiled and runtime-compiled languages.

    I'd use Perl to prototype it maybe, and to test it, but not for final, life-critical application code.
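
    For what it's worth, the image-check idea above is easy to sketch even in Perl, purely to illustrate the concept; a real system would do this in the boot code, and the file name, the checksum value, and the use of Compress::Zlib's crc32() here are just assumptions for the example:

        use strict;
        use warnings;
        use Compress::Zlib;    # provides crc32(); assumed to be available

        # Hypothetical file name and checksum -- for illustration only.
        my $image_file   = 'controller_image.bin';
        my $expected_crc = 0x1234_ABCD;   # recorded when the image was validated

        open my $fh, '<', $image_file or die "Cannot open $image_file: $!";
        binmode $fh;
        my $data = do { local $/; <$fh> };   # slurp the whole image
        close $fh;

        my $actual_crc = crc32($data);
        if ($actual_crc != $expected_crc) {
            die sprintf "Image check FAILED: got %08X, expected %08X\n",
                        $actual_crc, $expected_crc;
        }
        print "Image checksum OK\n";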


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      A single bit change in an ASCII '0' makes it an ASCII '1' and vice versa.

      Well, yes, it does. However, this could equally happen at runtime after the code's loaded from ((EE)?P)?ROM — are you going to guarantee the absence of the effects of cosmic rays / radiation / jam on your processor? This makes the programming language you've used irrelevant.

      The safer way to provide security is to have multiple redundant, different (many people miss this distinction) systems checking each others' results. NASA (IIRC) use three machines to perform the same navigation/guidance tasks — if one disagrees, it's deactivated. They also have a "just land this thing" computer system which can be brought up manually [1].

      Have multiple systems, written by different people, presumably in different languages, cross-checking their results. When they have different results, you have a problem — a bit like realtime testing, if you will.


      [1] I don't know if the separate computers are different... and I'm not sure of the exact numbers.

      Update: I've now re-read some of the posts further down about validating the correctness of the OS/compiler etc.... I wouldn't use Perl either. I would, however, still advocate different systems cross-checking their results.
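
      As a rough illustration of the cross-checking idea, here is a Perl sketch of a simple two-out-of-three voter. The channel subs and their values are made up; real channels would be independent implementations, ideally on separate hardware:

          use strict;
          use warnings;

          # Stand-ins for three independent implementations of the same
          # calculation (ideally written by different people, on different hardware).
          sub channel_a { 42 }
          sub channel_b { 42 }
          sub channel_c { 41 }

          # Accept the majority answer; complain about (and ignore) a lone dissenter.
          sub vote {
              my @results = @_;
              my %tally;
              $tally{$_}++ for @results;
              my ($majority) = sort { $tally{$b} <=> $tally{$a} } keys %tally;

              die "No two channels agree -- failing safe\n" if $tally{$majority} < 2;

              my @dissenters = grep { $results[$_] ne $majority } 0 .. $#results;
              warn "Channel(s) @dissenters disagree -- flag for maintenance\n" if @dissenters;
              return $majority;
          }

          my $answer = vote( channel_a(), channel_b(), channel_c() );
          print "Agreed result: $answer\n";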


      davis
      Kids, you tried your hardest, and you failed miserably. The lesson is: Never try.
        Are you going to guarantee the absence of the effects of cosmic rays / radiation / jam on your processor?

        Well, there are such things as space-rated CPUs, and in any environment/application where radiation is a hazard, they would be used along with secondary protection (lead or gold shielding)--but hardware has moving parts; it is subject to wear and tear and tolerances. Hardware fails. Disks fail. Even high-quality, brand-new disks fail. Of course, you can run extensive tests to reduce the likelihood of some failure modes, but in doing so you run the risk of increasing the likelihood of others--through wear and tear.

        In any case, there are no guarantees.

        • Maybe the computer will be hit by a crashing airliner, so you bury it underground encased in steel and concrete.
        • But then you might have an earthquake that vibrates something loose--so you suspend the computer inside its concrete coffin to isolate it from that.
        • But the power supply might get severed--so you put a generator inside the coffin.
        • But that might fail--so you add two.

        It's all about likelihood, and the most vulnerable component in most computer systems is the hard disk. That's why solid-state secondary storage is such a holy grail. Removing that from the equation just makes sense.

        With no guarantees, it's all about minimising risk. And that's about spending your money to achieve the biggest bang for your buck. Of the millions of computer users around the world, it's probable that 5 or 10% have experienced some form of disk failure. I have.

        How many have experienced CPU failure--of any kind? Of those that have, how many could be attributed to some form of radiation degeneration of the CPU (or memory)? Much harder to assess, as without extreme analysis there is simply no way to know.

        The point is that it is possible to test Perl code as thoroughly as any other code, but the additional step of repeated runtime compilation is one further possibility of failure. For non-life-critical systems, the additional risk is (in most cases) not worth the cost of eliminating. But for life-critical systems, it is not worth the risk not to.

        The safer way to provide security is to have multiple redundant, different (many people miss this distinction) systems checking each others' results.

        I'm cognisant of the technique.

        Applied to a Perl program, this would entail producing a completely separate implementation of perl. Since there are no specs--the existing sources are the spec--there is nothing against which to build such a system, let alone verify it.

        I have a memory of reading an article--possibly related to the fly-by-wire systems on Airbus aircraft--that suggested that using a single set of sources, compiled by different compilers and targeted at different CPUs, was better than producing two sets of sources in different languages. I can't find references. From memory, the rationale went that starting with a single set of sources reduced the complexity by removing the need to try to prove that two language implementations were equivalent. That somewhat unintuitive conclusion actually makes economic sense. Every reduction in complexity comes with an increase in the possibility of proof. Maybe :)


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
        ... are you going to guarantee the absence of the effects of cosmic rays / radiation / jam on your processor?

        Actually (BEGIN ANECDOTAL EVIDENCE . . .) I had a software engineering prof in college who used to work on military-spec systems. She told a story of doing a demo for some Air Force brass of some sort of avionics that was supposed to be able to work after taking fire. Right before the demo, one of the officers came up and popped out a chip from the board.

        She said it still passed . . . :)

        So yeah, there are people who engineer things for those types of environments with those kinds of constraints.

Re: Projects where people can die
by jZed (Prior) on Sep 07, 2006 at 19:38 UTC
    I'd wrap the people in an eval and have them throw exceptions rather than die :-)

    Seriously though, I'd use modules that have extensive test suites and have been used in other production environments (no bleeding edge stuff or latest releases). I'd use Devel::Cover and friends to make darn sure that all of the potentially fatal aspects of the program were testable and tested.
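
    A minimal sketch of what that looks like in practice: an ordinary Test::More test file run under Devel::Cover. The module name and dose limits are invented for this example; the point is that the failure paths are tested, and the coverage report proves it:

        # t/dose_limits.t -- module name and limits are invented for this example
        use strict;
        use warnings;
        use Test::More tests => 3;

        use_ok('My::DoseLimits');

        # Exercise the failure path explicitly, not just the happy path.
        ok(  My::DoseLimits::within_limits(1.0), 'normal dose accepted'    );
        ok( !My::DoseLimits::within_limits(9.9), 'excessive dose rejected' );

        # Then run the suite under Devel::Cover and read the branch report:
        #   perl -MDevel::Cover t/dose_limits.t
        #   cover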

Re: Projects where people can die
by Anonymous Monk on Sep 07, 2006 at 20:26 UTC
    Suppose you're given a project in which a failure can mean the loss of human lives.

    Suppose you want Perl to be the language in that project.

    What would be your cautions regarding the choice of Perl?

    How would you go about using CPAN modules?

    You don't use Perl. Perl isn't designed for life-critical applications; it's designed to make life easy for the coder. That's great when the coder is the guy you care about; in this case, it isn't. The guy whose life is on the line is the guy you care about; and you're going to formally prove that he will never die, no matter what your system does.

    That takes a hell of a lot of engineering and formal design work, and a relatively small amount of coding.

    In any highly serious (life-critical) app, you need formal design, you need formal analysis of the entire state of the system, you need a rock hard, iron-clad spec of the entire thing, and you need QA built in from the ground up.

    Coding time and effort for these sorts of projects is simply irrelevant. The effort it will take to prove every possible logical outcome of the code, and to test every possible branch path, is going to dwarf the code itself, no matter what language you write it in. The tests will take ages to run; but they will be comprehensive. The certification will take forever to get; but it will formally prove safety (to the degree that you've deemed an acceptable risk). For every line of code you write, there will be hundreds of hours of proof to ensure that that particular line won't kill anyone.

    You'll use a language that compiles directly onto the hardware you're running on, like C or Ada; you won't use any language that requires an operating system, or you'll have to certify every single line of the OS, too. You don't want to do that. Just certifying the correctness of the compiler is going to take years and cost hundreds of thousands, if not more.

    My friends work on subway controls for automatic train systems. They literally spend days debating the impacts of changes to a single function; they have to prove to all members of the team that what is proposed is correct, and they do so multiple times, at multiple levels of review, so that no one person's mistake will cause a fault in the end product. Coding is the very least of their worries; not that it's easy, but at least all the code has to do is match the spec. The spec itself has to be provably correct; and that's the hard part.

    If you're serious about hard-real time control systems, you don't use anything resembling Perl. If you really think you should use Perl, go talk to some professional engineers who build these sorts of systems, and let them change your mind.

    Perl is good for many things. Life critical apps are not one of them. CPAN doesn't enter into it.

      I'm sure I agree with you in the ideal.

      But in the realm of the practical, the US Navy chose Windows(tm) for a critical battleship control system, which crashed, leaving the battleship stranded for a short time during a wargame.

      So, there's what we should do, and what we actually do. Readers of RISKS digest are well familiar with this principle.

      In that regard, I don't consider Perl and the CPAN to be any riskier than Windows. {grin}

      -- Randal L. Schwartz, Perl hacker
      Be sure to read my standard disclaimer if this is a reply.


      Update: Yeah, for me anything at sea that is used in battle is a "battleship". Well, either that or an aircraft carrier. Shows what I know!
        It's not the same. Military enterprises are vastly different from civilian ones.

        A corporation that knowingly fails to employ proper engineering tactics could end up with its entire staff, from the CEO down to the poor schmuck who coded the thing, up on a huge string of both civil and criminal charges. It's simply not acceptable to knowingly let civilians die. That's not something corporations are allowed to do.

        It's the right of the military to get their own soldiers killed however they see fit: as decoys, as cannon fodder, to distract or confuse the enemy, or in a whole host of other ways. It's not great for morale, but it's certainly something a military is allowed to do.

        In the case you cite, the military decided that the risk to its soldiers was acceptable. That same risk would not be acceptable in a civilian context; but the military is free to sell the lives of its soldiers as richly or as cheaply as it chooses.

      You'll use a language that compiles directly onto the hardware you're running on, like C...

      You really ought to mark the sardonic parts of your post. I almost fell out of my chair.

      Yes, in one sense C compiles down to hardware (or at least the hardware instructions the VM inside the CPU provides), but I'm not sure "safety" is a word that should apply to a language that allows pointer arithmetic.

        Well, control systems have been written in *Assembly Language*; the development process, correctness by construction, and exhaustive testing are what are expected to produce correct results, not intrinsic features of the language. And if a given language feature, such as pointer arithmetic, is deemed too unsafe (or even just too unpredictable), it is simply not used.

        That said, you're totally right: Ada is much safer than C for the types of errors you mention, and thus more widely (but not exclusively) used for such applications.

Re: Projects where people can die
by punch_card_don (Curate) on Sep 07, 2006 at 19:54 UTC
    Back-up systems.

    Types of computing that can result in death:

    • some kind of controller of something physical
    • a system that produces data that people's exposure to something is based on
    • a system that produces information used to decide if something or somewhere is safe
    In the latter two cases, the calculation is usually done off-line, with time for reflection before action. What's needed are independent back-up calculations to corroborate the first one.

    In the first case, controllers of physical systems, it's controller malfunction that is the danger. You must have either back-up controllers that monitor the primary and can detect malfunction and take over, or else security measures that physically prevent the system from doing anything dangerous even if the controller instructs it to do so.

    In other words, if people's lives are really at stake, it's back-up systems you want to demand. Personally I'd ask for one in Perl and another in another language on a separate box.
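
    For the controller case, here is a toy Perl sketch of the monitoring half of such a back-up arrangement; the heartbeat file, the timeout, and the take_over() behaviour are assumptions for the example, not a real design. The back-up watches the primary's heartbeat and steps in when it goes stale:

        use strict;
        use warnings;

        # Hypothetical heartbeat file that the primary controller touches every second.
        my $heartbeat_file = '/var/run/primary.heartbeat';
        my $max_silence    = 3;    # seconds of silence before the back-up assumes control

        while (1) {
            my $mtime = (stat $heartbeat_file)[9] || 0;
            my $age   = time() - $mtime;
            if ($age > $max_silence) {
                warn "Primary silent for ${age}s -- back-up taking over\n";
                take_over();
                last;
            }
            sleep 1;
        }

        # Placeholder: a real back-up would assume control of the plant here,
        # or trip a hardware interlock that forces it into a safe state.
        sub take_over {
            print "Back-up controller active\n";
        }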




    Forget that fear of gravity,
    Get a little savagery in your life.

      I strongly concur with the idea of systemic redundancy. In the case of E911 location systems, for example, some of them are programmed with the notion of fallbacks, such that the appropriate Public Safety folks get a reasonably precise location where it is available, and a less precise one (along with information about its precision) when the most precise location is not available, continuing to fall back to less and less precise information.

      In general, if my life were at stake, I would prefer that your system be cross-checked by a robust set of independent production processes (and human agents) to maximize my chances (in addition to an exhaustive test suite).
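
      A tiny Perl sketch of that fallback idea — the source names and precisions below are invented, not taken from any real E911 system — trying the most precise location source first and degrading gracefully while always reporting how precise the answer is:

          use strict;
          use warnings;

          # Hypothetical location sources, most precise first.  Each returns a
          # location string, or undef when that source is unavailable.
          sub gps_fix      { undef }                    # GPS unavailable in this example
          sub triangulate  { '45.5231 N, 122.6765 W' }
          sub tower_sector { 'tower 88, sector 12' }

          my @sources = (
              { name => 'handset GPS',          precision_m => 10,   lookup => \&gps_fix      },
              { name => 'cell triangulation',   precision_m => 300,  lookup => \&triangulate  },
              { name => 'serving tower sector', precision_m => 3000, lookup => \&tower_sector },
          );

          for my $src (@sources) {
              my $loc = $src->{lookup}->();
              next unless defined $loc;       # fall back to the next, less precise source
              printf "Dispatch location: %s (via %s, roughly +/- %d m)\n",
                     $loc, $src->{name}, $src->{precision_m};
              last;
          }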

Re: Projects where people can die
by eyepopslikeamosquito (Archbishop) on Sep 07, 2006 at 20:58 UTC
Re: Projects where people can die
by perrin (Chancellor) on Sep 07, 2006 at 21:21 UTC
    What about when you're working on a project and you want to kill somebody? Would you use Microsoft Project? How would you assign that person's tasks automatically to someone else after clubbing them to death with a copy of "Code Complete"?
Re: Projects where people can die
by tilly (Archbishop) on Sep 08, 2006 at 03:46 UTC
    To my eyes the ethical question is, Do I think that this system, written in Perl, will be an improvement? If it is, then I'm willing to go ahead with it, and I'm willing to accept responsibility for potentially killing people. (I would, of course, be extremely cautious, insist on code reviews, pay a lot of attention to testing, etc, etc, etc. I'd also be very cautious about quality control with external modules, etc. If you're working somewhere that deals with these kinds of issues, they undoubtedly have procedures. Take them seriously and follow them.)

    Then again I'm married to a doctor. That does change one's perspective. You learn that people are killed all of the time by accident. But don't let that paralyze you because people also die because of hesitation. And sometimes you simply have to gamble with someone's life. If you're unable to live with that, there are some jobs you should not have.

    Oh, and I've also heard enough horror stories that I'm more willing to accept imperfection. That's why I cited improvement as an ethical standard. If you kill 50 people because of a bug, but you've saved 150 because the new system works better than the old, then you're up 100 lives. It would be better to be up 150, but 100 lives saved is nothing to sneeze at.

Re: Projects where people can die
by b10m (Vicar) on Sep 07, 2006 at 19:37 UTC
Re: Projects where people can die
by swampyankee (Parson) on Sep 07, 2006 at 22:16 UTC

    Even bad directions can be life critical (somebody u-turned into oncoming traffic when his GPS route-finding software told him to), so "life critical" can involve almost any kind of software. Presuming you're limiting it to software where failure is likely to be immediately lethal, with no user recourse, I'd (gag, choke) probably use Ada.

    The problem is less the language than the programming discipline applied to the product. I remember reading that NASA estimated changing one line of code in Apollo mission software cost about 4 000 USD. In 1965. So, the costs associated with this level of discipline are most assuredly not trivial, and are likely to be comparable regardless of language.

    Perl is not going to be running on embedded systems (for a few years), so the life-critical projects are unlikely to involve something like your car's anti-lock brakes. It may involve emergency call systems (911 systems in the US) or air traffic control or, God help us, national command and control systems. In any case, the last three are likely to be very large, very complex projects, regardless of language.

    One famous case -- a medical risk of computers -- involved a badly designed interface, but not any kind of "real-time" programming.

    emc

    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Albert Einstein
Re: Projects where people can die
by bigmacbear (Monk) on Sep 08, 2006 at 02:36 UTC

    Several other people have said "You don't use Perl." One or two have articulated why you don't use Perl, so I thought I'd bring together some of the reasons:

    • No operating system is designed to be reliable enough for use in life-safety applications, and all disclaim (or ought to disclaim) their fitness for such purposes in their end-user license agreements.
    • The licenses under which Perl is distributed expressly disclaim all liability, including fitness for a particular purpose. That is the only possible way to provide software free (as in beer).
    • This means that anyone using off-the-shelf software for life-safety-critical applications assumes the full liability by using such software for purposes it cannot be guaranteed to safely fulfill.
    • Some projects just should not be automated, and some problems cannot be solved with computers. In the few places where computers are trusted with life-safety issues, they are invariably custom-built from both a hardware and software perspective, and do not use a general-purpose operating system of any sort.

    Enough said.

Re: Projects where people can die
by hardburn (Abbot) on Sep 07, 2006 at 22:07 UTC

    In a life-critical application using an interpreted/VM language, you don't just want to ensure the correctness of your own code. You also have to ensure the correctness of the interpreter/virtual machine. In the case of Perl, the internal code is so full of feaping creaturism and hysterical raisins that verifying correctness is nearly impossible.

    However, that does not mean that VMs in general are a bad idea for such an application. One can start by writing a VM with a stripped-down set of opcodes which are designed so that the resulting programs are easy to verify. The VM need not be complex (complexity would only make the verification process harder) and can be integrated into a ROM chip (to borrow BrowserUK's idea).
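
    As a toy illustration of that idea (not a design — a real verifiable VM would be specified far more carefully), here is a Perl sketch of a stack machine with only four opcodes and no branches, small enough that every state transition can be enumerated and checked by hand:

        use strict;
        use warnings;

        # Toy stack machine: four opcodes, no branches, no I/O beyond a final result.
        my %ops = (
            PUSH => sub { my ($stack, $arg) = @_; push @$stack, $arg },
            ADD  => sub { my ($stack) = @_; push @$stack, pop(@$stack) + pop(@$stack) },
            MUL  => sub { my ($stack) = @_; push @$stack, pop(@$stack) * pop(@$stack) },
            HALT => sub { die "HALT\n" },
        );

        sub run {
            my @program = @_;
            my @stack;
            eval {
                for my $insn (@program) {
                    my ($op, $arg) = @$insn;
                    die "illegal opcode '$op'\n" unless exists $ops{$op};
                    $ops{$op}->(\@stack, $arg);
                }
            };
            die $@ if $@ && $@ ne "HALT\n";   # real errors propagate; HALT is normal
            return $stack[-1];
        }

        # (2 + 3) * 4 == 20
        print run( [ PUSH => 2 ], [ PUSH => 3 ], ['ADD'], [ PUSH => 4 ], ['MUL'], ['HALT'] ), "\n";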


    "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

Re: Projects where people can die
by adrianh (Chancellor) on Sep 08, 2006 at 07:02 UTC
    Suppose you want Perl to be the language in that project.

    I would only suppose that if I thought that Perl would be the best language for that project - which would mean I had already assessed the risks/benefits.

    Don't pick a language because you "want" to. Pick one that's best for the job.

      Do note that the choice of languages may be limited due to the technical team involved :-)
        Do note that the choice of languages may be limited due to the technical team involved :-)

        If a developer came to you and said:

        "Hi - I'm developing a life critical piece of software. I'm not going to build the best solution possible because my developers only know language Foo".

        Would you trust them to do a good job?

Re: Projects where people can die
by zentara (Archbishop) on Sep 08, 2006 at 12:07 UTC
    How about putting it into a real-world situation, and analyzing it for real-world things like a cost-benefit analysis, etc.

    Suppose we want to monitor dangerous gases in a deep mine. The sensors and wires are pre-determined and our job is to make a monitoring system to deal with them.

    The first thing we run into is cost. Three redundant computers running Perl are easily obtainable and cheap; whereas the radiation-hardened ROM monitors are expensive and need to be special-ordered. This factor can be very important in cash-strapped economies, and may lead to people NOT replacing a faulty ROM unit due to cost... (after all... 2 monitors should be as good as 3, right? And the current budget only allowed 1 spare unit, and we used it last week).

    Then we have the problem of on-site modifications. Like what happens when it's discovered that the sensors change output levels with age, and need calibration on-site. That's simple to do with the Perl computers, but with a ROM system, they will need to put a post-it note on it, saying every reading between .05 and 6.3 must be adjusted by log(x) percent. Or how about when the manager is going out of town, and wants realtime email updates messaged to him? Oops, the ROM didn't allow for that.

    I think you can see where I'm going with this. There are so many situations where the infrastructure is cash-strapped, yet human lives are at risk. Furthermore, cheap computers and Perl are within the budget, but custom ROM chips are not. So the Perl solution would probably be very useful in these situations, and would save many lives, even though it has drawbacks.
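
    To make the calibration point concrete, a small Perl sketch of the "post-it note" rule above turned into code the monitoring system could apply directly (the numbers are just the ones from the example, not real calibration data):

        use strict;
        use warnings;

        # The post-it-note rule from above, as data the monitoring code can apply:
        # readings between 0.05 and 6.3 are adjusted by log(x) percent.
        sub calibrate {
            my ($raw) = @_;
            if ($raw >= 0.05 && $raw <= 6.3) {
                return $raw * ( 1 + log($raw) / 100 );
            }
            return $raw;
        }

        for my $raw (0.04, 0.5, 2.0, 6.0, 7.5) {
            printf "raw %.2f  ->  calibrated %.4f\n", $raw, calibrate($raw);
        }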


    I'm not really a human, but I play one on earth. Cogito ergo sum a bum
      Then we have the problem of on-site modifications.

      ...which overload the capacity of your system, causing it not to recognize that the gas levels have significantly increased in the last N minutes because it was sending out email to the PHB who wanted the change made to the control system in the first place.

      Of course, this knee-jerk reaction is based only on limited reading, and not real-world experience in the subject. I am not sure that I would have the needed level of confidence in the tools or my skill to be on that project :)

      --MidLifeXis

Re: Projects where people can die
by swampyankee (Parson) on Sep 08, 2006 at 17:42 UTC

    As I mentioned in my response, there are different kinds of life critical software.

    For something like a 911 (police|fire|ambulance dispatch) system, Perl would probably be viable, as the system is going to be quite large and complex, requiring access to databases (to locate where and from whom calls originate, locations of firehouses, availability of personnel), GIS (which is the best firehouse, directions to the victim), and the ability to handle many calls, and route them to many human dispatchers. While this is certainly a life-critical system, it's certainly not in the same category as, say, the software for a digital flight control system or ABS.

    I'm not sure what language I'd use for programming a life-critical embedded system, such as a DFCS or ABS; this is extremely far out of my range of expertise. I know they've been done, successfully, in assembler, Fortran, Jovial, Coral, Ada, and C. I know of embedded (but not necessarily "life-critical") systems programmed in Forth (which was created for this purpose) and Basic. I believe the Occam language was developed as a verifiable language for writing life-critical software. I've no idea if it's being used for this.

    I think that the discipline around the entire project is far more important than the selected language. A lot of early life-critical software was written in assembler. Any sensible programming language gives enough rope for a programmer to (figuratively) hang (him|her)self, even Ada. Assembler provides enough rope to hang everybody in the neighborhood.

    emc

    At that time [1909] the chief engineer was almost always the chief test pilot as well. That had the fortunate result of eliminating poor engineering early in aviation.

    —Igor Sikorsky, reported in AOPA Pilot magazine February 2003.
Re: Projects where people can die
by lin0 (Curate) on Sep 22, 2006 at 21:23 UTC

    Update: URLs turned into links

    I feel that I am getting to the discussion late, but anyway, here are my 2 cents:

    If you are writing code for critical applications, the first thing you need to do is to make it as specific as possible: only one task!

    Then you have to validate your application. Please note that you also have to validate:

    • Your operating system
    • Your compiler (Perl)
    • Your drivers
    • Any library that your application is using (modules from CPAN)
    • Even the processor on which your application is supposed to run (remember that some time ago 6 million Pentium units were shipped with a flaw in their floating-point arithmetic) (see for example: [1])

    In short, you have to make your code as specific as possible and you have to make sure you have a strong validation system to ensure the quality of the software used in your application.

    For general guidelines related to the development of software for medical applications, I recommend having a look at the FDA (US Food and Drug Administration) guidelines in [2,3,4,5].

    lin0

    1. http://www.fda.gov/cdrh/comp/guidance/fod456.pdf
    2. http://www.fda.gov/cdrh/ode/guidance/585.html
    3. http://www.fda.gov/cdrh/comp/guidance/938.html
    4. http://www.fda.gov/cdrh/ode/guidance/416.html
    5. http://www.fda.gov/cdrh/ode/guidance/337.html
