Dan Bernstein, author of qmail, has written an article entitled Some thoughts on security after ten years of qmail 1.0 (PDF). While it doesn't once mention Perl, it is a fascinating read on how to reduce the number and impact of bugs in our code.

One interesting point that he makes is that we often sacrifice security in the pursuit of speed:

Using an interpreter to impose simple data-flow restrictions on the address-extraction code would make bugs in the code irrelevant to security—a huge benefit. However, most programmers will say “Interpreted code is too slow!” and won’t even try it....

Anyone attempting to improve programming languages, program architectures, system architectures, etc. has to overcome a similar hurdle. Surely some programmer who tries (or considers) the improvement will encounter (or imagine) some slowdown in some context, and will then accuse the improvement of being “too slow”—a marketing disaster.

I don’t like waiting for my computer. I really don’t like waiting for someone else’s computer. A large part of my research is devoted to improving system performance at various levels... But I find security much more important than speed. We need invulnerable software systems, and we need them today, even if they are ten times slower than our current systems. Tomorrow we can start working on making them faster.

I predict that, once we all have invulnerable software systems, we’ll see that security doesn’t actually need much CPU time. The bulk of CPU time is consumed by a tiny fraction of our programs, and by a tiny fraction of the code within those programs; time spent on security verification will be unnoticeable outside these “hot spots.”


Re: [OT] Some thoughts on security after ten years of qmail 1.0
by kyle (Abbot) on Nov 06, 2007 at 21:24 UTC

    The author tries to give the impression that qmail has been through a real usage workout (used at very large sites, has lots of installations), and I certainly can't say for sure otherwise. Still, I'm not convinced that qmail has had the eyeballs and testing to justify calling it as solid as the author claims.

    I had a look at the Debian popularity contest numbers. There are four times more sendmail installations than qmail, and there are eight times more Postfix installations than sendmail. All of these are well behind exim, which is Debian's default.

    The author tries to argue that minimizing privileges of trusted code is a distraction. He basically says that if it's trusted and it has a bug, it's still a security problem. That's true as far as it goes, but I think it misses the point of minimizing privileges. The point is to reduce the severity of problems created by bugs. This is similar to how I reduce the severity of my daughter's injuries by letting her use safety scissors rather than a chainsaw. Sure, she could still put an eye out if she really tries, but if I've saved her from losing a limb, I think it's a good policy. Sometimes you can't (or don't have time to) fix all the bugs in a program, but you can make the bugs it has do less damage.

    He has good things to say about being secure but less efficient, as clinton has already highlighted.

      I guarantee that qmail has been through the workout that he describes.

      I have no idea what the current stats are, but his claim is based on things like this survey he did in late 2001. Based on those numbers, qmail certainly was widely used, particularly at very busy sites. (Particularly Critical Path.) To the best of my knowledge it still is popular for busy sites, though it is not widely deployed among home users. (Which is what the Debian popularity contest shows.) Furthermore, his licensing makes it much less popular for a system like Debian. Not only is his software not free by Debian standards, but he does not allow vendors to change his filesystem layout for qmail. That reduces acceptance quite a bit.

      Also given the security claims he made for it then, and the reputation he has, I guarantee that his codebase has been audited. (In fact I personally know more than one person who has audited his code.) That he would only have 4 bugs reported is (by industry standards) nothing short of astounding. Even though further review might find more bugs, I'm confident it wouldn't find many more. And it would certainly not find anything close to the number of problems that there are in sendmail.

      In short, when it comes to security, Dan Bernstein has a well-deserved reputation as an overbearing obnoxious jerk. But he has earned the right to be one, and you should take him seriously.

Re: [OT] Some thoughts on security after ten years of qmail 1.0
by Gavin (Archbishop) on Nov 06, 2007 at 19:58 UTC
    "invulnerable software systems"

    I don't think invulnerable software systems will ever be a reality.
    Not, at least, while humans have anything to do with their design, coding, implementation, or operation.
Re: [OT] Some thoughts on security after ten years of qmail 1.0
by zshzn (Hermit) on Nov 07, 2007 at 08:08 UTC
    Bernstein, as you said, did not mention Perl, but another theme in his paper is very relevant to Perl and to programming languages in general: the theme of insecure practices being limited by compilers and language semantics.
    Most programming environments are meta-engineered to make typical software easier to write. They should instead be meta-engineered to make incorrect software harder to write. An operation that is not exactly what I normally want should take more work to express than an operation that is exactly what I normally want. There are occasions when I really do want arithmetic modulo 2**32 (or 2**64), but I am happy to do extra work on those occasions.
    He brings up a good point there. In this example, when two unsigned numbers are added, the compiler can tell that an overflow occurred whenever the result is smaller than one (or both) of the operands. Is the behavior we currently get, silent overflow, really what we would prefer? And if not, why aren't our compilers making more assertive choices in a case they can detect deterministically?
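As a sketch of the check in question: for unsigned arithmetic, a wrapped sum is always smaller than either operand, so detecting overflow is a single comparison. The function name here is illustrative, not from the paper:

```c
#include <stdint.h>
#include <stdbool.h>

/* Store a + b in *sum and return true only when the addition does
 * not wrap around. For unsigned integers, wraparound has occurred
 * exactly when the result is smaller than an operand. */
bool checked_add_u32(uint32_t a, uint32_t b, uint32_t *sum)
{
    uint32_t r = a + b;   /* well-defined modulo 2**32 */
    if (r < a)            /* wrapped (r < a iff r < b here) */
        return false;
    *sum = r;
    return true;
}
```

Modern GCC and Clang expose exactly this check as the `__builtin_add_overflow` intrinsic; Bernstein's point is that the safe form should be the default, not the one that takes extra work.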

    Bernstein doesn't talk about talent in the area of writing bug-free lines of code; he instead talks about making choices that limit factors such as the total amount of code and the amount of trusted code. Extending this idea, he advocates programming languages that make it more difficult to write incorrect code, and compilers that perform extended checks. In the same sense that strict and warnings would be unnecessary pragmas in Perl if our code were perfect, yet we culturally support their use anyway, we could welcome the same helping hand from C compilers.

    Nowadays I am much more insistent on programming-language support for smaller-scale partitioning, sane bounds checking, automatic updates of "summary" variables (e.g., "the number of nonzero elements of this array"), etc. By "sane bounds checking" I don't mean what people normally mean by "bounds checking," namely raising an exception if an index is out of range; what I mean is automatic array extension on writes, and automatic zero-fill on reads. (Out of memory? See Section 4.2.) Doing the same work by hand is silly.
    Languages that provide strong, well-integrated support for these features, and make them easy to use, can greatly limit security vulnerabilities in those areas.
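The "sane bounds checking" Bernstein describes, extend on write, zero-fill on read, can be sketched in a few lines of C. The type and function names below are illustrative, not from qmail:

```c
#include <stdlib.h>
#include <string.h>

/* A growable int array with the semantics the paper describes:
 * reads past the end yield 0, writes past the end extend the
 * array, zero-filling any gap. */
typedef struct {
    int    *data;
    size_t  len;   /* slots currently allocated */
} dynarray;

int dynarray_get(const dynarray *a, size_t i)
{
    return i < a->len ? a->data[i] : 0;   /* zero-fill on read */
}

int dynarray_set(dynarray *a, size_t i, int v)
{
    if (i >= a->len) {                    /* extend on write */
        size_t newlen = i + 1;
        int *p = realloc(a->data, newlen * sizeof *p);
        if (!p)
            return -1;                    /* out of memory */
        memset(p + a->len, 0, (newlen - a->len) * sizeof *p);
        a->data = p;
        a->len  = newlen;
    }
    a->data[i] = v;
    return 0;
}
```

Doing this by hand at every call site is exactly the "silly" repeated work the paper complains about; done once, behind an API, it removes a whole class of out-of-bounds bugs.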

    We are fundamentally less likely to create software vulnerabilities due to the simple revolution of using an SvPV (a structure containing an integer for length and a character array) instead of a null-terminated character array, and only allowing userland (i.e. non-XS) access through a provided API. We can do something right and partake in enforced code reuse.
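A minimal illustration of that revolution, with struct and function names that are hypothetical (loosely modeled on the pointer-plus-length layout behind Perl's SvPV, not Perl's actual internals):

```c
#include <stdlib.h>
#include <string.h>

/* A counted string: the length travels with the buffer, so no
 * operation ever needs to scan for a '\0' or trust one to exist. */
typedef struct {
    char   *pv;   /* byte buffer */
    size_t  cur;  /* bytes in use */
} str;

/* Append n bytes of src to dst, growing dst. The bounds arithmetic
 * is explicit, instead of depending on terminator placement. */
int str_cat(str *dst, const char *src, size_t n)
{
    char *p = realloc(dst->pv, dst->cur + n);
    if (!p)
        return -1;
    memcpy(p + dst->cur, src, n);
    dst->pv   = p;
    dst->cur += n;
    return 0;
}
```

Every overflow that `strcpy` or `strcat` can cause comes from trusting a terminator that the length field makes unnecessary; funneling all access through an API like this is the "enforced code reuse" in question.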

    Additionally, our choice of programming language frames our focus in programming. The issues we need to focus on when programming in C are not entirely the same as the issues brought to our attention in Perl.

    This may all seem obvious, but we still believe strongly in the responsibility of the individual to code well regardless of the environment he is in. That is ideal, but not realistic. People make mistakes. Limiting those mistakes through code reuse, and finding them through peer review, are benefits provided by progressive languages that don't require you to continually resculpt your wheel. People should be held responsible for the environment choices they make: what those choices do for their security karma, regardless of how well they code an average statement.

    Languages, too, need to be held responsible for the types of code problems they encourage, because they frame the debate: it is at the underlying language and system levels that the security context is first defined.

    UPDATE: Fixed quoting as blazar suggested