OpenBSD Journal

OpenBSD and auditing (Research question)

Contributed by Dengue from the maximum-uptime dept.

Arrigo Triulzi writes: "I am helping out a friend working on a software reliability project, and we've been having lots of discussions regarding reliability and Open Source software. I have been suggesting OpenBSD as a good example of how software can be made reliable by auditing. Unfortunately, I can find very little about the OpenBSD project (I did search a bit all over the place but found very few "hard facts").

It would be marvelous if someone could give me some pointers explaining how the process works and, in particular, hard data about things like bugs found. I have a distinct recollection of a message regarding the *printf() audit but just can't find it anywhere on the web. This is a serious research project; some pages about it are available at the Centre for Software Reliability of City University in London, UK."



Comments
  1. By Anonymous Coward () on

    I have a feeling a lot of what is found stems from a) intuition and b) luck. OpenBSD strikes me as somewhat minimalist, which makes problems easier to decipher.

    Proactive might mean, "I read a lot of source." If you read a lot of source, at some point you may find a bug. You can apply this information to other parts of the source tree.

    OpenBSD's core team also strikes me as close-knit. Each member tends to be responsible for a specific area of the project -- and they collaborate. I doubt it is like pulling teeth to get other members to look into a potential problem.

    I assume they get back to each other quickly as well -- as an outside developer I usually receive a response within a day. So I doubt potential issues are put off for too long.

    On that note, OpenBSD's core team tends to be made up of seasoned developers, which lends itself to figuring out the best possible solution.

  2. By Arrigo Triulzi (arrigo@maths.qmw.ac.uk) on

    OK, so the mechanism is sort of clear to me: a tight group of individuals, quick feedback, and, I assume, a fair amount of determination to get through what might well be a rather boring job (I frankly don't think I'd like to spend my days trawling through the usage of *printf() all over an OS, so I respect their work even more).

    Now for the real question: are there figures? Is there anyone in the core team who you think could take the time to reply to an e-mail on this subject? The last thing I want is to sound like a time-waster to them, so any suggestion or data is welcome.

    Thanks.

    Arrigo

  3. By Marc Espie (espie@openbsd.org) on

    Focus plays a large part. Potential security problems have really quick turn-around times.
    In the cases I've seen, it usually takes less than 6 hours between the time a security issue is noticed and the commit that fixes it. It is a question of priorities.

    Being pro-active is important too: fix potential problems. Don't sit on your ass trying to prove that the problem is exploitable. Going further: provide better interfaces where conventional practice is obviously error-prone (to wit: strlcpy).
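    The strlcpy interface Espie mentions is OpenBSD's safer replacement for strcpy/strncpy. As a sketch of its contract (a minimal reimplementation for illustration, not the OpenBSD libc source):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Minimal sketch of the strlcpy(3) contract, not the libc source:
     * copy at most dstsize-1 bytes, always NUL-terminate when
     * dstsize > 0, and return strlen(src) so truncation is detectable.
     */
    size_t
    my_strlcpy(char *dst, const char *src, size_t dstsize)
    {
        size_t srclen = strlen(src);

        if (dstsize > 0) {
            size_t n = srclen < dstsize - 1 ? srclen : dstsize - 1;
            memcpy(dst, src, n);
            dst[n] = '\0';      /* termination is unconditional */
        }
        return srclen;          /* ret >= dstsize means truncation */
    }

    int
    main(void)
    {
        char buf[8];

        /* strcpy(buf, "far too long a string") would overflow buf;
         * the strlcpy pattern truncates safely and reports it. */
        if (my_strlcpy(buf, "far too long a string", sizeof(buf)) >= sizeof(buf))
            printf("truncated to \"%s\"\n", buf);
        return 0;
    }
    ```

    The point of the design: unlike strncpy, termination is guaranteed, and the single return value makes the error check a one-liner at every call site.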

    Look for patterns: a distressingly large number of security issues come from the same basic errors. Not knowing the API you're programming against (notice how OpenBSD manpages often get new sample code and extra caveats about common programming errors?).
    Obviously fishy code that's almost impossible to read, looks too clever to be good, and is plain wrong 99 times out of 100.

    And your standard buffer overflows and race conditions, a distressing number of times...

    Many runs over the source tree happen as follows:
    somebody notices something very fishy somewhere in a program, and a bad programming pattern is identified. A full-scale search of the whole source tree is then run, which usually identifies at least ten instances of the same pattern.
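    The *printf() audit Arrigo recalls has exactly this shape: the fishy pattern is an unbounded sprintf() into a fixed buffer, and the tree-wide fix replaces it with a bounded snprintf() whose return value is checked. A hypothetical before/after (the function name is illustrative, not taken from the actual audit):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Before: nothing bounds the write into buf.
     *   char buf[32];
     *   sprintf(buf, "user=%s uid=%d", name, uid);  // overflows on a long name
     */

    /* After: snprintf() is told the buffer size and returns how much it
     * would have written, so truncation can be detected by the caller. */
    int
    format_user(char *buf, size_t bufsize, const char *name, int uid)
    {
        int n = snprintf(buf, bufsize, "user=%s uid=%d", name, uid);

        /* n < 0 is an output error; n >= bufsize means truncation. */
        return (n >= 0 && (size_t)n < bufsize) ? 0 : -1;
    }

    int
    main(void)
    {
        char buf[32];

        if (format_user(buf, sizeof(buf), "espie", 1000) == 0)
            printf("%s\n", buf);
        return 0;
    }
    ```

    Once the safe form is settled, a mechanical sweep for the remaining sprintf() call sites is what turns one bug report into ten fixes.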

    Nice `success stories' about incredibly hard-to-find bugs don't happen. Most security holes are obvious in retrospect. Audits are tedious. The benefits are huge. The alternative is not morally correct.

    In the end, secure code simply is code without bugs. Written clearly. With a tendency to avoid cuteness, and to be plain and boring for 90 lines out of 100. It's the remaining 10 lines that make the code go fast and be smart anyway.

  4. By Ed Trada-Oteh () ed@trada-oteh.com on trada-oteh.com/~ed

    I asked on the misc list whether this audit was done by eyeball or with some tools, and was mostly ignored, apart from a terse "eyeballed" response.

    It would seem that the "Open" in "OpenBSD" certainly doesn't apply to being open about their methodology.

    There have been previous 'accusations' of hypocrisy in the 'we fix our bugs but don't disclose' policy, leading some people to see OpenBSD as hiding its vulnerabilities for PR reasons. The OpenBSD response is that it's simply 'not reasonable to disclose every bug fix which may or may not be exploitable'.

    So, a policy of not revealing any specific methods, tools, numbers, or details about the source code audit basically leaves the OpenBSD project as a sort of not-open open-source project.

    Good luck.

Credits

Copyright © - Daniel Hartmeier. All rights reserved. Articles and comments are copyright their respective authors, submission implies license to publish on this web site. Contents of the archive prior to as well as images and HTML templates were copied from the fabulous original deadly.org with Jose's and Jim's kind permission. This journal runs as CGI with httpd(8) on OpenBSD, the source code is BSD licensed. undeadly \Un*dead"ly\, a. Not subject to death; immortal. [Obs.]