Hacker News

Security is nothing but an obstacle course. The only time security is not merely an obstacle course is when you completely destroy the thing you're trying to protect.

This is fundamentally something everyone needs to understand about computer security: it's all about creating bigger and harder obstacles (including literal, physical obstacles). You can never absolutely secure something while it exists.



That's a fine argument for rejecting all of engineering. An asteroid could always strike the bridge we're building! Why bother making it structurally sound?


I'm not sure why you're treating my comment as an argument to reject security (or to reject anything other than your specific wording choice). I'm just saying that security, fundamentally, is about making obstacles.

There's nothing implied there regarding the importance/unimportance/quality of those obstacles. I don't think understanding its fundamental nature takes anything away from security.


That's really not true. I think you're conflating cryptography with security. In crypto, I suppose you could consider algorithms that increase attacker cost as "obstacles", though I think the word loses meaning when the "obstacle" involves summoning more CPU cores than there are atoms in the solar system.

In practical security, closing a buffer overflow, sanitizing inputs, and proving code paths are not "obstacles". There are a finite number of vulnerabilities in any piece of code.


In practical security, closing a buffer overflow, sanitizing inputs, and proving code paths are not "obstacles".

I think you're being a bit picky over wording: an "obstacle" is just something which makes it harder for someone to break your system, examples of which are closing buffer overflows and sanitizing inputs.

You could, possibly, use it as an argument against engineering, but I think you'd be wrong, the same way someone arguing "we're all going to die anyway, so let's get it over with now" is wrong: the point is to make the most of what you do have.


This isn't just pickiness. This is two totally conflicting mindsets about security. I'll be ungenerous and say that mine, which rejects the concept of obstacle courses, is the practitioner's mindset.

We don't let things ship when we know they have exploitable vulnerabilities. We recognize that there are known unknowns and unknown unknowns, and we try to mitigate the former. But the known knowns? Come on. Just turn SSL on. The Javascript rewriting hack is not hard.


I'm not referring to cryptography, I'm referring to security in the general sense. Security is fundamentally about placing obstacles between your assets and those who you don't want to use/alter/control those assets.

When you design software, you have all sorts of assets you'd like to protect, even for the simplest cases. Administrative access, general data, CPU cycles, workflow, network bandwidth, and maybe even passwords, secrets, etc. Your job (with a security design hat on) is to place the firmest obstacles you can between those assets and those you don't want to use them (usually prioritized by a combination of probable risk, severity of loss, and cost to protect).

Thinking in terms of assets you need to protect is a highly recommended way to design securely, regardless of whether you're talking about military, bodyguards, buildings, software, cryptography, or systems. You then proceed to put up your best obstacles, and shore up/rebuild/redesign those obstacles when they are known to be compromised or compromisable, all the while ensuring that the system operates as efficiently as you can reasonably manage.

Fixing "a buffer overflow" is just a method for repairing the software obstacles you already had in place. Sanitizing inputs is, indeed, adding an obstacle for attackers. "Proving code paths" takes into account that your obstacles remain in place to protect your data with certain assumptions about the systems involved.

If you're interested in security engineering in general, I recommend picking up some of the latest (last 5-10 years) literature on Threat Modeling as it really can change one's perspective on what security is all about. A lot of engineers I've worked with think of it in terms of "using the best string library", when it's more about designing systems to protect your assets as best as you can and still get your job done.

There are all sorts of practices, tools, fixes, libraries, etc that help improve existing "bricks" in your obstacles, but they're absolutely no replacement for actually understanding what you're trying to protect and having a strategy for preventing those assets from being compromised. The best security comes from designing so that you never put your assets at risk in the first place, rather than from using the right libraries to, for example, manage them during transit.


Man, this sucks. This is a stupid semantic argument ("obstacle" versus "control" versus "constraint"), but it's also what's wrong with a lot of crappy security out there today.

An obstacle is something you overcome. I don't think you mean it that way, but that's what a lot of people think. So, for instance, a Javascript hashing scheme backed by a Greasemonkey script that tries to verify that passwords actually make it through the hash function. That's good, because it "adds obstacles". The security of the system is the sum of the value of all the obstacles.

No. The security of the system is inevitably the value of the 1-2 most important controls and constraints. Think of it like the difference between an O(log n) and an O(n) algorithm: the constant factors don't mean much. So, you can do all sorts of gymnastics with hashes and nonces and salts (and timestamps and sequence numbers and MACs), but you turn SSL on, and now the only thing that matters is SSL.

The "obstacle" mindset, also known as "defense in depth", is what gets us IPS, web application firewalls, and antivirus. None of these $50,000 products work. But they're defended by managers and purchasers and vendors as "another layer in a defense in depth strategy". What we need is software that works, with defenses that are clear and fundamentally sound. What we get, too often, is band-aids.

Again, I apologize, because I'm turning you into a straw man and I don't mean to. The word "obstacle" sets me off. It shouldn't, because obstacles pay for my consulting team and my development team. I should say, "more obstacles, please!" "Please, build another ActiveX control to implement an AES challenge-response protocol with a compiled-in key!" "Please, build another web filter for which every nonminimal UTF-8 encoding variant is another security advisory!"

Oh well. I'm old.

By the way, being somewhat close to the drama here, I want to note that the "new Threat Modeling" is a bit controversial, the terms are still up in the air, and if you want to learn more about security, you'd be far better off reading Ferguson and Schneier's "Practical Cryptography" and Dowd, McDonald, and Schuh's "The Art of Software Security Assessment".

Don't build obstacle courses. Design stuff that works.


So, you can do all sorts of gymnastics with hashes and nonces and salts (and timestamps and sequence numbers and MACs), but you turn SSL on, and now the only thing that matters is SSL.

Just as a side note: SSL libraries are big, ugly, and bug-prone. If you use SSL for user logins, your users' login information will be more secure... but your server will be less secure.


Wow. I think you should support that claim with evidence.


You want evidence for the fact that SSL libraries have bugs?

http://security.freebsd.org/advisories/FreeBSD-SA-07:08.open...
http://security.freebsd.org/advisories/FreeBSD-SA-06:23.open...
http://security.freebsd.org/advisories/FreeBSD-SA-06:19.open...
http://security.freebsd.org/advisories/FreeBSD-SA-05:21.open...
http://security.freebsd.org/advisories/FreeBSD-SA-04:05.open...
http://security.freebsd.org/advisories/FreeBSD-SA-03:18.open...
http://security.freebsd.org/advisories/FreeBSD-SA-03:06.open...
http://security.freebsd.org/advisories/FreeBSD-SA-03:02.open...
http://security.freebsd.org/advisories/FreeBSD-SA-02:33.open...
http://security.freebsd.org/advisories/FreeBSD-SA-01:51.open...

Is that enough?

OpenSSL and OpenSSH are tied, at 10 advisories each, as the pieces of software which have been responsible for the most FreeBSD Security Advisories -- outdoing Sendmail and BIND (7 advisories each), procfs (6 advisories, and removed from the default system configuration due to its poor record), and tcpdump and cvs (5 advisories each).


Of the 12 postings you provided, 6 have nothing to do with server security, 2 are dupes, and only two of the remainder date to after 2004. Thanks for making me do that research. I guess I deserve it.

The comparison to Sendmail? Pretty laughable. Why don't you work from the real list of Sendmail vulns, not the ones in your personal database?

Now, I'll respond: under what circumstances would you advise a prospective YC app developer to avoid SSL because of the risk of server vulnerabilities?


Of the 12 postings you provided ... 2 are dupes

I only posted 10 links, which is probably why you think there were 2 duplicates. :-)

The comparison to Sendmail? Pretty laughable. Why don't you work from the real list of Sendmail vulns, not the ones in your personal database?

FreeBSD security advisories were an easily available list of vulnerabilities which were assessed on the same basis. If I were going to use "the real list of Sendmail vuln[erabilities]" (whatever you consider that to be) then I'd also have to use a real list of OpenSSL vulnerabilities -- including those which didn't affect FreeBSD because we didn't ship those versions, and the "oops, last month's security patch was broken" vulnerabilities which didn't affect FreeBSD thanks to the fact that the FreeBSD security team proofreads vendor patches.

under what circumstances would you advise a prospective YC app developer to avoid SSL because of the risk of server vulnerabilities?

If they didn't care about the confidentiality or authenticity of data being transmitted, then I would advise them to not use SSL.

More importantly, if they were using SSL, I'd advise them of the increased risk and suggest additional layers of defence -- for instance, terminating HTTPS within a jail at a proxy which forwards requests in plaintext over a localhost connection.
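That layering might look something like the following reverse-proxy configuration (a hypothetical sketch in nginx syntax; the certificate paths and port number are made up): TLS terminates at the proxy, and the application server behind it never links against an SSL library at all.

```nginx
# Runs inside the jail; only this process ever touches the private key.
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/example.crt;   # hypothetical paths
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        # Forward in plaintext over loopback to the application,
        # so an SSL library bug can only compromise the jail.
        proxy_pass http://127.0.0.1:8080;
    }
}
```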

Of course, individual circumstances always vary, so it's hard to give any sort of blanket advice.


You have two advisories for the same 0.9.7l get-ciphers vulnerability. I have, as you've noticed, lost the ability to count. Yes, less than 40% of the evidence you provided survives a minute's scrutiny.

If you really think OpenSSL has a worse track record than Sendmail, assert it directly. I don't think you will.

I think you've just provided some spectacularly bad advice to web devs here, Colin.


You have two advisories for the same 0.9.7l get-ciphers vulnerability.

No, there's one advisory for the original vulnerability, and a second advisory for a new vulnerability which was added when OpenSSL shipped a broken patch (this one we didn't notice in time -- mea culpa).

If you really think OpenSSL has a worse track record than Sendmail, assert it directly. I don't think you will.

Overall? No -- Sendmail had a horrible track record in the past. Recently? Yes, I would say that OpenSSL has a worse track record than Sendmail over the past 4 years.

I think you've just provided some spectacularly bad advice to web devs here, Colin.

You're entitled to your opinion, of course, but I'd like to hear more details -- which bit specifically do you consider to be bad advice?



