Of course the password is in plaintext. Logins are done via HTTP, not via HTTPS. You know how there's no little yellow lock thingy in the bottom left corner of the window?
There are still ways to add a little extra security for non-ssl logins. One way is by hashing the password via javascript with a random number provided by the server before posting it via HTTP. (see http://pajhome.org.uk/crypt/md5/auth.html)
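The technique can be sketched end to end in a few lines. Here Python's hashlib stands in for the browser-side JavaScript, and MD5 is used only because the linked page uses it (it's a poor choice today). Note that this naive form requires the server to keep the plaintext password, a trade-off discussed further down the thread:

```python
import hashlib
import secrets

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Server: issue a fresh random challenge along with the login form.
challenge = secrets.token_hex(16)

# Client (JavaScript in the browser, simulated here): hash the
# password together with the challenge instead of posting it bare.
password = "hunter2"
client_response = md5_hex(challenge + password)

# Server: recompute from its copy of the password and compare.
# A replayed response fails because the next login gets a new challenge.
stored_password = "hunter2"
assert md5_hex(challenge + stored_password) == client_response
```

A passive sniffer sees only the challenge and the digest, so it has to mount a dictionary attack to recover the password; an active attacker, as argued elsewhere in this thread, can simply rewrite the JavaScript in transit.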
Security is nothing but an obstacle course. The only time security is not merely an obstacle course is when you completely destroy the thing you're trying to protect.
This is fundamentally something everyone needs to understand about computer security -- it's all about creating bigger and harder obstacles (including literal, physical obstacles). You can never absolutely secure something while it exists.
That's a fine argument for rejecting all of engineering. An asteroid could always strike the bridge we're building! Why bother making it structurally sound?
I'm not sure why you're treating my comment as an argument to reject security (or to reject anything other than your specific wording choice). I'm just saying that security, fundamentally, is about making obstacles.
There's nothing implied there regarding the importance/unimportance/quality of those obstacles. I don't think understanding its fundamental nature takes anything away from Security.
That's really not true. I think you're conflating cryptography with security. In crypto, I suppose you could consider algorithms that increase attacker cost as "obstacles", though I think the word loses meaning when the "obstacle" involves summoning more CPU cores than there are atoms in the solar system.
In practical security, closing a buffer overflow, sanitizing inputs, and proving code paths are not "obstacles". There are a finite number of vulnerabilities in any piece of code.
In practical security, closing a buffer overflow, sanitizing inputs, and proving code paths are not "obstacles".
I think you're being a bit picky over wording, an "obstacle" is just something which makes it harder for some to break your system, examples of which are closing buffer overflows and sanitizing inputs.
You could, possibly, use it as an argument against engineering, but I think you'd be wrong. The same as someone arguing "we're all going to die anyway so lets get it over with now" is wrong: it means that you have to make the most of what you do have.
This isn't just pickiness. This is two totally conflicting mindsets about security. I'll be ungenerous and say that mine, which rejects the concept of obstacle courses, is the practitioner's mindset.
We don't let things ship when we know they have exploitable vulnerabilities. We recognize that there are known unknowns and unknown unknowns, and we try to mitigate the former. But the known knowns? Come on. Just turn SSL on. The Javascript rewriting hack is not hard.
I'm not referring to cryptography, I'm referring to security in the general sense. Security is fundamentally about placing obstacles between your assets and those who you don't want to use/alter/control those assets.
When you design software, you have all sorts of assets you'd like to protect, even for the simplest cases. Administrative access, general data, CPU cycles, workflow, network bandwidth, and maybe even passwords, secrets, etc. Your job (with a security design hat on) is to place the firmest obstacles you can between those assets and those you don't want to use them (usually prioritized by a combination of probable risk, severity of loss, and cost to protect).
Thinking in terms of assets you need to protect is a highly recommended way to design securely, regardless of whether you're talking about military, bodyguards, buildings, software, cryptography, or systems. You then proceed to put up your best obstacles, and then shore up/rebuild/redesign those obstacles when they are known to be compromised or compromisable -- all the while ensuring that the system operates as efficiently as you can reasonably manage.
Fixing "a buffer overflow" is just a method for repairing the software obstacles you already had in place. Sanitizing inputs is, indeed, adding an obstacle for attackers. "Proving code paths" takes into account that your obstacles remain in place to protect your data with certain assumptions about the systems involved.
If you're interested in security engineering in general, I recommend picking up some of the latest (last 5-10 years) literature on Threat Modeling as it really can change one's perspective on what security is all about. A lot of engineers I've worked with think of it in terms of "using the best string library", when it's more about designing systems to protect your assets as best as you can and still get your job done.
There are all sorts of practices, tools, fixes, libraries, etc that help improve existing "bricks" in your obstacles, but they're absolutely no replacement for actually understanding what you're trying to protect and having a strategy for preventing those assets from being compromised. The best security comes from designing so that you never put your assets at risk in the first place, rather than from using the right libraries to, for example, manage them in transit.
Man, this sucks. This is a stupid semantic argument ("obstacle" versus "control" versus "constraint"), but it's also what's wrong with a lot of crappy security out there today.
An obstacle is something you overcome. I don't think you mean it that way, but that's what a lot of people think. So, for instance, a Javascript hashing scheme backed by a Greasemonkey script that tries to verify that passwords actually make it through the hash function. That's good, because it "adds obstacles". The security of the system is the sum of the value of all the obstacles.
No. The security of the system is inevitably the value of the 1-2 most important controls and constraints. Think of it like the difference between an O(log n) and an O(n) algorithm: the constant factors don't mean much. So, you can do all sorts of gymnastics with hashes and nonces and salts (and timestamps and sequence numbers and MACs), but you turn SSL on, and now the only thing that matters is SSL.
The "obstacle" mindset, also known as "defense in depth", is what gets us IPS, web application firewalls, and antivirus. None of these $50,000 products work. But they're defended by managers and purchasers and vendors as "another layer in a defense in depth strategy". What we need is software that works, with defenses that are clear and fundamentally sound. What we get, too often, is band-aids.
Again, I apologize, because I'm turning you into a straw man and I don't mean to. The word "obstacle" sets me off. It shouldn't, because obstacles pay for my consulting team and my development team. I should say, "more obstacles, please!" "Please, build another ActiveX control to implement an AES challenge-response protocol with a compiled-in key!" "Please, build another web filter for which every nonminimal UTF-8 encoding variant is another security advisory!"
Oh well. I'm old.
By the way, being somewhat close to the drama here, I want to note that the "new Threat Modeling" is a bit controversial, the terms are still up in the air, and if you want to learn more about security, you'd be far better off reading Ferguson's "Practical Cryptography" and McDonald, Dowd, and Schuh's "Art of Software Security Assessment".
Don't build obstacle courses. Design stuff that works.
So, you can do all sorts of gymnastics with hashes and nonces and salts (and timestamps and sequence numbers and MACs), but you turn SSL on, and now the only thing that matters is SSL.
Just as a side note: SSL libraries are big, ugly, and bug-prone. If you use SSL for user logins, your users' login information will be more secure... but your server will be less secure.
OpenSSL and OpenSSH are tied, at 10 advisories each, as the pieces of software which have been responsible for the most FreeBSD Security Advisories -- outdoing Sendmail and BIND (7 advisories each), procfs (6 advisories, and removed from the default system configuration due to its poor record), and tcpdump and cvs (5 advisories each).
Of the 12 postings you provided, 6 have nothing to do with server security, 2 are dupes, and only two of the remainder date to after 2004. Thanks for making me do that research. I guess I deserve it.
The comparison to Sendmail? Pretty laughable. Why don't you work from the real list of Sendmail vulns, not the ones in your personal database?
Now, I'll respond: under what circumstances would you advise a prospective YC app developer to avoid SSL because of the risk of server vulnerabilities?
I only posted 10 links, which is probably why you think there were 2 duplicates. :-)
The comparison to Sendmail? Pretty laughable. Why don't you work from the real list of Sendmail vulns, not the ones in your personal database?
FreeBSD security advisories were an easily available list of vulnerabilities which were assessed on the same basis. If I were going to use "the real list of Sendmail vuln[erabilities]" (whatever you consider that to be), then I'd also have to use a real list of OpenSSL vulnerabilities -- including those which didn't affect FreeBSD because we didn't ship those versions, and the "oops, last month's security patch was broken" vulnerabilities which didn't affect FreeBSD thanks to the fact that the FreeBSD security team proofreads vendor patches.
under what circumstances would you advise a prospective YC app developer to avoid SSL because of the risk of server vulnerabilities?
If they didn't care about the confidentiality or authenticity of data being transmitted, then I would advise them to not use SSL.
More importantly, if they were using SSL, I'd advise them of the increased risk and suggest additional layers of defence -- for instance, terminating HTTPS within a jail at a proxy which forwards requests in plaintext over a localhost connection.
Of course, individual circumstances always vary, so it's hard to give any sort of blanket advice.
You have two advisories for the same 0.9.7l get-ciphers vulnerability. I have, as you've noticed, lost the ability to count. Yes, less than 40% of the evidence you provided survives a minute's scrutiny.
If you really think OpenSSL has a worse track record than Sendmail, assert it directly. I don't think you will.
I think you've just provided some spectacularly bad advice to web devs here, Colin.
You have two advisories for the same 0.9.7l get-ciphers vulnerability.
No, there's one advisory for the original vulnerability, and a second advisory for a new vulnerability which was added when OpenSSL shipped a broken patch (this one we didn't notice in time -- mea culpa).
If you really think OpenSSL has a worse track record than Sendmail, assert it directly. I don't think you will.
Overall? No -- Sendmail had a horrible track record in the past. Recently? Yes, I would say that OpenSSL has a worse track record than Sendmail over the past 4 years.
I think you've just provided some spectacularly bad advice to web devs here, Colin.
You're entitled to your opinion, of course, but I'd like to hear more details -- which bit in specific do you consider was bad advice?
It adds a little. It means that a passive eavesdropper has to mount an offline dictionary attack before knowing the plaintext password. That's better than nothing.
No. Respectfully, you too have failed to think this through. Attackers with access to raw traffic can (and routinely do) change the traffic. Attackers will simply rewrite your hashing Javascript in transit.
That's an active attack. My statement was clear and correct. The number of people who can passively eavesdrop on traffic (eg, on open wifi) is much larger than those who can dynamically change traffic in transit.
There are no passive-only attackers. Attackers who can observe raw traffic can hijack it (if it's the '90s) or redirect it (if it's 5 years ago, and in Brazil, and money is involved --- just to make it specific).
Look, we're locked in an Internet message board death struggle and neither of us are going to concede anything, so let me just finish with this tangent:
If you tried to sell an app to a Fortune 1000 company that defended against passive-only attackers but left logins open to active attackers, and they contracted out a one-week, one-person web pen test to make sure your app was safe enough for peripheral customer data to go inside, you'd get dinged for this and you'd cut a dot release.
If, instead of cutting a dot release you explained why it was worth them moving ahead with a pilot that defended against passive attackers, you would Lose The Sale. Seen it happen.
I don't much care about your Hacker News password, but lots of you write applications, and I've seen some of the most unlikely (message boards, bug trackers, blogs -- err, content managers) wind up in security audit hell. My advice, take it or leave it: don't bother with these Javascript hash schemes.
As should have been clear, I wasn't talking about apps with Fortune 1000 company customer data. I certainly did not suggest the mild javascript-hashing technique would be appropriate for such situations. (So, your 90+ word tangent hypothesizing that I might try to sell such a thing is... obdurate? A strawman? Unfair?)
And, you seriously think there are "no" passive-only attackers? No people happy to merely scan or log traffic, not actively hijacking TCP sessions, but looking for info to exploit later? I suggest both the guy in the wifi cafe running a sniffer, and the NSA hardware in AT&T's room 641A, count as "passive-only attackers". Of course the javascript-hashing technique is only helpful against the former.
You are saying that if someone hashes or RSA-encrypts the password in JavaScript and then uses that value to exchange the password between server and client, it's still vulnerable, because someone can interfere with the traffic and change the JavaScript served to the user so that the password is sent in plaintext, and thereby steal the password?
Then why do Meebo and other sites practise this method without security problems?
That's not the argument here, though. The proposed solution --- and the one that Meebo uses, when you don't use their SSL login --- has a glaring security problem.
I was implying: how come they haven't run into security problems when they're practising an unsafe method? Why do they expose so many users to such danger without their knowledge?
Why doesn't it add security? It means the password isn't being transmitted in cleartext (it's using an irreversible hash). For the record, Yahoo! use that exact technique on their login page (though I'm not sure why as it's served over https).
UPDATE: I see from a post further down that you're talking about re-writing the JavaScript in transit. Fair point.
The downside to this is that it requires the server to store the password unencrypted and unhashed. The server must have access to the original password to hash with the random number for comparison. In my opinion, this wouldn't be an improvement in the overall security of the system.
Avoid sending a plaintext password by using HTTPS. It's the easiest way.
You store the password hashed with a salt in the database (just keep track of the salt you used). The server can send the salt to the client, in addition to the random number. So the client is performing two hashes: md5(md5(password+salt)+random_token).
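That two-level scheme can be sketched as follows (Python standing in for the client-side JavaScript; MD5 as in the comment above, though a stronger hash would be preferable). The point is that the server verifies against the stored salted hash and never needs the plaintext after registration:

```python
import hashlib
import secrets

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Registration: the server stores only the salted hash.
salt = "per-user-salt"
stored_hash = md5_hex("mypassword" + salt)

# Login: the server sends the salt plus a fresh random token.
token = secrets.token_hex(16)

# Client: the two nested hashes described above.
client_response = md5_hex(md5_hex("mypassword" + salt) + token)

# Server: recompute from the stored hash -- no plaintext needed.
assert md5_hex(stored_hash + token) == client_response
```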
However, this is a common security problem that doesn't get enough attention. I would guess the easiest way to crack anyone's bank account is to create some flimsy website that requires users to register. Chances are good they will use the same user/password combo that they would use for their bank. Or you could crack one of the thousands of existing login websites to get passwords, which would be a lot easier than hacking a bank's database.
If I'm writing a minor login website, I'll assign a random password to users. I don't want to be liable if one of my servers is hacked and someone's bank account gets accessed because of it.
What do people out there generally do about this problem? Do you take a similar approach or make additional efforts to secure your servers?
I use a hash. I don't quite know how secure that alone is, though. My thought is that if my server is compromised (and an endless stream of attempts leads me to believe that's not extremely unlikely), someone with both the scripts and the database could deduce the passwords. Am I wrong?
You are. One of the defining properties of a cryptographic hash is that you cannot easily deduce the input given the output, nor can you easily construct an input to produce a given output. (see the overview at http://en.wikipedia.org/wiki/Cryptographic_hash)
However, it's still possible to do a dictionary attack on the database of hashes ("Is the hash of 'password' the same as the user's password hash? Yes? Bingo!")
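A minimal illustration of such a dictionary attack against unsalted MD5 hashes (users, passwords, and wordlist all invented):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# A leaked table of unsalted password hashes.
leaked = {"alice": md5_hex("password"), "bob": md5_hex("k3&x!900a")}

# The attacker hashes a wordlist and looks for matching digests.
wordlist = ["123456", "password", "letmein"]
cracked = {user: guess
           for user, digest in leaked.items()
           for guess in wordlist
           if md5_hex(guess) == digest}

# alice's weak password falls immediately; bob's survives this wordlist.
```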
How is this relevant? The question of P ?= NP isn't limited to number-theoretic functions -- if P = NP then it is possible to find a preimage to any (polynomial time) hash function in polynomial time.
A hash is a one way thing so you can't reverse it. However, if they have that much access it'd be trivial to just change your script to dump the username/password combo into a plain text file whenever the login succeeds.
You can't have it both ways. If you are hashing the password before it is sent from the client to the server, you need the raw password on the server side.
(Assuming you are concatenating it with a server-provided random value before the hashing.)
You're assuming they're using a secure challenge response protocol. They're not. They're sending password-equivalent hashes over the wire, so they can say they're not sending passwords.
You'd be right to point out that this is a lot of silly acrobatics to go through to avoid a single SSL login page.
Users have a million sites where they have logins and passwords -- they can't have a million passwords, and even a pattern can be guessed. If the risk is low enough, it's OK not to use HTTPS. But some blurring would give positive results, as some hackers here have already commented.
One approach is to invent a mnemonic system for constructing secure passwords. The passwords should be systematically related to the specific site (including variables that are only known or relevant to you), but the pattern should not be obvious to anyone but yourself, and should be memorable enough that neither the passwords nor the method of constructing them needs to be explicitly recorded. This makes it possible to have a unique password for every site. As a further foil to anyone who might somehow crack the pattern, use different mental password construction algorithms for sites with different levels of perceived security importance (although then it is easier to get confused).
A million's a big number. As a User, I'd have to say that a hundred or so passwords sounds about right - and I live online 24/7. For most users - a dozen, tops.
Seriously, this isn't your bank account... what if someone breaks in and steals all your karma!
I'm surprised that the crowd here would even blink when hearing this. cperciva is right with his rhetorical question: this shouldn't be news to anyone, especially hackers.
Looks like the point of the post was more to not use the same password you use here as on other sites, and less to worry about your HN account getting hacked.
It's not like you have to remember them...
I just use a script that based on a site name/keyword I give it, generates a password for me. Then the browser remembers it.
Of course that means if anyone steals my laptop they have all my logins, but that's extremely unlikely.
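A derivation script of that sort might look like the sketch below; the function name and the SHA-256/base64 choices are illustrative assumptions, not necessarily what the commenter uses:

```python
import base64
import hashlib

def site_password(master: str, site: str, length: int = 12) -> str:
    """Derive a deterministic per-site password from a master secret."""
    digest = hashlib.sha256((master + ":" + site).encode()).digest()
    return base64.b64encode(digest).decode()[:length]

# Same inputs always regenerate the same password, so nothing needs
# to be stored; different sites get unrelated passwords.
pw = site_password("my-master-secret", "news.ycombinator.com")
```

The obvious caveat is the one conceded above: anyone who learns the master secret (or steals the laptop with the browser's saved copies) gets every derived password.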
"It's not like you have to remember them... I just use a script that based on a site name/keyword I give it, generates a password for me. Then the browser remembers it.
Of course that means if anyone steals my laptop they have all my logins."
I call BS on that. Fair enough for websites that use your banking or credit card information, but for the rest? I don't think so.
For irrelevant sites such as Hacker News, Reddit, whatever other minor web 2 site you can think of, you should ALWAYS use the same password. Why fart around with a ridiculous number of passwords for websites that are nothing more than minor daily distractions?
100% agree. Using "password" as your password will allow you to sign up for twice as many web 2.0 sites in the same amount of time. As an added bonus, you'll always remember your password at the 10% of sites you return to a second time.
Start here: you forgot gmail. With your gmail account, I can get most of what you do with money online.
You're also ignoring reputation. Breaking your Facebook account may not be a huge win by itself, but if I can use it to get your 50 Facebook friends to load a Javascript worm, I might get 40-50% of their bank passwords. Look what happened with Orkut a few months back.
I use a password manager, and auto-generate a new 12-char pwd for every site I register with. Of course, if anyone would like to hack into my bank info and assume my debt, I will be more than happy to oblige. ;)
I don't know how much more load SSL would cause, but a good solution would be to just use HTTPS for the login and then redirect back to HTTP, like Twitter does.
I wish I could say I was surprised, but I'm not. If you look at websites that don't involve transferring money, you'll find a lot of passwords transferred in plain text. It should not be so, but it is.
I use the same password here that I use on other sites which aren't very critical and which wouldn't really do me any harm if my account were cracked.
Sigh... the correct solution for this problem is SRP (see RFC 2945), which provides secure transfer of password data and can be (and has been) implemented in javascript.
However, while that solution will prevent packet sniffing, if you want to prevent phishing attacks you still need to use SSL.
You can't implement SRP securely in Javascript, as I've repeated ad nauseam here -- attackers redirect traffic, they don't just sniff it, and when they do that, they can trivially rewrite the JS that delivers the SRP code.
Beyond that though, if you're going to implement a crypto protocol, don't make it SRP. It is much harder to do SRP securely than it is to do simple message digests.
Well if you're going to implement a crypto protocol over just using an off-the-shelf solution you're probably screwed anyway[1]. SRP over SSL is secure as far as I know, and SRP is certainly better than most of the other home-brew solutions proposed here.
[1] I've read your blog before and know that I don't need to tell you this, but the general community might :-)
Digest access authentication is an immediate and halfway decent fix.
At one point I thought this was a big enough problem to write a greasemonkey script hashing passwords clientside for every website. But it just isn't a big interest of mine.
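For reference, the digest response itself is just a pair of nested MD5s; this sketch shows the simplified RFC 2069 form (no qop/cnonce), computed in Python rather than at the HTTP layer, with invented example values:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    # HA1 covers the credentials, HA2 covers the request; the server's
    # nonce binds the response to this particular challenge, so the
    # password itself never crosses the wire.
    ha1 = md5_hex(f"{user}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

resp = digest_response("mufasa", "testrealm@example.com", "circle-of-life",
                       "GET", "/dir/index.html", "dcd98b7102dd2f0e")
```

Like the Javascript schemes above, this keeps the password off the wire but does nothing against an active man-in-the-middle, which is why it's only a "halfway decent" fix.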
Before you consider converting your own web app to HTTP auth instead of login forms, be aware of the fact that several of the top security firms will demerit your app for doing it, and that will hurt you selling to companies.
I don't totally agree with this (HTTP digest auth, while silly, is still better than the crazy Javascript hashing schemes), but the logic is, it is difficult to "log out" and manage sessions with HTTP auth.
be aware of the fact that several of the top security firms will demerit your app for doing it
Speaking as FreeBSD Security Officer: Several of the top security firms provide ratings which reflect the quality of their checklists more than the quality of the security in the systems they're assessing. The FreeBSD security team recently dealt with a case of "if you don't fix this, people using FreeBSD will lose marks on PCI audits" -- and our answer was "screw the auditors, this isn't a security issue and we're not going to send out a bogus security advisory just to keep some idiotic auditors happy".
First, it's interesting to see where you draw the line, Colin. As FBSD Security Officer, you also attempted to have the OS disable Hyperthreading, ostensibly to eliminate localhost timing channels. That was a change with user-perceptible impact and minimal security benefit.
Second, separate PCI auditors out from security audits (though Hacker News readers should be familiar with both). I agree with your sentiment regarding PCI auditors. (PCI, for those who don't know, is the Payment Card Industry standard you get audited against if your application accepts credit cards).
Third: like it or not, when you get dinged on a report like that, you can lose a sale. "Hah-hah!", Colin says. "Screw the auditors!" But Colin, and FreeBSD at large, loses nothing from failing an audit. Want some horror stories about people who do lose?
There was a recent "security issue" reported to us whereby an attacker who could specify a printf format string could cause a buffer overflow. We don't consider this to be a security issue since if you're allowing an attacker to specify a printf format string, you've got much bigger problems already -- this "issue" doesn't make things any worse.
Note that there are operating systems that have scrubbed their format string support, and, as a result, applications with format-string vulnerabilities have not been exploitable.
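C's %n is what turns a format-string bug from a read into a write, and Python has no equivalent; but the underlying bug class -- letting an attacker control the format string -- has a direct Python analogue on the read side, since str.format fields can walk attributes into module globals (all names here are invented for illustration):

```python
SECRET_KEY = "s3cr3t"  # a module-level secret

class User:
    def __init__(self, name):
        self.name = name

user = User("alice")

# Safe: the format string is fixed by the programmer.
greeting = "hello {0.name}".format(user)

# Unsafe: the format string comes from the attacker, who walks from
# the bound method to its function's globals and reads SECRET_KEY.
# A leak rather than a write, but the lesson is the same: never
# format with attacker-supplied strings.
attacker_fmt = "{0.__init__.__globals__[SECRET_KEY]}"
leaked = attacker_fmt.format(user)
```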
I'd elaborate, but I don't know the context of the finding you were dealing with. If it was, "FreeBSD needs to get rid of the %n token", the auditor was right, and you were wrong. I'd be surprised if it was that simple, though.
It wasn't that simple; but in any event, do you seriously think that FreeBSD should stop supporting a feature which (a) people have been using for two decades, and (b) is required by POSIX?
If I was going to make FreeBSD more secure by removing features, I'd start by removing the boot loader -- which would render the system utterly useless, yet very secure. If someone wants to shoot themselves in the foot by using a feature incorrectly, we're not going to stop him -- but we will do our best to make sure that the gun doesn't explode if someone looks at it oddly. (I think the X11 people called this the "tools, not policy" approach.)
Yeah that one's pretty easy to answer. Of course you should get rid of obscure printf(3) format string features, used in fewer places in your codebase than you have fingers on your right hand, if it means saving your users from a similar number of vulnerabilities.
Again: unless there are two Colin Percivals working on FreeBSD, you tried to remove Hyperthreading from FreeBSD to "save" your users from a crypto timing channel attack that has never been seen in the wild. Reconcile that with standing fast on format string apocrypha of the sort uncovered by "Month of XXX Bugs" fuzzers.
Are you sure you're still speaking as FreeBSD Security Officer? I may quote you on some of this in the future.
obscure printf(3) format string features, used in fewer places in your codebase than you have fingers on your right hand
I just did a quick grep of /usr/src and counted 17 places where %n was used in a format string -- fairly equally split between printf and scanf. I don't know what sort of mutant you think I am, but I don't have that many fingers on my right hand.
And that's not even counting all of the 3rd party code which gets run on FreeBSD (including the 18000+ programs in the ports tree) which make the perfectly reasonable assumption that FreeBSD's printf(3) conforms to POSIX requirements.
you tried to remove Hyperthreading from FreeBSD
No, I didn't. I turned it off by default. There's a big difference.
Are you sure you're still speaking as FreeBSD Security Officer? I may quote you on some of this in the future.
Here's a quote for you: As FreeBSD Security Officer, I do not believe that POSIX-mandated features should be removed in an attempt to make foot-shooting harder. The printf and scanf family of functions are dangerous, and should NEVER be called with a format string provided by a potential attacker or constructed from data provided by a potential attacker.
After I worked (very briefly with the project, more extensively out of it) on FreeBSD, I played a minor part in the OpenBSD audit, back when it was mostly Theo and bitblt. You're the FreeBSD Security Officer, you should know all about that.
OpenBSD made vast, sweeping changes to their code to minimize and mitigate security problems. Have you read privsep SSH?
How much third-party code links to your unsafe libc? How could you know? You think it's more sound to rely on every undereducated third-party developer to make the right choices about using your libraries, than to simply scrub out the seventeen (17. really.) places in your own code that use an apocryphal and dangerous printf feature, so you can eliminate it?
You're way out of step with the rest of your peer group, which appears to have learned not to trust random developers to use C libraries safely.
I love that argument. "Bad programming? Use good programming. Not our fault." By that logic, there's no security benefit to writing web apps in Python over C.
simply scrub out the seventeen (17. really.) places in your own code that use an apocryphal and dangerous printf feature, so you can eliminate it?
Go back and read what I wrote. There are 17 places in the FreeBSD base system where %n is used in a format string. I have no clue how many times it's used in code in the FreeBSD ports tree, or in 3rd party code which isn't in the ports tree -- and I'm not going to go and break lots of perfectly good code just because someone might shoot themselves in the foot.
Yes, you've definitely made it clear that you don't think it's your problem. Maybe if you just turn "%n" off by default. That's not the same thing as breaking the code, is it?
Anyways, this is a tangent. It's amusing that you can stick up (in some sense) for clientside Javascript security, which is at least 0.0001% more secure than plaintext, but at the same time conduct protracted arguments on the mailing lists about why CPU features should be turned off, lest someone ever figure out a way to make an attack you helped research become feasible.
DAA isn't intended to solve the same problem as SSL. SSL also isn't always worth the cost of the certificate, depending on the information being guarded; nor is it absolutely secure [http://eprint.iacr.org/2004/111.pdf, May 2004].
I don't understand the all-or-none attitude that appears to be prevalent in the security industry. For all the discussion spent on search spaces, and maximizing attack effort, there doesn't seem to be much acceptance for "good enough" security. There appears to be this attitude that -- to use wireless security as one example -- "WEP is broken, so you might as well just run a totally open network."
I see your point when you say that anybody who can sniff traffic for stupid JavaScript hashes can also necessarily inject traffic to alter the negotiation between the host and the client. While in theory that's correct, in practice it ignores the difference in the number of people who know enough to execute either attack.
It makes sense to say, "If you're going to do password encryption, you might as well do it right", but I don't think it follows that even a stupid JavaScript MD5 hashing scheme is as insecure as plain-text password transmission.
The paper you linked to, which I just got around to looking at, has no bearing whatsoever on the security of SSL for password security. It was also never, so far as I can tell, published.
Maybe I'm not thinking this completely through here, but to do that, wouldn't you have to store your users' passwords in plaintext to verify the hashes? That's just replacing one problem with another.
If you use the salt that you're using in your database as the token to concatenate instead of a random token, an attacker will only be able to use the data they capture to login on your site as opposed to all the sites the user has used the same password on. That sounds like it could work, but it might have security implications I haven't thought through yet.
No. The server stores salted hashes, and serves the salt and a nonce as part of the login page. The client then submits hash(nonce + hash(salt + pass)).
This protects against both replay attacks and rainbow tables.
The salt is generated on the server; it's always the same for a given user (and possibly for all users). You concatenate it with the password in some way. The utility is this: if a user's password is mypassword, it will be hashed as, say, mypasswordsalt. Someone with a "rainbow table" of hashes and their corresponding cleartexts would normally discover quickly that the hash's cleartext is mypassword, but the table will fail to match the salted hash, since there's no entry for mypasswordsalt. This is how rainbow tables are made useless should an attacker get access to the password hashes in the database.
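The salting trick above can be sketched in a few lines of Python. This is a minimal illustration, not production code (MD5 matches the thread's examples, but a slow password hash like bcrypt would be used in practice); the password and salt values are the ones from the example:

```python
import hashlib

def md5_hex(s):
    """Hex MD5 digest of a string (MD5 chosen to match the thread's examples)."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

password = "mypassword"
salt = "salt"

unsalted = md5_hex(password)        # what a precomputed rainbow table targets
salted = md5_hex(password + salt)   # the "mypasswordsalt" hash from the example

# A rainbow table is, in effect, a precomputed hash -> cleartext mapping.
# It contains md5("mypassword") but not md5("mypasswordsalt").
rainbow_table = {md5_hex(p): p for p in ["password", "mypassword", "letmein"]}

print(rainbow_table.get(unsalted))  # "mypassword" -- table hit
print(rainbow_table.get(salted))    # None -- the salt defeats the lookup
```

The point is only that the salt pushes the stored hash outside the attacker's precomputed table; it does nothing against an attacker willing to brute-force that one user's hash with the salt in hand.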
I never used the nounce technique and can only make guesses so I'll refrain from trying to explain that part.
This is going to make me sound like even more of an asshole, but I'm going to say it anyways because it is true: if you have to explain to yourself what a "salt" is, or you can't spell "nonce", you shouldn't be designing security systems. That doesn't mean your app needs to be insecure; it just means you should be using someone else's authentication system to do it.
"if you have to explain to yourself what a "salt" is, or you can't spell "nonce", you shouldn't be designing security systems."
No, I don't have to explain to myself what a salt is, though my "Here's how I understand it" introduction probably misled you into thinking that. The only reason I said "Here's how I understand it" is that initially I wanted to explain both what a salt is (a notion I certainly understand because I read about it and implemented it) and what a nonce is (a notion I probably don't understand because I didn't really read about it and didn't implement it).
I know I still have much to learn, but at least I know what I don't know with respect to nonces, and I find it pretty lame that you're saying I shouldn't "design security systems" just because I can't spell a word properly for a technique I just admitted I "never used and can only make guesses [about]".
I've been working like crazy for the last 2 years to learn everything about Common Lisp and making websites and you're saying I should give up everything just because I still have things to learn?!
You can't judge someone's skills just by looking at a data point like that. The concepts of closures and macros are now completely automatic and obvious to me, but I wouldn't call someone who has never heard of them, or has just a basic understanding of them, "someone who should never program". Of course I'd point it out if they said they had a firm grasp of them while it was obvious that they didn't.
I'm not sure why I'm meant to care how hard you worked over the last 2 years to learn Common Lisp.
A huge fraction of the security breaks over the past 15 years --- which cost us billions and billions of dollars --- are traceable to the mindset that says that figuring out security is just like figuring out how to scale a database: "you try and try and try until you get it right". Well, no.
I don't have an authentication system to sell you, but someone else does, and you should use it before you try to build one yourself.
This is hacker news. I think that we are all here to learn. We all have different levels of expertise as well as areas of interest. I'm personally not working a full time coding job yet because I'm still in school and I still have quite a bit that I want to learn just hacking around on my own smaller projects.
The fact that the concept of a salt isn't totally automatic to him or that he misspelled nonce only means that he shouldn't be designing security systems right now. There is a lot that we can all learn, some of us just have farther to go than others.
Using a different salt for each user presents an issue: the salt isn't known until the user name is known. For a web application, this would require a two-stage login form - one form asking for the user name and a second asking for the password. Such an arrangement would be quite unfriendly towards users. Fortunately, there is a simple alternative. The salt is generated by concatenating the user name with a "system salt". The system salt is the same for all users on one system.
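A sketch of that alternative, assuming a hypothetical fixed system salt and the thread's hash(salt + password) scheme (MD5 to match the thread's examples; a slow password hash would be used in practice):

```python
import hashlib

SYSTEM_SALT = "example-system-salt"  # hypothetical value, identical for every user

def user_salt(username):
    # Per-user salt with no extra round trip: the browser can derive it
    # from the user name alone, since the system salt is fixed.
    return username + SYSTEM_SALT

def stored_hash(username, password):
    # What the server keeps: hash(salt + password).
    salted = user_salt(username) + password
    return hashlib.md5(salted.encode("utf-8")).hexdigest()

# Two users with the same password still get different hashes,
# so one rainbow table can't crack the whole user database at once.
print(stored_hash("alice", "mypassword") != stored_hash("bob", "mypassword"))
```

The trade-off is that the system salt is a single shared value; if it leaks along with the database, an attacker can rebuild each user's salt from the user name.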
Here is how it generally works. This method provides a salt along with a nonce. In general, only the actual user knows the password.
Start:
User registers for a service and submits a username + password.
The password is salted and hashed in the browser and then sent to the server over a secure connection, along with other info like the username.
Then, when somebody wants to log in:
The server sends a random nonce along with the initial login page, and keeps track of which nonce it issued for each pending request.
Then the user enters username + password into the browser. The browser computes hash(nonce + hash(salt + password)) and sends the result to the server along with the username.
The server looks up the password hash associated with the username and compares hash(nonce + stored-hash) against the value sent by the browser. If they are equal, the user is authenticated and interaction proceeds as normal.
Note that only the user actually knows the password. Even the server only has the hash of a salted version of the password. Next, even this salted hash is never sent directly over the network. A decent implementation will use a fresh random nonce for each login, so that an attacker who sniffs one exchange can't simply replay it.
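The whole exchange above fits in a few lines. This is a toy sketch of the protocol as described, with both sides collapsed into one script (MD5 to match the thread; the salt and password values are invented for illustration):

```python
import hashlib
import secrets

def md5_hex(s):
    return hashlib.md5(s.encode("utf-8")).hexdigest()

# --- registration (sent over a secure connection) ---
salt = "persalt"                          # fixed per-user salt
stored = md5_hex(salt + "mypassword")     # the server keeps only this salted hash

# --- login ---
# Server: issue a fresh random nonce with the login page.
nonce = secrets.token_hex(16)

# Client (browser): recompute the salted hash from the typed password,
# then mix in the nonce before sending. The salted hash itself never
# crosses the wire, and a sniffed response is useless once the nonce changes.
client_response = md5_hex(nonce + md5_hex(salt + "mypassword"))

# Server: compare hash(nonce + stored-hash) against what the client sent.
if md5_hex(nonce + stored) == client_response:
    print("authenticated")
```

Note what this does and doesn't buy you: a passive sniffer learns neither the password nor the stored hash, but an attacker who can rewrite the page in transit can simply strip the JavaScript, which is the crux of the argument downthread.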
Either attackers have access to the raw traffic or they don't. If they do, and you don't use SSL with CA-anchored keys, attackers will rewrite the Javascript in transit, and your scheme provides no additional security.
If attackers don't have access to raw traffic, sending over the plaintext password is just fine, because attackers don't have access to raw traffic.
I'm sorry, but security isn't an obstacle course. For evidence, go research what percentage of online bank transactions in Brazil were fraudulent in 2007. There is no clever solution to this problem.
Send passwords over SSL, or don't care about the password you send.
Either attackers have access to the raw traffic or they don't. If they do, and you don't use SSL with CA-anchored keys, attackers will rewrite the Javascript in transit, and your scheme provides no additional security.
No. It's entirely possible for an attacker to have access to your data (say, because they compromise a system which has been dumping packets for offline analysis) without being able to rewrite packets in transit.
I'm not saying that the suggested mechanism is a good idea, but let's be honest in our security analyses, ok?
Colin, attackers aren't sniffing SprintNet with sunsniff to get access to traffic anymore. They're knocking over one of the 15-odd points where they can substitute their IP for COMCAST.NET.
Personally, I think you do a grave disservice to developers by dignifying the notion that "passive-only attacks" are a reasonable threat model.
I didn't say that "passive-only attacks" were a reasonable threat model. In fact, I explicitly stated that I was not saying that the suggested mechanism was a good idea.
But even if only 0.0001% of attackers are limited to passive-only attacks (and I suspect the actual value is higher -- more like 0.1%) then the suggested mechanism is 0.0001% more secure than transmitting the password in plaintext -- which invalidates your assertion that it "provides no additional security".
No, because you're ignoring the fact that one of the 15-odd places that an attacker can bust up to redirect traffic to their own servers is the "observe all packets" vantage point, which allows them to predict DNS XIDs and source ports.
I know you're smarter than this, Colin. I think you're being pedantic. Would you advise anyone on this message board any differently than me? I think you already said "no".
Would you advise anyone on this message board any differently than me? I think you already said "no".
There are two parts to giving advice: Helping people with their immediate question, and helping people better understand the field in question so that they won't need your advice the next time. You told people that what they were talking about doing was a bad idea, which I agree with; but the explanation you gave was misleading.
I agree with your recommendation of "don't do that"; but I think it's important for people to understand WHY they shouldn't do that -- and claiming that it's "no more secure" rather than explaining that it's very slightly more secure obfuscates rather than elucidates.
Banks in Brazil routinely issue each customer with a duress PIN to mitigate the situation of a customer being frogmarched to an ATM at gunpoint. This is in addition to ATMs being unavailable in the early hours, due to the majority of transactions then being theft at gunpoint.
Initial implementations of the duress PIN displayed an identical withdrawal limit and led to customers being shot. Subsequent implementations provide a reduced limit.
I discovered this after lending a trifling amount to a Brazilian ex-colleague in the UK. He had retained his Brazilian bank account and, to repay me, had to calculate the timezone differences for ATM use.
I don't understand why this guy is getting downmodded... I think it's perfectly reasonable to ask a question without incurring a penalty, regardless of how much you disagree with it.
@almost: an article can be stupid or smart; I didn't put it at no. 1 - the users did. If you think that his question is stupid, and the article is stupid, and everything including the Super Bowl is stupid, then maybe this just isn't a match for you... take it easy.
Is this really news to anyone?