Both are terrible for privacy so it comes down to which one has a nicer screen now. :(
I'd rather have Google check an Apple phone attestation than have Google check a Google phone attestation, and vice versa, though, because you can assume each company is trying to keep as much information private to themselves instead of giving it to the other. Google is probably just getting "yes it's an Apple phone" and some kind of temporary token, instead of my IMEI, IMSI, phone number, all signed in accounts, biometrics and so on.
The camera isn't the part doing that verification; the Google service serving that "reCAPTCHA" is. Unless you're using a custom browser that reports a different domain to Google than the one requesting the reCAPTCHA, Google's service will know which domain is which.
It would be generated by some other website like Amazon. Because I own, say, Meta, I copy these Amazon-generated codes over to Meta, make people scan them on their phones to sign into Meta and then pass the solution back to Amazon so my bots can sign into Amazon.
We don't yet know how the client side works, perhaps there will be a decompilation posted soon.
It's possible this scenario is acceptable to them because it means they can still tie your access to something that's easier to ban without requiring a full account login.
What are you implying? That it will become ineffective due to that?
That's possible... and they might change their mind if so, we will see.
I feel like it's a similar issue to when scrapers pretend to be an allowed-origin webpage in order to abuse "public" API keys for web services.
They could also require the mobile device to interact with the requesting webpage in some manner, similar to mutual PIN/codes for Bluetooth/TV pairing these days. That way bulk sharing of the codes would still require active participation from the device that requested it in the first place, likely with a short time limit.
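A minimal sketch of the idea above (all names and the token format are hypothetical, not any real product's API): the server binds each code to the requesting page's origin and a short expiry via an HMAC, so a code copied over to a different site, or replayed later, fails verification.

```python
# Toy sketch: bind a phone-scanned code to the requesting origin plus a short
# TTL, so bulk-copied codes stop working on any other site or after expiry.
import hashlib
import hmac

SERVER_SECRET = b"demo-only-secret"  # hypothetical server-side key
TTL_SECONDS = 60

def mint_code(origin: str, issued_at: float) -> str:
    """Server mints a code tied to the page's origin and issue time."""
    msg = f"{origin}|{int(issued_at)}".encode()
    tag = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()[:16]
    return f"{int(issued_at)}.{tag}"

def verify_code(code: str, origin: str, now: float) -> bool:
    """Server checks the code was minted for this origin and hasn't expired."""
    issued_str, tag = code.split(".")
    expected = hmac.new(SERVER_SECRET, f"{origin}|{issued_str}".encode(),
                        hashlib.sha256).hexdigest()[:16]
    fresh = now - int(issued_str) <= TTL_SECONDS
    return hmac.compare_digest(tag, expected) and fresh

code = mint_code("https://amazon.example", 1000.0)
assert verify_code(code, "https://amazon.example", 1030.0)       # legit use
assert not verify_code(code, "https://meta.example", 1030.0)     # copied over
assert not verify_code(code, "https://amazon.example", 2000.0)   # too late
```

This only stops passive copying; the mutual-PIN step described above would additionally force the original device to participate interactively.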
Do people expect that Instagram can't read their Instagram private messages? I don't think people expect that. And E2EE is not nearly as cheap as the HN crowd likes to pretend—how do those devices get those keys if not through a central service? Especially if one of them is a web browser?
We're like 20 years past that, at the very least 10 years since Snowden.
The people have spoken, caring about your communications not being spied on puts you in the minority. I mean like, just look at social media. You have tons of people who not only don't care about being spied on, they actively document everything for more views.
The answer to almost every question you're asking is just "public key cryptography". It's kind of disheartening to me that such basic 1990s tech as implemented by Phil Zimmermann is now obscure enough to merit questions like this.
Both parties exchange public keys through the central service. Only the possessor of the corresponding private key (on device, in a Secure Enclave ideally) can decrypt the messages encrypted to the public key. The process can also work in reverse: the private key produces a value that anyone holding the public key can verify came from it; this is called "signing".
And how does one verify that the public key received belongs to the intended party, rather than a mitm?
If the answer is blind trust in a third party that runs the messaging service then I suspect that you can guess what the people asking those questions are really asking.
Diffie-Hellman-Merkle key exchange is vulnerable to attacker-in-the-middle attacks.
Eve could just do key negotiation with Alice and separately do key negotiation with Bob. You have to use a slightly more complicated cryptographic protocol to avoid this issue.
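The attack described above can be demonstrated directly: with unauthenticated Diffie-Hellman, Eve runs one exchange with Alice and a separate one with Bob, and ends up able to read (and re-encrypt) both directions. This is a toy sketch; real deployments use vetted groups and, crucially, authenticate the public values.

```python
# Toy demo of an unauthenticated Diffie-Hellman MITM.
import secrets

P = 2**127 - 1   # a Mersenne prime, fine for a demo; use vetted groups for real
G = 3            # demo base

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()      # Alice
b_priv, b_pub = keypair()      # Bob
e1_priv, e1_pub = keypair()    # Eve, facing Alice
e2_priv, e2_pub = keypair()    # Eve, facing Bob

# Eve intercepts each side's public value and substitutes her own.
alice_key = pow(e1_pub, a_priv, P)   # Alice thinks she shares this with Bob
bob_key = pow(e2_pub, b_priv, P)     # Bob thinks he shares this with Alice
eve_alice = pow(a_pub, e1_priv, P)
eve_bob = pow(b_pub, e2_priv, P)

assert alice_key == eve_alice and bob_key == eve_bob  # Eve reads both sides
```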
How would the keys get stored in the user's private browsing window? Do they lose all chat history when they log in on a private browsing window and then close it?
I don't know the technical details of that for sure, but I think the answer is that keys and chat history are stored on-device only; for example you lose your WhatsApp history if you don't restore a backup when moving to a new phone.
If a messaging app is showing you message history in a private browsing window then perhaps the encryption key for that history is derived from your password or something like that; that can be done locally so that all the server ever sees is encrypted data.
What if you log into the app and then log out of the app and then log into the app again? Should you be able to see your messages?
E2EE is a fail-secure design: in case of any doubt, it deletes your private messages. Applied to this case, I don't think the upside of Facebook pretending they don't have a copy of all of them outweighs the downside of constantly losing all your messages.
Are you asking for technical details about E2EE in messaging apps, or simply making the point that you don't like it? If you don't like it, then fine, you do you, however I would point out that we all accept some inconvenience in our lives as a trade off for improved security; the lock on my front door is inconvenient but I'd rather have it than not.
As to whether or not Meta have been lying about it: that would be on-brand for them, but then what are they turning off, if so? Or maybe the whole thing is theatre and I'd be better off disconnecting from the internet altogether? I don't see the value in speculating about that.
Not being able to receive messages except on one device isn't a minor inconvenience.
To fix this, you either need to authorize each device (and web browser) from another device that's logged in, or the central authority holds your keys.
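The first option above, authorizing a new device from one that's already logged in, amounts to cross-signing: the trusted device signs the new device's public key with the account's identity key, so contacts who already trust the identity key can extend that trust. A toy sketch with made-up names and textbook-RSA tiny primes (real apps use proper signature schemes):

```python
# Toy cross-signing sketch: the already-logged-in device attests to a new
# device's public key. Tiny textbook-RSA parameters, for illustration only.
import hashlib

p, q, e = 61, 53, 17           # toy identity keypair parameters
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # private part, held only on the old device

def sign(data: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(digest, d, n)

def verify(data: bytes, sig: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == digest

new_device_pubkey = b"new-laptop-public-key-bytes"   # placeholder bytes
attestation = sign(new_device_pubkey)                # done on the old device
assert verify(new_device_pubkey, attestation)        # done by your contacts
assert not verify(b"attacker-public-key-bytes", attestation)
```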
I run WhatsApp concurrently on two phones and receive all messages on both devices. But generally speaking this is where we disagree - requiring all devices to be authorised by me is a feature, not a bug, as far as I'm concerned.
> And how does one verify that the public key received belongs to the intended party, rather than a mitm?
Fingerprints. Again, this is like Crypto 101. Not saying that as a personal attack of any kind, I just remain incredulous that what used to be entry level knowledge in “our thing” has evidently become so obscure.
You shouldn't be talking down like this, you're wrong about it. Alice and Bob need to exchange keys beforehand in some trusted out-of-band way. There's no protocol that solves this if Eve can be in the middle. I'm not sure what you mean by fingerprints, but if you describe a protocol, I can describe the mitm attack.
Bob and Alice are setting up their e2e channel, and because they have some extra level of concern about snooping, they telephone each other and read off some form of hash of the public key to each other.
A more complex variant would be something like what PGP implemented, where Bob and Alice could both sign each other's keys after this exchange, ensuring that someone who hadn't met Bob but did trust Alice could inherit trust in Bob's Alice-signed key.
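The phone-call check described above boils down to each side hashing the public key it received and comparing short groups of the digest out of band. A minimal sketch (format and key bytes are made up; real apps like WhatsApp render this as "safety numbers"):

```python
# Sketch of a key fingerprint for out-of-band comparison over the phone.
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """Short, human-readable digest of a public key."""
    h = hashlib.sha256(pubkey_bytes).hexdigest()[:20]
    return " ".join(h[i:i + 4] for i in range(0, 20, 4))

alice_sees = fingerprint(b"bob-public-key")   # what Alice actually received
bob_reads = fingerprint(b"bob-public-key")    # what Bob reads off his device
assert alice_sees == bob_reads                # match: no substitution happened

mitm_sees = fingerprint(b"eve-public-key")    # Eve swapped in her own key
assert mitm_sees != alice_sees                # mismatch exposes the MITM
```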
You’ve stated unequivocally that I’m wrong, so now, please show your homework.
This is a very frustrating exchange. You guys are saying the same thing. For key exchange to be secure against an attacker who can MITM the channel you're securing, either the public keys or at least their respective fingerprints need to be exchanged out of band, over some channel the same attacker cannot also MITM. For a sophisticated enough targeted attack, a telephone isn't that.
The way military radios handle this is hardware key loaders that have seeds pre-synced in factory, in person. Every day in the field, a unit comms person takes the key loader and loads new keys onto everyone's radios. The key loaders themselves are reseeded and resynced during maintenance periods between campaigns or exercises. They're physically accounted for on every movement and twice a day when not moving, and if they ever can't be found, all messages from any device they loaded keys onto are considered compromised.
Anyone trying to overthrow a government or run a criminal empire or whatever is going to have to take measures at least this drastic. Or quit LARPing and accept that nation state attackers can probably slide into your Instagram DMs, which are probably being sent to people you don't know, and if they're hot and actually answering you, 90% chance they're a honeypot anyway.
Web of trust or centralized trust are the main answers here.
Compromise of the secret key is a whole other issue - revocation.
MITM of a key can be solved pretty well via web of trust techniques.
Apologies if the dialog is frustrating to read! As a “recovering cypherpunk”, I find these sorts of discussions animating, as long as they’re polite and technically focused! Much love!
No, it's not at all this simple. This is why so many "e2ee" apps like Telegram are bogus, they ended up prioritizing UX over security because there are many places where you can't pick both.
Webs of trust based on OOB key verification and signing, or centralized trust authorities are the two primary models I’m aware of.
I’ve always been enamored of the idea of DNS as a back end protocol to enable the former largely decentralized solution.
Bob looks up Alice and receives her key from Alice’s namespace within the DNS hierarchy, along with her trust claims. David then looks up Alice’s key within her namespace, sees a reference to endorsement by Bob, and can validate this by querying Bob’s namespace. David can also issue non-authoritative queries about Alice’s key to Bob’s DNS servers, ensuring that there is no mismatch between the query response received by Bob and the one received by David.
If Mallory manages to compromise Alice’s DNS, but not Bob’s, the result is a mismatch in query responses that both Bob and David can thus detect.
At scale, a MITM compromising a system like this would be difficult without compromise of a large number of independent namespaces, increasing the likelihood of detection via the non-authoritative queries.
The missing component in this arrangement is cryptographic security of DNS, which I cynically suspect is why the DNSSEC working group comprised the usual suspects and eventually produced a protocol without query encryption. It could still be layered on via a protocol extension, however.
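The cross-query consistency check described above can be simulated with dicts standing in for the DNS namespaces (all names and record shapes here are hypothetical): David accepts Alice's key only if Bob's independently served endorsement matches it.

```python
# Toy simulation of cross-namespace key endorsement checks.
alice_ns = {"key": "alice-key-v1", "endorsed_by": "bob"}
bob_ns = {"key": "bob-key-v1", "endorsement_of_alice": "alice-key-v1"}

def davids_check(alice_ns: dict, bob_ns: dict) -> bool:
    """David queries Alice's namespace, then cross-checks via Bob's."""
    return alice_ns["key"] == bob_ns["endorsement_of_alice"]

assert davids_check(alice_ns, bob_ns)   # independent namespaces agree

# Mallory compromises Alice's DNS but not Bob's: the mismatch is detectable.
compromised_alice_ns = dict(alice_ns, key="mallory-key")
assert not davids_check(compromised_alice_ns, bob_ns)
```

As stated above, the more namespaces Mallory would have to compromise simultaneously, the likelier one of these cross-queries is to surface a mismatch.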
In practice it's possible to make a system that's hard to mitm if users are diligent. WhatsApp publishes a public record of hashes of the keys. If both sides check that record against their local keys, it's hard for WhatsApp to present different versions to each. Though that's a more recent development.
The harder part that Instagram is most likely concerned about is getting low-effort users to keep their private keys safe without losing them.
Throwing this on the "brainstorm if we had an ideal legislative world" pile: Stealing a user's private key should be a felony, even if it hasn't (yet) been abused for anything.
The tricky part is keeping it from being "permitted" by a crappy contract of adhesion. Banning it entirely would make it very difficult to buy/sell backup services...
lol honestly, I think a little on the contrary. If we can make a thing impossible technically, the law defers to that. One thing the government really can’t do easily in Western countries is force a company to add features or change core functionality.
I'd say those are legal barriers, rather than technical barriers.
For example, suppose the government demands constant access to your core database. You don't need to invent any new algorithms for that, you might just make an SQL user and a firewall exception and call it a day.
Similarly, if you have a messaging client, you don't need complex R&D to steal the "end-to-end" keys.
I’m not sure why you think so? If the service provider claims E2E but intentionally provides a defective version of it, that’s a pretty clear-cut violation of the federal statute, which afaik, based on the statute’s language, contains no exception for defects cajoled into being by government pressure short of a clear statute mandating it, which does not exist afaik.
Insider trading is a part of it. If someone bets a few billion dollars that America will invade Iran, the probability shoots up to 98%, even though nobody else thinks it will happen. They can then run a press release about how their platform predicted the invasion before anyone else did.
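The mechanics behind "a few billion dollars shoots the probability up to 98%" can be sketched with the logarithmic market scoring rule (LMSR), used here only as a stand-in for whatever pricing mechanism a real prediction market runs; all the numbers are made up.

```python
# LMSR sketch: one large buy of YES shares drags the implied probability up,
# regardless of what anyone else believes.
import math

b = 1000.0                 # liquidity parameter (made up)
q_yes, q_no = 0.0, 0.0     # outstanding shares per outcome

def price_yes(q_yes: float, q_no: float) -> float:
    """Implied probability of YES under LMSR."""
    ey, en = math.exp(q_yes / b), math.exp(q_no / b)
    return ey / (ey + en)

p_before = price_yes(q_yes, q_no)   # no bets yet: exactly 0.5
q_yes += 3900.0                     # one actor buys a huge pile of YES
p_after = price_yes(q_yes, q_no)    # implied probability jumps to ~0.98

assert p_before == 0.5
assert p_after > 0.97
```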
These were Oscar predictions and similar, so no insider trading, and, when I wrote about it, the prevalence of major prediction sites on the internet seemed to degrade the crowd wisdom because so many people just went with what a few sites were picking.