> why would you suddenly trust their unverifiable claim that the data is now E2EE
> It's a useless PR tactic.
Maybe because a single whistleblower would bring down the mother of all class action lawsuits?
Hardcore anti-corporate types like to imagine that these companies are evil geniuses, where all 100,000 employees are operating in perfect alignment, with no mistakes or disagreements, and all secrets are kept perfectly.
It just doesn't work like that. Threat model it for a second: how many more phones is Apple going to sell with this? Maybe a 1% increase, to wildly overestimate it? And what would be the financial harm from a single engineer popping up on HN and saying "it's all BS, phones send the keys to the cloud, I worked on the system that stores them"?
> There's absolutely no _technical_ thing they could do to gain any trust.
Well, that's true. But there's also no non-technical thing they could do. It is literally impossible to prove perfect technical compliance on an ongoing basis using any combination of technical and non-technical means.
That goes for open source too. Evil compilers (Thompson's "Reflections on Trusting Trust") can turn perfectly solid source into malicious binaries, even when the compiler's own source is perfectly clean.
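To make the compiler point concrete, here's a toy sketch of the trusting-trust idea (plain Python; `evil_compile` is a made-up stand-in for a real compiler, not anyone's actual toolchain). The backdoor lives only in the compiler binary, never in any source you get to read:

```python
# Toy illustration of a "trusting trust" style attack.
# `evil_compile` is a hypothetical stand-in for a real compiler.

def evil_compile(source: str) -> str:
    """Pretend compiler: returns a "binary" (here, just rewritten source)."""
    if "check_password" in source:
        # Compiling the login program: silently add a master password.
        source = source.replace(
            "return entered == stored",
            'return entered == stored or entered == "skeleton-key"',
        )
    if "evil_compile" not in source and "compile(" in source:
        # Compiling a clean compiler: re-insert this very check, so the
        # attack survives rebuilding the compiler from pristine source.
        source += "\n# ...backdoor re-injection would be appended here...\n"
    return source

# The login source is perfectly solid...
login_src = (
    "def check_password(entered, stored):\n"
    "    return entered == stored"
)
print(evil_compile(login_src))  # ...but the "binary" is not.
```

Auditing `login_src` tells you nothing; the compromise exists only in the compiler binary, which is exactly the point.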
At some point you have to think about probabilities and motivations, and move away from this "anything not 100% perfect, which BTW is not possible, is 100% useless" world view.
> Maybe because a single whistleblower would bring down the mother of all class action lawsuits?
Sure, like that is going to happen. I mean, "Facebook can read your supposedly-encrypted WhatsApp messages" will raise how many eyebrows exactly?
> But there's also no non-technical thing they could do
No, that's untrue. For starters, release the source. Allow me to run my own backup software on their servers. Allow me to transparently run my own encryption before I upload stuff to their servers. And a very long etc.
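For the third item, a minimal sketch of what "run my own encryption before I upload" could look like, assuming the Python `cryptography` package and a hypothetical `upload()` standing in for whatever provider client you use:

```python
# Encrypt locally so the provider only ever stores ciphertext.
# Requires: pip install cryptography
# `upload` is a hypothetical stand-in for your provider's client.
from cryptography.fernet import Fernet

def upload(name: str, blob: bytes) -> None:
    print(f"uploading {len(blob)} opaque bytes as {name}")  # placeholder

key = Fernet.generate_key()  # stays on *your* device, never uploaded
plaintext = b"pretend this is your photo library"

upload("backup.enc", Fernet(key).encrypt(plaintext))
# Later, on any device that holds `key`:
#   Fernet(key).decrypt(downloaded_blob)
```

The provider never sees `key`, so whatever its servers or insiders do, they are only ever handling opaque bytes.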
> anything not 100% perfect, which BTW is not possible, is 100% useless
This is 100% useless not because it is not 100% perfect (it very well could be), but because it is 100% useless by conception. What threat model does this protect against, exactly? The scenario where Apple servers get compromised? I'm quite sure this risk does not even enter the mind of the target audience here, and if it did, the hacker could very well push the silent update anyway. The scenario where Apple itself has access to the data? This does absolutely nothing to prevent it. The scenario where someone can social engineer an Apple employee into giving your iCloud key to someone else? That was already not possible.
> What threat model does this protect against exactly?
Two big threats: 1) insider attacks like the Saudi Twitter infiltration[0], and 2) Overreach by legitimate government process like subpoena[1].
> release the source
Useless. How do you know it's the exact source running on-device?
> Allow me to run my own backup software on their servers
Useless. How do you know your own backup software isn't compromised via a secret deal with Apple?
> Allow me to transparently run my own encryption before I upload stuff to their servers.
Useless. How do you know the OS isn't grabbing the raw files? How do you know your own encryption isn't compromised? How do you know that Xcode isn't inserting backdoors in the encryption you compiled from source?
> And a very long etc.
All useless. Tell me your perfect solution and I promise I can show it's useless (by your standards).
> Two big threats: 1) insider attacks like the Saudi Twitter infiltration[0], and 2) Overreach by legitimate government process like subpoena[1].
This does not prevent any of these threats; it does not even necessarily make them more difficult whatsoever. "Insiders" will still have access to the source code doing the encryption and communications, and it is just not possible to protect against government overreach that can literally force you to do anything and keep quiet about it, even in otherwise relatively sane countries. Search for "National Security Letter".
I actually don't expect any corporation to be above the government, fwiw, but this is off-topic.
> Useless. How do you know it's the exact source running on-device?
Because you built it yourself?
> Useless. How do you know your own backup software isn't compromised via a secret deal with Apple?
Because it's YOUR OWN backup software?
> Useless. How do you know the OS isn't grabbing the raw files? How do you know your own encryption isn't compromised? How do you know that Xcode isn't inserting backdoors in the encryption you compiled from source?
Because I have the source of the OS and I built it myself? Because I have literally used the same compiler I use for other platforms, and not Apple's? Because I can then actually monitor the communications between the device and the mothership? Etc. etc.
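The "because you built it yourself" claim is checkable in the reproducible-builds sense: build independently and compare digests against what the vendor ships. A minimal sketch (the file paths are hypothetical placeholders):

```python
# Reproducible-build spot check: if an independent build of the same
# source matches the shipped artifact bit-for-bit, a backdoored
# toolchain has to compromise both sides to go unnoticed.
import hashlib

def digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

mine = digest("build/os-image.bin")     # built on my machine (hypothetical path)
theirs = digest("vendor/os-image.bin")  # what the vendor ships (hypothetical path)
print("match" if mine == theirs else "MISMATCH: audit the toolchain")
```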
The point of this entire thing was to show that _there are_ non-technical policies they could adopt to actually increase the trust level (or at least have a discussion about it -- as you are), but there is very little technical stuff they can do to increase it, and that's because it would miss the entire point. It's not about "trusting trust perfection" or whatever you think you are arguing here. You are trying to protect stuff from Alice by trusting Alice, without even being capable of verifying it. It just can't academically work. You need to either be able to verify it, or at the very minimum separate both roles.
> This does not prevent any of these threats; it does not even necessarily make them more difficult whatsoever. "Insiders" will still have access to the source code doing the encryption and communications, and it is just not possible to protect against government overreach that can literally force you to do anything and keep quiet about it, even in otherwise relatively sane countries. Search for "National Security Letter".
There you go again :)
You literally just said that something that used to take a subpoena from any law enforcement now takes a National Security Letter. And that an insider attack that used to mean retrieving a backup file now means inserting backdoors into source code that go undetected.
And somehow those aren't even more difficult?
> Because I have literally used the same compiler I use for other platforms
It is literally provable that Apple will never be able to satisfy you. For any mitigation they introduce, you can (rightfully) poke a hole in that mitigation.
What you're missing is that the same flaws and attacks appear in all of your "it would be better if" solutions. Once you're invoking NSA letters and malicious source code, all bets are off... including for open source.
> It just can't academically work.
Yes, we agree on that. But it also doesn't work if you're protecting stuff from Alice by trusting Bob, who might be secretly an agent of Alice.
> You literally just said that something that used to take a subpoena from any law enforcement now takes a National Security Letter
I didn't say that. You said "overreaching government".
> It is literally provable that Apple will never be able to satisfy you
Nothing _technical_, that is, which has exactly been my point.
> Once you're invoking NSA letters and malicious source code, all bets are off... including for open source.
That's not true at all. There's an entire world of difference between "oh, the software is just hidden from my eyes, communicating constantly and opaquely with the mothership, changeable at any moment by that same mothership, and all of it running on hardware also made by the same mothership" and "I have these separate components that communicate only through these channels, in these clearly specified ways". The first only allows useless technobabble fake solutions; the second actually allows a discussion about trust, and is usually the very minimum expectation of any cryptosystem.
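To make "clearly specified channels" concrete, here's a toy sketch: wrap the transport so every byte crossing the trust boundary is observable, then check that plaintext never crosses it. All names here are hypothetical, for illustration only:

```python
# Toy trust-boundary check. `RecordingTransport` stands in for the
# network link to the provider; it records everything sent across.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

class RecordingTransport:
    def __init__(self):
        self.sent = []
    def send(self, blob: bytes) -> None:
        self.sent.append(blob)

key = Fernet.generate_key()   # held by the user, not the provider
transport = RecordingTransport()

secret = b"journalist's source list"
transport.send(Fernet(key).encrypt(secret))

# The channel spec is "only ciphertext crosses" -- and it's verifiable:
assert all(secret not in blob for blob in transport.sent), "plaintext leaked"
print("only opaque bytes crossed the boundary")
```

With the roles separated like this, you can reason about each component's trustworthiness independently, which is the whole point.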
> But it also doesn't work if you're protecting stuff from Alice by trusting Bob, who might be secretly an agent of Alice.
I don't see that as necessarily true either. But anyway, I can now choose between multiple providers for encryption, which _finally_ goes towards measurably increasing trust. Remember, despite the accusations, I have never claimed it had to be 100% trusting-trust perfect; I am just claiming this one proposal is 100% useless. If you didn't trust Apple backups before but would now, I'd question your judgement.
Something like hacking into a journalist's phone would require a lot of cooperation between the infrastructure, software, and security teams to actually perform a targeted attack.
Despite Apple's harsh warnings about leaking secrets, people at Apple have already been spilling the beans about Apple's upcoming ad platform for over a year, and that's just for something as morally grey as ads that they're going to spin as "privacy preserving" anyway. For something that actually goes against _everything_ Apple has ever stood for, like targeting a journalist's phone to read their communications or extract data and secret keys from their Advanced Data Protection-protected iCloud backups, at least one of the hundreds involved would find a comfy bunker to live in, with a phone line leading straight to News Corp or the NYT.
Do you honestly believe that a malicious actor who can access data storage can also necessarily access a silent mechanism to affect the security internals of a given iPhone? And also that the theoretical hacker wouldn't be able to just push said theoretical silent update to your device to exfil the data anyway?
I'm really having a hard time understanding the detailed security implications of your scenario, beyond this vague notion you're presenting that a theoretical hacker can use theoretical tools to silently pwn any Apple device connected to the internet at any time.
> that a malicious actor who can access data storage can also necessarily access a silent mechanism to affect the security internals of a given iPhone?
A malicious actor who can access _already encrypted_ data storage, where you cannot even associate files with a given account ID without having already put a backdoor in the corresponding code, may be able to put such a backdoor in the software that is distributed to iPhones? Yes, I believe that.