Much of the US/UK legal system is based on common-law rules that are several hundred years old. In some cases those old laws have been codified, in some cases not, but either way there's no need to drop them just because they're old. On the contrary, laws that have stood that long without needing to be changed have demonstrated that they are extraordinarily good ideas.
I’m not sure pointing to the UK is a good example. There are plenty of weird and obscure laws that simply aren’t enforced or followed anymore, from laws about handling salmon suspiciously through to various rights around who can drive sheep across Tower Bridge.
Those laws survive not because anyone considers them a good idea, but simply because the issues caused by ignoring them are substantially smaller than the effort involved in removing them.
We also have a bunch of laws that are still followed, but only in the most technical sense. Every “parliamentary train” schedule falls into that category: train services that must be provided at least once a day, sometimes only once a week, which nobody actually uses, and which in some cases only travel to stations with no practical public entrances. Those laws don’t survive because anyone thinks they’re a good idea; it’s just easier to run the train than it is to get parliamentary time to abolish the law.
It's unfriendly to developers and power users, but very friendly to the other 99.999% of users.
I used to work for Google, on Android security, and it's an ongoing philosophical debate: How much risk do you expose typical users to in the name of preserving the rights and capabilities of the tiny base of power users? Both are important but at some point the typical users have to win because there are far, far more of them.
The article implies that this move is security theater. It's not. I wasn't involved in this decision at all, but the security benefit is clear: Rate limiting.
As the article points out, Google already scans all devices for harmful apps. The problem is knowing which apps to look for. Static analysis can catch them, dynamic analysis with apps running in virtual environments can catch them, researchers can catch them, users can report them. All of these channels are used to identify bad apps, and Google Play Protect (or whatever it's called these days) can then flag them on user devices and warn the users. But if bad actors can iterate fast enough, they can get apps deployed to devices before Google catches on.
So, the intention here is to slow down that iteration. If attackers use the same developer account to produce multiple bad apps, the dev account will get shut down, requiring the attackers to create a new account, registered with a different user identity and confirmed with different government identification documents.
Note that in the short term this will just create an additional arms race. In order to iterate their malware rapidly, attackers will also need to fake government IDs rapidly. This means Google will have to get better at verifying the IDs, including, I expect, getting set up to be able to verify the IDs using government databases. Attackers will probably respond by finding countries where Google can't do that for whatever reason. Google will have to find some mitigation for that, and so on.
So it won't be a perfect solution, but in the real world, especially at Google scale, there are no perfect solutions. It's all about raising the bar, introducing additional barriers to abuse and making the attackers have to work harder and move slower, which will make the existing mechanisms more effective.
It's not even about power users. The article describes this pretty well: this action will destroy, or at least severely harm, the open source app ecosystem. I can already see it having a chilling effect on developers releasing apps on F-Droid. You might ask why you should care about that when you're one of the 99% of normal users. But it all comes down to freedom: if you destroy alternatives to the Play Store, you remove the freedom of choice that even the 99% would have if they were willing to switch to proper open source solutions.
Does anyone know if there is concrete evidence that this measure violates the EU's Digital Markets Act?
> in the name of preserving the rights and capabilities of the tiny base of power users
These are the rights of all the users. Take that perspective.
Remotely pushing code to billions of devices to lock down their basic function (running code the user loads) unless the device owner pays and provides sensitive info is a full-scale global malware attack in itself.
Completely false dichotomy: you could release a separate Android channel that would require flashing through fastboot but would still be signed, wouldn't require an unlocked bootloader, and would fully pass "Play Integrity".
> How much risk do you expose typical users to in the name of preserving the rights and capabilities of the tiny base of power users?
Rights and capabilities are for everyone, even if they're not currently using them due to not being a "power user". They're an important "escape valve": if things get bad enough, normal people become power users out of necessity.
In that case, an ID-gated Play Store and a developer-settings toggle with a scary warning message would serve the same purpose for that 99.999% while leaving the rest minimally affected. Clearly that's not enough for Google.
>> Note that you are trusting this app with your private key.
On Android, the Kryptonite code uses the AndroidKeyStore to store the private key, which means that the app does not have access to it. At a minimum (on old devices), AndroidKeyStore keeps the private key material in a separate process, so it never exists in the app's process space. On newer devices (launched with M or later), the private key material is kept in the Trusted Execution Environment, so nothing in Android user or even kernel space has access to it.
EDIT: Actually, there's one small flaw in the Kryptonite code that may make the private key accessible to a sophisticated attacker who compromises the app. The key allows signing without using a hash function. Signing a sequence of carefully-chosen plaintexts can reveal the private key. I filed an issue and sent a pull request.
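The exact attack depends on the signature scheme, but as one illustration of why letting a key sign raw, attacker-chosen data without a hash is dangerous, here's a toy sketch in Python (this is not the Kryptonite code, and the key sizes are deliberately tiny): textbook RSA signatures are multiplicatively malleable, so signatures on two chosen messages can be combined into a valid signature on a message the key holder never signed.

```python
# Toy textbook RSA (no hashing, no padding) -- NOT the Kryptonite code,
# just an illustration of why raw signing of chosen plaintexts is unsafe.
p, q = 61, 53                       # toy primes; real keys use 2048+ bit moduli
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def sign_raw(m):
    """Sign the raw integer m directly, with no hash or padding applied."""
    return pow(m, d, n)

def verify(m, sig):
    """Check that sig is a valid textbook-RSA signature on m."""
    return pow(sig, e, n) == m % n

# The attacker asks for signatures on two chosen messages...
m1, m2 = 42, 99
s1, s2 = sign_raw(m1), sign_raw(m2)

# ...and forges a valid signature on m1*m2 without ever touching the key:
forged = (s1 * s2) % n
assert verify((m1 * m2) % n, forged)
```

Hashing the message before signing destroys this algebraic structure, which is why it's safer to generate AndroidKeyStore keys restricted to digested signing (e.g. via `KeyGenParameterSpec.Builder.setDigests`) rather than allowing raw `NONE` signatures.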
It's a platform feature, so it's open source, but there's always a delay between the announcement and the time the code hits the public repositories. It'll be there before too much longer.
I find it useful to keep in mind the notion that all knowledge is gained by a process of guesswork and criticism. When you listen to my voice or read my words, you must guess at what I mean to convey, because my words by themselves, even if not mumbled, are almost never sufficiently precise to carry my meaning.
So instead, you have to build and discard internal explanatory models of my meaning, criticizing them by cross-checking them with other things I've said, and with your understanding of my understanding of the world. Meanwhile, I'm doing the same thing on the other side, hypothesizing the models that you're creating based on what I've said and trying to add more words to fill any gaps in what I presume that you're presuming that I mean.
When we arrive at a point where you and I both believe that you hold a consistent mental model of what I wished to convey, then we believe that I have communicated to you.
Stated that way, it's clear that communication is really, really hard -- even though we do all of that model building and evaluation without conscious effort in most cases. And it's also quite obvious why it's easier to communicate with people you know well, because both sides have a better mental model of the other's mental model. Both are _wrong_, always, but they're less wrong than similar situations between people with less shared context.
This view also makes it abundantly clear that it's important to validate communication. If you restate to me in your own words what you believe I intended to convey, there's a good chance I'll catch any major discrepancies between what I intended and what you got. A good chance, but we can still end up believing that we're in agreement when we're not.
In theory it is possible to define a language and communication techniques that do not depend on this iterative, contextualized method. This is essentially what we do in formal languages, such as those we use in mathematics or programming. But it is not how people communicate because it's actually far more efficient to rely on compression via shared context than it is to communicate with formal precision. Further, formal communication only obviates guesswork and criticism at the level of understanding which is directly expressed. I can read an assembler program and understand with perfect precision what the individual instructions do, but the leap to understanding the goal of the program again requires guesswork and criticism.
As an aside, it's interesting to note that the process of guess-and-evaluate is essentially the same as the scientific method of hypothesize-and-test and even the same as the evolutionary method of vary-and-select. There's a compelling argument that all knowledge creation occurs via this process -- and communication is knowledge creation, even if it simply conveys an idea from one brain to another, because there's no direct transfer mechanism; the receiver of the idea must create it based on observations of the words of the giver.
> communication is knowledge creation, even if it simply conveys an idea from one brain to another, because there's no direct transfer mechanism; the receiver of the idea must create it based on observations of the words of the giver.
I feel like science and mathematics (and even computer science) attempts to counter this.
If we can mechanically generate mathematical proofs, and those proofs can be proven equivalent to one another, then we have essentially simplified the abstraction mechanics of the individual brain and transferred them to a computer.
I view abstraction as a means for expanding all potential permutations of a given model and its application, and then a reduction to a different set of terms that connects models.
We might be able to prove that we can agree with one another, but we might not be able to prove that we agree with our own selves. We can always make the problem more difficult individually by guessing and creating more questions.
To me the process is two sides of the same coin. One side is creation, the other side is destruction. You can't convey an idea without having an idea, and you can't question that idea without having another idea. Negation is still a logical mechanism. You can question whether there is more to think about than a choice between [(exist) or (does not exist)].
I really prefer to think about possibility and potential. It's an open space to me.
You're not wrong, but I think you overstate the case. I wouldn't say employees are encouraged to publicly trash the company's products. Not at all. But the company does respect employees' right to speak their mind in public, and it does encourage thoughtful internal dissent.
I often tread pretty close to the line on what I say in public, and have even been reined in by Google legal counsel in a couple of cases. I found the experience of being told to cool it to be surprisingly affirming and liberating, and a powerful confirmation of the true commitment to openness in Google culture, because of the reasons for which it was done and the way in which it was done. Specifically, in both cases I really had crossed a line which could be potentially troublesome for Google in court, and in both cases the attorney who contacted me was respectful of my opinions and my rights to speak them to the point of being very apologetic about telling me to shut up. It was very clear to me that Google really didn't want to silence me, and did it only because they truly had to. I think that's awesome.
Based on my experience, I have zero concern for Brad's job, and wouldn't be surprised if he gets some mild and unofficial kudos.
> here we have a handful of _startups_ that confess there's isn't enough work to keep everyone in the nimble team up on toes for even forty hours a week
I don't think they said anything of the sort. There's no claim they don't have enough work for 40 hours; I'm sure like most of us there is no end to the work, and it can and will consume all the time we're willing to give it.
They're just not willing to give it as much. It's possible that will put them at a disadvantage to their competitors. It's also possible that they may be sufficiently more creative to overcome that disadvantage.
Yeah, but the negative votes are a perfect example of how stupid this community has become. It can't digest even the slightest counter-thinking, and the recoil is never short of abusive.
Look at the moron below who calls "up on toes" corporate slang. NO. It's not corporate slang; it's the pursuit of a startup trying to beat the rest.
Hopeless! This site used to be quality. Now it's full of sheep with a one-track mind. This is my last note here. Go ahead, vote it down, enjoy your dumb echo chamber, and live with your satisfaction.
I like a $20 bill more than a $10 bill. Is that an emotional reaction? No, it's an objective value judgement based on the fact that the former is more useful (roughly twice as useful) as the latter. The word "like" merely expresses preference. It's neutral as to the basis of the preference, which may be emotional, rational or some combination.
Correct. At present the only solution for pre-4.4 devices is to avoid using WebView to display untrusted content. If you're an app developer using WebView, make sure it only displays trusted content, meaning either local content or remote content from trusted sites with non-broken SSL. I recommend using Google's recently released nogotofail toolkit to test for SSL breakage (https://github.com/google/nogotofail).
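The "trusted content only" rule boils down to intercepting every navigation and checking it against an allowlist before the WebView loads it. Here's a minimal sketch of that check in Python (the host names and the local-asset path are made-up examples; in a real Android app the same logic would live in a `WebViewClient`'s URL-loading callback):

```python
from urllib.parse import urlsplit

# Hosts this hypothetical app considers trusted (illustrative values only).
TRUSTED_HOSTS = {"example.com", "api.example.com"}

def is_safe_to_load(url):
    """Allow only local app assets and HTTPS pages on trusted hosts.

    Everything else (plain HTTP, unknown hosts, other schemes) is
    rejected, so a man-in-the-middle can't inject content into the
    WebView even on a pre-4.4 device with a vulnerable engine.
    """
    parts = urlsplit(url)
    if parts.scheme == "file":
        # Local content bundled with the app.
        return parts.path.startswith("/android_asset/")
    if parts.scheme == "https":
        return parts.hostname in TRUSTED_HOSTS
    return False

assert is_safe_to_load("https://example.com/help")
assert not is_safe_to_load("http://example.com/help")    # cleartext: rejected
assert not is_safe_to_load("https://evil.example.net/")  # unknown host: rejected
```

This doesn't fix the underlying WebView bug, but it removes the attacker's ability to get hostile markup in front of it, which is the best available mitigation on devices that will never see the patch.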
The ideal fix for this problem is for OEMs to update devices to 4.4.
Google could invest the resources to create patches. What Google can't do is get those patches delivered to end-user devices. Given that any patches Google provided would never reach users anyway, why should Google bother? And we know that OEMs won't provide updates, because they are already refusing to provide the one that has existed for some time now: Android 4.4.
(Disclaimer: I'm a Google employee, and I work on Android security, but I'm not a spokesperson and these are only my own opinions.)
>What Google can't do is get those patches delivered to end-user devices.
Apple manages to do it. Google made a conscious decision to trade end users' ability to stay up to date for faster adoption of Android among OEMs and carriers. You can't now pretend that the results of that decision are some kind of inevitability. It was Google's choice, and they are responsible for the result.
>And we know that OEMs won't provide updates because they are already refusing to provide the one that has existed for some time now: Android 4.4.
Apple makes all of its devices and therefore controls them. Google can't dictate to Samsung, HTC, LG, etc.
Google has updated the in-support Nexus devices. The Galaxy Nexus is something of a question mark, but the number of active Galaxy Nexus devices is tiny. It would make more sense for Google to offer GNex users a new device than to upgrade the few remaining GNexes to 4.4.