My understanding is they added this for legal reasons. Users were not aware Instagram and WhatsApp were owned by Facebook and legislators had issue with that.
This is interesting, and is reflected in some other comments, but I have always gone under the assumption that you only learn things once, so it is best to learn them the correct way. In my experience this mostly holds true in academic settings, and I have applied it accordingly.
In dance it's pretty important to learn the basics well from day one. If you learn wrong technique it worms its way into your muscle memory and is difficult to unlearn and replace with something else. (It's also why the most basic dance classes ought to be taught by expert teachers, but that's another hobbyhorse.) And bad technique can lead to injury.
Other fields don't necessarily have a "correct" way, or they do but it's comparatively easy to replace a mistake once we notice it. In some cases, making a mistake and then correcting it can actually fix the right way to do it in one's memory. (Think of embarrassing mistakes we sometimes make when trying to speak a foreign language!)
I guess now is the time to look into how to root the console and install a custom ROM in a similar fashion to de-Googling your android phone. There is already enough support in the community for side-loading APKs and the like. Does anyone know of any ways to achieve this?
Write to your MP. Not that they will do anything. Maybe you'll be luckier than me and won't get a canned response about how they are "working hard to stop this" without taking action.
I occasionally do - and there are times where I do get a response back.
I'm considering registering myself as a lobbyist so that I can have more of a face to face contact with councillors in my city, and actually push for more change.
Because right now, if people in tech aren't advocating for better security practices...we're going to doom ourselves by standing by and not doing anything at all.
Bit late to the party. I checked out thebrowser.com but there is no mention of the subscription cost anywhere; it seems they won't tell you until you give them your email. Would you know the cost per billing period? It seems like a cool service but I'm curious how their cost compares to something like The Economist (different information, I know).
It should be noted that your privacy is not preserved if you test positive and need to upload your Daily Tracing Keys to a server. Your broadcast IDs for an entire day can be linked together, making it easier to de-anonymize you. I understand that they use Daily Tracing Keys to reduce the demand of the backend server, but I think it would be better for user privacy if they either reduced the linkable period from a day to say an hour, or used an unlinkable design.
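To make the linkability concrete, here is a toy sketch of the idea. This is not the actual spec's derivation (the real v1.0 design uses HKDF and truncated HMAC with specific labels); the key size, labels, and interval count below are invented for illustration.

```python
import hmac
import hashlib

def rolling_ids(daily_tracing_key: bytes, intervals: int = 144) -> list:
    """Derive one 16-byte broadcast ID per 10-minute slot of a day.
    Toy stand-in for the spec's construction; labels/sizes are invented."""
    return [
        hmac.new(daily_tracing_key, b"RPI-%d" % i, hashlib.sha256).digest()[:16]
        for i in range(intervals)
    ]

dtk = b"\x01" * 16                 # hypothetical Daily Tracing Key
ids = rolling_ids(dtk)

# Once dtk is uploaded, anyone can regenerate all 144 IDs and therefore
# link every broadcast you made that day back to a single device.
assert ids == rolling_ids(dtk)
assert len(set(ids)) == 144
```

The point: the broadcast IDs look random to passive observers, but they are all deterministic functions of one daily key, so publishing that key makes the whole day linkable.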
If you test positive, and we actually have the resources to trace your contacts, your privacy is certainly gone in today's system.
As you'll have to provide information about your recent contacts to the authorities performing the contact tracing. At least that's how I understand our local law (Germany).
So I don't think it's necessarily worse doing it with an app than doing it the old-fashioned way. Sure, digital traces are always easier to abuse, but on the other hand, because things get automated, fewer people might actually get access to your data. Which would be a privacy win.
I believe what's even more important than how we design the app is how we design the legal framework around it. We need rock-solid laws, with enforceable data retention periods, that limit access to the bare minimum needed.
Unfortunately, our track record for designing such laws has not been too good in recent years.
I'm not sure if you intended that to be positive (ie tracing might be more complete in some cases) or negative (ie concerns about not wanting to reveal certain data). I'm going to go ahead and respond to the negative interpretation in case any future readers interpret it that way.
This is true, but I think a DP-3T-like protocol (e.g. the Apple-Google spec) doesn't actually pose much risk here. The hypothetical drug dealer or other illicit contact can receive a notification that they were potentially exposed to someone who was infected, but in general no one else (a police officer, a spouse, etc) will be able to determine who was in contact with whom.
In order to link someone to a particular location, you would need to observe their broadcast identifier while they were there and also link their diagnosis key back to them (this is likely to be quite difficult for most actors to accomplish).
In order to reveal a contact between two people, you would either need to do the above for both of them or to observe at least one of them at that location and time in some other manner.
I’m sure the protocol is fully privacy-preserving now. But if we give an inch, the government will take a mile. This is about normalizing self-surveillance and isolating ourselves in response to notifications on our phone. Sure, the tech is privacy-preserving now. But who’s to say an emphasis will remain on privacy in future iterations of the technology?
Personally, I will not opt-in to this technology, and if forced to use it, I will leave my phone at home. It’s a small act of civil disobedience but it’s a necessary one IMO.
It’s alarming to me how so many in tech seem welcoming of, even excited for, this technology. I say this as someone who wrote my senior thesis on a subject related to privacy enhancing technology, so I’m familiar with the ideas.
> It’s alarming to me how so many in tech seem welcoming of, even excited for, this technology.
It gets contact tracing right by accomplishing the goal while yielding almost no ground on privacy and remaining almost entirely offline. In an ideal world, all new technologies would be implemented in such a focused manner without regard for turning a profit.
I'm puzzled by your concern about normalization of self-surveillance; everyone I know has already voluntarily made drastic alterations to their behaviors due to current circumstances. I really don't see what introduction of this technology changes.
> who’s to say an emphasis will remain on privacy in future iterations of the technology?
If people don't object to widespread state surveillance later, would they have objected now? I don't see why a decentralized technology specifically built to prevent surveillance should lead to an increase in acceptance of it.
The difference is that the old system relied on human memory, which is fallible, not to mention you could omit details that would lead to further trouble (infidelities, for one). In this system the only control a user has is to turn off Bluetooth, or leave their phone at home if Apple/Google override the user's ability to turn it off.
The protocol states that it will upload the Diagnosis Keys, the set of Daily Tracing Keys relevant to your exposure. So in short, if this is the case, it forces the user to either upload all of their keys or none.
I would like to note that a v1.1 has recently been released, my information is about v1.0.
The specification (at least v1.1) contains nothing about uploading keys. The API appears to provide only the minimum required for protocol implementation.
The ENSelfExposureInfoRequest class can be used by an app to obtain diagnosis keys for the previous 14 days. What an app does with those keys is up to whoever implements it.
Under what circumstances do you think it would be okay for an infected person to hide their contacts? Surely you’re not valuing your marriage over the lives that will be lost in the resulting spread?
For one, if a user is in close quarters with someone they fear will lash out at them if they test positive. Or, off the top of my head, let's say you take an Uber home: the driver now has your home address, and you don't know if they will try to attack you.
These are just examples, but as other comments in this thread have explained, violence against people who have the virus is happening around the world and is something that must be accounted for in these protocols.
If you have a Bluetooth receiver logging the different IDs you've come into proximity with and when, it's easy to deduce who the positive user is from who you were near at that time.
Well, let's keep in mind it is decentralised, so only people who have been in contact with you can correlate it with your location at a given time in the past. Not the whole world nor a central authority.
And even if someone goes to that extent to track down your identity, I am not sure that local de-anonymisation is a problem. This is not something like HIV; I don't think there is any social stigma to catching the coronavirus. If you catch it you should self-isolate, and it will be obvious to the people around you that you got it. And if you don't want to self-isolate and want to hide it, what is the point of self-declaring on the app that you were infected in the first place?
> I don't think there is any social stigma to catching the coronavirus.
"In Mexico, Colombia, India, the Philippines, Australia and other countries, people terrified by the highly infectious virus are lashing out at medical professionals — kicking them off buses, evicting them from apartments, even dousing them with water mixed with chlorine."
...and these are cases where the victims don't even have the virus.
Disease has always carried stigma. We tend to lash out at things we don't understand. History has seen everything from leper colonies to menstruating women herded into tents.
You or I may be able to rationalize it and say "well, shit, the test was positive-- time to self-isolate" but plenty of plebes will use it as cause to incite a witch hunt, especially if a loved one dies from it and transmission is attributable to you.
OK, that's a fair point, but it would take someone uneducated who believes in the stigma to also be a tech wiz able to collect and correlate the data. Presumably the app won't tell you when and where the contact happened (and if it did, no implementation of contact tracing would be anonymous, since the app can't know whether you were on a busy bus full of strangers or in a small office with a single colleague).
> it would take someone uneducated that believes in the stigma to also be a tech wiz to collect and correlate the data
Not quite. It takes a wiz to collect and correlate the data, yes. But what happens to that data afterward? For it to be useful, it's going to get stored somewhere. All it takes is an uneducated clerk or a bored intern with access to go snooping around the de-anonymized data to compromise anybody implicated.
And this does happen routinely.
* Facebook, Uber and Google have all had problems with plebes (and tech wizzes!) with god-tier access doing inappropriate things with sensitive data.
* Bored data entry clerks with access to the credit reporting database routinely snoop on neighbors', exes' and celebrities' credit reports in spite of federal law.
* Revenge porn is such a thing that rule 34(a) ought to be that if you produce nudes, your confidant or Geek Squad/iRepair technician will post them on the internet.
* Look at how often people get doxxed by employees leaking customer PII onto reddit and 4chan, then look at how fast the mob descends on people innocent of any actual wrongdoing.
* We've seen a secretary get her hands on the Coca-Cola formula and try to sell it to Pepsi.
* The people living in the geographic center of America continue to receive death threats and harassment because of a flaw in outdated MaxMind databases that attributes ungeolocatable IPs to their location.
* There are people who refuse to participate in the census because of what certain cults of personality have done with such data.
Any chain of confidentiality is only as strong as its weakest link. You presume far too much intelligence and rationality on the part of humanity. Never forget that half of Americans wanted a belligerent narcissist to be "leader of the free world," and he still has supporters despite publicly recommending anti-parasitics and Lysol douches as solutions for a global viral pandemic.
Sensitive data is not created and left to decay in an underground bunker in Yuma. Despite its practical uses, at some level it will be exposed to individuals who lack discretion and will be exploited to malevolent ends.
Not once in human history has it worked out any other way!
It's a bit more complicated than that. What's being suggested here is that it would be possible for a bad actor to observe all Bluetooth activity over a large area. They could then use a diagnosis key to reconstruct someone's path through this monitored area, and then deanonymize that person by combining their path with other data sources. Later, an uneducated and hostile individual might somehow gain access to this deanonymized data and abuse it.
Couldn't they do that for anyone with bluetooth on, whether or not they're using the app? I get that knowing they have Coronavirus might make them a bigger target, though
If you enable the framework but never test positive (and thus never publish any of your keys), it's no different than if you had just kept Bluetooth on all the time.
If you enable the framework, later test positive, and choose to publish your diagnosis keys, each key can be used to link all your rolling identifiers together for the corresponding time period (nominally 24 hours). Contrast this with a randomizing Bluetooth implementation, which never intentionally reveals anything that would allow the different MAC addresses to be linked.
Of course, Bluetooth MAC address randomization itself is trivial to defeat for a reasonably capable and motivated adversary. If they can plant a bunch of radios for the purpose of tracking you, why can't they also use cameras?
That would work if there's just one person crossing the path and you're able to physically identify them. If it's a crowd of people, you won't know who was what device unless their device is immediately next to the evil antenna. This isn't very realistic in practice, and is very unlikely to become commonplace worldwide. However, the virus IS already worldwide, and is a giant threat to many.
> If it's a crowd of people, you won't know who was what device unless their device is immediately next to the evil antenna.
Actually that's not true for the situation I described.
The bad actor would be able to connect any of your broadcast identifiers they observed back to each other via the diagnosis key that you published. Assuming they have a number of nodes monitoring Bluetooth traffic over a broad area that you passed through, they will be able to reconstruct the path you traveled over time.
For a naive implementation, the resolution of this reconstruction would depend on the spacing of the nodes. For a more advanced implementation, other data could be integrated to drastically improve it. Remember, your Bluetooth device is a broadcasting radio at the end of the day.
As to the likelihood of such things becoming commonplace worldwide, do bear in mind that many devices now periodically randomize their Bluetooth MAC addresses due to real-world examples of tracking. Thankfully, in this case it would only be possible to compromise the privacy of those who tested positive, and only within a single 24-hour period (ie the daily tracing key rotation time frame) at that.
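The reconstruction described above can be sketched in a few lines. Everything here is hypothetical: the key derivation is a toy stand-in for the spec, and the node logs and locations are invented to illustrate the attack, not drawn from any real deployment.

```python
import hmac
import hashlib

def ids_for_day(daily_key: bytes, intervals: int = 144) -> list:
    # Toy derivation: one 16-byte broadcast ID per 10-minute slot.
    return [hmac.new(daily_key, b"RPI-%d" % i, hashlib.sha256).digest()[:16]
            for i in range(intervals)]

victim_key = b"\x02" * 16              # later published as a diagnosis key
victim_ids = ids_for_day(victim_key)

# Hypothetical per-node logs: location -> broadcast IDs overheard there.
node_logs = {
    "train station": {victim_ids[8], b"someone-else---1"},
    "coffee shop":   {victim_ids[50]},
    "office lobby":  {victim_ids[90], b"someone-else---2"},
}

# After the key is published, the adversary regenerates the day's IDs and
# intersects them with each node's log to reconstruct the path traveled.
known = set(ids_for_day(victim_key))
path = [place for place, heard in node_logs.items() if heard & known]
assert path == ["train station", "coffee shop", "office lobby"]
```

Note that without the published key, each node sees only unlinkable random-looking IDs; it is the diagnosis key that ties the observations together.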
Yes, I agree this is true if someone were to go to extreme efforts. It seems to me personally quite unlikely, especially since the governments are already the ones who distribute the apps and, at least in the initial implementation, the ones who confirm your status.
I'm much more concerned about reducing COVID-19 to save millions of lives.
People who are educated tech wizards can also have the same stigma. There are many factors at play in places that have more diversity and existing problems between communities (like race, religion, class, etc.). Hopefully, like you said, the app won’t tell you when and where the contact happened. But we still need to make sure that we have as much privacy protection as possible while making this useful.
> I don't think there is any social stigma to catching the coronavirus.
Sadly, this is not true in India. Infected people have been threatened by their neighbors and friends with death. Infected people have also been harmed. Doctors, nurses and healthcare workers who have been caring for COVID-19 patients have been evicted from their houses or physically harmed (the latter forced the government to bring an emergency law providing for stringent punishment for those who attack healthcare workers).
I’m guessing that there will be a social stigma depending on the culture as well as other factors like the mortality rate (if your area has a higher mortality rate, then you’d likely hate infected people and take matters into your own hands).
Humans can very quickly develop irrational fears and act on them.
So there is a huge need to preserve the privacy of those infected.
So all of the tokens are being put on a central server. Today, governments use WiFi and Bluetooth to track traffic. It is not far fetched to see that your commute from point A to B could be tracked using Bluetooth receivers in transit stations.
This technology is being used to track people today. Bluetooth address randomization is not sufficient to prevent this; the only option is to not use Bluetooth.
It is important that people are aware of these risks. I am fortunate to live in a place where I can live my life without scrutiny from the government, but not all are afforded such a luxury.
Even if they do that, they can only track the people who self report as contaminated for the period during which the self reporting applies (i.e. n days before testing positive). Not before, not after.
But I just can't think of a system that achieves contact tracing without anyone having any idea of the whereabouts of a person who self-declares as contaminated. At some point the person who self-declares has to volunteer to disclose some information.
I think it's important to give power to the people by allowing them to omit tokens from sensitive time points. In the current protocol, that means losing a whole day's worth of contacts. If you reduce the period to an hour, you still allow people to share the contacts made on their commute or their lunch break without divulging, or being traced back to, the more sensitive time periods they don't want exposed.
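A sketch of what such an hourly variant could look like. This is entirely illustrative: no deployed protocol derives keys this way, and the labels and key sizes are made up.

```python
import hmac
import hashlib

def hourly_keys(device_secret: bytes, day: int) -> list:
    """One independent key per hour; broadcast IDs derived from a given
    key are only linkable within that hour, not across the whole day."""
    return [
        hmac.new(device_secret, b"day-%d-hour-%d" % (day, h), hashlib.sha256).digest()
        for h in range(24)
    ]

keys = hourly_keys(b"\x03" * 32, day=123)

# A positive user could upload only the hours they are willing to disclose,
# e.g. share the commute but withhold a sensitive 13:00-14:00 meeting.
sensitive_hours = {13}
disclosed = [k for h, k in enumerate(keys) if h not in sensitive_hours]
assert len(disclosed) == 23
```

The tradeoff is upload size and server load: 24 keys per day instead of one, in exchange for hour-granularity control over what gets disclosed.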
That’s one side of the ethical question, but what about the other side: what about the people who were in contact during the period the infected person would rather not have their location disclosed?
And it is a bit theoretical, as the authorities who have the capability to track your Bluetooth across the city have many other ways to track you (starting with your phone carrier).
What I object to here with the NHS is the creation of one more tracking database, with the explicit intention of letting some researcher roam through it to find something interesting.
I appreciate you looking at the other side. To explain my viewpoint: in this system it seems like all of the risk is put on the infected party who reports themselves. By decreasing the level of control they have, I believe you will see a decrease in adoption. It is valid to think about the non-infected user wanting this information, but today they don't have it at all, so even knowing they were exposed on their commute is above and beyond what is in place today.
I guess my original comment is a bit vague. When I look at these protocols I am interested in how large-scale adversaries (nation states) would use this technology, but also small-scale adversaries (the day-to-day person you are not friendly with). It's also important to note, as others have, that being outed as having the virus puts people at risk of violence in some places.
Telling people when they've been exposed is not a kindness we might extend from the goodness of our hearts when convenient. It's something we must get right, every time, or the conditions that require lockdown today persist until vaccination. We cannot afford people out and about making untraceable contacts three weeks from now, any more than we could three weeks ago.
Every person is a danger to society until this is over. Release is out of the question. The choices here are continued incarceration, or parole.
I can sort of imagine a libertarian solution here, with truth in labeling: as long as I can tell before I get within six feet of you whether you share a connected component with any conscientious objectors, then I can make my own decision about risk. But I cannot imagine that many public places would permit entry to such people.
> I don't think there is any social stigma to catching the coronavirus.
Maybe not. But there is definitely a social stigma about certain activities, like meeting your drug dealer or cheating on your spouse, which would be revealed through automated contact tracing.
And to the authorities, who then promise not to violate that promise.
Except they refuse to be limited by cryptographic means to that.
Why? Because they demand the ability to change their promise in the future ... exact details to be specified. Perhaps “solving drugs” with contact tracing. At which point they have your data, you have zero control, and “your honour we can prove he lied: he was close to that drug dealer 5 times. Further details (such as that this happened in the train station and 5000 other people were also close) cannot be confirmed because that would violate privacy”.
This is the government that got caught letting police officers stalk their ex for 2+ years and then initially arrested the victim for more than 2 weeks when caught. Let’s not pretend they’re above doing this, especially since it’s become increasingly clear this contact tracing is the police’s wet dream.
This is almost never actually the case. Your daily keys are random, so the only way to know it's you is for someone to monitor nearby Bluetooth devices and associate those keys with a physical identity, which becomes very difficult unless there's only one other person you come into contact with. In practice, it provides about the best anonymity you could ask for.
Valid point. DP-3T (in one configuration) addresses this and lets you filter out certain parts that you do not wish to disclose. This then requires you to upload all broadcast identifiers used in the relevant timeframe, but because of space constraints they are "compressed" using a Cuckoo filter. This, however, yields false positives. To reduce those to a manageable amount, it in turn requires more space.
So, this has a tradeoff.
Personally I don't think that linking multiple IDs within a day is a big intrusion on your privacy (and remember, it's only disclosed for the timeframe that is epidemiologically relevant). Full de-anonymization still requires some second channel, such as cameras or the like, which can be linked together without those broadcast IDs anyway.
The thing with Bloom/Cuckoo filters is that you can play around with the parameters and, for example, provide a set of filters for a day in such a way that app users can do a binary search.
They never produce a false negative, so all positives can download their set of filters until they're satisfied.
The filter that DP-3T describes isn't that much bigger than the DTK set anyway.
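The false-positive/size tradeoff being discussed is easy to see with a toy example. DP-3T's unlinkable design specifies a Cuckoo filter, but a minimal Bloom filter (sketched below, not any production implementation) exhibits the same two properties: members are always found, and non-members are occasionally reported present.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: no false negatives, tunable false-positive rate."""

    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0  # bit array packed into one big integer

    def _positions(self, item: bytes):
        # k independent hash positions, derived by prefixing a counter byte.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add(b"observed-token")
assert b"observed-token" in bf   # guaranteed: no false negatives
# Raising m_bits or k_hashes lowers the false-positive rate at the cost
# of a bigger filter: the size tradeoff discussed in the comments above.
```

A Cuckoo filter additionally supports deletion and typically better space efficiency at low false-positive rates, which is presumably why DP-3T chose it.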
Unlinked DP-3T is one extreme; there is a happy medium if developers don't want to use Cuckoo or Bloom filters due to false positives, which is to decrease the linkable period. If the period were an hour, people could freely share legitimate tokens from their commute but hide the ones from an hour-long 1-on-1 with their manager.
It's hard to test positive given the extremely limited supply of tests and all the effort you'd have to go through to get tested. I really can't imagine anybody willingly getting tested (even less so with new invasive tracking) unless they are really ill and in need of medical attention.
The issue is, if you're putting the risk on infected users, what is the benefit to them of releasing their tokens? They are already at risk; this just makes them bigger targets.
The recorded broadcast IDs can be linked together, but only if they are somehow gathered from all of the devices that have been near you. As far as I can tell, the spec doesn't include recorded broadcast IDs ever leaving the device.
There are some differences. DP-3T proposes two different systems, one with linkable tokens and the other without linkable tokens. The first system is similar to Apple-Google in the sense that your tokens for a day are derived from a key which is uploaded to a central distribution server when you test positive. In the second system the tokens are not linkable and they propose the use of a Cuckoo Filter to reduce the space complexity. A Cuckoo Filter is a probabilistic data structure that can tell you if an item is not or might be in a set. As a result there are some false positives.
DP-3T also explains how records are uploaded to a central server and the interactions with health-care providers. Apple-Google omit this part and focus on proximity data collection.
I'd like to mention the TCN Protocol here (https://github.com/TCNCoalition/TCN), another very similar specification. I bring it up because the readme goes into quite a bit of (easily understandable!) detail regarding the trust assumptions of such a protocol and associated rationale.
Ultimately I think Apple and Google are right to omit record upload and authentication concerns from the base protocol. The low level implementation should be as interoperable and generalized as possible in order to facilitate immediate uptake and maximum reusability. Higher level concerns such as who to trust and how to interact with users can be handled by the various app implementations.
It seems unlikely that anyone will deploy a version of DP-3T that differs significantly from the approach built into Android and iOS, due to the need for apps to obtain special permissions to run in the background. So the alternative variants that go under that brand are probably a dead letter.
"Those privacy principles are not going to change," said Gary Davis, Apple's global director of privacy. "They are fundamental privacy principles that are needed to make this work."
Can an app not simply ask the user for, and subsequently be granted, the necessary permissions? At least on Android I had understood it to work that way in theory, although in practice perhaps it doesn't always behave ideally (https://support.google.com/pixelphone/thread/6068458?hl=en).
Edit: I see now that it's specifically iOS that doesn't provide for granting the required permissions. I find such lack of control over a device that one supposedly owns highly concerning at best.
Guess: many people would tap OK without thinking about it (they don't understand or care what background means), then would be unhappy that their battery drains.
It seems to me that restricting freedoms to combat ignorance is unlikely to have a desirable outcome. To your specific example, I suspect that bluntly warning that granting the permission has the potential to lead to significant loss of battery life would get even the most technically illiterate user's attention.
More generally, how are background streaming services supposed to work on the iPhone? Does Apple have to individually approve every app that wishes to do so (e.g. Spotify, Pandora, ...)?
No, they would just click anything that makes the dialog standing between them and their goal of installing the app go away, without reading the text, and then be unhappy that their battery drains. Any design that relies on a confirmation dialog is fundamentally broken. Even technically competent users will read most confirmation dialogs as "Let me do what I want [Abort] [OK]" no matter what you actually write there.
In many situations we may not have better solutions, but that doesn't change the fact that this is terrible.
> Any design that relies on a confirmation dialog is fundamentally broken
I'm having trouble interpreting this in any way other than a claim that granting users control over their devices is a fundamentally broken idea. I won't dispute that users often choose to do dumb things in practice, but it seems the two of us have a fundamental disagreement in our underlying worldviews.
> Even technically competent users ... no matter what you actually write
I'd argue that such users aren't actually technically competent then, despite the high opinion they might have of themselves. On the other hand, perhaps the users are technically competent and it's actually the relevant software developers that have done a poor job of communicating? If an actual technically competent user is experiencing significant difficulties using a program, then perhaps the program doesn't work as well as the developers thought it did.
The issue is that because of some "bad" users you restrict all users. When I design a prompt dialog that gates a dangerous operation, I make the user type something; you could even have the user type a varying phrase to confirm they actually read the prompt text. So there are technical solutions. IMO the justification that Apple is taking your freedom to protect a subgroup of users is not the reality; the reality is that the restrictions make Apple more money. If lifting the restrictions would make them more money, you would see a lot of praise for how smart the tech behind Apple's dialog prompts is in allowing you to lift restrictions.
Could this be a feature rather than a bug here? Help maintain tracing to a very high level statistically while still giving plausible deniability for personal privacy?
This is definitely by design. The Cuckoo Filter relies on hashes of the input so there is a chance of collisions. My understanding is a Cuckoo Filter is a recent extension of a Bloom Filter, if you're familiar with those.
Would you be okay with me reposting your paper on my site? I'm working on a piece about the regulatory implications of contact tracing apps in the U.S.; you've done a better job outlining the pros and cons of the various approaches than I could.
Feel free to hit me up henriquez AT protonmail if you'd like to discuss!
The data on comorbidities seems to match data from Italy, where hypertension, type-2 diabetes, and heart disease were the top three in that study [1].
One thing I do wonder is whether the mortality rate is higher because they counted the number of patients requiring intubation, not those who received it [2].