Android devices already know roughly where they are even with GPS disabled, because they scan the nearby Wi-Fi networks and then ask Google where they are. So Google already knows, and combined with everything else, that is mass metadata surveillance, already available to anyone who can tap into it.
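The lookup itself is a tiny request. A hedged sketch in Python of what the request body looks like, following the field names of Google's public Geolocation API (the BSSIDs and signal strengths here are made up):

```python
import json

def wifi_location_payload(scan_results):
    """Build a Wi-Fi geolocation request body in the shape used by
    Google's Geolocation API (wifiAccessPoints with BSSID + RSSI)."""
    return {
        "considerIp": False,  # locate purely from the Wi-Fi scan, not the IP
        "wifiAccessPoints": [
            {"macAddress": bssid, "signalStrength": rssi}
            for bssid, rssi in scan_results
        ],
    }

# Two access points seen by the device; the server matches their BSSIDs
# against its crowd-sourced AP database and returns a lat/lng plus accuracy.
payload = wifi_location_payload([("00:11:22:33:44:55", -48),
                                 ("66:77:88:99:aa:bb", -71)])
print(json.dumps(payload))
```

Two or three visible access points are typically enough for a fix within tens of meters, which is why this works indoors where GPS does not.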
Any sub-meter precision or presence detection hardly matters when these companies already have all your other questions, queries, messages, calendars, browsing history, app usage, and streaming behaviour as well.
First, this is not just Android. Apple does the same thing: you can buy an iPad that physically has no GPS hardware, and it can still tell you reasonably well where you are. I personally first learned of this feature when I bought a second-generation iPad, so it has been around for a while.
Second, it is a logical leap to assume Google already knows everything. They could, for example, build this nearby-Wi-Fi location-query API with privacy in mind: purposefully making it anonymous, not associating it with your account, routing it through relays (such as Oblivious HTTP), or using private set intersection techniques instead. It is tired and lazy to argue that just because some Big Tech company has the capability of doing something bad, it must already be doing it.
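As a sketch of what "privacy in mind" could look like, here is a toy k-anonymity lookup in the style of Safe Browsing: the client sends only a short hash prefix of the BSSID and does the final match locally, so the server never learns which access point was actually queried. All names, BSSIDs, and coordinates below are made up, and a real deployment would add relays such as Oblivious HTTP on top:

```python
import hashlib

def h(bssid: str) -> bytes:
    return hashlib.sha256(bssid.encode()).digest()

# Server side: an AP database bucketed by a short hash prefix. Many
# unrelated BSSIDs share each prefix, so one prefix reveals little.
DB = {
    "00:11:22:33:44:55": (52.37, 4.90),
    "66:77:88:99:aa:bb": (48.86, 2.35),
}
BUCKETS = {}
for bssid, loc in DB.items():
    BUCKETS.setdefault(h(bssid)[:2], []).append((h(bssid), loc))

def private_lookup(bssid: str):
    """Client sends only a 2-byte hash prefix; the match against the full
    hash happens locally, so the server never learns which AP it was."""
    candidates = BUCKETS.get(h(bssid)[:2], [])  # the only data exchanged
    full = h(bssid)
    for digest, loc in candidates:
        if digest == full:
            return loc
    return None
```

The trade-off is bandwidth (the client downloads a whole bucket per query) in exchange for the server only ever seeing a prefix shared by many networks.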
> Second, it is a logical leap to assume Google knows everything already. They could for example build this nearby Wi-Fi based location querying API with privacy in mind
In which world are you living?
> It is tired and lazy to argue that just because some Big Tech has the capability of doing something bad therefore they must already be doing it.
It has the capability of doing something bad, and it has a history of doing it. Better not forget the last part.
As a former Google employee, I witnessed first-hand how powerful the internal privacy working groups were and how much they were able to push back against product teams: they not only demanded more privacy in new features but also invented many of the privacy techniques that made those features possible. It’s frankly not even hard to find Google authors on RFCs that meaningfully contribute to Internet privacy.
No matter what you think, no stranger on the internet can convince me to ignore my own lived experience.
Have you heard of Meta opening a port in the Facebook and Instagram apps and sending tracking data to them from the browser [1]? Would you say that meaningfully contributes to Internet privacy?
Or Google being fined for abuse of their dominant position [2]?
Do you think that when you make an LLM request, it uses fully homomorphic encryption so as not to disclose your information?
I guess I don't even have to give an example for Amazon, do I?
> It’s frankly not even hard to find Google authors of RFCs that meaningfully contribute to Internet privacy.
Well I didn't want to say that Google employees are malevolent, or that Google doesn't create good technology. But Big Tech (including Google) clearly regularly abuse their dominant position.
Back to "Google knows everything". Would you say that Google Search is built in a way that the Google servers cannot associate an IP to its search requests? Would you say that Gmail is built in a way that the Google servers don't have access to all the emails? What about calendar, documents, drive? Google maps requests? How does Google offer information about what's happening "nearby" without knowing the location of the device?
And now back to the location in particular: what about the "Find Hub" / "Find my device"? You can go on the website and ask Google where your device is, after you click a popup that says you allow Google to retrieve this information. Doesn't this obviously show that Google has access to it? They could access it without asking the permission, couldn't they?
So to the question: "do we need to develop a new WiFi technology and deploy it in order to have the technical capability of mass surveillance?", the answer is "no, we have the technical capability already". And by "we", we mean "Big Tech".
> Or Google being fined for abuse of their dominant position
Also irrelevant. You are talking about antitrust. I’m talking about privacy.
> Do you think that when you make an LLM request, it uses full homomorphic encryption not to disclose your information?
Irrelevant. I never said anything about LLMs. No one has made LLMs work with homomorphic encryption, but at least I’m glad you know this technique and it is very close to being used in other Google technologies.
> Would you say that Google Search is built in a way that the Google servers cannot associate an IP to its search requests?
I never said anything about Google Search either.
Look, at this point I just stopped reading your comment because it’s a mumbo-jumbo of irrelevant facts.
> It is tired and lazy to argue that just because some Big Tech has the capability of doing something bad therefore they must already be doing it.
Irrelevant. The comment never said it was "doing something bad", they just said "they know already". Which is most likely to be true. When the user sends a list of nearby WiFis to Google and Google responds with a location, then Google knows where the user is.
The approach described in the article is much different and more interesting, as it's passive and doesn't require any electronics on the individual being identified.
Layer one also explicitly identifies stakeholders and describes the current (as-is) situation with annotated screenshots, so everyone quickly sees what we are talking about.
Layer two also lists the alternative solutions considered and why they were not chosen.
Layer three is the developers making a few notes on the chosen tech design; the reasoning behind the choice is the most important part here.
In all layers, add and use references. Less is more, a picture beats a wall of text.
Yes, I agree with all of that. I don't think every design doc needs every possible section, but if readers would benefit from the background or an enumeration of stakeholders, that belongs up front, before the design.
I use it to design functional parts for 3D printing at home; it is very solid 3D design software that works on any internet-connected device. I can easily open, view, and edit parameters quickly, even on a phone or tablet, to address a design flaw.
Workflow in a nutshell:
- Start a sketch on a plane, use the toolbar rectangle tool, draw it, type the dimensions, close the sketch.
- Click the rectangle you made and use toolbar extrude.
- Click the resulting object (bottom left) and export as STL / 3MF file.
It is parametric design:
- Discover your oops. Go back to the sketch or the extrude, edit the dimension, and the entire design updates as if by magic.
- Any dimension can be turned into a quickly editable variable (#nameYouChoose) basically by typing #.
Documentation is short and to the point. Same for most videos that explain how to accomplish something specific. Love it.
I have long dreamt of automating Factorio the way an HDL and a PCB router work: just specify the ingredients, and it produces a Factorio blueprint.
First an MVP with stupid designs, then optimized routing, and eventually something usable in-game that connects to the provided inputs/outputs.
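The output side is the easy part: a Factorio blueprint string is the version character "0" followed by base64-encoded, zlib-deflated JSON. A minimal sketch (real blueprint strings also carry icons and a version number, omitted here):

```python
import base64
import json
import zlib

def to_blueprint_string(entities):
    """Encode a minimal Factorio blueprint: version char '0' followed by
    base64(zlib(JSON)), the format the in-game import dialog expects."""
    bp = {"blueprint": {"item": "blueprint", "entities": entities}}
    raw = json.dumps(bp).encode()
    return "0" + base64.b64encode(zlib.compress(raw, 9)).decode()

def from_blueprint_string(s):
    """Decode back to JSON (strip the leading version character first)."""
    return json.loads(zlib.decompress(base64.b64decode(s[1:])))

# A generated MVP layout would emit entity dicts like this one:
entities = [{"entity_number": 1, "name": "assembling-machine-1",
             "position": {"x": 0.5, "y": 0.5}}]
bp_string = to_blueprint_string(entities)
```

So the hard part is entirely the placement and routing that fills in the entity list, not the serialization.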
Would be more fun to develop than to play obviously..
I liked the Nilaus mega base with the factory-train-block blueprints; it's basically Factorio DUPLO.
I think this could be a good starting point for what you describe! This stuff is always more fun to develop than to play. Since I started working on this project, I can't bring myself to play the core game myself...
I would simply argue that "offline" editing is a people problem and hence cannot be solved using automation. People will find a way to break or bypass the automation/system.
The only "offline editing" that I allow on human text documents is having people add comments. So no editing, no automated merging.
The "offline editing" that I allow on automation (source code) is Git, which intentionally does not pretend to solve the merge; it just shows revisions. The merge is an action supervised by humans or by specialised automation on a best-guess basis, and it still needs review and testing to verify success.
Yes, I agree. But remember: git will automatically merge concurrent changes in most cases, since most concurrent changes aren’t in conflict. You’re arguing you want to see and review the merged output anyway, which I agree with.
Ideally, I want to be able to replace git with something that is built on CRDTs. When branches have no conflicts, CRDTs already work fine - since you merge, run tests, and push when you’re happy.
But right now CRDTs are too optimistic: if two branches edit the same line, they do a best-effort merge and move on. Instead, in this case the algorithm should explicitly mark the region as "in conflict" and needing human review, just like git does. Then humans can manually review the marked ranges (or, as others have suggested, ask an LLM to do so). And once they’re happy with the result, they clear the conflicting range markers and push.
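A minimal sketch of that behaviour: a per-line three-way merge that, instead of silently picking a winner, reports the positions both sides changed. The function and its simplifications are mine, not from any CRDT library:

```python
def merge3(base, ours, theirs):
    """Per-line three-way merge over equal-length line lists (a deliberate
    simplification; real tools diff first). Lines that both sides changed
    in different ways are kept as a pair and reported as conflicts rather
    than being silently last-writer-wins merged."""
    merged, conflicts = [], []
    for i, (b, o, t) in enumerate(zip(base, ours, theirs)):
        if o == t:        # both agree (or neither touched it)
            merged.append(o)
        elif o == b:      # only theirs changed it
            merged.append(t)
        elif t == b:      # only ours changed it
            merged.append(o)
        else:             # concurrent, divergent edits: flag for review
            merged.append((o, t))
            conflicts.append(i)
    return merged, conflicts
```

For example, merge3(["a", "b"], ["a", "B"], ["A", "b"]) merges cleanly to ["A", "B"] with no conflicts, while two different edits to the same line come back in the conflicts list for human review.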
The tricky thing about “most” is that it means more than half, but people tend to treat it like almost all.
I would agree that git works more than half the time.
Merge resolution is a problem so hard that even otherwise capable developers fuck it up regularly. And on a large team made up of people who intermittently fuck up, we start getting an aggregate failure rate that feels like a lot.
The whole idea with CRDTs was to make something humans couldn’t fuck up, but that seems unlikely to happen. There’s some undiscovered Gödel out there who needs to tell us why.
I think it's just a UX problem. Once that is solved, which I believe is definitely possible, both CRDTs and git can be made much more user friendly. I'm not saying it's easy, because it hasn't been solved yet, but I don't think the right people have been working on it. UX is the domain of designers, not engineers.
I think we are sitting at about 75% on one of those problems that will go asymptotic at 90%.
And that’s if you swap the default diff algorithm for one of the optional ones. I’ve used patience for years but someone tweaked it and called it “histogram” when I wasn’t looking.
> Ideally, I want to be able to replace git with something that is built on CRDTs. When branches have no conflicts, CRDTs already work fine - since you merge, run tests, and push when you’re happy.
How is this different from git's automatic merging, or another compatible algorithm?
In the happy case? It's no different. But in the case where two offline edits happen at the same location, git's merging knows to flag the conflict for human review. That's the part that needs to be fixed.
I want a tool with the best parts of both git and a crdt:
- From CRDTs I want support for realtime collaborative editing. This could even include non-code content (like databases). And some developer tools could use the stream of code changes to implement hot-module reloading and live typechecking.
- From git I want... well, everything git does: branches, pull requests, conflict detection when branches merge, issue tracking. (That last one isn't part of git, but it would be easy to add using JSON-like CRDTs!)
We can do almost all of that today. The one part that's seriously missing is merge conflicts.
This is why I started using a DNS ad blocker. It's not that I hate ads; it's simply that I use more than one tab, and my Apple Silicon machine doesn't have enough RAM and processing power to keep rendering the ever-changing ads in all of them.
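For context, the core of a DNS ad blocker is just a suffix match against a blocklist before the query is resolved; here is a toy sketch with made-up domains:

```python
BLOCKLIST = {"ads.example", "tracker.example"}  # made-up blocklist entries

def is_blocked(qname: str) -> bool:
    """A DNS-level blocker refuses to resolve (e.g. answers NXDOMAIN or
    0.0.0.0) any query whose name is on the blocklist or is a subdomain
    of a blocked name; the ads simply never load."""
    labels = qname.rstrip(".").lower().split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))
```

Because the block happens at name resolution, the browser never fetches or renders the ad content at all, which is exactly what saves the RAM and CPU.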
It is a little ironic that these EV sites have some of the most annoying ads, including all those video rolls on every page, wasting an immense amount of power.
One caveat: once in a while, maybe once a year, my wife or I encounter a site that AdGuard blocks, and we need to set our DNS back to "normal" for things to work. But you'll probably have the same issue with Pi-hole, which relies on similar lists of spammers and other undesirables.
PS: no relationship with AdGuard other than happily using their service.
While blogspam sites are a symptom of web publishing's race to the bottom, "actual news" sites are pretty much indistinguishable when it comes to ads and popups.
On a more serious note: populist politicians seem to like making Gettier claims; they cost a lot of time to refute and are free publicity. AKA the worst kind of fake news.
When applying ReactJS in web dev after doing all kinds of engineering in all kinds of (mostly typed) languages on many runtimes, I was surprised that JS does not actually have a struct/record as seen in C/Pascal. Everything is a prototype that pretends it's an object, but without types and pointers, plus abstraction layers that add complexity to gain backwards compatibility.
Not even some object hack like many OO and compiled languages have. ES did not add one either, and my hopes were in WebAssembly.
This proposal, however, seems like the actual thing I'd want to use a lot.
A lot of the code complexity was there to get simple guarantees of data quality. The alternative was not to care, either a feature or a caveat of the prototype model in use.