inkysigma's comments | Hacker News

Well, according to that same letter, it did contain a request not to notify. I suppose that brings up several questions.

1. Does that mean the same thing in the ToS?

2. How valid are these requests?


Google acknowledges that they should have given notice per their own policy and that they violated it. In this case, they said that they violated it because they had failed to respond to the subpoena within ICE's 10-day deadline:

> On November 20, 2025, Google, through outside counsel, explained to the undersigned why Google did not give Thomas-Johnson advanced notice as promised. Google’s explanation shows the problem is systematic: Sometimes when Google does not fulfill a subpoena by the government’s artificial deadline, Google fulfills the subpoena and provides notice to a user on the same day to minimize delay for an overdue production. Google calls this “simultaneous notice.” But this kind of simultaneous notice strips users of their ability to challenge the validity of the subpoena before it is fulfilled.


At what point does Google’s incompetence imply organizations that use its services are liable for negligence?

What if this were a bogus subpoena for a lawyer’s privileged conversations with a client? A doctor’s communications about reproductive health with a patient? A political consultant working for the Democrats?


A gag order would be from a judge. There would be severe penalties if a party breaks a gag order. A request not to notify is just a request; it has zero legal standing and there would be zero repercussions to ignoring it.

I'm very curious about this.

Google knows users care about their privacy, and it made the promise in its terms precisely for that reason. People pay attention to this stuff, as the popularity of this story shows.

Therefore, it's generally not going to be in Google's interest to break its own terms.

So what's going on? Did a Google employee simply mess up? Is the reporting not accurate or missing key details, e.g. Google truly is legally prohibited? Or is there some evidence that the Trump administration was putting pressure on Google, e.g. threatening to withhold some contract if this particular person were notified, or if Google continued notifying users belonging to some particular category of subpoenas?

Because Google isn't breaking its own terms just for funsies. There's more to this story, but unfortunately it's not clear what.


> Therefore, it's generally not going to be in Google's interest to break its own terms.

It is also not in Google’s interest to resist this administration. I would not be surprised if they decided to kiss the ring and, by internal policy, be more cooperative than the law strictly requires.

I guess we’ll get a better idea if more cases show up.


Previous administrations weren't easier to resist. Look up Joseph Nacchio's story. Short version: refuse to install https://en.wikipedia.org/wiki/MAINWAY without a warrant, go to jail.

>Google knows users care about their privacy, and it made the promise in its terms precisely for that reason. People pay attention to this stuff, as the popularity of this story shows.

Do Google users care about their privacy? I'd expect not, given that Google is (and hasn't been shy about telling us about it) reading all their emails in order to provide more targeted advertisements.

And, as I mentioned, Google hasn't been shy about saying that's exactly what they do (prioritizing their ad revenue over their users' privacy), so I have to assume that Google users don't care about their privacy.

If they did care about their privacy, they'd self-host their email on hardware they physically control.

That's orthogonal to Google giving up data to the government, with or without notifying the user(s) in question, except that the above makes clear what we already know: Google doesn't respect the privacy of their users.


> given that Google is (and hasn't been shy about telling us about it) reading all their emails in order to provide more targeted advertisements.

That hasn't been the case since 2017. Nearly a decade ago. They stopped precisely because Google users do care about privacy -- and tracking is one thing, but scanning the content of your e-mails is another.



Please don't be rude.

And what you're linking to is NOT what you described, "in order to provide more targeted advertisements".

Your links are describing Gemini integration. If you ask Gemini a question about your e-mails, obviously it needs to look at them. If Google is suggesting a smart reply, obviously it needs to process your e-mail to do so. But these are features designed to benefit the user.

You were talking about targeted advertising. Your links have nothing to do with that.


>Please don't be rude.

[0]: "Google publicly announced in 2017 it would stop using Gmail content for ad targeting but continued to scan emails for spam, malware, and other non-ad functionality, which leaves room for ambiguity about downstream uses of metadata or other signals"

Who cares why Google is reading your emails? Not me.

Oh, it's just for non-ad functionality? In that case, go right ahead!

Ugh!

[0] https://factually.co/fact-checks/technology/email-scanning-f...


You're upset Gmail blocks spam and malware?

>Please don't be rude.

It's quite possible that Google is more afraid of what will happen if they resist ICE than they are of bad publicity like this.

It's not just bad publicity. They may be sued.

But yeah, no matter what they lose in court, it's inconsequential compared to angering this federal administration even a little bit.


> Google knows users care about their privacy, and it made the promise in its terms precisely for that reason. People pay attention to this stuff, as the popularity of this story shows.

Does it know? And do users really care? Popularity on HN isn't popularity everywhere.

I'd wager most people don't care enough to move away from Gmail.

But even if they did, unfortunately this isn't the only variable a business is solving for. Corporations will generally just pick the less unprofitable of two evils, not the lesser one.


Depends on how legitimate you consider an administrative warrant and how willing you think companies should be to comply with one.

On a more practical level, forcing them to go to court might not be much better. If this went to a FISA court, those are essentially rubber stamps with nearly 100% approval rates.


His anti-censorship stance isn't necessarily borne out by the data:

https://www.washingtonpost.com/technology/2024/09/25/elon-mu...


X under Musk has complied with more government takedown requests.

https://www.washingtonpost.com/technology/2024/09/25/elon-mu...


I think in this context, scaffolds are generally the harness that surrounds the actual model. For example, any tools, ways to lay out tasks, or auto-critiquing methods.

I think there's quite a bit of variance in model performance depending on the scaffold so comparisons are always a bit murky.
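For concreteness, here's a toy sketch of what a scaffold can look like (everything here is made up for illustration; `call_model` stands in for whatever LLM API you use and is stubbed so the snippet runs on its own): a loop that routes the model's tool requests to real functions and runs an auto-critique pass before accepting an answer.

```python
# Minimal sketch of a "scaffold": the harness around a model that
# supplies tools and an auto-critique step. `call_model` is a stub
# standing in for a real LLM API call.

def call_model(prompt: str) -> str:
    # Stub: a real scaffold would call an LLM here.
    if "CRITIQUE" in prompt:
        return "OK"
    return "TOOL:add 2 3"

TOOLS = {"add": lambda a, b: str(int(a) + int(b))}

def run_scaffold(task: str, max_steps: int = 3) -> str:
    context = task
    answer = ""
    for _ in range(max_steps):
        reply = call_model(context)
        if reply.startswith("TOOL:"):
            # The model asked for a tool; run it and feed the result back.
            name, *args = reply[len("TOOL:"):].split()
            answer = TOOLS[name](*args)
            context += f"\n[tool {name} -> {answer}]"
        else:
            answer = reply
        # Auto-critique: ask the model to check its own answer.
        if call_model(f"CRITIQUE: is '{answer}' a good answer to '{task}'?") == "OK":
            return answer
    return answer

print(run_scaffold("What is 2 + 3?"))  # -> 5
```

The point of comparing scaffolds is that two benchmarks can wire up tool access and critique loops like this very differently, so the same model scores differently.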


Usually involves a lot of agents and their custom contexts or system prompts.


Except none of these bills (California or the one in question) as currently written require an ID to actually be verified, merely that the user provide an age. This seems intentional, as it solves the user journey where a parent can set a reasonable default simply by setting an associated account age at account creation. It's effectively just standardizing parental controls.

I think this is a reasonable balance without being invasive as there's now a defined path to do reasonable parenting without being a sysadmin and operators cannot claim ignorance because the user input a random birthday. The information leaked is also fairly minimal so even assuming ads are using that as signal, it doesn't add too many bits to tracking compared to everything else. I think the California bill needs a bit of work to clarify what exactly this applies to (e.g. exclude servers) but I also think this is a reasonable framework to satisfy this debate.

I've seen the argument that this could lead to actual age verification but I think that's a line that's clearly definable and could be fought separately.


Kids aren't stupid. They'll just create another account when they're old enough to figure it out. They'll tell their friends how to do it and the rest of us will be stuck with these stupid prompts forever like it's a cookie banner.


Actually, given boot-chain protection, this will probably get harder as time goes on. But even assuming some kids manage it, this is clearly definable as user error: the fault lies with the kid, and as a parent you need to think about your threat model.

Right now, it's not even clear how to create parental controls at a reasonable level so there's no clear path for what to do or how to respond.


Maybe we can agree that if you're mature enough to hack your own phone, you're mature enough to see a nipple. Why am I rate limited though? Dang must hate this opinion.


It’s because you are a sockpuppet.


I don't think "real" age verification with IDs is immune to this either (kids paying an adult to get an ID, fooling an AI classifier, whatever).

Basically unsolvable, so why worry about that edge case? Kids will always get through to some adult content somewhere. A token system will make parents feel better in the meantime.


It gives the parents the tools to age restrict things, but does not require parents to use them or use them well.


From a parent's perspective, that's the great part about bubbling it up to the OS user account level.

It's trivially easy to see if the user (child) has indeed created multiple OS-level user accounts with different permission levels if you want to spot-check the computer.

You'll see it on first startup and then you can have "a chat". With Guest account access disabled, spawning a new account on a computer takes 2-3 minutes, will send emails and dashboard notices to the parent.

It's very nearly impossible to verify that the child is not just going to Facebook etc. and using separate accounts and logging out religiously.

That said, I wish Apple/Microsoft/Google had more aggressively advertised their parental-control features for Mac/Windows/ChromeOS as a key differentiator, to keep Ubuntu/open-source distros from having to implement them.


> You'll see it on first startup and then you can have "a chat". With Guest account access disabled, spawning a new account on a computer takes 2-3 minutes, will send emails and dashboard notices to the parent.

On what OS? Microslop Windows? On my computer no one is notified when an account is created. And the account list isn't visible when I log in. I log in to the TTY.

Now, granted, I am not the norm. But my OS falls under these regulations. So what is my OS vendor supposed to do? For that matter, who is the vendor? What if I were using LFS? Who even would be the vendor for LFS? It's not even a distro!


Yes, it probably doesn't show up because you were able to mindlessly click through the part where you were asked whether this was being provisioned as a child's computer.

When you provision a Windows, Mac or Chromebook these days as a child's device using your parental account, it will require a parental account to enable new user accounts and/or re-enable guest user on the device.

Like I said, my preference would have been for Microsoft, Apple, Google, Meta and TikTok to have made an industry effort to educate parents about the existence of such tools prior to any legislation; then we could have avoided Linux etc. getting sucked in.


It's pointless. Kids who want an uncensored internet will use a VPN or proxy, the same way they've been getting around the restrictions and filters put on the computers and networks at schools. These laws will do nothing to protect children but will instead enable them to be targeted.


I don't think it's quite so easy anymore, as far as I can tell; with parental tools today, on a properly provisioned device you can require parental permission for app installs such as VPNs, etc.


So you're advocating for stronger and more invasive controls?...

I think this is a sensible compromise. It gives parents more control than before without relying on shady third-party software or without turning every platform into a cop. Yeah, it also aligns with Meta's interests, but so what?

The age attestation solutions pursued by the EU are far more invasive in this respect, even though they notionally protect identity. They mean that the "default" internet experience is going to be nerfed until you can present a cryptographic proof that you're worthy.


> I think this is a sensible compromise. It gives parents more control than before without relying on shady third-party software or without turning every platform into a cop.

It doesn't give parents any control whatsoever. It just forces the OS to tell every website your child goes to how old they are. It doesn't require those websites to hide certain content for certain age groups. It doesn't define what types of content are appropriate for which age groups; it just makes sure that every advertiser bidding on your child's eyes knows what age range they fall into.

If anything this takes control away from parents because even the cases where a website does their best to restrict content based on which age the OS tells them your kid is, it's the website setting the rules and not the parents. You might think that your 16 year old can read an article about STDs, but if the website your kid visits doesn't think so you as the parent don't get any choice.

With 3rd party software parents are controlling what software is used, they have the ability to decide which kinds of content are appropriate for their children and can be allowed and which types of content should be blocked. They can black/whitelist as they see fit. All of the power is in the parent's hands. This law gives parents one choice only: "Do I honestly tell my OS how old my child is". That's the end of the parent's involvement and the end of their power.


I mean, on a UNIX OS you could make it yet another group the user needs to be part of, like the group for access to optical media or for changing network credentials. Whether the child gets root access is on the parent, but that is like with anything else. A child can get around this, but it means finding and exploiting a 0-day on the OS. If they are able to pull this off, I would congratulate them.
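As a rough illustration of the group idea (the group name "adult-content" is hypothetical, and this only sketches the check an application might perform, not an enforcement mechanism):

```python
# Sketch: an application gates age-restricted content on membership in
# a designated UNIX group. "adult-content" is a made-up group name.
import grp
import os
import pwd

def user_in_group(group_name: str) -> bool:
    try:
        gid = grp.getgrnam(group_name).gr_gid
    except KeyError:
        return False  # group does not exist on this system
    user = pwd.getpwuid(os.getuid()).pw_name
    # os.getgrouplist returns all gids the user belongs to,
    # including supplementary groups.
    return gid in os.getgrouplist(user, os.getgid())

if user_in_group("adult-content"):
    print("access granted")
else:
    print("access restricted")
```

An admin (parent) would then control access with the usual tools, e.g. `usermod -aG adult-content alice`, and root obviously bypasses everything, which is the parent's job to guard.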


There is a huge attack surface for this. For example, a kid manages to buy an old phone, resets it, and creates an account. A kid buys something like a Pi 3 and manages to turn a regular phone into an access point. Etc. If a laptop is not completely locked down, a kid might boot a live USB stick.


Barriers like that for accessing 18+ sites would be so much better than nothing.

And cheat devices can be taken away as soon as the parent notices them.


The problem is that these laws tend to escalate. Once a government starts regulating, it doesn't stop.

It is also the wrong model. Instead of creating child-safe devices, just like there is a difference between toys and power tools, this regulation pretends that all devices are child safe and parents have to figure out which ones really aren't.


Well, basically nobody is making child-safe computers for ages over 7. Sitting around hoping that changes isn't useful.

So trying to force a very very basic child safe mode makes sense.

And I don't think this regulation pretends all devices are child safe.


I don’t care if it’s part of the user setup, but make it an App Store dotfile. Don’t issue fines to Debian for offering a Docker image without a user setup script.


Yeah, let's just boil the frog here. Makes sense.


Except how is this done on GNU/Linux or FreeBSD or Haiku? Who's going to implement it, who's going to ensure it can't be bypassed and who's going to be responsible if it's not?


I agree. There is a real drive to catastrophize here but so far, none of the bills actually take any steps to prevent users from lying about their age.


I think this is the third time this has effectively been posted; see:

https://news.ycombinator.com/item?id=47362528

https://news.ycombinator.com/item?id=47365597


Some of these are also just really weak. One of them, for example, seems to be some random employee at FB donating ~$1k to a politician, and that gets called a link. The entire "Proven Findings" section is all over the place and lacks coherence. I don't think it's any particular secret that Meta would prefer age verification be done at the OS level, so I'm not really sure what the added claim here is.

> A Meta employee (Jake Levine, Product Manager) contributed $1,175 to ASAA sponsor Matt Ball's campaign apparatus on June 2, 2025. Source: Colorado TRACER bulk data.

> No direct Meta PAC contributions to any ASAA sponsor across Utah, Louisiana, Texas, or Colorado. Source: FollowTheMoney.org multi-state search.

While it is true that Meta has funded groups that advocate for age verification, a lot of those groups also appear to have other backers, so it's not like this is some purely Meta thing, as some of the other commenters are suggesting.


Depending on the implementation, I could see that having rate-limiting effects. There are only finitely many IDs, so scaled sockpuppeting would quickly saturate them, whereas today it's quite easy to spin up a new anonymous account. For example, I think the EU ID system has an upcoming way to create pseudo-anonymous identifiers that identify a user per website.

This presents the problem of governments being able to gatekeep speech which I am quite uncomfortable with but maybe there's some safeguard within the eIDAS proposal that makes this idea incorrect?
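To make the per-website identifier idea concrete, here's a toy sketch (this uses an HMAC-based derivation purely for illustration; it is not the actual eIDAS mechanism): one secret held by the ID wallet yields a stable pseudonym per website, but the pseudonyms can't be linked across websites without the secret.

```python
# Sketch: derive a stable, site-specific pseudonym from a per-user
# secret. Illustrative only -- not the real eIDAS design.
import hashlib
import hmac

def site_pseudonym(wallet_secret: bytes, site: str) -> str:
    # HMAC keyed by the user's secret, over the site name:
    # deterministic per (user, site), unlinkable across sites.
    return hmac.new(wallet_secret, site.encode(), hashlib.sha256).hexdigest()

secret = b"per-user secret held by the ID wallet"
a = site_pseudonym(secret, "example.com")
b = site_pseudonym(secret, "example.org")
assert a != b                                      # different per site
assert a == site_pseudonym(secret, "example.com")  # stable per site
```

The rate-limiting property follows: a site sees one stable pseudonym per real person, so burning through sockpuppets requires burning through real IDs.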


The internet is for the free exchange of ideas! Why would we want to limit it because some random gov somewhere is writing comments? Allow your citizens to think!


Just an FYI, but I don't know if putting it in the website field on GitHub really helps, since there's a rel=nofollow on the link.

