
I thought this was going to be about a software fix for the appalling inconsistency that is macOS Tahoe window corners. What I found deeply disturbed me, though I must agree, the edges are a bit sharper than I'd like and a slight curvature could probably prevent them showing wear and tear [0]. Good on OP for doing something they like, even if it's really out there, and I could see more "pillowy" hardware becoming a thing now, after a few years of sharp-edged devices.

Since I mentioned Tahoe, it bears repeating: my Spotlight is still broken.

[0] https://ljpuk.net/2025/05/23/how-does-the-space-black-macboo...


I'd be very surprised if this wasn't in preparation for limited Mythos access. Same with ULTRAPLAN, ULTRAREVIEW, etc.

Quoting the original bill [0]:

> "Critical harm" means the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property caused or materially enabled by a frontier model, through either: (1) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (2) engaging in conduct that: (A) acts with no meaningful human intervention; and (B) would, if committed by a human, constitute a criminal offense that requires intent, recklessness, or negligence, or the solicitation or aiding and abetting of such a crime.

I don't know what I expected from this title, but I was hoping it was more sensationalized. No need in this case, unfortunately.

> (a) A developer shall not be held liable for critical harms if the developer did not intentionally or recklessly cause the critical harms and the developer: (1) published a safety and security protocol on its website that satisfies the requirements of Section 15 and adhered to that safety and security protocol prior to the release of the frontier model; (2) published a transparency report on its website at the time of the frontier model's release that satisfies the requirements of Section 20. The requirements of paragraphs (1) and (2) do not apply if the developer does not reasonably foresee any material difference between the frontier model's capabilities or risks of critical harm and a frontier model that was previously evaluated by the developer in a manner substantially similar to this Act.

However one thinks regulation for this should be drafted (or whether it should be at all), I doubt providing a PDF is what most have in mind.

[0] https://trackbill.com/bill/illinois-senate-bill-3444-ai-mode...


I think my favorite part is that, because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all. That makes very little sense unless you specifically want to make it illegal to not be OpenAI (et al).

Similarly, if a frontier model kills merely 99 people, they aren't covered by this. So go big or go home I guess?


> unless you specifically want to make it illegal to not be OpenAI [...]

If that is an "unintended" consequence, I am certain OpenAI wouldn't be opposed. Preventing competition whilst keeping any potentially profit-risking regulations at bay has been a clear throughline in OAI's lobbying efforts.


> because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all

Oof. If you're an Illinois resident, please call your elected representatives and at least ensure they understand this loophole is there. In all likelihood, nobody other than OpenAI's lobbyists has noticed this.


    > "Frontier model" means an artificial intelligence model that:

    > (1) is trained using greater than 10^26 computational operations, such as integer or floating-point operations; or

    > (2) has a compute cost that exceeds $100,000,000

Such a strange regulation: usually large thresholds like this are made so that burdensome regulation only applies to the very big players (if you're spending 100 million on training, you can afford a dedicated team to follow such regulation).

But here it seems to be an anti-competitive move against market entrants who haven't made it into the big leagues yet...

Sounds like the saga of some players pushing for Biden's EO 14110, but this time at the state level?
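
For a sense of how large the 10^26 threshold actually is, here's a rough back-of-envelope sketch. The FLOPs ≈ 6 × parameters × tokens rule of thumb is a common scaling heuristic, and the model size and token count are my own illustrative assumptions, not anything from the bill:

    # Rough check against the bill's "frontier model" compute threshold.
    # Training FLOPs ~ 6 * parameters * tokens is a common heuristic;
    # the model size and token count below are illustrative assumptions.
    THRESHOLD_OPS = 1e26

    def training_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    flops = training_flops(70e9, 15e12)  # ~6.3e24, well below the threshold
    print(f"{flops:.2e} ops -> frontier model: {flops > THRESHOLD_OPS}")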


That doesn't say much, other than that the rules are over in Section 15.

To be protected they not only have to publish their security protocol, but adhere to it.

That's not just 'providing a PDF'.

That particular section is entirely appropriate. A company can't do everything necessary to prevent every bad thing. They should do everything that they reasonably can. Someone else should decide what is reasonable.

The regulators are saying: we've decided what you have to do to be considered to have done all you could to be safe. Follow those rules, tell us how you've followed those rules, and if something bad happens and we find out that you didn't follow the rules you said you would, we're going to nail you to the wall.

This hinges on Section 15, which I think is inadequate because it does not meet the criterion of someone else deciding what is reasonable. Publishing their safety plans and adhering to them should be enough to grant protection from liability for harm done directly to users, since the publication gives individuals the ability to make an informed decision. Provided the company has done the safety work it said it would, a user deciding that is sufficient for them and choosing to use the model should be allowable.

That should not extend to harm done to others. They don't get to choose. Consequently, the standard required to be protected against claims of negligence has to be decided by a third party (experts hired by regulators, ideally).

Blanket liability and blanket indemnity both go too far.

If someone makes a YoYo that blows someone up because they made it out of explosives, then they should be held liable.

If someone makes a YoYo that blows up a city because it contained particles unknown and undetectable to any science we have, they shouldn't be to blame.

The key is that they have to have done what we think is required. Legislators get to decide what it is that is required. If a company does all of that, then they shouldn't be held responsible, because they have done all they were asked to do.

The problem is not that a law provides indemnity, the problem is that it sets the standard to qualify too low.


Shifting liabilities from corporations to the public coffer is what companies do. You'll often hear this described as "privatizing profits and socializing losses". Let me introduce you to the Price-Anderson Act of 1957 [1]. It's been repeatedly extended, most recently with the ADVANCE Act [2]. This limits liability for the nuclear power industry in a whole range of ways:

- It removes jurisdiction from state courts to the federal courts. In recent weeks, the party of "states' rights" has been doing something similar to stop states from regulating prediction markets, as an aside [3];

- All actions are consolidated into a single claim;

- That claim has an inflation-adjusted absolute limit, which is somewhere around $500 million (I'm not sure of the exact 2026 figure);

- Any damages beyond that are partially shared by the industry and an industry self-funded insurance program;

- The industry as a whole has a total liability limit, also inflation-adjusted. I believe this is around $10 billion.

For context, the cleanup from Fukushima is likely to take a century and the cost may well exceed $1 trillion for a single incident [4]. So if this happened in the US, the government would be on the hook for almost all of it.
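
To make that gap concrete, here's a quick back-of-envelope using the rough figures above (which, again, I haven't verified against the current inflation adjustments):

    # Illustrative only; both figures are the approximations cited above.
    industry_cap = 10e9       # ~$10B total industry liability limit
    incident_cost = 1e12      # ~$1T Fukushima-scale single-incident cost
    public_share = incident_cost - industry_cap
    print(f"Public share: ${public_share:,.0f} "
          f"({public_share / incident_cost:.0%} of the total)")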

So I have two points here:

1. If you oppose any effort to shift liability from AI companies to the government (as I do) with legislation such as this, how do you feel about the nuclear industry doing the exact same thing? and

2. A minor point, but I noticed while searching for the latest details that Gemini made factual errors, stating that "the Act is set to expire in 2025" when it was in fact extended in 2024 until 2045. Always check AI's work, people.

[1]: https://en.wikipedia.org/wiki/Price%E2%80%93Anderson_Nuclear...

[2]: https://en.wikipedia.org/wiki/ADVANCE_Act

[3]: https://www.pbs.org/newshour/politics/federal-government-sue...

[4]: https://cleantechnica.com/2019/04/16/fukushimas-final-costs-...


This is what government should be doing. Figure out how to do something safely, make that a regulation, then shield companies from liability as long as they follow that regulation. In practice you won't extract trillions of dollars from most companies anyways, because they'll go bankrupt long before they manage to pay all that back.

It's the "guns don't kill people" equivalent for AIs.

---

Before the pitchforks and downvotes:

- yes, it's a deliberate simplification

- yes, the issue is complex because you can also argue that you can't blame authors of encyclopedias and chemistry books for bombs and poisons, so why would we blame providers of LLMs

- and no, this bill is only introduced to cover everyone's asses when, not if, LLM use results in large-scale issues.


Quite an appropriate analogy: gun manufacturers were sued for their responsibility in US mass shootings. They won, so the mass shootings continue.


Wait, you think that if we shut down the gun companies the mass shootings would stop?

Stop, no; reduce, yes. Same as how seatbelts didn't end car crash fatalities. Eliminating drunk driving also wouldn't end road fatalities, but outside of Wisconsin no one believes drunk drivers shouldn't be in jail.

Those comparisons don’t make sense. There are millions of guns in the country, and cutting US manufacturing wouldn’t change that any time soon.

> any time soon

Mandating seatbelts on new cars also did not immediately stop people from flying through their windshields.


In fairness, a well-designed and tested weapon can at least be expected to perform reliably and consistently each time. We also understand deeply how they work and can easily investigate, if something happens, whether it was user error, a defect, or a design issue. LLMs, not so much.

This dodges the moral argument behind "guns don't kill people", which is worth confronting directly. I think people can reasonably disagree about whether second/third/fourth/etc. order effects carry moral/legal responsibility.

In light of such disagreement, and given the lack of any higher authority among free, equal, people to arbitrate it, the only reasonable way to coexist peacefully is to avoid imposing your ideas on others. This is the foundation of a liberal society.


“Guns don’t kill people” is actually a pretty strong argument for regulating who can access firearms. Things like red flag laws and background checks.

My first thought was that this must be related to the automated weapons issue that got Anthropic on Trump's shitlist. It makes sense that a company that will eventually be asked to build weapons that choose their own targets will want to limit liability when it will inevitably kill the "wrong" person.

Also, I am disturbed by the fact that in all the discussions on this topic during the last month, no one has mentioned the magic word "Skynet". This is clearly a terrible idea. And if a company needs immunity from liability, they know it is a terrible idea.

Skynet's flaw wasn't that it killed humans. It was a military machine specifically designed to kill humans. If it only killed "the enemy", it would have been hailed a marvelous success. It was only considered a failure because it killed the wrong humans.


There are four (five counting the web version; I originally wrote six and seven) maintained Outlook variants on Windows 11, last I checked, and I have issues with each one. Search especially, but then that has remained an unsolved problem for 30 years. I am sure "AI" will finally solve this.

Edit: Have checked and found that two I thought were still maintained (16 and 19) were EOLd in October.


In fairness, the transition away from MSFT 365 Copilot (as we all of course call Office now) might include more friction. Mountainous VBasic monstrosities are sometimes the way things get done in orgs I am personally familiar with, and that can be hard to switch away from. In general though, I consider this focus on edge cases just not helpful, especially as one must start a transition to fully uncover them and get around to addressing them. I also don't think that ancient Excel scripts are an unsolvable problem, but they are one that needs to be very carefully handled.

Respectfully, so what? There have always been specific use cases and user bases requiring a specific OS. No one ever considered OpenBSD interchangeable with Windows, and few see Linux distros as a 100% drop-in replacement for someone relying on Logic Pro.

Thing is, I really don't get this knee-jerk "but what about INSERT_RARE_EDGECASE". It isn't helpful and argues against something no one actually working on these projects ever proposed. Even if MSFT software remains in use, any gained alternative is a win, license costs and strategic autonomy both being valuable.

And yes, as you hinted, a large contingent of clerical work may already happen in a browser, with any exceptions found potentially addressable in the coming years, especially as older implementations may be updated anyways.

Let's be honest, we all underestimate how much we (can) do solely inside the browser anyways, and even more so severely misgauge how few people are reliant on any native (non-Electron) software at all outside gaming.

Power user is such a nebulous term anyway. To me, someone spending hours on end in Confluence can be a power user, having never left the browser. The same for a designer using Figma. 'Course, if one truly requires native-only software, they may more likely fall under the power user umbrella, but again, few are seriously discussing just forcing those users over since, reasonably, one must presume they have a reason for doing what they are doing.


The fact that both knew that C++ is a programming language at all must suffice as evidence, at least for the purposes of this article. Weirdly, a real divergence from the Theranos reporting, which, on top of that, was absolutely in the public interest, as it affected the health of patients and concerned actual fraud. Here it is exposure for exposure's sake, and not well reasoned to boot.

But are they themselves newsworthy or is it what they created and that they hold a lot of coins?

There are many people, both FOSS devs and those working for major corporations, who have contributed to or been singularly responsible for very impactful technologies, but in general, if such a person wants to keep their persona discreet and there is no evidence they have done anything of public interest, the reporting remains purely on what they have done. Akin to why Wikipedia generally has rules for notability (I'd argue Satoshi falls under ONEVENT if we are strict here).

To me, the way you describe it, the line appears to lie less in whether there may be a public interest and more in whether there is public attention. In other words, is the line in the sand whether people should know this, or whether they want to (and will thus buy copies)?

Genuinely asking, is there a rule set on this the NYT should adhere to? What is the AP's position on unmasking a pseudonymous character only notable for a specific thing?


In Austria we put it to a vote. Right after building the first fission plant. We never switched it on after a narrow defeat. At least in our narrow case, the restrictions were exclusively democratic.

Signal, an app predominantly used by government officials to leak war plans or bypass historical record-keeping obligations.
