I like the idea of a central signing authority for open source. While this might go against the spirit of open source, I think it would eventually create a critical mass and an outcry if Microsoft or Google were to play games with it. Foundations might also be a good way to protect against legal trouble when distributing OSS under different regulations. I am imagining, e.g., an F-Droid that plays Google's game. With reproducible or at least audited builds, some trusted authorities could actually produce more trustworthy builds, especially in times of supply chain attacks. However, I think such distribution authorities would need really good governance and a lot of funding.
There is no real advantage of a central signing authority. If you use Debian the packages are signed by Debian, if you use Arch they're signed by Arch, etc. And then if one of them gets compromised, the scope of compromise is correspondingly limited.
You also have the verification happening in the right place. The person who maintains the Arch curl package knows where they got it and what changes they made to it. Some central signing authority knows what, that the Arch guy sent them some code they don't have the resources to audit? But then you have two different ways to get pwned, because you get signed malicious code if a compromised maintainer sends it to the central authority to be signed, or if the central authority gets compromised and signs whatever they want.
All PKI topologies have tradeoffs. The main benefit to a centralized certification/signing authority is that you don't have to delegate the complexity of trust to peers in the system: a peer knows that a signature is valid because it can chain it back to a pre-established root of trust, rather than having to establish a new degree of trust in a previously unknown party.
The downside to a centralized authority is that it's a single point of failure. PKIs like the Web PKI mediate this by having multiple central authorities (each issuing CA) and forcing them to engage in cryptographically verifiable auditability schemes that keep them honest (certificate transparency).
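The core idea behind that kind of auditability can be sketched in a few lines: an append-only log whose head commits to every entry ever added, so a CA cannot quietly issue a certificate and later deny it. This is a toy hash chain, not the actual Merkle-tree construction real CT logs use; all names and entries here are illustrative.

```python
import hashlib

def leaf_hash(cert: bytes) -> str:
    # Hash of a single log entry (a toy stand-in for an issued certificate).
    return hashlib.sha256(cert).hexdigest()

class TransparencyLog:
    """Append-only log whose head commits to every entry ever added."""

    def __init__(self):
        self.entries = []
        self.head = hashlib.sha256(b"empty-log").hexdigest()

    def append(self, cert: bytes) -> str:
        # New head = H(old head || entry hash): altering or removing any
        # past entry changes every later head, so tampering is detectable
        # by anyone who recorded an earlier head.
        self.entries.append(cert)
        self.head = hashlib.sha256(
            (self.head + leaf_hash(cert)).encode()
        ).hexdigest()
        return self.head

log = TransparencyLog()
h1 = log.append(b"cert: example.org, issued by CA-1")
h2 = log.append(b"cert: example.net, issued by CA-2")
assert h1 != h2  # every append moves the head
```

Monitors that replay the same entries must arrive at the same head; any divergence is evidence of misbehavior.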
It's worth noting that the kind of "small trusted keyring" topology used by Debian, Arch, etc. is a form of centralized signing. It's just an ad-hoc one.
> a peer knows that a signature is valid because it can chain it back to a pre-established root of trust, rather than having to establish a new degree of trust in a previously unknown party.
So the apt binary on your system comes with the public keys of the Debian packagers and then verifies that packages are signed by them, or by someone else whose keys you've chosen to add for a third party repository. They are the pre-established root of trust. What is obtained by further centralization? It's just useless indirection; all they can do is certify the packages the Debian maintainers submit, which is the same thing that happens when they sign them directly and include their own keys with the package management system instead of the central authority's, except that now there isn't a central authority to compromise everyone at once or otherwise introduce additional complexity and attack surface.
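The pinned-keyring model described above can be sketched as follows. This is a toy model of the trust topology only: HMAC stands in for the real public-key signatures apt uses (GPG), and all key names and package contents are made up.

```python
import hashlib
import hmac

# Pre-established root of trust: key material shipped with the package
# manager (a toy stand-in for the keyrings a Debian system ships with),
# plus a key the user chose to add for a third-party repository.
TRUSTED_KEYS = {
    "debian-archive": b"secret-key-material-1",
    "vendor-repo":    b"secret-key-material-2",  # user-added third party
}

def sign(key_id: str, package: bytes) -> bytes:
    # HMAC stands in for a real detached signature in this sketch.
    return hmac.new(TRUSTED_KEYS[key_id], package, hashlib.sha256).digest()

def verify(key_id: str, package: bytes, signature: bytes) -> bool:
    key = TRUSTED_KEYS.get(key_id)
    if key is None:
        return False  # signer is not in the pinned keyring: reject
    expected = hmac.new(key, package, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

pkg = b"curl package contents"
sig = sign("debian-archive", pkg)
assert verify("debian-archive", pkg, sig)
assert not verify("debian-archive", pkg + b"tampered", sig)
```

Note there is no indirection anywhere: verification succeeds or fails purely against the keys the user already holds.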
> PKIs like the Web PKI mediate this by having multiple central authorities (each issuing CA) and forcing them to engage in cryptographically verifiable auditability schemes that keep them honest (certificate transparency).
Web PKI is a worst-of-both-worlds omnishambles. You have multiple independent single points of failure: compromising any of them allows you to sign anything. Its only redeeming qualities are that the CAs have to compete with each other and that CAA records nominally allow you to exclude CAs you don't use from issuing certificates for your own domain. But end users can't exclude CAs they don't trust themselves, most domain owners don't even use CAA records, and a compromised CA could ignore the CAA record and issue a certificate for any domain regardless.
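For reference, a CAA policy is just a DNS record the CA is supposed to check at issuance time. A hypothetical zone-file fragment (domain and CA name illustrative):

```
; Only the named CA may issue certificates for example.com.
example.com.  IN  CAA  0 issue "letsencrypt.org"
; Forbid wildcard issuance entirely.
example.com.  IN  CAA  0 issuewild ";"
```

This is exactly the limitation mentioned above: the record constrains well-behaved CAs at issuance time, but a compromised CA can simply not check it, and relying parties never see it.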
> It's worth noting that the kind of "small trusted keyring" topology used by Debian, Arch, etc. is a form of centralized signing. It's just an ad-hoc one.
Only it isn't really centralized at all. Each package manager uses its own independent root of trust. The user can not only choose a distribution (apt signed by Debian vs. apt signed by Ubuntu), they can use different package management systems on the same distribution (apt, flatpak, snap, etc.) and can add third party repositories with their own signing keys. One user can use the amdgpu driver which is signed by their distribution and not trust the ones distributed directly by AMD, another can add the vendor's third party repository to get the bleeding edge ones.
This works extremely well. There are plenty of large trustworthy repositories like the official ones of the major distributions for grandma to feel safe in using, but no one is required to trust any specific one nor are people who know what they're doing or have a higher risk tolerance inhibited from using alternate sources or experimental software.
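The third-party-repository pattern described above looks like this in an apt sources entry (repository URL and keyring path are hypothetical):

```
# Packages from this repository are accepted only if signed by the key in
# the named keyring, independently of the Debian archive keys.
deb [signed-by=/usr/share/keyrings/vendor-archive-keyring.gpg] https://repo.example.com/apt stable main
```

Each such line pairs a source with its own root of trust, which is what keeps the scope of any one key compromise limited to that repository.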
Nothing, I can’t think of a reason why you would want to centralize further. But that doesn’t mean it isn’t already centralized; the fact that every Debian ISO comes with the keyring baked into it demonstrates the value of centralization.
> Each package manager uses its own independent root of trust.
Yes, each is an independent PKI, each of which is independently centralized. Centralization doesn’t mean one authority; it’s just the way you distribute trust, and it’s the natural (and arguably only meaningful) way to distribute trust in a single-source packaging ecosystem like most Linux distros have.
If someone is willing to put in the work in governance, FOSS projects would be willing to fund it - at least Mudlet would be. We get income from Patreon to cover the costs.
There is ossign.org, Certum offers a cheap certificate for FOSS [1], and Comodo offers relatively cheap (but still expensive) certs as well [2]. I'm not affiliated with either service, but these are the ones I remember from the last time I had to dig into this mess, so there might be even more services that I don't recall at the moment.
Those costs don't even include the full lifecycle societal cost of nuclear energy [0] (25–39 cents by some studies). Sure, renewables also have lifecycle costs we might not be paying yet, but even more importantly, in Germany we do not even have a societal and research consensus on what to do with the waste (this might be much better in other countries).
The cost of energy is also the full lifecycle cost, including waste handling, deconstruction and security. I am not saying that everything is in that equation for renewables. However, one truth, at least in Germany, is that we still have not solved the waste problem. Other countries have both better options and a societal consensus. Here is a study of the societal cost of nuclear energy:
Actually, IMHO the promise goes beyond standard FP4 quants. I think the goal is more where 1.58-bit (ternary) quants are heading. Having said that, it would be interesting to see performance on nonstandard HW.
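For context, "1.58 bit" means each weight takes one of three values, so log2(3) ≈ 1.58 bits of information per weight. A rough sketch in the spirit of the absmean ternary scheme (the exact recipe used by any given model is an assumption here, not a quote from a paper):

```python
def ternary_quantize(weights):
    """Quantize a list of floats to {-1, 0, +1} times a per-tensor scale.

    Scale by the mean absolute value, then round each scaled weight to
    the nearest of -1, 0, +1. Every weight is then storable in
    log2(3) ~= 1.58 bits, and matmuls reduce to adds/subtracts.
    """
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    q = [max(-1, min(1, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction of the original weights.
    return [v * scale for v in q]

weights = [0.9, -0.05, -1.2, 0.4]
q, s = ternary_quantize(weights)
assert set(q) <= {-1, 0, 1}
```

The hardware angle follows directly: with weights in {-1, 0, +1}, the inner loop needs no multiplier at all, which is why nonstandard accelerators are interesting for this regime.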
No problem at all in the EU, as the user would either need to review and redact the output or would need to put up a transparency note by law [0]. I am sure that Anthropic, with their high ethical standards, will educate their users ...
Code may not be, but opening a Merge Request undercover may be unlawful:
> Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system
Depends on whether it's a closed-loop agent. If the agent opens the request, writes the body, and is triggered by an answer on the MR, then I'd expect the law to cover this.
Good question. Actually, I was assuming that at least source code is treated as text under the legal regime (there are typically special rules in copyright law, but provisions applying to text should apply). Furthermore, I would think pull requests etc. are all text. So I would think this applies.
But it's not just text. Once again, it's explicitly defined as:
> which is published with the purpose of informing the public on matters of public interest
There is no "informing the public on matters of public interest" in source code nor in an MR. It's clearly meant to prevent "deepfake" news; the image and video provisions explicitly call that out.
You are absolutely right. However, the recitals point clearly beyond protection against fake news only. IMHO running such an agent in stealth mode can easily be illegal. Article 50(1) states:
> Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.
But you aren't a provider of AI services by using AI. There's a clear difference already called out between "provider" and "deployer". An AI user could barely be called a "deployer" as is, let alone a "provider".
In other words, what AI service are you providing by creating a PR?
I would argue that the person reading an AI-generated pull request is interacting with an AI if there is no human oversight. And are you sure that you fall outside this definition (at least as a company):
> provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
Actually, the thing that might need that would be make -j. Maybe it would make sense to make a BSD or GNU make version that integrates those optimizations. I am actually running a lot of stuff in Cygwin, and there are huge fork penalties on Windows, so my hope would be to actually get some real payloads faster...
I hate to say it, but forkrun probably won't work in Cygwin. I haven't tried it, but forkrun makes heavy use of Linux-only syscalls that I suspect aren't available in Cygwin.
forkrun might work under WSL2, as it's my understanding that WSL2 runs a full Linux kernel in a hypervisor.
EV chargers take a different approach. There is no power on the connector while you're plugging it in. It then locks in place before the contactor closes and power is delivered. Unplugging is the same, power is removed before the plug is unlocked for release.
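The sequencing described above amounts to a small interlocked state machine: lock before energizing, de-energize before unlocking. A minimal sketch (class and method names are illustrative, not any real charger firmware):

```python
class ChargeConnector:
    """Toy model of the interlock: the contactor may only close while the
    connector is mechanically locked, and it must open before the lock
    releases, so the pins are never live during insertion or removal."""

    def __init__(self):
        self.locked = False
        self.energized = False

    def plug_in(self):
        # Pins are dead during insertion; lock first, then energize.
        self.locked = True
        self._close_contactor()

    def unplug(self):
        # Reverse order: open the contactor first, then release the lock.
        self.energized = False
        self.locked = False

    def _close_contactor(self):
        if not self.locked:
            raise RuntimeError("interlock: connector must be locked first")
        self.energized = True

    def forcible_separation(self):
        # Forced mechanical separation (accident, failure): cut power
        # immediately, whatever state the lock is in.
        self.energized = False
        self.locked = False

c = ChargeConnector()
c.plug_in()
assert c.locked and c.energized
c.unplug()
assert not (c.locked or c.energized)
```

The guard in `_close_contactor` is the whole point: power delivery is gated on the mechanical lock, never the other way around.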
Do the EV chargers not have an alternative disconnect sequence for when the two sides are forcibly mechanically separated? Either by accident, malicious intent, or mechanical failure of some component.
This is effectively how hot swapping is done for many modern systems. It seems counterintuitive to call it hot plugging, but it's what you'll find if you look up "hot swap controller".
While your article is insightful, can the blog author please redact the actual names and nicks from the original blog post (including the exact places where to find the information)? This was discussed below. While I think you had good intentions, it might be good to also reflect on the rights of that person not to be identified.
Edit: I initially misread the comment as coming from someone with more insight. However, I guess it is obvious that anyone who can see the JavaScript participates involuntarily in the DoS.
While, in hindsight, I would also find it better to redact some of the names and details mentioned in the original article, I hardly find real defamation. I guess you would need to provide random unproven evidence that someone is the target of various foreign law enforcement and commercial sites.
In the article they even call for donations to archive.today. As far as I read it, the tone of the post is full of admiration. The funny thing is that IMHO the rather childish JavaScript attack gives credibility to the post after all.
In all this, I somehow hope that we see a legal solution to the major global copyright crisis that has been reinforced by LLM training. (If you want a conspiracy theory: that, I guess, would be easy monetization for archive.today these days, selling their snapshots.)
Thinking about it, I think we might need better platform rules, maybe even regulation on this. There seems to be pretty much no line of defense, which might explain the rather desperate DoS. If you take anonymity as a right, discussions like ours here on HN are dangerous as well, as they easily make otherwise difficult-to-find knowledge visible. So while a single fan page might go unnoticed, in the case of doxing, amplification is also a problem. Just my spontaneous thought.
Edit: one afterthought. The story about hacking together a response to the GDPR takedown request quoting press rights and freedom of speech using an LLM actually shows the deeper problem. Rights come with obligations (at least ethical ones). At least in Europe, press standards are typically rather aware of doxing risks. While celebrities also successfully use legal defenses, I still think the defenses for activists are weak when balancing interests here (at least if you made something of public interest).
Smart move by Fujifilm. That will be the future of software licensing, with AI breaking copyright. Software will come encrypted and only run on secure processors. AI will push us further into an age of cloud, software DRM and software patents. The rest will be effectively public domain.