They responded to my question about it on GitHub:
Trivy compromised (https://www.aquasec.com/blog/trivy-supply-chain-attack-what-...) -> all CircleCI credentials leaked -> included PyPI publish token + GitHub PAT -> | WE DISCOVER ISSUE | -> PyPI token deleted, GitHub PAT deleted + account removed from org access, Trivy pinned to last known safe version (v0.69.3)
What we're doing now:
Block all releases until we have completed our scans
Working with Google's Mandiant security team to understand the scope of impact
Reviewing / rotating any leaked credentials
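The "pinned to last known safe version" step above can be sketched as a CI guard. This is a hypothetical illustration, not their actual pipeline; the pinned version comes from the comment, but the check itself (and the hardcoded `INSTALLED` stand-in) is an assumption.

```shell
#!/bin/sh
# Sketch: fail the build if the Trivy in use is not the pinned known-safe
# version, instead of silently pulling "latest".
PINNED="0.69.3"
# In a real pipeline this would be read from the binary, e.g.:
#   INSTALLED=$(trivy --version | awk 'NR==1 {print $2}')
# Hardcoded here so the sketch is self-contained.
INSTALLED="0.69.3"
if [ "$INSTALLED" = "$PINNED" ]; then
  echo "trivy version ok"
else
  echo "trivy version drifted: $INSTALLED != $PINNED" >&2
  exit 1
fi
```

Pinning plus a checksum on the downloaded release artifact is the usual belt-and-suspenders here: even if the release channel is compromised again, the build fails instead of running the tampered binary.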
Realistically, I think it will come down to the aggrieved counterparties here. Who was on the losing side of the money: Joe Schmoe day traders, or a bunch of funds who lost their shirts?

If it's the hedge funds or institutional money, you can be absolutely sure this will come to a head. People don't like being taken for a ride, and if organized market participants are repeatedly taken for a ride, they will come around and make sure there is a comeuppance as a collective.
There is credible reporting (Reuters etc.) that ships are being turned around, so it does appear that the mines (or at least threat thereof) have been deployed. Either way, as long as the threat of sinking is alive the strait is uninsurable and is for all practical purposes closed.
Fair point, but the IRGC telling ships to turn around, as opposed to the ships doing it themselves (as per reporting), would imply that the Strait has been blockaded in some fashion. It remains to be seen if this is all a bluff; I'm skeptical too, as this would be their last option, but given the strikes on other Gulf countries, the threat seems a bit more plausible.
> The dark side of this same coin is when teams try to rely on the AI to write the real code, too, and then blame the AI when something goes wrong.

You have to draw a very clear line between AI-driven prototyping and developer-driven code that developers must own. I think this article misses the mark on that by framing everything as a decision to DIY or delegate to AI. The real AI-assisted successes I see have developers driving with AI as an assistant on the side, not the other way around. I could see how an MBA class could come to believe that AI is going to do the jobs instead of developers, though, as it's easy to look at these rapid LLM prototypes and think that production-ready code is just a few prompts away.
This is what's missing in most teams. There's a bright line between bolting throwaway, almost fully vibe-coded, cursorily architected features onto a product and designing and building a scalable, production-ready product. I don't need a mental model of how to build a prototype; I absolutely need one for something I'm putting in production that is expected to scale, and where failures are acceptable but failure modes need to be known.

Almost everyone misses this when going whole hog on AI, or whole hog against it.

Once I build a good mental model of how my service should work and design it properly, all the scaffolding is much easier to outsource, and that's a speed-up, but I still own the code because I know what everything does and my changes to the product are well thought out. For throwaway prototypes it's 5x this output, because the hard part of actually thinking the problem through doesn't really matter; it's just about getting everyone to agree on one direction of output.
But the world is inherently not deterministic. We know it's probabilistic at least at small enough scales. Most hidden-variable theories have been disproven, and to the best of our current understanding the laws of the physical universe are probabilistic in nature (i.e., the Standard Model). So while we can probably come up with a very good probabilistic model of things that can happen, there is no perfect prediction; or rather, there cannot be.
Dummit and Foote is the classic abstract algebra textbook to learn about how to precisely define these. Its treatment of ring theory is very well motivated and easy to grasp.
I don't think anyone is advocating for incentivizing forced/child labour.
Given that the ILAB list you linked is itself maintained under EO 13126, signed by the Clinton administration, I think there can be nuance in the discussion around whether or not the blanket application of certain foreign policy instruments is the right way to induce a change in the domestic policy of another country to solve the problem of bad labour practices.
We can do this without it becoming an argument about whether trade is "good" or "bad" depending on what "side" you are on.
This difference in emotional reaction is because of the effort involved in the process. Functionally, we see YouTube video creation as a fundamentally difficult exercise (to do well) that results in a singular product (one video). Any additional content would need an ongoing investment of time and money from the creator. The LLMs, though, would not require an ongoing investment beyond the first training run, which is probably why you have a problem with them: they're an extremely high-leverage way of taking advantage of content.