I didn't know how old this actually is. I was surprised to learn Marcus Aurelius wrote his Meditations as a private journal... never meant to be published.
I had the same question, and after reading about it I found there are multiple layers built on top of each other.
Existing plants are built to run 40-60 years; retiring them early creates "stranded assets", and pension funds fight hard to avoid that.
The renewable projects waiting for permits exceed total existing capacity; the bottleneck is not the tech but local permitting.
30 years, and nobody (pharma, governments, etc.) has found it worth studying properly.
If it worked, wouldn't at least some researchers have picked it up?
this made me look into how cloud hypervisors actually work at the hardware level... they all offload it to custom hardware (SmartNICs, FPGAs, DPUs, etc.). The CPU does almost nothing except tenant work. AWS -> Nitro, Azure -> FPGAs, NVIDIA sells DPUs.
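One way to see this offloading from inside a guest: the "NIC" and "disk" a Nitro instance sees are PCI devices implemented by the Nitro card itself, not by host software. A toy sketch, parsing made-up `lspci`-style output (the sample text and the vendor-matching heuristic are illustrative assumptions, not a real detection method):

```python
# Toy sketch: hardware offload as seen from inside a guest.
# On AWS Nitro instances the guest's NIC is an ENA device backed by the
# Nitro card. We spot passthrough vendor hardware by parsing lspci-style
# output. SAMPLE_LSPCI is invented for illustration.

SAMPLE_LSPCI = """\
00:04.0 Non-Volatile memory controller: Amazon.com, Inc. NVMe EBS Controller
00:05.0 Ethernet controller: Amazon.com, Inc. Elastic Network Adapter (ENA)
00:1f.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA
"""

def offload_devices(lspci_text: str) -> list[str]:
    """Return PCI lines that look like vendor offload hardware (here: Amazon)."""
    return [line for line in lspci_text.splitlines() if "Amazon.com" in line]

for dev in offload_devices(SAMPLE_LSPCI):
    print(dev)
```

On a real instance you would feed this the output of `lspci`; the point is just that the offload hardware is visible as ordinary PCI devices.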
what surprised me is how Proton works under the hood... no emulation at all!
Wine translates the Windows API to Linux equivalents. Then DXVK converts DirectX 9/10/11 calls into Vulkan in real time, and VKD3D-Proton handles DX12.
so it's always native Vulkan in the end.. no wonder performance is even better than on Windows!
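The translation-layer idea can be illustrated with a toy sketch. This is NOT how DXVK actually works (it's C++ implementing the D3D COM interfaces on top of Vulkan); the mapping below is a hypothetical simplification just to show that each DirectX call becomes Vulkan work at call time, so the GPU driver only ever sees native Vulkan:

```python
# Toy illustration of an API translation layer (hypothetical mapping,
# not DXVK's real internals). Each D3D11 call name is mapped to a
# (simplified) Vulkan counterpart at call time.

D3D11_TO_VULKAN = {
    "DrawIndexed": "vkCmdDrawIndexed",
    "ClearRenderTargetView": "vkCmdClearColorImage",
    "Map": "vkMapMemory",
}

def translate(d3d_call: str) -> str:
    """Translate a D3D11 call name to its simplified Vulkan counterpart."""
    try:
        return D3D11_TO_VULKAN[d3d_call]
    except KeyError:
        raise NotImplementedError(f"no translation for {d3d_call}")

print(translate("DrawIndexed"))  # vkCmdDrawIndexed
```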
> no wonder performance is even better than on Windows!
Every "benchmark" I've seen from someone claiming a game performs better on Linux via Proton than on Windows was produced by someone who doesn't know how to run benchmarks or how statistics work.
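A minimal sketch of what a fair comparison needs: repeated runs, a noise estimate, and a check that the gap exceeds that noise before claiming a winner. The FPS numbers below are invented, and the "two standard errors" cutoff is a rough rule of thumb, not a proper significance test:

```python
# Sketch: single-number "X fps vs Y fps" headlines ignore run-to-run
# variance. Repeat runs, estimate noise, compare gap to noise.
import statistics

windows_fps = [141.2, 139.8, 143.1, 140.5, 142.0, 138.9, 141.7, 140.1]
proton_fps  = [142.0, 140.3, 141.1, 143.5, 139.2, 141.8, 140.9, 142.2]

def summarize(samples):
    mean = statistics.mean(samples)
    # Standard error of the mean: noise in our estimate of the average.
    sem = statistics.stdev(samples) / len(samples) ** 0.5
    return mean, sem

w_mean, w_sem = summarize(windows_fps)
p_mean, p_sem = summarize(proton_fps)

# Rule of thumb: a gap within ~2 combined standard errors is just noise.
gap = abs(w_mean - p_mean)
noise = 2 * (w_sem ** 2 + p_sem ** 2) ** 0.5
print(f"gap={gap:.2f} fps, noise floor={noise:.2f} fps, "
      f"significant={gap > noise}")
```

With these made-up samples the half-frame gap sits well inside the noise floor, so neither side "wins".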
agreed, attackers can use these AI tools to scan open source code and find bugs very fast... if project maintainers don't have access to the same tools, it becomes an unfair fight
Exactly.
The acceptance rate right now is low; maybe less than 10%, and most reports won't be relevant.
Also, if they can use it to categorise, validate, and test reports, why not?
If they get 100 new bug reports but the useless ones are already checked and closed automatically, life would be almost normal again.
Using an LLM for things that require knowledge is sketchy and unreliable, but having fixed pipeline checks that run a few hooks, maybe some automated scripts, add context, link related bugs, create clean versions of the conversation... that's fine!
We see many companies stumbling on LLM problems when the code gets too big or too messy, and that will be the limit, imho. But using these tools for small quick gains is here to stay.
found this if anyone wants to explore: https://vectree.io/c/stoic-self-examination-marcus-aurelius-...