Hacker News: aruametello's comments

To be fair, there is some degree of "hand curation" of the data, so while "it is the internet", the actual training data is a derivation of that.

In a mild but productive analogy:

I could hand you a K&R C programming book plus lots of specs and say "this is the Linux source code" (the raw data all observations were made from, aka "the internet")... or just send you the kernel source code (the refined training data, after a LOT of manual work) that your compiler consumes to produce the kernel (the open-weights model, what they actually shared).

Mildly related rant: honestly it's a bit shit to call an "open weights" model an "open source model"; it's like saying World of Warcraft is open source because they gave you an executable of the game. (You can still change it, but in much more restricted ways.)


> Fork the kernel!

pre "clanker-linux".

I am more intrigued by the inevitable Linux distro that will refuse any code that has AI contributions in it.


Tardux Linux

The 4000 series certainly did; "shader execution reordering" gave a meaningful uplift to tasks that "underutilized warp units due to scattered useful pixels".

It seems to have helped path tracing by a lot.


It's a very honorable mention in my eyes because it's more deserving of the title of "first independent graphics unit" than the GeForce 2. (It did more than just blast already-projected triangles at the screen.)

Not that it was an awesome product, but it certainly was flexible.

A good (albeit tiny) demo of that: vQuake has the same wobbling water distortion as software-rendered Quake, but rendered entirely on the GPU. Perhaps with some interpretation this could be called the "caveman discovers fire" moment of the pixel shading era.


+1 to that. When I first saw Unreal Tournament with the add-on compressed texture pack, it was a real WOW moment.


(VR enthusiast here, mostly under windows)

Intel support has been mild to non-existent in the VR space, unfortunately. Given the very finicky latency and engine support requirements, I wouldn't bet on a great experience, but I hope for the best for more competition in this market. (Even AMD has a lot of caveats compared to NVIDIA.)

Footnotes:

* Critical "as low as it can be" latency support on Intel Xe is still not as mature as NVIDIA's; AMD was lagging behind until recently.

* Not sure about multiprojection rendering support on Intel; lack of it can kill VR performance or make titles incompatible. (Optimized VR games often rely on it.)


It looked like when Intel jumped into this space, they tried to do everything at once. It didn't work well; they were playing catch-up to some very mature systems. They are now being much more selective and restrained. The downside is that things like VR support get put on the back burner for years.

Good for most people, but if you need that functionality and they don't have it, go somewhere else.


Post traumatic "nvidia TurboCache" disorder triggered.

https://en.wikipedia.org/wiki/TurboCache

(Not the same thing 1:1, but worth the joke anyway)


(not a teardown dev)

I had brainstormed a bit on a similar problem (non-world-aligned voxel "dynamic debris" in a destructible environment). One of the ideas that came up was to use a physics solver like the PhysX Flex SDK.

https://developer.nvidia.com/flex

* 12 years old, but it still runs on modern GPUs and is quite interesting on its own as a demo.

* If you run it, consider turning on the "debug view"; it will show the collision primitives instead of the shapes.

General-purpose physics engine solvers aren't very GPU-friendly, but if the only primitive shape being simulated is the sphere (cubes are made of a few small spheres; everything is a bunch of spheres), the efficiency of the simulation improves quite a bit. (No need for conditional treatment of collision pairs like sphere+cube, cube+cylinder, cylinder+sphere, and so on.)

I wondered if it could be solved by having a single sphere per voxel, considering only the voxels at the surface of the physically simulated object.
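To illustrate the point about sphere-only simulation (a hand-rolled sketch, not code from the Flex SDK), every collision pair reduces to the same single test, so there is no per-shape branching for the GPU to diverge on:

```python
import math

def sphere_collision(p1, r1, p2, r2):
    """Return (penetration depth, contact normal) for two spheres,
    or (0.0, None) if they do not overlap. This one test covers every
    pair in a spheres-only world -- no sphere+cube, cube+cylinder, etc."""
    dx, dy, dz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0 or dist == 0.0:
        return 0.0, None
    return overlap, (dx / dist, dy / dist, dz / dist)

# One sphere per surface voxel, as suggested above: a 2x2 voxel "face"
# approximated by four unit spheres centered on the voxel centers.
surface_spheres = [((x + 0.5, y + 0.5, 0.5), 0.5)
                   for x in range(2) for y in range(2)]
```

Names like `sphere_collision` are mine for illustration; a real solver would also batch these tests across warps, which is exactly where the uniform code path pays off.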


From what I've seen in "low end" SSDs like the 120 GB SATA SanDisk ones, under Windows with heavy, near-constant paging loads, they exceed their manufacturer-rated lifetime TBW by quite a lot before actually producing filesystem errors.

I can see this could be a weaker spot in the durability of this device, but it could certainly still take a few years of abuse before anything breaks.

An outdated study (2015), but in line with the "low end SSDs" I mentioned:

https://techreport.com/review/the-ssd-endurance-experiment-t...
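As a back-of-the-envelope sketch of why rated TBW still buys years even under paging abuse (the 72 TBW rating and 200 GB/day figures below are illustrative assumptions, not numbers from the linked study):

```python
def days_until_rated_tbw(rated_tbw_tb, writes_gb_per_day):
    """Days until the manufacturer's rated endurance is reached,
    assuming a constant daily write volume (1 TB = 1024 GB here)."""
    return rated_tbw_tb * 1024 / writes_gb_per_day

# Hypothetical 120 GB SATA drive rated for 72 TBW, hammered with
# 200 GB/day of swap traffic: rated endurance alone lasts about a year.
days = days_until_rated_tbw(72, 200)  # ~368.6 days
```

And per the endurance experiment linked above, drives in this class tended to keep working well past that rating before real filesystem errors appeared.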


it seems to be bad at spatial and some temporal tasks given it currently f*** s**'s at pokemon.

source: https://www.twitch.tv/claudeplayspokemon


You're allowed to say "fucking sucks" on Hacker News. It's not against the rules, and there's no "algorithm" that will penalize you.


Glad to know; I am rather new here and somewhat used to "don't do the usual forbidden stuff".


"fuck sex's"?


that's silly. obviously there's a missing apostrophe:

"it's currently Flan Sam's at pokemon"

