GPU.js (gpu.rocks)
312 points by graderjs on Oct 8, 2021 | hide | past | favorite | 50 comments


I have been using GPU.js in the last few weeks and I feel like there are two parts to the magic that makes it work.

The headline feature is the translation of JavaScript into shader code, but the underlying architecture that interfaces between arrays and textures is, I think, where the real utility lies for me.

I think I'd actually be better off working with just that second part: writing the kernels in a native shader language, but being able to pass in arrays and get arrays out the other end.

Writing the shader itself in JavaScript is cool when you consider what the library has to do to make it work, but when it doesn't work it can be a real struggle to find out why.

Also, there seems to be some bit-rot in the documentation: the link for the API reference is https://doxdox.org/gpujs/gpu.js/
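To illustrate what that array-to-texture layer has to do, here's a rough sketch of the bookkeeping involved in packing a flat array into a 2D texture before uploading it to WebGL. The helper names are mine, not GPU.js internals:

```javascript
// Sketch: pack a flat Float32Array into a 2D texture-shaped buffer,
// the way array<->texture layers typically do before uploading to WebGL.
// (Hypothetical helpers, not the actual GPU.js implementation.)
function packIntoTexture(data) {
  // Pick a roughly square texture large enough to hold the data.
  const width = Math.ceil(Math.sqrt(data.length));
  const height = Math.ceil(data.length / width);
  // Texture uploads need the full width*height allocation; pad with zeros.
  const texels = new Float32Array(width * height);
  texels.set(data);
  return { width, height, texels };
}

// Reading back reverses the trick: slice off the padding.
function unpackFromTexture({ texels }, length) {
  return texels.slice(0, length);
}

const input = new Float32Array([1, 2, 3, 4, 5]);
const tex = packIntoTexture(input);                  // 3x2 texture, 5 values + 1 pad
const roundTrip = unpackFromTexture(tex, input.length);
```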


I've also used this for a few months. Here are some things that might bite you:

- Minification: The JS code is transpiled at runtime, and there are subtle errors that can occur once you minify your code.

- Performance: Each time you pass a JS array into GPU land, it's copying the data into a texture. If you want to do repeated calculations on the same data, don't pass the arrays in each time, because CPU/GPU data transfer will become a performance bottleneck.

- Math.atan2: the implementation is incorrect and I burned a few days because of it. I PR'ed the fix a few months ago but the library maintainer no longer seems active.
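For context on the atan2 point: the two-argument arctangent has per-quadrant cases that a naive `atan(y/x)` port gets wrong. A reference implementation you can sanity-check a shader port against looks roughly like this (illustrative only, not the actual PR against GPU.js):

```javascript
// Reference atan2 built from single-argument atan -- the usual shape of
// the fix when a GLSL port of Math.atan2 mishandles quadrants.
function atan2(y, x) {
  if (x > 0) return Math.atan(y / x);                  // right half-plane
  if (x < 0 && y >= 0) return Math.atan(y / x) + Math.PI;  // upper-left
  if (x < 0 && y < 0) return Math.atan(y / x) - Math.PI;   // lower-left
  if (y > 0) return Math.PI / 2;                       // x === 0, positive y
  if (y < 0) return -Math.PI / 2;                      // x === 0, negative y
  return 0;                                            // atan2(0, 0), matches JS
}
```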

It's an awesome library. I write mostly GLSL now, but I still write the initial algorithm in JS and transpile it using GPU.js, then tweak the output accordingly.


>when it doesn't work it can be a real struggle to find out why.

Exactly like most GPGPU debugging then!


I mainly work in graphics programming, and while it can be a bit opaque, I feel this is nowhere near the problem it was just ten years ago. Modern graphics debuggers such as RenderDoc give an amazing view of what is going on, and on consoles (using Nsight, PIX, or Razor GPU) you can also debug things by capturing a replay, modifying shaders on the fly, etc.

Having originally learned graphics programming before such tools existed I really can't get over how magic this feels.


This was posted multiple times on HN over the past 5 years.

And every time I see it, I cry that JavaScript doesn't have something like numpy.

numpy is just so damn logical and fast (and comprehensive!) that I don't consider myself a Python programmer; I'm a numpy user.

EDIT: (removed complaint). That said: this looks like a good foundation for someone to use for implementing a fully featured numpy in JS.
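For what it's worth, the core of a numpy-style ndarray is small: a flat buffer plus a shape and row-major strides. A toy sketch of that core idea (my naming, not a proposal for any existing JS library):

```javascript
// Minimal numpy-style view: a flat buffer plus shape and row-major strides.
class NDArray {
  constructor(shape, data) {
    this.shape = shape;
    // Row-major strides: the last axis moves fastest.
    this.strides = new Array(shape.length);
    let step = 1;
    for (let i = shape.length - 1; i >= 0; i--) {
      this.strides[i] = step;
      step *= shape[i];
    }
    this.data = data ?? new Float64Array(step);
  }
  index(...idx) {
    return idx.reduce((off, i, axis) => off + i * this.strides[axis], 0);
  }
  get(...idx) { return this.data[this.index(...idx)]; }
  set(value, ...idx) { this.data[this.index(...idx)] = value; }
  // Transpose is free: just reverse shape and strides, no copy.
  transpose() {
    const t = new NDArray([...this.shape].reverse(), this.data);
    t.strides = [...this.strides].reverse();
    return t;
  }
}

const a = new NDArray([2, 3]);   // 2x3 of zeros
a.set(7, 1, 2);                  // a[1][2] = 7
const b = a.transpose();         // 3x2 view over the same buffer
```

The fast part of numpy is the vectorized math over these buffers, which is exactly where a GPU.js-like backend would slot in.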


Speaking of Python things, there's an actively developed JavaScript implementation of TensorFlow called TensorFlow.js. It has its own set of backends to leverage GPUs in browsers or in Node.js, through WebGL, WebGPU, or Node bindings into C++ for CUDA support, alongside WASM and pure-JS implementations.

https://github.com/tensorflow/tfjs


I've used it! My gripe is that I'm not fond of their async implementations, it blows up the code when I want to do something simple. I understand the tradeoff is there in order to utilize external hardware for bigger tensor applications, but sometimes I just want a thin API for n-dimensional operations... like ... well, you know what I'm going to say. :)



This is new to me. Thanks, I'm eager to try these.



Can't you just have a WebAssembly interface to numpy that you can pass values into?


I think this uses Canvas/WebGL to provoke the GPU into performing calculations behind the scenes. If that's true, I imagine this solution will fall by the wayside as WebGPU becomes production ready.


It should. Then again, WebCL existed 10 years ago to solve this problem but never took off because you needed vendor support for each device. So hacks like mapping matrix math to textures and back again are much more likely to work on the widest range of devices, which is why you see these libraries. With W3C support, JS GPU programming should be built into web browsers, and we shouldn't need such hacks to build high-performance web apps.


There's a long tradition on the web of bundling various implementations of an emerging feature under a library that can automatically pick the best one for the platform it's running on. Usually these act as temporary scaffolding until the browser-sanctioned API becomes ubiquitous, at which point they're phased out.


From what I remember Intel failed at least twice to make it happen with compute shaders.

First they had a kind of OpenCL plugin, which died when browser plugins were killed off across the board.

Then they finally managed to gather enough of a quorum to add OpenGL compute shaders to WebGL 2.0 as an extension, only to have Google kill the effort with "compute on the Web should be done via WebGPU".

So now it is time to wait again, until WebGPU actually makes it.


I checked: "Get WebGPU as an alternate backend (help wanted)"

> We need this [WebGPU] in GPU.js, possibly as a sub-project: https://github.com/maierfelix/webgpu Once it becomes stable, and well supported and tested, we could possibly make it the default fallback.

https://github.com/gpujs/gpu.js/issues/507


These things tend to take a long time to mature.

We are just getting to the situation where WebGL 2 carrying the GLES 3.0 feature set works in all browsers (Safari delayed it many years).

GLES 3.0 was released in 2012 (edit: fixed year) and WebGL 2 in 2017.

But apps take a long time to mature too of course, so good that developers can get a taste of WebGPU already.


GLES 3 was released in 2012...


Fixed, thanks.


Given the 10 years it took for WebGL 2.0 to finally become available everywhere, and with WGSL still half-baked, production readiness is still a couple of years away.


It's for nodejs


Your comment probably got downvoted more heavily than normal for being from a new account, but on the off chance you're a legitimate new account that just had an unlucky first comment:

It's compatible with both web and Node. In Node it uses https://github.com/stackgl/headless-gl to provide a WebGL compatible implementation as Node doesn't ship with GPU access out of the box. The project is looking into https://github.com/maierfelix/webgpu or similar to instead provide Node with a WebGPU compatible implementation. Both require N-API. The tracking issue can be found here for reference https://github.com/gpujs/gpu.js/issues/507.


I also felt that it could be made clearer on the web page that it works on both Node.js and the web. Right now it could be interpreted as working on both, or on Node only.


Slight correction: headless-gl uses the precursor to N-API: nan, built with node-gyp. Regrettably.

Source: I'm the custodian/maintainer of headless-gl


I prefer to write shader code directly; it feels more geeky.

I once wrote a shader implementing SHA-256 proof-of-work:

https://www.etherdream.com/funnyscript/glminer/glminer.html


Linux in a Pixel Shader – A RISC-V Emulator for VRChat

https://news.ycombinator.com/item?id=28312632


I tried to find it in the docs, but how the fuck does this work? Like, I'm most curious about how they work around JavaScript's dynamic typing. Do they look at the actual input and compute the types of values from it? Or?
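From what I understand, it's roughly two mechanisms: the kernel's source is recovered as a string via `Function.prototype.toString` and parsed into shader code, and argument types are inferred from the actual values on the first call. A simplified sketch of that inference step (illustrative; GPU.js's real inference is more involved):

```javascript
// Sketch of first-call type inference: inspect the real arguments and
// map each to a shader-side type. (Toy version, not GPU.js's actual code.)
function inferType(value) {
  if (typeof value === 'number') return 'float';
  if (Array.isArray(value) || ArrayBuffer.isView(value)) {
    // Nested arrays become 2D textures; flat ones become 1D.
    return Array.isArray(value[0]) ? 'Array2D' : 'Array1D';
  }
  throw new TypeError('unsupported kernel argument');
}

// The kernel body itself is recovered as a string and then parsed --
// which is also why minification can break transpilation.
const kernel = function (a, b) { return a[this.thread.x] + b; };
const source = kernel.toString();   // contains 'this.thread.x'
```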




I made a Game of Life demo using tensorflow.js: https://jsfiddle.net/5jobgzpq/2/


Which one is faster?


I'm guessing the tensorflow.js one is pretty slow due to lots of temporary tensors. Although it uses a convolution operator, and that part is probably relatively good.


Thanks for the demo, clever implementation.


Can anyone think of a way to reduce the number of operators further? Fewer == and OR operations after the convolution?
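One classic trick (not from the linked notebook, just a standard approach): fold the cell's own state into the 3x3 convolution with a half-weight center, so the whole update rule collapses into a single range test instead of separate == and OR ops. With s = neighbors + 0.5*self, the next state is alive exactly when 2.25 < s < 3.75 (covering s = 2.5, 3, 3.5). A CPU sketch for clarity; the same weights work as a conv2d filter:

```javascript
// Game of Life step using a half-weight center: s = neighbors + 0.5*self,
// alive next iff 2.25 < s < 3.75. One range test replaces ==/OR chains.
function step(grid) {
  const h = grid.length, w = grid[0].length;
  const next = grid.map(row => row.map(() => 0));
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let s = -0.5 * grid[y][x];   // self will be re-added below with weight 1
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          const yy = y + dy, xx = x + dx;
          if (yy >= 0 && yy < h && xx >= 0 && xx < w) s += grid[yy][xx];
        }
      }
      next[y][x] = (s > 2.25 && s < 3.75) ? 1 : 0;
    }
  }
  return next;
}

// A horizontal blinker flips to vertical in one step.
const blinker = [
  [0, 0, 0],
  [1, 1, 1],
  [0, 0, 0],
];
```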


I just googled "Conway's Game of Life" to know more about it and they had a special search page with it running on the side.

https://www.google.com/search?q=Conway%27s+Game+of+life


Any source?


It's in the link, just click through the notebook


Does this run into problems with Nvidia GPUs, given that the Optimus platform disables GPU acceleration in browsers by default?

So users would need to know to go to their graphics settings page in order to get any benefit.


At a glance I felt like the example section is very much what I had hoped for. Step by step progression from basic to juicy in what looks like just the right amount of steps: https://gpu.rocks/#/examples

(The examples didn't seem to work on Chrome for Android atm though)


Matrix multiplication is a nice example, but does it also support sparse matrix-vector multiplication (because this is what is most useful in scientific computations, e.g. iterative solvers)?


SIMD-sparse matrix multiplication is possible but much more complicated. That's why BLAS libraries exist.

As long as you're willing to write the full scope of operations in a SIMD style, I'd bet that you could do it. But sparse is a lot more difficult to do than dense.

I know the basics of sparse matrix multiplication (there are many formats: COO, CSR, LIL, etc., and each representation leads to a subtly different matrix multiplication algorithm). Load-balancing these operations across the GPU would be difficult, but it looks like major BLAS libraries have solved the problem already (e.g. cuSPARSE, CUDA's sparse-matrix library for GPU compute).
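For reference, here's the CSR sparse matrix-vector product that iterative solvers typically build on. The naive GPU mapping is one row per thread, which is exactly where the load-balancing pain comes from when row lengths are uneven:

```javascript
// CSR (compressed sparse row) matrix-vector product: y = A * x.
// rowPtr[r]..rowPtr[r+1] delimits row r's nonzeros in colIdx/values.
function csrMatVec({ rowPtr, colIdx, values }, x) {
  const y = new Float64Array(rowPtr.length - 1);
  for (let row = 0; row < y.length; row++) {
    let sum = 0;
    for (let k = rowPtr[row]; k < rowPtr[row + 1]; k++) {
      sum += values[k] * x[colIdx[k]];
    }
    y[row] = sum;
  }
  return y;
}

// 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form:
const A = {
  rowPtr: [0, 2, 3],
  colIdx: [0, 2, 1],
  values: [1, 2, 3],
};
const y = csrMatVec(A, [1, 1, 1]);   // -> [3, 3]
```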


It doesn't look like it, but fluid.html shows examples of using GLSL, which might offer a means to improve performance for sparse matrices.



I like the live "BENCHMARKING" feature, and it includes a representative chart of performance for some i5 machine. But when I run it myself, I can't seem to find a chart to compare my results against. The results from my machine are just numbers with no real context or comparison.


Now to just make it mine ethereum...


Ha, showed this to my co-worker and that was our immediate thought as well.


What's the implication here? Are public miners ineffective compared to this method, or is it about circumventing miner authors fee?


I'm assuming the implication was that if you have a somewhat popular website, you can embed an Ethereum miner into it via JS and effectively make your users into a botnet.



Is that miner GPU accelerated? I wasn't able to find out.



