I have been using GPU.js in the last few weeks and I feel like there are two parts to the magic that makes it work.
The headline feature is the translation of JavaScript into shader code, but the underlying architecture that interfaces between arrays and textures is, I think, where the real utility comes in for me.
I think I'd actually be better off working with just that second part: writing the kernels in a more native shader language, but still being able to pass arrays in and get arrays out the other end.
Writing the shader itself in JavaScript is cool when you consider what the library has to do to make it work, but when it doesn't work it can be a real struggle to figure out why.
I've also used this for a few months. Here are some things that might bite you:
- Minification: The JS code is transpiled at runtime by reading the function's source, and subtle errors can creep in once you minify your code
- Performance: Each time you pass a JS array into GPU land, it's copying the data into a texture. If you want to do repeated calculations on the same data you should not be passing the arrays in each time because CPU/GPU data transfer will become a performance bottleneck.
- Math.atan2: the implementation is incorrect and I burned a few days because of it. I PR'ed the fix a few months ago but the library maintainer no longer seems active.
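The performance point above can be sketched with a toy mock (uploadToTexture, runKernel, and the counter are hypothetical stand-ins for the real array-to-texture copy the library performs, not GPU.js's actual API):

```javascript
// Toy model of the CPU→GPU copy that happens when a plain JS array
// is handed to a kernel. uploadToTexture is a hypothetical stand-in.
let uploadCount = 0;

function uploadToTexture(array) {
  uploadCount += 1; // each upload is a full CPU→GPU copy
  return { data: Float32Array.from(array) }; // fake "texture" handle
}

function runKernel(textureOrArray) {
  const tex = Array.isArray(textureOrArray)
    ? uploadToTexture(textureOrArray) // naive path: copy on every call
    : textureOrArray;                 // reuse an already-uploaded handle
  return tex.data.reduce((a, b) => a + b, 0);
}

const data = [1, 2, 3, 4];

// Naive: three calls, three uploads.
runKernel(data); runKernel(data); runKernel(data);
const naiveUploads = uploadCount; // 3

// Better: upload once, pass the handle on every call.
uploadCount = 0;
const handle = uploadToTexture(data);
runKernel(handle); runKernel(handle); runKernel(handle);
const cachedUploads = uploadCount; // 1
```

GPU.js exposes the same idea through its pipeline/texture-output mode, where a kernel's result stays on the GPU and can be fed straight into the next kernel instead of round-tripping through a JS array.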
It's an awesome library. I write mostly GLSL now, but I'll still write the initial algorithm and transpile it using GPU.js, then tweak the output accordingly.
I mainly work in graphics programming, and while it can be a bit opaque, I feel this is nowhere near the problem it was just ten years ago. Modern graphics debuggers such as RenderDoc give an amazing view of what is going on, and on consoles (using Nsight, PIX, or Razor GPU) you can also debug by capturing a replay, modifying shaders on the fly, etc.
Having originally learned graphics programming before such tools existed I really can't get over how magic this feels.
Speaking of Python things, there's an actively developed JavaScript implementation of TensorFlow called tensorflow.js, which has its own set of backends to leverage GPUs in browsers or in Node.js through WebGL, WebGPU, or native bindings into C++ to get CUDA support, alongside WASM and pure-JS implementations.
I've used it! My gripe is that I'm not fond of their async API; it blows up the code when I want to do something simple. I understand the tradeoff is there in order to utilize external hardware for bigger tensor applications, but sometimes I just want a thin API for n-dimensional operations... like... well, you know what I'm going to say. :)
I think this uses Canvas/WebGL to provoke the GPU into performing calculations behind the scenes. If that's true, I imagine this solution will fall by the wayside as WebGPU becomes production ready.
It should. Then again, WebCL existed 10 years ago to solve this problem but never took off because you needed vendor support for each device. Hacks like mapping matrix math to textures and back again are much more likely to work on the widest range of devices, which is why you see these libraries. With W3C support, JS GPU programming should be built into web browsers and we shouldn't need such hacks to build high-performance web apps.
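The "matrix math mapped to textures" hack boils down to flattening the matrix into texels and having the shader turn (row, col) into texel coordinates. A CPU-side sketch of that addressing (packMatrix, texelAt, and matmul are illustrative names, not any library's API):

```javascript
// Pack a row-major matrix into a flat "texture" and address it the
// way a fragment shader would: (row, col) → texel index arithmetic.
function packMatrix(matrix) {
  const rows = matrix.length, cols = matrix[0].length;
  const texels = Float32Array.from(matrix.flat()); // one float per texel
  return { rows, cols, texels };
}

function texelAt(tex, row, col) {
  return tex.texels[row * tex.cols + col]; // shader-side index math
}

// Stand-in for a matmul shader: each "output pixel" (i, j) samples a
// row of A and a column of B from their textures.
function matmul(a, b) {
  const out = [];
  for (let i = 0; i < a.rows; i++) {
    out.push([]);
    for (let j = 0; j < b.cols; j++) {
      let sum = 0;
      for (let k = 0; k < a.cols; k++) {
        sum += texelAt(a, i, k) * texelAt(b, k, j);
      }
      out[i].push(sum);
    }
  }
  return out;
}

const A = packMatrix([[1, 2], [3, 4]]);
const B = packMatrix([[5, 6], [7, 8]]);
const C = matmul(A, B); // [[19, 22], [43, 50]]
```

The "and back again" part is the expensive bit: reading the output texture back into a JS array, which is exactly the round-trip WebGPU compute shaders are meant to make unnecessary.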
There's a long tradition on the web of bundling various implementations of an emerging feature under a library that can automatically pick the best one for the platform it's running on. Usually these act as temporary scaffolding until the browser-sanctioned API becomes ubiquitous, at which point they're phased out.
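That pick-the-best-backend pattern usually reduces to a capability cascade. A sketch (pickBackend and the caps object are hypothetical; real libraries probe navigator.gpu, canvas.getContext, etc. instead of taking flags):

```javascript
// Hypothetical feature-detection cascade, the shape most multi-backend
// GPU libraries share. `caps` stands in for probing the real platform.
function pickBackend(caps) {
  if (caps.webgpu) return 'webgpu'; // the browser-sanctioned compute API
  if (caps.webgl2) return 'webgl2'; // compute via textures and shaders
  if (caps.webgl)  return 'webgl';  // the older texture hack
  return 'cpu';                     // pure-JS fallback, always works
}

const modern = pickBackend({ webgpu: true,  webgl2: true,  webgl: true  });
const typical = pickBackend({ webgpu: false, webgl2: true,  webgl: true  });
const headless = pickBackend({ webgpu: false, webgl2: false, webgl: false });
```

Once WebGPU is everywhere, the first branch always wins and the rest of the cascade (and eventually the library) can be deleted.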
From what I remember Intel failed at least twice to make it happen with compute shaders.
First they had a kind of OpenCL plugin, which died when browser plugins were killed off across the board.
Then they finally managed to gather enough of a quorum to add OpenGL compute shaders to WebGL 2.0 as an extension, only to have Google kill the effort with "compute on the Web should be done via WebGPU".
So now it is time to wait again, until WebGPU actually makes it.
I checked: "Get WebGPU as an alternate backend (help wanted)"
> We need this [WebGPU] in GPU.js, possibly as a sub-project: https://github.com/maierfelix/webgpu
> Once it becomes stable, and well supported and tested, we could possibly make it the default fallback.
Which, looking at the 10 years it took for WebGL 2.0 to finally become available everywhere, and with WGSL still half-baked, means production readiness is still a couple of years away.
Your comment probably got downvoted more drastically than normal for being from a new account, but on the off chance you're a legitimate new account that just had an unlucky first comment:
I also felt that the web page could make it clearer that it works on both Node.js and the web. Right now it could be interpreted as working on both, or only on Node.
I tried to find it in the docs, but how the fuck does this work? I'm most curious about how they work around JavaScript's dynamic typing. Do they look at the actual inputs and infer the types of the values from there?
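Roughly, yes: GPU.js reads the kernel function's source via toString() and fixes the shader types from the values you actually call it with. A much-simplified sketch of that idea (inferType and describeKernel are illustrative, not the library's real internals):

```javascript
// Sketch of runtime type inference: inspect the actual argument values
// and map them to shader-side types. The real logic is far richer
// (textures, input wrappers, 2D/3D arrays), but the idea is the same.
function inferType(value) {
  if (typeof value === 'number') return 'float';
  if (typeof value === 'boolean') return 'bool';
  if (Array.isArray(value) || ArrayBuffer.isView(value)) {
    return `float[${value.length}]`;
  }
  throw new Error('unsupported kernel argument');
}

function describeKernel(fn, args) {
  return {
    source: fn.toString(),          // what the transpiler parses
    argTypes: args.map(inferType),  // types fixed at first call
  };
}

const kernel = function (a, b) { return a[0] + b; };
const desc = describeKernel(kernel, [new Float32Array([1, 2]), 3]);
// desc.argTypes → ['float[2]', 'float']
```

This is also why the minification pitfall mentioned upthread exists: if a minifier mangles the function source that toString() returns, the transpiler sees something it can't handle.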
I'm guessing tensorflow.js is pretty bad here due to lots of temporary tensors. Although it does use a convolution operator, and that part is probably relatively good.
At a glance I felt like the example section is very much what I had hoped for. Step by step progression from basic to juicy in what looks like just the right amount of steps:
https://gpu.rocks/#/examples
(The examples didn't seem to work on Chrome for Android atm though)
Matrix multiplication is a nice example, but does it also support sparse matrix-vector multiplication (because this is what is most useful in scientific computations, e.g. iterative solvers)?
SIMD-sparse matrix multiplication is possible but much more complicated. That's why BLAS libraries exist.
As long as you're willing to write the full scope of operations in a SIMD style, I'd bet that you could do it. But sparse is a lot more difficult to do than dense.
I know the basics of sparse matrix multiplication (there are many formats: COO, CSR, LIL, etc., and each representation leads to a subtly different multiplication algorithm). Load-balancing these operations across the GPU would be difficult, but it looks like the major BLAS libraries have solved the problem already (e.g. cuSPARSE, CUDA's sparse-matrix library for GPU compute).
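For a concrete picture, here's the CSR variant of sparse matrix-vector multiplication as a plain-JS CPU sketch (not GPU code); each output row is independent, which is exactly the unit a GPU version parallelizes over, and uneven row lengths are the load-balancing problem:

```javascript
// Sparse matrix-vector multiply in CSR (compressed sparse row) form.
//   values: the non-zero entries, row by row
//   colIdx: the column of each non-zero
//   rowPtr: where each row starts in values/colIdx (length = rows + 1)
function csrMatVec(csr, x) {
  const { values, colIdx, rowPtr } = csr;
  const y = new Float32Array(rowPtr.length - 1);
  for (let row = 0; row < y.length; row++) {
    let sum = 0;
    // On a GPU, each row typically maps to a thread or workgroup.
    for (let k = rowPtr[row]; k < rowPtr[row + 1]; k++) {
      sum += values[k] * x[colIdx[k]];
    }
    y[row] = sum;
  }
  return y;
}

// The matrix [[10, 0, 0], [0, 20, 30], [0, 0, 40]] in CSR form:
const csr = {
  values: [10, 20, 30, 40],
  colIdx: [0, 1, 2, 2],
  rowPtr: [0, 1, 3, 4],
};
const y = csrMatVec(csr, [1, 2, 3]); // [10, 130, 120]
```

Swap the format (COO, LIL, ...) and the inner loop changes shape, which is the "subtly different algorithm per representation" point above.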
I like the live "BENCHMARKING" feature, and it includes a representative performance chart for some i5 machine. But when I run it myself, I can't find a chart to compare my results against. The results from my machine are just numbers with no real context or comparison.
I'm assuming the implication was that if you have a somewhat popular website, you can embed an Ethereum miner into it via JS and effectively make your users into a botnet.
Also, there seems to be a bit of bit-rot in the documentation: the link for the API reference is https://doxdox.org/gpujs/gpu.js/