
This will likely build a version without GPU acceleration, I think?


I was trying to get AMD GPU support going in llama.cpp a couple of weeks ago and gave up after a while. 'rocminfo' shows that I have a GPU and, presumably, ROCm installed, but there were build problems I didn't feel like sorting out just to play with an LLM for a bit.

Kudos if Ollama has this sorted out.
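For reference, the llama.cpp README at the time described a ROCm/HIP build roughly along these lines; the exact option names have changed across versions (newer trees renamed the LLAMA_HIPBLAS flag), and the gfx target and model path below are just placeholders for your own card and file:

    # hedged sketch of a ROCm build; adjust AMDGPU_TARGETS to your GPU's gfx architecture
    CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ \
      cmake -B build -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS=gfx1030 -DCMAKE_BUILD_TYPE=Release
    cmake --build build -j
    # offload layers to the GPU at run time with -ngl
    ./build/bin/main -m model.gguf -ngl 32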


Builds with Metal support on my Mac M2
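For what it's worth, on Apple Silicon the Metal backend is enabled by default in recent llama.cpp builds, I believe; older trees needed an explicit flag, roughly:

    # sketch only; the flag name depends on the llama.cpp version
    cmake -B build -DLLAMA_METAL=ON
    cmake --build build -j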



