
Is it possible to run multimodal LLMs using their Vulkan backend? I have a ton of 4 GB GPUs lying around that only support Vulkan.


Yes, llama.cpp has very good Vulkan support.
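
For multimodal models you also need the matching mmproj (vision projector) GGUF next to the main model. Roughly, it looks something like this with llama.cpp built for Vulkan (flag and tool names from memory, so check the repo docs for the current ones; the file names here are just placeholders):

    # build llama.cpp with the Vulkan backend
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release

    # run a vision model: main weights + mmproj projector, offloading layers to the GPU
    ./build/bin/llama-mtmd-cli -m model.gguf --mmproj mmproj.gguf \
        --image photo.jpg -p "Describe this image" -ngl 99

With 4 GB cards you'll probably want a small, heavily quantized model (and -ngl turned down if it doesn't all fit). I believe the Vulkan backend can also split layers across multiple GPUs, which might put those spare cards to use.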



