> You can do local AI inference and get Claude Opus-level performance (Kimi K2.5) over a cluster of Mac Studios with Exo.Labs

Does it do distributed inference? What kinda token speeds do you get?
