
We (LM Studio) found the bug with the 31B model and a fix will be going out hopefully tonight


I am not deep in this world. What does it mean that you (LM Studio) fixed a bug in a model Google released?


There is a surprising amount of code needed in each of the inference frameworks (LM Studio, llama.cpp, etc.) to support each new model release. For example: formatting the input correctly using a chat template, parsing the output properly with the model-specific tokens the model provider decided to standardize on for their model, and more.
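To make the chat-template half concrete, here is a minimal sketch of what such a template does: it renders role-tagged messages into the exact prompt string a model was trained on. The control tokens below (`<|start|>`, `<|end|>`) are hypothetical placeholders, not any real model's tokens.

```python
# Minimal sketch of chat templating: turn {role, content} messages into
# the single prompt string the model expects. Token names are made up.
def apply_chat_template(messages):
    parts = []
    for msg in messages:
        parts.append(f"<|start|>{msg['role']}\n{msg['content']}<|end|>\n")
    parts.append("<|start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = apply_chat_template([{"role": "user", "content": "Hello!"}])
```

Getting even one of these tokens wrong typically doesn't crash anything; the model just quietly performs worse, which is why these bugs are hard to spot.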

This particular instance was a fix to the output parsing [1] in LM Studio, described like this:

"Adds value type parsers that use <|\"|> as string delimiters instead of JSON's double quotes, and disables json-to-schema conversion for these types."

[1]: https://github.com/ggml-org/llama.cpp/pull/21326/commits/a50...
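As an illustration of the parsing issue that commit message describes: if the model emits strings delimited by the literal token `<|"|>` rather than JSON double quotes, a stock JSON parser can't read its output directly. The converter below is a naive sketch of the idea, not LM Studio's actual fix.

```python
# Hedged sketch: convert model output using <|"|> string delimiters
# into valid JSON. Naive; assumes delimiters are balanced and the
# strings contain no embedded double quotes.
import json

DELIM = '<|"|>'

def to_json(raw: str) -> str:
    return raw.replace(DELIM, '"')

raw = '{<|"|>city<|"|>: <|"|>Paris<|"|>}'
parsed = json.loads(to_json(raw))
```

A real parser has to handle escaping and nesting properly, which is why the linked fix adds dedicated value-type parsers instead of a simple substitution.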



I am in this world, but am not familiar with this specifically.

My guess is that they found a bug with their implementation of the model using the weights Google released. These bugs are often difficult to track down because the only indication is that the model is worse with your implementation than with someone else's.


llama.cpp also fixed some chat template issues this afternoon. Could be related.


LM Studio (lmstudio.ai) | Systems Eng, Application Eng, Frontend Eng, Applied AI | New York City | ONSITE | Full-time

We're hiring for the following roles:

- Systems Eng: work on LM Studio's runtime and internal libraries, contribute to llama.cpp and MLX (C++, Python)

- Application Eng: work on LM Studio's desktop app, daemon, SDKs, and CLI (TypeScript, NodeJS, C++, React)

- Applied AI: push the boundaries of what's possible for agentic software with local AI (Python, C++, TypeScript)

- Frontend Eng: invent new kinds of human / AI interaction UX, ship software to millions of people worldwide (TypeScript, React)

See all roles and apply here: https://lmstudio.ai/careers



Changed to the blog post from https://pypi.org/project/venvstacks/, since it gives more background. Thanks!


We probably just missed it. Can you please ping me on it? “@yagil” on GitHub


Thanks! We actually totally permit work use. See https://lmstudio.ai/enterprise.html


An "email us" link is a bit of a discouragement from using it for work purposes. I want a clearly defined price list, at least for some entry levels of commercial use.


Or even just a ballpark. Are we talking $500, $5,000, $50,000 or $500,000?


This. When companies don’t list prices, it automatically gives me a “they want to rip you off” vibe. Put in the effort and define enterprise pricing. If you later find that it isn’t right, change it.


Thanks, what license is it under? This means that anyone who wants to try it at work has to fill that out though, right?


Hello Hacker News, Yagil here, founder and original creator of LM Studio (now built by a team of 6!). I had the initial idea to build LM Studio after seeing the OG LLaMa weights ‘leak’ (https://github.com/meta-llama/llama/pull/73/files) and then later trying to run some TheBloke quants during the heady early days of ggerganov/llama.cpp. In my notes LM Studio was first “Napster for LLMs”, which evolved later into “GarageBand for LLMs”.

What LM Studio is today is an IDE / explorer for local LLMs, with a focus on format universality (e.g. GGUF) and data portability (you can go to the file explorer and edit everything). The main aim is to give you an accessible way to work with LLMs and make them useful for your purposes.

Folks point out that the product is not open source. However, I think we facilitate the distribution and usage of openly available AI and empower many people to partake in it, while protecting (in my mind) the business viability of the company. LM Studio is free for personal experimentation, and we ask businesses to get in touch to buy a business license.

At the end of the day LM Studio is intended to be an easy yet powerful tool for doing things with AI without giving up personal sovereignty over your data. Our computers are super capable machines, and everything that can happen locally w/o the internet, should. The app has no telemetry whatsoever (you’re welcome to monitor network connections yourself) and it can operate offline after you download or sideload some models.

0.3.0 is a huge release for us. We added (naïve) RAG, internationalization, UI themes, and set up foundations for major releases to come. Everything underneath the UI layer is now built using our SDK which is open source (Apache 2.0): https://github.com/lmstudio-ai/lmstudio.js. Check out specifics under packages/.

Cheers!

-Yagil

