Hacker News | new | past | comments | ask | show | jobs | submit | well_ackshually's comments | login

Your Godot scene doesn't work in Unreal Engine. I always felt that composition is unnecessarily hard in game engine infrastructure. I do web development as a hobby, and any .js file targeting the DOM works on any browser. I can't even run GDScript in Unity!

You're just comparing the wrong things. Yes, when you're locked into one environment, everything works together well. The moment you interact with outside systems, all hell breaks loose. If anything, what you're saying is just that platforms should have a much larger stdlib, or abstract platform differences properly (hint: this is only doable if you're a game engine and can afford to absolutely ignore _everything_ the OS does and just concern yourself with reinventing every wheel).

Not to say there's nothing good in the games side of things: a bunch of software could benefit from accepting that some systems like a big fat central message bus and singletons can be good when handled well.


If your definition of "real businesses" is "Fortune 500, US based tech company with more money than sense or just happy to bleed VC money", sure, 99.999% of businesses are not real businesses.

You may also have a very narrow view of how the world actually works. I'll leave it as an exercise to the reader to figure out which one it is.


Fortune 500 and VC money are disjoint sets.

Indeed, yours has both more allocations and a bug (+3 instead of +5)

More allocations is a good point but you're being pedantic about the bug... how do you know the +5 isn't the bug? :P

The US is the biggest threat to the world right now, and is actively supporting a genocide in Palestine as well as war crimes in Lebanon.

I'm perfectly happy to let the Chinese get a piece of the pie and fight the US, no matter how bad they are right now.


>Valve certainly doesn't seem to give a shit

Valve is nonexistent in the modern gamedev community, Source 2 is used by approximately no one, and overall, releasing one game every decade doesn't exactly make Valve a prominent voice in the gamedev community.

However, yes, companies lock their engines at whatever version they were on when they started their project, and any upgrades come from internal engineering. Decade-long undertakings like the "recent" FF7 remakes are still on UE4 and will stick with it for the third game, because that's what they started with, and at this point the rendering pipeline and workflow are theirs.


> he's not a con.

When you're setting the bar that low, sure.

He's about as knowledgeable as the junior you hired last week, except that he speaks from a position of authority and gets retweeted by the entire JS slop sphere. He's LinkedIn slop for Gen Z.


I need you to know that you're defending and justifying war crimes while blindly swallowing the propaganda of a genocidal regime.

Needless to say, it's not exactly making you look too good.


>solved the UX problem.

>One command

Notwithstanding the fact that there's about zero difference between `ollama run model-name` and `llama-cpp -hf model-name`, and that running things in the terminal is already a gigantic UX blocker (Ollama's popularity comes from the fact that it has a GUI), why are you putting the blame back on an open source project that owes you approximately zero communication?
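For what it's worth, the two invocations look like this side by side. Model names below are placeholders, and the binary name and `-hf` flag are from my understanding of recent llama.cpp releases, where the chat tool is called `llama-cli`:

```shell
# Both one-liners pull a model and drop you into an interactive chat.

# Ollama: fetches from Ollama's own model registry
ollama run llama3.2

# llama.cpp: fetches a GGUF directly from a Hugging Face repo
llama-cli -hf bartowski/Llama-3.2-1B-Instruct-GGUF
```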


> Notwithstanding the fact that there's about zero difference between `ollama run model-name` and `llama-cpp -hf model-name`

There is a TON of difference. Ollama downloads the model from its own model library server, sticks it somewhere in your home folder with a hashed name and a proprietary configuration that ignores the built-in metadata specified by the model creator. So you can't share it with any other tool, you can't change parameters like temp on the fly, and you are stuck with whatever quants they offer.
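Concretely, on a typical Linux install the two setups look roughly like this. Paths reflect my understanding of Ollama's defaults and may differ per platform; the model filename is a placeholder:

```shell
# Ollama's private store: content-addressed blobs plus its own manifests
ls ~/.ollama/models/blobs/
# sha256-<digest>   <- the actual model weights, under a hashed name
ls ~/.ollama/models/manifests/registry.ollama.ai/
# library/<model>/<tag>   <- Ollama-specific metadata, not the GGUF's own

# A plain GGUF you manage yourself works with any llama.cpp-based tool,
# and you can override sampling parameters like temperature on the fly:
llama-cli -m ./Llama-3.2-1B-Instruct-Q4_K_M.gguf --temp 0.2
```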


This was my issue with the current client ecosystem. I get a .gguf file. I should be able to open my AI client of choice, go File -> Open, and select a .gguf, same as opening a .txt file. Alternatively, if I have cloned an HF model, all AI clients should automatically check the HF cache folder.

The current offerings have interfaces to Hugging Face or some model repo. They get you the model based on what they think your hardware can handle and save it to %user%/AppData/Local/%app name%/... (on Windows). When I evaluated running locally, I ended up with 3 different folders containing copies of the same model in different directory structures.

It seems like Hugging Face uses %user%/.cache/..; however, some of the apps still fetch the HF models and save them to their own directories.

Those features are 'fine' for a casual user who sticks with one program, but it seems designed from the start to lock you into their wrapper. In the end they are all using llama.cpp, ComfyUI, OpenVINO, etc. to abstract away the backend. Again, this is fine, but hiding the files from the user seems strange to me. If you're leaning on HF, then why not use its .cache?
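For reference, the shared cache in question looks roughly like this, per my understanding of `huggingface_hub`'s defaults; apps that used it would deduplicate downloads automatically:

```shell
# Default Hugging Face hub cache (location can be moved via the HF_HOME env var):
# ~/.cache/huggingface/hub/
#   models--TheOrg--TheModel/
#     blobs/                  <- content-addressed files, stored once
#     snapshots/<commit>/     <- original filenames, symlinked into blobs/
#     refs/                   <- branch -> commit mapping
```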

In the end I get the latest llama.cpp releases for CUDA and SYCL and run llama-server. My best UX has been with LM Studio and AI Playground. I want to try LocalAI and vLLM next. I just want control over the damn files.
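That workflow is easy to reproduce: llama-server takes a plain GGUF path and exposes an OpenAI-compatible endpoint, so the file stays wherever you put it. The flags and endpoint below are from my understanding of recent llama.cpp releases, and the model path is a placeholder:

```shell
# Serve a GGUF you control; no hidden cache, no renaming
llama-server -m ./my-model.gguf --port 8080 -c 4096

# Any OpenAI-compatible client can then talk to it:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hello"}]}'
```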


Check out Koboldcpp. The dev has a specific philosophy about things (minimal or no dependencies, no installers, no logs, don't do anything to the user's system they didn't explicitly ask for) that I find particularly agreeable. It's a single executable that includes the kitchen sink, so there is no excuse not to try it.

That's one of my major annoyances with the current state of local model infrastructure: all the cruft around what should be a simple matter of downloading and using a file. All these cache directories and file renamings and config files that point to all of these things. The special, bespoke downloader CLI tools. It's just kind of awkward from the point of view of someone who is used to simple CLI tools that do one thing. Imagine if sqlite3 required all of these paths and hashes and downloaders and configs rather than letting you just run:

   sqlite3 mydatabase.db

> Ollama's popularity comes from the fact that it has a GUI

It's not the GUI, it's the curated model hosting platform. Way easier to use than HF for casual users.


It also made it easy for casual users to think that they were running DeepSeek.

LM Studio also offers curation, while giving credit to llama.cpp and offering easy search across all of Hugging Face's GGUFs.

Then don't. Obsidian's search is plenty good enough to find that note later.

Obsidian is the simplest thing in the world. Write text.


I find it the weakest link. It always tends to find the stuff that's least relevant first somehow. It's pretty terrible.

Having said that, I have the organisational skills and affinity of a baboon, so I really need to be able to dump my notes somewhere and still find them again somehow. This is not typical for this kind of package; I notice a lot of people are meticulous and follow complex structures like Zettelkasten.

For me that will never work though. I'll just subconsciously mark the system as "not worth the hassle" and never touch it again.


"I sent money to the god-knows-how-many-trillion-parameter, fully closed-source machine built on billions of dollars, and it worked better than the model I can self-host from the guys next door"

yeah, no shit? All you're saying is that you're happily locking yourself into models you have zero control over and that Anthropic can fuck you over at any time.

However, yes, Mistral is not in the business of providing you with a perfect, general-purpose model. They fine-tune from their base models for specific tasks.


Mistral OCR 3 isn't open weights and isn't available for download. It's only available via API, or to companies via paid consulting with Mistral.

"For organizations with stringent data privacy requirements, Mistral OCR offers a self-hosting option. This ensures that sensitive or classified information remains secure within your own infrastructure, providing compliance with regulatory and security standards. If you would like to explore self-deployment with us, please let us know."

https://docs.mistral.ai/models/ocr-3-25-12
https://mistral.ai/news/mistral-ocr-3

