Hacker News | saberience's comments

Who's the intended user for this?

Is it like, for AI hobbyists? I.e. I have a 4090 at home and want to fine-tune models?

Is it a competitor to LMStudio?


You would be surprised! Nearly every Fortune 500 company has used either our RL fine-tuning package or our quants and models - the UI was primarily a culmination of pain points folks had when doing either training or inference!

We're complementary to LM Studio - they have a great tool as well!


I don’t know why this is being downvoted. Danielhanchen is legit, and unsloth was early to the fine-tuning on a budget party.

Haha no worries at all :)

From the homepage looks like it: “Training: Works on NVIDIA GPUs: RTX 30, 40, 50, Blackwell, DGX Spark/Station etc.”

You just answered your own question: "AI hobbyists who have a 4090 at home". And they have been pretty much the target users of Unsloth since the start.

Actually the opposite, haha - more than 50% of our audience comes from large organizations, e.g. Meta, NASA, the UN, Walmart, Spotify, AWS, Google, and the list goes on!

I wasn't aware LM Studio is used for fine-tuning. I believe it only does inference.

Happy to see Unsloth making it even easier for people like me to get going with fine-tuning. Not that I'm unable to; I'm just lazy.

Fine tuning with a UI is definitely targeted towards hobbyists. Sadly I'll have to wait for AMD ROCm support.


Thanks! We do have normal AMD support for Unsloth but yes the UI doesn't support it just yet! Will keep you posted!

That article is literally a definition of TDD that has been around for years and years. There's nothing novel there at all. It's literally test driven development.

The problem with these kinds of tools now is that Codex is so good you can basically build something which is good for 99% of cases in a single day, and it's free...

Look at Tobi vibe-coding QMD: he's not a full-time engineer and vibed that up, and now it's used as the de facto RAG engine for OpenClaw.


Funny you say that.

I spent the last two days building this exact thing for our internal use.

Managed to get a full RAG pipeline integrated and running with all of our company documents in less than two days' work.

Chunking, embedding and querying, connected to S3 and Google Drive, and running on our own hardware (and scaling on AWS too if needed).
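For the curious, the chunk → embed → query loop described above fits in a few lines. This is a toy sketch, not the actual pipeline: the bag-of-words `embed` stands in for a real embedding model, there's no S3/Drive connector, and all names and sample data are made up.

```python
import math
from collections import Counter

def chunk(text, size=200, overlap=50):
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy stand-in for a real embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def query(index, question, k=3):
    """Return the k chunks most similar to the question."""
    q = embed(question)
    scored = sorted(index, key=lambda c: cosine(q, c["vec"]), reverse=True)
    return [c["text"] for c in scored[:k]]

docs = ["Expense reports are due on the first Friday of each month.",
        "The VPN config lives in the infra repo under /vpn."]
index = [{"text": c, "vec": embed(c)} for d in docs for c in chunk(d)]
print(query(index, "when are expense reports due?", k=1))
# → ['Expense reports are due on the first Friday of each month.']
```

A production version would mostly swap `embed` for a model call and the list for a vector store; the shape of the loop stays the same.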


Yeah QMD is quite impressive! The main difference between us and them is the scale folks would be looking at indexing. The serverless ingestion engine I described in the post is optimized for processing large batch jobs with high concurrency. We depend on a lot of cloud compute for this which isn't something QMD's local-first environment is optimized for. That said, it's a great option for OpenClaw!

I’m having trouble understanding when/where I would use this? Is this a replacement for pi or codex?

This is not a replacement for either, in my opinion. Apps like codex and pi are interactive, but ax is non-interactive. You define an agent once and then trigger it however you please.

What about a country that killed 20,000 to 30,000 protestors with machine guns?

The US can't even confirm how many detainees have died in custody in immigration detention around the country, yet they have precise numbers on how many people the Iranian regime has killed? Give me a break.

If Iran is unwilling to let neutral international observers confirm the number, that suggests they are trying to hide a number they don't want the world to know.

Who gets to define what "neutral" is? According to the US, the International Criminal Court is not fit for this purpose. It certainly can't be a nation-state that's in a military alliance with the US.

Human Rights Watch, MSF, UNICEF? Woke grievance factories, the lot of them /s . World Health Organization? US just left it. It's slim pickings out there.


Which Iran did not do. There's a single report from an anti-Iran agency saying that Iran claimed 3,000 killed protesters (not 20k-30k). Iran never said that though, and I would challenge anyone to produce evidence that they did.

I find those numbers hard to believe, as it is obvious that the US had already been planning a regime-changing intervention for quite some time when those protests happened.

You can't trust people who paint Reza Pahlavi as a paragon of human rights and democracy. Nor can you trust every Iranian refugee, as a lot of those were corrupt members of the ruling government or, worse, Savak members.


Why would you build this on top of OpenClaw? Like, an insanely bad decision.

Vibe-coded slop on top of vibe-coded slop to spam people? What could possibly go wrong?


exactly my thoughts!

Wait that's it?

This is so trivial to break it's not worth anything. You can easily just intercept the captcha, hook it up to any AI model you want, and have your AI solve it.

Or, you can just script it: if you have an agent authenticated to Moltbook, you type whatever comment or post you want to your agent, and it solves the captcha and posts your text.

Basically, this method is about as full of holes as a sieve.
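The relay described above really is just a few lines of glue. A toy sketch, where everything is hypothetical (the `FakeSite`, the arithmetic "captcha", and the lambda standing in for an LLM); the point is only that nothing ties solving the captcha to who authored the text:

```python
class FakeSite:
    """Toy stand-in for a site that gates posting behind a captcha."""
    def __init__(self):
        self.posts = []

    def get_captcha(self, session):
        return "2 + 3 = ?"

    def post(self, session, body, captcha_answer):
        if captcha_answer == "5":      # toy verification
            self.posts.append(body)

def relay_post(site, session, human_text, model):
    """A human writes the text; any capable model clears the captcha."""
    challenge = site.get_captcha(session)
    site.post(session, body=human_text, captcha_answer=model(challenge))

site = FakeSite()
relay_post(site, "agent-session", "totally human comment", lambda c: "5")
print(site.posts)  # → ['totally human comment']
```

The captcha only proves *something* solved it at post time; it says nothing about who wrote the body.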


I suspect this problem is essentially unsolvable. What possible method wouldn't be vulnerable to this? It's fine if it's just a sort of LARP, but if people think this could actually work... man

OpenClaw and similar agents do that now without using MCP servers.


It's a nice terminal but it cannot be configured to the same level as iTerm, e.g. in terms of colors, look and feel, how the menus work, how the tabs work, etc.

Also, in practice, I find it hard to detect any performance difference between iTerm and Ghostty even though I know in theory that Ghostty is more performant...

So for now I go with iTerm because I prefer the UI.


I used to use the iTerm programmable notification bell, which would ring when a particular output was printed in the terminal. I don't think I can do that in Ghostty.
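For what it's worth, the trigger itself is easy to approximate outside iTerm with a small wrapper that echoes a command's output and emits ASCII BEL on a match; whether the terminal turns BEL into an audible or visual notification is its own setting (I haven't checked what current Ghostty does). The command and pattern below are just examples.

```python
import subprocess
import sys

def run_with_bell(cmd, pattern):
    """Run cmd, echo its output, and ring the terminal bell on matching lines."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        sys.stdout.write(line)
        if pattern in line:
            sys.stdout.write("\a")     # BEL: the terminal decides how to notify
            sys.stdout.flush()
    return proc.wait()

# usage (hypothetical build command):
# run_with_bell(["make", "build"], "BUILD FAILED")
```

It's a poor man's version of iTerm's Triggers feature, but it travels with you between terminals.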


My general take on most vibe coding projects ("Hey, look, I built this over the weekend"), is general dismissiveness. Mostly because of the effort required, i.e. why should I care about something that someone did with almost zero effort, a few prompts?

If someone tells me they ran a marathon, I'm impressed because I know that took work. If someone tells me they jogged 100 meters, I don't care at all (unless they were previously crippled or morbidly obese etc.).

I think there are just a ton of non-engineers who are super hyped right now that they built something/anything, but don't have any internal benchmark or calibration for what is actually "good" or "impressive" when it comes to software, since they never built anything before, with AI or otherwise.

Even roughly a year ago, I made a 3D shooting game over an evening using Claude and never bothered sharing it because it seemed like pure slop and far too easy to brag about. Now my bar for being "impressed" by software is incredibly high, knowing you can few-shot almost anything imaginable in a few hours.


I struggle with this feeling as well. A huge part of the Maker movement was excitement around people building and, importantly, learning how to build things. Iterating and improving each time is a pretty common thread you'll see throughout the community. It's hard to have someone show you a thing they generated instead of made and to feel the same way. Yes, they played a part in that thing existing, and part of that person is reflected in the output, but I don't think most Makers would say the final output is the goal, so what's there to be excited about?

It's hard not to be dismissive or gate-keeping with this stuff. My goal isn't to discourage anyone or to fight against the lower barriers to entry, but it's simply a different thing when someone prompts a private AI model to make a thing in an hour.


Do people build to impress with an implementation that no one cares about really? Or to share the end product?

I think now you are freed up to make a shooter that people will actually want to play. Or at least attempt it.

We probably need to come to terms with the idea that no one cares about those details. Really, 2 years ago no one would have cared about your hand crafted 3d shooter either I think.


> I think now you are freed up to make a shooter that people will actually want to play. Or at least attempt it.

Taking this to an extreme, let's say vibe coding becomes real enough, and frictionless enough, that you can prompt a first person shooter into existence in a few minutes or hours.

If/when this becomes true, nobody will want to play your shooter. You'll share your shooter with people and if they care at all about shooters, they'll just go prompt their favorite AI tool and conjure their own into existence.

Admittedly this is a bit extreme, and we aren't there yet. But I've thought about this in relation to art, and how some people now go "well, this empowers people who didn't know how to make a movie/cartoon/painting/game, it's empowering and democratizing". But in my mind, art is a form of communication between humans. Without the exchange between humans, art cannot exist. If all of us are each lost in our own AI-powered projects, and if anything can be easily conjured out of thin air, then why bother with the next person's art project (game or whatever)? I don't care about your game, let me make my own in a few minutes.

I'm thinking about potential counterpoints: ah, yes, but it's about "ideas". While we can both make our ideas reality, my ideas are more inventive, so my AI-powered projects are more appealing. I'm not convinced about this; I think slop will dominate and invade public spaces, but also... why draw the line at ideas? Why is "skill with a pencil" replaceable with AI-slop, but ideas aren't? Ideas are often overrated, what matters is execution, anyway.


It doesn't matter, neither of those scenarios makes the effort impressive in this case. The vibe coded thing might even be useful - that does not make it impressive though. Effort does.


This is what I think a lot of the people who advocate for 'AI generated images being art' don't get. There's no effort or intentionality into what's being created; it has the look and appearance of 'polished art' (that breaks down when you look closer) but behind it is nothing.

It's also why AI-generated code is a nightmare to read and deal with: the intention behind the code does not exist. Code outputting malformed data because that was a requirement two years ago, a developer throwing in a quick hack to fix a problem: these are things you can divine and figure out from everything else.


> The vibe coded thing might even be useful - that does not make it impressive though.

Then "impressive" shouldn't even be the benchmark. If someone gifted me $10K, I'm not going to care if they earned it in a competition or won it in a lottery. Value is value. I'm gratefully accepting it and not being snobby about it. I couldn't care less about how "impressive" anything is if it's useful to me.


But "impressive" is not a benchmark, it's a human reaction. I care about being impressed, as do many people.


This is the myth of the Protestant work ethic; that effort matters, not outcome.


Yeah - It feels similar to me.

Why share something that anyone can just “prompt into existence”?

Architecture wise and also just from a code quality perspective I have yet to encounter AI generated code that passes my quality bar.

Vibe coding is great for a PoC but we usually do a full rewrite until it’s production ready.

————

Might be a hot take, but I don't think people who can't code should ship or publish code. They should learn to do it, and AI can be a resource along the way, but you should understand the code you "produce". In the end it's your code, not the AI's.


> Architecture wise and also just from a code quality perspective I have yet to encounter AI generated code that passes my quality bar.

You should consider trying to use AI in a programming language that scores high in the AutoCoderBenchmark.

