So what does liberal even mean these days? California is passing bs like age verification in the OS, and Montana is protecting my right to live the way I want in my own home, running whatever AI models suit me as long as I am not bothering anyone. That's just another "none of the government's business" personal freedom issue, like pot or sexuality, so why aren't blue states all over it? And yes, using tuned LLMs can be like an acid trip, but the distance between having a trip at home and tangible harm is much greater than with access to guns, knives, power tools, cars, and rodent poison, yet at least some of those are widely available to law-abiding citizens in every state. Government intervention can be staged at the points where there is evidence of actual imminent harm, like problematic public behavior. Why are Democrats the new "Reefer Madness" pearl clutchers, and why should I still believe they have anything to do with living the way you want?
1) Right now, as things stand, the Democrats are our only alternative to something far more evil. Until we ratify a new constitution which implements a proportional, parliamentary government, the two-party system means you have to pick the lesser of two evils. In the interim maybe we can banish the Republican party to a niche constituency of openly reprehensible racists and bigots, and set up the Democrats as the new right wing and the Greens as the new left wing, but right now it is morally incumbent upon every American to vote Democrat.
2) Read your Marx. Liberty doesn't arrive from toxic individualism; it arrives from the fulfillment of man's nature as a social-creative animal. AI is a tool used by the bourgeoisie to further alienate us from each other, as is right-libertarian individualism.
There are 1-bit-average GGUFs of large models; not perfect quality, but they will hold a conversation. These days there is also quantized finetuning to heal the damage.
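Trying one is only a few lines with llama-cpp-python (a rough sketch; the GGUF file name is just a placeholder for whatever IQ1_S-class quant you download):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Load a ~1-bit (IQ1_S) GGUF quant of a large model; the path is a placeholder
    llm = Llama(model_path="big-model-IQ1_S.gguf", n_ctx=4096)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What do you lose at 1-bit quantization?"}]
    )
    print(out["choices"][0]["message"]["content"])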
Well, it's open training in the sense that the code is open source and you are free to fix it so it trains successfully. That's consistent with how open source works generally. In my experience, unsloth is where training for new models usually gets fixed first.
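For the quantized finetuning part, a minimal sketch of unsloth's QLoRA path looks roughly like this (the model name and hyperparameters are just illustrative):

    # pip install unsloth
    from unsloth import FastLanguageModel

    # Load a 4-bit quantized base model and attach LoRA adapters (QLoRA-style)
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Qwen2.5-7B-bnb-4bit",  # illustrative base model
        max_seq_length=2048,
        load_in_4bit=True,
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    # From here, train with trl's SFTTrainer on your own dataset as usual.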
Try writing code from a description without looking at the picture or the generated graphics. A vision LLM, prompted to find the coordinates of different features and to use lines/curves to match them, might do better.
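Roughly what I have in mind (a sketch against an OpenAI-compatible vision endpoint; the model name and image file are placeholders):

    import base64
    from openai import OpenAI

    client = OpenAI()  # or any OpenAI-compatible endpoint serving a vision model

    with open("target_plot.png", "rb") as f:  # placeholder image
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "List pixel coordinates of the key features in this image, "
                         "then propose lines/curves that match them."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)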
That's very impressive. Your LLM actually wrote correct code for a full relational database on the first try; sure, it takes 2.5 seconds to insert 100 rows, but it stores them correctly and SELECT is pretty fast. How many humans can do that without a week of debugging? I would suggest you install some profiling tools and ask it to find and address the hotspots. How long, and how many people, did SQLite need to get to where it is?
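For the profiling step, even the stdlib is enough as a first pass (a sketch; insert_rows here is a stand-in for whatever function the generated code actually exposes):

    import cProfile
    import pstats

    # Profile the slow path (insert_rows is a placeholder for the generated DB's insert loop)
    cProfile.run("insert_rows(100)", "insert.prof")

    # Print the top 10 cumulative-time hotspots, then paste them back to the LLM
    pstats.Stats("insert.prof").sort_stats("cumulative").print_stats(10)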
The actual task is usually to build something that looks like a dozen different open source repos combined, but taking just the parts necessary for the task at hand and adding glue / custom code for the exact thing being built. While I could do it, an LLM is much faster at it, and most importantly, I would not enjoy the task.
There are so many inference providers not working for the Department of War. Even Alibaba; sure, China has lots of issues, but they are not bombing anyone right now, if that's your first priority. Otherwise, there are smaller US / European / Asian companies with a purely civilian focus. The SOTA open-weights models they serve are perfectly suitable for coding and chat. I run a local Qwen3.5-122B-A10B-NVFP4 instance and it writes entire Android apps from scratch, and that's a mid-sized model.
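Assuming it's served behind an OpenAI-compatible endpoint (vLLM, llama.cpp server, etc.), pointing tools at it is trivial (a sketch; the port and model name just reflect my local setup):

    from openai import OpenAI

    # Local OpenAI-compatible server; no real API key needed
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

    resp = client.chat.completions.create(
        model="Qwen3.5-122B-A10B-NVFP4",
        messages=[{"role": "user",
                   "content": "Scaffold a minimal Android note-taking app in Kotlin."}],
    )
    print(resp.choices[0].message.content)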
Sorry for the off-topic question, but what hardware are you running Qwen3.5-122B-A10B-NVFP4 on? Is it physically local or just self-administered? Thanks in advance.
Can you give a list of high-quality alternatives? Morally speaking, I would put China on par with the US if not worse (due to their ongoing Uyghur genocide). I will check out Qwen3 but would be interested in others.
Somewhat of a devil's advocate here, because I am very familiar with corporate idiocy. But how do you define a non-sociopathic corporate scenario where a company makes a lot of money from a good product they develop? Even if done in a maximally practical and emotionally intelligent way, this still requires changes from the research phase, no?
Do you have evidence that they were sacked, rather than resigned because they would rather work in a different direction from the one the company is taking?
>But how do you define a non-sociopathic corporate scenario
Corporate structures are sort of sociopathic by default. There's no empathy globule in the corporate hierarchy, and everyone is motivated to put the corporation's interests first.
This isn't even a criticism really, it's just the reality. Corporations are like paperclip-maximising AIs, but for shareholder profit.
Well, Alibaba hiring a Gemini guy to run Qwen suggests they want to make Qwen into a big consumer / enterprise business like Gemini. I am not sure I blame them, even if it clashes with how their top researchers were hoping Qwen would be run. Most obviously, it's natural for a company to want to make money on things they paid to develop. But also, the world needs more competition in AI businesses just like it needs competition in AI research. I wouldn't mind Qwen Code growing into a commercial-grade competitor to Claude Code that is better, faster, and cheaper. I am sure the talented researchers can find a new home at Moonshot AI, or even a US college or startup.
The problem is that Google treats its customers as college kids who can be banned from a college maker lab for using too much 3D-printing filament, rather than as entrepreneurs who are trusting their livelihood to a service provider that promises to be reliable. If the War Department uses too many Gemini tokens, do they cut them off, make them go through a recertification process, and permaban them the next time around?
Which means that anyone serious about AI and not going the local route should be using a provider with a better reputation. I don't know whether Alibaba, Z.ai, or Moonshot AI are also known for hair-trigger responses; otherwise they could be decent options for coding AI. If not, is it time to look for smaller providers with a good reputation?
"it's worth considering that there are many people with incredibly strong anti-LLM views, and those people tend to be minorities or other vulnerable groups."
I have pretty low expectations for human code in that repository.
The response mentioning minorities is obviously in bad faith. Even if true, it's not really relevant, and most likely serves as a way to tie LLM use to slavery, genocide, or oppression without requiring a rational explanation.
I just read it, and found no bad faith in it. It was polite, not pushy, explained the argument well (though of course you may disagree with it), gave a business reason, and even ended with “thank you for reading and considering this, if you do”.
> and most likely serves as a way to tie LLM use to slavery, genocide, or oppression without requiring rational explanation.
Assuming and ascribing nefarious motivations to a complete stranger can be considered bad faith, though. Probably not your intention, but that’s how it came across.
I have observed this pattern before. Usually minority groups are mentioned in an attempt to shift a debate toward values (which basically means no meaningful debate if you disagree) and away from technical considerations (which arguably deserve the most attention in a software product).
Aside from that, the statement is not empirically true (from my perspective at least). Evidence isn't provided either. I'm not saying that the commenter consciously wanted to tie LLM use to those negative things, but it could be done subconsciously, because I have genuinely seen those arguments before.
I understand your point and believe you believe it, which is why I mentioned I don't think you were arguing in bad faith. What I am saying is I don't think the commenter in question was acting in bad faith either, because that requires deception. In other words, it seems to me that commenter, like yourself, was arguing genuinely. Whether one agrees with their argument (or yours) is a different matter altogether, but bad faith it doesn't seem to be.