Hacker News | new | past | comments | ask | show | jobs | submit | organsnyder's comments

I'd assume they have a botnet to parallelize it. Though depending on where you live (not that they'd be using their own machines), fast pipes are fairly common: I have a 5 Gbps symmetrical fiber connection to my home in Michigan.

Always has been.

Finding a vulnerability by looking at the diff that fixed it is very different from just looking through the code.

They're saying to do that scan to every diff before release, to see if it finds anything.

I believe their point was that:

"How likely is this diff a patch for an existing vulnerability?"

Seems to be an easier question to answer than

"Are there any new vulnerabilities introduced by this diff?"

In other words identifying that a patch is for a vulnerability is typically easier than finding the vulnerability in the first place.
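That triage idea can be sketched in a few lines. Everything here is hypothetical: `classify` is a stand-in for whatever LLM call (or heuristic) would answer "how likely is this diff a vulnerability patch?", and the toy classifier exists only so the sketch is self-contained.

```python
# Hypothetical triage loop: rank each diff by "how likely is this a
# vulnerability patch?" rather than hunting for new bugs directly.
# classify() is a placeholder for an LLM call; it is an assumption here.

def triage_diffs(diffs, classify):
    """Return (commit_id, score) pairs sorted by score, highest first.

    diffs:    iterable of (commit_id, diff_text) pairs
    classify: callable(diff_text) -> float in [0, 1]
    """
    scored = [(classify(text), commit) for commit, text in diffs]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [(commit, score) for score, commit in scored]


# Toy stand-in classifier: crudely flag diffs that mention bounds checks.
def toy_classifier(diff_text):
    return 1.0 if "bounds" in diff_text else 0.1
```

The point of ranking rather than hunting is exactly the asymmetry described above: a patch tends to point straight at the code it hardens, so even a crude classifier narrows the search space dramatically.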


If the diff will just be fed to LLMs regardless, then which question is easier is probably a moot point.

The point is that even if all code commits are scanned as safe by AI, black hats can still analyse the commits and diffs to find vulnerabilities affecting people who haven't patched yet.

Scanning every commit doesn't automatically make everyone in the world patch immediately; vulns can still be found from commits and diffs and used against those who haven't patched yet.


Look again at the GP to my comment, the one I was clarifying: they're not talking about black hats or any other kind of hacker; they're talking about the original developers and preventing such vulnerabilities from existing in the first place.

Yes, I am aware; however, that still does not stop anybody from examining your commits and diffs to find vulnerabilities.

Do you assume AI will just stop at a certain level? Or is it possible that it will keep increasing in intelligence? If the latter, then isn't it possible that even if you are auto-checking all your commits, next week a more advanced AI model might be released that finds vulns in your old commits, even though they were checked by (an inferior) AI?

Blindly saying that auto-checking commits will make you safe from AI-based attacks and vulnerability-free is just madness.


The diff yields the patched code which is used to produce the exploit.

You'd risk amplifying every fluctuation. It's often impossible to know what's a shift vs. what's noise until it's well under way (if not over).

One of my vehicles is a 2009 Civic. It continues to amaze me that with minimal maintenance, that 17-year-old vehicle will fire right up with the turn of a key, with hundreds (thousands?) of parts moving in a specific way, many of them with tolerances in tiny fractions of an inch.


2010 MB C300 I bought in 2013 from a dealer after the lease expired, parked outside without a garage or cover since then (Virginia).

About 3 years ago a large branch (about 8" diameter) from an old overhanging tree fell right on the transparent sunroof cover and shattered it into a million pieces. After picking them out of the sunroof mechanism (which no longer worked after the impact) and the inside of the car, I covered the opening with several sheets of magnetized vinyl. Works great, never a drop of water inside since then and it's stayed in place without any attention. Temperature control inside the car at rest or while driving at highway speed is like it was before the damage.

Being old now I never go anywhere since I can get stuff delivered. About every 3 weeks I go out and the car starts right up, I drive a 5-mile loop to circulate the oil and then park it for another 3 weeks. Been doing this for years. I do get an oil change annually.


The battery in my PHEV (Chrysler Pacifica) showed no appreciable degradation in eight years (and >100k miles) before being replaced under recall for a manufacturing fault last year.


Agreed. I have a work MBP 14" and a Framework 13, and I didn't realize until just now that they weren't the same screen size. The Framework 13 is very comfortable to use.


Windows 95 was still based on DOS, and didn't offer a lot in the way of isolation or other security features. Win98 and ME were similar. It wasn't until all Windows versions used the NT kernel (XP being the first consumer-focused release) that this changed.


Home computers weren't normally networked, but enterprise computers (such as the Walmart example given) absolutely were.


They weren’t connected to the internet though, and there weren’t state sponsored hackers on the other side of the world that could access them.


Many enterprise computers were connected to phone lines. A few were connected to global X.25 networks like Telenet/Tymnet. Some were even on the early Internet. True, they were unlikely to be DOS systems, but plenty of DOS systems also doubled as terminals.


I've found qwen3 to be very usable on my local machine (a Framework Desktop with 128 GB of RAM). I doubt it could handle the complex tasks I throw at Claude Opus at work, but it's more than capable of doing a surprising number of tasks, with good performance.


What tasks do you use qwen3 for? Coding? Are you running it on CPU or GPU? What GPU does that Framework have?

Thanks!


I have an Asus GX10 that I run Qwen3.5 122B A10B on, and I use it for coding through the Pi coding agent (and my own); I have to put more work in to ensure that the model verifies what it does, but if you do so it's quite capable.

It makes using my Claude Pro sub actually feasible: write a plan with it, pick the plan up with my local model and implement it; now I'm not running out of tokens haha.

Is it worth it from a unit economics POV? Probably not, but I bought this thing to learn how to deploy and serve models with vLLM and SGLang, and to learn how to fine tune and train models with the 128GB of memory it gets to work with. Adding up two 40GB vectors in CUDA was quite fun :)

I also use Z.ai's Lite plan for the moment for GLM-5.1 which is very capable in my experience.

I was using Alibaba's Lite Coding Plan... but they killed it entirely after two months haha, too cheap obviously. Or all the *claw users killed it.


GLM 5.1 is extremely good, and ridiculously cheap on their coding plan. It's far better than Sonnet, and a fifth of the cost at API rates. I don't know if the American providers can compete long-term; what good is it to be more innovative if it only buys them a six-month lead and they can't build the data center capacity fast enough for demand? Chinese providers have a huge advantage in electrical grid capacity.


True, but Z.ai also just silently raised the price, and the entire Chinese frontier set is having to make a profit now; hence Alibaba killing the Lite plan and not letting people sign up for their Pro one either, why MiniMax has its non-commercial license, etc.

So I agree with you: it's better than Sonnet but way cheaper. I do wonder how long that will last, though.


Z.ai does really well at the carwash question!


Thank you. I've been using ollama for a much more modest local inference system. I'll research some of the things you've mentioned.


The Framework Desktop has a Ryzen 395 chip that can allocate memory to either the CPU or the GPU. I've been able to allocate 100+ GB to the GPU, so even big models can run there.

Most recently I used it to develop a script to help me manage email. The implementation included interacting with my provider over JMAP, taking various actions, and implementing an automated unsubscribe flow. It was greenfield, and quite trivial compared to the codebases I normally interact with, but it was definitely useful.


That's great. Ostensibly my system could also allocate some of its 32 GB of system memory to augment the 12 GB of VRAM, but I've not been able to get it to load models over 20B. I should spend some more time on it.

