Hacker News | rubiquity's comments

This is what I ended up doing: a 2.5G switch for the few devices that can use it, and a 1GbE PoE switch for all the other PoE and 1Gbit-and-under devices.

You should try it out. I'm incredibly impressed with Qwen 3.5 27B for systems programming work. I use Opus and Sonnet at work and Qwen 3.x at home for fun, and I barely notice a difference, given that systems programming work needs careful guidance from any model currently. I don't try to one-shot landing pages or whatever.


Are you using the same agent/harness/whatever for both Claude and Qwen, or something different for each one?


I use Pi at home and Claude Code at work (no choice). I use bone stock Pi. No extensions.


Is it available for API use? I don't have a laptop capable of running it.

At 8-bit quantization (q8_0) I get 20 tokens per second on a Radeon R9700.
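As a back-of-envelope check on whether a q8_0 model fits in VRAM (a sketch, assuming the standard q8_0 layout of 32 int8 weights plus one fp16 scale per block, i.e. ~8.5 bits/weight; the 27B size is just an illustrative example, and KV cache/activations are extra):

```python
# Rough weight-memory estimate for a q8_0-quantized model.
# Assumption: q8_0 = 8 bits/weight + a 16-bit scale per 32-weight block.
params = 27e9            # e.g. a 27B-parameter model (illustrative)
bits_per_weight = 8.5    # 8 + 16/32
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weight_gb:.1f} GB of weights")
```

KV cache and context length push the real number higher, so a 32 GB card is a comfortable fit but not a huge margin.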


Wouldn’t Cursor agreeing to such a deal be almost ironclad proof that they're subsidizing tokens/inference out the ass? There’s wide speculation that all the big fast-growing-revenue companies right now are selling inference at break-even or at a loss.


Are the mainboards and upgrade kits available for purchase now or just the whole laptop?

edit: I think I found it: https://frame.work/products/laptop13pro-mainboard-intel-ultr...


He doesn't work at Vercel but he is the type to never pass up any opportunity to chase clout.


He is affiliated with Vercel though


Almost like that’s his job.

Hey, I’m with you - I think social media needs to die specifically for this reason. I’m reminded of the term “snake oil” - it’s like the dawn of newspapers again.


Media as a whole needs to die


Including books and the internet?


I have both HW3 (2021 Y) and HW4 (2025 3). FSD on HW4 is a delight. FSD on HW3 phantom-brakes constantly, both back when FSD was a pile of C++ and now with the "Lite" driving model. I don't see how Tesla can ever make FSD viable on HW3 given the hardware (<200 TOPS).


Have you tried running llama.cpp with Unified Memory Access[1] so your iGPU can seamlessly grab some of the RAM? The environment variable is prefixed with CUDA but this is not CUDA specific. It made a pretty significant difference (> 40% tg/s) on my Ryzen 7840U laptop.
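For reference, here's roughly the setup (a sketch: `model.gguf` and the `-ngl` value are placeholders; the variable name is the one from the linked doc, and despite the CUDA prefix it isn't CUDA-only):

```shell
# Let the GPU backend spill into system RAM when the model
# doesn't fit in VRAM. "model.gguf" is a placeholder path.
export GGML_CUDA_ENABLE_UNIFIED_MEMORY=1
echo "GGML_CUDA_ENABLE_UNIFIED_MEMORY=${GGML_CUDA_ENABLE_UNIFIED_MEMORY}"
# Then run llama.cpp as usual, e.g.:
# llama-cli -m model.gguf -ngl 99 -p "Hello"
```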

1 - https://github.com/ggml-org/llama.cpp/blob/master/docs/build...


Your link seems to be describing a runtime environment variable; it doesn't need a separate build from source. I'm not sure though (1) why this info is in build.md, which should be specific to the building process, rather than some separate documentation; and (2) if this really isn't CUDA-specific, why the canonical GGML variable name isn't GGML_ENABLE_UNIFIED_MEMORY, with the _CUDA_ variant treated as a legacy alias. AIUI, both of these should be addressed with pull requests for llama.cpp and/or the ggml library itself.


You are right that it is an environment variable, and that's how I have it set in my nix config. Thanks for correcting that.

Unfortunately llama.cpp is somewhat notorious for having lackluster docs. Most of the CLI tools don't even tell you what they are for.


Hmm. Perhaps there's a niche for a "The Missing Guide to llama.cpp"? Getting started, I did things like wrapping llama-cli in a pty... and only later noticing a --simple-io argument. I wonder if "living documents" are a thing yet, where LLMs keep an eye on repo and fora, and update a doc autonomously.
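For anyone curious, that pty wrapper is only a few lines with Python's stdlib. A minimal sketch (shown with `echo` as a stand-in child command; substitute the llama-cli invocation):

```python
import os
import pty

def run_in_pty(argv):
    """Run argv under a pseudo-terminal and capture its output, so
    CLIs that check isatty() behave as if run interactively."""
    chunks = []
    def master_read(fd):
        data = os.read(fd, 1024)
        chunks.append(data)
        return data
    pty.spawn(argv, master_read)
    return b"".join(chunks)

out = run_in_pty(["echo", "hello"])
```

Of course, --simple-io makes all of this unnecessary, which is exactly the kind of thing better docs would surface.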


I hadn't tried that, thanks! I found that simply defining GGML_CUDA_ENABLE_UNIFIED_MEMORY, whether as 1, 0, or "", was a 10x hit, down to 2 t/s. Perhaps because the laptop's RAM is already so over-committed there. But with the much smaller Qwen3.5-4B-Q8_0.gguf, it doubled performance from 20 to 40+ t/s! Tnx! (This is an old Quadro RTX 3000 rather than an iGPU.)
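The "1, 0, or empty all behave the same" part is consistent with a presence check: C programs often test only whether getenv() returns non-NULL, not what the value is (I'm assuming that's what llama.cpp does here). A quick shell illustration with a made-up variable:

```shell
# DEMO_FLAG is a throwaway variable for illustration.
unset DEMO_FLAG
[ -z "${DEMO_FLAG+x}" ] && echo "DEMO_FLAG is unset"
export DEMO_FLAG=""
[ -n "${DEMO_FLAG+x}" ] && echo "DEMO_FLAG is defined (but empty)"
```

So to actually turn the behavior off, you have to unset the variable, not set it to 0.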


Not sure why you're being downvoted; I guess it's because of how your reply is worded. Anyway, Qwen3.7 35B-A3B should have intelligence on par with a ~10.25B-parameter dense model, so yes, Qwen3.5 27B is still going to outperform it in terms of quality of output, especially for long-horizon tasks.
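The ~10.25B figure comes from the usual rule of thumb (a community heuristic, not anything official) that an MoE model's dense-equivalent capacity is roughly the geometric mean of its total and active parameter counts:

```python
import math

total_b = 35.0   # total parameters, in billions (the "35B" in 35B-A3B)
active_b = 3.0   # active parameters per token, in billions (the "A3B")
# Heuristic: dense-equivalent ~ sqrt(total * active)
effective_b = math.sqrt(total_b * active_b)
print(f"~{effective_b:.2f}B dense-equivalent")
```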


Could be on a bike path where bikes are on the left and pedestrians to the right.

