Hacker News | vlowther's comments

Pretty sure it is sexually transmissible, so I would not be too sure about that.

Same. Opencode + oMLX (0.3.4) + unsloth-Qwen3-Coder-Next-mlx-8bit on my M5 Max with 128GB is the sweet spot for me locally. The prompt decode caching keeps things coherent and fast even when contexts get north of 100k tokens.

The 8-bit MLX unsloth quant of qwen3-coder-next seems to be a local best on an MBP M5 Max with 128GB memory. With oMLX doing prompt caching I can run two in parallel doing different tasks pretty reasonably. I found that lower quants tend to lose the plot after about 170k tokens in context.


That's good to know. I haven't exceeded a 120k context yet. Maybe I'll bite the bullet and try Q6 or Q8. Any of the coder-next quants larger than UD-Q4_K_XL take forever to load, especially with ROCm. I think there's some sort of autotuning or fitting going on in llama.cpp.


MBP M5 Max. 128GB RAM. oMLX. unsloth-Qwen3-Coder-Next-mlx-8bit. opencode with the telemetry stripped out. This seems to be the sweet spot for now for my local dev. It helps me not accidentally blow through $100 in Claude tokens in a day when exploring different performance tradeoffs in the backend of my $DAYJOB codebase.


> opencode with the telemetry stripped out

Care to share? This happens to be important to me and I’m sure I’m not the only one (as evidenced by Github issues).

Did you also change the other questionable behaviors?


Not really, no. The last thing GitHub needs is yet another vibe-coded fork of a mostly vibe-coded app in the first place.

If I ever get around to vibe rewriting it in Go, I might share that.


My usecase was building an append-only blob store with mandatory encryption, but using a semaphore + direct goroutine calls to limit background write concurrency instead of a channel + dedicated writer goroutines was a net win across a wide variety of write sizes and max concurrent inflight writes. It is interesting that frankenphp + caddy came up with almost the same conclusion despite vastly different work being done.


this makes sense for your workload, but might the right primitive be a function of your payload profile and business constraints?

in my case the problem doesn't arise because control plane and data plane are separated by design — metadata and signals never share a concurrency primitive with chunk writes. the data plane only sees chunks of similar order of magnitude, so a fixed worker pool doesn't overprovision on small payloads or stall on large ones.

curious whether your control and data plane are mixed on the same path, or whether the variance is purely in the blob sizes themselves.

if it's the latter: I wonder if batching sub-1MB payloads upstream would have given you the same result without changing the concurrency primitive. did you have constraints that made that impractical?


In my case, "background writes" literally means "do the io.WriteAt for this fixed-size buffer in another goroutine so that the one servicing the blob write can get on with encryption / CRC calculation / stuffing the resulting byte stream into fixed-size buffers". Handling it that way lets me keep the IO to the kernel as saturated as possible without the scheduling + mutex overhead that sending stuff through a channel incurs, while still keeping a hard upper bound on IO in flight (max semaphore weight) and write buffer allocations (sync.Pool). My fixed-size buffers are 32k, and it is a net win even there.


right — no variance, question was off target. worth noting though: the sema-bounded WriteAt goroutines are structurally a fan-out over homogeneous units, even if the pipeline feels linear from the blob's perspective. that's probably why the channel adds nothing — no fan-in, no aggregation, just bounded fire-and-forget.


If the performance charts are to be believed, this has uniformly worse performance in fetching and iterating over items than a boring old b-tree, which makes it a total nonstarter for most workloads I care about.

It is also sort of ironic that one of the key performance callouts is a lack of pointer chasing, but the Go implementation is a slice that contains other slices without making sure they are using the same backing array, which is just pointer chasing under the hood. I have not examined the code closely, but it is also probably what let them get rid of the black array as a performance optimization.


That is the most cursed description I have seen on how defer works. Ever.


This is how it would look with explicit labels and comefrom:

    puts("foo");
    before_defer0:
    comefrom after_defer1;
    puts("bar");
    after_defer0:
    comefrom before_defer0;
    puts("baz");
    before_defer1:
    comefrom before_ret;
    puts("qux");
    after_defer1:
    comefrom before_defer1;
    puts("corge");
    before_ret:
    comefrom after_defer0;
    return;
---

`defer` is obviously not implemented this way; the compiler re-orders the code to flow top-to-bottom with fewer branches, but the control flow is effectively the same.

In theory a compiler could implement `comefrom` by re-ordering the basic blocks like `defer` does, so that the actual runtime evaluation of code is still top-to-bottom.


Microsoft Active Career Copilot 365 ONE, thankyouverymuch.


Grow your .NET-work!


It took quite a bit of scrolling until I found my old faves of dexed and zynaddsubfx, and I didn't see Helm (https://tytel.org/helm/) at all.


I submitted a suggestion to add the sophisticated multi-engine FOSS soft synth that I use, Yoshimi (https://yoshimi.sourceforge.io/), a Linux-only fork of ZynAddSubFX.


Helm has been replaced in practice by Vital (same author), I think.


They are completely different synths.

Vital is a wave table synth; Helm is a subtractive synth.

Helm was the first synthesizer that I really excelled with. I would recommend that anyone who wants to actually learn the fundamentals of synthesis start on it. Once you get good at it, it's faster to dial in the exact sound you want than to reach for a preset.

It's far more straightforward and less complicated than additive (ZynAddSubFX), FM, or wave table synths.

That being said, if you just want a very capable synth with a lot of great presets, Vital is far more advanced.


Surge XT is also at the bottom of the list.


None of the big browsers can be trusted at this point. Firefox is at best the least worst out of the big cross platform browsers.

