Now that I think about it again, the part about "The Technical Challenge of Exporting Session Replays" might not be entirely true. It may be possible (though likely costly) to pass all that replay data through an LLM and convert it to a different format/platform. Even better, it might be easy to ask the AI to write a conversion script between your desired formats, so you don't have to pass GBs of input data through the LLM at all.
Is it possible to do some sort of Binary* search ("Binary Star", by analogy with the A* search algorithm, where we use heuristics)?
a: [1,3,5,7,8,9,10,15]
x: 8 (query value)
For this array, we would compare a[0], a[3], a[7] (left/mid/right) by subtracting 8 from each.
And we would get d=[-7, -1, 7]
Now, normally, with binary search, because 8 > a[mid], we would go to (mid+right)/2. BUT we already have some extra information: we see that x is closer to a[3] (diff of 1) than to a[7] (diff of 7), so instead of jumping to the midpoint between mid and right, we could choose a new "mid" point that's closer to the desired value (maybe as a ratio based on d[mid] and d[right]).
So left=mid+1, right stays the same, but the new mid is NOT halfway between left and right; it is (left+right)/2 + ratioOffset, where ratioOffset pushes the mid closer to left or right, depending on d.
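A minimal Python sketch of that idea (as it turns out below, this is essentially interpolation search): the probe formula here is the standard linear-interpolation one rather than the exact ratioOffset described above, and the function name is mine:

```python
def interpolation_search(a, x):
    """Search a sorted list for x, picking each probe by linearly
    interpolating where x should sit between a[left] and a[right]."""
    left, right = 0, len(a) - 1
    while left <= right and a[left] <= x <= a[right]:
        if a[left] == a[right]:  # remaining values are all equal; avoid /0
            return left if a[left] == x else -1
        # Instead of the plain midpoint, offset the probe toward where
        # x is expected to be, proportionally to the value gaps.
        mid = left + (x - a[left]) * (right - left) // (a[right] - a[left])
        if a[mid] == x:
            return mid
        if a[mid] < x:
            left = mid + 1
        else:
            right = mid - 1
    return -1

print(interpolation_search([1, 3, 5, 7, 8, 9, 10, 15], 8))  # 4
```

On the example array, the first probe lands at index 3 (value 7) and the second at index 4 (value 8), so the query finishes in two probes where plain binary search takes three.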
The idea is quite obvious, so I am pretty sure it already exists.
But what if we use SIMD with it? Then we'd know not only which block the number is in, but also which part of the block the number is likely in. Or is this what the article actually says?
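As a rough stand-in for the SIMD part (NumPy's vectorised comparison playing the role of a single wide compare instruction; the helper name is mine):

```python
import numpy as np

def block_rank(block: np.ndarray, x) -> int:
    """Rank of x within a small sorted block: compare x against every
    element at once (as a SIMD compare would) and count the smaller ones."""
    return int((block < x).sum())

print(block_rank(np.array([1, 3, 5, 7, 8, 9, 10, 15]), 8))  # 4
```

The count directly gives x's position inside the block, with no per-element branching.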
Oh, that's what the article was referring to with "interpolation".
Weird that I hadn't heard about it before. Is it just not that widely used in practice?
One reason I could see is that binary search is fast enough and easy to implement. Even on the largest datasets it's still just a few tens of loop iterations (log2 of a billion elements is only about 30).
But with new hardware coming out, and maybe models becoming smart enough to help optimize them and reduce inference costs even more, I think we should still expect the costs to go down.
The drivers often need per-game optimisations, and those will be missing. But I doubt Intel would nerf them; they'd just rely on you paying a lot for RAM the game won't use.
I actually meant it in a different way. I would get it for local AI stuff, but being able to game on it would be a huge plus; otherwise I would need two different machines.
Much as I want diversity, a 3090 would be a billion times better for games and can probably hold its own for a broader AI workload, anything other than running highly quantised models that don't fit in 24GB with relatively small contexts.
I am a bit confused by the separation between VSCode and Copilot. If I cancel my Pro+ subscription, can I still use Copilot with my own OpenRouter key?
Not sure if that's a typo or not in Week 3...
As the next one is:
> Old guy who brought his own towel