You wrap the DNS request in a different layer of encryption than the relay server, so the relay server only knows you tried to resolve something, and the DNS server only knows someone tried to resolve a particular domain. That's how ODoH works.
To make it harder for parties to collude, you need additional encrypted hops, the way Tor does. ODoH doesn't do that, unless you're routing ODoH through Tor of course.
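A toy sketch of that split of knowledge (all keys hypothetical, and XOR-with-a-hash standing in for the real HPKE crypto, so purely illustrative):

```python
# Toy model of ODoH's separation: the proxy learns WHO is asking but not
# WHAT; the target resolver learns WHAT is asked but not by WHOM.
# NOT real ODoH crypto -- real clients encrypt to the target's published
# HPKE public key, so the proxy never holds any key material at all.
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with a hash-derived keystream; illustrative only, not secure.
    stream = hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1)
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

TARGET_KEY = b"target-resolver-key"  # hypothetical key known only to resolver

def client(query: bytes) -> bytes:
    # Client encrypts the DNS question so only the target can read it.
    return toy_encrypt(TARGET_KEY, query)

def proxy(blob: bytes) -> bytes:
    # Proxy sees the client's IP but only opaque ciphertext -- it just
    # forwards the blob to the target.
    return blob

def target(blob: bytes) -> bytes:
    # Target decrypts the query, but only ever saw the proxy's IP.
    return toy_decrypt(TARGET_KEY, blob)

answer = target(proxy(client(b"A? example.com")))
print(answer)  # b'A? example.com'
```

Collusion breaks this immediately, which is the point of the comment above: the two parties each hold one half of the picture, and nothing in the protocol stops them from putting the halves together.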
You would also need some kind of proof that the DNS records returned by the resolving DNS server haven't been tampered with, or a tracking DNS server could direct you to one of their IP addresses and proxy the request transparently. Unfortunately, the best solution we have for that is DNSSEC, which is a very '90s take on DNS validation. It works fine if you don't abuse DNS in weird ways, but it's due for a redesign.
Why not? Cloudflare makes 1.1.1.1 available over Tor, although the latency is through the roof and you still need to consider the possibility of fingerprinting the client network stack.
Of course the offer wasn't serious. Did anyone see the interview the GameStop CEO did with Andrew Ross Sorkin? He clearly didn't have the money and was trying to gaslight the world.
"AppleCare+ covers fall and accidental damage (drops, cracks, liquid) for a reduced, fixed service fee per incident. It offers unlimited incidents (or up to two per 12 months, depending on the plan), providing a significant discount over out-of-warranty repairs. A service fee, such as $29 for screen repairs, applies"
I hated my time as an SRE. But … can’t it be done with some combination of canaries and blue green deployments and extensive testing? Where when things look good you just swap all the traffic to the good stuff keeping the rollback hot etc etc?
That's how we got 99.99% at Netflix. And it cost a lot of money. But a canary implies that something may go wrong and you have to roll back. The canary is still production traffic, so some transactions would fail, which isn't allowed for this kind of workload.
I imagine you'd have to use shadow execution, where you roll out a full second copy, run every transaction through both, and compare the results. And then, only after a certain time, switch traffic to the new infra and tear down the old.
But you would need a ton of extra hardware (more than double) and a lot of ways to keep data in sync. And of course if you put an LLM or other non-deterministic system in there, that's a whole other can of worms.
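A rough sketch of what that shadow comparison could look like (`old_pricing`/`new_pricing` are hypothetical stand-ins, assumed deterministic and side-effect free for the compared call):

```python
# Shadow execution sketch: production always serves the old path's answer;
# the candidate path runs on the same input and its result is only compared,
# never served. Cut over only after a soak period with zero mismatches.

def old_pricing(order):                   # current production path
    return order["qty"] * order["price"]

def new_pricing(order):                   # candidate deployment (hypothetical rewrite)
    return order["qty"] * order["price"]

def handle(order, mismatches):
    result = old_pricing(order)           # production answer, always returned
    shadow = new_pricing(order)           # shadow copy, result discarded
    if shadow != result:
        mismatches.append((order, result, shadow))
    return result

mismatches = []
orders = [{"qty": 10, "price": 99.5}, {"qty": 3, "price": 10.0}]
served = [handle(o, mismatches) for o in orders]
print(served, len(mismatches))  # [995.0, 30.0] 0
```

The hardware cost above falls straight out of this: every request does at least double the work, and any stateful side effect (writes, fills, settlement) has to be duplicated or stubbed in the shadow copy, which is where the "keep data in sync" pain lives.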
Folks who keep the lights on 24/7, aka SREs, are superheroes who wear capes. Thank you for your service.
I couldn’t do it. I like infra and all but it’s just not my cup of tea. Def true that from a trading pov the trade must be executed. It must settle. It must work. Or capital flight will be huge.
There are different kinds of updates, and they influence your options and feasibility. Keep in mind that deep in the heart of an exchange is a single-threaded process: the sequencer. So you have three layers: external-facing protocols, the sequencer/matching engine, and internal interfaces.

Internal interfaces are the easiest for b/g. For external protocols, any change worth its weight changes the protocol and therefore requires participants to change their codebases too. Versioning protocols is an option, but the integration is still visible to consumers, and usually you have them test on pre-prod environments, occasionally also requiring attestation and conformance testing (regulated markets).

The sequencer and matching engine are at the core. You could do parallel runs, but not b/g. Theoretically you could abstract the matching engine and keep a barebones sequencer immutable, but this has performance implications.

So yes, you can do things, but not in a completely transparent way, unless you introduce an “upgrade jitter” to give yourself a window for transparent upgrades. It’s an interesting domain; I think people will just accept occasional downtimes as a better option than a constant jitter cost.
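To make the single-threaded-core point concrete, here's a toy sequencer feeding a toy matching engine (all names and the book logic are invented for illustration, not how any real exchange works):

```python
# The sequencer is a single writer that stamps every inbound message with a
# global sequence number; the matching engine consumes that totally ordered
# stream. The in-memory book state built from that stream is exactly why you
# can do parallel runs (replay the log into a second engine) but not a
# transparent blue/green swap of this component mid-stream.
from collections import deque

class Sequencer:
    def __init__(self):
        self.seq = 0
        self.log = deque()            # replayable, totally ordered log

    def ingest(self, msg):
        self.seq += 1                 # single thread => trivially a total order
        stamped = (self.seq, msg)
        self.log.append(stamped)
        return stamped

class MatchingEngine:
    def __init__(self):
        self.bids, self.asks = [], []  # toy book: lists of (price, seq)

    def on_message(self, seq, msg):
        side, price = msg
        book, other = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        # naive cross check against the best opposing price
        if other and ((side == "buy" and price >= other[0][0]) or
                      (side == "sell" and price <= other[0][0])):
            return ("trade", other.pop(0)[0])   # fill at resting price
        book.append((price, seq))
        book.sort(key=lambda x: -x[0] if side == "buy" else x[0])
        return ("rest", price)

sequencer, engine = Sequencer(), MatchingEngine()
fills = [engine.on_message(*sequencer.ingest(m))
         for m in [("buy", 100), ("sell", 101), ("sell", 100)]]
print(fills)  # [('rest', 100), ('rest', 101), ('trade', 100)]
```

The parallel-run option mentioned above amounts to replaying `sequencer.log` into a second `MatchingEngine` and diffing the outputs; the swap itself still needs a quiet moment, because two engines can't both own the live book.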
I think it is. We have been using it at my day job and we regularly choose sonnet 4.6 for well scoped things. Opus 4.6 was good but the 4.7 opus model burns so many tokens and dollars that it’s just not worth it given the incremental improvement in results.
They also changed how they count tokens. So you could end up with less reasoning while paying for more tokens. Anthropic’s profit margin is definitely higher on 4.7 than it was on 4.6. I’m pretty sure this was the main driver behind this update.
“"I would rather have him making these decisions and be in control," he said. "He may be controversial and polarizing and he does some crazy, bizarre things sometimes, but he’s a brilliant guy when it comes to building something completely new and building wealth” for himself and shareholders.”
From the article.
“Controversial” and “polarizing”. Yep. Gutting USAID and leaving many to die with no access to life-saving aid is swept under the rug so long as annualized returns are 42%…
Money managers are some of the least principled people smh. If their portfolios could go up 1000% they’d burn villages and kill their siblings to do it.
Death toll estimates from the shutdown of USAID already exceed 100,000 dead children and are climbing. It achieved a level of "plausible deniability" combined with total disregard for human life, resulting in what can be argued was mass murder, on a scale rarely seen in human history.