Intermediate Representation. It's an architecture-independent "halfway point" produced during compilation, in between reading the source code and writing the binary. It's part of the approach that makes it possible for LLVM to "plug in" a new target architecture by just adding a backend that turns IR into machine code for the new chip.
I'm oversimplifying because it's all black magic to me, but I do know that acronym.
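To make the "halfway point" concrete, here's roughly what LLVM IR looks like for a trivial add function (an illustrative sketch; real compiler output carries extra attributes and metadata):

```llvm
; no physical registers, no target calling convention -- any backend can consume this
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add i32 %a, %b
  ret i32 %sum
}
```

A backend for a new chip only has to lower instructions like `add i32` into that chip's machine code; the frontends don't change.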
My primary use case for K3s is small machines: cheap VPSes, Pis, etc. Would love to hear from folks who have had success with Talos in those spaces, but last time I gave it a shot the welded-shut hood prevented me from doing the little tweaks necessary to get running in those environments.
In the cloud or on prem I suspect folks are having better luck than I did, but also open to being wrong about this.
>prevented me from doing the little tweaks necessary to get running in those environments.
It's a bit of a mindset shift, but essentially whenever you feel the urge to make such a tweak... you've strayed off the golden path & are attempting to do something the wrong way (in Talos world).
I came from k3s so was very used to the whole tweaks spiel too.
Where you do need a custom config, pipe in patch commands; don't modify the OS. I.e. any and all changes are fed in via the API so they can be repeatably scripted. The Talos OS is immutable.
It's similar to how you'd control a k8s cluster with kubectl... except you're applying that model at the OS level. You control it by sending API commands, not by modifying settings in files. So you don't "tweak" anything. It's a bit of a mindset shift, I know.
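As a sketch of what that workflow looks like (the sysctl is just a stand-in tweak, and the node IP is made up; check the Talos docs for the exact talosctl flags in your version):

```shell
# Declare the "tweak" as a machine-config patch file...
cat > patch.yaml <<'EOF'
machine:
  sysctls:
    vm.max_map_count: "262144"
EOF
# ...and feed it in over the API -- nothing is ever edited on the node itself
talosctl patch machineconfig --nodes 10.0.0.2 --patch @patch.yaml
```

Because the change is just a file piped through the API, it can live in git and be replayed against a freshly rebuilt node.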
> you've strayed off the golden path & are attempting to do something the wrong way (in Talos world).
In this case the urge to make a tweak is synonymous with the urge to make the product _function_.
I admire their dedication to the shtick, but the upshot is that since you cannot reach inside to make Talos actually work in environments that fall outside that golden path, running the product on many devices is "Talos Wrong".
That's their prerogative, but it's obnoxious in a "Windows 11 doesn't work on your perfectly functional laptop" kinda way.
I'm early on in my Kubernetes journey and have opted to focus on Talos. Would you be able to share a bit more about the issues and limitations you encountered?
K8s encourages thinking about workloads as "cattle not pets". App running in K8s falls over? Blow it away and let K8s recreate it, etc.
However, clusters themselves often become the new pets. Many orgs never reach a level of operational maturity where they can blow away and recreate whole clusters without downtime and toil.
A meta-pattern has emerged where higher-order tooling manages a whole fleet of clusters. This is an implementation of that meta-pattern which uses K8s itself as the higher-order tool to manage other clusters.
It's not a new idea, just a new implementation of the pattern.
Thank you. Wow, I had no idea this was a problem. Seems like nightmare territory. In a weird way it makes me respect Elixir/Erlang even more. It's not the exact same problem obviously, but it really had me thinking about BEAM, etc.
Imagine you are the developer of k8s-hosted systems. Now imagine you want to test your systems in a repeatable fashion. You'd need some way to spin up a test k8s cluster, deploy your application, and subject it to a test workload. That's simple and easy if you only need one physical cluster node: you can use k3s or perhaps kind. But if you want multiple physical nodes, not so easy. This solves that problem by leveraging an existing k8s cluster, which is a standard thing easily obtained. You might now ask why not just use that cluster (why the turducken?). Answer: cost, time, hassle, or you want a different version of k8s than the hosting provider gives you.
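For the easy single-machine case, a throwaway kind cluster really is a couple of commands (config format as of recent kind releases; note the "nodes" are containers on one physical host, which is exactly the limitation described above):

```shell
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF
kind create cluster --name ci-test --config kind-config.yaml
kubectl get nodes                      # two "nodes", both containers on one box
kind delete cluster --name ci-test     # cheap to throw away after the test run
```

Testing behavior that depends on genuinely separate physical machines is where this stops being enough.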
It is very close, but what's more interesting to me is that it's actually amusing. I've yet to see an LLM actually be originally funny (entirely possible I've missed the crossing of that line) and the opening lines put a wry grin on my face.
> x-maxxing morphed out of "minmaxing" in gaming culture.
Which comes from RPGs, the old-school tabletop ones. Though I don't know if the game-theory/ML term "minimax" predates the RPGs or if it's the other way around; neither would surprise me.
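For what it's worth, the game-theory term is "minimax": one player maximizes the value the other is trying to minimize. A minimal sketch over a toy game tree (the tree and its payoffs are made up for illustration):

```python
def minimax(node, maximizing):
    """Leaves are payoffs; internal nodes are lists of child positions."""
    if not isinstance(node, list):
        return node  # leaf: the payoff for reaching this position
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The maximizer picks the branch whose worst-case (opponent-minimized) value is best.
print(minimax([[3, 5], [2, 9]], True))  # -> 3
```

In gaming slang the meaning drifted: "min-maxing" a character means minimizing weaknesses you don't care about to maximize the stat you do.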
Yesterday my Google Home Mini gave me the current temperature in Fahrenheit. I live in Canada and use a Pixel. Dumbest fucking AI going. May as well give it to me in coulombs per hectare.
IIUC it's not the model of buying domains from registrars which stinks of crap, it's the buying from registrars by domain squatters who then flip them for a profit having provided zero value that bears a whiff of shite. These ticket scalpers of the internet who contribute nothing can well and truly fuck straight off.
Speculators provide time-allocation of resources. They're a pretty critical part of market dynamics, helping resources get sold and developed when they are valued most. That is, they prevent domains from being captured prematurely for lower-value uses. Society profits immensely from their contribution.
Hey I get it, we all gotta sleep at night. Tell yourself whatever you like to get them zzz's. As far as ticket scalpers and domain resellers go, my assessment stands: they are bottom feeding zeros providing nothing of value and they can fuck off into the sun.
For all its arrogant intellectual superiority, the majority of the HN crowd evidently has little understanding of basic economics.
While I personally wouldn't go as far as "Society profits immensely from their contribution", these types of business people do serve an important function in the economy.
Much like traditional middle-men sellers, commodity speculators, insurance providers, and the like, domain name re-sellers take on risk that no one else is willing to bear at some particular time (that the domain they're "squatting" could be worth nothing in X years). If and when the domains they're "squatting" later become more valuable, either through their own direct efforts or by re-selling them to other parties that can make better use of them, then the profits they make from such transactions are justified by the aforementioned risks they bore.
What risk? They contributed nothing; they have performed no function. Their only claim is having been first with the dictionary attack, laying claim to a bunch of useful letter combinations without providing any value or service.
If they didn't do any of this, that combination of letters wouldn't disappear; it would just go back to being available from the primary registrars.
The squatters are just vacuuming up some of the profit from people who would/could use that combination of letters to actually provide a service.
I don't view middleman parasitic behavior as valuable, and see no market function performed here other than extraction.
>I don't view middleman parasitic behavior as valuable, and see no market function performed here other than extraction.
Seeing middleman businesses as "parasitic behavior" is a common misunderstanding of their role in the economy. They make possible commercial transactions between initial producers and ultimate end-consumers, in places or at times when such transactions could never have taken place affordably without them.
Except in this case the middleman added thousands of dollars to the cost without adding anything of value: not curation, not discovery, nothing. Without this middleman, acquiring an expired domain would have cost whatever the nominal registrar price is (somewhere between $10 and $100 or so per year for a domain).
Useful middlemen do serve a role and add value. A parasitic middleman just extracts value without adding anything in return.
>Useful middlemen do serve a role and add value. A parasitic middleman just extracts value without adding anything in return.
And do tell how you distinguish "useful middlemen" from "parasitic middlemen". These are meaningless terms based on your own value judgements. In other words, they're completely useless in practice.
A universally recognized transaction-coordinating mechanism works much better. And guess what? We already have that: price.
>Without this middleman, acquiring an expired domain would have cost whatever the nominal registrar price is (somewhere between $10 and $100 or so per year for a domain)
Except you have no idea whether the $10-100 charged by registrars should be the actual price of those domains. The only two factors that should determine the price of something are the lowest price the seller is willing to sell at and the highest price any single customer is willing to pay. That's it.
If some government policy existed that enforced domain names must be priced below $x, then that functions as an artificial price ceiling, which necessarily results in a misallocation of the resource in question. In this case, that would mean domains going to people who are less incentivized to put them to the best possible use.
Take the very example of friendster.com: when Mike Carson bought the domain from his park.io customer, friendster.com went from a website that only generated ad revenue to the home of a new social-networking app idea he's developing, which I'm sure even you'd agree is an improvement on its previous use. And that was only possible because Carson believed the $30k he was being asked to pay to acquire ownership of friendster.com was worth it (to him).
If all domain prices were artificially capped to $100 (or whatever other arbitrary threshold) and below, then in all likelihood you'd see the problem of malicious actors who bulk-buy then squat domains become worse, not better. You might counter: why would they do that? Since on the surface it'd appear that they cannot profit from those domains by re-selling them at a higher price later on. Sure, perhaps not directly (though even this is debatable, because what'll likely happen is you'll just create a black market for it); but maybe they'll just tell the people who want to take the domain off of them that, whatever app idea they're building, they want an x% stake in it?
In economics, your intentions don't matter; it's all about the incentives your proposed policies create. And to that end, price caps never work, because they just shift the collateral damage elsewhere while making the economy worse on net.
I understand the appeal from AWS's perspective. Customer A pays for a 32 vCPU VM, which they run on 32-core hardware. Then they can also squeeze in customer B's 1 vCPU instance running a blog, and no one notices. Free money!
But I don't want to be either of those customers. It means the whole system has an extra layer of abstraction, so they can juggle VMs around. It's why you need slow EBS instead of just getting a flash drive in the same case as the CPU, with 0.01x the latency.