And they aren’t being objective and rational about the polls, they are funding and cherry-picking poll data that tells them to do what they want to do.
There’s no principle, no strategy, no goal. We’re living in the political version of Cube, and just like the movie: it’s a headless blunder operating under the illusion of a master plan.
The only polls they care about are their sales numbers and their sponsor dollars.
It really doesn't matter how popular or unpopular a candidate is; what matters is whether their listeners are still willing to overpay for snake oil, or whether their oil barons are still giving them a few million dollars for whatever message they want to sell.
AJ is probably the worst in this space. One of the things leaked in his emails is that if you give him $20k he'll gladly bring you on the show and talk about whatever it is you want to talk about and sell. You could probably get him to shill for a book about the benefits of communism.
You don’t see any issue with insecure drivers for obsolete hardware, exactly the kind of thing that is most prevalent in industrial-control applications?
Stuxnet should have been a wakeup call to everyone: the boring, obsolete, “safe because nobody browses TikTok on it” hardware is exactly the highest risk.
Would your ideal world apply to humans as well? Like if I see some art in a museum and it inspires me to create some of my own, I would need to pay a licensing fee to the original artist?
And what about the artists that inspired them? There is no art in the world that sprang fully formed from one single person, without any influences.
Should we reshape our economy to ensure knowledge and artistic provenance is maintained perpetually?
This whole discussion is so weird to me. It’s like AI has freaked everyone out so much that the instinct is to run to the safety of Disney-esque complete control and perpetual monetization of every work.
Which is exactly the opposite of how art worked for the first several hundred thousand years. Really, we want to double down on the perverse incentives and tight control that IP owners have given us in the past 50 years?
>Like if I see some art in a museum and it inspires me to create some of my own, I would need to pay a licensing fee to the original artist?
Nope, humans are admitted for free :).
>And what about the artists that inspired them? There is no art in the world that sprang fully formed from one single person, without any influences.
As long as you are a human you get to be inspired all you want :)
You seem very invested in licking the boot of the trillion dollar corporations. Your fellow humans are concerned.
>Really, we want to double down on the perverse incentives and tight control that IP owners have given us in the past 50 years?
Isn't it interesting that the EXACT second that copyright law impedes billion dollar corporations it is thrown out the window, really makes you think huh?
Yes. The oauth ID is indisputable. It seems to be context.ai. But suppose it was a fake context.ai that the employee was tricked into using. Or… or…
Better to report 100% known things quickly. People can figure it out with near zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they’re going through.
If my data center sells a pflop at $5 because of our electricity use and the data center a state over with newer GPUs sells it at $2.50/pflop, it doesn't matter how much economic benefit it generates, my customers are all going to the data center a state over.
Fair, I was hand waving to make a point. “If it generates more than $1100 + (resale price * WACC) + opportunity cost from physical space/etc” would have been more accurate.
But the point is — you don’t decommission profit generators just because a competitor has a lower cost structure. You run things until it is more profitable for you to decommission them.
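The decision rule above can be sketched as a quick back-of-the-envelope check. All the numbers here except the $1,100 electricity cost from the thread are made-up illustrations, and the monthly hurdle formula is just the commenter's "$1100 + (resale price * WACC) + opportunity cost" put on a monthly basis:

```python
# Hypothetical numbers: decide whether to keep an old GPU node running.
# You keep running while it's still more profitable than decommissioning,
# even if a competitor undercuts you on $/pflop.

monthly_revenue = 5.00 * 400        # sell ~400 pflop-hours/mo at $5 each (assumed)
monthly_electricity = 1100.0        # the $1,100/mo power cost from the thread
resale_price = 8000.0               # what the hardware would fetch today (assumed)
wacc = 0.10                         # weighted average cost of capital, annual (assumed)
opportunity_cost = 300.0            # rack space / power you could reuse (assumed)

# Hurdle: keep the node only if revenue clears electricity, plus the capital
# tied up in the resale value, plus what the space could otherwise earn.
monthly_hurdle = monthly_electricity + (resale_price * wacc / 12) + opportunity_cost

keep_running = monthly_revenue > monthly_hurdle
print(f"hurdle=${monthly_hurdle:.2f}/mo, revenue=${monthly_revenue:.2f}/mo, keep={keep_running}")
```

With these toy numbers the hurdle is about $1,466/mo against $2,000/mo of revenue, so the "unprofitable-looking" old node stays on, which is the commenter's point: the competitor's lower cost structure only matters once it pushes your revenue below your own hurdle.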
I just don’t see it. Both professionally and personally I’m producing so much more now. Back burner projects that weren’t worth months of my time are easily worth a few hours and $20 or whatever.
You’re probably already experienced at your job and using AI to enhance that, or at least using that experience to keep the AI results clean. That’s something you or a company would want to pay for but it has to be a lot more than today’s prices to make it profitable. Companies want to get more out of you, or get a better price/performance ratio (an AI that delivers cheaper than the equivalent human).
But current gen AIs are like eternal juniors, never quite ready to operate independently, never learning to become the expert that you are, they are practically frozen in time to the capabilities gained during training. Yet these LLMs replaced the first few rungs of the ladder so human juniors have a canyon to jump if they want the same progression you had. I’m seeing inexperienced people just using AI like a magic 8 ball. “The AI said whatever”. [0] LLMs are smart and cheap enough to undercut human juniors, especially in the hands of a senior. But they’re too dumb to ever become a senior. Where’s the big money in that? What company wants to pay for the “eternal juniors” workforce and whatever they save on payroll goes to procuring external seniors which they’re no longer producing internally?
So I’m not too sure a generation of people who have to compete against the LLMs from day 1 will really be producing “so much more” of value later on. Maybe a select few will. Without a big jump in model quality we might see “always junior” LLMs without seniors to enhance. This is not sustainable.
And you enhancing your carpentry skills for your free time isn’t what pays for the datacenters and some CEO’s fat paycheck.
[0] I hire trainees/interns every year, and pore through hundreds of CVs and interviews for this. The quality of a significant portion of them has gone way down in the past years, coinciding with LLMs gaining popularity.
This is thoroughly debunked at this point. The frontier labs are profitable on the tokens they serve. They are negative when you bake in the training costs for the next generation.
So what. Fluctuations over a year or two are meaningless. Do you really believe that the constant-dollar price of an LLM token will be higher in 20 years?
I can see a world where energy costs rise at a rate faster than overall inflation, or are a leading indicator. In that scenario then yes I could see LLM token costs going up.
Lol are people like you going to be enough to support the large revenues? Nope.
A firm that sees rising operating expenses but not enough of an increase in revenue will start to cut back on LLM spending and become very frugal (e.g., rationing).