Anthropic has shared that API inference runs at a ~60% margin. OpenAI's margin might be slightly lower since they price aggressively, but I would be surprised if it were much different.
Yeah, but the argument people make is that when the music stops, the cost of inference goes through the roof.
I could imagine that when the music stops, advancement of new frontier models slows or stops, but that doesn't remove any current capabilities.
(And to be fair, the way we duplicate effort on building new frontier models does look wasteful. Though maybe we reach a point later where progress is no longer started from scratch.)
Ironically, apples are one of the fruits where tree ripening isn't a big deal for a lot of varietals. You should have used tomato as the example, the difference there is night and day pretty much across the board.
If humans can prove that bespoke human code brings value, it'll stick around. I expect that the cases where this will be true will just gradually erode over time.
Fusion isn't a good example. Self-driving cars are a battle between regulation and nines of reliability; if we were willing to accept self-driving cars that crashed as much as humans do, they'd be here already.
Whatever models suck at, we can pour money into making them do better. It's very cut and dried. The squirrely bit is how that contributes to "general intelligence" and whether the models are progressing towards overall autonomy due to our changes. That mostly matters for the AGI mouthbreathers, though; people doing actual work just care that the models have improved.
Just because Bob doesn't know e.g. Rust syntax and library modules well doesn't mean that Bob can't learn an algorithm to solve a difficult problem. The AI might suggest classes of algorithms that could be applicable given the real-world constraints, and help Bob set up an experimental plan to test different algorithms for efficacy in the situation, but Bob's intuition is still in the driver's seat.
Of course, that assumes a Bob with drive and agency. He could just as easily tell the AI to fix it without trying to stay in the loop.
But if Bob doesn't know Rust syntax and library modules well, how can he be expected to evaluate the Rust code the AI generates? Bugs can be very subtle and not obvious, and Rust has some constructs that are very uncommon (or don't exist) in other languages.
Human nature says that Bob will skim over and trust the parts that he doesn't understand as long as he gets output that looks like he expects it to look, and that's extremely dangerous.
Then perhaps Bob should have it use functional Scala, where my experience is that if it compiles and looks like what you expect, it's almost certainly correct.
Actions are bad, but they're free (to start) and just good enough that they're useful to set up something quick and dirty, and tempt you to try and scale it for a little while.
Which, as far as I found so far, means Forgejo. Haven't found any others. And even Forgejo Actions says that it's mostly the same as Github Actions syntax, meaning you still have to double-check that everything still works the same. It probably will, but if you don't know what the corner cases are then you have to double-check everything. Still, it's probably the best migration option if you rely on GHA.
Team scale doesn't tend to impact this that much, since as teams grow they naturally specialize in parts of the codebase. Shared libs can be hotspots, and I've heard horror stories at large orgs about this sort of thing, though usually those shared libs have strong gatekeeping, so the problem becomes functionality living where it shouldn't (to avoid the gatekeeping) rather than a shared lib blowing up from badly merged change sets.
This was a follow-on to a study of nurses showing coffee drinkers have lower all cause mortality.
Caffeine has been shown to exert effects via adenosine receptor antagonism and influence on cAMP & AMPK pathways. These same pathways are implicated in a lot of issues with aging. Caffeine also has some anti-inflammatory properties, and coffee beans are a strong antioxidant, though I don't really think that matters much.
> Caffeine has been shown to exert effects via adenosine receptor antagonism and influence on cAMP & AMPK pathways. These same pathways are implicated in a lot of issues with aging.
That is like saying biological pathways are implicated in aging (because you said "pathways").
In any case, adenosine receptor antagonism has a pretty weak link, if any, to aging.
Additionally, we say that about virtually everything that is herbal, that it has anti-inflammatory properties. You are right, it does not matter at all.
The Bun acquisition made a little sense, Boris wanted Daddy Jarred to come clean up his mess, and Jarred is 100% able to deliver.
This doesn't make as much sense. OpenAI has a better low level engineering team and they don't have a hot mess with traction like Anthropic did. This seems more about acquiring people with dev ergonomics vision to push product direction, which I don't see being a huge win.
They do have a hot mess with traction amongst developers. Codex is far behind Claude Code (in both the GUI and TUI forms), and OpenAI's chief of applications recently announced a pivot to focus more on "productivity" (i.e. software and enterprise verticals) because B2B yields a lot more revenue than B2C.