Hacker News | ghc's comments

how does this compare to gpt-oss-120b? It seems weird to leave it out.

GPT-OSS 120B (really 117B-A5.1B) is a lot bigger. A better comparison would be to the 20B (21B-A3.6B).

OSS-120 is too old to be relevant, and four times the size.

It seems like Claude has taken Github's place in terms of developer reaction to it being unavailable. It's like everyone forgot how they did things 18 months ago.

I'm so old I remember working on databases that were designed to use RAW, not files. I'm betting some databases still do, but probably only for mainframe systems nowadays.


> Oracle® Database Platform Guide 10g Release 2 (10.2) for Microsoft Windows Itanium (64-Bit)

Well, I guess that at least confirms Oracle on Itanium (!?) still supported RAW 5 years ago.

I'm guessing everyone's on ASM by now though, if they're still upgrading. I ran into a company not long ago with a huge Oracle cluster that still employed physical database admins and logical database admins as separate roles... I would bet they're still paying millions for an out-of-date version of Oracle and using RAW.


> still supported RAW 5 years ago

I seem to remember Oracle 10g was first released over 20 years ago? It has been EOL for much longer than 5 years...


Oh you're right! I was looking at the last documentation update timestamp, but the original release was 2006. That makes a lot more sense than Itanium support in 2021.

It took me way too long to get that.

Here's my take on what he was getting at:

Build vs. buy is an eternal question in enterprises. I remember many in-house data teams trying to build tools for "digital transformation" and cloud migration about 10 years ago. The challenge was, building those tools was more expensive than those enterprises could budget for (IT as cost center), so a startup like Snowflake would easily outcompete in-house solutions with their custom, cloud-based tech stack that was necessarily complex because it needed to serve the needs of thousands of customers.

If he's right, the build vs. buy equation has shifted more towards build, at least as far as enterprise software is concerned. IT is still a cost center, but in theory an internal team can now handle more requests for custom tools without looking to outside vendors. Essentially, the cost of building in-house might be collapsing, and therefore enterprise software startups will be serving fewer customers (who would each pay more, because if solving the problem were cheap they'd do it themselves).

If you had to build a stack for dozens of customers paying huge amounts of money, how would that stack differ from the stack you'd build to serve thousands of customers? Certainly it wouldn't need to be as scalable! And that's probably what he's getting at. I think what you'd do instead, to capture those higher price point customers, is solve their problems more specifically, in a higher value manner.

Many companies already do this, investing far more in field engineers than they do in their tech stack, since customization is essential.


Thanks, this is a good explanation, though I would not have phrased it the way he did.

The writing style seems a little unnatural, but the odd grammatical error convinced me that it wasn't the result of someone asking an LLM to review the libraries and write the reviews in the voice of an intellectual who went to Harvard.

What a world we live in, that suspecting an LLM guided by a specific prompt would be my first instinct.


The trope of HN comments determining whether an article was written by AI is becoming extremely tiresome.

I firmly believe that the quality of HN comments is hurt more by people complaining about LLM-generated content than by the LLM-generated content itself.

At least the LLMs are contributing to the discussion.


If people generally thought the LLMs were contributing anything of value, then the high volume of comments against them that you're describing wouldn't exist. Instead, LLMs are contributing bad content and also the downstream criticism on top of it.

I'm of two minds, honestly.

On the one hand, I agree that LLMs, wherever perceptibly used, do nothing to aid legibility and much to hamper it. That is legitimately irritating.

On the other, it isn't at all new, is it? How LLMs write best, or at least how they write most, is just an outgrowth of the same methylphenidate style that's characterized online writing broadly construed since the days of the original Buzzfeed, which might as well have been called "Slopchute" if we were using those words that way then. Certainly it more than any other one source is responsible for the decay of cultural discourse that made the current troubles first possible and then inevitable - especially thanks to the huge volume of such useless crap (and its worse imitators) in these models' training sets.

I would certainly like less of the slop, as much as anyone. On the other hand, it's surprising to me at this late date to encounter people who read a lot online, and have not become accustomed to dealing with wordy junk written by Adderall casualties - that is, accustomed to dispassionately filleting a longform article on sight, skimming and glancing back and forth to identify what thesis may be present if any, and only actually settling in to read sequentially in the uncommon case where something initially mistaken for "content" has proven to be worth that level of effort.

It's surprising to me because I expect people to respect the value of their own interested attention, and not permit it to be idly wasted. Sometimes someone has something worthwhile to say, but not the skill to do a competent job of actually saying it, and so the reader is required to meet the writer considerably more than halfway. I described above what that process looks like in practice. It isn't really something I tried to learn, just something I began doing out of frustration with having my time wasted. (Is that unusual? A little while back someone here had to explain to me, with obviously strained patience, that most people experience pleasure as a direct effect of opiates, and not only as a side effect of the sudden surcease of pain. That clarified for me why so many people get hooked so easily, but it also suggests I may not be the best judge of what's "normal" in these matters, I suppose.)

In terms of difference in practice, LLM output is a little wordier, a little more of a slurry, sure - but on the other hand, precisely because the results tend to exhibit such a strong, "pattern language" form of stereotypy, I find it's actually often simpler to dissect a large quantity of LLM output for the sentence or two of actual thought underlying it than to do the same with something of similar length written by a human, whose paragraphs will almost never be instantly dismissible en bloc, the way most LLM-output paragraphs are.

I suppose that last may sound distasteful, but consider: the paragraphs we're discussing, wherever originating, are filler, and that's why we don't like their presence. These paragraphs have been filler since this was The Atlantic's unique house style back when that was still a real magazine, and they were never going to be anything but filler, so whether they were excreted by a human or a robot has nothing to say about the artistic quality of what we've already agreed, indeed taken as axiomatic, is not art. It's styrofoam! It's packing material, which we were never going to care more about than the minimal effort required to throw it away. So why care all that much whether it's hand-blown or machine-extruded?


Silver lining - it's a fun Turing Test. But yea, I absolutely agree with you. It derails entire conversations.

The author says he is a visiting researcher from ETH in Switzerland. That is, he is not a native English speaker.

Oh, that actually tracks. I should have checked his bio.

I dare say that he has access to at least some libraries that a random person can’t just breeze into.

> ...the odd grammatical error convinced me that it wasn't the result of someone asking an LLM...

That's easily solved by models intentionally introducing the odd grammatical error here and there, just enough to convince the sceptics, not so many as to give the impression of being unlettered. A bit like the mythical 'RHS button' (which stands for 'real human shitty' but in reality is called the 'Shuffle' or 'Swing' function) which is supposed to make mechanically-precise drum machines sound more like human drummers.
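The swing/shuffle idea is easy to sketch: delay every off-beat step by a fraction of the step length, so a mechanically quantized pattern lands slightly "late" the way a human drummer's would. This is a hypothetical helper for illustration, not any real drum machine's API:

```python
def apply_swing(step_times, swing=0.2, step=0.25):
    """Delay every off-beat (odd-indexed) step by swing * step seconds,
    mimicking a drum machine's shuffle/swing control."""
    return [t + swing * step if i % 2 else t
            for i, t in enumerate(step_times)]

# Four straight eighth-note steps at 0.25s spacing; off-beats land late.
print(apply_swing([0.0, 0.25, 0.5, 0.75]))
```

A real implementation would also randomize the offset slightly per hit, since a fixed swing amount is itself mechanically precise.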


"We?" I had no such trouble. You should spend less time with LLMs, if you can.

So you're basically telling me to get off HN (and the internet), where lazy writing co-authored by LLMs is increasingly becoming the norm. Great.

I was telling you to protect yourself. Now you're told. Good luck with the rest.

Ok Deckard

As straw men go, this is an attractive one, but...

When I was fresh out of undergrad, joining a new lab, I followed a similar arc. I made mistakes, I took the wrong lessons from grad student code that came before mine, I used the wrong plotting libraries, I hijacked Python's module import logic to embed a new language in its bytecode. These were all avoidable mistakes, and I didn't learn anything except that I should have asked for help. Others in my lab, who were less self-reliant, asked for and got help avoiding the kinds of mistakes I confidently made.
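For the curious: "hijacking" Python's import machinery usually means installing a custom finder/loader on sys.meta_path. A minimal sketch of the mechanism (the module name and source string here are invented for illustration):

```python
import importlib.abc
import importlib.util
import sys

class InMemoryFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Serve certain modules from an in-memory source table
    instead of the filesystem."""
    SOURCES = {"virtual_hello": "GREETING = 'hello from a hijacked import'"}

    def find_spec(self, fullname, path, target=None):
        if fullname in self.SOURCES:
            return importlib.util.spec_from_loader(fullname, self)
        return None  # defer to the normal import machinery

    def exec_module(self, module):
        # Compile the stored source and run it in the module's namespace.
        code = compile(self.SOURCES[module.__name__],
                       f"<{module.__name__}>", "exec")
        exec(code, module.__dict__)

sys.meta_path.insert(0, InMemoryFinder())

import virtual_hello  # never touches the filesystem
print(virtual_hello.GREETING)
```

Embedding a whole new language, as described above, would mean transforming the source in exec_module before compiling it - which is exactly where such a scheme gets hard to debug.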

With 15 more years of experience, I can see in hindsight that I should have asked for help more frequently because I spent more time learning what not to do than learning the right things.

If I had Claude Code, would I have made the same mistakes? Absolutely not! Would I have asked it to summarize research papers for me and to essentially think for me? Absolutely not!

My mother, an English professor, levies similar accusations about the students of today, and how they let models think for them. It's genuinely concerning, of course, but I can't help but think that this phenomenon occurs because learning institutions have not adjusted to the new technology.

If the goal is to produce scientists, PIs are going to need to stop complaining and figure out how to produce scientists who learn the skills that I did even when LLMs are available. Frankly I don't see how LLMs are different from asking other lab members for help, except that LLMs have infinite patience and don't have their own research that needs doing.


AI does not give you knowledge. It magnifies both intelligence and stupidity, with zero bias towards either. If you are of above-average intelligence, you may be able to do a little more than before, assuming you were trained before AI came along. And if you are not so smart, you will be able to make larger messes.

The problem, and I think the article indirectly points at this, is that the next generation won't learn to think for themselves first. So on average they will end up on the 'B' track rather than developing their intelligence. I see this happening with the kids my kids hang out with. They don't want to understand anything because the AI can do it for them, or so they believe. They don't see that if you don't learn to think about smaller problems, the larger ones will be completely out of reach.


Maybe the solution is an AI that acts as an instructor instead of just trying to solve everything itself. I do this with my kids: when they ask me how to do something, I give them hints but don't outright do it all for them. The article's author mentioned in the first part that this is how they would instruct too.

I recently heard of a professor who said to the class: you can use an AI to solve the assignments; however, I'll see whether you really understand the material on the final exam.

Students are given student-level problems not because someone wants the results, but so they can learn how solving problems works. Solving those easy problems with an LLM does not help anyone.

The Apollo program cost about as much as 22 Gerald Ford class nuclear carriers.

Amortized over the whole program, each launch cost the same as building 2 Gerald Ford class nuclear carriers, or $26 billion USD.
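A quick sanity check of the arithmetic above, assuming roughly $13B per Ford-class carrier and 11 crewed Apollo launches (both figures are my assumptions, not from the comment):

```python
carrier_cost = 13                 # assumed cost of one Gerald R. Ford class carrier, USD billions
program_cost = 22 * carrier_cost  # "22 carriers" -> ~$286B total program cost
crewed_launches = 11              # assumed number of crewed Apollo launches
per_launch = program_cost / crewed_launches
print(per_launch)                 # 26.0 -> two carriers' worth per launch
```

Under those assumptions, the two figures in the comment are consistent with each other.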


Here I was thinking this article would tell me how to turn my unmanaged switches into routers, but no, "anything" actually means "any fully featured general purpose computer with networking".


I suppose if you manage to get OpenWRT or something onto your switch you could use it as a router.


That's theoretically possible but a bad idea for a managed switch, because they seldom have enough CPU performance or IO between the CPU and switch silicon to provide respectable routing performance. For an unmanaged switch, it's more likely that whatever CPU core is present (if any) doesn't have enough resources to run a real network stack.


Something many may not know is that beyond his own novels, Tracy was also deeply involved in Jonathan Harr's book, "A Civil Action." The two were friends, and Tracy told Harr about the courtroom case. Later, when Harr got stuck, Tracy worked with him to edit and give feedback on his drafts.

