Hacker News | blackcatsec's comments

> I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

I think it comes from a position of arrogance/ego. I'll speak for the US here, since that's what I know best; the average 'techie' skews toward the higher end of the intelligence distribution. This is a very, very broad stroke, and that's intentional to illustrate my point. Because of this, techie culture has gained quite a bit of arrogance with regard to the masses. And this has been trained into tech culture since childhood, whether it be adults praising us for being "so smart", or for having "figured out the VCR", or some other random tech problem that literally almost any human being could solve by simply reading the manual.

What I've found is that in the vast majority of technical problems average people struggle with, if they just took a few minutes to read a manual they'd be able to solve a lot of it themselves. In short, I don't believe as a very strong techie that I'm "smarter than most", but rather that I've taken the time to dive into a subject area that most other humans feel neither the need nor the desire to.

There are objectively hard problems in tech, but the people solving THOSE problems in the industry are few and far between. And so the tech industry as a whole has spent the last decade or two spinning in circles on increasingly complex systems to keep feeding its own ego about its own intelligence. We're now at a point where, rather than solving the puzzle, most techies are creating incrementally complex puzzles to solve because they're bored of the puzzles in front of them. "Let me solve that puzzle by making a puzzle solver." "Okay, now let me make a puzzle solver creation tool to create puzzle solvers to solve the puzzle." And so forth. At the end of the day, you're still just solving a puzzle...

But it's this arrogance that really bothers me in tech bro culture. And, more importantly, at least in some tech bro circles, they have realized that their path to an exponential increase in wealth doesn't lie in creating new and novel ways to solve the same puzzles, but in touting AI as the greatest puzzle-solver-creation-tool puzzle solver known to man (and let me grift off of it for a little bit).


It's funny because the fundamental thing I'm speaking out against is the arrogance of human exceptionalism.

This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over.

Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions...

I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche but any material analysis of our universe shows that it is extremely unlikely that we hold that position.


The need for human exceptionalism doesn't come from the psyche or anything like that, it's just basic survival skills. Humans believe themselves to be special because that's the only belief that isn't self-destructive.

You can choose to believe humans are not exceptional, in the same way I can choose to cut off all my fingers and eat them. Why would I do that?

If what you say about LLMs is true, that's bad for me. And for you. And for our families. Because it means the intrinsic value of our lives just went down a lot. I choose not to believe it because I am not suicidal. And, ultimately, I think the people who do believe it can only ever make their lives worse. Probably my life worse too, but maybe if I'm all the way over here I'll avoid the blast radius.


I largely agree with you, but I also see this same type of thinking in people who I know are not arrogant - at least not in the techbro-ish way.

Sure, but this is absolutely not how people are viewing the AI lol.

This is way too simplistic a model of what humans provide to the process: imagination, hypothesis, testing, intuition, and proofing.

An AI can probably do an 'okay' job at summarizing information for meta studies. But what it can't do is go "Hey that's a weird thing in the result that hints at some other vector for this thing we should look at." Especially if that "thing" has never been analyzed before and there's no LLM-trained data on it.

LLMs will NEVER be able to do that, because it doesn't exist. They're not going to discover and define a new chemical, or a new species of animal. They're not going to be able to describe and analyze a new way of folding proteins and what implications that has UNLESS you are basically training the AI on random protein folds constantly.


I think you are vastly underestimating the emergent behaviours in frontier foundational models and should never say never.

Remember, the basis of these models is unsupervised training, which, at sufficient scale, gives them the ability to detect pattern anomalies out of context.

For example, LLMs have struggled with generalized abstract problem solving, such as "mystery blocks world" that classical AI planners dating back 20+ years or more are better at solving. Well, that's rapidly changing: https://arxiv.org/html/2511.09378v1


No idea how underestimated things are, but marketing terms like "frontier foundational models" don't help foster trust in a domain this hyped.

That is, even if there are cool things that LLMs now make more affordable, the level of bullshit marketing attached to them is also very high, which makes it far harder to build a noise filter.


>Hey that's a weird thing in the result that hints at some other vector for this thing we should look at

Kinda funny because that looked _very_ close to what my Opus 4.6 said yesterday when it was debugging compile errors for me. It did proceed to explore the other vector.


> Especially if that "thing" has never been analyzed before and there's no LLM-trained data on it.

This is the crucial part of the comment. LLMs are not able to solve things that haven't been solved in that exact or a very similar way already, because they are prediction machines trained on existing data. They are very able to spot outliers where such outliers have been found by humans before, though, which is important, and is what you've been seeing.


> Hey that's a weird thing in the result that hints at some other vector for this thing we should look at.

This is very common already in AI.

Just look at the internal reasoning of any high thinking model, the trace is full of those chains of thought.


But just like how there were never any clips of Will Smith eating spaghetti before AI, AI is able to synthesize different existing data into something in between. It might not be able to expand the circle of knowledge but it definitely can fill in the gaps within the circle itself

> LLMs will NEVER be able to do that, because it doesn't exist.

I mean, TFA literally claims that an AI has solved an open Frontier Math problem, described as "A collection of unsolved mathematics problems that have resisted serious attempts by professional mathematicians. AI solutions would meaningfully advance the state of human mathematical knowledge."

That is, if true, it reasoned out a proof that does not exist in its training data.


It generated a proof that was close enough to something in its training data to be generated.

That may be, and we can debate the level of novelty, but it is novel, because this exact proof didn't exist before, something which many claim was not possible with AI. In fact, just a few years ago, based on some dabbling in NLP a decade ago, I myself would not have believed any of this was remotely possible within the next 3 - 5 decades at least.

I'm curious though, how many novel Math proofs are not close enough to something in the prior art? My understanding is that all new proofs are compositions and/or extensions of existing proofs, and based on reading pop-sci articles, the big breakthroughs come from combining techniques that are counter-intuitive and/or others did not think of. So roughly how often is the contribution of a proof considered "incremental" vs "significant"?


Well, for one the proof would have to use actual proof techniques.

What really happened here was that the LLM produced a python script that generated examples of hypergraphs that served as proof by example.

And the only thing that has been verified are these examples. The LLM also produced a lot of mathematical text that has not been analyzed.


I see, thanks for the explanation!

Do you know that from reading the proof, or are you just assuming this based on what you think LLMs should be capable of? If the latter, what evidence would be required for you to change your mind?

- Edit: I can't reply, probably because the comment thread isn't allowed to go too deep, but this is a good argument. In my mind the argument isn't that coding is harder than math, but that the problems had resisted solution by human researchers.


1) This is a proof by example. 2) The proof is conducted by writing a Python program constructing hypergraphs. 3) The consensus was that this was low-hanging fruit ready to be picked, and tactics for this problem were available to the LLM.

So really this is no different from generating any python program. There are also many examples of combinatoric construction in python training sets.

It's still a nice result, but it's not quite the breakthrough it's made out to be. I think that people somehow see math as a "harder" domain, and are therefore attributing more value to this. But this is a quite simple program in the end.


One of the possible outcomes of this journey is that “LLMs can never do X”. Another is that X is easier than we thought.

Or that some quixotic problems nobody cared about enough to actually work on do have some solution.

This is it right here. I've long thought about this one and whether I should bother with an AI agent that can do all of this stuff for me, but the reality is both what you said and I'm not rich enough.

Do I want the AI Agent to take my bank account and automatically pay some bill every month in full? What if you go a little over that month due to an emergency expense you weren't prepared for? And it's not a matter of "I don't have enough in my bank account for this one time charge", but it's "I don't have enough in my bank account for this charge and 3 others coming at the end of the month." type deal.

Agents aren't going to be very good at that. "Hey I paid $3,000 on your credit card in order to prevent you from incurring interest. Interest is really bad to carry on a credit card and you should minimize that as much as possible." Me: "Yeah but I needed that money for rent this month." Agent: "Oh, yeah! I should have taken that into account! It looks like we can't reverse the charge for the payment."

Yeah, no fucking thank you LOL.


>Do I want the AI Agent to take my bank account and automatically pay some bill every month in full?

Also this supposed use case is called "Autopay" and requires zero AI. A lot of people still don't use it. Even when it includes a discount!


Could you imagine hitting a REST API and like 25% of the bytes are comments? lol


Worse than that - people will start tagging "this value is a Date" via comments, and you'll need to parse ad-hoc tags in the comments to decode the data. People already do tagging in-band, but at least it's in-band and you don't have to write a custom parser.


See also: PostScript. The document structuring extensions being comments always bothered me. I mean surely, surely in a Turing-complete language there is somewhere to fit document structure information. Adobe: nah, we will jam it in the comments.

https://dn790008.ca.archive.org/0/items/ps-doc-struc-conv-3/...


Not sure it's a fair comparison. The spec says:

"Use of the document structuring conventions... allows PostScript language programs to communicate their document structure and printing requirements to document managers in a way that does not affect the PostScript language page description"

The idea being that those document managers did not themselves have to be PostScript interpreters in order to do useful things with PostScript documents given to them. Much simpler.

For example, a page imposition program, which extracts pages from a document and places them effectively on a much larger sheet, arranged in the way they need to be for printing 8- or 16- or 32-up on a commercial printing press, can operate strictly on the basis of the DSC comments.

To it, each page of PostScript is essentially an opaque blob that it does not need to interpret or understand in the least. It is just a chunk of text between %%BeginPage and %%EndPage comments.

This is tremendously useful. A smaller scale of two-up printing is explicitly mentioned as an example on p. 9 of the spec.
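As a sketch of how little machinery that opaque-blob approach needs (hypothetical Python; it uses the DSC `%%Page:` and `%%Trailer` comments, and the function name and sample document are my own):

```python
def extract_page(ps_text: str, wanted: int) -> str:
    # Collect the lines between the "%%Page: label ordinal" comment for the
    # wanted page and the next %%Page (or %%Trailer) comment. The PostScript
    # in between is treated as an opaque blob, exactly as a page-imposition
    # tool would treat it -- no PostScript interpreter required.
    out, capturing = [], False
    for line in ps_text.splitlines():
        if line.startswith("%%Page:"):
            capturing = int(line.split()[-1]) == wanted
            continue
        if line.startswith("%%Trailer"):
            capturing = False
        if capturing:
            out.append(line)
    return "\n".join(out)

doc = """%!PS-Adobe-3.0
%%Page: 1 1
(first) show
%%Page: 2 2
(second) show
%%Trailer
"""
print(extract_page(doc, 2))
```

A real imposition program would of course also honor `%%BeginSetup`/`%%EndSetup` and the document prolog, but the page-slicing itself really is just comment scanning like this.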


Reminds me how old versions of .NET used to serialize dates as "\/Date(1198908717056)\/".
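That format is epoch milliseconds wrapped in an escaped pseudo-comment, and unpacking it takes a few lines (a sketch; the function name and regex are my own, and `json.loads` is used only to undo the `\/` escaping):

```python
import json
import re
from datetime import datetime, timezone

def parse_dotnet_date(raw_json: str) -> datetime:
    # json.loads turns the escaped "\/" back into a plain "/"
    s = json.loads(raw_json)
    m = re.fullmatch(r"/Date\((-?\d+)\)/", s)
    if not m:
        raise ValueError(f"not a .NET JSON date: {s!r}")
    ms = int(m.group(1))
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

d = parse_dotnet_date(r'"\/Date(1198908717056)\/"')
print(d.isoformat())
```

The `\/` escaping was apparently meant to make the marker survive JSON round-trips unambiguously, but any consumer still ends up writing an ad-hoc parser like this one.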


> Could you imagine hitting a rest api and like 25% of the bytes are comments? lol

That's pretty much what already happens. Getting a numeric value like "120" by serializing it through JSON takes three bytes. Getting the same value through a less flagrantly wasteful format would take one.

I guess that's more than 25%. In the abstract ASCII integers are about 50% waste. ASCII labels for the values you're transferring are 100% waste; those labels literally are comments.

If you're worried about wasting bandwidth on comments, JSON shouldn't be a format you ever consider, for any purpose.
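The overhead arithmetic above is easy to check directly (a sketch; the field name is made up):

```python
import json
import struct

value = 120
as_json = json.dumps(value).encode()   # b'120' -> 3 bytes of ASCII digits
as_binary = struct.pack("B", value)    # the same value as one unsigned byte
print(len(as_json), len(as_binary))

# The key name is pure overhead relative to a positional binary layout.
labeled = json.dumps({"temperature": value}).encode()
print(len(labeled))
```

Whether that overhead matters is a separate question, of course; gzip on the wire claws most of it back, which is part of why JSON gets away with it.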

lol


HTML and JS both have comments, I don't see the problem


And both are poor interchange formats. When things stay in their lane, there is no "problem." When you try to make an interchange format using a language with too many features, or comments that people abuse to add parsable information (e.g. "type information") then there is a BIG problem.


« HTML is a poor interchange format. » - quote of the century -


It caused all kinds of problems, though those tend to be more directly traceable to the "be liberal in what you accept" ethos than to the format per se.


Likewise. I once got pulled over by the police because they insisted that my license plate had been turned in and I was driving without valid plates.

They called other officers, ran the plate, ran the VIN, ran the plate, ran the VIN. I dunno I think we sat there for almost an hour before they told me why they pulled me over and what was up.


I'll make no judgment on whether or not she is telling the truth, since the article itself isn't enough validation to say so; instead I'll comment on the comments in this thread.

At what point is automated enforcement a good or a bad thing for law breaking? We have yet to grapple with that as a society, and the short answer is there's no easy answer to this problem. Both for precisely the reason this article calls out (that overnight location of car is not a 100% accurate representation of residency, and fixing it seems like a mess); but also because people ARE inherently selfish and REALLY do not like the rules applying to them equally.

A great many people in the United States, particularly white people (sorry, I'm going to bring race into this because it's important), enjoy some level of flexibility on which laws they follow and when. Certainly more flexibility than the average black experience. In fact, this problem is so bad that states like California have had to institute policies that tolerate things like a license plate light being out, because the profiling is so catastrophically bad that it's completely unfair.

So now, we have an automated system that at least tries to provide some level of fair enforcement. At least for now, things like speed cameras, red light cameras, license plate readers, etc. don't appear to openly consider racial bias in the immediate decision making process on whether the law is enforced or not. (There are other biases, of course, and even indirect bias with regards to where these things are placed, but I'll digress a bit here).

But even aside from the racial divide, the class divide on enforcement is a problem. And the upper classes have generally enjoyed a level of insulation from complying with laws, which just continues to go up the higher you climb (See: Epstein files). But that's on the more extreme end.

At any rate, better enforcement of laws that are now crossing the lower to middle class divide because automation allows us to do so is certainly an interesting social problem.


Is putting a bunch of red light cameras in a black neighborhood to catch and fine red-light runners an anti-black policy because it imposes automatic punishment on black drivers who are running red lights? Or pro-black because it helps secure the safety of black pedestrians who deserve not to have people breaking traffic laws around them? What if it turns out that even though the neighborhood is black the car traffic on that street has a greater percentage of non-black drivers than the neighborhood population? What if it turns out that black people run red lights at a rate much higher than other races everywhere in the country, so no matter where you put up red light cameras it will always catch and fine a disproportionate number of black drivers?

Regardless of whether you approve or disapprove of automatic red light cameras, you can construct an argument that either having them or not having them is the policy that is actually racist against blacks.

More generally, whether automated law enforcement is good or bad depends highly on how good or bad the law is, which people legitimately disagree about; and also how reliable the automatic enforcement is.


To be fair, the first point is a good one. But I'd argue that you should deploy them everywhere in order not to be racist, since we already generally know that red light cameras are revenue-generating devices. Is there some data on whether they increase safety? Preferably unbiased (probably not). Unsure.

Nonetheless, a fair point that deserves analysis. (My vote, to be fair, is ask the community what they want and put it up to a vote. With honest information on safety data versus revenue generation)


What are the boundaries of the community that votes? What if the racial demographics of that community have changed recently, in ways that affect how the vote turns out? What if some people in that community are aware of these voting patterns and explicitly bring up race when engaging in public discussion about the merits or demerits of the red-light-camera-policy, because it's important? What if they try to change the boundaries of the voting district in order to include/exclude more people who they think will vote with/against them on the red-light-camera issue, in ways that highly correlate with race?


I hadn't considered this so eloquently with LLM text output, but you're right. "LLMs make everything sound profound" and "well-written bullshit".

This has severe ramifications for internet communications in general on forums like HN and others, where it seems LLM-written comments are sneaking in pretty much everywhere.

It's also very, very dangerous :/ Because the structure of the writing falsely implies authority and trust where there shouldn't be, or where it's not applicable.


I question that as well, it's also why Go is extremely popular. Could it just be a pendulum swing back towards static linking?

Wonder when some enterprising OSS dev will rebrand dynamic linking in the future...


CGO_ENABLED=0 is sigma tier.

I don't care about glibc or compatibility with /etc/nsswitch.conf.

Look at the hack Rust does because it uses libc:

> pub unsafe fn set_var<K: AsRef<OsStr>, V: AsRef<OsStr>>(key: K, value: V)


> I don't care about glibc or compatibility with /etc/nsswitch.conf.

So what do you do when you need to resolve system users? I sure hope you don't parse /etc/passwd, since plenty of users (me included) use other user databases (e.g. sssd or systemd-userdbd).


Most software doesn't need to resolve users. You also can always shell out to `id` if you need an occasional bit of metadata.
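The shell-out approach can be sketched like this (Python here purely for illustration, not Go; it assumes a POSIX system with the coreutils `id` command on PATH):

```python
import subprocess

def lookup_user(name: str) -> dict:
    # Ask the system's own tooling instead of parsing /etc/passwd directly,
    # so NSS-backed user databases (sssd, systemd-userdbd, LDAP, ...) are
    # still consulted even though we never link against glibc's NSS.
    def run_id(flag: str) -> str:
        return subprocess.run(
            ["id", flag, name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    return {"name": name, "uid": int(run_id("-u")), "gid": int(run_id("-g"))}

print(lookup_user("root"))
```

Spawning a process per lookup is obviously slower than an in-process NSS call, but for the "occasional bit of metadata" case the parent describes, that cost is negligible.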


That's a fair point, and shelling out to id is probably a good solution.

I guess what bothers me is the software authors who don't think this through, leaving applications non-functional in these situations.

At least with Go, if you do CGO_ENABLED=0, and you use the stdlib functions to resolve user information, you end up with parsed /etc/passwd instead of shelling out to id. The Go stdlib should maybe shell out to id instead, but it doesn't. And it's understandable that software developers use the stdlib functions without thinking all too much about it. But in the end, simply advocating for CGO_ENABLED=0 results in software that is broken around the edges.


On the other hand, the NSS modules are broken beyond fixing. So promoting ecosystems that don't use them might finally spur the development of alternatives.


Could be interesting. What do you see as the main problems with NSS? I've never needed to use it directly myself. It seems quite crusty of course, but presumably there's more that you're referencing.

Moving from linking stuff in-process to IPC (such as systemd-userdbd is promoting) _seems_ to me like a natural thing to do, given the nastiness that can happen when you bring something complex into your own address space (via C semantics nonetheless). But I'm not very knowledgeable here and would be interested to hear your overall take.


NSS/PAM modules have to work inside arbitrary environments. And vice versa, environments have to be ready for arbitrary NSS modules.

For example, you technically can't sandbox any app with NSS/PAM modules, because a module might want to send an email (yes, I saw that in real life) or use a USB device.

NSS/PAM need to be replaced with IPC-based solutions. systemd is evolving a replacement for PAM.

And for NSS modules in particular, we even have a standard solution: NSCD. It's even supported by musl libc, but for some reason nobody even _knows_ that it exists. Porting the NSCD protocol to Go is like 20 minutes of work. I looked at doing that more than once, but got discouraged by the other 99% of complexity in getting something like this into the core Go code.


Why are you excited for this? They’re not going to give YOU those peoples’ salaries. You will get none of it. In fact, it will drag your salary through the floor because of all the available talent.


I’m excited as a computer scientist to see it happening in my lifetime. I am not excited for the consequences once it’s played out. Hence my comment about retiring, and empathy for everyone who is still around once I do. I never got into this for the money - when I started, engineers made about as much as accountants. It’s only post 1997 or so that it became “cool” and well paid. I am doing this because I love technology and what it can do and the science of computing. So in that regard it’s an amazing time to be here. But I am also sad to see the black box cover the beauty of it all.


I'm very confused about this. Salary is only one portion of your total compensation. The vast majority of tech companies offer equity in the company. The two ways to increase the FMV of your equity are: increase your equity stake or increase the total value of the equity available. Hitting the same goals with fewer people means your run rate is lower, which increases the value of your equity (the FMV prices in lower COGS for the same revenue). Also, keeping people on staff often means you want to offer them increased equity stakes as part of an employment package. Letting staff go means more of that available equity pool can be distributed to remaining employees.

We aren't fungible workers in a low skill industry. And if you find yourself working in a tech company without equity: just don't, leave. Either find a new tech company or do something else altogether.


Equity is negotiable just like salary, and if supply of developer labor increases with the same or less demand, you'll get less equity just like you will get less salary.


I can't believe the person you replied to thinks they're going to get some magically larger amount of equity because you can hopefully do more with fewer people. That's assuming the entire business landscape doesn't also change with AI, disincentivizing so much investment in companies in the first place because someone else with AI can create a competitor in a shorter amount of time...


They’re also betting they’re the P99 engineer. Most do. 98% aren’t.


In the last three startups I worked at I didn’t bother exercising my vested equity - even a successful exit would at best triple the price of those shares - not worth the risk. One of those three startups already failed.

