
https://en.wikipedia.org/wiki/TRON_project

I did wonder, reading such a comment, whether it was hyperbole, but not only is it documented, it is far worse than that. The free market is only ever enforced in the direction that suits the US, and the vassal states get screwed.


Yeah, the demo where they showed multiple videos playing in separate windows on an 80186 was _amazing_ --- I _really_ wish that running TRON-OS as a desktop on commodity hardware were well-documented --- in particular, it would be _awesome_ on an rPi.

> The free market is only ever enforced in the direction that suits the US

I mean, come on. If it was free in both directions, the US might lose sometimes!!

Sigh. It's so sad. Stuff like this is why free-marketeers (and in particular libertarians) earn my ire. There is not a single economy in the world that is an actually free market. Capital can move fairly freely and labor not at all.


To echo your point, there is no "art" at all without "technology"; from cave paintings to paint tubes to digital tablets...

Is that true? I think for it to be true we'd have to abstract the definition of technology to the point of uselessness.

You can draw images in the sand. Is a stick "technology"? What about using your finger?

Do we need paints? There are natural dyes. I don't mean in the sense of extracting things; some are as simple as "smash this berry". I believe the answer to this is rather critical, since you specifically mention cave paintings. Many of those were done by hand, not by brush.

What about things like rock balancing? Sand sculptures? Singing and other vocal music? Poetry (spoken, not written)? Storytelling (ditto)? And so on.

There is so much we consider art that can be done by any human with no tools or external objects at all. I won't even mention how people call a sunset a work of art, and I do think we should avoid that, since it has the same problem I bring up with defining technology. But I do not think most people would consider speech or vocal sounds technology, though certainly we would include things like writing.


You strengthen my point about τέχνη.

It takes a considerable amount of development before you can draw a distinction between art and technology as separate concepts at all. For a long time there was no split because it was difficult to conceptualize how to divide the two.


KDE's hard switch to Wayland broke so many things in my workflows, starting from what used to be a perfect system. For keyboard expansion, espanso/ydotool crash bi-hourly and I couldn't pinpoint the source, clipboard sharing between applications doesn't work anymore, global shortcuts have been limited... The essence is the same, but it is so broken that it has a real productivity impact that will take a lot of effort to correct, and would depend on upstream fixes...

On that matter, wouldn't an AI flag for submissions help HN? I wouldn't flag a submission just for LLM style, as that seems too harsh, but I don't want to read them -- if only because I don't like LLM prose.

There are so many submissions where most of the discussion is about whether there is any human effort behind the content, or whether the LLM played a purely assistive role like translating. It's really devaluing HN, IMO. Not sure how much an AI flag would help, or whether it would introduce new issues, given how difficult the problem is, though.


> talkie is a 13-billion-parameter language model trained on pre-1931 text
> It can produce outputs that are inaccurate or offensive
> but moderation is [only] applied

I don't think you can get even a moderate version of the opinions of a person from the '30s. What even is the point of this? Open any book from the time and you will get far more "current-day offensive" stuff. Given how hard it is to believe that there was no temporal leaking, and how inaccurate the results are, what use is it?

Moderation also seems to silently hang up the chat.


> What even is the point of this?

> language model trained on pre-1931 ENGLISH text

English-only omits Nazi propaganda. Pre-1931 omits Frankfurt School sophistry and all Communist propaganda. A low-background LLM.


At this point I've become paranoid about my own writing. The LLM style seems to have become worse with time, more formatted, with only a few syntactic templates it forces everything through. So that spurred me to write a lot more of the write-ups and blog posts that had been lying around. But now I'm reading my own lines wondering whether they feel AI-written. The only thing I can be sure of are my ESL-isms, and my convoluted, unending, extremely hard-to-parse sentences that would just deter people from reading.

> As Philippe Schnoebelen discovered in 2002 [1], languages cannot reduce the difficulty of program construction or comprehension.

From a model-checking point of view. This is about taking a proof-theoretic approach...

Your last paragraph is also quite wrong: a machine learning model could very well learn to solve an NP-complete problem easily, because this property does not say anything about average-case complexity (and we should consider probabilistic complexity classes, so the picture is even more "complex").


> From a model-checking point of view. This is about taking a proof-theoretic approach...

No. In complexity theory we deal with problems, and the model-checking problem is that of determining whether a program satisfies some property or not. If your logic is sound, you can certainly use an algorithm based on the logic's deductive theory (which could be type theory, but that's an unimportant detail) to decide the problem, but that can have no impact whatsoever on the complexity of the problem. The result applies to all decision procedures, be they model-theoretic or deductive (logic-theoretic).

> Your last paragraph is also quite wrong: a machine learning model could very well learn to solve an NP-complete problem easily, because this property does not say anything about average-case complexity

No. First, it's unclear what "average complexity" means here, but for any reasonable definition, the "average complexity" of NP-hard problems is not known to be tractable. Second, complexity theory approaches this issue (of "some instances may be easier") using parameterised complexity [1], and I'm afraid that the results for the model-checking problem - which, again, is the inherent difficulty of knowing what a program does regardless of how you do it - are not very good. I mentioned such a result in an old blog post of mine here [2]. (Parameterised complexity is more applicable than probabilistic complexity here because even if there were some reasonable distribution of random instances, it's probably not the distribution we'd care about.)
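
(For readers unfamiliar with the term, the standard notion of tractability in parameterised complexity is fixed-parameter tractability: an algorithm running in time

    f(k) \cdot n^{O(1)}

where n is the input size, k is the chosen parameter (e.g. the size of the specification being checked), and f is some computable function of the parameter alone.)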

There is no escape from complexity limits, and the best we can hope for is to find out that problems we're interested in have actually been easier than we thought all along. Of course, some people believe that the programs people actually write are somehow in a tractable complexity class that we've not been able to define - and maybe one day we'll discover that that's the case - but what we've seen so far suggests it isn't: if programs that people write were somehow easier to analyse, then we'd expect the size of programs we can soundly analyse to grow at the same pace as the size of programs people write, and nothing could be further from what we've observed. The size of programs that can be "proven correct" (especially using deductive methods!) has remained largely the same for decades, while the size of programs people write has grown considerably over that period of time.

[1]: https://en.wikipedia.org/wiki/Parameterized_complexity

[2]: https://pron.github.io/posts/correctness-and-complexity#corr...


I'm not sure what to make of TFA (I don't have time right now to investigate in detail, but the subject is interesting). It starts by saying you can stop generation as soon as you have an output that can't be completed -- and there are already more advanced techniques that do that. If your language is typed, then you can use a "proof tree with a hole" and check whether there's a possible completion of that tree. References are "Type-Constrained Code Generation with Language Models" and "Statically Contextualizing Large Language Models with Typed Holes".
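
To make that concrete, here is a rough sketch of what such constrained decoding looks like; the names and the completability check are placeholders of mine, not anything taken from those papers:

    # Greedy decoding where any next token that makes the partial program
    # impossible to complete is masked out before choosing.
    def can_still_complete(prefix: str) -> bool:
        # Placeholder: a real checker would build the partial parse /
        # proof tree with a hole and ask whether some well-typed
        # completion of that hole exists.
        return True

    def constrained_step(token_scores: dict[str, float], prefix: str) -> str:
        # Keep only tokens that leave the prefix completable, then pick the
        # highest-scoring one (a real decoder would also handle the case
        # where nothing is allowed, e.g. by backtracking).
        allowed = {t: s for t, s in token_scores.items()
                   if can_still_complete(prefix + t)}
        return max(allowed, key=allowed.get)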

Then it switches to using an encoding that would be more semantic, but I think the argument is a bit flimsy: it compares chess to the plethora of languages that LLMs can spout somewhat correct code for (which is behind the success of this generally incorrect approach). What I found more dubious is that it brushed off syntactic differences by saying "yeah, but they're all semantically equivalent". Which, it seems to me, is kind of the main problem here; basically any proof is an equivalence of two things, but it can be arbitrarily complicated to see it. If we consider this problem solved, then sure, we can get better things...
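
A toy illustration of what I mean (my own example, not anything from TFA): the two functions below are semantically equivalent, but seeing that requires an actual argument about the closed-form sum, not just normalising away surface syntax:

    def sum_squares_loop(n: int) -> int:
        # 0^2 + 1^2 + ... + (n-1)^2, computed term by term
        total = 0
        for i in range(n):
            total += i * i
        return total

    def sum_squares_closed(n: int) -> int:
        # The same value via the closed-form formula (for n >= 0)
        return (n - 1) * n * (2 * n - 1) // 6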

I think that without, e.g., a Haskell PoC showing great results, these methods will have a hard time getting traction.

Please correct any inaccuracies or misunderstandings in this comment!


Author here - thanks for engaging.

On existing techniques - the Type-Constrained Generation paper is discussed in the blog post (under Constrained Decoding), and I'd group typed holes in the same bucket.

The problem with those methods is that they're inference-time: they don't update the weights. In this case, constrained decoding prevents the model from saying certain things, without changing what the model wants to say. This is especially problematic the more complex your type system gets, without even taking into account that type inference is undecidable for many of these systems.

Meaning, if I give you a starting string, in the presence of polymorphism and lambdas you might not always be able to tell whether it completes to a term of a particular type.

On the syntactic difference: I'd gently reframe. The question isn't whether syntactically different programs are semantically equivalent, it's that regardless of which form you pick, the existing methods don't let the model learn the constructor choice.

That's what the next section is about.


Thank you for your reply. FTR, I find the subject very interesting and I hope there will be more work on this line of approach.

> The problem with those methods is that they're inference-time

I agree, I just thought it was missing some prior art (not affiliated with these papers :-P)

What is not clear to me at all: is this the draft of a research idea, or is there already some implementation coming in a later post?

It seems to me that such an idea would be workable for a given language with a given type system, but that there would be a black-magic step in training a model that works in a language-agnostic manner. Could you clarify?


There is an existing implementation validating this idea, and the plan is to make it publicly available at some point.

> It seems to me that such an idea would be workable for a given language with a given type system, but that there would be a black-magic step in training a model that works in a language-agnostic manner.

That's correct. The blog post alludes to infrastructure building as a necessary component of making that happen, for that exact reason. I.e. while it's "easy" to generate a dependent pair in this way, generating an entire dependently typed AST is much more difficult. On the positive side, this is more of a software engineering effort than a research one.


Ok, thank you for the info. Do you have any idea when "at some point" might be? I'd love to check it out.

I've had my digital art flagged a few times for various reasons (automatic copyright-infringement and NSFW filters) -- so this is nothing new (in particular, the flagged artwork blocked the upload of some artists' songs). The only real requirement is a reasonable appeal process. In all cases we got an automated approval after appeal, but it can introduce an untimely delay.

Honestly, I hope that the AI filter would be much better in terms of false positives than the aforementioned ones, if only because it should be easier to do well with statistical methods.


I have been thinking a lot about a notion of self-paradoxical knowledge, meaning knowledge that actively makes your reasoning worse. For example, knowledge of extremely rare diseases causes the mind to overestimate their importance by many orders of magnitude (there are many variants of this effect). Or, attempts to explain some concepts of the object/subject construction tend to use language grounded in the notion of a shared objective reality, which moves you further from true understanding -- in other words, "the tao that can be named is not the tao".

I didn't think "There Is No Antimemetics Division" did very well with its premise, but the premise is quite fascinating, and it's the closest I've seen to this concept. Are there other explorations of similar ideas?


> I have been thinking a lot about a notion of self-paradoxical knowledge, meaning knowledge that actively makes your reasoning worse.

There could be a hypothetical class of ideas where just knowing about them is actively harmful. For a fictional example, imagine learning how to detect a hostile alien race that has been living with us on Earth all this time. Or imagine that one day we invent a thought experiment that induces psychosis in anyone who tries to unravel it.

I think the keyword for these type of ideas is infohazard: https://en.wikipedia.org/wiki/Information_hazard - the See Also section has a few interesting examples.


I had seen this, but all the examples correspond to an actual, external threat that results from the knowledge. I was thinking more of the Buddhist parable that men don't know when they'll die because only buddhas are able to live with that information. I guess it's very close to 'malinformation', but that is still about an external actor manipulating what you know toward an external goal, rather than something intrinsic to the information.


"The Silence" in Doctor Who touches on similar themes. https://tardis.fandom.com/wiki/Silent#Amnesia_and_hypnotic_a...


The opposite of this is also fascinating to me. There are false beliefs that make the people who hold them better off by some metrics. Like the idea that hard work leads to success: we all know there is some element of luck, but even so, people who discount luck and believe only in hard work tend to do better.


That's a good point. I think this one can easily be resolved on a factual level, since hard work is one of the few variables you can actually control. But it is more interesting from an emotional point of view, since in many cases it would be an article of faith with the implicit fear that it might not be true.

There are variations of this, such as composition theory in art getting good results based on completely false assumptions, but these tend to fall under epistemic underdetermination.

