
He wouldn't want to be accused of actually believing in anything

Fickle Clown Economics

I think this is just a tough spot for any of these kinds of companies to be in. It's easy to suggest that they should just ban AI content or allow filtering it out, but the problem isn't that easy to deal with. Adding a manual "AI generated" tag stops being useful as soon as a negative consequence exists for using it.

It brings to mind the issues with AI content on Pixiv. They added an AI tag and then made it possible to filter AI-generated work out of your feed. So now it's pretty common for users not to tag their AI content, and it's pretty hard for the platform to keep up with the flood.

If they want to remain open to small-time artists, they don't really have any choice besides labeling the artists they know aren't using AI.


"several times the speed of sound" is obviously just meant to mean really fast to earthlings in relation to their speed of sound.

It’s not a constant on Earth either; they should have used km/h instead for a relatable number.

It's genuinely ridiculous to suggest that freedom and meritocracy (among other things) were why America was able to do this first. This stuff was before the Civil Rights Act.

There are endless stories about Americans being sent to Europe needing to be told that they can't treat black people the way they do at home.

All of the chest thumping about being the land of the free rings hollow when you consider how recent some of this history is. The current and previous presidents were alive when the Civil Rights Act was passed!


I think it's useful to consider that NVIDIA bet on CUDA early; they've supported it since 2006. AMD has to do a lot less work, but it's still going to take a while to get all of the software into a competitive state.

Though, on the other hand, I'm not very convinced AMD is even seriously trying, given what a mess ROCm has continued to be. GCN was an excellent GPU compute architecture, but they never seemed to make much of it.

I had been willing to put up with the software support struggles too, but the way ROCm support for the Radeon VII and 5000 series had been handled really put me off.


Presumably an analyzer that makes it an error to not have an immediately traceable zero check.

C# can do something similar with null references. It can require you to indicate which arguments and variables are capable of being null, and then it emits a compiler error/warning if you pass one to something that expects a non-null reference without a null check.
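
For illustration, a minimal sketch of what that looks like with nullable reference types enabled (the names here are made up, not from any particular codebase):

    #nullable enable

    class Sketch
    {
        static string Shout(string s) => s.ToUpper();   // parameter declared non-nullable

        static void Demo(string? maybeNull)             // '?' marks a possibly-null reference
        {
            // Shout(maybeNull);                        // warning CS8604: possible null reference argument
            if (maybeNull != null)
                Shout(maybeNull);                       // OK: flow analysis has seen the null check
        }
    }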


But that’s because null is a static type. Zero isn’t a static type. How can I know if a calculation produces zero if I can’t predict the result of it at compile time?

Post-type-check analyzers can work with more than just the type information; you can really do whatever you want at this stage. The normal, highly optimized type checker handles the bulk of the checking, and the post-type-check analyzers work on the residual. You wouldn’t type check a file that doesn’t parse, and you wouldn’t run the analyzers on code that doesn’t type check.

The problem is these checks can be rather slow, and people don’t want to wait a long time for their type checking and analyzers to finish. But LLMs can both wait longer and, by internalizing the logic, reduce the number of times they need to trigger them.

Edit: I’ll need to examine this project to know where (or if) they draw the distinction between normal type checking and a post-type-check analyzer. If they blend the two and throw the whole thing into Z3, it’ll work, but it’ll be needlessly slow.

Edit: What I’m calling a post-type-check analyzer they’re calling a contract verifier, and it’s a distinct stage: ‘check’ (type check) then ‘verify’ (Z3).


I think it's about whether there's a possibility of it being zero. Of course there's no way to tell at compile time that a value will definitely be zero.

So, in pseudocode

int div(int a, int b): return a / b;

Would probably be a compile time error, but

int div(int a, int b): return b == 0 ? ERR : (a / b);

Would not, or at least that's what I'd expect.


> Of course there's no way to tell at compile time that a value will definitely be zero.

Yes there is. Dependently typed languages like Idris can inspect terms at the value level during compile time. Rather than proving that the divisor will be zero, you must statically prove that the divisor cannot be zero; otherwise the code will not typecheck.


Okay,

#include <signal.h>
int integer_division(int a, int b) { if (b != 0) return a / b; raise(SIGFPE); return 0; /* unreachable: default SIGFPE handler terminates */ }

Great.


No. In this type of language, the typical division function does not check against zero. It has a precondition that requires the caller to ensure that the divisor is not zero. If the data the caller has is completely arbitrary, then yes, the caller must use an if statement or similar. If the caller knows something about its data and can be sure that the divisor is not zero, then it doesn't need to use an if statement. But it might need to convince the proof checker that it knows what it's doing.
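
A rough sketch of that pattern, in Lean 4 since it can express the proof obligation (safeDiv and the callers are illustrative names, not anything from the thread):

    -- Division that demands compile-time evidence that the divisor is nonzero.
    def safeDiv (a b : Nat) (h : b ≠ 0) : Nat := a / b

    #eval safeDiv 10 2 (by decide)  -- typechecks: 2 ≠ 0 is decided at compile time
    -- safeDiv 10 0 (by decide)     -- rejected: `decide` cannot prove 0 ≠ 0

    -- A caller that knows its divisor has the form n + 1 needs no runtime check:
    def perGroup (total n : Nat) : Nat := safeDiv total (n + 1) (Nat.succ_ne_zero n)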

You don't appear to understand the difference between runtime and static analysis/compile time, or term-level and type-level.

Great! Explain it to us while I read to my kid!

Don't get mad because you're too lazy to even ask the AI. You'll be the first to be replaced in the workforce.

Or maybe it's over your head and you should just stick to reading children's fiction after all. Want some colouring books too?


Yes! We can always use more books and toys here!

The ‘let me google that for you’ is set to be replaced with ‘let me ask ChatGPT for you’.

This is a very antagonistic comment. Some people would call it "passive aggressive".

Dude, if you're reading to your kid, you're clearly busy doing something else. No matter how simple the concept is, if you don't pay attention you're not going to get it, so it's a failure on your part and not a failure on the part of the person patiently trying to explain something to you.


Or it's just some AI brain fart…

The whole thing looks vibe-coded, and vibe-designed.


The context window also limits how deeply the model can "think", and it does this in natural language. So a language suited to LLMs would have balanced density: if it's too dense, the model spends many tokens working through the logic; if it's too sparse, it spends many tokens reading and writing the code.

I think in the context of already-trained LLMs, the languages most suited to LLMs are also the ones most suited to humans. Besides just having the most code to train on, humans also face similar limitations: if the language is too dense, they have to be very careful in considering how to do something; if it's too sparse, the code becomes a pain to maintain.


I generally agree that humans and LLMs benefit similarly from programming language features. I would tweak that a bit and suggest that their ability floor is higher than the human lowest common denominator, so I would skew toward the more advanced programming languages. There are many typing/analyzer features that would be frustrating for humans to use, given that they’ll make type checking slower. This is much less of a problem for LLMs: they’re very patient and much better at internalizing the type system, so they don’t need to trigger it nearly as often.

When I was still dependent on my parents, I was stuck with a pretty outdated PC, but was allowed to get a PSP and then a PS Vita (but also only one game, ever). The reasoning was that a separate console could be taken away easily if my grades dropped, while an outdated computer couldn't do much besides schoolwork.

One of the ways I got into computing was making the most of this dumb situation by trying to run custom software (emulators, homebrew games, PDF viewers, etc.) on my console. So, something like this would've been handy.

Ultimately though, the devs probably made it because it was an interesting technical challenge.


The consequences for everyone who isn't as well fed as yourself are also morally good?

Nothing keeps people well fed like the inability to grow crops.

Funny that you say that, because blocking LNG exports from the Persian Gulf will result in fertilizer shortages and potentially a famine.

More short-term thinking.

Well, it seems to me that the liberal left agenda was kind of hijacked by big corporations. It used to be that Democrats cared about things like equal pay, labor conditions, and education costs. Now it is all about abstract things that don’t matter in the real world: animal rights and carbon emissions.

The “long term thinking” you allude to is just a mind trick to keep you at bay.


The responses to this seem unnecessarily hyperbolic.

These tests are interesting even with the understanding that the AI is just regurgitating its training. It doesn't matter whether the model is conscious or self-aware if it still goes off the rails and breaks things when prompted in this way.

As the article linked at the end of the tweet thread (https://www.arimlabs.ai/writing/loss-of-control) puts it, this is a class of vulnerability distinct from hallucination or prompt injection. The "AI apocalypse" bit was unnecessary in the title, though; it really doesn't match the message of the text.

Reminds me of a (Computerphile?) video I watched some time before the LLM revolution discussing the challenge of aligning AI toward specific goals: if you set the reward for the emergency shutoff button higher than or equal to the primary objective, the AI is encouraged to immediately press the button itself, but if you set the reward lower, it's encouraged to prevent you from pressing the button.


> The "AI apocalypse" bit

That tells you how the researchers are thinking of not only the results but the experiment as well. You may be right that the reason the models behave this way is secondary to the fact that they do, but that’s not how the researchers are asking us to look at it. They ran the experiment 300 times; it sometimes did what they thought it would, and then they framed it as if that’s all that matters.

