Hacker News

> "This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. "

Also AI written, but I suppose that's expected. The big AI companies seem to want all their blog posts and communications to carry the AI tells, so you know they didn't actually bother writing them.



I'd love to be able to actually articulate what makes AI writing read like AI writing. A few of the common tells come to mind (contrast construction, hyperbole, overused or wrongly used em-dashes, etc). The above quote doesn't have any of that, and yet it certainly feels AI. The first sentence (both what it says and where it's placed) suggests AI to me. But I couldn't quite tell you why.


Before AI this style of prose was called "thank you for coming to my TED talk", with a little bit of "LinkedIn broetry". Confident assertions and pat explanations about truths that will make you a better person upon internalization; a pop psychologist convincing you of an unintuitive and surprising new idea about how the universe works that catches you off guard but then turns your perception on its head and revolutionizes the way you see the world. Contemporary marketing speak of a particular "coolly subverting your expectations and injecting the truth straight into your veins" flavor.


It is a style that AI (intentionally?) emulates for sure, though the "regression to the mean" and general vagueness seem to be what really separate the classic TED talk/puffy blog from AI. Humans like specific examples and anecdotes; AI fails at producing those.


I think the main tell is that it says basically nothing; it reads like it was written by a human paid per word. Humans prefer easy-to-read articles that don't hide the point behind such fluff, so there is no reason to do it except to spam words.


> it reads like a human that is paid per word

That's essentially it. But not only that: we learned to distinguish things written by humans for humans from things written by humans (paid by the word) for SEO. LLMs tend to produce text that would be great for SEO, so it stands out as not written for humans.


Wikipedia has an excellent article about exactly this [1], in their editor information section. There's a section called "Undue emphasis on significance, legacy, and broader trends" that provides some examples:

>Words to watch: stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing/enduring/lasting, contributing to the, setting the stage for, marking/shaping the, represents/marks a shift, key turning point, evolving landscape, focal point, indelible mark, deeply rooted, ...

Once I read this, it started sticking out to me all the time.

[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
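As a rough illustration (my own sketch, not anything Wikipedia provides), you could scan a piece of text against a small subset of those "words to watch" phrases:

```python
# A few phrases drawn from Wikipedia's "words to watch" list (illustrative
# subset only; the real list is longer and context-dependent).
WATCH_PHRASES = [
    "stands as", "serves as", "is a testament", "pivotal role",
    "underscores its importance", "reflects broader", "enduring legacy",
    "setting the stage for", "evolving landscape", "indelible mark",
    "deeply rooted",
]

def watch_hits(text: str) -> list[str]:
    """Return the watch-list phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [p for p in WATCH_PHRASES if p in lowered]

sample = ("The festival stands as a testament to the town's enduring legacy, "
          "setting the stage for an evolving landscape of local art.")
print(watch_hits(sample))
# → ['stands as', 'enduring legacy', 'setting the stage for', 'evolving landscape']
```

Of course, as the comments below point out, phrase counting alone doesn't settle anything; the tell is the density of this language in one piece, not any single word.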


I like the take on "undue emphasis on significance." To me, that's such an obvious tell. That's actually an old pre-LLM tell, we just used to call it "pretension." Once we get into long lists of specific words, it feels like we're getting into rules. You can't use this or that word cuz LLMs do. That's crazy problematic. It has to be about the way the emphasis and the overuse of certain words in a single piece reflects inauthenticity. But, eff if I'm gonna stop using "significance" cuz some LLM does.


Hey, if AI is allowed to make vibe-judgements based on seeing a large corpus of data, why shouldn't humans :P


I've been having CC identify LLMisms in generated text. It does a decent job, but has no idea that em-dashes are sus.

https://claude.ai/share/8fa5de6a-e79d-414c-834f-9bc9aa87c9bc


I cannot stand that I'm expected to adjust my use of em-dashes because LLMs use them (incorrectly, typically). It brings up all these feelings from my younger punk / indie days when normies would get into a band we were into, and then we were expected to not like that band anymore. Since then I've tried to abide by what I call the Farting Billionaire Principle: people shouldn't have to change their ways every time a billionaire farts.


Well, the first two sentences are hyperbole. And you could argue that the last sentence is a less conspicuous sort of contrast construction.


> The big AI companies seem to want to make all their blog posts and communications have the AI tells so you know they didn't actually bother writing them

Investors want to see you use your own product; if a company doesn't seem to feel the product is good enough to write its own announcement, investors would worry about its future.

And AI is still a product primarily aimed at investors and not consumers.



