Hacker News | new | past | comments | ask | show | jobs | submit | misterflibble's comments | login

Subtly? I beg to differ. My team leader only communicates with me through his LLM, so his "thoughts" are not his own!

I often wonder if the popularity of LLMs among company executives comes down to them being the perfect yes-men.

They rarely disagree with any idea or proposal, providing a salve for the insecurities of their users.


I was listening to one of Altman's more recent interviews, and it sounded like he himself has LLM-induced psychosis.

I'm not a fan of Altman, but it seems debatable whether LLM psychosis is really psychosis if it's adaptive for the subject in their environment. Which seems to be the case for Altman by some measures.

I'm sure if we took one of us back in time a couple hundred years, we would be diagnosed with all sorts of machine-magic-induced psychoses.


I get what you're saying, but psychosis is a very real thing that humans can fall into; I've experienced it myself once.

Humility is the real cure, and there is a way that LLMs are specifically designed to steer away from humility and towards aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to cult followers in their pocket, if they're so inclined.


I remember him tweeting about how he can "feel the AGI" when speaking to GPT

Another meaningless, extremely cringeworthy tweet, hailed as a messianic message by many at the time.

Yeah, it's hard to say whether he's doing marketing because that's his job or whether he's really swallowed the whole pill.

Is it really hard to figure out that the owner of a company, who personally stands to make hundreds of billions, would be doing marketing when talking about said company? Do they not teach critical thinking in schools anymore; did it go away with phonics too? Why would you ever ignore the MASSIVE conflict of interest here? It's really foolish, but it's endemic, not just in tech journalism but in journalism in general, where people just take the words of others and don't apply any critical analysis to them.

It's all access journalism now, waste of time.


> Is it really hard to figure out that the owner of a company, who personally stands to make 100s of billions, would be doing marketing when talking about said company

The question isn't about what action he's taking, it's about what motivates him under the surface. Obviously what he is doing is marketing. What I'm curious about is whether he truly believes his own marketing or is just doing it because it's his job.


People who are good liars are good at it because they are lying to themselves at the same time. Even if they can compartmentalize at first, I believe after a while it gets to them too.

Definitely see our internal company agents enforcing the status quo!


> The experience is strange; you aren't able to grasp any common human aspects because there are none. You can't reason with the human, because the human isn't doing the reasoning. You can't appeal to it, because the LLM behind it is in direct support of its own and the proxy's opinions and whims.

I've sometimes wondered if the chat context is why some people think LLMs are intelligent. It's divorced from their usual experiences, and they need something like this to trigger the cognitive dissonance before they can notice an LLM's shortcomings.


I've been calling them "meat condoms". In the workplace, it's one or two warnings before completely ejecting them. On social media, instant block.

Seems to be becoming more common, even for folks who are otherwise quite pleasant to deal with. Perhaps social and workplace pressures cause people to opt for it, much like LinkedIn is a cesspool of bullshit.

That's terrific lol thanks for the link BTW!

I'm dealing with the same nonsense. I get LLM-generated reviews of my work, documents, and plans that are not grounded in reality or nuance. I regularly have to explain why the AI is wrong. I was told I should run my docs through the LLM to make them read better, but they're not even being read by humans at this point.

This is one of my fears with all this: losing one's voice, everyone's expression distilled to the mean. It has ramifications for things like recognizing whether a person is who they say they are, too. At least currently, sounding like an LLM is punished/shunned, but it's well within reason to see that shift to individuality being penalized.

I think corporations will start penalizing first, they're already doing that to some extent at my work because they want their in-house agents to only review our PRs.

Guilty as charged. In my mind, when I'm insecure about a response or don't have enough expertise in the topic at hand, I end up running it through an LLM. Lately I've been trying harder to keep my original ideas as much as possible. I'm seeing a bit of an improvement, but it's still early to tell.

"running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it? Checking against an LLM then using your own voice feels completely fine, just another type of validation before you share something. But if you actually let the LLM rewrite what you say, then I feel that's beyond "running it through an LLM"; it's basically letting the LLM write your text for you instead of just checking/validating.

The decline of writing has been going on for a long time. Well-written and grammatically correct emails have been on the downturn for a while. Consider how often people send emails in all lower case, lacking punctuation, or even without any sentence structure.

The "you need to write in a more professional, business-oriented way" demand is something a lot of people have difficulty with. Yes, this needs to be addressed earlier and more forcefully in someone's education, but the SMSification of long-form text started a while ago.

With that said, there's the "OK, you need to write long form with correct grammar when sending an email that a director or VP is CC'ed on" expectation. It used to be Grammarly as the "install this and have it fix up your grammar and tone" tool ( https://web.archive.org/web/20191104093353/https://www.gramm... GPT-1 timeframe there). However, the LLMs of today seem more accessible than Grammarly, and they largely do the same thing: fix up and refine tone.

What I don't see from back then is people decrying Grammarly saying that it's making everything sound the same.

I'm also not sure I would prefer the pre-fixup emails to what an LLM produces, unless sending coworkers to remedial writing classes is an acceptable option.


Yes, checking and validation is one thing, but there are several engineers in my area who only communicate via agent copy-paste. I challenged one fellow about that and he was furious!

> "running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it?

The article seems to imply this is what's happening, as writing style converges toward the LLM's style. You can call it what you want, but the important bit is that this appears to be how LLMs are being used.

> Checking against an LLM then using your own voice feels completely fine

Why use an LLM? If you're worried about style, starting with your own voice is more efficient. If you're worried about facts, looking something up in a primary source is best, and is probably cheaper on a few axes, especially if you need to check/validate anyway...


You have to make some mistakes in your communication (or anything) if you ever want to grow and learn.

You're absolutely right here and things have improved significantly at work after dropping this habit even if slightly.

>You're absolutely right

:|


Lol that triggered me too!

That is the complete opposite of how I use LLMs. If I do not have expertise, I ignore the LLM and search for a more trustworthy resource. LLMs lie very confidently, and if I do not have expertise, they will lie to me.

When I do have expertise, I use them because I am able to check.


Man, that's so annoying. I have a similar problem: the devops person I ask questions to literally gives me AI responses.

It's also annoying working with a non-technical "partner" who just sends me an LLM dump of how to do something.

I tried to explain it to them with an analogy: it's like showing up to a mechanic and telling them what to do based on what ChatGPT said.


Well, has it been an improvement?

No, that's why I'm complaining.

Just because thoughts are translated doesn't mean they are consumed in the process.

However I don't doubt many "team leaders" can and should be replaced with LLMs.


AI doesn't have to be conscious or sentient to take over, all that needs to happen is for politicians, law enforcement, journalists, educators etc. to uncritically parrot everything it outputs. The military is already using AI to make targeting decisions. If they just go with whatever the AI says to strike, then AI is already fighting our wars.

As a bonus, mistakes can be blamed on AI.

For many that's not a bonus, that's the goal. Consequence-free life ahoy.

Fun and games until the AI decides wiping us out is worth it.

Unfortunately you can really tell which people haven't seriously considered that possibility or seriously don't care if it happens

The scary thing is that AI decision making has been infiltrating society for decades as an unseen entity.

I would be looking for another job.

I'm fine with using LLMs as coding tools. But I find it deeply offensive when someone is very explicitly using them to communicate with me.

Communication is such a deeply human experience. It lets people feel each other out, and learn things beyond just the words being said. To have that filtered out by an LLM is just disgraceful.


I was talking to some managers and they were discussing how they'd use AI to write reviews of their employees, to which I said I would not want a review that's non-genuine and impersonal.

Their rationale is that it comes off as more professional.


Good luck finding a company that doesn't have these people if LLMs are used

Yes true! It's everywhere now!

Yes, exactly, and I am actively applying for jobs. But I feel like the next job will have this nonsense behaviour too.

I think you're gonna struggle to find companies that aren't infested with this kind of thing.

Observing the effect of LLMs on the "business side" of things, I'm increasingly thinking of these as a kind of infection against which the MBA set and their acolytes have no immune response, and I think it's going to eat a large proportion of the benefit of LLMs to most businesses (possibly overwhelming it and actually harming productivity, will depend on how much better these tools get).

LLMs are awesome at bloating your slide decks while making them really slick and complete-looking. They're great at suggesting an entire set of features on a ticket you've barely started writing... but did you actually want all those? You end up with redundant or in-context-gibberish features that leave the person actually doing the work tracking down WTF actually matters. So far they are adding overhead to communication, not just by puffing up and padding language (which isn't great either) but by adding noise "content" that can't be stripped out without talking to the person who created it and making sure it was actually just AI bullshit and not something they actually needed. That is, you can't just do the "LLM, summarize this" trick, because the author used an LLM to plan it, too, not just to pad out and gussy up something they actually thought through and wrote.

LLMs are letting people very convincingly present as having a more complete understanding of what's going on than they really do, in ways that are messing up productive work. I'm not sure business folks are going to be capable of tamping this down, because it is so in line with the way they already operate (but on speed) and helps them so very much to look good to one another while saving tons of time. This isn't just the MBA set I accuse above, either: I'm noticing that this improbably-complete deck communication upward is becoming necessary to look competent (and to ladder-climb) as an IC.

Like, I'm only starting to think this through and really observing what's going on through this lens as I've only noticed it in the last few weeks, but the more I see the more alarming this is. I think this is going to be a little like the largely-wasteful "legibility" obsession of upper management, something enabled by computerization that they find irresistible and are pretty bad at employing judiciously and effectively, but probably a lot worse in terms of harm-to-productivity, and directly affecting and changing the behavior of far more layers of an organization. They never (businesses as a whole, to anthropomorphize a bit) gained wisdom with their new powers to burn resources chasing legibility, and this is starting to look like another thing they just will not be able to use (internally! I don't even mean for actually producing external-facing results!) with restraint and taste.


I reckon you've hit the nail on the head, and if you haven't already, you should write your thoughts up as a blog post. It's great to read someone's ponderings about the state of the industry and the corporate uptake of LLMs.

And I would bet he judges your work with AI, assigns you work generated by AI, and perhaps evaluates whether you yourself use enough AI.

That's exactly what he does... wtf, are you spying on me?? Lol, but seriously, I don't know how to handle his AI delegation.

I'm perfectly content to post my poorly spelt words and grammatical errors to authenticate myself. But I know everyone is probably using the AI filter now. Why don't we just do AI-bot-to-AI-bot communication for everything? I'm kidding, I would not like that.


Please keep posting updates about this because if I could instantly fire up a game in my browser, I would definitely pay for that and play with it all day!


Can I ask what prerequisite mathematics you would need to know before reading those? I'm really interested in that topic and better understanding functional programming.


If you wish to approach Category Theory from the viewpoint of a programmer, not a mathematician, I suggest Bartosz Milewski's book Category Theory for Programmers. For this, all you need is some previous programming experience. He uses C++ and Haskell iirc but as long as you can read snippets of code, you'll be fine.

I am suggesting this since you said you want to better understand functional programming. Category Theory, as mathematicians look at it, is an extremely abstract field. If you want to do pure-math-related stuff in Category Theory, and only then, I would say the important prereqs are Abstract Algebra and Topology. I believe the motivation for Category Theory lies in Algebraic Geometry and Algebraic Topology, but you definitely don't need to be an expert in those to learn it.


Hey thank you for the excellent tips! I really appreciate it!


Here is a birds-eye view of programming (classic, functional, quantum) vs category theory vs logic -- aka the computational trilogy:

https://ncatlab.org/nlab/show/computational+trilogy

It helped me a lot in putting my existing programming knowledge into context while learning category theory.


Thank you @ctbergstrom for this valuable and most importantly, objective, course. I'm bookmarking this and sharing it with everyone.


I was thinking the same thing and actually thought they should continue building it. It looks like a great game!


lol "Tackle Rising Animation Costs"


Yes, I remember reading one of those SSD articles about MLCs, and it was so well written: quality knowledge captured on the internet. I hope someone starts up another AnandTech-like website.


I'm eager to hear other opinions on Eric's DDD book too :-D I still don't understand it!


Waste of time.


Don't give them ideas lol terrifying stuff if that happens!

