I'm not a fan of Altman, but it seems debatable whether LLM psychosis counts as psychosis if it is adaptive for the subject given their environment. Which seems to be the case for Altman by some measures.
I'm sure if we took one of us back in time a couple hundred years we would be diagnosed with all sorts of machine-magic induced psychoses.
I get what you're saying, but psychosis is a very real thing that humans can fall into and I experienced it myself once.
Humility is the real cure, and there is a way that LLMs are specifically designed to steer away from humility and towards aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to cult followers in their pocket, if they're so inclined.
Is it really hard to figure out that the owner of a company, who personally stands to make hundreds of billions, would be doing marketing when talking about said company? Do they not teach critical thinking anymore in schools; did it go away with phonics too? Why would you ever ignore the MASSIVE conflict of interest here? It's just really foolish, but it's endemic, not just in tech journalism but in journalism in general, where people just take others' words at face value and don't apply any critical analysis to them.
> Is it really hard to figure out that the owner of a company, who personally stands to make 100s of billions, would be doing marketing when talking about said company
The question isn't about what action he's taking, it's about what motivates him under the surface. Obviously what he is doing is marketing. What I'm curious about is whether he truly believes his own marketing or if he is just doing it because it's his job.
People who are good liars are good at it because they are lying to themselves at the same time. Even if they can initially compartmentalize, I believe after a while it gets to them too.
> > The experience is strange; you aren't able to grasp any common human aspects because there are none. You can't reason with the human, because the human isn't doing the reasoning. You can't appeal to it, because the LLM behind it is in direct support of its own and the proxy's opinions and whims.
I've sometimes wondered if the chat context is why some people think LLMs are intelligent: it's divorced from their usual experiences, and they need something like this to feel the cognitive dissonance before they can notice LLM shortcomings.
Seems to be becoming more common, even for folks who are otherwise quite pleasant to deal with. Perhaps social and workplace pressures cause people to opt for it, much like LinkedIn is a cesspool of bullshit.
I'm dealing with the same nonsense. I get LLM-generated reviews of my work, documents, and plans which are not grounded in reality or nuance. Regularly have to explain why the AI is wrong. I was told I should run my docs through the LLM to make them read better. But they're not even being read by humans at this point.
This is one of my fears with this: losing one's voice. Everyone's expression distilled to the mean. This also has ramifications for things like recognizing whether a person is who they say they are. At least currently, sounding like an LLM is punished/shunned, but it's well within reason to see that shift to individuality being penalized.
I think corporations will start penalizing first, they're already doing that to some extent at my work because they want their in-house agents to only review our PRs.
Guilty as charged. In my mind, when I'm insecure about a response or don't have enough expertise in the topic at hand, I end up running it through an LLM. Lately I've been trying harder to keep my original ideas as much as possible. I'm seeing a bit of an improvement, but it's still early to tell.
"running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it? Checking against an LLM then using your own voice feels completely fine, just another type of validation before you share something. But if you actually let the LLM rewrite what you say, then I feel like that's beyond "running it through an LLM"; it's basically letting the LLM write your text for you instead of just checking/validating.
The decline of writing is something that's been going on for a long time. Well-written and grammatically correct emails have been on the downturn. Consider how often people send emails in all lowercase, lacking punctuation, or even without any sentence structure.
The "you need to write in a more professional business oriented way" is something that a lot of people are having difficulty with. Yes, this needs to be addressed earlier in someone's education more forcefully - but the SMSification of long form text started a while ago.
With that said, there's the "Ok, you need to write long form with correct grammar when sending an email that a director or VP is CC'ed on" problem. It used to be Grammarly as the "install this and have it fix up your grammar and tone" tool ( https://web.archive.org/web/20191104093353/https://www.gramm... ; GPT-1 timeframe there). However, today's LLMs seem to be more accessible than Grammarly while doing largely the same thing: fixing up and refining tone.
What I don't see from back then is people decrying Grammarly saying that it's making everything sound the same.
I'm also not sure if I would prefer the pre-fixup emails to what is produced by an LLM unless sending coworkers to remedial writing classes is something that is acceptable.
Yes, checking and validation is one thing, but there are several engineers in my area who only communicate via agent copy-paste. I challenged one fellow about that and he was furious!
> "running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it?
The article seems to imply this is what is happening, as writing style converges towards the LLM's style. You can call it what you want, but the important bit is that this appears to be how LLMs are being used.
> Checking against an LLM then using your own voice feels completely fine
Why use an LLM? If you're worried about style, starting with your own voice is more efficient. If you're worried about facts, looking something up in a primary source is best, and is probably cheaper on a few axes, especially if you need to check/validate anyway...
That is the complete opposite of how I use LLMs. If I do not have expertise, I ignore the LLM and search for a more trustworthy resource. LLMs lie very confidently, and if I do not have expertise, they will lie to me.
When I do have expertise, I use them because I am able to check.
AI doesn't have to be conscious or sentient to take over, all that needs to happen is for politicians, law enforcement, journalists, educators etc. to uncritically parrot everything it outputs. The military is already using AI to make targeting decisions. If they just go with whatever the AI says to strike, then AI is already fighting our wars.
I'm fine with using LLMs as coding tools. But I find it deeply offensive when someone is very explicitly using them to communicate with me.
Communication is such a deeply human experience. It lets people feel each other out, and learn things beyond just the words being said. To have that filtered out by an LLM is just disgraceful.
I was talking to managers and they were discussing how they'd use AI to write reviews of their employees, to which I said I would not want a review that isn't genuine or personal.
I think you're gonna struggle to find companies that aren't infested with this kind of thing.
Observing the effect of LLMs on the "business side" of things, I'm increasingly thinking of these as a kind of infection against which the MBA set and their acolytes have no immune response, and I think it's going to eat a large proportion of the benefit of LLMs to most businesses (possibly overwhelming it and actually harming productivity, will depend on how much better these tools get).
LLMs are awesome at bloating your slide decks while making them really slick and complete-looking. They're great at suggesting an entire set of features on a ticket you've just barely started writing ...but did you actually want all those? You end up with redundant or in-context-gibberish features that leave the person actually doing the work tracking down WTF actually matters. They are adding overhead to communication, so far, not just by puffing up and padding language (which isn't great either) but by adding noise "content" that can't be stripped out without talking to the person who created it and making sure that was actually just AI bullshit and not something they actually needed; that is, you can't just do the "LLM, summarize this" trick, because the author used an LLM to plan it, too, not just to pad-out and gussy-up something they actually thought through and wrote.
LLMs are letting people present very convincingly as having a more-complete understanding of what's going on than they really do in ways that are messing up productive work, I'm not sure business-folks are going to be generally capable of tamping this down because it is so in-line with the way they already operate (but on speed), and helps them so very much to look good to one another while saving tons of time. This isn't just the MBA set I accuse above, either, I'm noticing that this improbably-complete deck communication upward is becoming necessary to look competent (and to ladder-climb) as an IC.
Like, I'm only starting to think this through and really observing what's going on through this lens as I've only noticed it in the last few weeks, but the more I see the more alarming this is. I think this is going to be a little like the largely-wasteful "legibility" obsession of upper management, something enabled by computerization that they find irresistible and are pretty bad at employing judiciously and effectively, but probably a lot worse in terms of harm-to-productivity, and directly affecting and changing the behavior of far more layers of an organization. They never (businesses as a whole, to anthropomorphize a bit) gained wisdom with their new powers to burn resources chasing legibility, and this is starting to look like another thing they just will not be able to use (internally! I don't even mean for actually producing external-facing results!) with restraint and taste.
I reckon you've hit the nail on the head, and if you haven't already, you should write your thoughts into a blog post. It is great to read someone's ponderings about the state of the industry and corporate uptake of LLMs.
I'm perfectly content to post my poorly spelt words and grammatical errors to authenticate myself. But I know everyone is probably using the AI filter now.
Why don't we just do AI-bot-to-AI-bot communications for everything? I'm kidding; I would not like that.
Please keep posting updates about this because if I could instantly fire up a game in my browser, I would definitely pay for that and play with it all day!
Can I ask what prerequisite mathematics you would need to know before reading those? I'm really interested in that topic and better understanding functional programming.
If you wish to approach Category Theory from the viewpoint of a programmer, not a mathematician, I suggest Bartosz Milewski's book Category Theory for Programmers. For this, all you need is some previous programming experience. He uses C++ and Haskell iirc but as long as you can read snippets of code, you'll be fine.
I am suggesting this since you said you want to better understand functional programming. Category Theory, as mathematicians look at it, is an extremely abstract field. If you want to do pure math related stuff in Category Theory, and only then, I would say important prereqs are Abstract Algebra and Topology. I believe the motivation for Category theory lies in Algebraic Geometry and Algebraic Topology, but you definitely don't need to be an expert on these to learn it.
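To give a flavor of what "category theory from a programmer's viewpoint" looks like in practice, here's a minimal sketch in Haskell (one of the two languages Milewski uses) of a functor, a structure-preserving mapping. The `Option` type is just a hypothetical stand-in for the standard `Maybe`, defined from scratch for illustration:

```haskell
-- An Option type (a stand-in for Haskell's built-in Maybe),
-- made an instance of Functor: a mapping that lifts functions
-- on values into functions on Options.
data Option a = None | Some a deriving (Show, Eq)

instance Functor Option where
  fmap _ None     = None
  fmap f (Some x) = Some (f x)

main :: IO ()
main = do
  -- Functor law 1: mapping the identity changes nothing.
  print (fmap id (Some 3) == Some 3)
  -- Functor law 2: mapping a composition equals composing the maps.
  print (fmap ((+1) . (*2)) (Some 3) == (fmap (+1) . fmap (*2)) (Some 3))
```

The point the book makes with examples like this is that the functor laws aren't arbitrary rules; they are exactly what "preserving structure between categories" means, specialized to types and functions.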
Yes, I remember reading one of those SSD articles about MLCs, and it was so well written: quality knowledge captured on the internet. I hope someone starts up another AnandTech-like website.