Well, if you're asking whether Apple execs use that setting, the answer is probably that they don't.
I think the issue is that there are SO many piled up little features everywhere that SOMEone is using that keeping everything working while making any changes at all is very difficult.
I am a fan of more wood behind fewer swings. Don't add something like spaces unless you think you've got something so good that you are confident that it will be the common path.
This kind of thing must be SO frustrating to people struggling to get by in the world. "We gave AI $100k that it will almost certainly squander, yolo!! Hopefully it doesn't abuse people too badly in the process."
I… guess the bet is that what they learn is worth $100k? Seems rather questionable. Or that having this on the resume is a great shock tactic that will open doors in the future?
And at the same time, they clearly have no idea how LLMs work, meaning even if they meant to, they can't really use them efficiently. The biggest issue that stuck out seems to be that they think the LLM could somehow have an inner dialogue with itself to find out its "reasoning and motivation":
> The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.‘” In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.
> In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
Well, it really depends on what you mean here. Models aren't 100% deterministic; there's random chance involved. Ask the exact same question twice and you'll get two slightly different answers.
If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.
At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
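As a toy sketch of the idea (nothing to do with any real model's internals; the class name, weighting scheme, and options are all made up for illustration), persisted random choices biasing future choices could look like this:

```python
import random
from collections import Counter

class TastefulAgent:
    """Toy agent: random picks are recorded, and the record biases future
    picks, so early coin flips harden into a persistent 'taste'."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.history = Counter()  # persisted record of past choices

    def choose(self, options):
        # Base weights are uniform (a stand-in for 'collective human taste');
        # each recorded past pick adds weight, so choices self-reinforce.
        weights = [1 + self.history[o] for o in options]
        pick = self.rng.choices(options, weights=weights, k=1)[0]
        self.history[pick] += 1
        return pick

agent = TastefulAgent(seed=42)
styles = ["minimalist", "rustic", "maximalist"]
for _ in range(20):
    agent.choose(styles)

# The early random picks tend to snowball into a dominant preference:
print(agent.history.most_common(1))
```

The first few picks are essentially arbitrary, but because they're recorded and fed back in as weights, the agent ends up with stable, reproducible preferences, which is roughly the "is that taste?" question.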
Where do you get the idea that you have a good sense of the introspective capabilities of frontier models? Certainly not from interpretability research. Ironically, the people who make these sorts of comments understand LLMs the least.
What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?
I've seen a bunch of experimentation looking at various things inside the black box while inference is happening, but I've never seen any research pointing to tokens being able to explain why other tokens are there. I'd be very happy to be educated here if you have any resources at hand; I won't claim to know everything.
>What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?
What research shows that you can ask a human to explain their reasoning and why they said what they said, and that's guaranteed to actually be the motivation? Because there's no such thing. If anything, what research exists suggests any explanation we give is a tidy post-hoc rationalization, even if the human thinks otherwise.
I did answer it, albeit not directly. "Guaranteed to be the motivation" isn't a standard anyone can meet, so framing it that way doesn't really probe anything meaningful about LLMs specifically. If what you want to hear is no, then sure, have your no, but it doesn't mean anything. There's just not much to the question.
Even though you framed your position as one borne of a greater understanding of LLMs, the interpretability research we have so far, and our current very limited understanding of the internal computations of these models, do not support it, and certainly not how assured you are about it.
> our current very limited understanding of the internal computations of these models does not support your position
Our current understanding is sufficient to know that you cannot ask the LLM to explain its behavior and have it correctly do so. I'm not sure what research you've read to believe this could be possible in the first place, but I'm happy to receive links to read through, if you're sitting on them.
The choice to refer to it as "she" is also dubious, especially in a context like this. Doubling down on anthropomorphization seems likely to reinforce false beliefs about models.
> The biggest issue that stuck out seems to be that they think the LLM could somehow have an inner dialogue with itself to find out its "reasoning and motivation":
> I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.
It's a fetishistic cargo-cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, the majority signal is noise.
It does fit a pattern where the general tone on HN has gone from "AI is going to eat the world of retail jobs and people like us are going to be the biggest beneficiaries" to "turns out that turning JIRA tickets into syntax which compiles might actually be something LLMs are better suited to than upselling fries and wiping tables" :)
> CEO
When things go shitty, who else would deserve a golden parachute?
Respect the position, people, not the person.
Or the multi-million dollar compensation.
The position doesn't get a golden parachute, the person does. If you're CEO when things go shitty you shouldn't get anything more than your bottom-line employee would, which is to say you should just be unceremoniously kicked to the curb.
You need a good CEO when things are going bad, because without one they'll go even worse. You still want to make payroll and can't just randomly fire people.
(Also, if you own a failed company you're responsible for cleanup tasks for years afterward.)
>You still want to make payroll and can't just randomly fire people.
In the US you can.
>Also, if you own a failed company you're responsible for cleanup tasks for years afterward.
But we're talking about golden parachutes, where a CEO screws up the company and gets fired with a multi-million dollar payout. This is Hacker News, and the pro-business narrative is strong here, but in reality CEOs rarely suffer any meaningful risk or consequence for failure (unless it involves jail time, and even then they aren't doing hard time); they just wind up slightly less rich than when they succeed.
I don't care how good a CEO is, that isn't justifiable. Certainly not in a country where people can get laid off with an email and lose their access to healthcare on the whim of anyone above them in the power hierarchy.
Depends on the state I think. It's not Europe or Japan level.
At my employer it's very difficult to fire people for performance reasons even if as a manager you might want to.
> This is Hacker News, and the pro-business narrative is strong here,
I haven't seen such a narrative in years. Interest rates are too high to do startups unless it's AI after all. HN is mostly the same folk economics content as other forums, where all problems in the world are caused by "profits" accruing to "corporations".
(Mostly problems are caused by other things than that.)
Are you kidding me? Who’s going to align synergy and hold accountable KPIs and vision plan the 3rd quarter and.. and.. other MBA talk. Certainly AI could never.
I'm noticing one major early effect of them is making extensive, visually consistent, very impressive slide decks accessible to individual workers who need to actually do real work and wouldn't ordinarily have time to make those.
The result is an explosion of pretty bullshit-heavy documents flying around our org, which management loves but which is definitely, so far, net-harmful to productivity.
This comes out if you start asking questions about the documents. "Which of a couple reasonable senses of [term] do you mean, here?" they'll stumble because that was just something the LLM pulled out of the probability-cluster they'd steered it to and they left in because it seemed right-ish, not because they'd actually thought about it and put it there on purpose. They're basically reading it for the first time right alongside you, LOL. Wonderful. So LLM. Much productivity. Wow.
Anyway, since a lot of what managers and execs do is making those kinds of diagrams and tables and such in slide decks, and their own self-marketing within the company is heavily tied to those, I expect they see this great aid to selfishly productive but company un-productive activity as a sign these things will be at least as big a boon to real work. Probably why they still haven't figured out how wrong that is. I suppose they're gonna need a real kick in the ass before they figure out that being good at squeezing their couple novel elements into a big, pretty, standardized, custom-styled but standards-conforming diagram padded out with statistical-likelihoods doesn't translate to being similarly good at everything.
My first guess would be a MrBeast style stunt, in which (it is hoped) blowing a huge wad on something obviously stupid will attract enough attention and interest to be convertible into a net-positive ROI.
This seems like a silly thing to worry about. Assuming you live in a first world country and are somewhat tangentially involved in tech(based on the site we're on), odds are you spend a lot of money in ways that billions of the poorest people in the world would consider frivolous or outrageously, needlessly luxurious.
At least this furthers humanity's scientific and technological knowledge, whether it fails or succeeds, unlike most other things people would do with that money, like buying a house to flip it, or buying a car, or something.
Yeah, I mean it's true to an extent, I agree. As scientific research, though, it's not very well thought out. A grant agency would not fund this. There's too much potential for causing harm, and it's not clear what benefit or action we derive from the results. They tried this before with a vending machine; it failed, and apparently all they concluded was "hm, models got better, so maybe we should just try it again". How is that worth anything scientifically?
Re: not my money, true. It's just frustrating even to me to see people do stuff like this, and I'm not struggling to get by. My frustration mostly derives from feeling like I'll get lumped in with techies who have more money than sense. I already deal with enough tech hate in my life.
When people buy a super fancy car they don't (usually) blog about it, and instagram wealth influencers are also frustrating, yes.
That's a fair objection and I often feel like this, too.
On the research aspect, I see this as something pre-Research, yet still science - in a way, it's science at its core: trying something and seeing what happens. Proper Research usually follows once enough ad hoc attempts are made and they seem to show a pattern that's worth setting up a systematic study to verify.
Really it's the same as any other R&D investment in our capitalist system, it just happens to be more visible to the public, with more obvious risks to them. (Outright celebrated, even).
Which is why the comparisons to 19th century textile workers are so common, since that was an equally visible and gleeful displacement.
You're talking about funeral costs; the author generalizes _a_lot_ from funeral costs to "kinship societies are bad". That's the leap the comment you're replying to is discussing.
The factual material about funeral spending costs is very interesting, but when it gets into "Kinship societies are wealth-destroying societies" it seems rather… unsupported? That's a sweeping statement that actually requires understanding the whole picture, and the whole picture is not being presented. Is there reason to think the author truly has all the context to make these claims?
Korea used to have something similar to this phenomenon, although it wasn't for the funeral. When the oldest man (probably the grandfather of a big family) had his 60th birthday, the entire family had to celebrate by throwing a days-long party. It was a family duty for the rest of the family, and it was embedded into the culture so deeply that they wouldn't even consider the alternative of having a small one. Other elders in the local community would say "well done" only when the party was big enough.
After the big celebration, the rest of the family would sit on a massive debt that couldn't be repaid with their earnings for the foreseeable future. The old man dies, and the family lives on with the agony of the debt. This remained the case until Korea became an industrial country and a lot more people started living past 60.
My mom still talks about what it used to look like in those old days.
This is not a novel observation; e.g. Kapuscinski's "The Shadow of the Sun" describes the same phenomenon: it's very difficult to get ahead because anything above bare subsistence is immediately siphoned off by your kin.
On a factual level the relationship between kinship societies and economic headwinds is fairly well documented [1] [2]. The mechanism is the same reason that communist/socialist societies often fail: when wealth belongs to everyone, nobody has either the incentive or the means to accumulate wealth, which prevents capital formation within the society [3].
The part that the article glosses over is that "Kinship societies destroy economic growth" is a Russell conjugate [4] of "economic growth destroys family formation". Kinship networks provide important intangible support to several important community functions, notably child-rearing. That's the whole "it takes a village to raise a child" aphorism. When you allow people to defect on their social obligations in the name of accumulating wealth, then it turns out they do, and the village suffers. It is exactly as the article said: "The kinship network has a strong interest in preventing any of its members from becoming prosperous enough to no longer need it: someone who no longer needs your help is also someone who might not help you." That's exactly what we've observed happening in modern industrialized economies, where people become increasingly atomized and those informal community organizations that create things like belonging and mutual aid (not to mention group childcare and socialization) die off as everyone chases the promotion that will let them afford ever-higher institutional childcare costs.
And this is why the fertility rate in every major industrialized country has cratered, usually right as it industrializes.
>And this is why the fertility rate in every major industrialized country has cratered, usually right as it industrializes.
I'm pretty sure it's actually because industrialization is upstream of the education and supply chains to make hormonal birth control widely available, and being pregnant and giving birth is an incredibly challenging, risky, and frequently unpleasant burden that's only shouldered by half our population.
Why are you acting like the vast majority of the population are capitalists? You're describing the actions of less than 1% of the world's population and treating it as the norm of human history rather than the extreme aberration it is. Not to mention we're living in the corporatist neoliberal dream, a massive hellscape for workers where income inequality is at its highest levels, worse than the Gilded Age, and where your one life is determined by factors the majority of workers can never control, since the system is designed to benefit capitalists at the expense of everyone else.
Why are you assuming capital formation is even beneficial for people? Poor workers in Arkansas do not benefit when Ford sells their crappy wares around the world. Children in Utah aren't getting a better education when Zuckerberg sells more ads.
>Why are you acting like a vast majority of the population are capitalists?
Anyone who has saved money to buy something that makes them more productive is a capitalist. At least for any meaningful definition of the word. It's not 1%, it's some very large minority or even majority.
>Why are you assuming capital formation is even beneficial for people? Poor workers in Arkansas do not benefit when Ford sells their crappy wares around the world.
The guy that squirrels away $20,000 so he can buy a food truck, or hell, $300 for a hot dog cart is a capitalist. Every programmer here that ever bought a new laptop or phone acquired the "means of production" for the jobs they work.
The thing about Marxists is, unfortunately, they're still stuck in the 1850s with him, trying to solve the problems of the 1850s, and refusing to engage with the reality of those of us who don't want to live in the 1850s with them.
It's viewing the situation through the lens of Anglo capitalist opinions.
I found the same thing when working in Cambodia; Khmer culture is very, very, family-oriented, the extended family is the main survival mechanism for Khmer people, and individual wishes are often subordinated to the family. This is their culture, Khmer people are happy with it, this is how they choose to live. The Anglo ex-pats (including me) don't understand it, find it oppressive and have a natural instinct to "liberate" Khmer people from this oppression. Took me quite a while of talking with Khmer people to realise that they look at the world very differently from me, and from that perspective this all works and is a source of joy and comfort for them. Obviously there are outliers and people who this doesn't work for, but that's also true of Anglo culture.
> It's viewing the situation through the lens of Anglo capitalist opinions.
Yes and while I find the article to be quite insightful on the whole, I can't take it seriously as an anthropological study.
There is a strong ethnocentric bias that the author failed to declare / acknowledge, which reduces the credibility of his claims. Also there is little supporting data.
> It's viewing the situation through the lens of Anglo capitalist opinions.
Came here to say this. It's a very narrow perspective that shows in sub headlines like "Kinship societies are wealth-destroying societies".
One could also take the lens of "kinship societies make people's wealth more equal to reduce competition and jealousy, and to increase harmony and happiness" – although I have no data on whether these people are genuinely happier. The article quotes some business-oriented Ghanaians who seem quite unhappy about sharing their wealth. And yet the perspective of individual wealth over group wealth is assumed and never critically reflected upon.
I'm not saying that their way is better or something like that. I just think that reading the article is a good exercise in reflecting on one's own views on life and wealth.
It also assumes a myopic version of wealth. Rich people haaate when poor people do work for each other for free, because there is no opportunity to add a middleman.
There's a ton of sanitization of attachments. It just isn't foolproof.
On iOS, Messages attachments are decoded in a separate, heavily restricted, sandboxed process, and the decoded, sanitized results are sent back to the UI process. It just isn't perfect.
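The general pattern (this is a toy sketch, not Apple's actual implementation, and the "decoding" step here is a stand-in for a real format parser) is to decode untrusted bytes in a throwaway child process and let the parent trust only the sanitized re-encoding:

```python
import base64
import json
import subprocess
import sys

# Child script: decode the untrusted payload in isolation and emit a
# sanitized re-encoding. If a malicious payload crashes the decoder,
# only the disposable child process dies.
CHILD = r"""
import base64, json, sys
raw = base64.b64decode(sys.stdin.read())
# 'Decoding' here is just UTF-8 with replacement; a real pipeline would
# parse the image/video format and re-render it from scratch.
text = raw.decode("utf-8", errors="replace")
print(json.dumps({"sanitized": text}))
"""

def sanitize(untrusted: bytes, timeout: float = 5.0):
    """Run the decoder in a child process; return sanitized text or None."""
    proc = subprocess.run(
        [sys.executable, "-c", CHILD],
        input=base64.b64encode(untrusted).decode("ascii"),
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if proc.returncode != 0:
        return None  # decoder died: drop the attachment, UI never sees it
    return json.loads(proc.stdout)["sanitized"]

print(sanitize(b"hello \xff world"))
```

A real sandbox additionally strips the child's privileges (no filesystem, no network), which is where most of the hardening, and most of the remaining attack surface, lives.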
I've done some work in this sort of area before, though not literally on a malloc. Yes you very much want to be careful, but ultimately it's the tests that give you confidence. Pound the heck out of it in multithreaded contexts and test for consistency.
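As a sketch of what "pound it and check consistency" means (a toy lock-based arena, not a real allocator; the class and numbers are invented for illustration): many threads hammer alloc/free randomly, then you assert the bookkeeping invariants at the end.

```python
import random
import threading

class Arena:
    """Toy fixed-size allocator guarded by a lock. The point is the
    stress-test pattern, not the allocator itself."""

    NBLOCKS = 64

    def __init__(self):
        self.lock = threading.Lock()
        self.free = list(range(self.NBLOCKS))
        self.used = set()

    def alloc(self):
        with self.lock:
            if not self.free:
                return None
            block = self.free.pop()
            self.used.add(block)
            return block

    def dealloc(self, block):
        with self.lock:
            self.used.remove(block)
            self.free.append(block)

def hammer(arena, iters, rng):
    """Randomly interleave allocs and frees, releasing everything at the end."""
    held = []
    for _ in range(iters):
        if held and rng.random() < 0.5:
            arena.dealloc(held.pop())
        else:
            block = arena.alloc()
            if block is not None:
                held.append(block)
    for block in held:
        arena.dealloc(block)

arena = Arena()
threads = [
    threading.Thread(target=hammer, args=(arena, 10_000, random.Random(i)))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Consistency check: nothing leaked, nothing double-freed,
# every block accounted for exactly once.
assert not arena.used
assert sorted(arena.free) == list(range(Arena.NBLOCKS))
```

For a real malloc you'd do the same in C with many more threads and iterations, plus poison patterns in freed blocks and checks that no two live allocations overlap; sanitizers (TSan, ASan) catch the races the asserts miss.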
> ...but ultimately it's the tests that give you confidence. Pound the heck out of it in multithreaded contexts and test for consistency.
I don't think so.
Even with LLM-generated code, tests are still not enough and you cannot trust them blindly. The code can pass the tests and still cause a regression while looking seemingly correct, as in this case study [0].
AI is more than happy to declare the test wrong and “fix it” if you’re not careful. And the cherry on top is that sometimes the test could be wrong or need updating due to changed behavior. So…
I wonder if there's a startup out there selling AI-generated comments for astroturfing HN, Reddit, et al. And then I wonder if that startup is a YCombinator company...