A great example of how current alignment is imperfect and bound to miss random behaviors nobody is trying to get.
This is cute now, and a huge problem when future AI does everything and is responsible for problems it isn't even directly optimized for. Who knows what quirks would arise then.
I think eventually you are going to end up with every smart AI continually checked by dumber AIs to make sure it doesn't do anything too crazy. Which probably does bring AI closer to how human intelligence works.
Completely agree. Top-down “alignment” and RLHF is actually quite primitive and uses a lot of fancy words to describe what is essentially just hitting the machine with a stick, without the nuance, context, or feedback to help it model why the feedback was given.
Also, to be honest, I think OpenAI models struggle a lot with this. I mostly stopped using them in the sycophancy/emoji era, but ever since, the way they talk, or passive-aggressively offer to do something with buzzwords, just pisses me off. It’s like I’m constantly being negged by a robot because some SFT run optimized for that so strongly the model can’t even hold a coherent conversation, and this gets called “AI safety” when it’s just haphazard data labeling.
I think LeCun has been so consistently wrong and boneheaded for basically all of the AI boom that this is much, much more likely to be bad than good for Europe. Of the people in the field who could even raise that much money, he is probably one of the worst to give it to.
LeCun was stubbornly 'wrong and boneheaded' in the 80s, but turned out to be right. His contention now is that LLMs don't truly understand the physical world - I don't think we know enough yet to say whether he is wrong.
He said that LLMs wouldn't have common sense about how the real world physically works, because it's so obvious to humans that we don't bother putting it into text. Honestly, this seems pretty foolish given the scale of internet data, and even at the time LLMs could handle the example he said they couldn't.
I believe he didn't think that reasoning/CoT would work well or scale like it has.
Nobody among people remotely worth listening to. There are always people deeply wrong about things, but "over 70 years away" at this point is a pretty insane position unless you have a great reason, like expecting Taiwan to get bombed tomorrow and progress to slow to a crawl.
Probabilities have increased, but it's still not a certainty. It may turn out that stumbling across LLMs as a mimicry of human intelligence was a fluke and the confluence of remaining discoveries and advancements required to produce real AGI won't fall into place for many, many years to come, especially if some major event (catastrophic world war, systematic environmental collapse, etc) occurs and brings the engine of technological progress to a crawl for 3-5 decades.
"100% of AI researchers think we will have AGI this century" isnt the same as "100% of AI researchers think theres a 100% chance that we will have AGI this century"
I think the only people that don't think we're going to see AGI within the next 70 years are people that believe consciousness involves "magic". That is, some sort of mystical or quantum component that is, by definition, out of our reach.
The rest of us believe that the human brain is pretty much just a meat computer that differs from lower life forms mostly quantitatively. If that's the case, then there really isn't much reason to believe we can't do exactly what nature did and just keep scaling shit up until it's smart.
I don't think there's "magic" exactly, but I do believe that there's a high chance that the missing elements will be found in places that are non-intuitive and beyond the scope of current research focus.
The reason is that this has generally been how major discoveries have worked. Science and technology as a whole advance more rapidly when R&D funding is higher across the board and funding profiles are less spiky and more even. Diminishing returns accumulate pretty quickly with intense focus.
Sufficiently advanced science is no different than magic. Religion could be directionally correct, if off on the specifics.
I think there’s a good bit of hubris in assuming we even have the capacity to understand everything. Not to say we can’t achieve AGI, but we’re listening to a salesman tell us what the future holds.
Yes, this sounds made-up/not Ryanair. I've used them for over a decade, paid with many different cards and have never encountered this with them (nor anywhere ever really).
How is it internal or speculative? ChatGPT is the 5th most popular website. Gemini is 30th, but demand for it is increasing, and a ton of it isn't on the Gemini main site. And that isn't their only external demand, of course.
I think they are referring to the fact that Google has shimmed AI into every one of their products, so the demand surge is the byproduct of decisions made internally. They are themselves electing to send billions of calls daily to their models.
As opposed to external demand, where vastly more compute is needed just to keep up with users torching through Gemini tokens.
Here is the relevant part of the article:
"It’s unclear how much of this “demand” Google mentioned represents organic user interest in AI capabilities versus the company integrating AI features into existing services like Search, Gmail, and Workspace."
ChatGPT being the #5 website in the world is still indicative of consumer demand, as their only product is AI. Without commenting on the Google shims specifically, AI infrastructure buildouts are not speculative.
You don't think it's plausible that Google's need to 1000x infrastructure has a lot to do with their very liberal incorporation of AI across the entire product suite?
I don't really care either way what the source of the demand is -- but it seems like an uncontroversial take.
> “rational adult of sound mind”

And “rational” there easily disqualifies every human being on the planet, with all our evolved biases, heuristics, and common predictable misjudgments.
If only they had someone deeply familiar with the field who had been there.
Exactly this, though with more than just two categories. You find more content than ever optimized for the 60-second category, that's true, and you do get longform silos - but those include one silo of channels that clock in around 10 minutes, as well as another for hour-plus podcasts.
The main new takeaway is that the shortform category is bigger and more important than previously imagined, but hardly the sole winner.
>OpenAI is losing a brutal amount of money, possibly on every API request you make to them as they might be offering those at a loss (some sort of "platform play", as business dudes might call it, assuming they'll be able to lock in as many API consumers as possible before becoming profitable).
I believe if you take out training costs they aren't losing money on each call in isolation, though it depends on which model we are talking about. Do you have a source/estimate?
For better or worse, OpenAI removing the capped structure and turning the nonprofit from AGI considerations to just philanthropy feels like the shedding of the last remnants of sanctity.