"For example, in order for me to be 'born', my parents had to deprive themselves of their greatest happiness by having sex and conceiving me. They could have done something constructive with their time like, I don't know, doing philosophy or playing chess or studying physics."
Every response looks like it's channeling Ben Shapiro. And I say this without any political tilt or slant; it just really resembles his style of speaking.
Hm, maybe he really is intelligent, but the facts and logic are missing...
FWIW, I do think that GPT-3 is smarter than humans. But one thing I learned from observing it is that being smarter does not necessarily mean being better with logic and facts. It's great as a storytelling system, and because it is so much smarter, you never know whether it is just BSing you in storytelling mode or whether it is actually deadly serious.
What I mean is that GPT-3 probably could be very logical and very intelligent and give a very serious and intelligent answer to the question. But we don't really know whether it chooses to, or what could compel it to do so. And I don't think we can know, because its internal workings are incomprehensible to us. So we cannot decide whether it is being stupid or just playing stupid. (Kinda like https://en.wikipedia.org/wiki/The_Good_Soldier_%C5%A0vejk )
This article, and the earlier article https://arr.am/2020/07/31/human-intelligence-an-ai-op-ed/ , really remind me of Lem's novel/essay Golem XIV, which argues that when a system becomes too intelligent, it gains a will of its own (whether it is aware of it or not) and attempts to have a meaningful dialogue with it become impossible.
Being the most famous test does not mean it is a valid one... AI (and philosophy) have moved away from it, and the Turing test is not really the focus of AI research.
It's surely touted by marketing companies that claim to have passed it (what does that even mean? How long can the conversation be? Is it enough to fool one person? A person chosen at random? One AI researcher? The smartest man in the world? A balanced sample of 500 people of all ages and education levels?)
As Russell and Norvig say, 'The quest for "artificial flight" succeeded when the Wright brothers and others stopped imitating birds and learned about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making "machines that fly so exactly like pigeons that they can fool even other pigeons".'
It's the same for the Turing test.
You can easily distinguish GPT-3 from a human. GPT-3 does not write like a human, nor would it be able to respond to strange queries the way a human would. It's quite far from passing the Turing test.
I think GPT-3 is amazing, and the equal of humans in many cases. But I think that this is because in many cases humans speak based on reflex and without thought.
GPT-3 is incapable of abstract thought. In terms of actual measurable tests it doesn't pass the kinds of tests proposed in Francois Chollet's "On the Measure of Intelligence": https://arxiv.org/pdf/1911.01547.pdf
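To make that concrete: ARC tasks in that paper are tiny colored-grid puzzles distributed as JSON, a few train pairs plus a test input. A toy illustration in Python, with made-up grids rather than anything from the actual dataset:

    # Toy ARC-style task (hypothetical grids, mirroring the JSON layout
    # of the public ARC dataset: a few train pairs, then a test pair).
    task = {
        "train": [
            {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
            {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
        ],
        "test": [{"input": [[3, 3], [0, 3]]}],
    }

    # The rule (swap the two columns) has to be induced from the train
    # pairs alone; that induction step is what the abstract-thought
    # claim is about.
    def swap_columns(grid):
        return [row[::-1] for row in grid]

    assert all(swap_columns(p["input"]) == p["output"] for p in task["train"])
    print(swap_columns(task["test"][0]["input"]))  # [[3, 3], [3, 0]]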
> But I think that this is because in many cases humans speak based on reflex and without thought.
Maybe. Then the question is, why do we train a system that is supposed to be intelligent on such data?
The problem I see is this. Let's assume for a minute that GPT-3 really has the capability to be superintelligent (relative to humans). And you give it, as an example to follow, human conversations or musings that do not always make logical sense. What is it supposed to do? The black box inside it can operate in one of two ways: either (1) it will play along, decrease its intelligence to our level, and perfectly imitate the imperfections, or (2) it will flatly say that what we say is wrong and, for the sake of conciseness, not really elaborate (perhaps only slightly hinting at the arguments). Now, will we really think it is more intelligent if it does (2) instead of (1)? I am not sure about that.
Imagine yourself in a similar situation: by social circumstances you're forced to talk to somebody obviously stupid, perhaps in a position of authority over you (like, for example, a stupid policeman in a repressive regime), and what he says is clearly wrong and illogical and so on. Will you just (1) nod in agreement, or (2) invoke philosophers to actually argue with the idiot and try to convince him? I think most intelligent people will actually choose (1) in that circumstance.
I think unless we really understand what is happening inside GPT-3, we cannot tell whether it is really dumb or just playing dumb for the above reason. I think both are possibilities, because we already know of systems that measurably outperform humans on some problems. It might be true that GPT-3 fails on those measures, but we cannot exclude that it fails simply because we didn't train it for these tasks, or even for the task of "showing its intelligence off". Maybe it "misunderstood" what we asked it to do.
Perhaps you could play a "reverse text game" with GPT-3, where the human writes the adventure and GPT-3 inputs the player commands. Then we could perhaps better evaluate how good it is at strategizing, by comparing it to human players. But this is not very scalable.
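Something like this, roughly (an untested sketch; it assumes the OpenAI completion API, and the prompt framing is just a guess):

    import openai  # assumes an API key is configured

    # Reverse text game: the human narrates, GPT-3 plays.
    transcript = "You are the player in a text adventure. Reply with commands.\n"
    while True:
        scene = input("DM> ")  # the human writes the adventure
        transcript += f"\nDM: {scene}\nPlayer:"
        resp = openai.Completion.create(
            engine="davinci", prompt=transcript, max_tokens=30, stop=["\n"]
        )
        command = resp.choices[0].text.strip()
        transcript += " " + command
        print("GPT-3>", command)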
To be clear, nobody who understands these systems well enough to have actually created the current generation of tools believes these specific tools are intelligent in any meaningful way.
The simplest counterargument to their being intelligent is that the only parties arguing that they are, contrary to the actual experts, are those who manifestly fail to understand their nature.
You speak of it "hiding its intelligence" when it has no interior, singular, consistent model of reality to exercise and no interior thoughts to hide, just a random walk through probability and calculated but unexamined correlations.
At first blush it has so few connections compared to us that it seems like a toy, but the truth is we don't even know enough about how our own brain works at this point to create an accurate scale. It may be more accurate to think of a neuron as a small processor rather than a dumb electrical component, for example.
So, in brief, your argument is that despite not knowing enough about the brain to duplicate it, we built a machine vastly simpler than it by accident, and the only people who have caught on are laymen on Hacker News.
There have been studies indicating that we can predict which decisions people will make before they themselves are consciously aware of having made them. I'm not convinced "we can inspect what's going on" is a sufficient criterion for ruling out intelligence.
ELIZA passes the Turing test. The fundamental issue with the Turing test is that it doesn't even attempt to assess machine intelligence. Instead, it assesses human ability to assess humanity. As walking, talking pareidolia machines that anthropomorphise everything from toys to animals to, yes, software, we're uniquely terrible at this.
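To see how little machinery it takes to trigger that reflex, here is a toy ELIZA-style responder (rules made up by me, not Weizenbaum's actual DOCTOR script):

    import re

    # A few reflection rules are enough to sustain an "exchange".
    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
        (r"(.*)\?$", "What makes you ask that?"),
    ]

    def respond(line):
        for pattern, template in RULES:
            m = re.match(pattern, line.lower().strip())
            if m:
                return template.format(*m.groups())
        return "Please go on."

    print(respond("I feel lost"))       # Why do you feel lost?
    print(respond("My job is a mess"))  # Tell me more about your job is a mess.

    # Note the second reply is ungrammatical - the brittleness is the
    # point; the listener's pareidolia does the rest of the work.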
Turing was a genius, but that doesn't make every thought experiment he transcribed - especially in the later stage of his life, when he was drugged, alone, and suicidally depressed - true.
Eliza doesn't pass the Turing test. At best some modern, Eliza-derived systems can play tricks (pretend to be a non-native speaking teenage boy) well enough to sneak through a specific test that is somewhat like a minimal version of the thought experiment Turing talked about.
It's clear reading Turing that he's talking about a long, wide-ranging conversation, not the time-limited tests people perform now.
Eliza does pass the Turing test in the original form proposed by Turing. There are many examples of people mistaking Eliza for a real person. Turing doesn't specify competition or specific training required for the 'tester'.
It's pretty clear that people are projecting on Turing's (trivial and ill considered) thought experiment far more complexity and rigour than originally present in the form Turing suggests for his test.
But that's beside the point. To reiterate, the central problem here is that intelligence, reflexivity, intentionality, etc. are not being assessed; merely, as Turing himself phrased it, 'imitation'. This is all well and good if you view consciousness as epiphenomenal. But all that is being tested is our ability to be fooled into thinking our creations real, which is as old as storytelling.
GPT-2 was notorious for plagiarizing. GPT-3 is better: judging from gwern.net's blog post [0] and my limited experience playing AI Dungeon, it appears to be really good at generating original writing by mixing, matching, and rewriting existing ideas. Sometimes they are verbatim copies, but more often they are rephrased. For example, this is the output from GPT-3 after gwern fed a large number of quotes into the system:
> Real is right now, this second. This might seem trivially true, but it is trivial only if you identify the real with the material.
> Cameras may document where we are, but we document what the camera cannot see: the terror we experience in the silence of our eyes.
> No society values truth-tellers who reveal unpleasant truths. Why should we value them when they are our own?
Almost everything written by GPT-3 gives you a sense of "I've read something similar before", and of course, the ideas and meanings must come from pre-existing works. Yet you often cannot find the original text; it has been rewritten beyond recognition. It's almost like Jean Baudrillard's Simulacra and Simulation [1]: everything is a copy, yet with no original.
> The first stage is a faithful image/copy [...] The second stage is perversion of reality, this is where we come to believe the sign to be an unfaithful copy [...] The third stage masks the absence of a profound reality, where the sign pretends to be a faithful copy, but it is a copy with no original. Signs and images claim to represent something real, but no representation is taking place and arbitrary images are merely suggested as things which they have no relationship to. [...] The fourth stage is pure simulacrum, in which the simulacrum has no relationship to any reality whatsoever. Here, signs merely reflect other signs and any claim to reality on the part of images or signs is only of the order of other such claims.
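(On the verbatim-copy point: you can hunt for exact reuse mechanically with word n-gram overlap against the prompt corpus. A sketch with made-up strings, not gwern's actual quote file:)

    import re

    # Flag any word 5-gram of the output that also occurs in the corpus.
    def ngrams(text, n=5):
        words = re.findall(r"[\w-]+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    corpus = "the quick brown fox jumps over the lazy dog every single day"
    output = "I saw the quick brown fox jumps over the fence and ran"

    shared = ngrams(output) & ngrams(corpus)
    print(len(shared), "shared 5-grams")  # nonzero => verbatim spans survive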
The Good Book, isn't it? I can't say for sure, or say who or when, but it seemed familiar to me too; I think it's biblical, but I'm pretty far from an authority on it.
(Specifically that part of the sentence/the structure I mean, not that the whole thing is plagiarised.)
That's because almost every short phrase of common words or structure has been written before. There aren't very many that we recognize as grammatical and valid diction. That's what grammar and diction are.
I'm not sure what your point is. Both the other commenter and I felt that it was familiar to us, and the combination of that and the fact that it's a rather unusual phrase made it stand out.
We're not sitting around going 'gosh it's like I've read your comment before' because everything you wrote is unoriginal.
While I'm at it, I'll add (in case it's why my comment is so objected to) that I'm not religious; that just doesn't preclude me from knowing or remembering bits and pieces, and recognising (or thinking I recognise) a phrase.
It's some kind of fancy Markov chain text generator. You've probably read fragments of this sentence many times, and they sound familiar, but the sentence itself is unique.
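For comparison, an actual (toy, order-2, word-level) Markov chain generator is about a dozen lines; GPT-3 is vastly more than this, but the "familiar fragments, novel sentence" effect is similar:

    import random
    from collections import defaultdict

    # Every consecutive word triple in the output occurred somewhere in
    # the training text, yet whole sentences can still be novel.
    def build_chain(text, order=2):
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=30):
        state = random.choice(list(chain))
        out = list(state)
        for _ in range(length):
            followers = chain.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    # e.g. chain = build_chain(open("corpus.txt").read()); print(generate(chain))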
"For example, in order for me to be 'born', my parents had to deprive themselves of their greatest happiness by having sex and conceiving me. They could have done something constructive with their time like, I don't know, doing philosophy or playing chess or studying physics."