One common kind of interaction I have with ChatGPT (Pro):
1. I ask for something.
2. ChatGPT suggests something that doesn't actually fulfill my request.
3. I tell it how its suggestion does not satisfy my request.
4. It gives me the same suggestion as before, or a similar suggestion with the same issue.
ChatGPT is pretty bad at "don't keep doing the thing I literally just asked you not to do," but most humans are pretty good at that, assuming they are reasonable and cooperative.
> ChatGPT is pretty bad at "don't keep doing the thing I literally just asked you not to do" but most humans are pretty good at that.
Most humans are terrible at that. Most humans don't study for tests, fail, and don't see the connection. Most humans will ignore rules for their safety and get injured. Most humans, when given a task at work, will half-ass it and not make progress without constant monitoring.
If you only hang out with genius SWEs in San Francisco, sure, ChatGPT isn't at AGI. But the typical person has been surpassed by ChatGPT already.
I'd go so far as to say the typical programmer has been surpassed by AI.
My example is asking for way less than what you're asking for.
Here is something I do not see with reasonable humans who are cooperative:
Me: "hey friend with whom I have plans to get dinner, what are you thinking of eating?"
Friend: "fried chicken?"
Me: "I'm vegetarian"
Friend: "steak?"
Note that this is in the context of four turns of a single conversation. I don't expect people to remember stuff across conversations or to change their habits or personalities.
> Here is something I do not see with reasonable humans who are cooperative: Me: "hey friend with whom I have plans to get dinner, what are you thinking of eating?" Friend: "fried chicken?" Me: "I'm vegetarian" Friend: "steak?"
Go join a dating app as a woman, put vegan in your profile, and see what restaurants people suggest. Could be interesting.
I get your point, which is that only the worst humans would suggest a steak place after you've stated you're vegetarian, and that ChatGPT does so as well.
I'm disagreeing and saying there's far more people in that bucket than you believe.
I know many people at my university that struggle to read more than two sentences at a time. They'll ask me for help on their assignments and get confused if I write a full paragraph explaining a tricky concept.
That person has a context length of two sentences and would, if encountering a word they didn't know like "vegetarian", ignore it and suggest a steak place.
These are all people in Computer Engineering. They attend a median school and picked SWE because writing buggy & boilerplate CRUD apps pays C$60k a year at a big bank.
I think what you're saying is both beside the point and incorrect.
Firstly, not studying, ignoring safety rules, and half-assing a task at work are behaviors; they don't necessarily reflect understanding or intelligence. Sometimes I get up late and have to rush in the morning; that doesn't mean I lack the intelligence to understand that time passes when I sleep.
Secondly, I don't think that most people fail to see the connection between not studying and failing a test. They might give other excuses for emotional or practical reasons, but I think you'll have a hard time finding anyone who genuinely claims that studying doesn't usually lead to better test scores. Same for ignoring safety rules or half-assing work.
> I think you'll have a hard time finding anyone who genuinely claims that studying doesn't usually lead to better test scores.
I know dozens of people who have told me to my face that they don't need to attend lectures to pass a course, and then went on to fail it.
Coincidentally, most of my graduating class is unemployable.
It's not a lack of understanding or intelligence, but it is an attitude that is no longer necessary.
If I wanted someone to do a half-assed job at writing code until it compiles and then send the results to me for code review, I'd just pay an AI. The market niche for that person no longer exists. If you act like that at work, you won't have a job.
While the majority of humans are quite capable of this, there are countless examples showing that being capable of it doesn't mean they actually do it.