They haven't because this might actually finally wake up at least part of the people and cause problems for Dear leader. Let's hope Trump doesn't fold immediately and give the situation more time to develop.
One of my most valuable memories is when I dismissed ChatGPT the day it came out. I didn't bother trying it; GPTs were just autocomplete, and OpenAI had already stretched the limits of what autocomplete could do, so this ChatGPT would just be slightly better autocomplete.
> History tends to get rewritten by big successes, so that in retrospect it seems obvious they were going to make it big. For that reason one of my most valuable memories is how lame Facebook sounded to me when I first heard about it. A site for college students to waste time? It seemed the perfect bad idea: a site (1) for a niche market (2) with no money (3) to do something that didn't matter.
> One could have described Microsoft and Apple in exactly the same terms.
It's worth remembering that even the big successes often don't realize they're onto something -- you'll be less likely to dismiss the next big wave when you see it forming.
It is a very fancy autocomplete indeed, but it's not like you missed anything on day 1.
The engineering they put around GPT to make it work as a smart translator and eloquent "brainstormer" and whatnot is impressive, but so far there's no killer app in sight. (The biggest bang for the buck seems to be Copilot.)
It is the killer app in itself. You might be underestimating its impact and how much usage it's getting (not always for good, not always for bad).
It's totally permeated many areas of life for people.
It definitely is a killer app, and light years ahead of any competing product. GitHub Copilot is good, but a different use case to me. If you meant MSFT Copilot, well, I'm not sure what benefit, if any, exists.
As far as I understand, Bard/Gemini Advanced (the product) is backed by Gemini Ultra (the model). Gemini comes in Ultra/Pro/Nano, and I don't think the Bard/Gemini web product is using Nano at all, as that's designed for on-device inference.