I see very few people use "toot" much on Mastodon now, though. It's certainly in use, but "post" seems to be more common, partly, I think, because it saves us from switching back and forth when talking about Twitter, but also because Mastodon is just one of many applications in the Fediverse. A number of my followers are on Pleroma or Misskey, for example, and a "toot" doesn't have the same connection to their software.
set -o vi: "Allow shell command line editing using the built-in vi editor. Enabling vi mode shall disable any other command line editing mode provided as an implementation extension."
No other mode, specifically emacs mode, is included in the POSIX standard.
Back in the heyday of POSIX standardization, the Emacs camp and the POSIX camp were both very opinionated and not entirely aligned. One example is the standard disk block size used in command line utilities such as du and df. RMS made GNU use 1K blocks instead of the POSIX-standard 512-byte size, but this could be overridden by setting the environment variable POSIX_ME_HARDER (later renamed to the tamer POSIXLY_CORRECT).
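That history is still visible in GNU coreutils today: with POSIXLY_CORRECT set, du reports 512-byte blocks instead of 1K blocks. A quick demo, assuming GNU du (on BSD du the variable is ignored and 512-byte blocks are already the default):

```shell
# GNU default: size of the current directory reported in 1K blocks
du -s .

# POSIX behavior: the same size reported in 512-byte blocks,
# so the number roughly doubles
POSIXLY_CORRECT=1 du -s .
```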
You can edit the current command line in a full vim instance by pressing Esc and then 'v'. On :wq, the contents of the buffer are run as the command. That way you can use macros with q (and any fancy vim plugins you might like).
Yes! I never got the hang of the readline shortcuts. The only one I remember and regularly use is reverse history search: Ctrl-R.
The downside is that while vim mode works nicely in bash, other programs like gdb that also use readline aren't as easy to get into vim mode, if they support it at all…
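For what it's worth, programs that link against GNU Readline (gdb, psql, and others) read `~/.inputrc` at startup, so vi mode can usually be enabled for all of them at once. A minimal config, using standard Readline directives:

```
# ~/.inputrc -- read by every program that uses GNU Readline
set editing-mode vi

# Optional: show the current vi mode in the prompt (Readline 7.0+)
set show-mode-in-prompt on
```

Tools that roll their own line editing instead of linking Readline won't pick this up, which may be what you're running into.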
The biggest barrier to this is the hardware requirements. I saw an estimate on r/machinelearning that, based on the parameter count, GPT-3 needs around 350GB of VRAM. Maybe you could cut that in half, or even to one-eighth if someone figures out some crazy quantization scheme, but it's still firmly outside the realm of consumer hardware right now.
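That 350GB figure is just the parameter count times bytes per weight. A quick sanity check (175B is the publicly stated GPT-3 parameter count; the rest is arithmetic, ignoring activations and overhead):

```python
# Back-of-envelope VRAM estimate for a 175B-parameter model.
PARAMS = 175e9  # GPT-3's published parameter count

def vram_gb(bits_per_param):
    """GB needed just to hold the weights (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_param / 8 / 1e9

print(vram_gb(16))  # fp16: 350.0 GB -- matches the r/ml estimate
print(vram_gb(8))   # int8: 175.0 GB -- "cut that in half"
print(vram_gb(4))   # 4-bit quantization: 87.5 GB
```

Even at 4 bits per weight you're still looking at multiple top-end consumer cards just for the weights.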
The biggest I've found is GPT-J (EleutherAI/gpt-j-6B), which has a model size comparable to GPT-3 Curie, but the outputs have been very weak compared to what I'm seeing people do with GPT-3 Da Vinci. The outputs feel like GPT-2 quality. I'm probably using it wrong, or maybe there are better BART models published that I don't know about?
> Write a brief post explaining how GPT-J is as capable as GPT-3 Curie and GPT-2, but not as good as GPT-3 Da Vinci.

"GPT-J ia a new generation of GPT-3 Curie and GPT-2. It is a new generation of GPT-3 Curie and GPT-2. It is a new generation of GPT-3 Curie and GPT-2." [the sentence keeps repeating]
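Loops like that are often a decoding problem as much as a model problem: greedy decoding will happily repeat itself. One common mitigation is the CTRL-style repetition penalty (exposed as `repetition_penalty` in libraries like Hugging Face transformers). A toy sketch of the idea, in plain Python rather than the real library:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Damp the logits of already-generated tokens: divide positive
    logits (multiply negative ones) by the penalty, so exact repeats
    become less likely on the next sampling step."""
    out = list(logits)
    for t in set(generated_ids):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

# Token 0 was already emitted, so its logit drops from 2.0 to ~1.67;
# unseen tokens 1 and 2 are untouched.
print(apply_repetition_penalty([2.0, 1.0, -0.5], [0]))
```

Sampling (temperature, top-p) instead of greedy decoding helps for the same reason.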
The existing models aren't fine-tuned for question answering, which is what makes GPT-3 usable. EleutherAI or one of those other Stability-style collectives is working on one.
It's very sad how they had to nerf the model (AIDungeon and stuff). I don't think anything on a personal / consumer GPU could rival a really big model.
Same thing with Copilot - most of the time, Copilot tells me what I already know (not in a bad way - kind of like how a pair coder would just nod their head as I'm typing), but every now and then it gives me something really surprisingly good.
I like Copilot mostly for helping with forgotten function names, or when I know what I want to do but my brain is running on empty; it can give me a scaffold in a new class that I can mold into something better.
I'm in final interviews, so no signed offer yet, but travelling for a multi-day interview at a red-flag company is a waste of time when there are other options to prioritize.
Yes, I know they are going through large layoffs, and know the % in two cases.
I didn't vote on your answer, just saw it now and replied.