Is this the first sum-type-style discrimination mechanism we've had for C++ unions? I've been waiting for this feature since 1978.
Still, it's a wasted opportunity not to have a language-level extension of the `switch` statement that allows proper pattern matching. Even with `std::is_within_lifetime`, C++ unions are error-prone and hard to work with.
See the first `IpAddr` example here[1], where you have separate variants, both with string representations. You can't do this with `std::variant`. You have to use separate types.
I see what you mean now, thanks. To reproduce that example with std::variant I would need some kind of strong type alias, which as far as I know is missing from C++; so the only feasible way to do that would be wrapping the string in another class or struct.
My browser can handle tens of thousands of lines of logs, and has Ctrl-F that's useful for 99% of the searches I need. A better runner could just dump the logs and let the user take care of them.
Why most web development has devolved into a React-style "you can't search for what you can't see" is a mystery.
Good place to ask: I'm not comfortable with NPM-style `uses: randomAuthor/some-normal-action@1` for actions that should be included by default, like bumping version tags or uploading a file to the releases.
What's the accepted way to copy these into your own repo, so you can make sure attackers won't update the script to leak your private repo and steal your `GITHUB_TOKEN`?
There are two solutions GitHub Actions people will tell you about. Both are fundamentally flawed, because GitHub Actions has a package manager, and it might be the worst [1].
The first thing people will say is to pin the commit SHA: instead of `uses: randomAuthor/some-normal-action@v1`, write `uses: randomAuthor/some-normal-action@e20fd1d81c3f403df57f5f06e2aa9653a6a60763`. The second is to fork the action into your own GitHub account and import your fork instead.
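In workflow YAML the two tag-vs-SHA forms look like this (action name and SHA reused from above):

```yaml
steps:
  # Tag pinning: the v1 tag is mutable and can be moved to new, possibly
  # malicious, code at any time.
  - uses: randomAuthor/some-normal-action@v1

  # SHA pinning: the commit is immutable, but anything *that action* pulls
  # in by tag is still floating.
  - uses: randomAuthor/some-normal-action@e20fd1d81c3f403df57f5f06e2aa9653a6a60763
```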
However, neither of these "solutions" works, because neither pins the transitive dependencies.
Suppose I pin the action at a SHA, or fork it, but that action still imports "tj-actions/changed-files". In that case, I would still have been pwned in the "tj-actions/changed-files" incident [2].
The only way to be sure is to manually traverse the dependency hierarchy, forking each action as you go down the "tree" and updating every action to only depend on code you control.
In other package managers, this is solved with a lockfile - go.sum, yarn.lock, ...
I found that "Agentic Search" is generally useless in most LLMs since sites with useful data tend to block AI models.
The answer to "when is it cheaper to buy two singles rather than one return between Cambridge and London?" is available on sites such as BRFares, but no LLM can scrape it, so it just makes up a generic, useless answer.
My guess is that this is going to be the future for LLMs too. It will get harder or more expensive for AI companies to train their models on the latest information as most sites will block the scrapers or ask for a fee.
There might be a future where you'll have to pay more for an up-to-date model versus a legacy (out-of-date) one.
I've been impressed by how good ChatGPT is at pulling the right context from old conversations.
When I ask simple programming questions in a new conversation, it can generally figure out which project I'm going to apply the answer to, and write examples tailored to that project. I feel it also makes the responses a bit more warm and personal.
Agreed that it can work well, but it can also be irritating. I find myself using private conversations to try to isolate topics; a straightforward per-chat toggle for memory use would be nice.
Love this idea. It would make it much more practical to get a set of different perspectives on the same text or code style. Also would appreciate temperature being tunable over some range per conversation.
ChatGPT having memory of previous conversations is very confusing.
Occasionally it will pop up saying "memory updated!" when you tell it some sort of fact. But hardly ever. And you can go through the memories and delete them if you want.
But it seems to know things from previous conversations where no "memory updated" pop-up ever appeared, and which don't show up in the list of memories.
So... how is it remembering previous conversations? There is obviously a second type of memory that they keep kind of secret.
If you go to Settings -> Personalisation -> Memory, you have two separate toggles for "Reference saved memories" and "Reference chat history".
The first one controls the "memory updated" pop-up and its bespoke list of memories; the second likely refers to some RAG system that lets ChatGPT retrieve relevant snippets of previous conversations.
ChatGPT is what work pays for so it's what I've used. I find it grossly verbose and obsequious, but you can find useful nuggets in the vomit it produces.
Go into your user settings -> personalisation. They’ve recently added dropdowns to tune its responses. I’ve set mine to “candid, less warm” and it’s gotten a lot more to-the-point in its responses.
I was very disappointed with Supernova in the East. What started as a telling of the Pacific War from the point of view of the Japanese empire morphed into the usual "war is bad but American soldiers are heroes" that's very common for this period.
I tuned out when he spent 30 minutes describing a famous photo-op of General MacArthur going ashore in the Philippines. That is the complete opposite of the podcast's original promise.
The podcast started as a sequel to Mike Duncan's classic The History of Rome, and in my opinion surpassed it. Where THoR eventually falls into the narrative trap of becoming "The Lives of Roman Emperors", THoB spends a lot of time on the economic, demographic, societal, and technological changes within the Empire and the wider world.
Highly recommended if you want a proper history podcast.
The thing I miss about the internet of the late 2000s and early 2010s was having so much useful data available, searchable, and scrapable. Even things like "which of my friends are currently living in New York?" are impossible to find now.
I always assumed this was a once-in-history event. Did this cycle of data openness and closure happen before?