I'm developing concern for Steve. He's been a well-known developer and writer in the industry for years (see his popular 'Google Platforms Rant' essay [0]).
Now, Yegge's writing tilts towards the grandiose... compare his writing when joining Grab [1] and Sourcegraph [2] respectively versus how things actually played out.
I prefer optimism and I'm not anti-AI by any means, but given his observed behavior and how AI can exacerbate certain pathologies... not great. Add the recent crypto activities on top, and all that entails, and you have the ingredients for a powder keg.
He was right about Google in [1] when I was still drinking the Kool-Aid, in big and tangible ways that aren't discussed publicly.
[2] is 100% accurate: Grok was the backbone / glue of Google's internal developer tools.
I don't disagree about the current situation, and I'm uncomfortable sticking my neck out on this because I'm basically saying "the guy who kinda seems out of it totally wasn't out of it back when you thought he was." But [1] and [2] definitely aren't grandiose: the claims he makes re: Google and his work there are accurate. A small piece of why I feel comfortable in this is that both of these were public blog posts his employers were 100% happy about when hiring him into top positions.
I should be specific: I think the technical analysis is reasonable, and I actually enjoy someone staking out a big vision, which is why I saved these pieces.
An example:
"I’ve seen Grab’s hunger. I’ve felt it. I have it. This space is win or die. They will fight to the death, and I am with them. This company, with some 3000 employees I think, is more unified than I’ve seen with most 5-person companies. This is the kind of focused camaraderie, cooperation and discipline that you typically only see in the military, in times of war.
Which should hardly surprise you, because that’s exactly what this is. This is war.
I am giving everything I’ve got to help Grab win. I am all in. You’d be amazed at what you can accomplish when you’re all in."
This is the writing of someone planning a capstone career move, not someone leaving in 18 months. It's not the worst thing to do (he says he left because the time difference supporting a team in SE Asia was physically hard, and he's getting older), and I support taking big swings. I'm just saying Yegge's writing has a pattern.
Crypto, and what Yegge is doing with $GAS, is dangerous: if the token price crashes and people who bet their life savings think he didn't deliver on his promises, it could get ugly. I like Steve personally, which is why I'm saying anything at all.
This appears to be the coin in question: https://coinmarketcap.com/currencies/gas-town/ - up 222,513.21% in the past week! (And down 25.26% in the last 24 hours. But... suppose it goes back up again?!)
I didn’t see the Sourcegraph thing, but the Grab episode always seemed odd to me. He wrote these breathless rants about how epic it all was, then quit after a year or so. I just figured the long hours eventually stopped being awesome.
One nice thing if you work on the B2B software side: end of year is generally slow in terms of new deals. Definitely a good idea to schedule bug bashes, refactors, and general tech-debt payments then, with greater buy-in from the business.
Does this use the txtar format created for developing the Go language?
I actually use txtar with a custom CLI to quickly copy multiple files to my clipboard and paste them into an LLM chat. I try not to get too far from the chat paradigm so I can stay flexible about which LLM provider I use.
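For reference, the txtar format is simple enough to emit by hand: an optional free-text comment, then each file introduced by a "-- name --" marker line. The real implementation is the Go package golang.org/x/tools/txtar; this is just a minimal Python sketch of the serialization (the function name is my own):

```python
def to_txtar(files, comment=""):
    """Serialize {filename: contents} into a txtar archive string.

    Layout: optional comment text first, then for each file a
    "-- name --" marker line followed by its body. txtar bodies
    are newline-terminated, so a trailing newline is added if missing.
    """
    parts = []
    if comment:
        parts.append(comment if comment.endswith("\n") else comment + "\n")
    for name, body in files.items():
        if body and not body.endswith("\n"):
            body += "\n"
        parts.append(f"-- {name} --\n{body}")
    return "".join(parts)

print(to_txtar({"main.go": "package main\n"}, comment="prompt context"))
```

A CLI like the one described above would just run this over the selected files and pipe the result to the clipboard.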
There’s also ARIA [1]. I actually think the one linked above seems more interesting. At the same time, it doesn’t seem all that different from just rolling your own solution.
> LLM Chain querying documents with citations [e.g. a scientific Zotero library]
> This is a minimal package for doing question and answering from PDFs or text files (which can be raw HTML). It strives to give very good answers, with no hallucinations, by grounding responses with in-text citations.
pip install paper-qa
> If you use Zotero to organize your personal bibliography, you can use the paperqa.contrib.ZoteroDB to query papers from your library, which relies on pyzotero. Install pyzotero to use this feature:
pip install pyzotero
> If you want to use [ paperqa with pyzotero ] in a Jupyter notebook or Colab, you need to run the following command:
> Semantic search and workflows for medical/scientific papers
python -m paperai.report report.yml 50 md <path to model directory>
> The following report output formats are supported: Markdown, CSV, and Annotation. With Annotation, columns and answers are extracted from articles and the results are annotated over the original PDF files; this requires passing in a path to the original PDF files.
Considering the tight integration between the Nix language, Nixpkgs, and the Nix program, I mostly see CUE as usable for generating data files, which would eventually need to be consumed by a .nix config at some point.
Theoretically, languages other than Nix can be used to generate derivations; from that point forward to the finished build result, everything should work the same way.
There is already an experimental setup for Nix to call Nickel to produce derivations, via an importFromNcl function call. A more language-agnostic FDI (foreign derivation interface?) would be really exciting.
When writing Nix code it can often feel quite opaque what's going wrong, so writing actual package descriptions in a more constrained language like CUE, at least to me, feels like it might help. Better tooling in Nix and the kind of validation that Nickel proposes also look like good bets. I'm glad there is more than one horse in the race to improve the ecosystem.
I have been wondering for some time what would be required to define a package set just like nixpkgs in CUE (since it's not lazy and not Turing-complete). I only have very vague thoughts on that, so I would love to read about it from someone who knows more about how nixpkgs works, or from someone actually trying it.
Yes, derivations are the main primitive to target in an agnostic way.
The main challenge is that the value of the nixpkgs repo is enormous, and it looks tightly tied to Nix the language and its implicit constructs. Instead of an FDI, I think one would have to provide a true competitor that is more attractive along multiple dimensions, like:
- Tackling a package repo and a NixOS equivalent with day-1 reproducibility (not just repeatability), as in https://reproducible-builds.org/
- A better UX on the "entrypoints" of Nix, like home-manager and dev-shell flakes. I think CUE has some nicer language features for this, and it does not require figuring out derivation generation, just referencing the existing nixpkgs set
What do you mean by day 1 reproducibility instead of repeatability?
Isn't tracking and eliminating nondeterminism a gradual process anyways?
I don't see how nixpkgs or NixOS is a faulty basis for that kind of work, since I have not run into any flaws with regards to reproducibility that I thought would be easier to address on a clean slate basis.
You're basically right. I'm not 100% sold on this idea, but I think it's a possibility. Most of what I'm seeing right now is CUE facing outward, e.g., to generate typed things from within nix. This[1] is a good example of that. Given how flexible CUE is, and given how similar nix is to HCL, I think it's possible to have CUE emit nix and provide some basic typing that way.
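As a toy illustration of "have CUE emit nix": the target is just text in Nix's attrset syntax, so any typed front end can render it. This sketch uses Python as a stand-in for CUE's export step (the function and attribute names are made up for illustration):

```python
def to_nix(value, indent=0):
    # Render a Python value as a Nix expression: dicts become attrsets,
    # lists become Nix lists, strings are quoted, booleans lowercased.
    pad = "  " * indent
    if isinstance(value, dict):
        inner = "".join(
            f"{pad}  {key} = {to_nix(val, indent + 1)};\n"
            for key, val in value.items()
        )
        return "{\n" + inner + pad + "}"
    if isinstance(value, list):
        return "[ " + " ".join(to_nix(v, indent) for v in value) + " ]"
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, str):
        return '"' + value.replace('"', '\\"') + '"'
    return str(value)

print(to_nix({"pname": "hello", "version": "2.12", "doCheck": True}))
```

The typing would live on the front-end side (CUE schemas constraining what can be emitted); the Nix side just imports the generated file.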
Dan Wang explores this idea in 'How Technology Grows' [0]. To summarize, he asserts that the main downside of offshoring is the loss of process knowledge (the tacit knowledge that is learned by doing and transmitted through culture).
I enjoyed this blog post. Julia does a great job of distilling an idea down with examples.
I am fairly comfortable with Linux as a user for things like understanding processes, ports, key files and utilities, etc. The way I understand how to model abstractions like containers is to know the various OS primitives: cgroups, changing root, network isolation, and so on. Once one sees how those pieces come together to create the container abstraction, they can be mapped to the system calls provided by the OS. The OS usually also bundles utilities (like `chroot`) so an operator can interface with those primitives directly.
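One cheap way to make those primitives concrete: on Linux, every process's namespace memberships are visible under /proc/self/ns, no tooling required. A quick sketch (the exact set of entries varies by kernel version; the names below assume a reasonably modern Linux):

```python
import os

def my_namespaces():
    # Each symlink in /proc/self/ns names one namespace the calling
    # process belongs to (mnt, net, pid, uts, ipc, user, cgroup, ...).
    # Two processes are in the same namespace iff the symlink targets
    # (inode numbers) match.
    return sorted(os.listdir("/proc/self/ns"))

print(my_namespaces())
```

Comparing these entries between your shell and a process inside a container shows exactly which isolation primitives the container runtime actually engaged.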
I have been confused about containers for so long but having read your comment and looking up the terms you mentioned allowed me to finally find the right articles that explained containers to me. Thanks!
On Linux, containers usually involve some more primitives than cgroups and namespaces. Bind mounts, overlayfs (TFA), veth network interfaces (to connect the network namespaces), network bridges, memfd, seccomp, procfs, etc. are all bits and pieces used by most containers/sandboxes.
Many of those pieces can be useful on their own. For example you don't need a full container if all you want to do is to ensure that some applications use a VPN and others use your public network address. A network namespace is all you need and those are accessible through simple cli tools such as `unshare` and `ip netns` and don't require behemoths like dockerd.
The tricky part is using them all together correctly, initializing them in the right order, not getting the control daemons confused by running in the wrong context and so on. That's where many of the security vulnerabilities come from.
Hope someone is looking out for him.
[0] https://courses.cs.washington.edu/courses/cse452/23wi/papers...
[1] https://steve-yegge.medium.com/why-i-left-google-to-join-gra...
[2] https://sourcegraph.com/blog/introducing-steve-yegge