
For me it’s org-mode. Although now that I think of it, there’s a Neovim implementation I’ve been meaning to try.

And so it begins.

Go on..

I'm curious, what do you think the future of the car industry is, then?

Have you tried it? I’ve been meaning to.


Yes. Somewhat expensive given it's web-only (no API), but it works very well and new features are added continuously.


> there are at least a dozen companies that provide non-Anthropic/non-OpenAI models in the cloud

Do you have some links?

Also I assume the privacy implications are vastly different compared to running locally?


Throw a rock and you'll hit one... Groq (not Grok, Elon stole the name), Mistral, SiliconFlow, Clarifai, Hyperbolic, Databricks, Together AI, Fireworks AI, CompactifAI, Nebius Base, Featherless AI, Hugging Face (they do inference too), Cohere, Baseten, DeepInfra, DeepSeek, Novita AI, OpenRouter, xAI, Perplexity Labs, AI21, OctoAI, Reka, Cerebras, Fal AI, Nscale, OVHcloud AI, Public AI, Replicate, SambaNova, Scaleway, WaveSpeedAI, Z.ai, GMI Cloud, Nebius, Tensorwave, Lamini, Predibase, FriendliAI, Shadeform, Qualcomm Cloud, Alibaba Cloud AI, Poe, Bento LLM, BytePlus ModelArk, InferenceAI, IBM Watsonx.AI, AWS Bedrock, Microsoft, Google


I use Ollama Cloud. $20/mo and I never come close to hitting quota (YMMV obviously).

They don't log anything, and they use US datacenters.


For privacy-preserving direct inference: Fireworks AI, Nebius.

Otherwise, OpenRouter for routing to lots of different providers.


OpenRouter, for example; it has both open and closed models.


The ideas in the update were previously explored by Gwern 2 years ago: https://www.lesswrong.com/posts/PQaZiATafCh7n5Luf/gwern-s-sh...


Specifically, Cochrane wrote:

> On reflection I have started to worry again. In 10 to 20 years nobody will read anything any more, they just will read LLM digests. So, the single most important task of a writer starting right now is to get your efforts wired in to the LLMs. Nothing you write will matter if it is not quickly adopted to the training dataset. As the art of pushing your results to the top of the google search was the 1990s game, getting your ideas into the LLMs is today’s. Refine is no different. It’s so good, everyone will use it. So whether refine and its cousins take a FTPL or new Keynesian view in evaluating papers is now all determining for where the consensus of the profession goes.

For more recent comments, see https://dwarkesh.com/p/gwern-branwen https://gwern.net/llm-writing https://www.lesswrong.com/posts/34J5qzxjyWr3Tu47L/is-buildin... https://gwern.net/blog/2025/ai-cannibalism https://gwern.net/blog/2025/good-ai-samples https://gwern.net/style-guide

The scaling will continue until morale improves. I advise people to skate to where the puck will be, and to ask themselves: "if I knew for a fact that LLMs could do something I am doing in 1-2 years, would I still want to do it? If not, what should I be doing now instead?"


Isn’t the linked article claiming that SM is superior to FSRS?


SM is claiming that the latest versions of the SM algorithm (namely SM-19) are vastly superior to FSRS (maybe they are?).

They state, in contrast:

> We do not dismiss the work behind FSRS. It is a commendable open-source effort and a marked improvement over ancient algorithms like SM-2.

For context, Anki uses SM-2's algorithm (albeit apparently heavily modified for various special cases) if FSRS is not enabled.
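
For the curious, here's a minimal sketch of the classic published SM-2 update (Wozniak, 1990), not Anki's modified variant; q is self-graded recall quality from 0 to 5:

    # Classic SM-2 (Wozniak, 1990); q = recall quality, 0-5.
    def sm2_update(n, interval, ef, q):
        """Return (repetition count, next interval in days, ease factor)."""
        if q < 3:                       # failed recall: restart repetitions
            return 0, 1, ef
        ef = max(1.3, ef + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
        if n == 0:
            interval = 1
        elif n == 1:
            interval = 6
        else:
            interval = round(interval * ef)
        return n + 1, interval, ef

    # A card recalled at quality 4 three times: intervals 1 -> 6 -> 15 days.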


I think you're wrong, especially if you consider more than just carbon (e.g. land use, deforestation, ...) https://woods.stanford.edu/news/meats-environmental-impact


That says 14-18% of global GHG emissions is due to cattle, but the person I was responding to said "the biggest impact you can have is by eating way less meat, cattle in particular". That doesn't seem like the biggest impact possible. For Americans, their entire diet is attributable to about "5.14 kg CO2 eq. per person per day" https://habitsofwaste.org/wp-content/uploads/2020/11/2020-CS... (UMich Center for Sustainable Systems). For a family of 2.5, that equates to about 4.7 tons CO2e/year. The average American family footprint is about 48 tons CO2e/year. So slightly less than 10% for their entire diet. Of that, maybe a bit more than half is attributable to cattle, or about 5% total.

By comparison, driving a pair of gasoline cars their average of 10k miles/yr is something like 16% of the average American family's yearly emissions, or 3x the beef.
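
Rough arithmetic, for anyone who wants to check (the ~0.4 kg CO2e/mile for an average gasoline car is my assumption, roughly EPA's published figure; the rest are the numbers above):

    # Back-of-the-envelope check of the diet vs. cars comparison.
    diet_per_person_day = 5.14   # kg CO2e/day (UMich CSS figure)
    household = 2.5              # people
    family_total = 48_000        # kg CO2e/year, average American family
    kg_per_mile = 0.40           # assumed: ~EPA average, gasoline car

    diet = diet_per_person_day * household * 365   # ~4,690 kg
    cars = 2 * 10_000 * kg_per_mile                # ~8,000 kg

    print(f"diet: {diet / 1000:.1f} t = {diet / family_total:.0%}")
    print(f"cars: {cars / 1000:.1f} t = {cars / family_total:.0%}")
    # diet: 4.7 t = 10%
    # cars: 8.0 t = 17%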

Switching from heating with natural gas to a heat pump would also make a bigger dent for the average American family, let alone if they're living somewhere that gets properly cold, like New England. Or just spending $2,000 on air sealing and a layer of fiberglass, for those living in a leaky house - more impactful than not eating beef.

Looking into it a bit for Italian families, it looks like cattle might be a larger proportion, partly because their overall carbon footprint is lower. But it's still a relatively small proportion (<15%).

Pretty sure that if landowners weren't raising cattle, the alternative wouldn't be letting the land return to nature and lowering its value, at least not without big government programs that essentially pay them to do that. So that whole argument seems kind of moot.


> On its stern, researchers were shocked to find extensive remains of a castle, a kind of covered deck where the crew would have sought shelter. Records show that castles were distinctive features of medieval cogs, but no physical evidence of them had previously been identified.

I suppose this explains why the thing that exists on more modern ships is called a “forecastle”.

PS go check the pronunciation for that word as it’s quite surprising.


The forecastle is in the forward part of a ship, at the front, not the back. Looking at renderings of cogs, the 'castle' at the stern seems more to anticipate the modern bulk carrier, with an accommodation block and bridge on top at the aft end, looking out over the cargo holds.


Ships of that era and later had castles on both ends, fore and aft. It's just the forward one that was retained in usage as a sailing term, even after foredecks no longer looked like castles. The aft castle became a quarterdeck, a poop deck, a cockpit, a bridge, etc.

Meanwhile, a built-up and elevated stern 'castle' is an advantageous place to put the steering and command position: close to the rudder, with visibility of the whole ship, its rig, and where the ship is going, while maximizing mid-ship area for cargo. If you have to pick one end or the other, the stern is the more comfortable end of the ship, being most sheltered from wave action and weather. Being elevated and fortified also helps as a fighting/defensive position, but that is less important for modern cargo ships. 'Anticipation' isn't quite the right word, as shipbuilders have always worked within the same basic design considerations and trade-offs, and the sea itself continues to enforce the same fundamental constraints.


Other types of ships also had castles, such as the carrack and galleon. They are super tall and ungainly looking compared to modern ships, even those from the 18th and 19th centuries.


‘Folksal’?

You aren’t wrong.


Really happy about how things are going for you, and the positive impact this is having on your family!

It’s good to get some good news sometimes. Thanks for that :)

