Herding cats is treating the LLM's context window as your state machine. You're constantly prompt-engineering it to remember the rules, hoping it doesn't hallucinate or silently drop constraints over a long session.
System-level governance means the LLM is completely stripped of orchestration rights. It becomes a stateless, untrusted function. The state lives in a rigid, external database (like SQLite). The database dictates the workflow, hands the LLM a highly constrained task, and runs external validation on the output before the state is ever allowed to advance. The LLM cannot unilaterally decide a task is done.
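The pattern is easy to sketch. This is not the actual tool, just a minimal illustration with a hypothetical schema: the database owns the workflow, the LLM is invoked as a stateless untrusted function, and an external validator decides whether the task row may advance.

```python
import sqlite3

def next_pending_task(db: sqlite3.Connection):
    # The DB, not the model, dictates what happens next.
    return db.execute(
        "SELECT id, prompt FROM tasks WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).fetchone()

def advance(db: sqlite3.Connection, llm, validate) -> bool:
    """Run one step: hand the LLM a constrained task, validate externally,
    and only then let the state advance. Returns False when no work remains."""
    task = next_pending_task(db)
    if task is None:
        return False
    task_id, prompt = task
    output = llm(prompt)  # stateless, untrusted call
    if validate(output):  # validation happens outside the model
        db.execute(
            "UPDATE tasks SET status = 'done', output = ? WHERE id = ?",
            (output, task_id),
        )
    else:
        # The LLM cannot unilaterally mark a task done.
        db.execute("UPDATE tasks SET status = 'rejected' WHERE id = ?", (task_id,))
    db.commit()
    return True
```

The key property: the model never touches `status` directly; only the validator's verdict moves the row.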
I got so frustrated with the former while working on a complex project that I paused it to build a CLI to enforce the latter. Planning to drop a Show HN for it later today, actually.
> The database dictates the workflow, hands the LLM a highly constrained task, and runs external validation on the output before the state is ever allowed to advance.
This sounds like where lat.md[0] is headed. The only thing is it doesn't do task constraints. Generally I find the path these tools are taking interesting.
I looked into lat.md. They are definitely thinking in the same direction by using a CLI layer to govern the agent.
The key difference is the state mechanism. They use markdown; I use an AES-encrypted SQLite database.
Markdown is still just text an LLM can hallucinate over or ignore. A database behind a compiled binary acts as a physical constraint; the agent literally cannot advance a task without satisfying the cryptographic gates.
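To make the "cryptographic gate" idea concrete — this is a hypothetical sketch, not the tool's actual mechanism — state transitions can require an HMAC tag that only the governing binary can produce. An agent that merely edits text, as with a markdown state file, cannot forge the tag:

```python
import hashlib
import hmac

# Assumption: this key lives only inside the compiled binary and is
# never exposed to the agent.
SECRET = b"held-only-by-the-compiled-binary"

def sign(state: str) -> str:
    """Tag a state string; only the holder of SECRET can compute this."""
    return hmac.new(SECRET, state.encode(), hashlib.sha256).hexdigest()

def try_advance(state: str, tag: str) -> bool:
    """Accept a transition only if the tag verifies; constant-time compare."""
    return hmac.compare_digest(sign(state), tag)
```

Tamper with the state text and verification fails, so the task simply cannot advance.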
Just posted it here: https://news.ycombinator.com/item?id=47601608
Thank you so much for the coffee offer, that genuinely made my day! I don't have a sponsor link set up. Honestly, the best support is just hearing if this actually helps you ship your personal project faster without losing your mind to prompt engineering. I really hope it gives you your sanity back. Let me know how it goes!
Comments are marked dead by automatic processes, not through downvotes. They're dead before anyone sees them, and you can't vote on a dead comment. amangsingh's comments have probably triggered some automated moderation. Probably at least partially because they sound LLM-generated.
Spot on regarding the automod. Unfortunately, the way I naturally structure my writing almost always triggers a 50/50 flag on AI content detectors. It is the absolute bane of my existence.
The filter instantly shadowbanned the Show HN post when I submitted it, which is why the link was dead for a while. Thankfully, human mods reviewed it and restored it. The link has been fully live for a while now!
Noticed that and was wondering, thanks for the explanation. Does this imply that humans need to go “vouch” for the flagged comments to bring them back into HN’s good graces?
Looks like it was downvoted to hell and marked as dead super fast. I have showdead enabled in my HN settings (dead comments show up super desaturated) and this seems unusual.
Huh? Unless the sole purpose of the commit was to lint code, it would be unnecessary fluff to append the name of the automatically linted tools that ran in a pre-commit hook in every commit.
The solution to this would be a law forcing these sites to allow third-party suggestion algorithms, so that you can choose who and how content is being suggested to you.
It could be perhaps as simple as allowing third-party websites and apps for watching Youtube on your phone. And it's okay if this would be a premium paid feature, so there's no counter argument that "it costs them money to host videos".
This is not an entirely new idea either. Before Spotify became popular, people would integrate Last.FM into their media players to get music recommendation based on their listening history, and you could listen to music via YouTube directly on the last.fm website.
The solution to all of Big Tech's monopolies is actually pretty simple: Interoperability must become a law - this includes using custom algorithms or allowing other platforms (like your own app) to access YOUR data on whatever platform 'hosts' it.
> While the dominance of Internet platforms like Twitter, Facebook, Instagram, or Amazon is often taken for granted, Doctorow argues that these walled gardens are fenced in by legal structures, not feats of engineering. Doctorow proposes forcing interoperability—any given platform’s ability to interact with another—as a way to break down those walls and to make the Internet freer and more democratic.
Most notably, he retells how early Facebook used to siphon data from its competitor MySpace and act on users' behalf there (e.g. reply to MySpace messages via Facebook) - and then, once the Zuck(er) was top dog, moved to make these basic interoperability actions illegal to prevent anyone doing to him what he did to others.
We can’t depend on these platforms to offer interoperability, or even on laws to force them to do so. The DMA forced Apple to allow 3rd party app stores in Europe and they still hampered it so much that hardly anyone uses it.
We need platforms to offer that interoperability and simply connect to these “marketplaces.” Take Shopify for example, sellers use that platform to list on Amazon, Google Shopping, TikTok shop, etc. We need open source alternatives to those where the sellers own the platform and these marketplaces are forced to be interoperable or left behind by those that are.
For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
It’s a tall task, but achievable and it will happen given enough time.
> For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
There's an acronym for this: POSSE (Publish [on your] Own Site, Syndicate Elsewhere). Part of the IndieWeb movement, for those who want to explore this worthwhile idea further.
Sure, you can do that. But then the syndicated content usually ends up looking like low-effort slop and doesn't get much traction. Each publishing platform has its own features, limitations, and cultural norms. If you want to have any impact then you can't just copy content around: you have to tailor the message to the medium.
How will it happen? Writing open source code is one thing, maybe enough people will volunteer their work. But running an operational marketplace or social media platform is something else entirely. You need a real revenue stream to pay for hardware, connectivity, operations staff, regulatory compliance, etc. That stuff isn't cheap.
I've been building towards an interoperable marketplace[0] and realized I needed to launch open-source alternatives to Shopify[1], Toast[2], Instacart, etc to take on the proprietary marketplaces.
It really comes down to merit and how much value you can bring to the actual sellers in these marketplaces with the software. If enough sellers switch, marketplaces will follow.
The foundational problem with interoperability is that it can and will immediately be abused by bad actors as long as there is no price tag attached to every piece of communication.
Among social media, Mastodon (and anything Fediverse) has it the worst, obviously, but Telegram and Whatsapp are rife with spam and scams, and Twitter, back when it still had third-party apps, was rife with credential and token compromises (mostly used to shill cryptocurrencies).
As for the price tag reference - we've seen that with SMS. It used to be the case that sending SMS cost real money, something like 20 ct/message. It was prohibitively expensive to run SMS campaigns. But nowadays? It's effectively free at scale if you go the legit route and practically free if you manage to get someone's account at one of the tons of bulk SMS providers compromised. Apple's iMessage similarly makes bad actors pay a lot, because access to it is tied to a legitimate or stolen Apple product serial.
But bad actors already do this, as there is a monetary incentive to implement adversarial interoperability. There is then an incentive to not scale it up too much, lest that implementation get cut off sooner. For example, I certainly don't think all of the spam ads I see on Faceboot Marketplace are from individual people manually creating accounts and typing them out.
Paywalls can have the opposite of the effect you want. Implemented incautiously, they can fail to disincentivize parties who make profit in excess of the cost, while succeeding at disincentivizing genuine, non-profit-motivated interaction.
Imagine how much less you would use text messages if they still had a per-message cost.
Because some hostile entity might rat fuck a slightly better system, we're destined to use the same current shitty system because something better might have a downside?
Do you understand that this is all literally made up? The rules can change anytime and society can exert its will to make better world rather than letting a dozen people decide how technology will shape humanity (mostly in a negative capacity if you look at the current state of things).
>Because some hostile entity might rat fuck a slightly better system,
And make it a worse system, is what you happened to leave off.
>Do you understand that this is all literally made up
You mean the existing system that evolved from billions and billions of interactions? Explain what is 'made up' about it.
The thing is if you start 'making up' random ass laws that piss people off, they will run screaming back to the billionaires to pwn them with locked down systems. Apple is a great example here. Shit is locked down and people love it.
> Apple is a great example here. Shit is locked down and people love it.
The key thing with Apple is that Stuff Just Works. Not necessarily with non-Apple things (compatibility ranges from almost excellent, e.g. AirPods on Windows/Android, to disastrous: ever tried to transfer files from an Android phone to a Mac?), but as long as you stay in the Apple ecosystem of macOS + Apple TV + iOS + AirPods, the user experience is generally really, really frictionless.
In contrast, with Windows, it's an unholy mess of catastrophic drivers, Windows Update, aggressive pushing of AI and advertising. And external hardware is a hit and miss, with compatibility issues being around everywhere.
And Android, oh dear god. I used to prefer Android over iOS because the hardware was cheaper and I could reasonably root it so I could do actual backups that worked... but ever since Covid, more and more apps break on detecting root and there's still no backup solution, so I bit the bullet and went second-hand Apple. At least I got backups now.
Personally, I admit, I have aged - I'm 34, I don't want to fiddle and mess around with my daily driver constantly, and frankly I don't have time to bother with ads, so that's why I went with Apple for most of my things. When I want to fiddle, I got a fleet of Raspberrys plus a decent homelab. But there, I can choose to fiddle around if I want to.
Being afraid to do things because they might possibly, but never proven, be worse is just the political machinations of enforcing the status quo where our corporate overlords get to dictate how technology shapes our lives.
I'm sorry, but that's deeply undemocratic; today's generation should have a direct say in how new things affect their lives.
Failure to do this might literally condemn our species to extinction, and it took less than 200 years to get here. I'm sorry, but they've proven their failure and it's time to make drastic changes.
Good news is many people agree with this across the electorate, so now you get to decide which people you want shaping society. The previous world order of US imperialism is going to end and I rather have the people decide what to do than those that want to continue running head first into extinction.
This is a confusing comment. Interoperability and bad actors are separate concerns, because you get bad actors in systems of all kinds, not just in interoperable systems. Paywalling a system does not necessarily mitigate bad actors, either.
Breaking up these monopolies would be a good start. We aren't supposed to have those. There used to be something we called "regulations" but they got rid of that part I think. Elections have consequences.
Exactly. The deal of all these platforms is that there is a fuckton of up-front costs. Hard drives. Networks. Peering. Transit. Operators. Payment. Lawyers. SREs. And so on and so forth.
The solution to this used to be that governments provide the platform. You would think this wouldn't be hard to do, since people have now shown that this can work and so it's a guaranteed money maker, or as close as you're going to get.
Yet I can't find a single initiative.
So any such rules would just make all internet platforms disappear, with nothing to replace them.
It seems likely that'd result in even worse suggestions becoming the norm, as people adopt whichever third-party algorithm gives the quickest dopamine rush. It's like suggesting tastier heroin to fix drug addiction.
There's a difference between addictiveness and enjoyment, and definitely between addictiveness and satisfaction.
While the thing that gives you quick dopamine might win in the very short term, you can still step back and recognize when it's not satisfying in the long term and you're not even enjoying it that much.
And people aren't stupid. Junk food exists, yet lots of people choose to eat more wholesome food as the majority of their diet.
The problem with instagram or youtube is that you can't separate the good from the bad.
It's like if every time you went to store Y to buy milk, you would be exposed to highly manipulative marketing trying to get you to buy junk food. You would probably want to go to a different store instead.
What I'm suggesting is the possibilities of different stores, with different philosophies and standards, so that people can choose where they go. Corner stores (where almost everything is junk food) exist, yet people still choose to go to real supermarkets.
Parent poster has some… interesting and popular but entirely false views on neuroscience. Specifically, an extremely outdated view on concepts like the role of dopamine and dopaminergic neuronal populations in human cognition. Rather than an understanding based on science and the idea that incentive salience and valence are modulated by such populations, he is attributing pleasure and enjoyment to them because of a meme.
Certainly not. People don’t want the slop they push, the anxiety provoking, salacious, clickbaity spam that it has devolved into. Anybody that used YouTube before the last few years can tell you the difference is pretty major. This is not content people want, it’s content that maximizes clicks and ad sales.
People don't want to want it. But it's not obvious that merely allowing a choice of recommendation algorithms would allow people to escape the slop. Isn't anyone strong enough to choose a less addictive algorithm necessarily strong enough to not scroll Instagram for hours in the first place?
>Isn't anyone strong enough to choose a less addictive algorithm necessarily strong enough to not scroll Instagram for hours in the first place?
Absolutely not. It's much easier to make a one-time switch than to be continuously resisting temptation. Changing the things in your environment is an important tool to break bad habits. The book "Atomic Habits" talks about this at length.
I mean, the court case is about these platforms being addictive to kids, so if they said "accounts for users under X years have the algo and time caps delegated to their parents' account by default" it'd go a long way to negate what they're being accused of.
They've already built all the tools they need around this at the moment, it's just they give them to advertisers rather than end-users.
Heh, it's funny watching people, like the one above you, say "This thing is addictive because it is a real object, but this digital object cannot be addictive at all". The argument is so illogical you begin to doubt you're talking to a real person.
I never made that claim, and in fact believe the opposite. I simply disagree that heroin is a drop in replacement as a mental model. The differences between the heroin trade and YouTube are meaningful. For example, one is a physically addictive illegal drug that is a commodity exported by certain foreign nations while the other is a digital platform that makes money by ad sales and is a monopoly. Both can be addictive, but they are not the same thing.
Anything that’s a premium paid feature will be irrelevant. Most people don’t subscribe to YouTube premium, even though they know their kids are watching a ton of ads. Adoption has also been incredibly brisk on the ad tiers of the formerly ad-free TV services like Netflix and Hulu.
I realize “less addictive algo” is a different thing to pay for than removing ads - but it’s, if anything, an even harder sell - I think the layperson wouldn’t even acknowledge that they are vulnerable to being psychologically manipulated. They think they spend so much time on these apps because it’s so enjoyable.
From most parents’ point of view, paying a monthly bill for their children to have a less toxic experience on TikTok, or YouTube will be considered an extravagance instead of a responsible safety expense.
Third-party recommendation algorithms would be interesting, but I think they'd only address one layer of the addictive design the verdict is actually about. Autoplay, infinite scroll, notification timing, the variable reward patterns from likes and comments -- those are all independent of which algorithm picks the next video. You could swap in the most wholesome recommendation engine imaginable and a kid is still gonna sit there for hours if the UI is designed around endless content with no natural stopping points.
At the very least, that should certainly be an option that users can select. And when the user selects a feed algo, it should stay fucking set until that same user actively chooses to change it.
> Before Spotify became popular, people would integrate Last.FM into their media players
I still scrobble to Last.fm from Spotify (and other media players). I rarely use it for discovery anymore, but it's occasionally interesting to look at my historical listening trends.
This seems like a clever (but perhaps overly clever) amendment to Section 230 protections for social media.
However, I've always thought that it's pretty bizarre for Section 230 protections to apply when the social media company has extremely sophisticated algorithms that determine how much reach every user-generated piece of content gets. To me there's really no distinction between the "opinion" or "editorial" section of a traditional media publication and the algorithms which determine the reach of a piece of user-generated content on Twitter, YouTube, etc.
Or just stop suggesting content. The landing page is just a matrix of already followed accounts with the text "Start by following some accounts you like..." as a placeholder if it's a new account.
I’m quite bullish on disintermediating the algorithms. AI makes it very easy to plug in your own. We just haven’t figured out the plumbing yet.
I’d be strongly in favor of interoperability laws to pry open the monopolies.
(One dynamic you do need to be careful about especially at first - interoperability also means IG can pull your friend graph from Snapchat, so it can also make it easier for big companies to smother smaller ones that are getting momentum based on their own social graph growth due to their USP. I don’t think this is insurmountable, just something to be careful of when implementing.)
That's just laundering the bad actions through a third party though.
The winning third party algorithm will be the one that gives people the same rush the first party algorithms currently do, because people will use it for the same reasons; they get to see cute AI animals do crazy things forever.
Nah. The main issue of addiction is the lack of clarity. You allow an addiction to pretend like it has a purpose and it can stick around. Reductionism is a great tool against this: reduce your addiction down to its most clear state and it loses all the mystique. Right now, people can pretend like they're into tiktok or youtube or instagram or twitter etc because they wanna engage in the social media landscape. Pull out the algorithm and replace it with a different one and they can't keep that lie up. They have to admit they're into the dopamine itself.
Virtually nobody would choose to pay a subscription for the non-addictive app version, and I'd even say this suggestion is a bit insulting to anyone who isn't high-income.
I will never pay a subscription for the current clickbaity slop. I might if the algorithm were better, closer to YouTube of 10 years ago, when it would suggest lectures, artfully done film shorts, and overall more interesting, high quality content.
10 years ago the most popular 100 videos on YouTube were all pop music videos. Justin Bieber had 3 of the top 10.
The youtube algorithm has been personalized for much more than 10 years and has never prioritized any kind of lectures or artful films over anything else it thinks a viewer will watch. You're asking for them to bring back an era that never existed.
If you're not getting those sorts of recommendations it's because you don't actually watch that kind of content, or you're removing your history.
I’ve watched YouTube daily for nearly 20 years. The majority of the content as well as the algorithm have changed substantially over that period of time. I’m not the only one to notice this btw. There is even a word to describe the phenomenon: “enshittification”. I do clear my watch history, and have never signed into YouTube, frequently resetting the app and watching online in private sessions with adblock. The frequency with which I have to reset the app to prevent the algorithm feeding me terrible undesired content has gone up over time; I now do it once every few weeks. That’s how much I dislike what it pushes on me. I used to get stuff like “Philosophy Overdose” and Sapolsky’s Stanford lecture series, good operas. I now get stuff like “these 5 things are killing you while you sleep!!!” and “mom is shocked to find out her teenage son is raping and eating babies’ severed limbs.” I’m not being hyperbolic; that’s actually what YouTube recommends.
Seriously? You think they should allow random third parties to inject code into their platforms, with all the possible security risks? Regardless of the intent, that is a terrible idea.
I guess this is the only way. I don't think we need a novel approach, and I don't consider this one novel, since we already have government agencies verifying approved processes in other areas, so why not content distribution.
The only solution is to outlaw all recommendation algorithms. Accounts should only have access to a chronological feed they choose to follow. The host can promote whatever they want, but it has to be the same promotions for everybody.
I like recommendation algorithms. If someone on my friends list posted about a major life event a few days ago and I haven't seen it yet then I want that prioritized first, before more recent posts. Chronological feeds should be an option for those who want them but they shouldn't be forced on anyone.
I think a better solution would be to repeal section 230 protection for any kind of personalized or algorithmic feed. The algorithm makes you a publisher, and you should be liable for what you publish.
That would make it very hard, nigh impossible, for a platform like YouTube or TikTok to exist as it does today, and would instead favor people self-curating mechanisms like RSS readers etc.
>and would instead favor people self-curating mechanisms like RSS readers etc.
That isn't what would happen.
What would happen is that only the platforms which can afford legal teams - in other words, the big platforms - would host user-posted content under strict arbitration-only terms, and every other platform (including Hacker News, which uses an algorithmic feed) would simply not. Removing one of the cornerstones of free speech on the web in favor of regulation will only centralize the web more.
And you wouldn't see mass adoption of "self curating mechanisms" because most people aren't like Hacker News people and would find the premise of having to manually curate data feeds from every site they visit to be a tedious waste of their time.
I also think that platforms like Youtube and Tiktok shouldn't be illegal. I don't even think that personalized algorithms should be illegal - it's surprising that one has to point this out on a forum of programmers - but algorithms have no inherent moral dimension and the ability to use an algorithm to find and classify relevant content can be useful. The same algorithm that surfaces extremist content surfaces non-extremist content. The algorithm isn't the problem, rather the content and the policies of these platforms are the problem. And I don't think the solution to either is de facto making math illegal and free speech more difficult.
How is RSS self curating? It's just a way to get a feed from somewhere. And under the maximally external-locus-of-control culture this jury is using, those feeds would themselves be deemed evilly addictive.
There is no solution for this kind of verdict beyond appeal, or changes to the law to rule such suits out, because it's not rooted in any logical or legal principle beyond the idea that people should not be responsible for their own actions (or their children's actions). But there's no limiting factor to that belief. You can't fix it with RSS or federation or making people select who they follow or chronological feeds. Those would just get blamed for "addiction" instead.
Each blog you follow in the RSS model you opted in to. And each post comes from a person, or a publication, who can be held accountable for what they publish.
Ordinary media, like newspapers, books, radio, and TV, have worked this way forever — people publish “channels” and you decide what channels to follow. A channel can be held accountable.
The algorithm model is different. People just publish “content” into the platform, and the platform makes a custom channel for each viewer, inserting content from people you’ve never heard of and didn’t ask to follow. And it optimizes that custom channel for whatever addicts you the most. That’s fundamentally a different beast than opt-in media consumption.
And if that blog is a newspaper or other aggregator? What makes the RSS feed of the CNN front page fine, but not the RSS feed of the YouTube front page?
There's really no difference. Media companies all aggressively optimize for engagement, often to the point of A/B testing headlines.
There’s a huge difference. Everyone sees the same front page on CNN, or HN for that matter. Nobody sees the same page twice on YouTube or TikTok. That’s a fundamental distinction between human curated media (even with A/B testing), versus machine curated media.
People don't all see the same front page on CNN or HN. I just said, media companies all do things like A/B test headlines, they show different content to people based on geolocation, they change what ads people see and they select stories based on what they know will maximize engagement with their specific audiences. The fact that it's partly done by a human editor backed up by dashboards doesn't change what they're doing.
HN also shows different pages to different people. The set of headlines and their ranking is constantly changing, and user settings change whether you see dead/flagged articles or not.
The idea there's some fundamental difference here is people working backwards from the wealth of the operators to some conclusion they'd like to be true, usually one that lets them blame other people for their own decisions. But there's no validity to this.
All investment is kind of speculative: you're betting on the future, but typically for a reason.
A bubble, IMO, is what emerges when lots of people bet on the future purely because they see others betting on the future. People often don't realise they're doing it, like the people building AI SaaS apps. They think they're going to get rich because they think everyone is using the bubble tech.
Most of the apps are rubbish and could be implemented with something other than AI, same as a lot of crypto apps or dotcom websites in the bubble periods.
They look like they're useful in the bubble, because they're getting regular customers (as everyone comes in to try this newfangled AI/Crypto/dotcom tech) but once everyone's tried it, the only people who come back are the ones with the actual use for it, and there's never enough use to support the hype created in bubbles.
As the author said: "Our sense that something is weird is often accurate, but our stories about precisely what the weirdness represents are often way, way off." It might have been better to start off with the general impression you had.
Yeah, it's my plan to implement daily levels, as well as weekly levels.
I was thinking of making weekly levels be very big (like 12x12) where I don't know what the par is myself, and have it be a challenge to see who in the community can figure out a way to get the lowest number of moves.
And daily levels would probably be more similar to the regular levels.
Thanks for replying. Since you're considering bigger levels, maybe three daily levels: easy / medium / hard.
On my games I usually go with five levels [1], in different tabs that can be solved concurrently, from very easy to very hard. People seem to enjoy the progression.