
The problem I feel with federated solutions is basically the 'cold start' problem.

When you want to join a federated network, you have two choices: join a pre-existing server, thereby recreating the exact problem you are escaping (i.e. a giant server that holds you to its whims), BUT you do get a big network to begin with.

Or you start your own server, but then your network is zero, discoverability is zero, your feed is empty, and you have to convince other sites to federate with you / not block you for the crime of being a one-person server / etc.

Am I alone in this feeling or am I just doing federation wrong? (But also this may just be a problem / quirk of Mastodon)


Yeah, that's why Tangled didn't go with ActivityPub (the Mastodon protocol) and went with ATProto instead, which is specifically built to solve that problem: individual servers are all aggregated by centralized AppViews (that anyone can host), which give a singular unified "view" of the network that feels just as cohesive as a centralized one.

Ah ok! Thanks for digging up info that I didn't go looking for myself. That's fantastic news.

ATProto simply ignores the need for decentralizing incentives on a human/community level. What we get is a sort of a "top-down" federation rather than a grass-roots one. Whoever invests in the infra ends up running a domain.

I mean, practically no one is aware of any ATProto provider other than Bluesky, whereas the issue with AP is merely the lack of better implementations, so mastodon.social got the most attention and the hype died off with niche success.


There’s no such thing as “running a domain” or “atproto provider” in atproto. You’re approaching it with a Mastodon/AP mindset and it doesn’t match that.

In atproto, there are two axes.

One is hosting. Bluesky offers hosting, but some people host on their own (it’s just a Docker container with SQLite), some on Cloudflare, some on community-hosted nodes like https://npmx.dev and https://selfhosted.social. From the app’s perspective it all looks exactly the same (unlike in Mastodon, where “hosting” = “choosing a community”), and you can switch hosting at any time.

Another axis is apps. Apps aggregate data from all hosts. Bluesky is an app, Tangled is an app, Leaflet is an app, Wisp is an app, Semble is an app, and so on. They can all aggregate over the same data (which enables cross-app interop), but they don’t have to (e.g. Bluesky doesn’t overlap with Tangled much, except that Tangled can reuse your Bluesky avatar on login). Generally you don’t have people running copies of the same app (as in Mastodon), which is why there aren’t many “Blueskys”. But when someone has an incentive, they can. (E.g. Blacksky is a complete fork, including server and DB, allowing their own moderation decisions over the same data.) Similarly, you can build your own app on top of the distributed Tangled data.

Hope that helps clarify why “atproto provider” as a concept doesn’t make sense. You have hosting, which is as distributed as you want, and you have apps, which anyone can make.
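To make the hosting axis concrete, here's a minimal sketch of what "fetching a record from any host" looks like over atproto's XRPC interface. The endpoint name (`com.atproto.repo.getRecord`) is real; the hostnames, DID, and record keys below are made-up placeholders. The point is that the same call works against any PDS, which is why apps don't care who hosts your repo:

```python
from urllib.parse import urlencode

def get_record_url(pds_host: str, repo: str, collection: str, rkey: str) -> str:
    """Build the XRPC URL for fetching a single record from any PDS.

    The endpoint is identical regardless of which provider hosts the repo,
    so switching hosting does not change how apps read your data.
    """
    params = urlencode({"repo": repo, "collection": collection, "rkey": rkey})
    return f"https://{pds_host}/xrpc/com.atproto.repo.getRecord?{params}"

# A Bluesky post and a Tangled record live in the same personal repo,
# just under different collections (lexicons). DIDs/rkeys are placeholders.
print(get_record_url("bsky.social", "did:plc:example123", "app.bsky.feed.post", "3kabc"))
print(get_record_url("selfhosted.example", "did:plc:example123", "sh.tangled.repo", "3kdef"))
```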


So does the Bluesky app have control over what data it aggregates, and can it decide (without checking with a user) not to aggregate data from a host? I am trying to understand the implications for a user, and a bad scenario where one would disagree with an action of the app.

And if the answer is "yes", then at least when someone "makes their own app", can they easily take the "Bluesky hosts list" and add or remove specific hosts, so that the app relies on the platform except on the point of disagreement?


Yes to both.

An app can choose to ignore/ban some users (or even entire hosting servers if they’re specifically created for network abuse). This is similar to how any web app may choose to ignore POST requests from spammers.

And yes, someone can decide to aggregate data themselves and provide an alternative app over the same data with different moderation policies. In fact, that’s already the case. (Blacksky runs their own application server that mostly piggybacks on Bluesky moderation decisions but overrides some of them. There are also clients that ignore moderation altogether and show you the raw data from hosting.)


So the app is equivalent to an AP instance.

No because apps are decoupled many-to-many with hosting.

Every app can display public data from every other app because the source of truth is outside both apps (in hosting).

The app owner can’t do bad things to your account other than banning you from their particular app. Other apps (even over the same data) independently choose whether to show your data. So app owners only control how your data is presented in their apps, not your actual data.

Whereas in AP, each app’s moderators literally control your entire identity.


Not really. From my understanding, in AP, your account belongs to an instance and your data is then synced to other servers. If the instance goes down, your account is gone.

In ATP, your data is stored in the "Atmosphere", hosted on decentralized "Personal Data Servers" (PDS). The app then simply parses and filters that data. They can apply moderation actions by choosing not to display or read certain posts, but your data still exists and another app could choose to display it. Similarly, if the app goes down, your data is still perfectly intact in the Atmosphere.

It might then seem like the PDS is equivalent to an AP instance, but as mentioned, they are decentralized. Identity is verified through signatures, so if your PDS goes down, you can migrate to a new one as long as you have your signing keys. Therefore, the account belongs to you and not any specific server.


You're interpreting my post with the assumption that I don't know what I'm talking about. You don't need to explain the protocol to me.

Domain here referred to the area of influence or control, like what the provider of a relay effectively has. The fact that other groups can run any element of the infra themselves doesn't change the fact that the drift towards centralization is much greater with ATP than with AP.

ATP has its own uses (quick aggregation), but it doesn't even attempt to solve the fundamental issues of the current social networking ecosystem. AP, on the other hand, offers the foundation for further development in the right direction.


How does a new server discover other servers?

A new hosting provider can preemptively request known relays to crawl it. Or relays (or apps) can lazily discover it when the user hosted there tries to log in for the first time, or their data is linked to by a known user. It’s similar to the relationships between websites and search engines.

Hosting providers don’t need to discover other hosting providers. Data only flows between hosting and apps; not between hosting and hosting or apps and apps.
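The "preemptively request known relays to crawl it" step above maps to a single XRPC call. A sketch using only Python's stdlib: the endpoint name (`com.atproto.sync.requestCrawl`) is real, but the relay and PDS hostnames are placeholders, and the exact contract should be checked against the atproto docs:

```python
import json
import urllib.request

def announce_to_relay(relay: str, my_hostname: str) -> urllib.request.Request:
    """Build the XRPC request a new PDS can send to ask a relay to crawl it.

    After this, the relay starts consuming the PDS's event stream and the
    network "discovers" the new host without any host-to-host federation.
    """
    body = json.dumps({"hostname": my_hostname}).encode()
    return urllib.request.Request(
        f"https://{relay}/xrpc/com.atproto.sync.requestCrawl",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = announce_to_relay("relay.example.network", "pds.my-new-host.example")
print(req.full_url)  # the request would be sent with urllib.request.urlopen(req)
```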


This is more of a Mastodon thing. atproto doesn't really work the same way, where every server is its own semi-isolated zone. This gets into it well: https://atproto.com/articles/atproto-for-distsys-engineers

I think the gain sits in the middle: if the giant server starts to get iffy (moderation, content, policy, technical issues), people can leave it somewhat easily and form or grow another decently sized server which will have enough reputation from day one.

We already have other decently sized GH alternatives such as GitLab, Codeberg, and various OSS forge instances (freedesktop, Fedora, Debian, etc.) which could be federated and become a safe harbor if we were able to maintain project visibility and discoverability.


That's been entirely my own experience, or at least the assumption that's kept me off all of them so far.

But I saw this project a few days ago and thought to myself "Hey, this one could actually work." The difference here is that the target audience has a pretty strong overlap with the part of society comfortable with self hosting services.

I don't need my whole network for this one to be useful, only that subset that's actually most likely to show up.


The CTO @pfrazee had a lovely New Year's Eve post that talks about Atmospheric Computing and specifically raising the cold start problem and addressing how atproto tackles it. https://www.pfrazee.com/blog/atmospheric-computing

Tangled here is a great example. An existing user base of a social network was able to rapidly join and start using a new app, a git forge, to share repos and collaborate. PRs and comments show up like any other record on the network.

As for how the network works: atproto tackles the cold start problem by layering architectural concerns. Each person is their own server ("personal data server" aka PDS). But aggregation layers ("relays") collect all PDS activity they can find and relay it to consumers. Then applications such as Bluesky or Tangled ("appviews") can be built by reading records of interest (of the right "lexicon" type) from the relays. Each person owns their data, relays make all data available, appviews distill out user experiences appropriate to the records they cover.
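The layering described above can be sketched as a toy model. This is not the real implementation (real relays stream commits from signed repos, and the Tangled collection name here is illustrative); it only shows how one firehose feeds many appviews that each filter by lexicon type:

```python
# Toy model of atproto's layers: each PDS holds one user's records,
# the relay merges every PDS stream it knows about, and each appview
# keeps only the lexicon (collection) types it understands.

pds_alice = [
    {"did": "did:plc:alice", "collection": "app.bsky.feed.post", "text": "hello"},
    {"did": "did:plc:alice", "collection": "sh.tangled.repo", "name": "my-lib"},
]
pds_bob = [
    {"did": "did:plc:bob", "collection": "app.bsky.feed.post", "text": "hi!"},
]

def relay(*pds_streams):
    """Aggregate all records from every PDS into one firehose."""
    return [record for stream in pds_streams for record in stream]

def appview(firehose, lexicons):
    """Distill an app-specific view: keep only records of the app's types."""
    return [r for r in firehose if r["collection"] in lexicons]

firehose = relay(pds_alice, pds_bob)
bluesky_view = appview(firehose, {"app.bsky.feed.post"})  # both users' posts
tangled_view = appview(firehose, {"sh.tangled.repo"})     # just Alice's repo
```

Note that neither appview "owns" the records: both read from the same firehose, which is why a new app starts with the whole network's data rather than an empty one.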


I think the appeal here is you can either self-host or even migrate between larger providers.

The server costs for the frontend should be very low, allowing them to operate basically forever, and they are fed by a series of other hosts.


For Mastodon, follow some tags through fedibuzz relay to populate your feed.

Not if you do it over git itself on the existing forge. You basically store everything in git and federate via git forks/mirrors.

Hooks are meant to be 'deterministic' because they are only used for executing scripts at a specific step. So, for instance, you can run your linter on PostEdit, so every time the agent edits a file in your project, the harness runs your linter.

With that said, part of hooks is that you can return a JSON object to the agent that gives it instructions such as stop, continue, etc., but to my understanding those are all very explicit constants rather than loosey-goosey prompts you can pass it.

If this person looked into hooks more, they could write a script that runs their project's tests and tells Claude to stop if the tests exit with a non-zero code.
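A rough sketch of such a hook script, assuming a harness that reads a JSON decision from the hook's stdout. The `decision`/`reason` field names and the event wiring (e.g. PostEdit) are illustrative; check your harness's hook documentation for the exact contract it expects:

```python
import subprocess
import sys

def run_tests_hook(test_cmd):
    """Run the project's tests and build a hook response for the agent.

    If the tests fail, the response asks the agent to stop and includes
    the tail of the test output as the reason, so the agent can see why.
    """
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        tail = (result.stdout + result.stderr)[-2000:]
        return {"decision": "block", "reason": "Tests failed:\n" + tail}
    return {"decision": "continue"}

# Wired up as a PostEdit hook, the harness would read this JSON from stdout.
# (Using a trivially passing command here instead of a real test suite.)
print(run_tests_hook([sys.executable, "-c", "print('1 passed')"]))
```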


At least it isn't Bitbucket.

I think Atlassian and Microsoft are genuinely in a competition to see who can make worse software and still have customers.


At this point maybe even Azure DevOps is an improvement

I haven't used it in about 3 years; but 3 years ago it was not at all

As someone whose employer uses both: nope, not yet

I mean at least the pipelines almost always seem to work...

What issues do you have with bitbucket?

I disagree. Microsoft had been doing just fine at making completely awful and broken products before AI coding was a thing.

Yes, exactly. AI isn't some magic dust that you can sprinkle into your workforce and get more productivity and better results. It is at best a force amplifier for what you already have. If you're making awful and broken products, you will make even more awful and even more broken products at a higher rate than before.

It's not a coincidence that every impressive result done using AI has come from someone with a track record of impressive results before AI. AI isn't magic. It doesn't make you good at stuff you're bad at.


The .NET team is a counter example, aside from the GUI situation.

Microsoft had a very specific niche of making completely awful software that wasn't actually broken - in fact, that was often the infuriating thing.

If it just shat the bed completely, you'd have an easy argument to replace it with something else; instead, it would be technically competent (Hi, Raymond!) but covered in stuff that made it infuriating to use (Hi, Redmond!), especially if you didn't live in it day in and day out.


Homer: There's three ways to do things: the right way, the Microsoft way and the AI slop way.

Bart: Isn't that the Microsoft way?

Homer: Yes, but faster!


Disagree to a degree.

These types of meetings only work if the person who organized it has organizational power over the other participants. In my experience, these types of meetings always get deferred or cancelled if all participants are of the same level or worse, the organizer has less organizational power than the participants.

A progress meeting run by a junior PM with a bunch of senior+ engineers is _guaranteed_ to get cancelled or gutted very quickly.

---

In the vein of other comments though: agree. The necessity of these types of meetings is an organizational stink and the problem lies with priorities and amount of work to be done.

If something really needs to be done, time and resources will be found for it.


Organizational power comes in various forms. If an executive cares about the project and believes the junior PM is capable of running it, then that can be all the "power" that the PM needs to herd more senior engineers. If the engineers really have that much of a problem with it, they can go complain and be promptly told to stop complaining and get back to doing their job, i.e., contributing to the project.

As an aside, whether you're a PM or not, this is a good way to get promoted. On more than one occasion in my career, I've effectively led a project whose participants were on my boss's boss's staff. All I did was identify something that was strategic and important to the organization but that nobody at the next level currently had time to lead. I'd present the idea to my boss, then we'd present together to their boss, and I was in.


There are multiple kinds of meetings.

There are the status updates that it's often good for people to know about even if only in a half-listening and simultaneously replying to emails sense. They're at least aware in a way that they wouldn't if they didn't read the memo.

There are decisions that really just need to be made, even if not critical, so they don't get strung out.

And there are meetings that don't require a decision today but do have a timeline and need at least a plan for a plan.


I am continually tripped out by the fact that when I was 16, I didn't have a 'smartphone' beyond a Windows Mobile 6 phone that had no internet on it.

Now, I have this high-resolution shiny object that can near instantaneously get any information I want along with _streaming HD video to it_ *anywhere*.

15 years ago even feels like the stone age. I can't fathom what it has to feel like for people in their 60s and 70s.


I'm not quite 60, but it's always interesting to me that I feel quite the opposite of this. When I was 16, I didn't have a computer, didn't have a phone, had never used the Internet, but when I think of how life has changed, it's frankly not much. I woke up this morning, scooped my cats' litter boxes, took out some trash, made myself breakfast, ate that, read some news while eating, then lifted weights in my garage, had some work meetings, wrote up some instructions per a customer request from Friday, and am about to go drive to the lake to go do a 9 mile longboard loop.

That's very close to a normal day in 1996. The biggest difference is I read the news on my phone instead of a physical newspaper. The news was not any more interesting or informative because of that. I guess I can also still do the loop reasonably well, but I'm a lot slower than I was in 1996 when I was a cross-country state champion.

My parents are closing in on 70 and I guess I can't speak for them, but I'm at least aware of the daily routines of their lives, too. Walk the dog, do housework, DIY building projects, visit kids and grandkids. Seems much the same, too, with the biggest difference being they're now teaching my sister's sons to play baseball rather than me, but shit, one of her sons even looks exactly the way I looked when I was 7! The more things change, the more they stay the same.


Depends on where you live. My dad is almost 80, grew up in a very rural area, and when he was 16 they'd just gotten indoor plumbing. Up until he was 14, his school was a one-room school house with no heating other than a wood stove. If you were the first kid to arrive for the day, it was your job to get the fire going in winter months.

Housework meant no laundry machine, no dishwasher, and possibly no vacuum cleaner. That means hand washing everything, and beating rugs with sticks and brushes to get the dust off of them.


The early lives of my grandparents (in their 90s) are so fascinatingly different to that of mine. But even by the time my parents were growing up in the 60s, life was not so different in the west. The real differentiators in living standards - energy, household appliances and cooking, modes of transport - were more or less figured out then. By the time my parents were young adults in the early 80s, so many of the aspects of "modern life" had been figured out.

I look at the life my kids live, and it's not so different to my childhood. The toys are similar, their housing is similar. Probably the biggest difference is the availability of content on demand rather than much more fixed TV schedules.

The big difference in the last 30 years hasn't so much been in the kind of middle class life you can live, but the number of people who live that kind of life. In the 90s, 40% of people globally were living in extreme poverty. Now it's under 10%. The kinds of lives the middle class live in China and Vietnam are closer to those of Europeans today, when even 30 years ago most people in those countries were living much closer to the way your dad grew up.

I wonder if AI will result in a step change of living standards? Perhaps along with robotics we'll finally get to do nothing at all at home? I'm not convinced it'll be quick though. Maybe another 30 years.


If your parents are closing in on 70, I would have expected you to be closer to not quite 50 than not quite 60.

I am just over 50 myself and I agree with your points. Technology has changed, but life is largely very similar to what it was in the 90s. At least day to day. Attitudes are way worse now.


General agree... I still do the things (mid-50's) I used to do when I was a teenager with no computer, no phone.

But - now they are easier - I can read books on an e-ink screen and pretty much instantly find what I want to read next. I get my news on a phone. I used to watch TV/movies broadcast or on tape rentals. Now, I have just about everything I could ever want available - without ads... those were such a time-waster.

What has changed is that I have access to MORE information than my local (or school) libraries could ever provide - in a variety of more accessible formats. Whatever tools I need to get "work done", I can find a myriad of free and open-source options.

But - the overall days and household family routines are the same - now, instead of reading a paper book while waiting to pickup my kids (or other family members) "back-in-the-day", I can read my device, or connect with my DIY communities online on my phone - or learn something new. I don't have to schedule life around major broadcast events, I can easily do many tasks while I am "out-and-about".

Friction has been reduced.


Thank you for this insight!

I always wonder about the views of older people. My parents have been very technology-forward my entire life, so it is difficult to gauge how different life is for them compared to when they were growing up.

It's easy to hear "Oh well I only had 640kb of memory and typed programs out of a magazine I got in the mail!" and see it as distinct from having 'unlimited' resources and the internet.

Your insight is good ("The biggest difference is I read the news on my phone instead of a physical newspaper") that life sort of stays the same but the modality changes. People still go to the store like they did in the mid-1800s but now it is by car.

I wonder what our "industrial revolution" will be where the previous generation lived (ie: out in the country on a farm) totally different lives to the current (ie: in the city in a factory). Maybe when space travel and multi-planetary living is normalized?


> It's easy to hear "Oh well I only had 640kb of memory and typed programs out of a magazine I got in the mail!"

Since I was there (young, but there), I want to point out that this crosses three eras which all felt quite different:

    1978: typed programs in from a magazine or loaded from a cassette (16kB, TRS-80)
    1983: loaded programs from a floppy (64kB, Apple ][ and C64 etc)
    1988: loaded programs from a hard disk (640kB, IBM PC and Mac).
Exact years vary, but these eras were only about 5 years each. Nobody had a floppy in 1978, but almost every computer user did by 1983; nobody had a hard drive in 1983, but almost everyone did by 1988.

To some degree this already happened with the move from the industrial city to suburbanization and then re-urbanization. In particular one of the most notable recent developments is that urban waterways are now pretty desirable places to be with parks and recreation; in most industrializing cities the waterfront was actively avoided because the industrial use made it polluted, smelly etc.

The news on the phone is worse, in fact.

Something I read elsewhere was "if someone is using an AI avatar, they were never going to be your customer anyway".

I used to commission avatars every year or two from a specific artist. It wasn't super cheap (hundreds of dollars). At the end of the day though, spending hundreds of dollars, waiting weeks, and then maybe getting 85% of what I wanted doesn't make sense when I could instead spend ~$0, wait 30 seconds, and get 98% of what I want.

In my view, artists should be moving up the 'stack'. If they are a commission artist, they should have customers come to them with their '98% efforts', or only take on commissions that mean too much, are too elaborate for AI, or are otherwise sensitive.

Humans want art. Humans love pretty things. AI will never replace the entire need for artists. I see it as getting rid of the bad commissioners (price sensitive, beggars, etc.) and making it easier for people to express themselves, thereby making it easier for an artist to extract what a commissioner wants.


This is very foreign to my way of thinking.

I commission artists somewhat regularly, and if I had to name the top two reasons, they would be 1) I really like their style, and want a piece in that style 2) I want to support them so they can continue making the art I like.

Meeting my checklist of inclusions is important, but definitely secondary to the reasons above. (And sometimes the deviations are reflections of the artist's particular style and therefore welcome.)


> Something I read elsewhere was "if someone is using an AI avatar, they were never going to be your customer anyway".

Is your point that what you quoted is false?

> I used to commission avatars every year or two from a specific artist. It wasn't super cheap (hundreds of dollars). At the end of the day though, spending hundreds of dollars, waiting weeks, and then maybe getting 85% of what I wanted doesn't make sense when I could instead spend ~$0, wait 30 seconds, and get 98% of what I want.

...because you just gave an anecdote that shows the truth is "if someone is using an AI avatar, they might have been your customer before AI".

> In my view, artists should be moving up the 'stack'. If they are a commission artist, they should be having customers come to them with their '98% efforts' or only taking on commissions that either mean too much, too elaborate for AI, or otherwise sensitive.

That doesn't make sense. That's not "moving up the stack," that's the work drying up and only a small remainder of the most difficult/sensitive things being left. And that might mean being driven out of your profession because there's not enough left for you to feed yourself.


> because you just gave an anecdote that shows the truth is "if someone is using an AI avatar, they might have been your customer before AI".

I stopped commissioning artists for avatars years before that because of "It wasn't super cheap (hundreds of dollars). At the end of the day though, spending hundreds of dollars, waiting weeks, and then maybe getting 85% of what I wanted"

I got tired of waiting weeks only to get honestly a middling result. I stopped buying art and won't go back because the economics don't make sense to me regardless of AI.

> only a small remainder of the most difficult/sensitive things being left

Yep. It's what happens to industries as technologies progress. Horse carriage drivers and elevator operators either found something more specialized or moved out of the industry. If someone is making a living off onesie-twosie low-dollar commissions and can't figure out how to translate that to something else in the industry, they don't have any other choice.

Personally I think a lot of technology progression is long-term positive for humans because it means humans get to do something more fulfilling than rote work. It's dystopian and awful but personally, it's a shove for artists to move onto better art.


> Personally I think a lot of technology progression is long-term positive for humans because it means humans get to do something more fulfilling than rote work. It's dystopian and awful but personally, it's a shove for artists to move onto better art.

Dude, you need to update your memes. It's 2026: technology progression means humans "get" to do more rote work as the fulfilling creative and intellectual aspects are automated away.

Have machines do the creative part of building the system, so you don't have to! Instead you get even larger piles of code reviews instead!


I believe their point is that if you give people an "extract-needle-from-haystack" machine and then tell them they have to manually find where in the haystack the needle was, it defeats the purpose of having the machine.

With that said, a good RAG solution would come with metadata to point to where it was sourced from.


> I believe what their point is is that if you give people a "extract-needle-from-haystack" machine and then tell them they have to manually find where in the haystack the needle was, it defeats the purpose of having the machine.

We've got to be careful to not let the perfect be the enemy of the good.

I'm not an LLM enthusiast, but I think you have to actually compare it against what the alternative would really be. If you give the journalist a haystack but insufficient time to search it properly, they're going to have to take some shortcut. And using an LLM to sort through it and then verifying it actually found a needle is probably better than sampling documents at random or searching for keywords.


I don't want to come off as an AI-maximalist or whatever, but, I mean, at some point, skill issue, right?

You can use Google to find you results reinforcing your belief that the earth is flat too; but we don't condemn Google as a helpful tool during research.

If you trust whatever the LLM spits out unconditionally, that's sorta on you. But they _can_ be helpful when treated as research assistants, not as oracles.


This is a bogus analogy leading to a bogus conclusion.

If something points to the needle in the haystack (saying "this haystack has a needle positioned eighteen centimeters from the top and three left of center"), it's much easier to verify that there is indeed a needle there than it would be to find that needle in the first place.

If an LLM spits out a claim that something happened (citing a certain article), it's less work to read the article and verify the claim than it would be to DISCOVER the article in the first place.

In other words, LLMs can be a time-saving search engine, and the idea that it's just as much work to find+verify information as it is to have the LLM find it and then you verify it is hokum.
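That find-vs-verify asymmetry is easy to demonstrate: once the LLM hands you a claimed quote plus a source, checking it is a cheap substring test, whereas discovering the source was a search problem. A toy sketch (the article text and quotes are made up):

```python
def verify_citation(claim_quote: str, article_text: str) -> bool:
    """Confirm a quoted span the LLM attributed to an article actually
    appears in it, after normalizing whitespace and case.

    This is the cheap "is there really a needle at that spot?" check,
    orders of magnitude less work than finding the article yourself.
    """
    def norm(s: str) -> str:
        return " ".join(s.split()).lower()
    return norm(claim_quote) in norm(article_text)

article = "The council voted 7-2 on Tuesday   to approve the rezoning plan."
print(verify_citation("voted 7-2 on Tuesday to approve", article))  # True
print(verify_citation("voted unanimously to approve", article))     # False
```

A real verifier would also confirm the cited document exists and says what the quote implies in context, but even then the verification step stays far cheaper than the discovery step.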


when you use the extract-needle-from-haystack machine, verify that it actually extracted a needle.

that's much easier than manually extracting the needle yourself


Another interpretation is if you have multiple haystacks, and the machine tells you which haystack likely has a needle in it. You still need to extract the needle yourself.

I think everyone overblows the whole "AI is poisoning AI!" thing. It could be a problem, but the genuine value in Reddit or any other human social media is honestly pretty low by my estimate. It's great for seeing how humans talk, but in terms of 'nutritional' value for truth or answers... I am not sold. If I were choosing what to 'feed' AI, I wouldn't even bother with textual social media (besides GitHub / GitLab / other source control).

There's way more value, if seeking out answers, in following the links to external sources, scraping books, and other sources that aren't "unwashed masses saying whatever they want".


> the genuine value in Reddit or any other human social media is honestly pretty low from my estimates. It's great for seeing how humans talk but in terms of 'nutritional' value for truth or answers...

> ...

> scraping books, and other sources that aren't "unwashed masses saying whatever they want".

The problem is there's a lot of knowledge that only exists as reddit comments, blog posts, or social Q&A.


> genuine value in Reddit or any other human social media

Kind of a steep slope to convince me that Reddit has a higher P/E ratio than Nvidia because of its authentic content… Reddit seems extremely overvalued because it is a network of people programmed to accept whatever they see and engage.

No doubt that there is good information there, discord too.

But Reddit is far more bots and “organic propaganda” than you are thinking.


You can put it in scare quotes all you want, doesn't stop you from sounding like Scrooge McDuck.

> At a very minimum, to repay the +$100b in investment within a reasonable timeframe, what's the minimum figure they have to bank post-tax each month?

I am completely confident that Amazon of all companies is totally fine with not taking a return for a long time.

Amazon didn't book a profit for the first decade of the company. It's completely their modus operandi to burn, burn, burn to get as big as possible.

