chokolad's comments | Hacker News

> This year, I'm betting less social media as being better and in the long-run a new protocol that learns from the mistakes.

Can you list protocol level mistakes made by ATProto?


Permissioned data is probably the most fundamental, and the part I looked most deeply into myself. People want privacy rather than blasting everything out to the internet for anyone to scrape. The public-by-default model forced upon users is a bad choice, and the purported benefits never materialized. Many of the atmo developers hold the belief that they can skip the network effects and grift the data and social graph for their own use.

Here's the User Intent proposal, which is super easy to implement, yet they have been sitting on it ever since: https://github.com/bluesky-social/atproto/discussions/3617 It would have been at least a middle ground toward permissioned data, as would personal private data (bsky prefs, generalized).

After that, money, which I see as less of a protocol thing. A protocol or platform has to enable the people on it to make way more money than itself, at least 10x. (1) Bluesky should have created subscriptions for their service; had they, they wouldn't have needed the private equity. (2) Bluesky did more to block others from making money than to enable it. Graze was in talks with them to let creators using their feed system make money, until Bluesky walked away. (3) Permissioned data would unlock monetization without blockchain.

Permissioned data is being worked on, but the commentary from Bluesky is not promising. (1) Nobody in ATProto has built a permission system (that I'm aware of). (2) Bluesky is proposing a very simplistic system. This will put a burden on app developers and create opposition to the credible-exit philosophy.

Record history / editing. The former should be at the protocol level; the latter is a feature that is highly desired and possible today, but they resist it with fervor.

Bluesky could have put way more funding into the ecosystem, especially in hindsight with the $100M they picked up just after their peak. Now they are struggling and stepping on that ecosystem, re: replacing Graze with their latest "ai" stunt instead of supporting and integrating them.

Compare this to Hytale and what they are doing. Night and day.

The Bluesky team has also made several PR mistakes, upsetting their base; they are really tone-deaf. Hope the waffles are tasty!


The PLC comes up a lot, and I understand the criticism, but it is also good enough for now and on the right trajectory, though the pace could be better; like much of the Bluesky development it has molasses-in-winter vibes. Long-term, multiple identity authorities can exist. Something like Handshake would have been ideal, another great project doomed by poor leadership.

Supporting delete is a good decision in my opinion, and likely a legal requirement. I also like that ATProto struck a balance between decentralization and user experience. Properly federated systems are unlikely to appeal to the masses, re: blockchain.

Two other well-designed parts of ATProto are how the algos and moderation work. Modular, composable, and anyone can participate. This would change with a properly permissioned protocol (Zanzibar + macaroons, imo) and encourage smaller social instead of big social.
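
To make the Zanzibar half concrete, here's a minimal sketch in TypeScript, with hypothetical names and an in-memory store only; real Zanzibar adds userset rewrite rules and consistency tokens, and the macaroons would carry attenuated authorization for checks like these:

    // A relation tuple, e.g. { object: "post:123", relation: "viewer", subject: "user:alice" }.
    type Tuple = { object: string; relation: string; subject: string };

    class TupleStore {
      private tuples: Tuple[] = [];

      write(t: Tuple): void {
        this.tuples.push(t);
      }

      // Does `subject` have `relation` on `object`? Follows one level of
      // userset indirection ("group:eng#member" grants to all members).
      check(object: string, relation: string, subject: string): boolean {
        for (const t of this.tuples) {
          if (t.object !== object || t.relation !== relation) continue;
          if (t.subject === subject) return true;
          const m = t.subject.match(/^(.+)#(.+)$/);
          if (m && this.check(m[1], m[2], subject)) return true;
        }
        return false;
      }
    }

    // alice can view post:123 because she is a member of group:eng.
    const store = new TupleStore();
    store.write({ object: "group:eng", relation: "member", subject: "user:alice" });
    store.write({ object: "post:123", relation: "viewer", subject: "group:eng#member" });
    console.log(store.check("post:123", "viewer", "user:alice")); // true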


> There's everything wrong when "agentic" means that the regular bread-and-butter functionality of the OS becomes unusable

Can you provide an example of agentic functionality in Windows making regular bread-and-butter functionality become unusable?


Copilot in Notepad.

It's not true. Source: me, MSFT for 25 years.

Yes, because you know what all of the 200,000+ employees are doing in every wing and branch of the entire company.

Then again, Microsoft themselves directly dispute your statement:

> Across the landscape of more than 750,000 devices in use at Microsoft, we support Windows, Android, iOS, and macOS devices. Windows devices account for approximately 60 percent of the total employee-device population, while iOS, Android, and macOS account for the rest. Of these devices, approximately 45 percent are personally owned employee devices, including phones and tablets. Our employees are empowered to access Microsoft data and tools using managed devices that enable them to be their most productive.

https://www.microsoft.com/insidetrack/blog/evolving-the-devi...

Not to mention that most app designers use macOS for the design tools, which means that by default there is going to be some bleed between the two systems on design choices alone.


> while iOS, Android, and macOS account for the rest. Of these devices, approximately 45 percent are

Pretty much everyone has an Android or iOS device in their pocket. A lot of those devices are enrolled into Microsoft MDM in order to access email/Teams/etc., and these phones are part of the stats. Dev work in general is done on Windows boxes, unless you are in specific teams that have other requirements. The default is Windows, specifically a Windows laptop.


200,000+ windows devices issued by the company.

200,000+ phones.

Worst case somewhere around 50,000-150,000 tablets.

That leaves ~200,000 unaccounted-for devices, with only macOS left on the table. I think the saturation is higher than you have experienced, although I'll grant that it's entirely possible the areas you worked in were not among them.


Have you worked in those areas with high saturation?

I’ve seen Microsoft employees run public presentations from MacBooks on multiple occasions.

> I’ve seen Microsoft employees run public presentations from MacBooks on multiple occasions.

This is specifically done to show that Microsoft tech, e.g. .NET, is not tied to Windows.


Hey.com's calendar has recently shipped a very nice year view:

https://world.hey.com/michelleharjani/building-hey-calendar-...


Can you clarify: are you implying that the Bluesky team made the protocol hard on purpose, in order to "tell regular people that they are not allowed to participate"?


No, OP is saying that they have over-engineered the protocol, and that this acts as an *effective* barrier to participation, regardless of whether it was intended or not. Bluesky's protocol is focused on Twitter-scale use cases, where every node in the network needs to be able to see and process every event from every other user in order to work properly. This fundamentally limits the people who can run a server to those able to operate at the same scale.
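
To make the scale problem concrete, here's a minimal sketch assuming the public relay at bsky.network and the `ws` npm package; every repo event from every user on the network arrives on this one socket, and real consumers also decode the DAG-CBOR frames (e.g. with @atproto/sync):

    import WebSocket from "ws";

    // Public relay firehose endpoint (assumption: the bsky.network relay).
    const RELAY = "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos";

    let events = 0;
    let bytes = 0;

    const sock = new WebSocket(RELAY);

    // Each message is one repo event from *some* user, somewhere on the
    // network; we just count them to show the volume.
    sock.on("message", (frame: Buffer) => {
      events += 1;
      bytes += frame.length;
    });

    // Report throughput once per second; this is the ongoing burden on
    // anyone who wants to run a full node.
    setInterval(() => {
      console.log(`${events} events, ${(bytes / 1024).toFixed(0)} KiB received`);
    }, 1000);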


Great, so what's the alternative? What's the "properly engineered" protocol?


Email, RSS, blogs, even the Mastodon protocol (it's not ActivityPub) scale better. Anything that only sends data between interested parties, instead of to everyone.
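
For contrast, here's a pull-model sketch with hypothetical feed URLs (using Node 18+'s built-in fetch): each reader only ever transfers the feeds it follows, so load scales with a publisher's audience rather than with the whole network's output:

    // Only the feeds this reader subscribes to; nothing else is ever fetched.
    const subscriptions = [
      "https://example.com/alice/feed.xml", // hypothetical feeds
      "https://example.org/bob/feed.xml",
    ];

    async function poll(): Promise<void> {
      for (const url of subscriptions) {
        const res = await fetch(url); // built-in fetch (Node 18+)
        const xml = await res.text();
        console.log(`${url}: ${xml.length} bytes`);
      }
    }

    // Poll on an interval, like any feed reader.
    setInterval(poll, 60_000);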



> DHH has long past the point where anyone should be caring about his technical opinions. This is a 0 substance post.

Can you elaborate?


What can be stated without evidence can be dismissed without evidence. It is pretty clear to me there is no substance to this post, without knowing anything about the author.

In general, most such claims today are without substance, because they are made without any real metrics, and the metrics we actually need we just don't have. We need to quantify the technical debt of LLM code: how often it has errors relative to human-written code, and how critical and costly those errors are in each case relative to the cost of developer wages. We also need to be clear whether the LLM usage is just boilerplate / webshit or work on legacy codebases involving non-trivial logic and/or context, and whether, e.g., the velocity and usefulness of LLM-generated code decreases as the codebase grows, and so on.

Otherwise, anyone can make vague claims that might even be in earnest, only to have studies then show that productivity was in fact reduced, despite the developer "feeling" faster. Vague claims are useless at this point without concrete measurements and numbers.


This study does a good job of measuring the productivity impact. It found a 1% uplift in dev productivity from using AI.

https://youtu.be/JvosMkuNxF8?si=J9qCjE-RvfU6qoU0


Actually, it didn't.

From the video summary itself:

> We’ll unpack why identical tools deliver ~0% lift in some orgs and 25%+ in others.

At https://youtu.be/JvosMkuNxF8?t=145 he says the median is 10% more productivity, and looking at the chart we can see a 19% increase for the top teams (from July 2025).

The paper this is based on doesn't seem to be available, which is frustrating!


I think you are quoting productivity as measured before checking that the code actually works and correcting it. After re-work, productivity drops to 1%. Timestamp 14:04.


That was from a single company, not across the cohort.


My bad. What was the result when they measured productivity after rework across the entire cohort?


They don't publish it as far as I can see!

In any case, IMHO AI SWE has happened in 3 phases:

Pre-Sonnet 3.7 (Feb 2025): Autocomplete worked.

Sonnet 3.7 to Codex 5.2/Opus 4.5 (Feb 2025-Nov 2025): Agentic coding started working, depending on your problem space, ambition, and the model you chose.

Post Opus 4.5 (Nov 2025): Agentic coding works in most circumstances.

This study was published in July 2025. For most of the study's timeframe, it isn't surprising to me that it was more trouble than it was worth.

But it's different now, so I'm not sure the conclusions are particularly relevant anymore.

As DHH pointed out: AI models are now good enough.


Sorry for the late response!

My guess is they didn't publish it because they only measured it at one company; if they had the data across the cohort, they would have published it.

The general result, that review/re-work can cancel out the productivity gains, is supported by other studies:

AI-generated code is 1.7x buggier than human-generated code: https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-gen...

Individual dev productivity gains are offset by peers having to review the verbose (and buggy) AI code: https://www.faros.ai/blog/ai-software-engineering

On agentic coding being the saviour of productivity, Meta measured a 6-12% productivity boost from programming agents: https://www.youtube.com/watch?v=1OzxYK2-qsI&si=ABTk-2RZM-leT...

"But it's different now" :)


Great example of something that actually has some substance beyond meaningless anecdotes.


The claim was:

> DHH has long past the point where anyone should be caring about his technical opinions.

I asked for evidence; you are replying to something else.


How on earth are 3 links to GitHub Actions price increases relevant here?


Try harder...


stop derailing the discussion...


> I hope this churn in .NET builds is temporary because a lot of people might be looking to go back to something stable especially after the recent supply chain attacks on the Node ecosystem.

Can you elaborate a bit? This article talks about the internal machinery of building .NET releases. What does that have to do with "this churn", whatever that is?


My guess is that if you build with .NET Framework you can just run your builds forever, but if your source code is based on newer .NET you have to update to a new version each year, and deal with all the work of upgrading your entire project. That also means everyone on your team is upgrading their dev environment, and now you have new things in the language and the runtime to deal with, deprecations and all that. Plus, lots of packages don't update as fast when version changes occur, so chances are you will take on more work and use as few dependencies as possible, if any. Instead, if you do need to depend on something, it's best to pick a very big Swiss-Army-knife-like thing.

I think Node is just more flexible, and unless .NET Framework-style forever releases or much longer-term support make a comeback, there's no good trade-off over Node, since you don't even get more stability.


> if your source code is based on newer .NET you have to update to a new version each year

.NET has a refreshingly sane release lifecycle, similar to Node.js:

- There's a new major release every year (in November)

- Even numbers are LTS releases, and get 3 years of support/patches

- Odd numbers get 18 months of support/patches

This means if you target LTS, you have 2 years of support before the next LTS, and a full year of overlap where both are supported. If you upgrade every release, you have at least 6 months of overlap.
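
Concretely, assuming I have the dates right: .NET 8 (LTS) shipped in November 2023 with support through November 2026, and .NET 10 (LTS) shipped in November 2025 with support through November 2028, so an LTS-to-LTS shop gets a full year of overlap to migrate.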

There are very few breaking changes between releases anyway, and they're often in infrastructure stuff (config, startup, project structure) as opposed to actual application code.


Ah, but if you use node.js you get breaking changes every other day from dependencies on dependencies you didn’t even know you had.


The in-the-box libraries for .NET (even if via NuGet) are much more stable by comparison.


> Odd numbers get 18 months of support/patches

They recently fixed the friction with odd-numbered releases by providing 24 months of support.


I think it's important to remember that .NET projects can use code built for older releases, to an almost absurd degree, and as long as you don't go back before the .NET Framework divide, you largely don't need to change anything to move projects to newer frameworks. They largely just work.

The .Net platform is honestly the most stable it has ever been.


Going from Core 1 to 2 then 3 had a lot of rough edges, but since then it's been pretty painless.


Recent experience report: I updated four of my team's five owned microservices to .net 10 over the past two weeks. All were previously on .net 8 or 9. The update was smooth: for the .net 9 services, I only had to update our base container images and the csproj target frameworks. For the .net 8 services, I also had to update the Mvc.Testing reference in their integration tests.

It's hard for me to imagine a version increment being much easier than this.


I'm currently migrating dozens of projects to .NET 10. All of them so far have been basically a one-line change and a recompile.

You should be able to go from .NET 6 to 10 with almost no changes at all.
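
For reference, the one line is typically the `<TargetFramework>` element in the csproj, e.g. changing `<TargetFramework>net8.0</TargetFramework>` to `<TargetFramework>net10.0</TargetFramework>` and recompiling.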


The past three years of dotnet upgrades have been completely painless for me.


Back then, when it was not in the frozen state it is in now, .NET Framework shipped a list of breaking changes with every release. Modern .NET breaking changes are not worth talking about. Keeping up with the state of the art, however, is more interesting... but that is needed to be a solution for today and to stay relevant.


Note how practitioners of .NET praise it and non-practitioners (users of .NET Framework) criticize it.


Or users of other programming tool chains.


AKA people who are willing to be honest about its faults. .NET gets a lot of social commentary about how stable and robust it is, yet for some reason on every project where I've had the displeasure of brushing up against a .NET solution, it's always "we're updating", and the update process magically takes longer than building the actual feature.

Or what is a pretty standard feature in other tech stacks needs some bespoke solution that takes 3 dev cycles to implement... and of course there are going to be bugs.

And it's ALWAYS been this way. For some reason .NET has acolytes who have _always_ viewed .NET as the pinnacle of programming frameworks. .NET, Core, .NET Framework, it doesn't matter.

You always get the same comments. For decades at this point.

Except the experience and outcomes don't match the claims.

Just to preempt the reply: I've been pretty familiar with .NET since the 2000s.


What do you mean? The .Net ecosystem has been generalized chaos for the past 10 years.

A few years ago even most people actively working in .Net development couldn't tell what the hell was going on. It's better now. I distinctly recall when .Net Framework v4.8 had just been released, then a few months later .Net Core 3.0 came out and they announced that .Net Standard 2.1 was going to be the last version of that. Nobody had any idea what anything was.

.Net 5 helped a lot. Even then, MS has been releasing new versions of .Net at a breakneck pace. We're on .Net 10, and .Net Core 1.0 was 9 years ago. There's literally been a major version release every year for almost a decade. This is for a standard software framework! v10 is an LTS version of a software framework with all of 3 years of support. Yeah, it's only supported until 2028, and that's the LTS version.


The only chaos occurred in the transition from .NET Framework to .NET (Core). Upgrading .NET versions is mostly painless now because the breaking changes tend to only affect very specific cases. Should take a few minutes to upgrade for most people.


Except it is a bummer when one happens to have such specific cases.

It never takes a few minutes in big corp: everything has to be validated, the CI/CD pipelines updated, and now, with .NET 10, IT has to clear permission to install VS 2026.


If you can't get permission to update or change your IDE, the company's processes aren't working at all, tbh. Same if CI/CD is in another department that doesn't give a shit.


That is pretty standard in most of the Fortune 500, whose main business is not selling software, and where most development is done via consulting agencies.

In many cases you get assigned virtual computers via Citrix/RDP/VNC, and there is a whole infra team responsible for handling tickets of the various contractors.


Similar story at my prior job. Heck, we still had one package that was only built using 32-bit .Net Framework 1.1. We were only just starting to see out-of-memory errors due to exhausting the 2 GB address space in ~2018.

I love the new features of .Net, but in my experience a lot of software written in .Net has very large code bases with a lot of customer specific modifications that must be supported. Those companies explicitly do not want their software framework moving major supported versions as quickly as .Net does right now, because they can't just say "oh, the new version should work just fine." They'd have to double or triple the team size just to handle all the re-validation.

Once again, I feel like I am begging HN to recognize that not everyone is at a 25-person microservice startup.


I might be missing something, but the combination of "we mustn't break anything" and "we can't test it without 2-3x the team size" sounds like release deadlock until you can test it.

The migrations where I've worked have always been a normal ticket/epic. You plan it into the release, do the migration, do the other planned features, run the system tests, fix everything broken, retest, fix, repeat until OK, release.

Otherwise you're hoping you know exactly how things interact and what can possibly have broken, and I doubt anyone knows that. Everyone has, at some point, broken things seemingly completely unrelated to their changes. Especially in large systems it happens constantly. Probably more than 1% of our merges break the nightly in unexpected places, since no one has the entire system in their head.

Or you're keeping a dead product just barely alive via surgical precision and a lot of prayers that the surgeon remains faultless prior to every release.


On the migrations... read the comments throughout this thread. There are many, and none mention any significant pain points at all, just hypothetical ones from people like you who aren't actually actively using it.

As to the CI/CD pipelines... I just edited my .github/workflows/* to bump the target version, and off to the races... though if you're deploying to bare metal as opposed to containers, it does take a couple of extra steps.

As to the "permission to install..." that's what happens when companies don't trust the employees that already write the software that can make or break the company anyway... Defelopers should have local admin privs, or a jump box (rdp) that does... or at the very least a linux environment you can remote-develop on that, again, has local admin privs.

I'm currently in a locked-down environment, for a govt agency, and it hasn't been an issue. Same for past environments, which include major banking institutions.


Each one is its own anecdote.


You're describing a specific case of working in a big rigid enterprise. It doesn't have anything to do with .NET itself, does it?


Guess where most .NET developers' employers happen to be?


I have no idea about most .NET developers. At my current job (a public software company in US with thousands of employees) it's up to engineers to decide when to upgrade. We upgraded our main monolith app to .NET 10 in the first week.


For me, the customer's IT and their management decide.


I've been using .Net since late 2001 (ASP+), including in govt and banking, and I've rarely had issues getting timely updates for my local development environment; in the past decade it's become more likely that the dev team controls the CI/CD environment and often the deployment server(s)... though I prefer containerized apps over bare-metal deployments.


Some devs get lucky.


What's stopping you from doing it now?


There's not as much incentive to right now, because I don't have an excuse to round prices up, and customers don't have a case for rounding prices down. This discussion is about the possible effects of rounding, not about whether businesses are in control of their prices.


> There's not as much incentive to right now

Yeah, because stores don’t have an incentive to raise prices usually…


The majority of actual Xbox services are working fine; xbox.com itself is busted.

