I never understood why Daring Fireball is such a famous blog. The guy behind it seems totally insane to me.
Claiming Steve Jobs was two steps ahead of cancer? The same guy who compared himself to Jesus and Gandhi, the same guy who ate berries and nuts thinking he could flush the cancer out of his body. Always two steps ahead, huh?
> In August 2011, Steve Jobs was sick. For years he’d managed to stay a step, sometimes two, ahead of the pancreatic cancer he’d been battling since 2003, but no more.
I agree that I have no idea why people read this guy... in a genuine "I must be out of the loop" way. It feels like pure romanticizing or fanboying.
I enjoy my Apple products, and I'm sure glad Apple wasn't run by a psycho like Musk and didn't put ads in the OS like Microsoft. But I don't think any of this is heroic. If anybody's a hero, it's probably the open-source guys who do it for no money at all.
Jokes aside, seeing as this person has created their own clean room in a shed, and is making RAM, what exactly is stopping any company from doing this themselves and breaking into the RAM business?
I'd pay less for RAM that wasn't "certified" in some official way, as long as it works.
It's really easy to set up a manufacturing process for basically anything if you can spend 100x per unit compared to the big optimized factories and you don't mind the product being a lot slower.
The clean room isn't the hard part of being competitive. It's using advanced lithography to cram billions of cells into a single chip. If you want to make DDR2 chips on a 90 nanometer process, that is accessible to a whole lot of companies, but nobody will buy the product. And in the micrometer range you can DIY like this guy.
Correct, nobody will buy your 1GB stick of DDR2-speed RAM for the $100 it cost you to produce it.
> And what happens when smaller companies have to repair or scale their infrastructure and can't get affordable RAM?
That situation sucks but bringing up more obsolete fabrication isn't going to help. They can't compete with modern chips even when those modern chips have a 5x price penalty.
OK, sorry, I misread you as saying nm-scale manufacturing was easy but nobody would buy the product; you actually said it's easy to manufacture DDR2 speeds at micrometer scale.
Single digit micrometer is really easy and makes toys and/or microcontrollers. 90nm is sort of easy if you have factory money, and is about what you need for DDR2. It gets a lot more difficult as you go beyond that.
In one of Dwarkesh's interviews, he mentioned that China is trying to replicate the entire stack. Ironically now that they have mastered all pieces of the stack for older nodes, they actually have an advantage in a collapse scenario. The US does not appear to have the ability to do all steps in the stack for any node. They still rely on other western countries that could go offline. China despite being behind does at least have top to bottom capability for older nodes. Combine that with their rock bottom electricity prices and they have a unique card that they can play.
Just imagine if electricity costs were trending towards zero. Instead of e-wasting all those machines, run them till the chips burn out.
He's producing semiconductors with a 1000nm (one micron) feature size. This kind of tech was cutting edge in the mid 80s. You might be able to produce a 32KB memory chip with it.
It would be difficult to break into the RAM business with that sort of product as most of the demand these days is for higher capacities.
I'm not really sure what you mean by "certified"; I don't think JEDEC are handing out stickers. Although I am reminded of the Bunnie Huang article about SD cards, where the Shenzhen vendor would give you the same SD card with the manufacturer logo + holographic authenticity sticker... of your choice.
The real problem with the RAM business is that it was commodified; normally manufacturers make a relatively small margin. No incentive to build a factory for that. These are not normal times because (a) someone has bought all the RAM and (b) someone has blown up a whole load of globally critical infrastructure in the Middle East.
The risk the existing RAM manufacturers are being cautious about is the risk of a return to normal: if you start building a factory now, will you be selling into a RAM glut?
EUV lithography is the state of the art. It makes far denser chips and is quite out of reach for the backyard fellow. Find a documentary on how ASML machines work: they're near the pinnacle of human accomplishment!
> what exactly is stopping any company from doing this themselves and breaking into the RAM business?
nothing, except the terrible yields they would get, and the lack of scale making the entire enterprise unprofitable (the profit per sale is too low, if it's even positive, and you can't price higher because there's cheaper, "better" RAM available from pre-established fabs that do have economies of scale).
You could play the artisanal angle and market it as home-grown, organic RAM. Not sure how much real RAM buyers care, but it might get a few hobbyists into the market.
The angle right now, I think, is pretty obvious: there is a massive shortage that might cause actual incidents.
OpenAI should do their own production. I say that slightly bitterly, because I'm in a health care sector that might be affected: we can't scale up or repair our infrastructure due to their massive pre-orders.
You can make eDRAM using logic processes (which are currently less bottlenecked than RAM, at least for non cutting-edge nodes) but the cost is still prohibitive compared to specialty DRAM processes (even when considering the recent increase in DRAM prices). If you were doing that, you'd want to use it for compute-in-memory instead, which basically pushes NUMA to an extreme of having lots of tiny cores each with direct access to its own local DRAM.
i assume the reason is that this is a very competitive market and you need hundreds of billions in investment to just start producing at a competitive quality and price, with massive uncertainty that you will be able to make that money back
i mean, you'd think if someone is willing to throw $60B (or $10B for an option to buy at a $60B valuation) at Cursor, then other people would give it a shot at starting a SotA semiconductor fab, but apparently the profit expectations are not there
I know it's frustrating, but the media reflects both the most cautious and the most adventurous opinions in archaeology. Saying the Viking Age started in 793 is just the safe archaeological position, while even the Romans built coastal forts along the British east coast to defend against "pirates".
Then the media will turn around and print something absolutely outlandish based on pure hypothesis, just because it attracts clicks.
Yeah, but again we're coming up against safe archaeological assumptions based on findings. And when we're talking Saxons and Frisians, I find it hard not to mention the Angles and the Jutes.
INTPenis is mentioning Angles and Jutes because they were in present-day Denmark (and England). You might ask what the cultural difference is from Vikings, and I'd flounder. Vikings spoke Old Norse, a Germanic language related to whatever the other tribes spoke (um, West Germanic, such as Old Frankish). They believed in gods related to the gods of these other tribes and used similar runes.
If you want to say this is an arbitrary modern set of categories ... I guess the Romans are responsible for the categorization really, by writing down tribe names such as Frisii.
Well, it's fair enough to observe that the Vikings spoke a North Germanic language while the Angles, Saxons, Jutes, Franks, and Frisians spoke a West Germanic one. Other than that, it seems pretty clear that the category "Germanic groups such as the Saxons, Franks, and Frisians" would include Vikings. "Such as" isn't exactly the mark of an exhaustive list.
(And interestingly enough, the cognate word ("wicing") is attested in Old English a long time before it's attested in Old Norse. It means "pirate". It wouldn't be at all surprising if Saxons raiding England referred to themselves that way, just like Danes raiding England did later.)
Confusingly, though, there's the chance we might still not be talking about real cognates. The Old Norse víkingr can be derived from (Old Norse) vík (inlet, cove, fjord) + -ingr ('one belonging to', 'one who frequents'), or possibly even something close to Old Norse vika (sea mile), originally referring to the distance between two shifts of rowers, ultimately from the Proto-Germanic ~wîkan 'to recede' and found in the early Nordic verb ~wikan 'to turn', similar to Old Icelandic víkja 'to move, to turn', with well-attested nautical usages.
The Old English wīc, on the other hand, has an old Germanic etymology referring to 'camps', 'villages' and the like.
God knows there are a lot of inlets and fjords in Scandinavia, which incidentally were also places from where the surplus "víkingr" males surged west, possibly having adapted the term as an ethnonym by then; at least in modern Scandinavian languages cognates like 'viking' (pl. 'vikingar') are definitely associated with the geographic root 'vik' — as are innumerable surnames like Sandvik, Vikman, etc. Then again, those roving Vikings did of course build up "camps" and "settlements" wherever they went, although this perhaps sounds more likely a name someone else would give to them...
As for the difference between the Norse (i.e. North Germanic/Scandinavian) tribes/people and their more southern cousins (Angles, Saxons, Franks etc.) prior to and at the beginning of the Viking era, you might say the former were in fact quite clearly more isolated in terms of geography, language and still-very-much-pagan culture. (And while e.g. Angles and Saxons did invade and settle much of Britain from what is now northern Germany and parts of Denmark, that was already a couple of hundred years earlier, and a lot had happened since.)
Long long time ago buddy, I ditched TV 15+ years ago, can't even remember exactly when.
I only buy large monitors and mount them on my wall.
At first I only watched self-hosted media, but for the last 8 years it's been more and more YouTube. I'm not too happy about it and would like to wean myself off it.
I'm speaking from my own perspective here but scripted media is something I only watch socially, if my partner wants to watch with me. And I end up on my computer trying to type softly next to them.
All scripted media just seems so predictable now, I'm like Stan in that one South Park episode lol.
It also seems manipulative. I can see how a lot of shows just milk the story for more episodes until they can't milk it anymore. It doesn't seem genuine anymore; maybe it never was? Ratings have existed for my whole lifetime, after all.
But the point is that the only newly produced content I watch is just regular people. One example is Antiques Roadshow: it's boring, maybe even "slow" TV, but it's real people. I much prefer watching real people to characters.
Something that really bugs me now is live-action characters; I'd actually prefer cartoon characters. Everything is so unreal and over the top, it might as well be a cartoon.
Laziness has nothing to do with it; Codeberg simply doesn't work.
Most of my friends who use Codeberg are staunch Cloudflare opponents, but Cloudflare is what keeps GitLab alive. Fact of life is that these sites are attacked non-stop and need some sort of DDoS filter.
Codeberg has that Anubis thing now, I guess? But they still have downtime, and the worst thing ever for me as a developer is having the urge to code and not being able to access my remote. That is what murders the impression of a product like Codeberg.
Sorry, just being frank. I want all competitors to large monopolies to succeed, but I also want to be able to do my job/passion.
Maybe I'm too old school, but both GitHub and Codeberg for me are asynchronous "I want to send/share the code somehow", not "my active workspace I require to do work". But reading
> the worst thing ever for me as a developer is having the urge to code and not being able to access my remote.
Makes it seem like GitHub/Codeberg has to be online for you to be able to code. Is that really the case? If so, how does that happen? Do you only edit code directly in the GitHub web UI, or how else does one end up in that situation?
For me it's a soft block rather than a hard block. I use multiple computers, so when I switch to the other one I usually do a git pull, and after every commit I do a push. If that gets interrupted, then I have to resort to things like rsyncing over from the other system, but more than once I've lost work that way. I'm strongly considering just standing up a VM and using "just git", foregoing any UI, but I make use of other features like CI/CD and Releases for distribution, so the VM strategy is still just a bandaid. When the remote is unavailable, it can be very disruptive.
> If that gets interrupted, then I have to resort to things like rsyncing over from the other system
I'm guessing you have SSH access between the two? You could just add it as another remote, via SSH, so you can push/pull directly between the two. This is what I do on my home network to sync configs and other things between various machines and OSes, just do `git remote add other-host git+ssh://user@10.55/~/the-repo-path` or whatever, and you can use it as any remote :)
Bonus tip: you can use local paths as git remote URLs too!
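A minimal sketch of both, with made-up hostnames and paths:

    # add the other machine as a remote over SSH
    git remote add laptop ssh://me@laptop.local/~/projects/myrepo
    git fetch laptop && git merge laptop/main

    # a plain filesystem path works as a remote too, e.g. a mounted backup drive
    git remote add usbdrive /mnt/backup/myrepo.git
    git push usbdrive main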
> but more than once I've lost work that way.
Huh, how? If you didn't push it earlier, you could just push it later? Same goes for pull? I don't understand how you could lose anything tracked in git; was it corruption, or what happened?
Usually one of two things, mostly the latter: I forget to exclude the .git/ directories from the sync, or I have in-progress and nowhere-near-ready-for-commit changes on both hosts, and I forget and sync before I check. These are all PEBKAC and/or workflow problems, but on a typical day I'll be working in or around a half-dozen repos and it's too easy to forget. The normal git workflow protects against that, because uncommitted changes on any given computer can just be rebased easily the next time I'm working there. I've been doing it like this for nearly 20 years and it's never been an issue because remotes were always quite stable/reliable. I really just need to change my workflow for the new reality, but old habits die hard.
If you can rsync from the other system, and likely have an SSH connection between them, why don't you just add it as an additional remote and git pull from it directly?
You cannot git push something that is not committed. The solution is to commit often (and do it over ssh if you forgot on a remote system). It doesn't need to be a presentable commit; that can be cleaned up later. I use `git commit -amwip` all the time.
Sure, you might neglect to add a file to your commit, or commit at all, but that's a problem whether you're pushing to a central public git forge or not.
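A sketch of that checkpoint habit (remote and branch names assumed):

    git commit -am wip      # quick checkpoint, not a presentable commit
    # ...keep working, checkpoint again as needed...
    git commit -am wip
    # later, fold the wip checkpoints into real commits before sharing:
    git rebase -i origin/main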
You'd create a bare git repo (just the contents of .git) on the host with git init --bare, separate from your usual working tree, and set it as a remote for your working trees, to which you can push and pull using ssh or even a path from the same machine.
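Roughly, assuming an SSH-reachable host (names made up):

    # on any machine you can SSH into:
    ssh me@myserver 'git init --bare repos/myproject.git'

    # on each working machine:
    git remote add myserver ssh://me@myserver/~/repos/myproject.git
    git push myserver main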
If you have ssh access to the remote machine to set up a git remote, you can login to the remote machine and commit the changes that you forgot to commit.
Git supports having multiple remotes so you can have your own private git repo on any ssh server for sharing between your own machines and only need to rely on the public host for sharing with others.
For some projects, the issue tracker is a pretty integral part of the documentation. Sure, you can host your own issue tracker somewhere, but that's still shifting a center point somewhere, in a theoretically decentralized system. I've frequently wished the issue tracker was part of the repository. Also -- love them or hate them -- LLMs would probably love that too.
If it's so integral, then you should really make sure it's something you can control and operate longer than a single service provider feels like giving out freebies.
> Makes it seem like GitHub/Codeberg has to be online for you to be able to code, is that really the case?
I can understand that for work with other active contributors, but I agree with you that it is a daft state of affairs for a solo or mostly-solo project.
Though if you have your repo online anywhere, even away from the big places, it will get hit by the scrapers, and you will end up with admin to do because of that, even if it doesn't block your normal workflow because your main remote is not public.
I was shaking my head in disbelief when reading that part too. I mean, git's whole raison d'etre, back when it was introduced, was that you do not need online access to the repo server most of the time.
So those people are using the tool incorrectly, and would have a much better experience if they used it as designed. If everyone was running around using screwdriver handles to pound in nails, that wouldn't make it reasonable to say that any new screwdriver company has to have 5 lb handles.
> git's whole raison d'etre […] was that you do not need online access to the repo server most of the time
Not really. The point of git was to make Linus' job of collating, reviewing, and merging work from a disparate team of teams much less arduous. It just happens that many of the patterns needed for that also mean temporarily disconnected repositories work well.
The whole point of git was to be a replacement for BitKeeper, after the Linux developers got banned from it for "hacking" when Andrew Tridgell connected to the server over telnet and typed "HELP".
That too, though the point of using a distributed source control system was the purpose I mentioned. But even before BitKeeper got in a tizzy about Tridgell's[1] shenanigans there was talk of replacing it, because some properties of it were not ideal for something as large as the kernel with as many active contributors, and there were concerns about using a proprietary product to manage the Linux codebase. Linus was already tinkering with what would become the git we know.
--------
[1] He did a lot more than type "help": he was essentially trying to reverse engineer the product to produce a compatible but more open client that gave access to metadata BitKeeper wanted you to pay to access[2], which was a problem for many contributors.
[2] You didn't get the full version history on the free variants. This was one of the significant concerns making people discuss alternatives, and in some high-profile cases just plain refuse to touch BitKeeper at all.
Philosophically I think it's terrible that Cloudflare has become a middleman in a huge and important swath of the internet. As a user, it largely makes my life much worse. It limits my browser, my ability to protect myself via VPNs, etc., and I am just browsing normally, not attacking anything. Pragmatically though, as a webmaster/admin/whatever you want to call it nowadays, Cloudflare is basically a necessity. I've started putting things behind it because if I don't, 99%+ of my traffic is bots, often bots clearly scanning for vulnerabilities (I run mostly zero PHP sites, yet my traffic logs are filled with requests like /admin.php and /wp-admin.php and all the WordPress things, and constant crawls from clearly-not-search-engines that download everything and use robots.txt as a guide of what to crawl rather than what not to crawl). I haven't been DDoSed yet, but I've had images and PDFs and things downloaded so many times by these things that it costs me money. For some things where I or my family are the only legitimate users, I can just firewall-cmd all IPs except my own, but even then it's maintenance work I don't want to have to do.
I've tried many of the alternatives, and they often fail even on legitimate use cases. I've been blocked more by the alternatives than I have by Cloudflare, especially that one that does a proof of work. It works about 80% of the time, but that 20% is really, really annoying, to the point that when I see that screen pop up I just browse away.
It's really a disheartening state we find ourselves in. I don't think my principles/values have been tested more in the real world than the last few years.
Either I am very lucky or what I am doing has zero value to bots, because I've been running servers online for at least 15 years, and never had any issue that couldn't be solved with basic security hygiene. I use cloudflare as my DNS for some servers, but I always disable any of their paid features. To me they could go out of business tomorrow and my servers would be chugging along just fine.
While I sympathise, I disagree with your stance. Cloudflare handles a large % of the Internet now because people put sites behind it that, as you admitted, don't need to be there.
> and use robots.txt as a guide of what to crawl rather than what not to crawl
Mental note, make sure my robots.txt files contain a few references to slowly returning pages full of almost nonsense that link back to each other endlessly…
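Something like this, where the disallowed paths are pure tarpits that nothing legitimate links to (paths invented):

    User-agent: *
    Disallow: /private-notes/
    Disallow: /staging-backup/

Only a crawler that treats Disallow as a menu will ever find them.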
Not complete nonsense, that would be reasonably easy to detect and ignore. Perhaps repeats of your other content with every 5th word swapped with a random one from elsewhere in the content, every 4th word randomly misspelt, every seventh word reversed, every seventh sentence reversed, add a random sprinkling of famous names (Sir John Major, Arc de Triomphe, Sarah Jane Smith, Viltvodle VI) that make little sense in context, etc. Not enough change that automatic crap detection sees it as an obvious trap, but more than enough that ingesting data from your site into any model has enough detrimental effect to token weightings to at least undo any beneficial effect it might have had otherwise.
And when setting traps like this, make sure the response is slow enough that it won't use much bandwidth, and the serving process is very lightweight, and just in case that isn't enough make sure it aborts and errors out if any load metric goes above a given level.
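A rough sketch of that kind of mangler, as a filter you'd pipe each real page's text through before serving it into the trap (the name list is just the examples above; the slow serving and load cut-off would live in the server config, not here):

    awk '
    function rev(s,  r, i) { for (i = length(s); i > 0; i--) r = r substr(s, i, 1); return r }
    BEGIN {
        srand()
        n = split("Sir John Major;Arc de Triomphe;Sarah Jane Smith;Viltvodle VI", names, ";")
    }
    {
        for (i = 1; i <= NF; i++) {
            if (i % 5 == 0)      $i = names[int(rand() * n) + 1]   # sprinkle famous names
            else if (i % 7 == 0) $i = rev($i)                      # reverse every 7th word
        }
        print
    }' page.txt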
So, basically iocaine (https://iocaine.madhouse-project.org/). It has indeed been very useful to get the AI scraper load on a server I maintain down to a reasonable level, even with its not so strict default configuration.
First time seeing that, but yes, it seems similar in concept. Iocaine can be self-hosted and put in as "middleware" in your reverse proxy with a few lines of config; Cloudflare's seems tied to their services. Cloudflare's also generates garbage with generative models, while iocaine uses much simpler (and surely cruder) methods of generating its garbage. Using LLMs to feed junk to LLMs just makes me cry, so much wasted compute.
Is iocaine actually newer though? Its first commit dates to 2025-01, while the blog post is from 2025-03. I couldn't find info on when Cloudflare started theirs. There's also Nepenthes, which had its first release in 2025-01 too.
Yes, except with the content being based on the real content rather than completely random. My intuition says that this will be more effective, specifically poisoning the model wrt tokens relating to that content rather than just increasing the overall noise level a bit (the damage there being smoothed out over the wider model).
Hot damn, this is a great idea! Reminds me fondly of an old project a friend and I built that looks like an SSH prompt or optionally an unauthed telnet listener, which looks and feels enough like a real shell that we would capture some pretty fascinating sessions of people trying to explore our system or load us with malware. Eventually somebody figured it out and then DDoSed the hell out of our stuff and would not stop hassling us. It was a good reminder that yanking people's chains sometimes really pisses them off and can attract attention and grudges that you really don't want. My friend ended up retiring his domain because he got tired of dealing with the special attention. It did allow us to capture some pretty fascinating data though that actually improved our security while it lasted.
This is one reason why most crawlers ignore robots.txt now. The other reason is that bandwidth/bots are cheap enough now that they don't need web admins to help them optimize their crawlers.
> This is one reason why most crawlers ignore robots.txt now.
I don't buy that for a second. Those not obeying robots.txt were doing so either because they were malicious (they wanted everything and wouldn't be told “please don't plough through these bits”) or stupid (not knowing any better) or both.
Anyone who was obeying robots.txt isn't going to start ignoring it because we've put honeypots there. Why would they think "well, now there are honeypots there, I'm going to go scan those… honeypots, yeah, that's a good idea"?
> The other reason is that bandwidth/bots are cheap enough now that they don't need web admins to help them optimize their crawlers
Web admins are not trying to optimize their crawlers, they are trying to stop their crawlers breaking sites.
> Web admins are not trying to optimize their crawlers, they are trying to stop their crawlers breaking sites.
Actually they often do, and that's one of the original purposes of robots.txt: to get search engines to stop wasting time indexing worthless crap like endless dynamically generated pages. It's only relatively recently that most crawlers have had a hostile relationship with website operators.
OP is about Github. Have you seen the Github uptime monitor? It’s at 90% [1] for the last 90 days. I use both Codeberg and Github a lot and Github has, by far, more problems than Codeberg. Sometimes I notice slowdowns on Codeberg, but that’s it.
To be fair, GitHub has several orders of magnitude more users than Codeberg. I'm also a Codeberg user, but I don't think anyone has seen a Forgejo/Gitea instance working at GitHub's scale yet.
To be fair, GitHub has several orders of magnitude more revenue to support that, including from companies like mine who are paying them good money and get absolutely sub-par service and reliability. I'd be happy for Codeberg to take my money for a better service on the core feature set (git hosting, PRs, issues). I can take my CI/CD elsewhere; we self-host runners anyway.
I don't think OP was making a value judgment or anything. It's just weird to say you won't consider Codeberg because you need reliability when Codeberg's uptime is at 100% and Github's is at 90%.
I think the idea is that a Forgejo/Gitea instance should never have to work at anywhere near the scale of GitHub. Codeberg provides its Forgejo host as a convenience/community thing but it's not being built to be a central service.
My own git server has been hit severely by scrapers. They're scraping everything. Commits, comparisons between commits, api calls for files, everything.
And it's pretty much all of them: ByteDance, OpenAI, AWS, Claude, various ones I couldn't recognize. I basically just had to block all of them to get reasonable performance for a server running on a mini-PC.
I was going to move to Codeberg at some point, but they had downtime when I was considering it; in that case I'd rather deal with it myself.
Anyone actually scraping git repos would probably just do a 'git clone'. Crawling git hosts is extremely expensive, as git servers have always been inadvertent crawler traps.
They generate a URL for every version of every file on every commit and every branch and tag, and if that wasn't enough, n(n+1)/2 git diffs for every file across the commits it has existed on. Even a relatively small git repo with a few hundred files and commits explodes into millions of URLs in the crawl frontier (quick arithmetic below). Server-side, many of these are very expensive to generate as well, so it's really not a fantastic interaction, crawler and git host.
If you run a web crawler, you need to add git host detection to actively avoid walking into them.
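Back-of-the-envelope, with assumed numbers:

    # ~300 files, each revised ~50 times: pairwise-diff pages alone
    awk 'BEGIN { files = 300; revs = 50; print files * revs * (revs + 1) / 2 }'
    # => 382500

That's nearly 400k expensive-to-render pages for one modest repository, before counting per-commit file views, branches, and tags.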
If you are hosting your own git repos you don't really need to provide diffs between arbitrary revisions: just pregenerate diffs between each commit and its parent(s) (sketch below) and tell people to clone the repo if they want anything fancier. Maybe add a few more cases like diffs between releases if you are feeling nice.
And you also don't need to host a version of each file for each commit - those should just be HTTP redirects to a unique URL for that version of the file, e.g. to the commit that last changed it - or just don't provide it at all since most people are only going to be interested in branches anyway and others can clone the repo.
The same goes for many other expensive operations that other websites (including blogs and forums) do that cause the website to go down when a bad crawler finds it. It's almost all self-inflicted pain that doesn't even provide meaningful features to real users compared to a better designed website with a finite number of pages that you can even host statically if you want.
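A sketch of the parent-diff pregeneration (output layout invented; merge and root commits would need extra flags like -m or --root):

    mkdir -p diffs
    git rev-list --all | while read -r c; do
        # diff-tree -p prints each commit's diff against its parent
        git diff-tree -p "$c" > "diffs/$c.patch"
    done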
And yet, it's exactly what all the AI companies are doing. However much it costs them in server costs and goodwill seems to be worth less to them than the engineering time to special-case the major git web UIs.
I doubt they're actually interested in the git repos.
From the shape of the traffic it just looks like a poorly implemented web crawler. By default, a crawler that does not take measures to actively avoid git hosts will get stuck there and spend days trying to exhaust the links of even a single repo.
For me it was specifically crawlers from the large companies; they were at least announcing themselves as such. They did have different patterns: ByteDance was relatively well behaved, but some of the lesser-known ones had weird patterns of looking at comparisons.
I do think they care about repos, and not just the code but also how it evolves over time. I can see some use, if marginal, in those traits. But if they really wanted that, I'd rather they clone my repos; I'd be totally fine with that. I guess they'd have to deal with state, though, and they likely don't want that. Rather just increase my energy bill ;)
How do people, even on Hacker News of all places, conflate git with a code hosting platform all the time? Codeberg, GitHub and the like are for tracking issues, running CI, hosting builds, and much more.
The idea that you shouldn't need a code hosting platform because git is decentralized is so out of place that it is genuinely puzzling how often it pops up.
They said they want to be able to rely on their git remote.
The people responding are saying "nah, an unreliable remote is fine because you can use other remotes" which doesn't address their problem. If Codeberg is unreliable, then why use it at all? Especially for CI, issues, and collab?
The person you’re replying to is saying that you can do everything outside of tracking issues, running CI, ... without a remote. Like all Git operations that are not about collaboration. (but there is always email)
Maybe a hard blocker if you are pair programming or collaborating every minute. Not really if you just have one hour to program solo.
The original intent of the authors is by now irrelevant. The current "point" of git is that it's the most used version control solution, with good tooling support from third parties. Nothing more. And most people prefer to use it in a centralised fashion.
That doesn't change the fact that when people are working on the code, their local copy doesn't disappear after they push their commits; a local copy is still available.
The only exception is when people are using the code editor embedded in the "forge", but that is the exception rather than the norm.
> That doesn't change the fact that when people are working on the code, their local copy doesn't disappear after they push their commits; a local copy is still available.
It doesn't change it, but it doesn't make it very relevant either, because of all the tests that necessarily run remotely and can't be run locally; without that feedback, in many cases development is not possible.
Probably has happened at some point, but personally I have not experienced downtime of Codeberg yet. The other day, however, GitHub was down again. I have not used GitLab for a while, and when I used it, it worked fine, and its CI seems saner than GitHub's to me, but GitLab is not the snappiest user experience either.
Well, Codeberg doesn't have all the features I used on GitLab, but for my own projects I don't really need them either.
> for me as a developer is having the urge to code and not being able to access my remote
I think that's the moment when you choose to self-host whatever git wrapper you prefer. It really isn't that complicated to do, and it even allows for some fun (as in cheap and productive) setups where your forge is on your local network or really close to your region and you (maybe) only mirror or back up to a bigger system like Codeberg/GitHub.
In our case, we also use that as an opportunity to mirror OCI/package repositories for dependencies we use in our apps and during development so not only builds are faster but also we don't abuse free web endpoints with our CI/CD requests.
That is what we have been doing for quite some time now, from what I gather. Every time I see something becoming popular, I'm like "Hmm, I've seen this before", and I really have. They just gave it a fancier name with a fancier logo, did some marketing, and there you go: old is new.
I agree. I switched to Codeberg but switched back after a few months. Funny enough, I found there to be more unreported downtime on Codeberg than GitHub.
I have published 4 open source projects thanks to the productivity boost from AI. No apps though, just things I needed in my line of work.
But I have been absolutely flooded with trailers for new and upcoming indie games. And at least one indie developer has admitted that certain parts of their game were made with the aid of AI.
I've also noticed that sometimes when I think of writing something, I ask AI first if it already exists, and the AI throws up some link, and when I check it, it says "made with <some AI>".
So I'm not sure what the author is trying to say here, but I definitely feel like I am noticing a rise in software output due to AI.
That said, I'm also noticing the burden of maintaining those open source projects. Sometimes it feels like I took on a second job.
I think a lot of software is being produced with AI and going unnoticed, they don't all end up on the front page of HN for harassing developers.