f_devd's comments | Hacker News

No, it's GC-like. Up to a 4x slowdown, IIRC.


Tools that help port C to Rust: 3C (Checked C), c2rust, Crown (ownership analysis), RustMap, c2saferrust (LLM-based), Laertes.
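
For example, c2rust works off a compilation database; a rough sketch (the `bear` step is just one way to produce it, and exact flags vary by version):

    bear -- make                            # record compiler invocations into compile_commands.json
    c2rust transpile compile_commands.json  # emits unsafe-but-equivalent .rs files next to the C sources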


Where do you detect malice? The claims are quite accurate.


Accurate? Let's take the wifi one (other users have already commented on the rest). Open a wifi access point with the name of the restaurant, intercept the DNS requests, and serve your filtered stuff.
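
As a sketch of how little the DNS part takes: a captive-portal-style dnsmasq config (interface and addresses hypothetical) that answers every lookup with the attacker's own box:

    # /etc/dnsmasq.conf on the rogue access point
    interface=wlan0
    dhcp-range=10.0.0.10,10.0.0.250,12h
    address=/#/10.0.0.1    # resolve every name to ourselves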

PS: If the text is real and not trolling, the key phrase in it is 'rarely happen', which we could then apply to car seatbelts as well.


Then what? The user presumably sees TLS certificate warnings since you don't have valid certificates. HSTS would prevent downgrades to plain HTTP and is pretty common on sensitive websites.
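
(For reference, HSTS is a single response header; once a browser has seen it over a valid HTTPS connection, it refuses plain-HTTP downgrades for max-age seconds:)

    Strict-Transport-Security: max-age=31536000; includeSubDomains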

Isn't the better advice to avoid clicking through certificate warnings? That applies both on and off open wifi networks.

There is a privacy concern, as DNS queries would leak. Enabling strict DoH helps (which is not the default browser setting).
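
In Firefox, for instance, strict DoH comes down to the TRR prefs below; mode 3 means DoH only, with no fallback to the network's resolver (the resolver URL is whichever provider you choose):

    user_pref("network.trr.mode", 3);
    user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");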


I am afraid that it is not only about privacy (which they recommend ignoring); there are many vectors to choose from: CA vectors, let's say TrustCor (2022), e-Tugra (2023), Entrust (2024); packet injection vectors; or the 'Click here' or 'use your login first' vectors you mentioned; plus bugs and misconfigurations.

These ones are known. Therefore I just cannot believe that those who wrote the open letter did not even think about such significant events from the past year (I stress: the past year), or about zero-days.

We are talking about people connecting to an unknown unsupervised network, where we also do not know what new vulnerabilities will be published in the mainstream, and the authors of the open letter know it, because they are hiding behind the excuse of "rarely".


> like CA vectors

This gets complicated because you're not safe on your home or corporate network either when CAs are breached. The incident everyone talks about, DigiNotar (2011), had stolen CA keys issuing certificates that intercepted traffic across several ISPs. If that's the threat you're looking to handle, "avoid public wifi" isn't the right answer. Perhaps you're doing certificate pinning, application level signing, closed networks, etc.

> Entrust (2024)

I recently wrote a blog post[1] about CA incidents, so I notice this one isn't like the others. Entrust's PKI business was not impacted by the hack and Entrust remains a trusted CA.

> Click here or use your login

Password manager autofill is the solution there, both on public wifi and on a corporate network. Perhaps an ad blocker as well.

> people connecting to an unknown unsupervised network

Aren't most people's home networks "unsupervised"?

[1] https://alexsci.com/blog/ca-trust/


Why do you talk about home networks being "unsupervised" when we are talking about public networks, access points created to hunt people?

Do you notice that your proposed solutions are all trying to fix a problem? The open letter does not propose solutions; it merely denies the problem exists.

We need to be sincere with people: those "incidents" have happened for a long time and, unfortunately, will keep happening (given the history). Bad actors keep hunting; yesterday it was the CAs, and tomorrow? So if one connects to an open wifi one may fall victim to a trap, probably not at home but in an airport or other crowded places with long waits, and even if you do not browse, some other app in the background will be trying to.

It took many years to make people even slightly aware, and now, if the text is real, they pretend to undo it. But to be sincere I really do not mind much; I just perceive that open letter as malicious.


CA compromise feels like an exotic attack, beyond what "everyday people and small businesses" should worry about. There's no solution to CA compromise offered because the intended audience is not getting hacked in that way. If your concern is that high-risk individuals need different advice, I agree, but the letter also makes clear that they are not the focus.

Are there specific, modern examples of CA compromise being used to target low-risk individuals? Is that a common attack vector for low-risk individuals and small businesses?


And how exactly do you plan to forge the SSL certificates to deliver your filtered contents?


> intercept the DNS requests and serve your filtered stuff.

How do you get from a malicious DNS response to a browser-validated TLS cert for the requested host?


what filtered stuff?

you mean partial web pages?

most browsers use DNS over HTTPS


> I have no idea why I should be against using LLM

It highly depends on your own perspective and goals, but one of the arguments I agree with is that habitually using it will effectively prevent you from building any skill or insight into the code you've produced. That in turn leads to unintended consequences as implementation details become opaque and layers of abstraction build up. It's like hyper-accelerating tech debt for an immediate result; if it's a simple project with no security requirements, there's little reason not to use the tool.


I never started for similar reasons


I have the same with journals, but the video archiving has actually come up a few times, still fairly rare though. I think the difference is that you control the journal (and so rarely feel like you need its content) while the videos you're archiving are by default outside of your control and can be more easily lost.


A more modern approach to doing the same is to use polymerized quantum dots (I believe they emit wide-spectrum white when a voltage is applied), and to pass that through a quantum dot film to get any specific wavelength.


I do not think this is the case; there has been some research into brainrot videos for children[0], and it doesn't seem to trend positively. I would argue anything 'constructed' enough will not fall as far on the brainrot spectrum.

[0]: https://www.forbes.com/sites/traversmark/2024/05/17/why-kids...


Yeah, I don't think surrealism or constructed is good in the early data mix, but as part of mid or post-training seems generally reasonable. But also, this is one of those cases where anthropomorphizing the model probably doesn't work, since a major negative effect of Cocomelon is kids only wanting to watch Cocomelon, while for large model training, it doesn't have much choice in the training data distribution.


I would agree that a careful and very small amount of the above brainrot in post-training could improve certain metrics, if the main dataset didn't contain any. But given how much data current LLMs consume, and how much brainrot is being produced and fed back into the cycle, I doubt it will be missed.


Depending on what you're trying to teach, I would think something like these would be nicer to read (but with minimal dependencies): https://github.com/jackkolb/TinyRSA or https://github.com/i404788/tiny-rsa
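
In the same teaching spirit, the textbook algorithm fits in a few lines of shell once the numbers are small enough for 64-bit arithmetic (a toy sketch with the classic p=61, q=53; real RSA needs bignums and padding):

    # square-and-multiply modular exponentiation, small numbers only
    modexp() {
      local b=$1 e=$2 m=$3 r=1
      b=$((b % m))
      while (( e > 0 )); do
        (( e & 1 )) && r=$(( r * b % m ))
        e=$(( e >> 1 )); b=$(( b * b % m ))
      done
      echo "$r"
    }
    n=3233; e=17; d=2753      # n = 61*53, d = e^-1 mod 3120
    c=$(modexp 42 "$e" "$n")  # encrypt the message m = 42
    modexp "$c" "$d" "$n"     # decrypt: prints 42 again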


The point for me is a naive BIGNUM library that somebody who has only had high school level math can easily understand.


FYI, XFS is not redundant; also, RAID usually refers to software RAID these days.

I like btrfs for this purpose since it's extremely easy to set up over the CLI, but any of the other options mentioned will work.
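
e.g. a mirrored two-disk pool, data and metadata both raid1, takes one command plus a mount (device names hypothetical):

    mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
    mount /dev/sda /mnt/pool    # mounting either member brings up the whole set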


btrfs RAID is quite infamous for eating your data. Has it been fixed recently?


To be fair, your statement could be edited as follows to increase its accuracy:

> btrfs is quite infamous for eating your data.

This is the reason for the slogan on the bcachefs website:

"The COW filesystem for Linux that won't eat your data".

https://bcachefs.org/

After over a decade of in-kernel development, Btrfs still can neither give an accurate answer to `df -h` nor repair a damaged volume.

Because it can't tell a program how much space is free, it's trivially easy to fill a volume. In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.

IMHO this is entirely unacceptable in an allegedly enterprise-ready filesystem.

The fact that its RAID is even more unstable merely seals the deal.


> Btrfs still can neither give an accurate answer to `df -h` nor repair a damaged volume.

> In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.

While I get the frustration, I think you could probably have resolved both by reading the manual. Btrfs separates metadata and regular data, meaning if you create a lot of small files your filesystem may be 'full' (out of metadata space) while still having data space available; `btrfs fi df -h <path>` gives you the breakdown. Since everything is journaled and CoW, it will disallow most actions to prevent actual damage. If you run into this, you can recover by adding an additional disk for metadata (it can just be a loopback image), rebalancing, and then taking steps to resolve the root cause, finally removing the additional disk.

May seem daunting but it's actually only about 6 commands.
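
Roughly, as a sketch (device and mountpoint names hypothetical; check `btrfs fi df` before and after):

    truncate -s 4G /tmp/spare.img         # scratch file to act as a temporary disk
    losetup --find --show /tmp/spare.img  # prints the allocated device, e.g. /dev/loop0
    btrfs device add /dev/loop0 /mnt      # give the full filesystem breathing room
    btrfs balance start -m /mnt           # rebalance metadata into the new space
    # ...delete data/snapshots here to fix the root cause...
    btrfs device remove /dev/loop0 /mnt   # drain and drop the temporary device
    losetup -d /dev/loop0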


Hi. My screen name is my real name, and my experience with Btrfs stems from the fact that I worked for SUSE for 4 years in the technical documentation department.

What that means is I wrote the manual.

Now, disclaimer, not that manual: I did not work on filesystems or Btrfs, not at all. (I worked on SUSE's now-axed-because-of-Rancher container distro CaaSP, and on SLE's support for persistent memory, and lots of other stuff that I've now forgotten because it was 4 whole years and it was very nearly 4 years ago.)

I am however one of the many people who have contributed to SUSE's excellent documentation, and while I didn't write the stuff about filesystems, it is an error to assume that I don't know anything about this. I really do. I had meetings with senior SUSE people where I attempted to discuss the critical weaknesses of Btrfs, and my points were pooh-poohed.

Some of them still stalk me on social media and regularly attack me, my skills, my knowledge, and my reputation. I block them where I can. Part of the price of being online and using one's real name. I get big famous people shouting that I am wrong sometimes. It happens. Rare indeed is the person who can refute me and falsify my claims. (Hell, rare enough is the person who knows the difference between "rebut" and "refute".)

So, no, while I accept that there may be workarounds that a smart human may be able to do, I strongly suspect that these things are not accessible to software, to tools such as Zypper and Snapper.

In my repeated direct personal experience, using openSUSE Leap and openSUSE Tumbleweed, routine software upgrades can fill up the root filesystem. I presume this is because the packaging tools can't get accurate values for free space, probably because Btrfs can't accurately account for space used or about to be used by snapshots, and a corrupt Btrfs root filesystem can't be turned back into a valid consistent one using the automated tools provided.

Which is why both SUSE's and Btrfs's own docs say "do not use the repair tools unless you are instructed to by an expert."


Hey. That sounds like an awful experience. Btrfs has some rough edges, especially since a lot of maintenance tasks are "manual", and you are right to try and address that. And it's annoying that it becomes personal for some people with too much time.

From my perspective, my experience with btrfs has been flawless across 11 machines and at least 3 major releases on each, without any maintenance, but it could just be that I'm not hitting the worst case (I only use snapshots on 2 machines, raid on 3). And I've only used btrfs fairly recently (~4 years now). I've had to recover one drive of a friend using the method I outlined before, as he filled the entire drive with media. For me the trade-off of a few rough edges for more functionality and flexibility than other filesystems is worth it.

For your update issue, I think you're mostly correct: the package manager likely assumes the filesystem is not snapshotted (i.e. that overwrites and deletions will reclaim disk space), while btrfs with snapshots/CoW will retain the entire size of written files unless they're in the same snapshot.


Thanks for that response.

It's a balance. All of life is a balance.

I worked at Red Hat very briefly, and for SUSE for longer than ever before. Both were good workplaces with a good atmosphere: RH is one of the friendliest places ever, and I'm still friends with former colleagues from over a decade ago.

OTOH, installing Fedora 14 was like having a bucket of cold water to the face. I used and reviewed Red Hat Linux in the 1990s and it was a massive PITA. It had no automatic dependency resolution, so complex software installation (e.g. going from KDE 1.x to KDE 2.x) was a huge task involving manually installing hundreds of dependencies.

RH would not bundle KDE (because Qt was not 100% FOSS) -- which is also why Mandrake was founded -- and so the GUIs on RHL were poor.

I switched to SUSE. Good package management, good GUIs, good system-management tools. (YaST was way better than RH's inadequate `linuxconf`.)

RH "fixed" this by... removing Linuxconf.

Trying Fedora a decade later and it was just as bad. All the external bits improved because upstream improved. GNOME was still a mess. KDE had got much more bloated. Xfce was better than ever.

But the RH in-house bits, while having nice visual design, were functionally terrible. The installer was an embarrassment.

Over a decade of work since I reviewed RHL 9 and it was worse than ever.

A few years later, go to work at SUSE, and hey, openSUSE was lovely. All the good bits still there and improved. Yeah, a bit bigger and slower and clunkier than Ubuntu.

But there's always a downside.

An older team so less party atmosphere. Fewer "team building" sessions in the pub.

And while I was away, SUSE switched from ReiserFS to Btrfs, and as usual, SUSE of old being fond of experimental bleeding-edge filesystems, it's half-implemented and doesn't work right.

Snapper doesn't prune snapshots thoroughly enough. It fills your disk and because of unimplemented or non-working features it can't tell when this will happen.
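
(For what it's worth, the pruning knobs live in /etc/snapper/configs/root; a sketch of tightening them, values hypothetical:)

    NUMBER_CLEANUP="yes"   # prune numbered pre/post snapshots
    NUMBER_LIMIT="10"      # keep at most ten of them
    TIMELINE_CREATE="no"   # stop hourly timeline snapshots entirely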

Official SUSE answer: give it lots of space. Here, our FS falls over unrepairably when full, so give it all your space so it won't fill up! And you can't repair it so take lots of backups!

Every distro has downsides. Every filesystem has downsides but they are much less obvious.

Btrfs made my openSUSE boxes collapse and crash 2-3 times a year for 4 years. That is intolerable. I put up with that kind of crashy junk in 1997 or so but not 20 years later.


No. RAID 5/6 is still fundamentally broken and probably won't get fixed


This is incorrect, quoting Linux 6.7 release (Jan 2024):

"This release introduces the [Btrfs] RAID stripe tree, a new tree for logical file extent mapping where the physical mapping may not match on multiple devices. This is now used in zoned mode to implement RAID0/RAID1* profiles, but can be used in non-zoned mode as well. The support for RAID56 is in development and will eventually fix the problems with the current implementation."

I've not kept up with more recent releases, but there has been progress on the issue.


Fixing btrfs RAID6 is becoming the Duke Nukem Forever of file systems


I believe RAID5/6 is still considered experimental (although the main issues were worked out in early 2024); I've seen reports of large arrays being stable since then. It's still recommended to run metadata in raid1/raid1c3.

RAID0/1/10 has been stable for a while.
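
e.g. a sketch (devices hypothetical): parity-striped data with three-way-mirrored metadata:

    mkfs.btrfs -d raid6 -m raid1c3 /dev/sd[b-e]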


I realize this may be satire (Poe's law and all), but I disagree that 'hamburger' should get protected status in anything except the exact quotation without a clear prefix/suffix. "Vegetarian Hamburger" (in near-equal font size) should be fine; "Veggie-burger" shouldn't even be up for debate, imo.

If fine print is confusing consumers, maybe we should improve our labeling standards rather than protect a food category not in need of protection.

