Hacker News | turminal's comments

This sounds like a great way to lose data when the machine dies unexpectedly.


Linux should replicate Microsoft's feature where they back up your "full disk encryption" keys to your cloud account, completely unencrypted, and share them with the cops.


They really should (no joke). That's how recovery works when you manage lots of devices. And I wouldn't be surprised if they can do that with Linux already via Intune.

Full disk encryption doesn't mean your encryption key never leaves the device. As a matter of fact, there is no point in FDE if the key is readily accessible pre-boot on the device. And no mature key management system relies on users remembering credentials as the be-all and end-all. Even login credentials have recovery mechanisms. With FDE, that is the recovery mechanism.

It helps with locking out disks after a device is lost or stolen. It also helps when the hardware is fried and you have important data that needs recovery. Now imagine you have 100k devices to manage that way. Are you going to rely on a revolving door of 100k+ employees to manage that credential? And I'm sure it's stored encrypted in their DB, but eventually the unencrypted credential is needed. Block ciphers ultimately need the plaintext secret provided to them to function; regardless of what complex systems you layer on top, the ciphers need the same deterministic secret.

Ultimately this isn't any worse than being able to go to their website and have a recovery link sent to your email, except instead of the whole send email part, you have to be an authorized admin or owner in their portal, and you just get it from there. Pre-boot, there is no networking or internet, even things like correct time information can't be guaranteed, for more complex systems.


LUKS supports multiple decryption methods so you could for example add one with a really long string or a yubikey as a backup. Most folks replying here aren't encrypting anything at all.
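As a command sketch of the multiple-key-slot idea (hedged: `/dev/sda3` is a placeholder for your actual LUKS partition, and these need root):

```shell
# Add a second key slot holding a backup passphrase (e.g. a long random
# string you print and put in a safe). You'll be prompted for an existing
# passphrase first, then the new one.
cryptsetup luksAddKey /dev/sda3

# Inspect the header to confirm the extra key slot is populated.
cryptsetup luksDump /dev/sda3
```

Any one key slot can unlock the volume, so losing the everyday passphrase (or a hardware token) doesn't lose the data.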


You can print recovery codes. Just chuck them in your safe.

Cryptography is only safe against someone who doesn't come and beat the password out of you if they want it. In my case, only my laptop is encrypted so if I lose it when I'm out it's useless.


For lots of software projects, a release tarball is not just a gzipped repo checked out at a specific commit. So this would only work for some packages.


A simple version of this might be a repo with a single file of code in a language that needs compilation, versus a tarball with one compiled binary.

Just having a deterministic binary can be non-trivial, let alone a way to confirm "this output came from that source" without recompiling everything again from scratch.


For most well designed projects, a source tarball can be generated cleanly from the source tree. Sure, the canonical build process goes (source tarball) -> artifact, but there’s an alternative build process (source tree) -> artifact that uses the source tarball as an intermediate.

In Python, there is a somewhat clearly defined source tarball. uv build will happily build the source tarball and the wheel from the source tree, and uv build --from <appropriate parameter here> will build the wheel from the source tarball.

And I think it’s disappointing that one uploads source tarballs and wheels to PyPI instead of uploading an attested source tree and having PyPI do the build, at least in simple cases.

In traditional C projects, there’s often some script in the source tree that turns it into the source tarball tree (autogen.sh is pretty common). There is no fundamental reason that a package repository like Debian or Fedora’s couldn’t build from the source tree and even use properly pinned versions of autotools, etc. And it’s really disappointing that the closest widely used thing to a proper C/C++ hermetic build system is Dockerfile, and Dockerfile gets approximately none of the details right. Maybe Nix could do better? C and C++ really need something like Cargo.


The hacker in me is very excited by the prospect of pypi executing code from my packages in the system that builds everyone's wheels.


Launchpad does this for everything, as does sbuild/buildd in debian land. They generally make it work by both: running the build system in a neutered VM (network access generally not permitted during builds, or limited to only a debian/ubuntu/PPA package mirror), and going to some degree of invasive process/patching to make build systems work without just-in-time network access.

SUSE and Fedora both do something similar I believe, but I'm not really familiar with the implementation details of those two systems.


I’m only familiar with the Fedora system. The build is hermetic, but the source input comes from fedpkg new-sources, which runs on the client used by the package developer.


This seems no worse than GitHub Actions executing whatever random code people upload.

It’s not so hard to do a pretty good job, and you can have layers of security. Start with a throwaway VM, which highly competent vendors like AWS will sell you at a somewhat reasonable price. Run as a locked-down, unprivileged user inside the VM. Then use a tool like gVisor.

Also… most pure Python packages can, in theory, be built without executing any code. The artifacts just have some files globbed up as configured in pyproject.toml. Unfortunately, the spec defines the process in terms of installing a build backend and then running it, but one could pin a couple of trustworthy build backend versions and constrain them to configurations where they literally just copy things. I think uv-build might be in this category. At the very least I haven’t found any evidence that current uv-build versions can do anything nontrivial unless generation of .pyc files is enabled.
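A pyproject.toml for this kind of "just copy files" build is tiny. A hedged sketch (the package name and version pins are illustrative, not a recommendation):

```toml
# Hypothetical minimal pyproject.toml using the uv build backend,
# which for pure-Python packages essentially globs files into the
# sdist and wheel as configured here.
[build-system]
requires = ["uv_build>=0.7,<0.8"]
build-backend = "uv_build"

[project]
name = "example-pkg"
version = "0.1.0"
```

With a pinned backend like this, a repository could in principle whitelist the backend version and reject any configuration that would run arbitrary code at build time.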


If it isn't at least a gzip of a subset of the files of a specific commit of a specific repo, someone's definition of "source" would appear to need work.


To get a specific commit from a repo you usually need to clone, which will involve a much bigger download than just downloading your tar file.


Shallow clones are a thing. And it’s fairly straightforward to create a tarball that includes enough hashes to verify the hash chain all the way to the commit hash. (In fact, I once kludged that up several years ago, and maybe I should dust it off. The tarball extracted just like a regular tarball but had all the git objects needed hiding inside in a way that tar would ignore.)
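The hash chain works because every git object id is just a SHA-1 over a typed header plus the object body, so a tarball carrying the raw objects lets anyone re-derive the commit id. A minimal sketch of the hashing rule:

```python
import hashlib

def git_object_hash(kind: str, payload: bytes) -> str:
    """Compute a git object id the way git does: SHA-1 over a
    "<type> <size>\\0" header followed by the object body."""
    header = f"{kind} {len(payload)}\0".encode()
    return hashlib.sha1(header + payload).hexdigest()

# A blob's id depends only on its contents; tree objects then hash the
# blob ids, and the commit hashes the tree id, forming the chain.
print(git_object_hash("blob", b"hello\n"))
# → ce013625030ba8dba906f756967f9e9ca394464a  (same as `git hash-object`)
```

Given the blobs (the tarball's files), the tree objects, and the commit object, verifying the chain up to the commit hash needs no network access at all.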


I don't actually see why you'd need to verify the hash chain anyway. The point of a source tarball, as I understand it, is to be sure of what source you're building, and to be able to audit that source. The development path would seem to be the developer's concern, not the maintainer's.


> The point of a source tarball, as I understand it, is to be sure of what source you're building

Perhaps, in the rather narrow sense that you can download a Fedora source tarball and look inside yourself.

My claim is that upstream developers produce actual official outputs: git commits and sometimes release tarballs. (But note that release tarballs on GitHub are often a mess and not really desired by the developer.). And I further think that verification that a system like Fedora or Debian or PyPI is building from correct sources should involve byte-for-byte comparison of the source tree and that, at least in the common case, there should be no opportunity for a user of one of these systems to upload sources that do not match the claimed upstream sources.

The sadly common workflow where a packager clones a source tree, runs some scripts, and uploads the result as a “source tarball” is, IMO, wrong.


You know git allows history rewrite right?


of the head, or of any commit?


I’m not sure why this would make a difference. The only thing special about the head is that there is a little file (that is not, itself, versioned) saying that a particular commit is the head.


In other words, for the user, it's not a step forward. It doesn't matter if the spec is perfect.


But most software that would need to care about that already needs to care about timezones, and those already need to be regularly updated, sometimes with not much more than a month's notice.


I will never forgive Egypt for breaking my shit with a 3 day notice (what was it like 10 years ago?).

Thankfully for me it was just a bunch of non-production-facing stuff.


Was this Morsy's government or Sisi's? If it's Morsy's government you're holding a grudge against, I have some good news for you. (Presumably you're not holding that grudge against random taxi drivers and housewives in Alexandria.)


I don't know if the level of bureaucracy where that decision was made is really impacted by the leadership changing. Egypt continues to make super short notice timezone changes as recently as last year. (Just at least not 3 days notice this most recent time around)


As I'm sure many of the customers can tell you, the company and the products are very real. And they come in very real milled aluminum cases, the case in the images is 3d printed because it's a prototype.


I don't understand the initial motivation for converting regular dynamic library dependencies to dlopen dependencies. How does that help with reducing the footprint?


It makes the presence of those libraries optional: you no longer need them to execute the relevant tool at all. It'll just mean you can't use features which depend on those libraries. For libraries that are only pulled in for less-commonly used features it makes a lot of sense (the specific case they are doing it for is generating the initrd, which needs a copy of any libraries used by anything running in the initrd, which is almost always going to include systemd, but the systemd in the initrd is very unlikely to use any of these optional features)


Dynamically linked libraries can have side effects at load time.

Which the recent xz attack used to mess with `sshd`, even if it never actually used functions from the library.

dlopen loading only loads the library *when it's actually used*, and doesn't include everything and the kitchen sink on startup.

If the libraries are used seldom enough, it might also allow more dependencies to be made optional in package management.


I think it's also relevant that the xz exploit made use of the fact that by running code before main, they could modify areas of memory that later get turned read-only. Any library that does get loaded with dlopen can of course still attack the process it's in, but it has fewer tools available to it for evading detection.


So... I repeat the GP.

What you gain is that the vulnerabilities will be harder to track down?


The specific way sshd was infected would not have happened with libxz as dlopen library.

Debian's sshd only uses libsystemd for the notify api. I.e. it doesn't need any feature that uses libxz. If it's dlopen()ed, it does not need to be loaded into the process context to use an unrelated feature.

FWIW, IMO upstream systemd should split their monolithic library and allow users to pick better that way, but this has other implications on DX.


> FWIW, IMO upstream systemd should split their monolithic library and allow users to pick better that way, but this has other implications on DX.

FWIW, upstream systemd has the opinion that no-one should load the library for startup notification, instead they should use the well documented api and just write a message to a socket.


Indeed, that's what the Debian maintainer of OpenSSH did soon after the quick security fix. He replaced the dependency on libsystemd with some hand-made code that notifies systemd by socket. https://salsa.debian.org/ssh-team/openssh/-/commit/cc5f37cb8...


That's not what they said in their update to the documentation[0] last week, which says that "although using libsystemd is a good choice, this protocol can also be reimplemented without external dependencies".

It calls it a "good choice". Did they say somewhere else that no-one should do it despite it supposedly being a good choice?

[0] https://github.com/systemd/systemd/pull/32030/files



sshd should not have used libsystemd in the first place for the trivial notification. And the ifunc stuff is its own security nightmare. Papering over this by dlopen-ing some libs in libsystemd does not address the deeper issues.


FWIW, IMO upstream systemd should split their monolithic library and allow users to pick better that way, but this has other implications on DX.

They've done the exact opposite afaict. Libsysd used to be split up, now it is monolithic.


Upstream systemd should just cut the unused crap from their notification protocol and promise it's stable.

They probably couldn't change it anymore if they tried to.


The systemd notification protocol has been stable and documented as such for years (probably even a decade at this point): https://systemd.io/PORTABILITY_AND_STABILITY/


The whole protocol is "write READY=1 to the socket found in the NOTIFY_SOCKET environment variable".
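That one-line protocol is small enough to sketch without libsystemd at all. A hedged sketch (the `NOTIFY_SOCKET` variable and the "@" abstract-socket convention are as documented by systemd; everything else here is illustrative):

```python
import os
import socket

def sd_notify(state: bytes = b"READY=1") -> bool:
    """Send a readiness notification to the service manager by writing
    to the datagram socket named in NOTIFY_SOCKET. Returns False when
    not running under a notify-aware manager."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        # Leading '@' denotes a Linux abstract socket (NUL-prefixed name).
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(state, addr)
    return True
```

This is essentially what the Debian OpenSSH patch does in C: a dozen lines instead of a link-time dependency on libsystemd and everything it pulls in.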


> What you gain is that the vulnerabilities will be harder to track down?

No, they're just as easily tracked.

What you gain is that you can refuse to install optional dependencies because they are now optional when they used to be required.

That's a fairly big deal.

Of course, in practice all those optional dependencies will be installed anyways because of other things in the distro needing them. You can fairly object that not much changed, at least for now.

A better approach would be to eliminate a lot of these dependencies somehow. Another approach would be to sandbox the dependencies that cannot be eliminated.


It meant that had the bad xz version been shipped to distros, only some people would been vulnerable instead of everyone. That is valuable.


Though only with this particular approach to the backdoor. If systemd had always had this approach (or distros hadn't patched sshd to link it in), the attackers would have focused on a different path from delivering malicious code to widely-used distros which executes in a privileged context to network RCE.


Wouldn't this systemd feature add a convenient centralized point of attack to inject libraries? Not as open at the user level, but similar to a LD_PRELOAD kind of vulnerability.


Not in a way that isn't otherwise accessible, IMHO. I mean, if you're really concerned about injected vulnerabilities into high-trust software (and you should be!) you should be suspicious of any dynamic linkage at all. But if you're going to do it, doing it late and under affirmative control is almost certainly the right choice.


It's because of how dynamic linking/loading works on Linux. An ELF dependency means that symbols in the library can override symbols in the binary or in other libraries. That doesn't work when the library is loaded with dlopen().


Except for the users.


That’s the essence of consumerism, isn’t it? "how to milk out users as much as we can" (with variation around "milk gently lot of users" or "milk hard a few addicted junkies", with everyone dreaming of being Apple "milk really hard a lot of addicted junkies")


That would imply WAF gets to see unhashed passwords, so not good at all.


WAF always sees unhashed passwords -- passwords are sent TLS encrypted in a POST body (unhashed) and are hashed by the server software -- and that's regardless of the password policy.


How would a WAF do its job if it can't see the request payload?


There are various schemes where the password is salted, hashed or prehashed on the client side, to various effectiveness. They have never been really popular and the advent of ubiquitous https probably made them even less common, but they do exist. They do help protect you from your own WAF though.


Can you elaborate on this, or link something that does? My intuition is that whatever gets sent over the wire is effectively the password. I'm not sure how the server could validate some rolling hash of the password (based on, say, a timestamp) without having to store the pre-image (i.e. the raw password).


The SRP Authentication and Key Exchange System does not send the password from the client to the server. This scheme is supposedly used by Blizzard when authenticating users in some of their online games.

https://www.rfc-editor.org/rfc/rfc2945

https://security.stackexchange.com/questions/18461/how-secur...


Yes, that's the common counter argument. Your hash has now just become the password, and no amount of clever salting really solves that.

It still prevents the server (and any proxies, MitM attackers, etc) from seeing the plain-text password, which can help protect the user if they reused the password somewhere else. Assuming the client wasn't also compromised, which is very likely in web applications but maybe a valid scenario in apps and desktop applications.

The other imho valid idea is that you can run a key derivation function client-side (e.g. salted with the user-name), in addition to running your normal best-practice setup server side. This can allow you to run more expensive key derivation which provides more protection if your database is leaked, while also making dictionary attacks on your authentication endpoints less viable.
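That client-side KDF idea can be sketched in a few lines. A hedged sketch, not a vetted scheme: the domain string, iteration count, and parameter names are illustrative, and the server must still hash this value again with its normal setup.

```python
import hashlib

def client_prehash(username: str, password: str,
                   iterations: int = 600_000) -> str:
    """Derive an expensive, deterministic hash on the client so the
    server (and any WAF/proxy in front of it) never sees the raw
    password. Salting with a site-specific string plus the username
    keeps the result unique per site and per user."""
    salt = ("example.com:" + username.lower()).encode()  # hypothetical salt
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return dk.hex()
```

The server treats the returned hex string as the "password" and applies its own salted hash on top, so a database leak still doesn't reveal anything reusable on other sites.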


If your password is 123456, then client-side hashing will make this less obvious. If the site is compromised in a way that reveals passwords, then it will not trivially work on other sites that use your password. In addition, stronger total hashing can be used: if your server can do M hashes per second and your client can do N hashes per second, the total number of hashes allowed for a one-second login is (M/$NUMBER_OF_CONCURRENT_LOGINS)+N, which is strictly larger than (M/$NUMBER_OF_CONCURRENT_LOGINS).

SRP[1] is an even better improvement, where an eavesdropper cannot authenticate as you; there is a challenge-response to login.

1: https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...


Why would the WAF see hashed passwords? Passwords are hashed by the application, so that happens after the WAF does its job and hands the request over to the app.


These are SQL commands. The WAF would see the password unless you pre-hash it on the client side in JavaScript (not a bad idea). But the database really should never ever see the plaintext password. If it does you're doing a lot more wrong than just being open to SQL injection.


The funniest part of this is that they don't even check for all of the banned strings.

Source: I'm a student there and tried it out of curiosity.


They'll probably use the disclaimer as an excuse to blame you if something breaks.


"... killed, or worse, expelled!" (https://www.quotes.net/mquote/41411)

Seriously though, I doubt there would be any consequences even if some BOFH tried to blame you


I paused when I saw that the TLD is `si`. But then I found out that it’s just Slovenia.


It's easy to ensure compliance when the risk of non-compliance is getting expelled.


> The first 4 are implementation defined rather than undefined.

Third and fourth are only defined in some implementations.


That is fair for 4, but why would it be the case for 3?


If char is signed and ' ' * 13 is bigger than CHAR_MAX, you get UB by signed overflow.

