Dynamic linking also lets you update libraries for things like security issues; it's not just a memory thing. Kinda agree on the space thing too (plus much less chance for things like buffer overflows).
FWIW: I think everything has its place, and everything has tradeoffs. I can definitely see a lot of usefulness for dynamic linking, and the point you raise is probably the best current reason.
... but since I'm already playing devil's advocate :)
Dynamic linking also lets you update libraries ... and introduce security issues simultaneously across all applications, increasing the number of attack vectors through which a single vulnerability can be exploited.
Actually, it's a wash. If all we had was static linking, people would statically link the same common libraries. So you'd have to update multiple binaries for a single vulnerability.
I've seen this in my day job at Pivotal. The buildpacks team in NYC manages both the "rootfs"[0] of Cloud Foundry containers and the buildpacks that run on them.
When a vulnerability in OpenSSL drops, they have to do two things. First, they release a new rootfs with the patched OpenSSL dynamic library. At this point the Ruby, Python, PHP, Staticfile, Binary and Golang buildpacks will be up to date.
Then they have to build and release a new NodeJS buildpack, because NodeJS statically links to OpenSSL.
Buildpacks can be updated independently of the underlying platform. The practical upshot is that anyone who keeps the NodeJS buildpack installed has a higher administrative burden than someone who uses the other buildpacks. The odds that the rootfs and the NodeJS buildpack get updated out of sync are higher, so security is weakened.
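As an aside, if you're worried about binaries carrying their own statically linked copy of OpenSSL, one crude way to find them is to grep for the embedded version banner. Here's a rough Python sketch; the search paths and pattern are only illustrative, and it's a heuristic, not a guarantee:

    #!/usr/bin/env python3
    """Heuristic: find binaries that embed their own OpenSSL copy.

    Statically linked OpenSSL usually carries a version banner such as
    "OpenSSL 1.0.2k  26 Jan 2017" inside the binary itself. The search
    paths below are only an example.
    """
    import re
    from pathlib import Path

    PATTERN = re.compile(rb"OpenSSL \d+\.\d+\.\d+[a-z]*")

    def embedded_openssl(path: Path):
        try:
            data = path.read_bytes()
        except OSError:
            return None
        match = PATTERN.search(data)
        return match.group().decode() if match else None

    for directory in ("/usr/local/bin", "/opt"):
        for binary in Path(directory).rglob("*"):
            if binary.is_file():
                version = embedded_openssl(binary)
                if version:
                    print(f"{binary}: embeds {version}")

Dynamically linked binaries don't carry the banner (it lives in libssl/libcrypto), which is exactly why they're cheap to fix: one library file on disk, one update.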
This was a much more powerful reason before things like Docker became common and methodologies adapted to provide updates for Docker images, which for this purpose are functionally identical to static binaries.
At least I hope "methodologies adapted"; I don't use Docker images, so that's an assumption on my part, but I feel it's a fairly safe bet.
Docker images don't have a nice way of updating without "rebuilding everything". There's a tool called zypper-docker that does allow you to update images, but there's no underlying support for rebasing (updating) in Docker. I was working on something like that for a while, but it's non-trivial to make it work properly.
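To make "rebuilding everything" concrete, here's a minimal sketch using the docker Python SDK (the image tag and build context path are made up). The point is that there's no in-place patching of an existing image; you rebuild from the base, and pull=True re-pulls that base so the rebuild picks up its security fixes:

    # Sketch of the "rebuild everything" workflow: no in-place update exists,
    # so you rebuild from a (hopefully patched) base image and roll it out.
    # The tag and build context path are illustrative only.
    import docker

    client = docker.from_env()

    image, build_logs = client.images.build(
        path="./myapp",        # directory containing the Dockerfile (example)
        tag="myapp:patched",   # example tag
        pull=True,             # re-pull the base image to pick up its fixes
        nocache=True,          # don't reuse stale cached layers
    )

    for entry in build_logs:
        if "stream" in entry:
            print(entry["stream"], end="")

After that you still have to stop and recreate every container started from the old image, which is the rollout cost I mention below.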
Hmm, I assumed it would be something along the lines of the images being fairly static and updated as a whole, with you just applying your configs and data on top, possibly through mount points.
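Something like this is what I had pictured, purely a guess at the usual pattern (the image name and paths are made up):

    # Guess at the usual pattern: treat the image as immutable and mount
    # config/data in from the host. Image tag and paths are made up.
    import docker

    client = docker.from_env()

    container = client.containers.run(
        "myapp:patched",   # example image tag
        detach=True,
        volumes={
            "/srv/myapp/config": {"bind": "/etc/myapp", "mode": "ro"},
            "/srv/myapp/data": {"bind": "/var/lib/myapp", "mode": "rw"},
        },
    )
    print(container.id)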
I was responding to the comment that static binaries make it harder to roll out library security updates. Docker has revived that problem: there isn't a way of nicely updating images without rebuilding them (which in turn means you have to do a rollout of the new images). While it's not a big deal, it causes some issues that could be avoided.
Yes, but presumably you're running far fewer Docker images than you'd have affected binaries if you statically compiled everything. For example, I assume in a statically compiled system an update to zlib will affect a lot more packages than the Docker images you're running (on a server I admin, there are 3 binaries in /bin and 374 in /usr/bin that link to zlib, which condense down to a smaller, but still likely quite large, set of OS packages).

It's easier in a dynamically linked system, where you can just replace the library, but it's not that much better for the sysadmin: if you want to be sure you're running the new code, you need to identify any running programs that have linked zlib and restart them, since they still have the old code resident in memory.
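For the "find the running programs that still have the old code mapped" part, on Linux /proc gives you a reasonable answer: once the library file on disk has been replaced, processes still holding the old copy show the mapping with a "(deleted)" suffix. A rough sketch (libz is just the example library):

    #!/usr/bin/env python3
    """List processes that still have an old (replaced) libz mapped.

    /proc/<pid>/maps lists every file a process has mapped; after the file
    on disk is replaced, the stale mapping is marked "(deleted)".
    """
    import os

    LIBRARY = "libz.so"

    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/maps") as maps:
                mappings = maps.read().splitlines()
            with open(f"/proc/{pid}/comm") as comm:
                name = comm.read().strip()
        except OSError:
            continue  # process exited, or we lack permission
        if any(LIBRARY in line and "(deleted)" in line for line in mappings):
            print(f"{pid} ({name}) still has the old {LIBRARY} mapped; restart it")

You generally need to run it as root to see other users' processes.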