I honestly don't care how "unreliable" you think shared libraries are. Fully static linking is how we get gigantic 500MB monoliths that waste disk space and RAM, don't integrate properly with the rest of the system due to mismatched library versions, and don't get security patches from the system unless you manually update the binary itself.

Static linking may be "easier", but as programmers, it's our job to use the *right* solution, not just the easiest one.

Static linking may not matter on your development rig, with lots of spare disk and memory, but what about someone running your code on a netbook from 2009? Can your program even fit on their computer? Can they use multiple programs at the same time, or do they have to close everything else to free enough memory? What if they don't have fast Internet access, so they can't download the same security patch 50 times? These are all things you have to consider as a software engineer.


Dynamic linking might not be perfect, but it's the *right option to use*. Rust, Go, and other such languages need to start supporting it, unless they want to join the developers forcing people to buy newer, more powerful, more expensive computers every year. (If they can even afford it - if they can't, they're just stuck with barely being able to use their computer.)



Gonna prefix this w/: I fully sympathize w/ you.

My own experience: if the Rust world embraces code sharing/reuse the way we want, it will not be in the form of dynamic linking as implemented today. It would be done "busybox-style": one super append-only binary for all libs/deps.

Parametric polymorphism and Rust's unstable ABI mean your application and library source code can't really vary independently. Changes in the lib affect the app and vice-versa.
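A minimal sketch of the monomorphization point (the function and types here are illustrative, not from any real library): a generic function has no single compiled form, so a pre-built shared library can't ship machine code for every possible instantiation — the caller's crate has to generate each copy itself.

```rust
// Why monomorphized generics resist a stable dylib ABI: the compiler
// emits a separate machine-code copy of `largest` for every concrete
// type the *caller* uses, so a pre-built .so can't contain the code
// for instantiations it has never seen.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &x in &items[1..] {
        if x > max {
            max = x;
        }
    }
    max
}

fn main() {
    // Two instantiations here -> two distinct functions compiled into
    // this binary: largest::<i32> and largest::<f64>.
    assert_eq!(largest(&[1i32, 5, 3]), 5);
    assert_eq!(largest(&[0.5f64, 1.5]), 1.5);
    println!("both monomorphized copies live in this binary, not in a .so");
}
```

This is also why a library change can force a rebuild of the app: the generic code that actually runs was compiled inside the application's own crate.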

@keith I'm one of the few ppl who love Rust and wish it didn't completely discount dynamic linking.

But even putting aside the (correct!) observation that "dynamic linking just doesn't play nice w/ parametric polymorphism", the dynamic link disdain is real.

@cr1901 there are ways it could be made to work without resorting to a "busybox-style" approach (which seems REALLY inefficient and terrible)

@cr1901 @keith Shucks. I'd been hoping the binary API instability was only a quality of the language being in the toddler phase, and it would grow out of it soon.

But if the community thinks of it more as an anti-goal than a future-goal, I guess I shouldn't get my hopes up.

@keith True of Rust and Go, but at the start of this thread I thought you were talking about containerized app distribution! e.g. Firefox's snaps, etc.

[keturn says, having just installed a whole set of flatpaks to try to build a rust desktop app.]

@keturn love flatpaks and snaps and AppImages, who doesn't enjoy completely broken and fragmented systems? every app should be its own operating system

@keturn I sure love when the statically linked version of Qt in an application I have to use is completely different from the one on my system so it doesn't inherit the theme I use (and for bonus greatness it also doesn't inherit my keyboard shortcut settings so I have to switch my brain into a separate mode every time I alt-tab into the app window)

@keith rust supports C dylibs but unfortunately there's not really any way to have a conventional uniform binary ABI for a language with monomorphized generics

@keith I think Go already supports it to some degree? Then again Go is very fast to compile so I don't mind it being statically linked so much. Rust however is a pain to create packages for on Guix.
It's weird though that a static binary would take up that much space. Doesn't the linker strip out unused parts of the libraries?
Guix (and presumably other distros?) uses dynamic linking almost everywhere except in the initramfs, where the static binary ends up smaller.

@keith Hey, I am that someone! Just got a netbook from 2009 working.

It has 400MB of RAM and a 4GB HD, barely enough for a minimal Linux with Xorg. It is barely compatible with anything these days, because even webpages are ridiculously heavy now.

@eldaking Sucks that old hardware is barely usable due to sloppy development practices, though :/

@keith statically linked executables are rarely ever larger than 15MB??? and the runtime RAM usage difference between statically and dynamically linked executables is basically negligible, since dynamic linking has runtime overhead from loading the shared object pages and static linking has runtime overhead from, well, not sharing the used functions

dynamic linking would be great if it worked the way it's designed to

@AgathaSorceress It depends on the program. But even 15MB can add up if you're stuck on old hardware.

And, unless I'm completely mistaken, shared library code is shared between processes when loaded, so if a ton of things you're running rely on e.g. Freetype or OpenGL, dynamic linking would reduce memory usage.
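That sharing is observable on Linux (this sketch is Linux-only, since it relies on `/proc`): every `.so` a process has loaded shows up as a file-backed mapping in `/proc/self/maps`, and the kernel shares those read-only pages between all processes using the same library.

```rust
use std::fs;

// Linux-only sketch: /proc/self/maps lists every memory region mapped
// into this process. Dynamically linked libraries appear as ".so"
// file-backed mappings whose read-only pages the kernel shares across
// all processes that load the same library.
fn main() {
    let maps = fs::read_to_string("/proc/self/maps").unwrap_or_default();
    for line in maps.lines().filter(|l| l.contains(".so")) {
        println!("{line}");
    }
}
```

Run the same check from a fully static binary (e.g. a musl build) and the loop prints nothing, because no shared objects were mapped in.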

@keith the idea of dynamically linking is great, and it does reduce ram usage when the same part of a library is being used multiple times, with the downside of using more ram if that part is only used once

even with my largest project, a full tree of 405 libraries compiles to a ~18mb executable

i really wish dynamic linking was always a viable solution but in a lot of cases it just makes everything break

@keith (ie a dynamically linked haskell dev environment results in ~500 system packages, a broken language server and constant errors about missing dependencies, when this can be solved by just downloading 4-5 statically linked executables)

@keith i think a much bigger problem is languages that don't compile to executables at all, making you download all the libraries in their entirety and resulting in multiple gigabytes of unused code

@AgathaSorceress yeah, true. interpreted languages are good for smaller things, and should interface with native compiled libraries for larger, more complicated tasks

@AgathaSorceress hm. i've never tried haskell, but that sounds more like an issue with pre-packaged software? i remember having similar issues with certain pieces of software on Ubuntu, which didn't ever occur when I compiled them myself via emerge on Gentoo

@AgathaSorceress Maybe linking should be done during installation rather than in the packaging process itself. It's not really reasonable to expect everyone's device to be able to compile software from scratch, but linking shouldn't take nearly as much processing power

@keith its just that when there are hundreds of libraries and all of them are dynamically linked, there's constantly missing versions because the packages aren't updated quickly enough and the only solution is to spend hours trying to manually fix everything, give up and use the statically linked executables

@AgathaSorceress yeah... that's really a distribution issue though, not a problem with dynamic linking itself

@keith @AgathaSorceress we use rust on all our embedded platforms at work and thus have a lot of different rust binaries. 15MB each would be unacceptable. it would be a lot nicer if rust could do even a little dynamic linking.

@iitalics @keith i feel like embedded is a lot different from the usual systems where you get much more resources, and well, can't rust do dynamic linking anyway? like, i've even used a rust project that builds to an .so file and can be used with LD_PRELOAD to block spotify ads, and i think it's also possible to dynamically link to other libraries (using cargo:rustc-link-lib=dylib=name), even if a bit more complicated because it's not the default
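For reference, the build-script mechanism mentioned above looks like this ("foo" is a placeholder library name, not a real dependency); a `build.rs` just prints a directive that Cargo picks up:

```rust
// Sketch of a build.rs: printing this directive on stdout tells Cargo
// to link the system library "foo" (placeholder name) as a shared
// library (dylib) rather than a static archive.
fn link_directive(name: &str) -> String {
    format!("cargo:rustc-link-lib=dylib={name}")
}

fn main() {
    println!("{}", link_directive("foo"));
}
```

Producing a `.so` (the LD_PRELOAD case described above) is the other direction: that's `crate-type = ["cdylib"]` in Cargo.toml, which exports a C ABI rather than a Rust one.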

@keith @AgathaSorceress one point id like to offer is that if the libraries are large, surely it's their responsibility, not the software developer who consumes them? like, plan 9 is much smaller (less capable too but thats for another time) and only has static linking - its still smaller bcos the libraries are smaller
@keith @AgathaSorceress similarly i ran a mostly static Proper Linux that was smaller than dynamic linuxes bcos it did the bare minimum optimisation - not include unused things(glibc is incapable of this), and it had gl and freetype and such
@AgathaSorceress @keith granted, if everything uses the same library(and lots of it), static linking makes less sense

@keith i apparently missed a whole software discourse

we're supposed to hate dynamic linking now??

that feels like hating oxygen or something

it's just how things work!

@keith i don’t usually comment, but i feel as a dev i gotta speak up. how do you manage the case when distro maintainers let known bugged library versions through, and then users complain that *your app* breaks?

at work we are very fed up with the way e.g. Arch manages software updates without the least due diligence. it's why we resort to Snaps, AppImages and static linking, because otherwise we simply get the rug pulled every other week

@crystalmoon @keith I always feel like this carries an unspoken assumption that software devs are all more competent than distro maintainers, which... nope.

I don't trust most developers to keep libraries up to date or fix bugs, especially bugs in dependencies. I barely trust them to not put malicious code in their stuff. Distros sometimes do a bad job (and they shouldn't, ofc), but there is so much trash software out there that does a much worse job.

@eldaking @crystalmoon @keith I package a few projects that have to be statically linked (rust & go). At least half of them reference outdated libraries (I make spot tests sometimes). Rust warns me if a dependency has a known vulnerability (happened 1 time so far), but how many vulnerabilities does the tool know of? I don’t think I have ever seen one of the projects make a new release to fix an issue in a dependency.

Imagine if OpenSSL were statically linked into every project that uses it. :grimacing_eyes_wide_open:

@makeworld yeah but I don't think anyone has statically linked OpenSSL or the like into highly popular programs yet. if that was the case we'd be fucked

@makeworld also shit like Qt, if you use your own version instead of the system's it doesn't integrate with the rest of the system for e.g. themes

@keith good points, but it feels like those are more the exception than the rule. Having to package all my dependencies for many Linux distros to distribute my Go applications would be a nightmare.

@makeworld That's what distro maintainers and package managers are for. Use those. Don't reinvent the wheel.

@keith @makeworld Static linking *SSL programs isn't too uncommon. If you want to use boringssl or quictls instead of openssl in a program to take advantage of a feature like QUIC or ECH but your distro uses openssl, then the cleanest solution is to statically link the alternative libraries in.

Static linking can also help squeeze out some performance if you do PGO and LTO. I managed to shave off a few framedrops from mpv+ffmpeg this way.

It can also *save* disk space in some situations: if a program doesn't use an entire library, only the relevant bits get linked into a static binary. Again, PGO can really help reduce binary size here. If you compare a standard mpv package with all its ffmpeg/libass and encoder/decoder libs with a 44mb statically-linked PGO'd mpv, it's not much of a contest. A statically linked mpv and mpd on my system actually use less disk space than their dynlinked alternatives (including shared libs).
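In the Rust world discussed upthread, the LTO/size knobs live in Cargo.toml; a sketch of a size-and-speed-oriented release profile (PGO needs additional rustc flags beyond this):

```toml
[profile.release]
lto = "fat"        # whole-program LTO across all statically linked crates
codegen-units = 1  # one codegen unit so the optimizer sees everything
strip = true       # drop symbols to shrink the final binary
```

These options only help because everything is statically linked into one artifact — which is exactly the trade-off being argued about here.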

@Seirdy @makeworld
- PLEASE never statically link code with major security implications into your programs under any circumstances: if the distro stops packaging updates, or a user doesn't install one because it doesn't look like a security patch, they're fucked
- Static linking saves disk space if only a couple programs use small parts of a library, true, but that's not something you can guarantee is the case for everyone

@Seirdy @makeworld Also, programmers deciding to reinvent the wheel and use [XYZ fancy replacement for libraries that are already provided by the system] in their software has caused nothing but trouble for me in terms of stability lol. I've almost never had an issue with dynamically linked libraries, but I've had tons with smart-ass developers bundling in random shit because they think they know better than the distro maintainers
