In my observation, the problem is recompilation avoidance. If you miss the warnings the first time around (maybe you're just trying to get the thing to run), they won't come back unless you do a full rebuild. You can set up a build system that prints them on every run, but it takes some care.
Why avoid recompiling? Why this reluctance to do a full rebuild?
I started programming in a world where you had to wait a day to get 300 lines of Fortran built. Now I routinely build the whole 44 GB of community ports of a major Linux distro in three days on a three-hundred-buck box at home.
There are 24 hours in a day, and your full rebuild will go just fine while you sleep, so long as you DON'T use -Werror.
I don't want to eat a 24-hour overnight full rebuild every time I fix a single typo. I do want to verify it still builds (i.e., that I haven't missed one of the uses of a renamed variable).
The shorter the feedback loop, the less context I have to rebuild for errors, the faster I can fix the error, the more efficient I am.
I'm not so far along that I make much use of the red squiggly lines generated by my IDE to highlight syntax errors before I even hit save - I use too many languages that can't be adequately parsed that fast for them to be terribly accurate - but it's a sign of just how much people want to shorten that feedback loop.
Have the CI server do full rebuilds overnight? Sure. Although I still have to bug my coworkers to pay enough attention to the CI server to even notice it's gone red from their changes, at times. Convincing them to read through the warning logs is a nonstarter.
That's a good idea for a makefile hack - somehow force files with warnings to rebuild every time. You'll see them, and also get motivated to fix the warnings.
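A sketch of how that hack might look, assuming GNU make, GNU touch, and a gcc-style compiler that prints warnings on stderr (all assumptions; adapt to your toolchain):

```make
# If a compile produces any warnings, backdate the object so it is
# older than its source; make will then recompile it -- and re-print
# the warnings -- on every run until they're fixed. The object itself
# stays valid, so linking still works in the meantime.
%.o: %.c
	@$(CC) $(CFLAGS) -c $< -o $@ 2>$*.warn; st=$$?; \
	cat $*.warn >&2; \
	[ $$st -eq 0 ] || { rm -f $*.warn; exit $$st; }; \
	if [ -s $*.warn ]; then touch -r $< -d '-1 second' $@; fi; \
	rm -f $*.warn
```

The backdating trick (`touch -r $< -d '-1 second' $@`) is what provides the motivation: the file nags you on every single build until the warning log comes up empty.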
> Why can't developers, y'know, ACTUALLY READ THE BLOODY WARNINGS?
Same reason checklists are handy: people automatically optimize away "dead code" (never mind that reading the warnings is sometimes useful).
> And then, y'know, they could FIX MORE THAN THE FIRST ONE after a single build pass?
I do get multiple errors with -Werror and kin, FWIW. But again we run into human nature - C++ errors get unreliable after the first one, so people tend to optimize away the "useless" step of reading the second error...
Three negative comments on this submission, that's some impressive anti-cult thing you've got going.
Slackware doesn't do "security theatre" updates. Whenever there's both a credible risk and a reasonable fix, you'll see an update, and it'll be quicker than distros that are multiple steps downstream from Debian.
You know, I felt really troubled to have been using something for 20 years and realize things had changed and it was no longer right and no longer good for me. I felt like a fool for having stuck with it for quite so long.
I realize that sounds like someone talking about a bad marriage, and I suppose we can generalize usefully on relationships of whatever type.
I think we must end up with longer-lasting bad feelings about anything we used to value.
> sometimes they cause problems (hello, Debian weak keys).
> until recently nobody was packaging Chromium for example.
To name one example, SlackBuilds.org has had Chromium since 2010, although admittedly that's not so long ago as the Debian weak key cockup, which was 2006.
Maybe your examples could use an upgrade to the latest stable version.
Chromium may have been packaged in 2010, but it was released three years before. The Debian weak keys "cockup" may have been created in 2006, but it was discovered two years later. Maintainers had long windows of time to add value. Did they?
I don't mean anything personal by it. If I was maintaining 38 packages on my nights and weekends I'd do a bad job too.
But examples don't go out of date unless you present some force that takes 30k unmaintainable packages and turns them into 50k maintainable packages. What specific advance do you believe improved the art of software maintenance by an order of magnitude?
But if you want to talk recent examples, we could talk about how nobody's packaging Swift.
Arch Linux packaging files have source history going back to then as well.
Chromium is also an example of a package that took a long time to appear in other distros, because it's really written with the app mindset: it copies and incompatibly customizes most of its dependencies. No one would consider that a great idea for ongoing maintenance and security patching on a typical project, but of course this is a Google product-oriented thing with nearly a thousand well-above-average-skill developers assigned, so managing surprisingly huge volumes of code and changes is not a problem for them.
Web browsers like Chromium are also a good example of the kind of modern software which doesn't work well with the Debian release model, because it's "unsupported and practically broken" within half a year. That's not true for the "make" utility, or for gimp or inkscape or libreoffice or audacious or countless other useful applications and tools which are not browsers.
I really don't like the fact that modern browsers are a crazy-train and there's just no getting off it.
The planning authorities have an obligation to consider the potential historical and archaeological impact of a proposed development, and can impose an obligation on the developer for a professional archaeological investigation (funded by the developer) prior to construction, as a condition of approving the development.
Or in other words: "We know this site is teeming with archaeology. You don't get to trash it unless you pay for a proper dig before you start"
Yes, you're being ignorant. Use '--content-disposition' with wget, or '--remote-name' together with '--remote-header-name' (i.e. '-OJ') with curl. The top-level directory in the tarball corresponds to this name and everything works out as expected.
It's not only the name of the tarball after downloading it. RPM SPEC files require me to pass a full URL to the tarball's upstream source[1]:
> Source0: The full URL for the compressed archive containing the (original) pristine source code, as upstream released it. "Source" is synonymous with "Source0". If you give a full URL (and you should), its basename will be used when looking in the SOURCES directory.
But the only URL that is available is <Project URL>/archive/v0.1.1.tar.gz. The SPEC file is therefore defining a file named v0.1.1.tar.gz although I downloaded my tarball as foo-0.1.1.tar.gz. The packaging process looks for v0.1.1.tar.gz, fails to find it in my SOURCES directory and aborts.
I'm not blaming GitHub here in any way, I just wish I had a bit more flexibility for the tarball URL. Would be neat.
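To illustrate the mismatch with a hypothetical project "foo":

```
# What the guideline forces you to write:
Source0: https://github.com/user/foo/archive/v0.1.1.tar.gz
# rpmbuild takes the URL's basename, so it expects SOURCES/v0.1.1.tar.gz --
# but a Content-Disposition-aware download saved the file as foo-0.1.1.tar.gz.
```

(For what it's worth, some spec files work around this by appending a `#/%{name}-%{version}.tar.gz` fragment to the URL so the basename comes out right; whether that's acceptable depends on your distro's packaging guidelines.)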
And thus, every boneheaded project that ships with -Werror enabled will inflict an excrement tornado of FAIL on its grateful users and packagers.
THIS IS WHY YOU SHOULD NOT USE -Werror. NOT EVER. (And if you can't bring yourself to do anything about mere warnings, you shouldn't be in software engineering.)
And even if it were, it's a matter of adding -Wno-misleading-indentation to make builds green again. Not a big deal. And besides, it's likely that fixing those warnings could be done mechanically. (And it's hardly true that every project that ships with -Werror has such misleading indentation somewhere in its code.)
Huh. I would think it to be the other way around. When debugging, let me comment stuff out without erroring due to an unused variable or an unused parameter. Let me use a C-style cast to check the bit representation of an object. Then when building the release build, enable -Werror to make sure that I'm not sweeping anything under the rug.
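For instance, a Makefile could flip -Werror on only for release builds (a GNU make sketch; `RELEASE` is a hypothetical variable you'd set on the command line):

```make
WARNINGS := -Wall -Wextra

ifdef RELEASE
  # Release: warnings are errors; nothing gets swept under the rug.
  CFLAGS += $(WARNINGS) -Werror -O2
else
  # Debug: warnings stay visible but non-fatal, and the ones that fire
  # constantly while commenting things out are relaxed.
  CFLAGS += $(WARNINGS) -Wno-unused-variable -Wno-unused-parameter -g
endif
```

Then `make RELEASE=1` gives you the strict build, while plain `make` stays out of your way during debugging.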
> I eventually left because slackbuilds.org was not being run very well and I couldn't get the packages I needed
I'd be interested to hear more about that if you're willing to share. Let's see if we can do something positive to fix it, even if it's too late to help you personally.
The biggest thing was that SlackBuilds weren't being updated for the new release. There was also some unfriendliness on the mailing list related to getting that fixed (in general, that is; nothing I experienced personally). I don't remember exactly what it was, though. This was quite a while back, probably 13.1 or 13.37.
Thanks. FYI, for the upcoming 14.2 release we have an officially unofficial repo with hundreds of fixes for 14.2 ready to go (and hundreds more needed).
I'll admit I get a little tetchy when people insist on telling me 'xxx is broken on -current' when I already know exactly which 400+ packages are broken, and when everybody should understand that -current is for beta testers.