Thanks. I really liked the top voted comment there by edw519 [0]
> I have a simple guideline for real life interactions with others that carries over quite well on-line, "Deal with issues; ignore details."
> It's amazing how well this works in person, especially when trying to get something done. My number one question to another is probably, "Is that an issue or a detail?" We can almost always decide together which it is. Then, if it's an issue, we deal with it, and if it's a detail, we move on to the next issue.
> This has also saved me countless hours and aggravation on-line. If I post something and someone disagrees, I quickly decide whether or not it's really an issue and only engage the other if it is. I realize that this is just a judgment call, but I'd estimate about 90% of on-line disagreements are just details. In these cases, I think it's best to simply move on.
In the meantime, I found that enabling page transitions is a progressive-enhancement tweak that can go a long way toward making HTML replacement unnecessary in a lot of cases.
1) Add this to your CSS:
@view-transition { navigation: auto; }
2) Profit.
Well, not so fast haha. There are a few details that you should know [1].
* Firefox has not implemented this yet, but it seems likely they are working on it.
* All your static assets need to be properly cached to make the best use of the browser cache.
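For fingerprinted static assets, the usual approach is a long-lived, immutable cache header along these lines (adjust max-age to your own setup):

  Cache-Control: public, max-age=31536000, immutable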
Also, prefetching some links on hover, like those on a navbar, is helpful.
Add a css class "prefetch" to the links you want to prefetch, then use something like this:
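(A minimal sketch using plain DOM APIs and a <link rel="prefetch"> element; the "prefetch" class is the one mentioned above, everything else is just illustrative.)

  // On first hover over any a.prefetch link, inject a <link rel="prefetch">
  // for its URL so the follow-up navigation can be served from the cache.
  const prefetched = new Set<string>();
  document.querySelectorAll<HTMLAnchorElement>('a.prefetch').forEach((a) => {
    a.addEventListener('mouseenter', () => {
      if (prefetched.has(a.href)) return; // prefetch each URL only once
      prefetched.add(a.href);
      const link = document.createElement('link');
      link.rel = 'prefetch';
      link.href = a.href;
      document.head.appendChild(link);
    }, { once: true });
  });

The { once: true } option keeps the listener from re-firing on every hover of the same link.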
Really resonated with this, reminded me of the journey I went on over the course of my dev career. By the end, my advice for every manager was roughly:
* Don't add process just for the sake of it. Only add it if seriously needed.
* Require ownership all the way to prod and beyond, no matter the role. (Turns out people tend to really like that.)
* Stop making reactive decisions. If something bad happened as a total, extremely unlikely fluke, don't act like it's going to happen again next week.
* Resist the urge to build walls between people/teams/departments. Instead, build a culture of collaboration (Hard and squishy and difficult to scale? Yup. Worth it? Absolutely.)
* Never forget your team is full of actual humans.
> Chomsky responds with the typical comment that it would take an unprecedented amount of coordination to have so many people keep the secret that it doesn't even make sense.
I remember hearing that argument made back in the 00's about government surveillance. The secret doesn't have to be kept perfectly; it just has to be made to sound ridiculous.
A switch to proportional voting could require constitutional changes in most places. Proportional voting also introduces all kinds of unnecessary problems: it does away with districts, so constituents' votes get diluted and they lose their specific representative, which is one thing the American system gets right.
What we really need is a switch from "first past the post" to approval/score/range voting, which would dissolve the two party system by eliminating spoilers and thereby making third parties viable.
Score/range voting is "that thing the Olympics uses"; approval voting is "that thing the Olympics uses if the only possible scores a judge can give are 0 or 1".
As far as I can tell the biggest impediment to this is a lot of people proposing alternative systems that aren't as good (e.g. IRV) and then no change is made because proponents of change are divided on which change to make.
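If it helps to see the mechanics, here's a rough sketch of an approval-voting tally (hypothetical ballots; each voter marks every candidate they find acceptable, and the most-approved candidate wins):

  // Each ballot is the set of candidates a voter approves of (i.e. scores 0 or 1).
  type Ballot = Set<string>;

  function approvalWinner(ballots: Ballot[]): string | undefined {
    const totals = new Map<string, number>();
    for (const ballot of ballots) {
      for (const candidate of ballot) {
        totals.set(candidate, (totals.get(candidate) ?? 0) + 1);
      }
    }
    // Most-approved candidate wins; approving a third party never hurts
    // your other approvals, which is what removes the spoiler effect.
    return [...totals.entries()].sort((a, b) => b[1] - a[1])[0]?.[0];
  }

  // approvalWinner([new Set(['A', 'C']), new Set(['B']), new Set(['A'])]) -> 'A'

Score/range voting is the same idea with a wider range of allowed scores summed per candidate.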
This is why I always gravitate towards software projects that are centered around making money (within ethical bounds, of course). The closer to the bottom line my code is, the larger the sales and support team is around my code, and the more customers there are (real paying customers, not internal employees who like to be called customers) using my code, the better.
It may sound overly hard-nosed and cynical to some people, but I find it's just the opposite. The drive to make more money is the only thing that trumps every other petty motivation people follow at work. It trumps favoritism, empire building, and intra-office rivalries. It trumps good ol' boys networks and tech bro networks. Money brings people into the same room who would never normally be in a room together, and they do it willingly. It forces people in power to listen to small fries. While money corrupts on an individual level, it purifies on an institutional level. Its universally accepted value allows a variety of individual motives to flourish.
This seems to change once a company goes public and hits a certain size, as the flow of money becomes less and less tied to actual sales and consumer behavior and more and more based on financial engineering and stock price.
Upon reading your comment, and the GP's, it occurred to me that we've lost the idea that it is the buyer who is responsible for their purchases.
It used to be that people and organizations making unethical purchases were the ones we considered, and held, responsible. For a long time we've had good, positive movements centered on informing the buyer. We added expiration dates, ingredient lists, nutritional value information, crashworthiness scores and reliability ratings, country of origin labels, even ethical sourcing labels. Perhaps too much of a good thing caused information overload and resulting numbness? Somehow, between the Prohibition, the "war on drugs", and the supply side moral regulations, we've lost the spirit of "well informed free agents making decisions".
Most of the services (FB and the likes) we're discussing here are morally neutral by their nature, and it takes concerted efforts to make them non-neutral[1]. It is the particular use they are being put to that is moral or immoral. Let's not shift vast moral powers from the wide society to a narrow cadre, shall we? The economy is a neat distributed system. It's the popular democracy before democracy became popular. Let's not give it up.
--
[1] example of non-neutrality: the current trend of algorithmic manipulation
Counting contributors & contributions is about as sensible as counting kLOC and pages of manual.
Conway's law about software architecture implies the existence of a management-space equivalent of the CAP theorem.
Communication suffers a combinatorial explosion[1], and graph problems are pretty expensive to solve in meatspace.
Attention is an exhaustible resource, and easily DDoSed.
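To put a number on the "combinatorial explosion" (a back-of-the-envelope sketch, nothing more):

  // Pairwise communication channels in a group of n people: n * (n - 1) / 2.
  const channels = (n: number): number => (n * (n - 1)) / 2;
  // channels(5) === 10, channels(10) === 45, channels(50) === 1225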
[edit]
dilandau - please note I've carefully avoided certain spicier keywords & expressions in the previous post, so as not to put off a casual reader and to avoid attracting quick downvotes.
The couple of spicy keywords in your response end up needlessly distracting a casual reader from the correct point you are making. And I'm not talking about the "fucking" adjective :)
> In pretty much every study within group differences are far and away more massive that between group differences.
You are perpetuating what’s known as Lewontin’s fallacy.
This claim is extremely narrowly true: clustering on a 1-dimensional fixation index is insufficient to reliably mechanically classify race. If you don’t know what that means, stop trying to use this claim in your arguments.
Here’s what it doesn’t mean: it doesn’t mean that there aren’t significant population-level phenotypic differences between ethnic groups. These differences can include intelligence (and 30 years of twin studies suggests they probably do).
The underground in underground programming refers to maintaining low visibility, so as to present a low target silhouette. An additional benefit is not attracting too many participants who are in it for the clout rather than for solving problems & good engineering.
Open Source is no longer sufficient for software freedom; the current 'battlefield' is maintaining security from activist pressure or gradual take-over.
You've been paying a man to warn you if there are tigers on the main street. Every day for the last 10 years he's been telling you there are no tigers on the main street and you've taken the main street or he's told you there are tigers there and you've taken the side street. Today he said that there was a tiger on the main street but you took it anyway and didn't see the tiger. Has his trustworthiness collapsed or was he always scamming you?
Thank you, apocalypstyx - your comment is insightful and useful in analyzing certain present-day trends.
Bit of a tangent, but I remember reading a comment regarding old art's propensity for showing the past as nearly the same as the present. Take medieval Christian paintings - while depicting scenes from eras long past and cultures far removed, they typically showed then-current clothing, equipment, housing, etc. It's only during the Renaissance (hopefully I'm not mixing that one up?) that people commonly understood that the past & other societies were significantly different from the present, with different customs, culture, technology, etc. Accompanying that awareness was ongoing research into how the past actually was; what were the possibilities and limitations; what was the culture, and what drove people to certain choices, however misguided they may seem in hindsight.
Through your comment I realize we are in the process of losing the ability to clearly delineate the past, and to hold it as an imperfect, but necessary, stepping stone to the present day. And perhaps also the ability to run effective, objective research into the past, without feeling the urge to nudge it in the direction of a preferred narrative.
Thank you for the thorough and practical write-up.
About the only thing I would add to it is i18n concerns.
A few quick ones off the top of my head (a small normalization sketch follows the list):
- Words are separated by whitespace or dashes.
- Customers only ever enter ASCII.
- Customers are consistent about entering accented characters with or without their accents.
- A "Unicode-capable" system will happily take in any valid unicode.
- A "Unicode-capable" system will pass through any valid unicode undisturbed.
- Software systems perform Unicode normalization.
- WinNT API is UTF-16.
- There is 1-to-1 mapping between uppercase and lowercase.
- Unicode collation algorithm is optimal for every single language.
- Unicode collation algorithm is optimal for multi-language document sets.
- Distinguishing/coalescing plural and singular forms of words is easy.
- There are separate plural/singular forms of words.
- Words have stem and optional suffixes, but not prefixes.
- Soundex etc. works for every language.
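As a concrete illustration of the normalization point above: the same visible text can arrive precomposed or decomposed, and a naive comparison treats them as different strings. A minimal sketch:

  // "é" can be one precomposed code point (U+00E9) or "e" plus a combining
  // acute accent (U+0065 U+0301); they render the same but compare unequal.
  const precomposed = '\u00e9';
  const decomposed = 'e\u0301';
  console.log(precomposed === decomposed);                                    // false
  console.log(precomposed.normalize('NFC') === decomposed.normalize('NFC'));  // true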
> "Galileo and Darwin are famous examples of this phenomenon,"
Famous, yes, but there can also be incorrect assumptions in that. Take Galileo, for example. A lot of people assume his heliocentric model was correct and was rejected only because heliocentrism was considered heretical, since it contradicted the ruling assumption of geocentrism. But the Vatican at the time was seriously considering a number of models, including some heliocentric ones. One of the reasons Galileo's model was rejected was that it contradicted observations. Planets didn't quite move in the way he predicted, and that's because Galileo clung to the incorrect assumption that orbits had to be circular.
Of course his core idea of heliocentrism was less wrong than the geocentric models, but at the same time it's an example of how addressing one incorrect assumption can lead you into another incorrect assumption. And that can also lead to a lot of resistance to your idea, even if the core of it is correct.
As for Darwin, a lot of people at the time already assumed that something like evolution had to be going on, and that many animals had common ancestors. They just didn't know how it worked. Even while Darwin was working on his theory, Alfred Russel Wallace was working on the exact same idea. So in that case, the idea was actually obvious to anyone paying attention, and Darwin happened to be the one to get there first. But if he hadn't published about it, Wallace would most likely have done so.
I find it interesting for a different reason. At the risk of sounding like these fake news peddlers, I see the problem like this:
  BULLSHIT SCALE

  |[------]----------[---------------]-->
  |   |                   |             |
  |   |                   |             fake news
  |   |                   the range they
  |   |                   actually operate in
  |   the range people think
  |   news publications operate in
  |
  as close to truth
  as one could get
That is, "fake news" is only a little bit worse than the stuff mainstream news publishes. It's different only because it dares to make that one extra step, cross the line and detach completely from reality. The goal of both is the same anyway - to drive ad impressions. To me, news publications writing about fake news is like pickpockets getting all outraged about muggers, who don't engage in the fine arts of trickery, and instead just hit their victim and take their wallet. They're just angry because they have competition.
And I don't buy the argument that the content of the news is just innocent bias, and that everyone has one; when one publication makes a square look like a circle, and another makes it look like a triangle, that's taking it a bit too far.
Evidence for my view piles up if you add up all the little things HN readers encounter regularly. For instance, it's a rare news article posted on HN (or in other relevant communities frequented by experts) that doesn't get thoroughly ripped apart when someone posts the original source behind it, or when someone who knows the topic first- or second-hand chimes in. Gell-Mann amnesia is a known phenomenon. The fact that you should always review articles quoting you prior to publication, because journalists will misquote you and make you look like you believe the opposite of what you do, is a well-ingrained cultural meme, speaking to the ubiquity of the problem. Etc.
> It's different only because it dares to make that one extra step, cross the line and detach completely from reality.
I get your point, and generally sympathize, but unfortunately the "mainstream news" has already lost the plot in what it publishes and broadcasts with a straight face. The recent impeachment hearings were but the latest example, whereby the video of the hearings completely contradicted the headlines in the WSJ and the ticker stream on CNN.
The genie is out of the bottle; as in, there is no truth any longer about things far away from you. The only way to deal with this is to become smaller -- smaller in our communities and concerns, and focus on that. Not worrying about national politics, or nonsense happening in faraway countries, but about our own families, neighbourhoods, and communities. The smaller scale will mitigate a lot of the reality distortion that is currently occurring, plus it will make for better living standards all around.
I would encourage anyone interested to read Edward Bernays (nephew, through both parents, of Sigmund Freud). He lays out why journalism is a loss leader for people who want control. It makes a lot of otherwise confusing market dynamics around media make sense. Making money, directly, on journalism is almost never the idea.
See also Stucchio's impossibility theorem: it is impossible for a given algorithm to be procedurally fair, representationally fair, and utilitarian at the same time:
Congratulations, you named the very reason this technique is effective. I am not being facetious here.
The whole idea is to make the user perform disruptive, unfitting actions (both mental and physical) to improve the chances of catching errors.
First, to make the mind "shift gears", or perform a "context switch", with all the related cache-flushing and prediction-discarding and all that.
Second, to activate brain regions that were hitherto suspended; the ones associated with other senses and skills, in particular with vocal skills. Fleshing out abstract concepts into concrete words helps catch mistakes, just like putting abstract feature requests into concrete diagrams or code helps with catching errors or inconsistent expectations & assumptions.
Working is not the problematic part. There are two interrelated problems: how much complexity is introduced, and how to proceed when something is not working, or is flaky.
You, me, and our fellow posters know very well that Lennart's solutions are complex - in fact, more complex than the problem domain. Accordingly, the ability to narrow down, fix, or work around problems suffers. Systemd and PulseAudio aren't UNIX applications [1]; they are entirely new, rather opaque, sub-systems unto themselves. Thus they require separate knowledge, separate intuition, and separate skills.
All so I can seamlessly play music on TWO bluetooth speakers while also browsing logs without learning regex.
>Lennart Poettering is a dev who has contributed MASSIVE AMOUNTS OF CODE
Massive amounts of code are a clear and present problem. Why is the solution more complex than the problem domain? Why is success measured by the magnitude of effort, rather than by barriers removed and standards adhered to?
--
[1] What makes a UNIX application? Small POSIX applications tied together with shell scripts, communicating via pipes, exposing services as files, following the `Worse is better' and `Do one thing, and do it well' principles.
A modest proposal: let's use only static libraries (plugins aside). Let's help KSM efficiently merge the R/X and R/O memory pages of the static libraries in running processes. To maintain the current performance, let's give KSM a hint: a standardized, canonical library pathname string, including the name and version number.
During the (static) linking of libraries, an extra metadata item would be stored in the ELF file: the canonical pathnames of all the static libraries used, along with the base addresses of their R/X segments (code) and R/O segments (read-only data). Upon forking a process, the information would be provided to the kernel and passed over to KSM, to use as the basis for same-page lookup and possible merging of the pages.
This effectively inverts the current mechanism. Currently, when libraries are dynamically loaded, the dynloader uses the actual file pathname of the shlib as the key for the operation opposite to "merging" - i.e., it re-uses the already loaded R/X and R/O pages by adding the proper memory mappings.
The proposed change is three-fold:
1) extend the linker to add the metadata to the ELF,
2) extend the in-kernel ELF interpreter to extract the info upon exec() and friends,
3a) extend KSM with a limited mode, where it would look up & merge only the hinted memory regions, in linear fashion, right after exec() & friends.
An alternative to 3a), to avoid fussing about with KSM:
3b) modify the VM subsystem to extend the current swap/SHM so it provides a unique address range for each static lib canonical pathname. Requesting pages from this address range would map from the shared pages, if any process has already loaded them.
To handle adversarial processes on a single machine, a further extension where crypto signatures are checked is possible.
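For concreteness, here's roughly the shape of the per-library hint record from point 1) above, written as a TypeScript type purely for brevity (the field names are hypothetical, not an existing spec):

  // One record per statically linked library, embedded by the linker and
  // handed to KSM (or the VM subsystem in variant 3b) on exec() as a merge hint.
  interface StaticLibHint {
    canonicalPath: string; // standardized pathname incl. name + version; the lookup key
    rxBase: number;        // base address of the R/X (code) segment
    roBase: number;        // base address of the R/O (read-only data) segment
  }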
Any time I see an article in a "mainstream" news outlet on a topic I'm familiar with, the inaccuracies make me cringe. The more familiar I am with the topic, the more inaccuracies I spot. The charitable interpretation is that news outlets have been so decimated by changes in their business model that they can't afford good editors. There are plenty of less charitable interpretations.
Here's a snarky reply, because I laughed out loud upon reading your "they're on their way out" and the "we need stronger policies that cannot be swayed by public opinion" posted in one of the responses.
>how you convince these people.
Four easy steps:
- show them the hockey stick graphs from models,
- show them the graphs from actual measurements,
- point out 2019 is colder than 2016, and probably than 2017, as per the article,
- point out the graph looks pretty close to a cluster of sine waves when viewed at any scale
Congratulations, now they are convinced for life. You can celebrate that with them by watching the 2006 An Inconvenient Truth for all the beautiful animations.
Snark aside, the gist is: people who have been following news for some two decades or more don't generally get alarmed that easily.