I had a brief foray into Twitter, but had to stop. I followed my senator and found that no matter the post, the replies were filled with obvious disinformation, flat out lies, and irrelevant accusations.
I remember one person posting hand-drawn graphs without a scale claiming global warming is just a cyclical process. Does it make sense to report someone for bad science? They probably believe what they posted to be true, and others who read that unrefuted reply may begin to think the same. So I tried to teach critical thinking to random people on Twitter, but as you can imagine, this was a fool's errand.
Twitter (and nearly every "social" media platform) is like democracy: a sewer hose of manufactured consent, ignorance, mob stupidity, disinformation, and bot-automated propaganda that you'll need more than a shower or three to rinse off.
I gave up and blew up all of my "social" media because they didn't serve any purpose.
Maybe an invite-only platform could have higher signal with:
- multiple "vouches" of others to get an invitation
- frequency/reputation micropayment cost to post
- reputation/karma that isn't apparent or chased, with some granted at invitation
- elimination of pile-on
- multifaceted voting based on specific aspects of relevance, agreement, and insight
- humor voted/tagged and filtered by readers to avoid using downvotes (dv) for that
- dv has moderator-visible reasoning to double-check and prevent spurious dv
- prevention of dv retribution
- reduced anonymity (first name and picture) for higher-quality interactions
- mediation and de-escalation facilities such as pre-comment emotional content scanning (AI-based sarcasm detection would rock), posting delay of 2 hours, and side chats
- login required to view content, no search engine spidering
- operate as a sustainable nonprofit to avoid pressures of corporate profiteering
- servers and legally based in a country the US and EU cannot control
HN is not pursuing 10%/week growth, "engagement", etc. It doesn't cater to bots or viral posts, and there's a small, definable ruleset that is largely enforceable.
It naturally attracts people interested in its themes and subjects, and doesn't try to cater to everyone's needs. Hell, it isn't even trying to be beautiful or to have any order other than a chronological timeline and upvoted posts!
I loathe all ads, so HN is great in that aspect. The design is good with its beige and orange: very simple, no pretentiousness. Also, the community is smart and usually thoughtful in both replies and posts. It's really the only 'social media' I participate in.
The user base is sufficiently pretentious to bring the site up to the expected pretentiousness baseline for an SV product. Just needs a bit of quiet ukulele music in the background to really get it over the line consistently.
That's definitely true. Without messing up a good thing (HN), I wonder though how similar community platforms could be constructed incrementally better in terms of reasonableness, fairness, ethical/principled/respectful debate, curiosity, quality people, and signal.
These might be bad analogies, but I'm thinking of the lack of flash a la Drudge Report (haven't seen it in years) or the old Fry's Electronics (stores and their website). I think it somewhat deters engagement addiction and focuses attention on content.
It is a toxic wasteland, though, at least sometimes. Also depends on who you are and how you experience the world - HN can be a very ugly place.
HN is no cakewalk. There are lots of very vocal climate deniers, homophobes, Nazis, etc. here. I've been called hateful slurs on HN that nobody has said to me anywhere else. Much of this flies under the radar of the mods and the users are frequently not warned or banned.
HN suffers all the same problems as Twitter or any of the others.
The difference is visibility. A few hours into a conversation, the top two comments are, more often than not, a well-reasoned argument for one side and a well-reasoned rebuttal. If you start in on a thread while it's early, you'll see a lot of garbage, but that tends to float to the bottom over time. In general, the HN system (tech+mods+community) rewards thoughtful content and penalizes shallow nonsense.
Twitter is the opposite. The most inflammatory comments trigger the most engagement, and so get the most visibility.
I don't agree with that. I think that hours in, the top comments often get more offensive here, not less. The garbage floating to the bottom is not my experience here.
> In general, the HN system (tech+mods+community) rewards thoughtful content and penalizes shallow nonsense
I don't see this happening on HN. The shallow nonsense isn't the problem, it's the hateful opinions and "carefully reasoned, smart sounding" racism that is the problem. Calling it shallow nonsense makes it sound like no big deal or low effort hate posts. But that's not what I'm talking about.
People say the worst things here but they use a large vocabulary and so it seems to get a pass. The hate here is very similar to the hate I see elsewhere and often it is much much worse here than on Twitter, in my personal experience.
Ah, okay, I understand better what you're saying. So you do perceive Twitter as different than HN, but only in quality of writing, not in lack of hateful content.
Can you give an example of a thread that turned out that way? I'm genuinely curious if I've been missing something, or if I've just managed to steer clear of topics that end up like that.
I'm on Hacker News more than I care to admit and I don't see evidence of this widespread racism you proclaim. Please provide evidence if you're going to make these wild accusations.
I'm scratching my head on this one. There are passive-aggressive haters in the world, but I don't see much of that around here. People around these parts usually keep their biases to themselves or outright flaunt them and get hammered for it.
$5 words instead of plain speak is an accessibility problem but anti-intellectualism never solved anything. Maybe inferiority feelings or catastrophizing? Do what I do, subscribe to the Merriam-Webster Word of the Day. :) Go through the GRE prep materials if you want a bigger vocab. Heck, I would get a used unabridged dictionary and make it a point to work from cover-to-cover. Watch those obnubilated smarty-pants shudder in fear. :)
Twitter is dominated by outrage and disinformation. HN very much is not. You may still encounter conversations with people who hold terrible views, but they remain conversations.
I've never been downvoted for making a controversial point on HN. And, I have ONLY been downvoted for making glib, lazy, or intellectually weak arguments. This is exactly how it should be.
Your experience on HN does not resemble mine at all. I'm frequently downvoted for controversial opinions. And I see a lot of outrage and disinformation here.
And on twitter I see little outrage and disinformation. Our experiences are so far apart on social media that I'm not sure anecdotes will do much for the conversation here.
Anything critical of the failure that is the United States, its crumbling democracy, or the frank insanity inflicted upon the world by the psychopaths operating out of Silicon Valley. Unbridled capitalism of the American variety is cruel, and big tech is complicit in propagating antidemocratic efforts through walled gardens and mass tailored propaganda.
How's that?
>I've never been downvoted for making a controversial point on HN. And, I have ONLY been downvoted for making glib, lazy, or intellectually weak arguments. This is exactly how it should be.
That's probably because you don't post any opinions that the HN hivemind finds controversial. Stray outside the lines just a bit and expect moderator censure and downvotes/flags.
It's hard but possible, I find, to post and discuss controversial things. You have to be very careful how you present the topic, and you have to put a lot more effort into the discussion than you normally might to make sure it doesn't devolve. But if you wade through, cut off the drive-by commenters who misunderstand your position because they aren't actually bothering to think critically about it, and try to keep the discussion on track, you sometimes get very interesting discussions out of it.
Sometimes I end up softening or changing someone's position on something, sometimes I soften or change mine or learn a lot of new things, and I have to imagine that happens with some lurkers as well, and I'm not sure what more I could hope for, besides wishing it was easier sometimes.
Compare this to Reddit, where some high-traffic sub-Reddits (/r/atheism coughcough) delete links to scientific evidence showing AA efficacy: https://archive.is/gEXfA Why let facts get in the way of a good social networking rage fest?
Certain topics like politics or war or religion or LGBT will always tend to produce flame wars and "toxic wasteland" no matter which platform. The technical threads are usually a lot better.
Please provide some specific examples. Even with examples, the important question is about how frequent they are, but without examples, your statement is just your personal experience.
PS: Your usage of “smart racism,” “nazis,” “homophobes” etc are strong bayesian evidence (to me) that you’re just looking to guilt-trip people and victimize yourself. The only kind of racism I have seen on HN is the kind I see literally everywhere: people don’t really care about people not in their bubbles. This is better named selfishness than racism, and it’s inherent in human nature. (If you’re curious, I am middle-eastern, and not exactly binary myself; I have been abused when I was younger for being “transgenderish.” Which kind of forced me to adopt more conforming, binary social masks.)
I've seen explicit scientistic (scientific-sounding) racism on HN somewhat frequently, usually as a minority opinion but somewhat tolerated, generally in discussions about IQ and material like The Bell Curve. Homophobia I've seen much more rarely, though maybe I didn't hit the right topics.
I've also seen anti-religious sentiment and anti-Chinese nationalism popping up pretty prominently every now and again. Climate change denial is also rarely missing from any longer conversation about climate.
Dang's response came 4 days after the comment was written. Before being flagged, it was downvoted, but not to hell - it was gray, but still readable, like most of that poster's other comments. So I would say that someone may be justified in feeling somewhat attacked and not that well defended on such topics. As I said, it's a minority opinion, but it is somewhat tolerated.
Ideally, when someone claims that they can tell how smart someone is from how they look, that should be downvoted to oblivion and receive overwhelming counter-evidence or no attention at all.
Of course, what happened was quite OK, and much better than many other corners of the internet. But still, it's proof that that there is some explicit racism on HN, not just people not caring about others.
I understand the linearity of the Schrödinger equation under a model which MW entails. However, you are sidestepping the randomness involved in the choice of world that you reside in. You can dress it up all you want, but your consciousness is bound to one world of the many worlds. And that is a de facto random phenomenon.
I’ve been abused all my life by religion, and the Iranian regime (which all evidence points to being much milder than the Chinese dictatorship). You’re no different than your predecessors; What are you doing to help people that actually goes against your bubble’s conventions? It’s not exactly an achievement to be pro-gay etc when gays etc are your bubble’s current fad.
I do not understand your point in any way. You claimed that there are no racists on HN and that claiming there are suggests someone is a drama queen. I pointed to some racists on HN to show that at least some do exist.
What does this have to do with doing something about the Iranian regime or Chinese regime or any other regime? I barely even discussed homophobia.
And note that you can be anti-fundamentalist without being anti-religion, and you can be against the Chinese regime without being against Chinese people (just like you can be anti-Israel without being antisemitic).
I guess the issue is keeping moderation consistent (like bar exam grading) coupled with a manageable size of community that handles scaling. I wonder if social media platforms could cluster 10-25 people together into "troops" with a "troop leader" and a "guidance counselor." This way, it's not just a sea of individuals floating along ephemerally disconnected, but brings some tribal belonging and support back that people yearn for.
I've since come to believe that to have a high quality medium, you really need not just the editorial guidelines I've mentioned, but also someone who interprets those guidelines in the intended way, and the ability to enforce them properly.
That means you can, at best, have a small team of moderators/community managers, likely with the person who has manifested the editorial guidelines at the top. This does not scale, so the community is limited in size.
When I think back to the times of TV channels, professional magazines, radio shows etc., I remember how amazing the quality of that content could be. Reading the same magazines printed back then today confirms that to me.
Curated content wins.
Sure, some TV channels and magazines were terrible instead, but that's just because I did not agree with their curation.
Slashdot died from the incoming content, not the posts, as far as I recall from those days. Digg suffered the same fate. Reddit has so far been kept from it since moderators can only pin a few posts and only have "negative" control of the posts that appear at the top.
Yes, I was there. :) 23 years ago ;) I meant some sort of mechanism to improve the training/fairness/consistency of moderators rather than merely double-checking them.
Their metamoderation was innovative but ultimately pointless.
Instead of having one popularity contest, it was like a popularity contest that qualified you for another popularity contest. Theoretically the metamods were "good" posters, but being a "good" poster was ridiculously easy - you could just rack up karma by parroting the hivemind and bashing Microsoft or whatever.
That's true. If a community platform's moderation were more professional like the example I used of bar exam graders, who grade practice samples and do other calibration exercises, it would improve the signal and tend to reduce biases if the culture were one of strict professionalism.
HN has basically one moderator. It's just that we've trained an army of downvoters and flaggers that mostly clobber anything "un-HN" almost immediately. There's a community here and it defends itself.
Most subject-specific forums are actually ok. Because posting there demonstrates that you have something worthwhile to care about. Of course one can troll and flamebait on such forums as well but it takes effort and it's not going to seriously rile people up about anything. Twitter is poles apart from that, it's like being in a different universe.
Also, I noticed how most underdog / less socially-acceptable lifestyle/interest forums tend to be pleasant, humorous, and reasonable. The other aspect may be that marginalized people (without chip-on-the-shoulder resentment) know what it feels like to be othered / not treated well and go out of their way to be friendlier. For example, I can't remember any LGBT+ people who aren't cool, decent, and sociable... and I'm the goofy, straight, ally interloper stealing all the pretty cis girls (or they're stealing me, IDK).
On the niche-interest side, where it's a small world, I think the coziness, reduced size, and inherent common interests also reinforce and promote better behavior and friendliness.
Twitter and such definitely throw unbounded numbers of random people at each other, and so the odds of clashing are astronomically-higher. In this alternate (mainstream) universe, the sad part is that social and online ideological Balkanization has cemented echo chambers of memetic civil war; a people divided-and-conquered.
The reason for that is that HN is not run for direct profit, has a charter, is not afraid to moderate content (via flagging) and to bar people (via marking them dead), and actively hunts spam and trolls.
If Twitter/FB were to do this, they'd have a fifth of the customer base, but more sane content.
I think the major ingredient for HN is its focus on topics that are interesting to "technical" people. When you focus on a particular set of activities, it becomes easier to just say no to a lot of other content.
I don't have Twitter, but I check some users (like the pico8 dev) once a week for interesting content. I don't see anything off-topic there, and it's very nice; sometimes I learn something even in the replies. Same with certain subreddits. Just consuming in polling mode helps a lot.
HN is not social media. Social media has friends/followers, chats, inboxes, timelines... stuff like that. Social media involves some insularity. This is how so-called fake news spreads, because insular networks do not get outside feedback.
If they introduced targeted ads, or if upvotes/interactions could be monetized on HN, even with the same community you would start to see deterioration, IMHO.
Cool-headed, interesting, or curious content does not generate as much click-through as controversial, conspiracy-theory, outrageous, or hateful content. It's interesting that there's no ban on political or controversial content on HN, but still, you don't see it take over the platform. The incentive is simply not there!
HN has a decently big scale. Somehow it works because of heavy-handed moderation: manual, crowdsourced, and automated.
I think twitter really needs a downvote button. But they prefer relying more on their AIs instead of crowdsourced moderation. Probably so they can sell more ads.
Oh my heavens no. There are certain topics that even this site can't discuss in good faith without groupthink, hurt feelings, big egos, and so forth. No, I will not list those topics here to avoid invoking them, but most of them are political. Sometimes they get just as toxic as twitter and reddit, just with less namecalling since that'll get you flagged off with a quickness.
On that note: If even HN can't do it, I think some of these topics can't be discussed online at all. Here you've got great moderation, a high SNR, and vanishingly few of the pathologies that infest most web fora. Almost everything else is a step down in quality.
"If even HN can't do it, I think some of these topics can't be discussed online at all."
They can be, just not in any format where anyone can post, let's say, 10 paragraphs of whatever, and then hundreds of people can jam their 40-paragraph rebuttals and threats right underneath it. While convenient for many purposes, formats where the interactions are this tight and integrated are not the only formats.
You need something more like a weblog-structured community, where people can post their lengthy thoughts at their leisure, and others can post their own rebuttals on their own weblogs, but I think it's actually important that there not be tight integration such that everyone is getting a phone notification every time someone posts some link to them.
I would agree that online platforms that stick everyone into one metaphorical mosh pit have certain topics that simply can't be discussed reasonably, but "metaphorical mosh pit" isn't the only option.
If we can respectfully disagree and see each other's points of view without ghosting each other, then we're dialoguin'. Otherwise, we're just talking past each other, seeking karma brownie points, or taking out our frustrations... and then what point is there to participating if there isn't meaningful communication?
The political differences here are often stark. You also have a fair amount of Independents here which makes this place a bit more tolerable for me. I really can't stand left-wing or right-wing ideologues, much less the extremists.
HN caters to people from all across the US (most of the audience is outside Silicon Valley, and the global audience continues to grow, based on dang's postings).
You could say it's mostly male, but I've been seeing more usernames with women's names in them.
I have a dream that one day, there will be no political parties, only nuanced, informed debate on stand-alone issues. Tribal groupthink is one of my pet peeves (isn't that the tao of flat-earthers?) because it often places loyalty over honesty. Elections are almost as bad because they've devolved into celebrity popularity contests.
There is only one party in the United States, the Property Party, and it has two right wings: Republican and Democrat. Republicans are a bit stupider, more rigid, more doctrinaire in their laissez-faire capitalism than the Democrats, who are cuter, prettier, a bit more corrupt — until recently — and more willing than the Republicans to make small adjustments when the poor, the black, the anti-imperialists get out of hand. But, essentially, there is no difference between the two parties.
Our only political party has two right wings, one called Republican, the other Democratic. But Henry Adams figured all that out back in the 1890s. "We have a single system," he wrote, and "in that system the only question is the price at which the proletariat is to be bought and sold, the bread and circuses."
― Gore Vidal
Maybe it's me, but I don't think about participants' gender or if there's enough/too much of any particular attribute group. I infer your point is that HN extends well-beyond the stereotypical academic, software engineer, or tech entrepreneur: male, Caucasian/Asian/Indian subcontinental, high-income or college student, SF to Milpitas.
Vidal's assessment of Democrats is a bit rosy, I think. This reflects the average dishonesty in politics, though. An equally rosy picture of Republicans, or an equally bleak picture of Democrats (or both), would've made better sense in an honest reflection.
The rest of this is pretty spot on, and your assessment of my sentiment was spot on.
I don't think that's the only thing going on. Twitter has code written and an ML model trained to actively and intentionally surface material of indeterminate quality that is likely to drive engagement. HN has people using moderation and upvoting to surface material of high quality assuming that drives engagement.
I think there's a virtuous spiral when the one (specific platform features, philosophy, and conventions) reinforces the other (people's perceptions, attitudes, and interactions), and people care about excellence.
The for-profit, outrage-seeking, clickbait model of "engagement" is the opposite of that.
We can't fix everything with technology (if there's still people problems) or with good people (if the platform fails them) alone.
HN started niche and attracted a narrow audience intent on productive communication; mostly college grads and/or positive-attitude people (successful attributes, even if a bit rowdy and troublemaking at times), and not many lottery ticket buyers [1]. It is very open, so it could be overwhelmed by lower-signal crowds over time should it hit mainstream visibility.
Do some platforms need to limit the number of participants and do stack-ranking dismissals? IIRC, the ASW platform culled a bunch of accounts.
There have been studies on social media interactions (I can't recall the links atm, and am almost done posting from the loo :) and "captological" aspects that influence people's online perception, behaviors, and reactions. I think the problems are the people, the power they're given, the presence/lack of fairness they perceive, what they're presented with, and whether or not the community defends itself and its values strongly (I think dang does a Herculean job with this).
We have had moderated platforms for discussion, but when people don't like being told they're wrong (especially when they are) they create their own.
Personal details don't deter it, as shown by the various platforms that were inhabited by the alt-right. There's a cultural lack of responsibility for the truth.
Any open social media platform where you can choose who you follow and who sees or doesn’t see your content is effectively an invite-only platform.
Twitter is pretty close to that, except it’s default opt-in rather than default opt-out. Meaning everyone can see your content by default, rather than you having to explicitly allow rando’s to see your content and reply to you.
But if you follow people who make high-quality posts, and unfollow, mute, and/or block people who produce all noise and no signal, you’ll have a pretty good professional and personal networking experience.
Most other social media have ways of curating your feed, but you have to proactively do it, can’t just rely on the social media platform to do it for you.
> - reputation/karma that isn't apparent or chased
The more I think about those problems, the more I'm convinced up- and downvotes are a mistake in general. They can only cause damage and are completely useless, as they're not even used for the same thing by different people. For example, when someone gets 5 downvotes, it's probably for at least 2 different reasons, none of which are communicated to the poster. When I get a random downvote, I'd really like to know if it was warranted, but there's no way to find out.
If any kind of rating system had to exist, I'd vote for something like tags, with users being able to tag any content with any 1-2 words, and frequent ones visible without extra clicks. For one, this would give more nuanced information, and at the same time it would make tons of content much easier to find or filter.
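A rough sketch of how that tag mechanism might work (the function name, thresholds, and data shape are all my own assumptions, just to make the idea concrete):

```python
from collections import Counter

def visible_tags(tag_events, min_count=3, top_n=2):
    """Surface only the tags that several users independently applied.

    tag_events: list of (user_id, tag) pairs; each user may tag a post
    with one or two short words.
    Returns up to top_n tags applied by at least min_count distinct users.
    """
    # Count each tag at most once per user so one user can't dominate it.
    unique = {(user, tag) for user, tag in tag_events}
    counts = Counter(tag for _user, tag in unique)
    return [tag for tag, n in counts.most_common(top_n) if n >= min_count]

events = [(1, "insightful"), (2, "insightful"), (3, "insightful"),
          (1, "funny"), (4, "flamebait")]
print(visible_tags(events))  # ['insightful']
```

This would also give the poster actionable feedback: "flamebait" applied four times says far more than four anonymous downvotes.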
That reminds me of the slashdot moderation system - when a user gets modpoints they get to spend them on posts as they browse and indicate why they spent that modpoint that way (e.g. 'Troll' or 'Flamebait').
Having used many broken moderation systems and designed a few (also broken) myself, a few observations.
- Popularity itself is a very poor metric for quality. It's mostly a metric for ... popularity. Which is to say: broad appeal, simplicity, emotive appeal (or engagement), and brevity. This does however correspond reasonably well to sales and advertising metrics.
- My own goal tends toward maximised overall quality, with a high favouring of truth value and relevance.
- There's some value to a multi-point rating scale. This is called a "Likert scale", typically with an odd number of points (3, 5, 7, ...), most commonly encountered as a star-scale system. Amazon and Uber are the most familiar of these today, and highlight failure modes. If users' ratings are rebalanced based on their own average rating, at least some of the issues go away (e.g., a very positive rater giving out 5/5 will have those ratings discounted; a conservative rater offering 3/5 on average would see those uprated). The adjusted average becomes the rebalanced rating.
- Note that a capped cumulative score is not the same as an averaged Likert score. Slashdot's moderating system is an example of the former. It ... kind of works but mostly doesn't. Highly-ranked content tends to be good, but much content deserving higher ratings is utterly ignored.
- Taking the number of interactions and applying a logarithmic function tends to give a renormalised popularity score. That is, on a log-log basis, you'll tend to see linear scaling from "1 person liked this" to "10 billion people liked this" (roughly the range of any current global-scale ratings system). See also: power-law distributions, the Zipf function.
- Unbiased and uncorrupted expertise should rate more strongly. In averaging the inputs of 300 passengers + 2 pilots for an airplane's flight controls, my preferred weighting is roughly 300×0 and 2×1. Truth or competence are not popularity games.
- Sometimes a distinct "experts" vs. "everyone" scoring is useful. I've recently seen an argument that film reviews accomplish this, with the expert reviewers' scores setting expectations for "what kind of film is this" and the popular rating for "how well did this film meet established expectations"? There are very good bad films, and very bad good films, as well as very bad bad films.
- "The wisdom of crowds" starts failing rapidly where the crowd is motivated, gamed, bought, or otherwise influenced. Such behaviour must* be severely addressed if overall trust in a ratings system is to remain.
- Areas of excellence ("funny", "informative", "interesting", etc.) are somewhat useful, but very often the cost of acquiring that information is excessively high. Indirect measures of attributes may be more useful, and there's some research in this area (Microsoft conducted studies on classification of Usenet threads based on their "shape" in the 2000s). Simply based on the structure of reply chains, there were useful classifications: "dead post", "troll", "flamewar", "simple question everybody can answer", "hard question many can guess at but one expert knows the answer", etc.
- Actual engagement with content, even just a vote or other action, is a small fraction of total views. Encouraging more rating behaviour often backfires. Make do with the data that occurs naturally; incentivised contribution skews results.
- Sortition in ratings may be useful. It greatly increases the costs of gaming.
- As is sortition of the presented content. Where it's not certain what is (or isn't) highly-ranked content, presenting different selections to different reader cohorts can help minimise popularity bias effects.
- Admitting that any achieved ratings score is at best a rough guess of the ground truth is tremendously useful. Fuzzing ratings based on the likely error can help balance out low-information states in trying to assess ratings.
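Two of the ideas above, per-rater rebalancing of Likert scores and fuzzing a displayed score by its likely error, can be sketched roughly like this (the midpoint-shift rule and the 1/sqrt(n) error estimate are my own simplifications, not anyone's production formula):

```python
import math
import random

MIDPOINT = 3.0  # centre of a 1-5 Likert scale

def rebalanced_score(item_ratings, user_averages):
    """Average an item's ratings after shifting each rater to the midpoint.

    item_ratings: {user: this user's 1-5 rating of the item}
    user_averages: {user: this user's average rating across all items}
    A habitual 5-star rater is discounted; a stingy rater is uprated.
    """
    adjusted = [r + (MIDPOINT - user_averages[u])
                for u, r in item_ratings.items()]
    return sum(adjusted) / len(adjusted)

def fuzzed_score(score, n_ratings, rng=random):
    """Add noise proportional to a crude standard error, so items with
    few ratings aren't locked into a premature rank."""
    return score + rng.gauss(0, 1.0 / math.sqrt(n_ratings))

# alice rates everything near 5, bob near 3; once rebalanced,
# their raw 5 and 3 turn out to express roughly the same opinion.
print(round(rebalanced_score({"alice": 5, "bob": 3},
                             {"alice": 4.8, "bob": 2.9}), 2))  # 3.15
```

Sorting by a freshly fuzzed score on each page render is also one cheap way to get the "different selections to different reader cohorts" effect mentioned above.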
Many social media platforms have tried this and failed. Even at the lowest level of having to get an invite from someone you know, it's never worked:
- Clubhouse
- Google+
- ELLO
- Mastodon
The paradox here is you need people to generate sustainable communities. When you don't have enough people, users will stop using the platform. Another classic case of this is all the "decentralized" social platforms like Diaspora and others. Great idea, great implementation, but without enough people, it's doomed to fail.
I agree nearly completely about the inherent failure-proneness of invite-only networks, and I participated in three of the four you mention. I'd dispute that Mastodon is invite-only, however.
That said, the exception is a network created for an extant community. In fact, most of the major successful social media networks have emerged from just such a community, and quite frequently one that's academically oriented.
Email, Usenet, and Facebook all emerged out of academia. Email and Usenet with early Arpanet and major research universities. Facebook was once literally Harvard. Several other early networks such as The WELL and Slashdot were strongly adjacent to these.
Several early BBS systems emerged out of or alongside military service communities. I don't recall if it was AOL, Prodigy, or another early network which was strongly popular among US military personnel and families (a large, reasonably cohesive community, widely distributed, with contacts and ongoing communications in distant locations).
YC's HN would be another example.
But generally, creating an early cohesive community is a challenge, and many of the tricks for short-cutting this process tend also to greatly diminish the long-term value and prospects of the discussion platform.
My own contention is that Google+ actually did have a strong internal-to-the-network (not just Google) community (though one that excluded a great many people). I feel the social network hurt itself by trying to open too quickly (Ello certainly did), as well as by Google's own greatly bifurcated affinity groups: technologists on the one hand, and marketers on the other. Marketing/advertising is toxic to social cohesion, and this showed early in G+ evolution.
> - login required to view content, no search engine spidering
Why do you think this is a good thing?
> - servers and legally based in a country the US and EU cannot control
I see two ways that could go: either somewhere that China and/or Russia have control over, or in an unstable third-world dictatorship. Do you have any specific countries where none of the above would apply, or do you prefer one of the latter two to the US and EU?
No search engine spidering means discussions aren't monetized or ripped off, discussions can be freer (halfway to being YC dinners), and there are potentially greater incentives to apply.
Most of those things are good, but they address symptoms more than root causes I think.
The root cause is that platforms like Twitter rely on engagement (and maybe more importantly, growth of said engagement) for their lifeblood.
When that's the case, the incentive will always be to increase engagement at all costs and nothing drives engagement like flamewars and other lowest common denominator garbage.
Additionally, as long as the social currency is "how much other members of the userbase like your posts," you'll wind up with either a single hivemind or multiple warring factions, IMO (e.g. conservatives vs. liberals on FB).
HN manages to keep its discourse level fairly high because of this, I believe. HN does not need to grow nor generate revenue directly. A Twitter-alike, curated as strictly as HN, might work. It might even be able to turn a profit, if the goal was sustainable profit and not some impossible dream of unbounded growth concocted by investors wanting the next trillion-dollar hit.
> So I tried to teach critical thinking to random people on Twitter
Everyone everywhere on the internet thinks that the "other side" lacks critical thinking skills. I'm not surprised your effort failed; I'm sure if you offered to teach me critical thinking skills, I'd wonder who the heck you thought you were.
Frankly, your confidence in your own impeccable critical thinking skills casts doubt. The smartest people are those who know they can be deceived. If you don't have the humility to check your own reasoning, then you are probably wrong about something.
I wouldn’t offer to teach critical thinking skills; that would be ridiculous. Instead, I would probe with questions and get them to look at what they were posting more clearly, or refer back to actual sources.
I also said it was a fool's errand - something that had little chance of succeeding against the waves of misinformation on Twitter. And finally, you’re right that knowing you can be wrong and challenging your own beliefs is fundamental to critical thinking.
I think that's fair. The internet is full of people who won't even go to the effort of reading the article they are confidently promoting as the truth. Sometimes something as simple as "actually, this study is about lions, not people" is enough to bust some people's bubbles.
> Everyone everywhere on the internet thinks that the "other side" lacks critical thinking skills.
It's not even about critical thinking skills. The 'other side' is often starting with a completely different set of facts. The only difference being which ones were highlighted and which were omitted.
This doesn't even touch on straight up falsities yet.
Until the sides can agree on some base first principles, it's going to be a hard problem to solve.
> Everyone everywhere on the internet thinks that the "other side" lacks critical thinking skills.
This point cannot be emphasized enough. I have encountered people on the opposing side of an issue who have stronger critical thinking skills than people I agree with (and probably even myself). The differences come about due to differences in the foundations of our knowledge, or on pivotal points where neither side can claim to have a definitive answer.
I find that a lot of the "big" issues boil down to trust - in businesses, in Wall Street, in the government, in the justice system.
If one person's POV is that <institution> should help while the other person's is that <institution> can only hurt, then they are never going to agree no matter how many links and memes and snarky comebacks they throw at each other.
Sounds more like relativism than nihilism. Relativism is extremely common in mainstream political and moral discourse, especially in mass media journalism. This often takes the form of "bothsidesism" or "false balance," particularly in political discourse. So often, the merits of any claim (about politics, morality, scientific facts, even very basic claims about well-documented events that happened very recently, etc.) are judged by nothing except how strongly people appear to believe in them.
You see this a lot on Hacker News too, like when the discussion touches anything related to moderation, community standards/guidelines, censorship, fact-checking, etc. A particularly popular viewpoint around here seems to be that the government (or sometimes, any powerful corporation) cannot possibly be allowed to be involved in determining the validity of any claim, particularly if that claim is controversial, i.e. there are prominent people on both sides who appear to feel strongly about the claim.
Many people can teach, but to do it successfully they must come from a place of respect and trust. If someone I know wants to teach me about their field of knowledge, that will be successful. If an anonymous stranger presents information with an attitude of "I think this will interest you as it interested me", that will be successful.
If an anonymous stranger comes to me with "Let me tell you that how you think is wrong" - yea, I don't think I'm going to buy that.
What's the difference between this and pointing out how someone's argument is flawed? i.e. "You said 'X therefore Y', but following that reasoning you could say 'X implies (obviously-wrong) Z'. X is not logically incompatible with !Y because..."
(Not that this is ever successful in places like Twitter.)
"Argument is flawed" is still in the eyes of the poster. I certainly wouldn't just unthinkingly accept this sort of feedback, and the simple truth is that internet "sources" are rarely trustworthy beyond the writer's opinion.
It is still a matter of trust. Approach me with respect and I'll consider your POV. Approach me with "Your reasoning process is flawed beyond your understanding" and really - who the heck are you? In the anonymity of the internet you could be anyone.
It’s more that “teach critical thinking” often just means “condescend to an internet stranger about how flawed their thought process is” which, even if their thinking is flawed, isn’t exactly a winning strategy for helping people See The Light and whatnot.
But a person posting on HN is statistically more likely to be the critical thinker when compared to the Twitter baseline.
The whole thing is philosophically weird, but practically speaking, one can somewhat know that people saying aliens built the pyramids are the ones lacking critical thinking. For example, I don’t think people promoting the cancel culture lack critical thinking, even though I’d bet on it being a long-term disaster. But most anti-vaccers are pretty obviously lacking some critical thinking skills.
It’s generally the ability to tell your political interests apart from your epistemic knowledge. In simpler terms, the ability to engage less in wishful thinking. While perfection is impossible, I do think it’s possible to improve in this ability. Proving it to others is another challenge; you probably need to predict counterintuitive results consistently for people to somewhat trust you.
I mention this every time "critical thinking" is brought up.
Critical thinking is a skill that is almost useless to most people and can lead to being a net negative. It's the skill of critique. One can be amazing at critical thinking and be absurdly terrible at constructive thinking. Politics in general needs far less critique and far more construction. It's really bad to get people very aware of just how badly they're exploited but then to give them no potential solutions to solve it. That's basically what wokism has done recently. Every solution proposed by them is so unpalatable to the rest of the nation that there is no place for constructing new policies.
The left has this problem especially bad, since the radical left makes being really good at critique a whole component of its intellectual tradition.
Yet there are many differences between the folk spreading information online and the folks you encounter in the pub: the former will often posture themselves as an authority, while the latter you may know well enough to trust or distrust their authority on a particular topic. When people seek authority online, they are typically seeking someone who they agree with. While authority may be found in a pub, it is not really a place where one seeks it.
All of this makes educating people in venues like Twitter (and some of these exist outside of the online world) a very difficult prospect.
Yup. I think, as others like Chris Hedges have noted, individuals in Western society have become increasingly atomized and isolated, likely by design to sell more products and keep people feeling powerless/helpless: we're so close (in physical and online proximity), but so far apart (in ideology, wisdom, knowledge, and experience).
Instead of lionizing celebrities, money, infamy, or hyper-individualism, maybe it would be worth respecting wisdom, mastery, expertise, monetary-agnostic accomplishment, and insightfulness.
The book The Mirror Effect by Dr. Drew comes to mind.
> Instead of lionizing celebrities, money, infamy, or hyper-individualism, maybe it would be worth respecting wisdom, mastery, expertise, monetary-agnostic accomplishment, and insightfulness.
I wonder if that's ever happened in the history of humanity. I have my doubts.
I also wonder, but perhaps some of these may approximate more "utopianish" collective community integration:
- Genuine hippie communes (Do kibbutzes count?)
- Amish
- Indigenous tribes where elders are respected
- Rural/suburban Minnesotans, because they tend to be hardy in dealing with life and climate struggles, and unimpressed by immodesty
- In the old days (80's/90's), my grandparents knew most of their neighbors, grocery store cashiers, butcher, hair stylist, and a number of other people well. What ever happened to that? I don't even know any of the neighbors in my apartment complex despite introducing myself, and one (Louis Vuitton-strutting cliché) woman neighbor next door won't even acknowledge my presence with pleasantries in passing. WTH.
Politicians are the worst people to follow on Twitter outside of maybe conspiracy theorists. Well, some politicians are both.
It's better to follow creative people. They tend to have much more interesting things to say. Follow people like John Carmack, Simone Giertz, or The Onion instead.
You almost have to follow politicians or news if you want to keep up on the vaccine and re-opening front. Following news outlets is about the same in terms of replies.
Well, you could get your news from news sites directly rather than following them on Twitter! Then you're very unlikely to see stupid replies. (But choose news sites that either don't have comments at all or at least hide them by default.)
My Twitter experience isn't nearly as good as it was some years ago, to be sure, but I follow fairly few people outside the "friends of friends" perimeter and rarely follow people who are given to performative outrage, even if they are people I generally agree with. While I do block people occasionally, I'm more free with "mute temporarily", "mute forever" and, importantly, "turn off retweets" -- that can have an almost magically cleansing effect on your timeline.
Following politicians is fine. It's the replies to politicians that are never going to be worthwhile. There's no discussion there, just rants and cheers.
> I had a brief foray into Twitter, but had to stop. I followed my senator
I very much find the Twitter experience is what you make of it. I did it wrong the first time I tried twitter, and I hated and abandoned it, too.
Second time around, I started with a handful of tech people that post interesting content or work in projects I'm involved or interested in, and organically grew from there. I also follow a couple people that post local (to my small city) traffic/news/etc. And within the last year, I follow some people that post COVID stuff about my region, who produce charts and stuff that are 10x more useful than the official government sources (eg: updated and realistic R calculations, include charts with hospitalizations/deaths, etc).
What is absolutely not useful is anything political (the replies to COVID stuff tend to get political, so I also ignore that), or pretty much anything in "trending".
Also don't be afraid to mute or unfollow people, and click "Not interested in this -> show fewer retweets/likes from this person" -- all things that have made it tolerable and even useful. If disinformation/lies or other similar nonsense starts getting in my feed, I do what I need to get rid of it. This has meant sometimes unfollowing someone I otherwise like (eg, they're replying that disinformation and causing fights) but honestly, it's just not worth it to me.
> Does it makes sense to report someone for bad science?
It would be lovely to have a platform/forum where the whole concept was just that the moderation would ban people not only for spreading misinformation or making ad-hominem attacks, but also for applying unsound logic / not citing sources when asked / etc. All the same stuff that'd get a journal paper rejected during peer-review.
(With public records of moderator decisions, and the ability to appeal a decision; but where the "appeals process" just translates to your post going through a Slashdot-like "bunch of regular users given temporary moderation duties approve/deny your post" — which, given the type of user who'd want to be on a platform like this, likely wouldn't be any more friendly to your post than the mods would be.)
It'd sure be a niche platform, but that'd match well with how much work the moderation staff would have to do to keep up with discussion on it. I'd pay to be there!
(Yes, this is what scientific journals were originally supposed to be: heavily-moderated public forums for conversation between scientists. They don't serve this function well any more, as they've been parasitized by the function of serving the needs of academic clout-seekers.)
It's an extremely hard problem to solve. How do you hire the moderators, and how do you track whether they're doing a good job? You'd need to hire experts in multiple fields. Things get especially tricky when you go into super-specialized fields where only a person working in that field can smell the BS.
I work in biotech, and let's pretend I'm an expert on a topic, say immunology. When I get home from work, what would motivate me to sift through countless posts of misinformation and flag them? No amount of money is going to persuade me - but that's just me, of course.
> When I get home from work, what would motivate me to sift through countless posts about misinformation and flag them?
Turn it around. Make it like Reddit's /new: have moderators able to sift through countless posts about misinformation and approve the good ones. It's not a large difference in what moderators end up doing — they still have to at least skim over all the misinformation. But it's psychologically very different — you can just "walk away" from annoying things that stink of quackery up-front, while "engaging with" only the things that seem good, and eventually "upvoting" the things that still seem good even after you've read them carefully.
Yes, I'm actually suggesting that every post on such a site would go through a moderation queue. (Just one that any user can dip into to look at, if they like, but only moderators can actually vote on.) Or, if not every post, then a good sampling of them; or maybe every post from users with less than N approved posts.
The big effect of that would be that there wouldn't be "countless posts about misinformation." There'd be a couple, mostly by new users giving clear signals that they're just attackers of the community who don't actually want to become part of it (and therefore can just be banned wholesale). Noise would drop over time, because crackpots wouldn't even get a short blip of engagement. They'd get none. Their account would die in the crib, never witnessed by anyone but moderators and curious /new viewers.
Combine it with a KYC mechanism (so users can't keep making new accounts) and the moderation load actually becomes reasonable.
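The approve-first moderation flow sketched in this thread (every post starts hidden in a queue, only approved posts become visible, and repeat rejections trend toward a ban) can be illustrated with a tiny state machine. This is purely a sketch of the idea as described above; all class names, fields, and the ban threshold are hypothetical, not any real platform's API:

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    PENDING = "pending"    # visible only to moderators and curious /new viewers
    APPROVED = "approved"  # visible to everyone
    REJECTED = "rejected"  # never shown; counts toward banning the author


@dataclass
class Post:
    author: str
    body: str
    status: Status = Status.PENDING


@dataclass
class ModerationQueue:
    posts: list = field(default_factory=list)
    rejections: dict = field(default_factory=dict)  # author -> rejection count
    ban_threshold: int = 3  # hypothetical cutoff

    def submit(self, author: str, body: str) -> Post:
        """New posts enter the queue hidden, not the public feed."""
        post = Post(author, body)
        self.posts.append(post)
        return post

    def approve(self, post: Post) -> None:
        """A moderator 'upvotes' a good post into public visibility."""
        post.status = Status.APPROVED

    def reject(self, post: Post) -> None:
        """A rejected post stays invisible and counts against its author."""
        post.status = Status.REJECTED
        self.rejections[post.author] = self.rejections.get(post.author, 0) + 1

    def is_banned(self, author: str) -> bool:
        """Crackpot accounts 'die in the crib' after repeated rejections."""
        return self.rejections.get(author, 0) >= self.ban_threshold

    def visible(self) -> list:
        """Only approved posts ever reach ordinary readers."""
        return [p for p in self.posts if p.status is Status.APPROVED]
```

The psychological point from the comment above maps directly onto the code: moderators only ever call `approve` on posts they engage with, while everything else simply stays out of `visible()` and, past the threshold, takes its author with it.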
Assuming you managed to hire an army of experts who are good at moderating posts across various fields - oftentimes people also link to external articles/blogs/videos, so now the moderators have to read through multi-page documents or sit through hours of video. I just find a moderation system like that hard to practically implement for a platform like Twitter. And to be honest, I see this as going down a dark path - something that will lead to 'Ministry of Truth'-type entities with their own in-groups/fighting/politics.
That's one practical aspect. The second is that people are oftentimes misinformed themselves and are simply posting something they heard from a buddy or on TV/YouTube/etc. in good faith - they're not bad actors looking to attack the community.
Those are just my thoughts, but what do I know, I'm not an expert on these topics :)
> Assuming you managed to hire an army of experts who are good at moderating the posts across various fields - Often times people also link to external articles/blogs/videos so now the moderators have to read through several page documents or sit through hours of video.
The moderators would never be expected to audit "posts" (top-level links to big things that need a long analysis process), just comments.
Or rather — "posts" can be, in some sense, raw evidence/data, not assertions about anything in particular. (Think e.g. a link to a scientific study. Nobody assumes that the poster of such a link is asserting, through the link, that they believe the study's own conclusions to be true — just that they believe the study to be interesting in some way — worth discussing.)
Moderators would be expected to poke their head into a post link just long enough to confirm that it's that "artifact to be interpreted" kind of post. If it is, it's allowed to stand.
Whereas "comments" — those that are part of a post alongside the link, or those in reply/reference to a post — are almost always the conclusions drawn from the data, editorialization by the participant user(s). Those are what need moderating.
If you prune only the bad comments, then bad posts no longer matter, because their engagement (which is universally in the form of bad comments) disappears, and so the post itself is no longer "interesting" according to any kind of social recommendation system.
("Posts" can also be external-to-the-platform editorializations/opinion pieces. I would suggest just banning this type of content altogether. Moderator notices an external link is to an opinion piece? Out it goes. If you want to talk about some externally-written Op/Ed in the forum, you'd have to "import" it into the forum in full text — at which point it would be subject to moderation, and would also be the karmic responsibility of whoever chose to "import" it. You'd be claiming the words of the Op/Ed as your words. Like reading something into evidence in a court room — if it turns out to be faked evidence, that's libel on the part of whichever party introduced it.)
> are good at moderating the posts across various fields
I see what I think you're imagining here, but I never meant to imply that moderators are required to actually verify that statements are true (which requires domain knowledge), only to verify on a syntactic level that the poster is engaging in valid logic to derive conclusions from evidence via syllogisms/induction/etc. (which only requires an understanding of epistemics and rhetoric.) Basically, as long as the poster seems to be behaving in good faith, they're fine. It's up to the userbase themselves to notice whether the logic is sound — built on true assumptions.
In other words, the point of the moderators is to catch the same types of things a judge will notice and subtract points for in a debating society. But instead of points, your post just never shows up because it wasn't approved; and you edge closer to being banned.
> That's one practical aspect, the second is, people are often times misinformed themselves and are simply posting something they heard from their buddy or on TV/youtube/etc in good faith
I mean, that's the main thing I'd want to stop in its tracks: repeating things without first fact-checking them. Yes, preventing people from parroting things they've "heard somewhere" without citing an independent source, would kill 99% of potential discourse on such a platform. Well, good! What'd be left is the gold I want out of the platform in the first place: primary-source posters who can cite their own externally-verifiable data; secondary-source investigative-journalists who will find and cite someone else's externally-verifiable data to go along with their assertions; and people asking questions to those first two groups, making plans, and other types of rhetoric that don't translate to "is" claims about the world. Who cares about anything else?
That is your mistake. People do not respond to being told how they are wrong. Instead, people will change their minds when they see that their friends have.
>But then you imagine that they must be thinking the same thing.
I'm not so sure about that. There are some people who take seriously the fact that they could be wrong. They approach discussion with an open mind, listen to the other, look at evidence with a critical eye, and respond fairly.
Others come to a "discussion" from a totally different headspace. They are bitter, angry, and, to be frank, not very smart. It's not discussion they are after, because they aren't open to the possibility they may be wrong.
My theory is that this is the "rabble" that used to be led by the church, and then by mass media, and now by internet media. The internet is fractured into close-minded echo-chambers, and for many alive today, the classic error-modes of public online communication are new and exciting. The right is enamored with the brutal effectiveness of trolling. The left seems to prefer doxxing and blacklists. And internet companies care about metrics that don't capture any of the externalities of their platforms.
I'm amazed Twitter doesn't have a report option for misinformation like Facebook has. But what's worse is that if you report someone for making violent threats, you get an instant email, and I mean instant, saying they found nothing wrong.
As for elected officials, I wish they'd just disable replies altogether for them. Nothing good comes from it.
Global warming could be a cyclical process, just like covid could have come from a lab, and hand-drawn charts don't make it any more or less true. The two (claim and charts) can be separated, that is part of "critical thinking."
There are many people who can think critically who also happen to use Twitter just as any other public space. Frankly your comment comes off as condescending and aloof so I can see why it didn't go well.
I don’t think I said there weren’t, I was only referring to the posts that were clearly incorrect - they exist everywhere, but Twitter allows misinformed posters to have their speech elevated to the same level as sitting senators.
Dude, when ISIS was a big thing, I had a phase where I would try to debate the recruiters on Twitter (the guys who "thank the lions", write half-Arabic, half-English, and seem to live in the Middle East). That was a fun time :D
Twitter is such trash. I mean, on IRC at least you can split into groups, kick people out, and prevent unsignaled readership. On Twitter, millions can read random shit without segregation; it's awful.
I had to get away from it when the US required foreigners to disclose their Twitter accounts upon entry, with all my ISIS "friends"...