Another scam paper published in a “scientific” journal (whyevolutionistrue.com)
84 points by DanielBMarkham on Jan 17, 2021 | 46 comments


"All in all, in this study we present the results of our work with fishy birds (fide Baldassarre [1]). We hypothesize that, (1) despite climate change, it is still cold in Antarctica..."

With all the problems of academia, this had me laughing for a good few minutes.


Using a pizza as a basis for a research paper on fish, birds, fungus, beaks, etc?

I thought this was comedy gold. Good hacker read for a Sunday.


My sides are hurting.

And a flying fish has a higher bird-to-fish-appearance ratio than a barn swallow.


"Finally, we would like to acknowledge that this study would not have been possible had it not been for the predatory journal industry. Without it, academia and society would be a better place."

The acknowledgement is gold.


Academia fucked that up all on their own.


As a lay-person who reads scientific papers sometimes because they’re referenced on HN, how do I validate the credibility of a journal? Does a directory of journals exist with scoring, or is there a strategy I can use when evaluating a paper or journal to determine credibility?


One doesn't validate the credibility of a journal; one validates the credibility of the methodology in the research.

And it then turns out that most methodology, even in reputable journals, is rather wanting, with many objections that can be leveled against it.

A large amount of scientific research can't even be reproduced, and even where the cold data can be, the conclusions that follow from that data are often rather dubious leaps of faith.

It doesn't take much for something to be called “science”; it certainly doesn't take reproducibility, despite various claims to the contrary.


It absolutely makes sense to simply ignore articles based on the credibility of the journal; it's an effective and cheap first filter (and you absolutely need filters). There are many predatory journals (like this one) that will publish anything that's paid for; they probably even outnumber "real" journals, and it makes perfect sense to automatically discard them without reading.

There is a lot of noise already in "proper" journals, but in predatory journals the signal-to-noise ratio is so extremely low that it's not worth looking into the credibility of the paper's methodology; that's far more time and effort than the paper deserves. If it were any good, it would have been published in a better venue. If it could pass peer review, it would have been published in a venue that actually does peer review, as opposed to these (many) predatory journals that merely claim to. The authors have strong practical incentives not to publish there if they can avoid it, and the fact that they chose to anyway indicates that no respectable place would publish the paper.

Because of that, if a paper is published in a place like this, it is a completely reasonable prior to presume that the paper is overwhelmingly likely to be very bad, without even looking at the paper itself.


This is true; it works in one direction, but not in the other.

But the way the post I replied to was worded suggested that one can automatically trust "science", provided that it be published in a credible journal.

I also find that the most reputable journals are very often where the most sensational papers end up before any attempt at reproduction has been made, simply because the measured data was far from the null hypothesis, which may be entirely a statistical fluke and not hold up under a reproduction attempt.

It is really quite easy to obtain spectacular data as a fluke.


There are impact factors and top lists, and you can check out the Wikipedia page of the journal if it exists. You can google the "top journals for <field in question>". You can check the publisher; IEEE and Springer, for example, tend to be genuine.

But you still have to look at the individual paper, and can't believe something just based on a single article. Research articles are for sharing results among experts (and for advancing the careers of researchers); they are not aimed at laypeople. It's easy to misinterpret them if you lack the background knowledge.

Even in non-predatory journals, many results fail to replicate and are produced under publish-or-perish pressure. You're better off learning from textbooks, so you get information that has been verified, digested and distilled and represents consensus. Cutting-edge research proposes new ideas from one group of authors; it's not a consensus yet.


> You're better off learning from textbooks, so you get information that has been verified, digested and distilled and represents consensus.

A very large body of consensus ended up in textbooks that had either never been subjected to a reproduction attempt, or where one was attempted much later and failed, yet the material remained in the textbooks.

The truth of the matter is that most scientific research will never see a replication attempt. The replication crisis has no doubt influenced this culture, but before it, as little as 0.16% of peer-reviewed results were replication attempts, and most of those were unsuccessful. Of course, the successful ones were not as easily published, which is probably why so few were attempted.


That may happen in exceptional cases, but less so in sciences with more concrete results. To learn about human psychology and society, perhaps you're better off reading great novels than p-hacked, sexy "research".

Regarding replication... Replication is boring in the eyes of funding committees. They want new and sexy results for their money, preferably at a steady rate of a bunch of papers each year.


As a very rough first filter, for papers in the natural sciences, check whether the paper is indexed in Pubmed (https://pubmed.ncbi.nlm.nih.gov/). Every legitimate peer-reviewed journal in that area is indexed there; anything that is not indexed is almost certainly a dubious journal. But you can't use this the other way around: Pubmed still contains some journals that publish questionable stuff.
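If you want to script that check, here is a minimal sketch using NCBI's E-utilities search endpoint (the journal name is just a placeholder; zero hits is a red flag, while nonzero hits prove nothing on their own):

    import json
    import urllib.parse
    import urllib.request

    # NCBI E-utilities search endpoint for PubMed (no API key needed
    # at low request volumes).
    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_indexed(journal: str) -> bool:
        """Rough check: does PubMed index any articles from this journal?"""
        params = urllib.parse.urlencode({
            "db": "pubmed",
            "term": f'"{journal}"[Journal]',  # [Journal] restricts the search to the journal field
            "retmode": "json",
        })
        with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
            count = int(json.load(resp)["esearchresult"]["count"])
        return count > 0

    print(pubmed_indexed("Nature"))  # True; a predatory journal would likely return False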

Beyond the Pubmed check, you can look a bit at the impact factor of the journal; you can generally find it by googling the name of the journal plus "impact factor", or on the journal's Wikipedia page. But it is kinda hard to interpret this on its own, and it does not translate directly into credibility. A high impact factor shows that articles in that journal are cited very often, and you can assume that journals with high impact factors generally have a reasonable peer review system. But that doesn't say much about a single paper; reputable journals can easily fail in individual cases.

The Pubmed check generally filters out most of the journals that would just publish anything without real review, so it's the most useful filter in my opinion. I would not try to read more meaning than that into the journal, and would focus on the individual article.


Maybe it's an unpopular opinion, and biased by the fields I've worked in as a grad student, but I think you really just can't validate a particular paper without a background in the field. You need lots of context: recent works of that university department, the credibility/background of the last-author postdoc/professor, and the state of the art and cornerstone works in that field.

You could probably gain an entry-level background (except for highly mathematical or medical things) by spending 5-10 hours a week for a few months reading various papers/online discussions, assuming you have access to those forums. You should know that's the lower bound of the literature reading full-time students have to do to keep up with their own field.


> literature reading full-time students

Nobody keeps reading full time; that would not only be insane, it would also be counterproductive, since it's time not spent actually doing something or writing papers. In most places it's not necessary to know all the recent papers, and if a really important one is missing as a reference, a reviewer will note it in the comments.


I worded that poorly. I meant full-time students (as in, they're not just reading about something out of curiosity while sitting on the toilet) who read literature as part of their job.


Actually, asking for the credibility of a journal is the wrong question. The real one is about whether an article is credible. But the answer to that is disappointingly complicated.

Articles are peer-reviewed for their readability (for peers, not laymen), consistency (e.g. if there is a mathematical proof, the reviewer might check it if possible; if there is an experimental technique, the reviewer might check whether that technique could produce the results claimed) and reproducibility (is the experiment described in sufficient detail? does the software provided produce the expected results? do the provided numerical results fit the claims?). All those checks are what a reviewer can do in a few hours, days at most. That is what peer review means.

All checks beyond those are left as an exercise for other scientists after publication. Reproduction is a big task, usually almost as big as the original paper. Proofs are often extremely hard to check, so a few days by any old reviewer won't cut it. So if you want to know whether the results in a paper are true, watch the literature for the following years and look at publications citing that paper and (dis-)agreeing with its results. The same goes for (sometimes) letters to the editor, retractions and "everybody-knows" rumours.

More reputable journals tend to attract better-quality papers (according to the peer-review criteria outlined above). That tends to correlate with a higher probability of the paper being true. But it is not a very strong correlation; there is utter nonsense, groupthink and polished turds even in very high-profile journals.


Right, I think many people believe peer review to be some big expert committee giving a paper lots of analysis and evaluation. In reality the reviewers are mostly PhD students with 1-5 years of experience, not experts with decades of experience, and one person may get 5-10 papers to review at a time. Oh, and it's unpaid work that takes time away from your own research and is not really incentivized beyond a vague sense of moral duty towards the progress of mankind's knowledge. The end result is that it's mostly pattern matching: does it look like the typical paper in the field? The gut reaction and impression strongly influence the decision; the actual review is then about justifying that decision.

I mean, it's not totally arbitrary; really good reviewers do give it 2-5 hours. But it's best thought of as a rudimentary filter rather than a meticulous verification.


There are a number of metrics[1] for journal ranking. The journal in question[2] claims to have (or have had) an impact factor[3] of 0.593 (2017-18); "the ratio between the number of citations received in that year [...] and the total number of "citable items" published in that journal[...]". That's not a very good score. Nature has an impact factor of ~42 (2019), or to pick another example, because I recently referred to it here on HN, Eurosurveillance has an impact factor of ~6.

I don't think blindly following these metrics is a good idea, but it's not bad as a first approximation.
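To make the quoted definition concrete, here is a toy computation. The standard two-year impact factor counts citations in year Y to items published in Y-1 and Y-2; both numbers below are invented for illustration:

    # Two-year impact factor for 2019: citations received in 2019 to items
    # published in 2017-2018, divided by citable items from 2017-2018.
    # Both counts are made up for illustration.
    citations_2019 = 74_000   # citations in 2019 to 2017-2018 items
    citable_items = 1_760     # citable items published in 2017-2018

    print(round(citations_2019 / citable_items, 1))  # 42.0, a Nature-like score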

[1] https://en.wikipedia.org/wiki/Journal_ranking

[2] https://juniperpublishers.com/ofoaj/

[3] https://en.wikipedia.org/wiki/Impact_factor


Impact factor works if you just want to minimize the number of "bad" publications you look at. But if you rely on it too much, you are going to exclude lots of good publications too. Specialist journals tend to have lower impact factors, for one. The Journal of Artificial Societies and Social Simulation is a good journal, but its impact factor has never been especially high.


One possible way is to look at the list of editors of the journal and see whether the researchers in question have, in turn, mentioned that journal on their CV (or personal homepage). The fact that researchers/peers in a field are willing to openly connect their prestige to that of a journal is usually an indicator that there is at least something to the journal.

Most if not all scammy journals will fail this test, but it is also quite labor-intensive.


I believe you can probably get a good answer to your question with Beall's list: https://beallslist.net/


^ That was the standard blacklist, but I thought it was shut down in 2017?

It's really tricky, because the move to open access publishing (which typically requires that authors pay) is a good thing on one hand, but facilitates these junk journals (and inflates publication count) on the other hand.

There are also Cabell's lists, both of predatory and "quality" journals.

https://en.wikipedia.org/wiki/Beall%27s_List

https://en.wikipedia.org/wiki/Cabells%27_Predatory_Reports


When you see a link to a paper with an interesting title posted to HN from the nature.com domain, it's almost certainly from "Scientific Reports". It's not as bad as described here, but nobody publishes there if they have a paper that is scientifically interesting. It's a place to report what you did when nothing interesting came out of it.

Nature publishes many journals, and "Nature" itself is the highest-quality general science publication. "Scientific Reports" just does a quick review of methodology.


A really simple, good-enough approach is to go to Google Scholar, type in the title, and count how many citations it gets. Adjust for years since publication. Many citations: probably important. Few citations: less important. No citations and more than a year old: probably not much good.

It isn't perfect. Different disciplines tend to cite at different rates; and you may believe that some disciplines are just cabals of bullshitters citing each other. I couldn't possibly comment.
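As a sketch of that rule of thumb in code; the thresholds are invented, and since Google Scholar has no official API, assume you looked the citation count up by hand:

    from datetime import date

    def citation_verdict(citations: int, year_published: int) -> str:
        """Citations adjusted for paper age, per the heuristic above.
        The per-year threshold is made up for illustration."""
        age = max(1, date.today().year - year_published)
        if citations == 0 and age > 1:
            return "probably not much good"
        if citations / age >= 20:  # arbitrary cutoff for "many citations"
            return "probably important"
        return "less important"

    print(citation_verdict(350, 2015))  # probably important
    print(citation_verdict(0, 2018))    # probably not much good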


You can start by making sure it has many citations (not self-citations), and ideally also people challenging it.

Then you can judge the arguments of the paper and the papers referencing it.

Usually a paper with thousands of citations and only weak arguments against it is trustworthy.


Evaluate the authors directly.


While I think it's great that these journals get exposed, and it's fun to read how they are being exposed in lots of interesting ways, I disagree with the following:

> Those who evaluate c.v.s, however, often don’t know which journals are bogus.

In fact, the experts who evaluate CVs in academia typically know well which journals are bogus (or rather, they know which journals are not bogus, and if there are journals they don't know, they look them up and quickly see whether they are bogus).

Having published in such a journal will quickly get you removed from lists of potential applicants, or graded poorly on grant applications.

Therefore, the only times I have ever seen these cited were in PhD position applications from developing countries (because the students are told publications increase their chances and they want to get ahead, while in fact it disqualifies them).

So, as is so often the case, these journals prey on the weak.


I think scientific journals and other infrastructure with the potential to greatly impact society need to put defense mechanisms against bad actors in place going forward.


Or we should stop taking (everything) published by <insert your publisher/site of choice> as gospel.


It is indeed far too common for people to religiously believe any conclusion so long as it is published in a format that makes it seem scientific, or worse, in popular news reports on such science that don't even retain the numbers and proffer even more spectacular claims.


Why would they? These journals make money from authors paying for a publication. Any review costs cut into their margins.

Also, I don’t think this journal has even the potential to greatly impact society. It isn’t intended to be read.


Ok, so here's what you do next: make a list of all of the authors published in those journals and track their careers.

Anyone want to bet the results will be entertaining?


Unfortunately, the truth is much more unfunny than you think. In the past, you could get tenure at community or regional colleges just by teaching. If you were interested, you might from time to time publish an article in the institution's journal, or maybe even in a real journal; as a botanist or astronomer you can do citizen science without much equipment. That's how it ought to be.

But nowadays, administration has taken over, and everyone gets to fulfill the publication metric, or there won't be tenure. So you have predatory journals (I prefer the term "journal of convenience") that fulfill a genuine need. The only one who is worse off is the taxpayer or tuition-paying student because they get to pay the page charges. That is a problem, but everyone reacts to incentives.

There are certain parallels to zero-tolerance policies at schools: in the past, the popular kids got off lightly; consequently, because of real or perceived corruption, the popular kids now get away with no punishment at all while everyone else has a 5-ton weight dropped on them, all in the name of fairness.


> In the past, you could get tenure at community or regional colleges just by teaching. (...) But nowadays, administration has taken over, and everyone gets to fulfill the publication metric, or there won't be tenure.

Completely true! I'm an adjunct teacher (meaning I have no tenure) at a non-US state college. Here, if we want tenure, we must go through a public panel when there are open vacancies, and the jury will evaluate certain criteria.

At my last attempt (to get tenure), the criteria were 50% "scientific" (papers, journals, projects), 30% pedagogical (teaching experience, pedagogical materials) and 20% organizational. Although I fared well in the pedagogical component, I fell behind in the "scientific" one, as I had only published 4-5 papers (in decent venues), 1 book (with a predatory publisher) and 1 book chapter (with another predatory publisher).

The papers (which have almost no citations => no readers) got more points than an ebook I have about Python programming, which has 3.1k stars on GitHub (and got zero points).

Just to say that teaching scores less than the number of papers.


I don't understand this claim, because publishing in a "journal of convenience" won't work for tenure: they don't just count your publications, they look at what impact your publications have.


"College" has a wide range of meanings in the US - Cornell is a college, but so is Yoknapatawpha County Community College in Mississippi, where 20 % of students are functionally illiterate.

Of course they'll consider impact factor at Cornell, but at Yoknapatawpha they just count.


There are "teaching schools", where teaching is your full-time job. Even these schools now require some publications, which is where the predatory journals come in. Schools where research is supposed to be part of your job will have a list of journals that count, and they may have a point system. Better schools have shorter lists. This is enforced at the dean level, since deans don't have the expertise to evaluate individual papers. Here predatory journals won't help you.


A quote from someone when I volunteered at the IEEE Southeast Regional Conference and wondered why almost none of the authors showed up: "They're students at schools that require a certain number of publications to graduate. We don't care as long as they pay the fee to register."


SCIgen's "Rooter" was accepted in 2005. I think the next step would be submitting an array of bogus articles citing each other to achieve an impressive h-index. Maybe adding AI-generated articles to the batches would be one way to test the reviewers at more serious journals.
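For context, the h-index is trivial to compute, and just as trivial to game with a mutual-citation ring. A minimal sketch:

    def h_index(citations: list[int]) -> int:
        """h = largest n such that n papers have at least n citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Ten bogus papers citing each other ~10 times apiece already
    # yields h = 10, which looks respectable at a glance.
    print(h_index([10] * 10))  # 10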


I would love to read a parody scientific journal; we may even stumble on some hidden truths.


Check out the Journal of Irreproducible Results and its competitor, the Annals of Improbable Research.


Thanks I will do so.


The closest thing I know of is the annual proceedings of SIGBOVIK. It's a humorous conference, traditionally held on April 1st, which "was formed in 2007 to celebrate the inestimable and variegated work of Harry Quacksalver Bovik. We especially welcome the three neglected quadrants of research: joke realizations of joke ideas, joke realizations of serious ideas, and serious realizations of joke ideas."

In 2020, there was even a paper about improving the peer review process called "State-of-the-art reviewing: A radical proposal to improve scientific publication"

http://sigbovik.org/


this is exactly why peer reviews and editorial decisions need to be made public, whether you're a tiny publisher or Nature


It all depends on what you consider 'science'. I choose to believe that science is anything I want it to be, therefore any publication can be scientific. Playboy and Penthouse are the best scientific publications in the field of human sexuality. Done.



