Hacker News | jpeloquin's comments

Is the goal to get rid of the journals or ensure open access? Because the US already has open access mandates for federally funded research. Immediate and without embargo. https://www.lib.iastate.edu/news/upcoming-public-access-requ...


It is open access, but often "gold" OA, where the researcher has to pay a few thousand dollars in fees for publication.


This is discussed in the article.


The article's discussion of how the open access mandate works is wrong. Federally funded research, when published (even in a closed-access journal), must be deposited in https://pmc.ncbi.nlm.nih.gov/ or a similar repository.

Edit: OA advocates have won pretty much everything we wanted, there's not much left to be outraged over.


But that's what the article says? The (correct) criticism of that system is that the publishers just replaced the subscription fees with article processing charges for OA publication, and still profit from public funding the same as before.


Post-mandate, I've been submitting to closed access journals and getting OA on the side for free due to the mandate. Pre-mandate, I only submitted to paid OA journals, and paid ~ $3k each time for it.

The article claims the solution is "every government grant should stipulate that the research it supports can’t be published in a for-profit journal. That’s it! If the public paid for it, it shouldn’t be paywalled." That's an equivocation fallacy. Whether a for-profit journal publishes the work at some point is orthogonal to whether it is available un-paywalled, which it now must be.

You say that publishers replaced subscription fees with APCs, but I haven't seen this happening when I've submitted papers recently. Journals need new submissions or they lose mindshare. Authors are price-sensitive and will shop around. Starting a new journal isn't that hard (it can be done as a side project), so high margins will likely be undercut. I have no idea why the author chose to pay a $12k APC; they probably didn't need to. Finally, closed-access journals will have residual subscription income from their closed-access archives for many decades; if the author wants to kill that income stream off, their proposed solution will not do it. So while I agree with the article's condemnation of the publishers, who are certainly no friends of science, I think it's wildly off-base on pretty much every other point.

Author-pays APCs are even potentially a good thing as long as they aren't much higher than the cost of publication. Universal APCs would provide some pressure against publishing many low-value papers that aren't really worth the time it takes to read them. The paper spam is kind of getting out of control.


I don't think authors are very price sensitive. If they can get their paper into a good journal, they will pay the charges, as long as they are not absurdly high. After all, it's not their personal money; the employer often pays, since they also need high-impact papers to generate funding. Nobody who can get their paper into Nature will publish with PLOS Biology instead because it's cheaper. I'm pretty sure if I went to my institute director and said I need a 10k APC to publish this in Nature but ran out of funding, they would be annoyed, but still find the money.

In the end the details do not really matter; it is still absurd that a high percentage of the money, however it is paid, is just profit for company investors, in a system where 80% of the skilled work is done for free by the scientists.

I'm not sure if APCs will reduce the paper flood. Especially if the APCs are near the real cost of publishing, a published paper will almost always be worth more to the authors than the fee. This is how we get all the junk journals that publish anything as long as you pay, after all.


Evaluating a function using a densely spaced grid and plotting it does work. This is brute-force search. You will see the global minimum immediately in the way you describe, provided your grid is dense enough to capture all local variation.

It's just that when the function is implemented on the computer, evaluating so many points takes a long time, and using a more sophisticated optimization algorithm that exploits information like the gradient is almost always faster. In physical reality all the points already exist, so if they can be observed cheaply the brute force approach works well.
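As a sketch of that tradeoff (the objective function, grid density, and step size here are made up purely for illustration): brute force finds the global minimum by sheer evaluation count, while gradient descent gets there with a tiny fraction of the evaluations, but only if started in the right basin.

```python
import numpy as np

# Toy 1-D objective with several local minima (purely illustrative).
def f(x):
    return np.sin(3.0 * x) + 0.1 * x**2

# Brute force: evaluate a dense grid and keep the best point.
xs = np.linspace(-5.0, 5.0, 100_001)        # 100,001 evaluations
x_grid = xs[np.argmin(f(xs))]

# Gradient descent: a few hundred evaluations, but it needs a
# starting point inside the global minimum's basin to find it.
x = -1.0
for _ in range(200):
    grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6   # central difference
    x -= 0.05 * grad

print(x_grid, x)   # both land near the same minimizer
```

On a computer each evaluation costs time, which is why the gradient route usually wins; in a physical system every "grid point" already exists, so the brute-force picture is free.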

Edit: Your question was good. Asking superficially-naive questions like that is often a fruitful starting point for coming up with new tricks to solve seemingly-intractable problems.


Thanks!

It does feel to me that we do some sort of sampling; it's definitely not a naive grid search.

Also I find it easier to find the minima in specific directions (up, down, left, right) rather than let’s say a 42 degree one. So some sort of priors are probably used to improve sample efficiency.


From main text:

> Discussions with different stakeholders suggest that many currently perceive systematic fraudulent science as something that occurs only in the periphery of the “real” scientific enterprise, that is, outside OECD countries. Accumulating evidence shows that systematic production of low quality and fraudulent science can occur anywhere.

From supplement (section about the output of the "ARDA" paper mill):

> We obtained 20,638 documents and were able to impute country of authorship for 13,288 documents (64.4%). Of these documents, more than half were solely from India (26.4%), Iraq (19.3%), or Indonesia (12.2%).

The identity and reputation of the authors, and the publication venue, are (for now) still a strong signal when evaluating the credibility of an article.

The article is spot-on though in that there is a real risk of paper mills infecting formerly reliable journals, and this is not helped by the publishers' commercialism. For example, it used to be easy to ignore Hindawi journals (they are characteristically low quality); then Wiley started publishing them under its own brand. The good is now mixed with the bad under the same label. Practicing scientists can fall back on whether they know the authors personally but that doesn't really help non-practicing professionals or the general public.


I find going by citations works well for established work. Harzing's Publish or Perish is useful for this.


True, but the RF coils do get turned on & off. Heating of non-magnetic metal from the radio waves used for scanning is another concern, not just magnetic force.


> Don't we have decades of research about the improvements in productivity and correctness brought by static type checking?

It seems messy. Just one example that I remember because it was on HN before: https://www.hillelwayne.com/post/this-is-how-science-happens...


Even extremely privacy-conscious authors could submit their paper to the service at the same time they publish their preprint v1, then if the service's feedback is useful, publish preprint v2 and submit v2 as the version of record.


...or run it themselves. The code is open source: https://github.com/robertjakob/rigorous

Note: The current version uses the OpenAI API, but it should be adaptable to run on local models instead.


Industry research is generally R&D (applied science, engineering research), not basic research (basic science). Not to disparage either; both are needed, but they are quite different and a person may be suited to one but not the other. It can be hard for someone looking for work to determine where an organization's focus is, as an outsider.


Multiple comparisons and sequential hypothesis testing / early stopping aren't the same problem. There might be a way to wrangle an F test into a sequential hypothesis testing approach, but it's not obvious (to me anyway) how one would do so. In multiple comparisons each additional comparison introduces a new group with independent data; in sequential hypothesis testing each successive test adds a small amount of additional data to each group so all results are conditional. Could you elaborate or provide a link?
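To illustrate why the two problems differ, here is a minimal simulation (not tied to any F-test machinery; the sample size and peeking schedule are invented for illustration): under a true null, a single fixed-n test holds the nominal 5% false-positive rate, while re-testing the same accumulating data after every batch inflates it substantially, because successive looks are highly correlated rather than independent.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def p_value(sample):
    # Two-sided z-test of mean = 0 with known sd = 1 (illustrative).
    z = sample.mean() * sqrt(len(sample))
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

n_sims = 2000
false_fixed = false_peek = 0
for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, 200)            # null hypothesis is true
    if p_value(data) < 0.05:                    # one test at the final n
        false_fixed += 1
    # "Peeking": re-test after every 10 new observations, stop at p < .05.
    if any(p_value(data[:n]) < 0.05 for n in range(10, 201, 10)):
        false_peek += 1

print(false_fixed / n_sims, false_peek / n_sims)
```

The sequential looks share almost all of their data, which is exactly why they can't be treated as independent comparisons and handled with a standard multiple-comparisons correction.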


Publications with public funding have already escaped the paywall, partially as of 2013 and completely as of this year:

https://par.nsf.gov/

https://pmc.ncbi.nlm.nih.gov/

https://ospo.gwu.edu/overview-us-policy-open-access-and-open...

https://www.nih.gov/about-nih/who-we-are/nih-director/statem...

https://www.coalition-s.org/plan_s_principles/

The intent of the Bayh-Dole Act was to deal with a perceived problem of government-owned patents being investor-unfriendly. At the time the government would only grant non-exclusive licenses, and investors generally want exclusivity. That may have been the actual problem, moreso than who owned the patent. On the other hand, giving the actual inventors an incentive to commercialize their work should increase their productivity and the chance that the inventions actually get used.


Once something has a predictable ROI (can be productized and sold), profit seekers will find a way. The role of publicly funded research is to get ideas that are not immediately profitable to the stage that investors can take over. Publicly funded research also supports investor-funded R&D by educating their future work force.

The provided examples do not clearly support the idea that industry can compensate for a decrease in government-funded basic research. Bell Labs was the product of government action (antitrust enforcement), not a voluntary creation. The others are R&D (product development) organizations, not research organizations. Of those listed, Xerox PARC is the most significant, but from the profit-seeking perspective it's more of a cautionary tale since it primarily benefited Xerox's competitors. And Hinton seems to have received government support; his backpropagation paper at least credits ONR. As I understand it, the overall deep learning story is that basic research, including government-funded research, laid theoretical groundwork that capital investment was later able to scale commercially once video games drove development of the necessary hardware.

