Prestigious journals are more reliable than lower-ranked ones, right?
Research suggests that the reliability of results does not necessarily depend on where they were published. Source: Shutterstock

If your lecturer or professor asks you to use high-ranking journals for your university assignments, you may feel confident that the information you’ve carefully cited is held to the highest standards.

However, research suggests that not all journals, including those considered to be prestigious, are as reliable as many would like to believe.

Björn Brembs, author of Prestigious Science Journals Struggle to Reach Even Average Reliability, said it is commonly assumed that only the best scientists are published in highly selective, prestigious journals that command a large audience.

However, his study, which was published in Frontiers in Human Neuroscience, found that “data from several lines of evidence suggest that the methodological quality of scientific experiments does not increase with [the] increasing rank of the journal”.

In fact, he wrote that a growing body of evidence suggests that “methodological quality and, consequently, [the] reliability of published research works in several fields may be decreasing with increasing journal rank”.

A researcher’s ability to publish his or her work in high-ranking journals is crucial not only for the advancement of science, but also for the author’s own career prospects.

“Even before science became hypercompetitive at every level, now and again results published in prestigious journals were later found to be false. This is the nature of science. Science is difficult, complicated and perpetually preliminary. Science is self-correcting and better experimentation will continue to advance science to the detriment of previous experiments,” the study said.

“Today, however, fierce competition exacerbates this trait and renders it a massive problem for scholarly journals. Now it has become their task to find the ground-breaking among the too-good-to-be-true data, submitted by desperate scientists, who face unemployment and/or laboratory closure without the next high-profile publication. This is a monumental task, given that sometimes it takes decades to find that one or the other result rests on flimsy grounds.”

But why is this worrying?

Brembs wrote that “hiring, promoting and funding scientists who publish unreliable science eventually erodes public trust in science”.

Meanwhile, the book Statistics Done Wrong, described as “a guide to the most popular statistical errors and slip-ups committed by scientists every day, in the lab and in peer-reviewed journals”, notes that “statistical errors are rife” and that they are prevalent in “vast swaths of the published literature, casting doubt on the findings of thousands of papers”.

This may be due in part to a lack of adequate training in statistics.

The book notes that “few undergraduate science degrees or medical schools require courses in statistics and experimental design – and some introductory statistics courses skip over issues of statistical power and multiple inference.

“This is seen as acceptable despite the paramount role of data and statistical analysis in the pursuit of modern science; we wouldn’t accept doctors who have no experience with prescription medication, so why do we accept scientists with no training in statistics? Scientists need formal statistical training and advice.”  
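To see why “multiple inference” matters, consider an illustrative sketch (not taken from the article or the book): if a study tests 20 effects that do not actually exist, each at a significance threshold of 0.05, the chance of at least one spurious “significant” finding is roughly 1 − 0.95^20, or about 64 percent. The short Python simulation below makes this concrete; the number of tests, sample sizes and threshold are arbitrary choices for demonstration only.

```python
# Illustrative sketch of the "multiple inference" problem (parameters are
# arbitrary, chosen for demonstration; this is not code from the article).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 2_000   # simulated studies
n_tests = 20        # hypotheses tested per study; none reflects a real effect
alpha = 0.05        # conventional significance threshold

studies_with_false_positive = 0
for _ in range(n_studies):
    # Each "test" compares two groups drawn from the SAME distribution,
    # so any result below alpha is a false positive.
    p_values = [
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_tests)
    ]
    if min(p_values) < alpha:
        studies_with_false_positive += 1

print(f"Share of studies with at least one false positive: "
      f"{studies_with_false_positive / n_studies:.0%}")
# Close to the analytical value 1 - (1 - alpha)**n_tests ≈ 64%
```

Standard corrections, such as Bonferroni’s (dividing the threshold by the number of tests performed), exist precisely for this problem; the book’s point is that many scientists are never formally taught them.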
