How Do You Know When Results Are Not Significant

Credit: Image Source/Getty Images

5 tips for dealing with non-significant results

It might look like failure, but don't give up just yet.

16 September 2019

Jon Brock


When researchers fail to find a statistically significant effect, it's frequently treated as exactly that – a failure. Non-significant results are difficult to publish in scientific journals and, as a result, researchers often choose not to submit them for publication.

This means that the evidence published in scientific journals is biased towards studies that find effects.

A study published in Science by a team from Stanford University, which investigated 221 survey-based experiments funded by the National Science Foundation, found that nearly two-thirds of the social science experiments that produced null results were filed away, never to be published.

By comparison, 96% of the studies with statistically strong results were written up.

"These biases imperil the robustness of scientific evidence," says David Mehler, a psychologist at the University of Münster in Germany. "But they also harm early career researchers in particular, who depend on building up a track record."

Mehler is the co-author of a recent article published in the Journal of European Psychology Students about appreciating the significance of non-significant findings.

So, what can researchers do to avoid unpublishable results?

#1 Perform an equivalence test

The problem with a non-significant result is that it's ambiguous, explains Daniël Lakens, a psychologist at Eindhoven University of Technology in the Netherlands.

It could mean that the null hypothesis is true – there really is no effect. But it could also indicate that the data are inconclusive either way.

Lakens says performing an 'equivalence test' can help you distinguish between these two possibilities. It can't tell you that there is no effect, but it can tell you that an effect – if it exists – is likely to be of negligible practical or theoretical significance.

Bayesian statistics offer an alternative way of performing this test, and in Lakens' experience, "either is better than current practice".
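In frequentist terms, an equivalence test can be run as two one-sided tests (TOST) against pre-specified bounds. The sketch below, in Python with NumPy and SciPy, illustrates the idea on simulated data; the ±0.5 raw-scale bounds, sample sizes, and simulated effect are illustrative assumptions, not values from the article.

```python
# Minimal sketch of a two one-sided tests (TOST) equivalence procedure.
# The equivalence bounds (±0.5 on the raw scale) are an illustrative
# assumption and would normally be justified before seeing the data.
import numpy as np
from scipy import stats

def tost_two_sample(a, b, low, high):
    """Test whether the mean difference (a - b) lies inside (low, high).

    Runs two one-sided Welch t-tests; equivalence is declared when the
    larger of the two p-values falls below alpha.
    """
    n1, n2 = len(a), len(b)
    v1, v2 = np.var(a, ddof=1), np.var(b, ddof=1)
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(v1 / n1 + v2 / n2)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
    return max(p_lower, p_upper)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 200)
b = rng.normal(0.05, 1.0, 200)  # true difference well inside the bounds
p = tost_two_sample(a, b, low=-0.5, high=0.5)
print(f"TOST p = {p:.4f}")  # a small p means the effect is statistically negligible
```

Note that a small TOST p-value supports *equivalence*, the opposite reading from an ordinary significance test.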

#2 Collaborate to collect more data

Equivalence tests and Bayesian analyses can be helpful, but if you don't have enough data, their results are likely to be inconclusive.

"The root problem remains that researchers want to conduct confirmatory hypothesis tests for effects that their studies are generally underpowered to detect," says Mehler.

This, he adds, is a particular problem for students and early career researchers, whose limited resources often constrain them to small sample sizes.

One solution is to collaborate with other researchers to collect more data. In psychology, the StudySwap website is one way for researchers to team up and combine resources.
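To see why pooling resources matters, consider how quickly the required sample size grows as the true effect shrinks. The sketch below uses a standard normal-approximation formula for a two-sample comparison; the effect sizes and the 80% power target are conventional illustrative choices, not figures from the article.

```python
# Per-group sample size needed for a two-sided, two-sample test at 80% power,
# using the normal-approximation formula n = 2 * (z_a + z_b)^2 / d^2.
# Effect sizes follow Cohen's conventional large/medium/small benchmarks.
import math
from scipy import stats

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n to detect standardized effect d."""
    z_a = stats.norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_b = stats.norm.ppf(power)          # quantile for the target power
    return math.ceil(2 * (z_a + z_b) ** 2 / d**2)

for d in (0.8, 0.5, 0.2):  # conventional large / medium / small effects
    print(f"d = {d}: {n_per_group(d)} participants per group")
```

A small effect demands several hundred participants per group, which is exactly the scale that combining resources across labs makes feasible.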

#3 Use directional tests to increase statistical power

If resources are scarce, it's important to use them as efficiently as possible. Lakens suggests a number of ways in which researchers can tweak their research design to increase statistical power – the likelihood of finding an effect if it really does exist.

In some circumstances, he says, researchers should consider 'directional' or 'one-sided' tests.

For example, if your hypothesis clearly states that patients receiving a new drug should have better outcomes than those receiving a placebo, it makes sense to test that prediction rather than looking for a difference between the groups in either direction.

"It's basically free statistical power just for making a prediction," says Lakens.
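The power gain comes from the lower critical value of a one-sided test at the same alpha level. A minimal sketch of the comparison, using a normal approximation with an illustrative effect size and sample size (not figures from the article):

```python
# Compare the power of one-sided vs two-sided two-sample tests
# (normal approximation). d = 0.4 and n = 50 per group are
# illustrative assumptions.
from scipy import stats

def power_two_sample(d, n_per_group, alpha=0.05, one_sided=False):
    """Approximate power to detect a standardized mean difference d."""
    se = (2 / n_per_group) ** 0.5  # SE of the standardized difference
    crit = stats.norm.ppf(1 - alpha) if one_sided else stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(crit - d / se)  # P(test statistic exceeds crit under H1)

d, n = 0.4, 50
p_two = power_two_sample(d, n, one_sided=False)
p_one = power_two_sample(d, n, one_sided=True)
print(f"two-sided power: {p_two:.3f}")
print(f"one-sided power: {p_one:.3f}")
```

With these assumptions the one-sided test gains roughly twelve percentage points of power for the same data – the "free" power Lakens describes, paid for only by committing to the direction in advance.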

#4 Perform sequential analyses to improve data collection efficiency

Efficiency can also be increased by conducting sequential analyses, whereby data collection is terminated if there is already enough evidence to support the hypothesis, or it's clear that further data will not lead to it being supported.

This approach is often taken in clinical trials, where it might be unethical to test patients beyond the point at which the efficacy of the treatment can already be determined.

A common concern is that performing multiple analyses increases the probability of finding an effect that doesn't exist. However, this can be addressed by adjusting the threshold for statistical significance, Lakens explains.
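As a rough illustration of how an adjusted threshold keeps false positives under control, the simulation below uses a Pocock-style per-look threshold of .0294 for a design with two equally spaced interim looks (an illustrative choice, not a procedure from the article) and checks the empirical type I error rate when the null hypothesis is true:

```python
# Simulate a two-look sequential design under the null hypothesis.
# With an unadjusted threshold of .05 at each look, the overall false
# positive rate would exceed 5%; the Pocock-style boundary restores it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ALPHA_PER_LOOK = 0.0294  # Pocock boundary for two equally spaced looks
N_PER_LOOK = 50          # observations added before each interim analysis
N_TRIALS = 10_000

def sequential_trial():
    """Return True if either look rejects H0 (data simulated under H0)."""
    data = np.array([])
    for _ in range(2):
        data = np.concatenate([data, rng.normal(0, 1, N_PER_LOOK)])
        if stats.ttest_1samp(data, 0).pvalue < ALPHA_PER_LOOK:
            return True  # stop early: apparent evidence against H0
    return False

false_positives = sum(sequential_trial() for _ in range(N_TRIALS))
print(f"empirical type I error: {false_positives / N_TRIALS:.3f}")  # ~.05
```

Stopping rules and thresholds like these belong in the analysis plan before data collection begins, which leads directly to the final tip.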

#5 Submit a Registered Report

Whichever approach is taken, it's important to describe the sampling and analyses clearly to allow a fair evaluation by peer reviewers and readers, says Mehler.

Ideally, studies should be preregistered. This allows authors to demonstrate that the tests were determined before rather than after the results were known. In fact, Mehler argues, the best way to ensure that results are published is to submit a Registered Report.

In this format, studies are evaluated and provisionally accepted based on the methods and analysis plan. The paper is then guaranteed to be published if the researchers follow this preregistered plan – whatever the results.

In a recent investigation, Mehler and his colleague Chris Allen, from Cardiff University in the UK, found that Registered Reports led to a much higher rate of null results: 61%, compared with 5 to 20% for traditional papers.


Source: https://www.natureindex.com/news-blog/top-tips-for-dealing-with-non-significant-null-results
