Two experts have challenged a 2015 psychology replicability study which claimed that most experiments in the field could not be accurately reproduced.
Two Harvard professors and their team of researchers argue that the replicability study of psychological research was deeply flawed. The study, carried out in 2015, set out to show that experimental results in psychology could rarely be reproduced.
The 2015 study was widely taken to suggest that psychology was a pseudoscience. According to the latest analysis, however, that conclusion has nothing to back it up and is plainly wrong.
In 2015, a consortium of 270 scientists attempted to replicate the results of 100 published psychology experiments. More than half of the attempts failed.
This set off a media storm, with headlines in newspapers around the world declaring psychology to be mere mumbo jumbo with little substance to it.
When two researchers re-examined the replication project, they found that the original consortium had made serious blunders of its own. It would therefore be premature to conclude that psychology has failed as a proper, rigorous science.
Many of the replication attempts employed methods radically different from those of the original experiments, and those discrepancies were mistakenly taken as evidence that the experiments could not be replicated at all.
The consortium members jumped to conclusions. They should have scrutinized their own evidence as rigorously as they scrutinized the original studies, and statistical errors crept into their analysis.
The holes in the consortium's analysis only came to light when the two researchers re-investigated its findings: flawed premises lead to flawed conclusions.
The original study was shocking, and it turned the ire of many scientists against the psychologists who had carried out the original experiments. A little extra care would have gone a long way; at the least, it would have spared the consortium the embarrassment it faced later on.
At a minimum, the consortium should have drawn a random sample of studies to replicate. Failing that, it should at least have corrected for the statistical error that naturally arises when replications use small samples or different populations.
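To see why uncorrected statistical error matters, consider a simple simulation. The numbers below (effect size, sample size) are illustrative assumptions, not figures from the OSC project: even when every original effect is perfectly real, a low-powered replication can "fail" a large fraction of the time through sampling error alone.

```python
import random
import statistics

random.seed(0)

def replication_succeeds(true_effect, n, threshold=1.96):
    """Simulate one replication: draw n observations from a population
    with the given true effect (in standard-deviation units) and check
    whether the sample mean is statistically significant."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5   # standard error of the mean
    return mean / se > threshold

def failure_rate(true_effect, n, trials=2000):
    """Fraction of simulated replications that miss significance."""
    failures = sum(not replication_succeeds(true_effect, n)
                   for _ in range(trials))
    return failures / trials

# A modest but genuinely real effect, tested with only 25 participants:
rate = failure_rate(true_effect=0.4, n=25)
print(f"Apparent failure rate despite a real effect: {rate:.0%}")
```

With these hypothetical parameters, roughly half the simulated replications come up non-significant even though the effect exists in every case, which is why a raw failure count, uncorrected for statistical power, cannot by itself show that the original findings were false.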
But the consortium members were careless in their methods, and the end result was a flawed study that amounted to a serious slander against psychology as a science. The idiosyncrasies of the consortium's approach allowed a falsehood to masquerade as the truth.
"Let's be clear," Gilbert said. "No one involved in this study was trying to deceive anyone. They just made mistakes, as scientists sometimes do. Many of the OSC members are our friends, and the corresponding author, Brian Nosek, is actually a good friend who was both forthcoming and helpful to us as we wrote our critique."
"In fact, Brian is the one who suggested one of the methods we used for correcting the OSC's error calculations. So this is not a personal attack, this is a scientific critique. We all care about the same things: Doing science well and finding out what's true. We were glad to see that in their response to our comment, the OSC quibbled about a number of minor issues but conceded the major one, which is that their paper does not provide evidence for the pessimistic conclusions that most people have drawn from it."
"I think the big take-away point here is that meta-science must obey the rules of science," King said. "All the rules about sampling and calculating error and keeping experimenters blind to the hypothesis--all of those rules must apply whether you are studying people or studying the replicability of a science. Meta-science does not get a pass. It is not exempt. And those doing meta-science are not above the fray. They are part of the scientific process. If you violate the basic rules of science, you get the wrong answer, and that's what happened here."
"This paper has had extraordinary impact," Gilbert said. "It was Science magazine's number three 'Breakthrough of the Year' across all fields of science. It led to changes in policy at many scientific journals, changes in priorities at funding agencies, and it seriously undermined public perceptions of psychology. So it is not enough now, in the sober light of retrospect, to say that mistakes were made. These mistakes had very serious repercussions. We hope the OSC will now work as hard to correct the public misperceptions of their findings as they did to produce the findings themselves."