
A reminder on millisecond timing accuracy and potential replication failure in computer-based psychology experiments: An open letter

Richard R. Plant

Abstract There is an ongoing ‘replication crisis’ across the field of psychology in which researchers, funders, and members of the public are questioning the results of some scientific studies and the validity of the data they are based upon. However, few have considered that a growing proportion of research in modern psychology is conducted using a computer. Could it simply be that the hardware and software, or experiment generator, being used to run the experiment itself be a cause of millisecond timing error and subsequent replication failure? This article serves as a reminder that millisecond timing accuracy in psychology studies remains an important issue, and that care needs to be taken to ensure that studies can be replicated on current computer hardware and software.

There is an ongoing ‘replication crisis’ across the field of psychology in which researchers, funders, and members of the public are questioning the results of some scientific studies and the validity of the data they are based upon (Pashler & Wagenmakers, 2012). Areas for concern range from experimenter expectancy and statistical power, through publication bias and the file drawer problem, to outright research fraud. Some have gone so far as to suggest that Bayesian tests might be applied to quantify the results or efficacy of replication attempts, so that the field might know which studies are more valid and go some way toward ameliorating the issue (Verhagen & Wagenmakers, 2014).

However, few have considered that a growing proportion of research in modern psychology is conducted using a computer, whether that be under ‘controlled conditions’ in a laboratory or more widely online across the web. Could it simply be that the hardware and software, or experiment generator, being used to run the experiment itself be a locus for replication failure? This could be attributed to changing display technologies, multitasking operating systems, and manufacturers striving to reduce component cost by offloading previous hardware tasks to software alternatives. Unfortunately, faster hardware has not improved accuracy, and one might argue that the degree of experimental control on offer today is worse than 20 years ago.

With an increasing number of publications making use of complex computer-based experimental methods, I feel now is an appropriate juncture to remind researchers that if they present stimuli, synchronize between equipment, or report response times in units of a millisecond, they should first consider whether what they do is always reliable, accurate, and valid. Secondly, can they honestly state what the timing accuracy of their study was? And thirdly, are they confident they could replicate their experimental effect in another laboratory using different hardware and software? As replication forms the cornerstone of the scientific method, requests such as these should not prove unwelcome or overly onerous, and can only enhance the standing of our field.

R. Plant, The Black Box ToolKit Ltd, PO Box 3802, Sheffield S25 9AG, UK
