About the speaker
Fiona has a degree in Psychology, with a second major in Sociology, and was awarded her PhD in History and Philosophy of Science at the University of Melbourne in 2006. She is now a Professor at the University of Melbourne, with a joint appointment in the School of BioSciences and the School of Historical and Philosophical Studies. Fiona is also an Australian Research Council Future Fellow, leading a range of meta-research projects across ecology and conservation science, as well as projects in psychology and other social science fields. Her meta-research work is driven by an underlying interest in how scientists and other experts reason, make and justify decisions, and change their minds. She is co-lead (with Simine Vazire) of MetaMelb, a recently launched meta-science/meta-research group at the University of Melbourne, and she leads the repliCATS project (Collaborative Assessments for Trustworthy Science).
About the talk
The repliCATS project evaluates published scientific research. As the acronym—Collaborative Assessments for Trustworthy Science—suggests, repliCATS is a group activity, centred around assessing the trustworthiness of research claims. Reviewers first make private individual assessments about a research claim, judging its comprehensibility, the prior plausibility of the underlying effect, and its likely replicability. Reviewers then share their judgements and reasoning with group members, providing both new information and the opportunity for feedback and calibration. The group interrogates differences in opinion and explores counterfactuals. After discussion, there is a final opportunity for privately updating individual judgements. Importantly, the repliCATS process is not consensus-driven: reviewers can disagree, and their ratings and probability judgements are mathematically aggregated into a final assessment.

At the moment, the repliCATS platform exists primarily to predict replicability. Launched in January 2019 as part of the DARPA SCORE program, repliCATS spent 18 months eliciting group assessments, and capturing the associated reasoning and discussion, for 3,000 published social science research claims across eight disciplines (Business, Criminology, Economics, Education, Political Science, Psychology, Public Administration, and Sociology).

The repliCATS team are now working to extend the platform beyond merely predicting replicability, to deliver a more comprehensive peer review protocol. Anticipated advantages of a repliCATS process over traditional peer review include: inbuilt training and calibration; feedback that is intrinsically rewarding; an inherently interactive process, but one which does not implicitly rely on 'consensus by fatigue'; and a process that actively encourages interrogation. This talk will present some preliminary findings, and discuss the future of the platform.
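The abstract notes that reviewers' probability judgements are "mathematically aggregated" but does not say how. As a purely illustrative sketch (not the project's actual method), one common way to pool probability judgements is to average them on the log-odds scale, which tempers the pull of values near 0 or 1:

```python
import math

def logit(p):
    """Map a probability to the log-odds scale."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Map log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def aggregate(probabilities):
    """Pool individual replicability judgements by averaging
    on the log-odds scale (a hypothetical choice here; the
    repliCATS aggregation method is not specified above)."""
    mean_logit = sum(logit(p) for p in probabilities) / len(probabilities)
    return inv_logit(mean_logit)

# Three reviewers' post-discussion probabilities that a claim will replicate:
pooled = aggregate([0.60, 0.75, 0.85])
print(round(pooled, 3))
```

Unlike a consensus process, this aggregation preserves disagreement: each reviewer's final private judgement enters the pooled estimate unchanged.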