The present invention relates to a system and method of review for empirical literature.
Subsequent discussion of the empirical literature and of this invention will primarily be presented in the context of scientific research and development, including its literature and community. However, those skilled in the art will recognize that this invention may be implemented for various other types of empirical literature and their associated communities, including but not limited to medical research and clinical practice literature, project and “do it yourself” instructionals (such as Pinterest projects, instructional videos, instructional audio, seminars, et cetera), and school lesson planning. Moreover, herein “article” refers not only to scientific publications but also to any such publicly available instructional, including, but not limited to, text, video, seminar, or audio, as appropriate to the particular circumstance.
One of the most substantial products of scientific research and development is the publication of findings in the associated literature, including both scientific publications and patent literature. However, when an item of work is published in the empirical literature, a degree of uncertainty remains as to the accuracy of the findings reported. By some estimates, over half of the findings reported in the scientific literature contain significant inaccuracies. The significant probability that the information reported in any given publication is inaccurate inherently reduces the rational expectation of its accuracy; this uncertainty of probable accuracy will herein be referred to as ‘unreliability’. (It is noteworthy that the reliability of a particular article may be increased by any information that clarifies its probable accuracy, e.g. information confirming the accuracy of the article or information providing evidence of its inaccuracy.) This unreliability significantly decreases the effective useful value of the empirical literature, both individually and as a whole. Consequently, since new research is largely based on previously published findings, inaccuracies therein can lead to labor and economic inefficiencies in performing new research.
In the prior art, two primary methods have existed for reducing the unreliability of the empirical literature: 1) expert commentary peer review and 2) published replication trials.

1. Peer review takes several forms in the empirical literature, with pre-publication peer review being the most commonly used. Peer review has long served as a publication filter, keeping “unmeritorious” and/or “obviously unreliable” reports from being published in the respected empirical literature, or pointing out flaws in them after publication (e.g. in the form of letters to the editor, et cetera). However, expert commentary peer review serves primarily to protect the plausibility of the empirical literature: peer reviewers are generally not required to attempt replication of the results themselves. Expert peer review therefore allows a substantial amount of “plausible” but ultimately inaccurate reports to be published in the empirical literature, which creates the associated statistical unreliability. Nor is this key difficulty adequately resolved by more recent trials of “post-publication peer review”, wherein commentary is invited after the article has been released for general reading and review.

2. The primary method for reducing the unreliability of literature that has already been published (often after passing through peer review) has been replication trials, wherein independent groups attempt replication of findings reported in the empirical literature and then themselves report the findings of these replication trials (whether positive or negative) within the empirical literature. However, this method possesses certain intrinsic inefficiencies, is relatively slow, and in recent times has been utilized less and less as more groups and publications shift their focus to publication of original work.
Furthermore, corrections, errata, and technical rebuttals are often poorly associated with the original work and are easily missed during routine literature searches. Other methods have also been proposed and/or developed for improving the reliability of the scientific literature, such as paying independent laboratories to verify findings (which incurs additional costs in both time and money). In other avenues, people have reported on the accuracy of reports in the experimental literature through media such as personal internet platforms (e.g. blogs, Twitter, et cetera), informal “word of mouth” conversations, or (unquantified) forums such as ResearchGate.com and PhysicsForums.com, which lack a focus on reproducibility. In addition to other limitations, one of the key failures of these methods is their haphazard reportage and the associated difficulty of locating relevant assessments of any given article.