
T-hacking: Another ethics problem for science?

In recent years, scholars, policy makers, and the general public have learned about unethical behavior across academic disciplines, creating a crisis of confidence in the reliability of research findings. Whether due to publication pressures or the pursuit of fame, researchers have been caught performing data acrobatics ranging from “p-hacking” and trimming data or conditions to outright fabrication (example 1, example 2, example 3). In each case, the perpetrator’s goal is to create persuasive results that are worthy of peer-reviewed publication.

Whereas previous inquiries have focused on behaviors that affect a paper’s methods and results, I am writing this post to highlight a potentially unethical practice that affects a paper’s theoretical development. I call this practice t-hacking.

T-hacking, short for “theory hacking,” is the practice of excluding or mischaracterizing relevant theory or findings in the conceptual development of a paper. T-hacking benefits the t-hacker by inflating the apparent theoretical contribution of the research and thus increasing the likelihood that the paper is accepted and subsequently cited. The benefit may also be indirect; for instance, an author may decline to cite a paper so that the omitted paper’s author is not asked to peer review the submitted work.

T-hackers hurt science by crowding out other (more deserving) papers from journals (the zero-sum problem) and by drawing valuable citations away from the authors of the original work (the impact problem).

Unlike data manipulation, which may be detected using statistical analyses, the purposeful omission of existing scholarship is, I suspect, more difficult for scholars to identify. My conversations with academic peers reveal compelling stories of doubt (“a simple Google search reveals the omitted paper”) and certainty (“I reviewed the paper at multiple journals, and the previous research continued to be omitted even after it was called to the authors’ attention”). Other conversations reveal that scholars view t-hacking as less serious than p-hacking because of (1) the presence of plausible deniability (e.g., “the author probably overlooked the existing research by accident”) and (2) the belief that the problem is easily solved by pointing out the missing scholarship during peer review.

Though I can’t say how pervasive t-hacking is, I believe it is a topic that journal editorial boards and scientists should be discussing.

I welcome your comments and your t-hacking stories.

———————————————–

NOTE: Others may have written about this idea, and I was unable to find their work. In other words, I have not knowingly t-hacked my t-hacking idea. Please let me know if someone has written about this topic. I did find a paper by Maddox (1991; Another Mountain out of a Molehill) in Nature, which discusses the dangers of plagiarism but downplays another scholar’s accusation that he was t-hacked.

———————————————–

Addendum on 5/4/15:

Scott Rick pointed out this editorial in the Journal of Consumer Research; the following passage is particularly relevant:

In this view of science, each publication constructs a bridge from past findings to future investigations. Hence, as noted above, unreliable data are not just putting a bad “fact” into the record; they undermine the progression of scientific discovery by constructing a shoddy scientific bridge that misdirects future scholars and resources. Misrepresentation of prior research findings to make a current paper seem a superior advance by ignoring, selectively citing, or distorting the labors of researchers who have come before similarly undermines this progression. Acknowledging previous research accurately is just as important as reporting data faithfully. Akin to falsifying or manipulating data, fudging the literature also throws up shaky scientific bridges. In addition, we encourage scholars to search the literature more broadly when they seek related results and frameworks and to look for links across concepts rather than just across variables.