Recent months have seen a string of academic fraud cases, uncovered through both the Institute for Replication’s efforts and independent investigations. These include evidence that Matray’s dividend paper was manually altered to produce favorable coefficients during peer review, and failures to replicate Asad Islam’s development economics research. The most recent case involves Aidan Toner-Rodgers, a second-year PhD student, who allegedly falsified either the analysis or the underlying data of his study claiming that artificial intelligence accelerated materials science research by approximately 40%, using confidential data from a private firm.
Here’s what we know about the newest case:
- An internal review at MIT found that *something* in the paper was bunk: the data, the analysis, or both.
- The paper was submitted to the QJE sometime in the last few months.
- The student is no longer at MIT.
I think what happened is exactly as MIT put it: someone at the university suspected the paper was fraudulent and flagged it internally, which led to an investigation and the student getting caught.
What really bugs me, though—and I’ve said this on Twitter—is that it took an internal MIT investigation to catch it, instead of the fraud getting spotted during peer review. That’s not how we’re supposed to catch bad science. When you explain why science is trustworthy, you don’t say, “Well, if it was fake, MIT would’ve investigated.” You say, “It was peer reviewed by experts.”
Other people speculate that QJE tipped off MIT, but I see that as even worse. If that’s standard practice, it means journals quietly hand things off to universities rather than handling them themselves. And that raises all kinds of conflict-of-interest concerns—especially when the university stands to benefit from the researcher’s funding. It would also go a long way toward explaining, in financial terms, why Matray is still at Harvard while the student is no longer at MIT.
As these cases become better known, though, I think they invite reflection on the psychology of the discipline. In particular, I don’t believe the kind of social scientist who commits fraud fits into Jason Abaluck’s trichotomy of social scientists: Pornographers, Plumbers, and Social Justice Warriors, where the pornographer does research for personal pleasure, the plumber solves problems, and the social justice warrior advances a pet cause.
Jason’s trichotomy doesn’t really account for fraud—at least not most of it. Fraud doesn’t solve problems (so it’s not a plumber’s job), and it’s rarely intellectually satisfying (so it’s not for pornographers). Sure, a Social Justice Warrior might bend the truth to push a cause, but that doesn’t seem to fit here either.
However, look at Aidan’s CV:
This is not the CV of someone obsessed with advancing AI. It’s a pretty standard MIT econ CV, touching on everything from econometrics to natural disasters. There’s no ideological branding, no social media persona, no sign of a grand mission. That makes the “pet cause” explanation pretty weak.
Instead, I think the better explanation is simpler—and one I’ve seen often in academia. There’s a broad category of people I’d call careerists. They don’t do research for the love of the topic, or to solve problems, or to change the world. They do it for accolades. For them, success isn’t about truth—it’s about recognition.
Careerists aren’t passionate. They’re strategic. They may not care much about their field but found early success in it—enough praise, awards, and attention to keep going. In my experience, they’re the kind of PhD students who sit silently in field courses because they’re terrified of saying something wrong in front of a professor. That silence used to confuse me. I always thought academia was supposed to be a place for intense, joyful debate—nerds locked in a room, arguing for the fun of it. But often, the room would be quiet, and I, an MA student auditing the class, would end up answering half the questions.
Careerists often don’t have much going on outside their work. I’ve had more than one RA tell me, “I just do research and when I’m tired, I watch train videos on YouTube.” For some of these people, research isn’t a passion or even a job—it’s their identity. And that’s exactly what makes them more tempted by fraud, or by fraud-adjacent habits like p-hacking. The careerist’s drive is for recognition, for a good reputation with their peers.
Of course, nobody is a pure type. We all have some careerist in us. I do. You do. That hunger for recognition can be healthy—it can make you work harder, care more. But it can also turn toxic. It’s bad when it makes you resent vacations with your family because someone else published first.
Here’s the real problem: the more competitive academia gets, the more pressure there is to publish, the more careerist incentives start to dominate. Pornographers, Plumbers, and Social Justice Warriors all have natural limits. They’ll burn out, or walk away when the costs outweigh the passion or the cause. But careerists don’t stop. They can’t—because it’s not about the work. It’s about who they are. That makes fraud not just a moral failing, but a market distortion. Fraud is easier than honest research. If it goes unchecked, it raises the bar for everyone else. And at some point, honest people decide the bar is too high and leave. The system becomes noisier, not smarter.
That’s why it’s so important to be honest about motivation. Not everyone is in this for discovery. Plenty of people are just chasing prestige. And sometimes, they’re willing to cheat to get it. That’s why Aidan did it. Not because he believed in AI. Not because he had a point to prove. But because he wanted a QJE publication—and he thought he could get away with it. And, to be honest, MIT might have agreed. If they thought it was QJE’s job to catch it, they would’ve left it there.
So what should change?
Journal editors need to accept that fraud is real and that some people will lie just to get past peer review. That means shifting some focus away from twelve redundant robustness checks and toward the things that actually uncover wrongdoing: data audits, code checks, replications. That’s what I4R is doing. That’s where these internal university investigations consistently find the problems. But for years, journals gave researchers a free pass on data and code quality—because, in a less competitive era, they could afford to.
They can’t anymore.
Musical Coda:
I think you make some great points. For me academia is a mix of passion and purpose, but it often feels like pursuing either is orthogonal to my career interests. It’s sad because I feel like academia is a perfect place for me in theory, but in practice I’m not sure how long I will last…