A leading academic integrity watchdog is calling for Canada to raise its academic standards around published research. 

Dr. Ivan Oransky, the co-founder of Retraction Watch, a renowned research database that reports on retractions of scientific papers, says Canada needs better oversight of its academic research.

“When you have a system where universities are the ones investigating their own [researchers] — most people wouldn’t find that reasonable in corporate culture, and there are laws against that,” Oransky told Canadian Affairs in an interview. 

“People have been very reluctant to treat scientific fraud like other fraud.”

According to Retraction Watch, retractions of published research due to breaches of scientific integrity rose nearly 90 per cent worldwide from 2022 to 2023. Few of these involved Canadian academics — but Oransky says this is only because Canada’s oversight processes are too lax. 

“There should be more retractions from Canada,” Oransky told a parliamentary committee in December 2024.

‘Fall into a trap’

This August, the New York Times reported on a new statistical analysis showing explosive growth in fake or low-quality research papers. This low-quality work — which is outpacing the growth of legitimate research — risks undermining trust in science at large. 

Sources say the incentive structures within academia are partly to blame. 

Universities and governments reward researchers for publishing prolifically. University ranking systems, such as Times Higher Education, place significant weight on how often an institution's researchers are cited in journals and on their research impact.

This creates pressure to produce more papers — sometimes at the expense of quality.

“There is an uncomfortable truth behind the press releases, advertisements and other material universities and countries use to crow about their high rankings,” Oransky testified in December to the House of Commons’ Science and Research Committee, which was studying how federal funding criteria shape research excellence in Canada.

“These rankings are based on a house of cards built with a stacked deck,” he said, referring to the way university and country rankings rely on metrics like publication and citation counts.

“With good intentions, it’s easy for governments and funding agencies to fall into the same trap. After all, we all rely on heuristics, apparently validated shortcuts … to make decisions, particularly when faced with a large number of choices.”

In his testimony, Oransky said that at least two per cent of published research exists solely to game academic metrics rather than to advance knowledge.

The New York Times article noted that some researchers pay so-called “paper mills” to add their names as authors to studies they had no role in. Paper mills may supply pre-written manuscripts — sometimes fabricated, sometimes recycled — and then shop them to journals with lax or corrupt peer review processes.

Paper mills may also artificially drive up how frequently a researcher’s work is cited. 

“There are review recommendations [that] go back [as] letters to the authors, and they say, ‘We would really appreciate it if, or it would be better if, you cited a paper from our journal,’” Oransky said in his parliamentary testimony.

“It gets even more complex — and a little bit harder to track — where [fraudsters] have these … citation cartels, where people actually organize citation rings.”

An imperfect system

Peer review, the standard process for verifying the integrity of academic work, may fail to stop low-quality work from getting published.

“Is [peer review] a perfect system? Absolutely not,” said Lisa Given, a former director of the International Institute for Qualitative Methodology at the University of Alberta.

In peer review, an independent panel of experts is asked to evaluate a paper’s methods, data and conclusions. The experts are not paid for this work.

Catherine Paquet, director of the Office of Research Ethics and Integrity at the University of Ottawa, says peer review is considered a professional responsibility and a contribution to the academic community. 

“It’s kind of a give-and-take,” she said. “Researchers are expected to do administrative tasks … reviewing journal articles would be part of that.”

But the unpaid nature of the work means reviewers often squeeze this task into already heavy workloads, which can undermine the attention they devote to reviews. 

“If you’re going to have to cut corners, or you have to contain your time, it’s probably going to be on things that you’re volunteering to do, rather than things your employer is paying you to do,” said Given.

Reviewers are also only asked to evaluate the study itself — leaving any concerns over authorship or broader misconduct outside their purview. 

“If there was, say, someone who is disputing [that] they should have been an author on that paper, I wouldn’t know about that,” said Given. 

“[A peer reviewer is] reviewing the content of the work in terms of completeness, thoroughness [and] the methods.”

Canada’s patchwork oversight

If a peer reviewer or reader spots a problem with a study, they typically contact the journal editor or the author directly. Journals can issue corrections, expressions of concern, or retractions if the concern is validated.

Whistleblowers may also alert the researcher’s institution. For federally funded research, institutions must report allegations under the federal Tri-Agency Framework: Responsible Conduct of Research, which can trigger an investigation.

This framework defines misconduct broadly, as including fabrication, plagiarism, improper authorship and other serious errors. Serious cases can be referred to the Tri-Agency Secretariat, which can impose sanctions ranging from letters of reprimand to lifetime funding bans.

Canada rarely publicizes research misconduct, and criminal proceedings are almost unheard of. 

Between April 2024 and March 2025, the secretariat handled 114 cases. Of the 51 resolved cases, 29 involved at least one breach, most often resulting in letters of awareness or reprimand. 

Gengyan Tang, a PhD student at the University of Calgary who studies research integrity, says there is a lack of transparency surrounding retractions in Canada.

Retraction Watch’s database shows Canadian institutions have been linked to more than 900 retractions over the past 52 years, but details are often unavailable.

“This lack of transparency makes it very challenging to study research misconduct in Canada, as data are difficult to obtain and universities are often reluctant to release such information,” he said. 

By contrast, the U.S. Office of Research Integrity, a federal body that oversees misconduct in federally funded biomedical research, publicly lists all confirmed cases.

‘Conflict of interest’

Where research is not federally funded, universities are solely responsible for enforcing integrity standards. Paquet, of the University of Ottawa, says the process can be strained.

“We don’t have a lot of training on this stuff — on the process,” she said.

Universities can track patterns of poor conduct. Unintentional but repeated errors — such as improper quoting, plagiarism or insufficient data collection — can trigger escalating consequences, such as limits on supervising students, paper retractions or, in serious or repeated cases, dismissal.

For Oransky, universities are in “a pretty obvious conflict of interest.”

“If your reputation is based on your researchers’ reputation, and your ranking and your success are based on how much funding you can attract, then you’re not going to be that excited about pointing out where things went wrong,” he told Canadian Affairs.

Tang says universities exercise the least oversight where it matters most.

“Universities pay considerable attention to academic integrity at the student level,” he said. “For example, their annual academic integrity reports will tell you how many students ‘broke the rules’.” 

“But there is no equivalent transparency for faculty misconduct.”

In 2010, the Council of Canadian Academies — an independent body that supports science-based policy — recommended the government create a Canadian Council for Research Integrity, a national office similar to the U.S. Office of Research Integrity. Canada never moved ahead with the proposal, and has no current plans to do so. 

Tang says the idea stalled largely because research oversight crosses federal and provincial lines, making a centralized body politically difficult to achieve.

“Whether Canada will continue refining this decentralized model or revisit the idea of an independent national council may depend on future high-profile cases,” he said. 

“Particularly in areas like health research and AI that test the limits of the current framework.”
