By SHANNON O. WELLS
A policy update related to research misconduct that Faculty Assembly approved at its Nov. 5 meeting affirms the University’s commitment to “upholding research integrity in all scholarly endeavors” and outlines Pitt’s process of handling misconduct allegations in line with its responsible-conduct guidelines and U.S. Department of Health and Human Services regulations.
In addition, the updated Research Misconduct Policy and Research Misconduct Policy Procedure are intended to align with the 2008 National Institutes of Health Public Access Policy and its 2023 Data Management and Sharing Policy, said Katherine Wood, co-chair of the Senate Research Committee and research assistant professor in Pitt’s Vascular Medicine Institute.
“There have been some technological changes, such as saving data to electronic records and use of (artificial intelligence) that also needed to be incorporated,” she said, noting that the Senate Research Committee discussed the policy and voted unanimously to forward it to Faculty Assembly. “This policy must be implemented by Jan. 1, 2026, so there was a bit of a time crunch there.”
Tony Graham, senior policy specialist in the Office of Policy Development and Management, said Policy RI-07 has been accepted by Rob Rutenbar, senior vice chancellor for research, along with Pitt’s academic leadership team and operations council.
Susan Sesack, of Pitt’s Office of Research Protections, said that policy updates have been discussed and formulated in recent months “to conform to changes in the federal regulations. That encompasses the bulk of what we’ve done, but given that we’re doing these modifications, we might as well make a few changes that we considered necessary to help our process and procedures.”
Sesack said she and Mara Horwitz, Pitt’s two research integrity officers, have seen their caseload triple in the past three years. “So anything we can do to smooth operations, we incorporated into the new policy.”
Since the policy was written in 2005, Pitt has changed its approach to separating policies and procedures. “A bunch of other changes were incorporated to include that structure,” she said. “That’s really where a lot of the changes are coming from.”
Sesack noted that while some of the policy wording “may sound a little stilted … or maybe lawyerly,” it actually copies directly from the federal regulations. “We are often adhering very closely to those,” she explained.
With NIH being the “main driver” of the policy and procedure changes, much of the policy appears geared toward health sciences-related research. Sesack said the existing policy tends to adequately cover more straightforward instances that arise in other disciplines.
“To the extent that we have pursued cases in arts and sciences or engineering or computer science and information, the policy works pretty well,” she said. “Plagiarism is plagiarism. Data falsification is data falsification. It works as a global policy, even though it’s being driven primarily by health-related research.”
The policy, which applies to all allegations of research misconduct involving faculty, research associates, students, postdocs and other individuals involved in research at Pitt, establishes procedures for reporting and investigating allegations of research misconduct; the responsibilities of the Research Integrity Officer in addressing research misconduct allegations; the rights of the University in the event of finding research misconduct; and the rights of a complainant, respondent or whistleblower in these cases.
The policy was last reviewed in 2017, with minor edits made in 2023, leaving core content from the 2008 version. To accommodate new federal requirements, the committee was charged with assessing and recommending revisions to align the policy with current regulations, addressing broader changes in University practices, and providing clear guidance on Pitt’s responsibilities in handling research misconduct and maintaining research integrity.
Public health, panel peers
Responding to a Faculty Assembly member’s question, Sesack said she believes the updated policy accommodates scenarios in which an allegation of research misconduct — regardless of whether the misconduct is proved to have occurred — would allow for a public health exception.
“If something came across our desk that suggested that human subjects in human-related research were in imminent danger, then the language in the policy gives us the right to interrupt that research to prevent harm,” she said, noting that would be an extreme case. “We’re not going to do that just for something that could be solved more easily.
“The research integrity process is long … for a reason. The accused has to have multiple opportunities to defend themselves,” and have the chance to appeal, she added. “That can take months, sometimes a year. If people are in imminent danger, we can’t wait for the end of the research integrity process.”
Another person asked who, in the absence of a dean as the appropriate supervisory official, would most likely investigate alleged misconduct.
Mara Horwitz said if something like that came up, “such as in a school where the dean didn’t really have oversight in that unit, then we would have a discussion with the dean and the provost as to who would … be the deciding official,” adding, “In cases where it was unclear, it would be up to the provost to nominate the deciding official.”
“We got comments saying that the federal government should care about every allegation, so they don’t want to stop at department chair or dean or provost or chancellor. They want to go all the way to the top,” Sesack added. “The truth is, a great deal of what we handle as allegations of research misconduct are simply honest mistakes, and they can be handled between the author and the journal.”
For example, if the research has a wrong figure in it, the authors issue a correction, “and that’s it.”
Another discussion, about whether a respondent outside the tenure stream is entitled to have at least one equal peer on the investigative panel, led Senate Council President Kris Kanthak to suggest adding “non-tenure stream faculty” to the list that includes “staff, students, and post-doctoral fellows.”
“The way that (section) is written, to President Kanthak’s point, is that at least one member on the panel should be a peer — a postdoc, a trainee,” noted John Stoner, teaching professor in the Department of History. “Whether, in theory or not, an appointment-stream person is a peer with a tenure-stream person — I would respectfully argue that it is not the case — and that a peer should be guaranteed to whomever the respondent is, regardless of their title, which in tenure and appointment stream are not the same.”
The discussion led to a vote to amend the policy to add “appointment-stream faculty” to the peer list. The amendment was approved with 35 voting yes, two voting no and zero abstentions.
Fast learner
Responding to a comment regarding a trend of research-based “misconduct or mistakes” resulting from the use of generative AI tools, Sesack acknowledged the trend is evolving at a “terrifying rate.
“I just saw a demonstration last week where artificial intelligence can completely fake western (immune) blots, which is a key form of molecular biology data. And they are flawless,” she said. “The workshop leader said these words, (that stabbed) my heart, ‘There is no current remedy.’
“Now we’re just on the brink of some things — (AI’s) a fabulous tool, and can do fabulous good — and it can also do tremendous harm, and it can just drive a stake in the heart of science.”
Sesack clarified that, although there is a notable uptick in misconduct allegations, generative AI is not the primary culprit.
“We are currently embarking on an investigation — the first one since 2019 — so these are not that common, and most of them are corrected at a lower level,” she said. “It may be honest error, it may be poor research practice … The national (or international) average that I’ve seen is running around 1% in terms of really bad apples who’ve done terrible things and shouldn’t be doing scientific research.
“The vast majority of the other cases are sloppy science (and principal investigators) not looking at the raw data before they publish something,” she added. “It’s a mistake. They really need to be doing that.
“(With) artificial intelligence, my concern is the speed at which it’s learning to fake things. And that has not yet appeared in one of our cases, but I see it coming on the horizon.”
She did note that many of the misconduct allegations nationwide have come from “groups outside of academic institutions who are trolling for mistakes in manuscripts, especially in images, and they are using artificial intelligence to detect all of those images.”
One way to prevent image duplication is to use AI programs like Proofig or Imagetwin to detect any problems before publication. Sesack encouraged Faculty Assembly to push the Pitt Research office to get an enterprise license for Proofig that everyone could use.
Taking a vote, Faculty Assembly approved the amended Research Misconduct Policy and Procedure, with 33 voting yes, none voting no and two abstentions.
The policy likely will be discussed and voted on at the Senate Council meeting on Nov. 13.
Shannon O. Wells is a writer for the University Times. Reach him at shannonw@pitt.edu.