A team of researchers funded by the National Institutes of Health (NIH) has developed an artificial intelligence (AI) tool that provides decision support to clinicians by predicting whether patients are at risk of intimate partner violence (IPV). Using data routinely collected during medical visits, the team trained a machine-learning model, a type of AI, that was highly accurate in detecting IPV among patients in a study.
IPV refers to abuse from current or former partners that can result in serious harm, including potentially life-threatening injuries, chronic pain and mental health disorders. It affects millions of people in the United States – both men and women – at some point in their lives. However, many cases go undetected because patients can be hesitant to disclose abusive relationships due to safety concerns, fear and stigma.
In their study, the team, led by researchers at Harvard Medical School in Boston, introduced three AI models for IPV detection in healthcare settings and compared their predictive performance.
“This clinical decision support tool could make a significant impact on prediction and prevention of intimate partner violence,” said Qi Duan, Ph.D., director of the Division of Health Informatics Technologies at NIH’s National Institute of Biomedical Imaging and Bioengineering (NIBIB). “Given the prevalence of cases, the tool could be a game-changing asset to public health.”
Many cases of IPV go unrecognized, leading to missed opportunities for timely intervention, according to the study authors. They report that current screening tools capture only a fraction of cases, while clinical and imaging records provide valuable information for detecting IPV risk. Notably, radiologists are well positioned to recognize signs of IPV, including the frequency of certain patterns of physical trauma.
The researchers used several years of hospital data from nearly 850 affected female patients and 5,200 unaffected age- and demographics-matched control patients. Because the collection of relevant clinical data varies across healthcare settings, the team designed two distinct AI models: one trained on structured patient data, in table form, and another trained on unstructured patient data from medical notes, including radiology reports. They then developed a third, multimodal model that fuses both structured and unstructured data.
All three models performed well in the study. However, the multimodal fusion model outperformed the models trained on structured or unstructured data alone, making accurate predictions 88% of the time. Both the tabular model and the fusion model could detect IPV risk, on average, more than three years before patients enrolled at hospital-based domestic abuse intervention centers. While the tabular model achieved slightly earlier recognition of IPV risk, the fusion model detected more IPV cases in advance.
The fusion model also achieved more stable performance than models relying on either modality alone. The scientists explained that the two modalities are processed separately and merged only at the prediction stage. They found that the tabular model is particularly relevant in healthcare, where data availability and the recording of unstructured data vary across hospitals.
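The late-fusion design described above, where each modality is scored separately and the scores are merged only at the prediction stage, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the feature names, keywords, weights and fusion rule here are all hypothetical stand-ins.

```python
# Minimal sketch of prediction-level ("late") fusion of two modalities.
# In the study, one model was trained on structured (tabular) data and
# another on unstructured clinical notes; the toy scoring logic below
# is invented purely for illustration.

def tabular_model(features: dict) -> float:
    """Hypothetical risk score from structured data (e.g., visit counts)."""
    # Toy rule: more injury-related visits -> higher risk score, capped at 1.
    return min(1.0, 0.1 * features.get("injury_visits", 0))

def text_model(note: str) -> float:
    """Hypothetical risk score from unstructured text (e.g., a radiology report)."""
    # Toy rule: count hypothetical keywords that might appear in reports.
    keywords = ("fracture", "contusion", "old injury")
    hits = sum(k in note.lower() for k in keywords)
    return min(1.0, 0.3 * hits)

def fused_risk(features: dict, note: str) -> float:
    """Late fusion: each modality is scored independently, then the
    scores are merged only at the prediction stage (here, an average)."""
    return 0.5 * tabular_model(features) + 0.5 * text_model(note)

patient = {"injury_visits": 4}
note = "Radiology report: nasal fracture with evidence of old injury."
print(round(fused_risk(patient, note), 2))  # prints 0.5
```

Keeping the modalities separate until the final step is what lets such a design degrade gracefully: if a hospital records no usable notes, the tabular score can still be produced on its own.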
The researchers emphasized that the use of AI tools such as their machine learning models could assist healthcare providers in having timely conversations with patients about IPV and connecting those patients with appropriate support resources. Such AI tools are not intended for making definitive diagnoses.
“For decades, our healthcare system has depended largely on patient self-disclosure to identify intimate partner violence, leaving many cases unrecognized and unsupported,” said Bharti Khurana, M.D., senior author of the study and an emergency radiologist at Mass General Brigham and associate professor of radiology at Harvard Medical School. “Our work represents a fundamental shift from reactive disclosure to proactive risk recognition within routine clinical care. By analyzing patterns already present in healthcare data, this approach supports healthcare clinicians in initiating earlier, safer and more informed conversations with patients.”
According to the researchers, when used in a patient-centered manner, this tool can serve as a key component of a proactive approach to IPV intervention, enabling timely and effective support and ultimately leading to improved long-term health outcomes for at-risk patients. The team has developed guidance, available on the project website, to help clinicians thoughtfully approach these conversations with patients.
“The goal is never to force disclosure, but to help clinicians communicate with patients in a supportive way and to connect them with resources and support,” Khurana said.
The research team plans to use AI models to develop a decision-support tool embedded in electronic medical record systems to provide real-time IPV risk evaluations in clinical settings.
For more about IPV, see the CDC's "About Intimate Partner Violence" and "Intimate Partner Violence Prevention" pages.
For more about Automated IPV Risk Support: https://bhartikhurana.bwh.harvard.edu/airs
This research was co-funded by NIBIB grant R01EB032384 and the NIH Office of the Director.