University leaders from across the country say they are optimistic about the potential of artificial intelligence to improve health care outcomes and access, but they caution there is a lot of work ahead to guard against risks from the technology.

Representatives from health care, universities and businesses are meeting this week at the University of Pittsburgh for a summit on the intersection of health, AI and tech, organized under the Global Federation of Competitiveness Councils.

The tone during the opening panel Monday stood in contrast to a summit at Carnegie Mellon University in July organized by U.S. Sen. Dave McCormick (R-Pa.), during which executives from the tech and fossil fuel sectors talked about the need to “unleash” energy production to power massive data centers to support AI development.

Panelists on Monday highlighted the ability of artificial, or “augmented,” intelligence to speed up research into new treatments and keep people healthier for longer, while stressing the need to be intentional about how the technology is used.

Pitt Chancellor Joan Gabel said the university is looking at ways to use AI to train future clinicians to care for patients.

“We’re thinking and working very closely with our health systems on the most optimized delivery of care,” Gabel said. “And then we’re also looking at things like … the idea of what public health means in order to power — using AI — our wellness before we ever get sick in the first place, and then live longer, live healthier.”

Ted Carter, president of The Ohio State University, said certain AI tools can support communities and make health care more accessible, pointing to one his university’s health system adopted within the past year.

“And all it does basically is reduce the administrative load for clinicians, which has actually created over 12,500 additional free hours for our doctors and physicians to actually spend more time with their patients,” Carter said.

But Carter said AI can be used for “nefarious” purposes, such as spreading misinformation, and more governance is needed to ensure tools are used ethically.

Carter, a retired U.S. Navy vice admiral and pilot, said he has had to make life-or-death decisions about firing a weapon. He said there were moments he chose not to fire in order to protect civilians.

“If I was on an automated system that was using AI, the weapon probably [would have] been released,” Carter said.

Carter also warned about the amount of energy AI uses.

“When you use AI on a Google search, that’s 25 times more energy than just a normal Google search,” he said. “If we don’t start paying attention to that, we’re going to run out of energy. So this is a really important part for research institutions like ourselves.”

Jeffrey P. Gold, president of the University of Nebraska system, said AI is a tool that should “support the creativity of the next generation of workforce.” It is important to educate people in ways to use AI effectively, he said.

“But it is not perfect. And so we also need to be very clear that there are limitations, there are ethical barriers, there are intrinsic biases based in machine learning and in artificial intelligence,” Gold said. “At the end of the day, if there’s not a trusted relationship with the implementation of this tool, it will come off as extremely hollow.”

The pace of AI development is so fast, said Carnegie Mellon University President Farnam Jahanian, that people are just “scratching the surface” of the ethical, policy, privacy, and security implications.

“I think government organizations, private sector, academia — we can’t even catch our breath to be able to consider all of these things, and that’s really a candid assessment of it,” Jahanian said. “We’ve seen this in human history, that whenever there are new emerging technologies … the policy implications and the ethical implications have to essentially catch up with it.”