The City of Philadelphia is finally planning to review how it rolls out AI and trains city workers to use it — but top leaders appear unsure about the tech’s broader impacts.
At a hearing hosted by Councilmember Rue Landau yesterday, residents and City Council members searched for answers about how the city is using AI and what its plans are for future policy.
Representatives from the Parker administration provided testimony and answered questions, but did not include specific details Landau requested, like an inventory of AI tools being used throughout the city.
“The testimony from the City of Philadelphia administration brought up more questions to our questions,” Landau, chair of the Technology and Information Services Committee, told Technical.ly. “The city basically said we’re working on it … and it just felt unsatisfactory.”
After years without formal AI use policies, Philadelphians are concerned about how their data is being used and how the use of AI could lead to civil and human rights violations. And those fears are grounded in reality — AI tools have bias built into them that can harm marginalized communities.
The testimonies did not address, for example, how AI is being used by the police department or whether the administration is concerned about the technology being used by Immigration and Customs Enforcement in the city.
City leaders revealed only loose plans to create a framework guiding how public sector employees should use AI. By late winter or early spring of 2026, officials hope the framework will lead to more training opportunities and a cross-departmental AI governance committee to review and update future AI policies, according to Kristin Bray, chief legal counsel to the mayor and director of Philly Stat 360.
Putting these policies in place will hopefully teach city employees not to put government data into online tools like ChatGPT and Gemini, said Melissa Scott, the city’s chief information officer. Bray also emphasized that human judgment, public trust and accountability would be top of mind for officials.
However, Scott and Bray did not offer much more than that, such as how often the governance committee would meet, whether external AI experts would be involved and how the city would implement feedback from the public.
“The suggestion that it was time to form a committee and develop a policy with no timeline, no commitment to who would be on it,” Kendra Albert, a technology lawyer, said during the public comment period, “suggests that this issue is seen as new, novel and sort of suddenly something that landed on the lap of our city officials. Those claims are wildly incorrect and deeply concerning.”
Cities step up where feds fall short
Without federal laws regulating the development and use of AI, local governments have stepped up to do it themselves.
All 50 states introduced AI legislation this year, according to the National Conference of State Legislatures, and 38 of them approved or enacted it. Pennsylvania, for example, established a law making the use of AI for non-consensual deepfakes or voice clones a third-degree felony.
Other AI legislation varies widely, from guidelines for companies developing the tech to dedicated AI task forces to bills guiding how AI should be integrated into everyday life.
Last fall, Landau introduced a resolution to the City Council requesting that it hold hearings in Philadelphia to learn more about the potential impacts of AI. Yesterday’s hearing was the first iteration, and Landau said she plans to hold follow-up sessions to keep up with the city’s progress on these policies.
“There’s ways in which the public could benefit from its municipal government embracing some elements of the technology,” Landau said. “I didn’t hear today that was clear to our city officials, ways in which we can use this to make the city more efficient.”
Careful implementation is necessary, experts say
The Parker administration, however, won’t have to go about crafting this policy alone. Experts offered suggestions for the city’s next steps.
AI can help local government bring services to residents faster, but the city needs a plan in place for when it inevitably fails and gives residents the wrong information, said Sorelle Friedler, a professor of computer science at Haverford College and former assistant director of data and democracy at the White House Office of Science and Technology Policy.
“In order for you all to have control and make sure AI systems used by the administration actually do the job,” Friedler said, “we need careful guardrails, including testing requirements, human review processes and transparency into how AI is being used in our city.”
The city also needs to think about which departments work with vulnerable populations and how those residents would be negatively impacted when AI makes mistakes, Friedler said.
In addition to those processes, public feedback is vital to preventing negative effects on the community, she said.
Ravi Chawla, VP and chief analytics officer at Independence Blue Cross, added that while AI tools can improve employee productivity, vendor management of these tools is important, especially as technology companies add AI features and change how they manage user data.
Keeping humans in the process is also essential, Chawla said: A human should review AI outputs to make sure they make sense. Friedler added that a human appeals process may need to be put in place.
“We shouldn’t be denying people services based off of an algorithmic decision,” Friedler said. “That should always be … based off of human review.”