Philadelphia is taking steps to regulate AI, but businesses and residents alike are already feeling the tech’s effects on daily life.
“As a smaller company … we don’t have $10 million to comply with a regulation.”
Nyron Burke, cofounder and CEO of Lithero
Regulation could impact how businesses, especially smaller companies, work with their customers, and could carry financial consequences. From the government’s perspective, though, regulations need to be put in place to protect residents’ data and privacy.
AI being used to cause harm isn’t good for any stakeholder, so creating opportunities to hear each side out is beneficial as these conversations continue, said Nyron Burke, cofounder and CEO of Lithero, which uses AI to screen marketing content for life sciences companies.
“The folks in the industry that are working on technology, and I think regulators as well,” Burke said. “We want things to go well.”
Here’s where AI regulation efforts stand today, and what moving forward could mean for startups and the surrounding communities.
The current state of AI regulation
The Philadelphia City Council hosted a public hearing about AI usage in city government in October. The hearing brought up concerns from constituents about how the police department is using AI, especially about the impact of bias in the technology.
City officials were questioned about how they’re monitoring AI use, but didn’t have many clear answers. The city plans to put together a framework and governance committee in the coming months.
For Burke, local policies make the most sense for regulating local stakeholders, like police departments. Wider regulations at the state and federal level make more sense to get everyone on the same page about topics like AI in education, he said.
Pennsylvania has made some progress in putting AI-related legislation in place. This summer, the commonwealth established a law making the use of AI for non-consensual deepfakes or voice clones a third-degree felony. Most recently, the Pennsylvania Senate passed Senate Bill 1050, which requires mandated reporters to report AI-generated child sexual abuse material.
Right now, there’s no federal legislation regulating AI, but President Donald Trump drafted an executive order earlier this month that would prevent states from instituting their own AI policies.
Businesses brace for new rules
As policymakers discuss the possible risks that AI could bring, it’s important to also consider the potential impacts on businesses, Burke said.
From a business standpoint, it makes more sense for AI regulations to be national, so companies don’t have to adjust their products based on where a client is located, Burke said.
He isn’t yet concerned about potential AI regulations impacting his own company, because of the clients Lithero serves and the way its platform works. But for many small companies, extreme regulations could put them out of business, he said.
“I get a little bit concerned about AI regulation that’s targeting the really big players,” Burke said. “As a smaller company, and thankfully, so far, we aren’t impacted by this, we don’t have $10 million to comply with a regulation.”
AI is already pervasive in public spaces
Philly’s public sector is already working with private AI companies to implement these tools, making transparency around data management and privacy all the more important.
This spring, for example, SEPTA and the Philadelphia Parking Authority worked with Hayden AI to roll out a network of AI-powered cameras that issue tickets to vehicles parked in bus zones.
The Philadelphia Police Department has also experimented with AI, including facial recognition technology and drone usage. During the City Council’s hearing in October, members of the public expressed concerns about surveillance and bias in the technology.
Attendees also raised worries about data protection when AI tools are in use. City leaders said government employees do use some AI tools, but that agencies are working to protect government data by teaching employees not to use public online tools like ChatGPT and Gemini.
As governments continue to use these tools, there needs to be a plan in place for when they fail, Sorelle Friedler, a professor of computer science at Haverford College, said at the hearing. This is especially important for protecting vulnerable populations.
Ultimately, AI is controlled by humans, said Burke of Lithero, and human stakeholders need to work together to reduce those risks.
“There’s too much talk about the AI doing this, or the AI doing that, and not enough about, well, what are we doing?” Burke said. “What is our responsibility as human beings? At the end of the day, we’re in charge.”