State workers in Delaware now have a new set of policies and guidelines to assist them in their use of artificial intelligence tools.
The Delaware Department of Technology and Information (DTI) has released its Enterprise Policy on Generative Artificial Intelligence, which establishes when it is permissible to use “consumer grade GenAI tools” like ChatGPT. The document calls out prohibited tools, defined as those originating or located outside the United States, and distinguishes between the use of public generative artificial intelligence (GenAI) tools and those developed for the state as enterprise GenAI tools.
“I think this is the first step, of many, in partnership with the agencies, to help the employees know what that acceptable use is, and how we, together, can protect the valuable data assets that we have within the state,” Anthony Collins, DTI director of enterprise architecture and solution integration, told the Delaware AI Commission at its quarterly meeting Friday.
DTI has been fielding a number of calls with questions around “what is acceptable use” of GenAI, Collins said. “They’re using it in their personal lives. They might be using it in their work lives, but they’re not really sure what is acceptable use.”
As a next step, a subcommittee of the Delaware AI Commission will begin developing the necessary employee training related to the policy and the use of AI in a “responsible, as well as principled manner,” he said.
The policy, Collins said, “conveys the notion of guidance, as well as just enough governance to guide people in the use of this technology. It enables people to learn, to experiment, and yes, to innovate, as well as improve their skills in this exciting technology.”
The new requirements distinguish between public AI tools and those developed for state use, known as enterprise GenAI tools. Enterprise GenAI tools are those the state contracts for; they are designed and licensed to protect state data and will integrate with the state’s identity and access management technology. The policy clearly prohibits using public AI tools with confidential data.
Before a staffer uses an enterprise GenAI tool, the task would go through an approval process that includes “data steward approval,” Collins told the commission. “Once again, trying to ensure that the right stakeholders are understanding the risk, as well as the benefits of this tooling.”
State officials welcomed the development of policy for AI use. Earlier in the day, the AI Commission took steps to create a framework for a “sandbox” for testing innovative and novel technologies that use agentic AI, a form of artificial intelligence that includes autonomous decision-making.
“It’s really heartening to see this policy developed by DTI, and the effort to strike a balance between the need for a statewide policy, and being flexible to agency needs,” Owen Lefkon, a commission member, said during the meeting.
“In almost every employee review there’s a question about, ‘What else do you want to be doing for your job?’ And everybody is saying, ‘I want to use AI to help me do my job,’” Lefkon said. “We, of course, want to empower our staff. But we want to do it in a compliant manner, with the appropriate training.”
Skip Descant writes about smart cities, the Internet of Things, transportation and other areas. He spent more than 12 years reporting for daily newspapers in Mississippi, Arkansas, Louisiana and California. He lives in downtown Yreka, Calif.