Sign up for Chalkbeat New York’s free daily newsletter to get essential news about NYC’s public schools delivered to your inbox.

By last spring, it had become clear to Manhattan Assistant Principal Joe Vincente that his school had an artificial intelligence problem.

Staffers at East Side Community School were holding multiple meetings a week with students suspected of using AI inappropriately for schoolwork. AI platforms were multiplying at a dizzying pace. And the city Education Department had offered no formal guidance for schools.

Navigating the moment without a policy felt unsustainable, Vincente recalled. “The need was reaching a fever pitch.”

In the months that followed, Vincente convened a committee of staff members, gathered input from students and parents, hired substitutes so the committee could hole up in the library uninterrupted, and spent hours of his own time outside of work hammering out a 12-page draft AI policy. They based their approach on a core tenet: “As a community dedicated to deep learning, we prohibit the unsupervised use of generative AI for all schoolwork and assessments.”

The city’s Education Department is expected to release its own long-awaited citywide draft AI policy for schools on Tuesday, more than two years since ChatGPT upended what students and teachers could do with the click of a button. In that time, schools across the five boroughs have come up with a wide variety of approaches on their own, according to interviews with educators, parents, and students at 10 schools. The school-level efforts offer a preview of the complex task facing the Education Department as it seeks to chart a path forward at a moment when the value and role of AI in education is fiercely contested.

Some, like East Side Community, have crafted comprehensive policies detailing acceptable and unacceptable uses of the technology, along with how the school should respond. Others have tried to keep AI out of classrooms altogether. Many, however, haven’t broached the topic in a formal way.

A growing chorus of educators and parents is calling for a full moratorium on its use in school, pointing to the academic, environmental, and mental health risks of AI. Others argue it would do city students a disservice not to expose them to a transformative technology already deeply embedded in our economy and society. Those conflicts have burst into public view with increasing frequency in recent months in debates over whether to approve contracts with AI companies and a new AI-focused high school in Manhattan. (After the city releases its policy, the public will have 45 days to offer feedback.)

Caught in the center of the swirl are schools and teachers.

“There are a lot of extreme opinions, and I think somewhere in the middle is our responsibility as educators to wrestle with the tension of all of those things and then decide what’s best for our students and our communities,” Vincente said.

At East Side, which is part of the New York Performance Standards Consortium that allows students to complete portfolio assessments rather than Regents Exams to graduate, educators decided it was most important to nurture students’ critical thinking skills — even if it meant curbing or supervising AI use.

“If we send them out in the world with all of these other really strong skills that we believe in, critical reading and strong writing, they’ll be able to learn AI if they need to,” Vincente said.

NYC schools struggle to keep up with growing AI use

Keeping up with the dizzying changes in AI has bewildered many city schools. The city Education Department’s approach has swung wildly, from initially banning ChatGPT on school devices in January 2023, to pledging in May of the same year to become a national leader in embracing the use of AI in schools.

There’s now a variety of chatbots capable of doing complex work, and AI is built into functions as basic as email and Google search, making it nearly impossible to avoid. Educators who spoke with Chalkbeat said they routinely encounter students using AI to shortcut writing assignments — and can’t rely solely on online “AI detectors” to ferret it out.

At many schools, administrators have kept their distance, waiting for the city to develop a clear set of guidelines and leaving it largely to teachers to handle the day-to-day instances of AI use.

That’s led to a frustrating two years for some educators, especially in humanities courses that rely heavily on writing.

“I would love clear rules … and I feel that I do not have backup,” said a Brooklyn high school English teacher who requested anonymity for fear of jeopardizing her job. “The administrators … do not recognize the extent of the problem.”

Many English and social studies teachers have figured out their own ways to restructure the writing process, moving away from take-home essays and having students write their work by hand during class time in order to ensure students don’t turn to AI.

“I have gone back to paper. For last semester, I did all in-class writing,” said Jessica Radin, a social studies teacher at The Beacon School in Manhattan, which also shifted its admissions essay to an in-person format last year in order to avoid concerns about students using AI or paid tutors.

There are downsides to shifting to in-class assignments, including shorter essays that eat up more instructional time. But Radin said she’s been pleasantly surprised by how well she’s gotten to know her students’ writing this year.

Adam Stevens, a social studies teacher at Brooklyn Technical High School, the city’s largest with almost 6,000 students, knows he doesn’t have the bandwidth to scour every assignment for signs of AI. So he makes the case that students will be putting themselves at a disadvantage in college by leaning on AI — and tries to pose questions with a personal component that students will find “too important for them to feed into a chatbot.”

Enterprising staffers take the lead

At schools trying to take a more comprehensive approach to AI, it often comes down to having a staff member with the expertise and motivation to lead the charge.

At Sunset Park High School in Brooklyn and John Bowne High School in Queens, teachers who attended professional development sessions sponsored by the American Federation of Teachers through a $23 million “AI Academy” brought back resources and helped launch faculty committees at their schools.

“The reality is it’s here, and [students] are going to use it,” said Jennifer Goodnow, an English as a Second Language teacher at Sunset Park High School, who attended the union-sponsored training. “We can’t stop that but … our school can teach kids about the potential pitfalls.”

Goodnow said she was particularly concerned to learn some students are turning to chatbots to discuss mental health struggles, and she hopes the school can play a bigger role educating kids and parents about the possible risks and offering alternative outlets.

Some schools have taken more aggressive measures to try to keep AI out of classrooms. At Williamsburg Charter High School, officials decided earlier this school year to ban ChatGPT on school devices, said Jeremiah Dickerson, an 18-year-old senior. (School officials didn’t respond to a request for comment.) But while Dickerson acknowledged some students misuse AI, he argued the school’s approach isn’t likely to be effective or productive.

“If AI is banned in my school,” he said, “and then I go to college and an AI is everywhere, how does that really effectively prepare me to use it?”

As schools develop AI policies, they dig into the gray area

There’s little disagreement that asking AI to generate an entire essay is a violation of academic integrity. But as schools develop their AI policies, they are digging into more nuanced scenarios — and finding the answers aren’t always clear cut.

Can AI be used, for example, to direct students to sources for a history paper? Or is there something valuable students would miss in the process of finding those sources in a different way?

It’s also not always obvious what consequences to apply: Sometimes it’s unclear whether students even knew they were using AI, given its ubiquity online.

At Brooklyn Collaborative School in Carroll Gardens, administrators acknowledge the rules might vary depending on the teacher and assignment. They ask educators to include a “traffic light” on each task that outlines a few clearly permissible uses of AI (green light), a few where students should exercise caution (yellow), and a few clearly impermissible ones (red), said Chrissy Prince, the school’s data specialist.

It’s not only student AI use that can grow murky. Educators are also experimenting with AI to craft lesson plans, create resources for students, and offer feedback.

Vincente, who was part of a district-sponsored fellowship to learn how to use an education-oriented AI tool called Playlab, acknowledged the lines can be blurry and emphasized the importance of understanding the school’s values while trusting teachers to carry them out. That’s why East Side staffers decided to only endorse student AI use that’s supervised by school staff.

“Just kids out there roaming in the wild of AI,” he said, “more often than not, what they’re going to do and experience with that is actually going to be a detriment to the learning that we want to happen.”

Michael Elsen-Rooney is a reporter for Chalkbeat New York, covering NYC public schools. Contact Michael at melsen-rooney@chalkbeat.org.