The worst classroom AI policy is the one students have to guess. "Do your own work" is not enough anymore. Students need to know whether they can use AI for brainstorming, outlining, grammar help, translation support, source searching, drafting, revision, or none of the above. Teachers need a policy they can actually enforce without turning every essay into a courtroom.
Start with three questions. What uses are allowed? What uses must be disclosed? What uses would make the submitted work no longer the student's work? A student who asks a chatbot for possible counterarguments is not doing the same thing as a student who submits a generated essay. A student who uses a grammar checker is not doing the same thing as a student who asks a tool to rewrite every paragraph in a more academic voice.
A useful policy is short. For example: "You may use AI tools to brainstorm questions, test an outline, or get feedback on clarity. You may not submit AI-generated prose as your own writing. Any AI use must be listed in a short note at the end of the assignment, including the tool and purpose." That policy will not solve every case, but it gives students a working line.
UNESCO's guidance on generative AI in education and research, updated in 2026, emphasizes privacy, age-appropriate use, human-centered policy, and pedagogical design. That is a better frame than panic. The question is not simply how to catch students. It is how to decide which uses support learning and which uses remove the work students came to school to practice.
Detection tools should be treated with caution. Stanford researchers reported in 2023 that several AI detectors were especially unreliable for non-native English writers, with more than half of TOEFL essays in their test set classified as AI-generated by the detectors they studied. The Stanford HAI summary is blunt: detectors were unreliable and easy to game. Even detection companies that claim low false-positive rates usually warn that results should be one signal, not proof by themselves.
In classroom terms, that means a detector score should not be the whole case. If a paper looks suspicious, gather process evidence: outline, notes, draft history, source annotations, in-class writing, and a short conversation with the student. Ask the student to explain one claim, one source choice, and one revision. This is not a trick. It is a way to bring the issue back to learning.
Assignment design matters more than detection. If the prompt is generic, AI will handle it easily. "Discuss symbolism in Macbeth" invites outsourcing. "Using the passage we annotated Tuesday, explain how one image changes meaning between lines 12 and 20, and connect it to your group's claim from the board" is harder to outsource and better for learning. The goal is not to make cheating impossible. The goal is to make the student's thinking visible.
The disclosure note should be boring and routine. Put it at the end of every assignment, even when students did not use AI: "AI use: none" or "AI use: I asked ChatGPT for possible counterarguments, then wrote my own paragraph." If disclosure only appears when something suspicious happens, students will treat it like an admission of guilt. If everyone fills it out, it becomes part of the writing process.
Require artifacts. For a history essay, ask for a source log with two sentences explaining why each source was kept or rejected. For a literature essay, ask for a marked-up passage and a paragraph about a changed interpretation. For a digital project, ask for a design note. AI tools can fake some of this, but fakes are harder to produce and easier to spot when the work is tied to class-specific materials and staged decisions.
There are productive uses. Students can ask an AI tool to generate questions about a primary source, then evaluate which questions are historically weak. They can ask for three possible thesis statements and explain why two are too broad. They can use a tool to identify places where a paragraph loses focus, then decide whether the advice is right. In each case, the student is judging the tool, not surrendering judgment to it.
Teachers can model this openly. Put a short, student-style paragraph on the screen. Ask an AI tool for revision suggestions. Reject some, accept one, and explain why. Students need to see that tool output is not authority. It is material to be examined.
It is worth being specific about revision. A student may use AI feedback to notice that a paragraph lacks a topic sentence. That is different from asking the tool to rewrite the paragraph. One keeps the student in charge of the sentence-level decisions. The other can erase the student's voice and make the teacher's feedback almost useless. A policy can say this plainly: feedback is allowed, replacement prose is not.
Teachers should also protect in-class writing time. Not every assignment needs to be written under supervision, but students need regular practice producing claims, sentences, and revisions without a tool. That practice gives teachers a baseline and gives students confidence that writing is still something they can do, rather than something they can only manage through software.
For many classes, that baseline is the policy's quiet backbone. It makes later conversations about AI use less abstract and less punitive.
Privacy and access still matter. Do not require students to create accounts for tools that collect personal data unless the school has approved them. Do not assume every student has the same device, paid subscription, or home internet. Do not ask students to paste classmates' private writing or sensitive personal reflections into a public model.
A useful AI writing policy is boring in a good way. It says what is allowed, requires disclosure, protects students from detector-only accusations, and keeps the center of gravity on drafts, evidence, revision, and explanation. The work of writing is bigger than the final paragraph. It is the thinking that produced it. That is the part a classroom policy has to defend.