Bringing a Gift to the AI Party: Teaching Ethical AI Use in First-Year Composition 
Imagine you’re invited to a fancy dinner party. You comb your hair. You shine your shoes. You carefully select a cocktail dress or a three-piece suit. After days of preparing, you show up properly groomed, ready to engage. And yet . . . you would never dream of arriving empty-handed. You’d bring a thoughtful gift—a nice bottle of wine, some bespoke stationery, or a handcrafted flower arrangement—because that’s what good guests do. 

The way I see it, working with AI is a lot like attending that dinner party. In my first-year composition courses, I stress that AI isn’t responsible for creating the party—or the writing assignment—from scratch. Instead, it should be seen as a facilitator, a gracious host who helps the party flow once we show up with something meaningful to offer. Whether students are using AI tools or I’m using them myself, the expectation is the same: we bring thoughtful work to the table first. 

Belabored metaphors aside, this “gift-giving” mindset has become a cornerstone of how I teach the ethical use of AI in my English 101 and 102 classes. AI is a tool for development, reflection, and refinement—not a shortcut to avoid the thinking and writing processes themselves. Here’s how that philosophy looks in practice, both from my side of the desk and from my students’. 

The Instructor’s Perspective: Partnering with AI to Support Revision 

In my ENG 101 classes, which average around twenty-five students, I never simply assign an essay and collect a final draft for a grade. Every major writing project is a two-step process: students submit a first draft, I provide extensive feedback, and only then do they revise for a final grade. 

This workflow gives me a perfect opportunity to model the ethical, effective use of AI. When I grade first drafts, I don’t just copy and paste a paper into ChatGPT and ask for a rubric score. Instead, I open two screens side by side: the student’s draft on one side, my AI platform on the other. 

Before I even begin, I feed the AI key information: the assignment directions, the grading rubric, and any specific skills we’ve emphasized (such as developing a thesis, using textual evidence, or organizing paragraphs logically). Then, as I read the student’s work, I actively take notes—on the paper itself, and simultaneously in the AI chat. For example, I might note that an introduction is too general, or that a thesis statement is promising but needs sharpening. I highlight specific strengths (clear transitions, strong use of a source) and pinpoint weaknesses (lack of topic sentences, unclear citations). 

When I finish annotating, I instruct AI to transform my notes into user-friendly, actionable feedback that the student can understand and apply. In mere seconds, AI helps me produce clearer, more structured commentary—saving me valuable time without sacrificing the personal touch. In an informal experiment, I graded five research papers without AI assistance and discovered that each took roughly 10 to 12 minutes to provide meaningful feedback. Then I graded five papers using the AI-assisted method described above, and each took roughly 5 to 7 minutes.

The key here is intentionality. AI doesn’t “grade” for me. I’m guiding the feedback. I’m making the judgment calls. AI simply helps me organize my thoughts in a way that’s encouraging, specific, and accessible to my students.  

Which brings me to an important point: AI can make mistakes, so it is crucial to review carefully any feedback AI helps you organize. I learned quickly that I cannot simply skim the AI-generated feedback; I must read it closely and make absolutely certain that it is clear and honestly reflects my intentions. I'd say 98% of the time there is nothing to correct or amend. Occasionally, however, AI will make an error, with MLA documentation, for example, and I will need to correct it in the feedback.

At least twice a semester now, I demonstrate to my students, in class or via a recorded video, how I actually use AI to assist my grading. This kills two birds with one stone: I can be totally transparent about my grading process, and I can give students more explicit instruction on the ethical use of AI. Win-win.

The Student’s Perspective: Bringing Work to the AI Party 

On the student side, the message is just as clear: Show up prepared. Bring something meaningful. Do the thinking first. Early in the semester, I discuss with students how and when they can, if they so choose, utilize AI in the writing process. I do in-class demonstrations on how to ethically use AI for prewriting, brainstorming, drafting, and revision, each time emphasizing how they need to bring work to AI, not have it do the work for them.   

After receiving feedback on their first drafts, students are required to revise based directly on my comments. I point them toward strengths and weaknesses in content, organization, grammar, sentence clarity, and (where appropriate) MLA formatting. 

Once they’ve made those revisions—really worked through them thoughtfully—they can then ethically use AI to double-check their improvements. I teach them how to paste my feedback alongside their new draft and ask AI a question like: “How well have I addressed these revision goals?” This becomes a conversation starter, not a crutch. Students can dialogue with AI about structural improvements, clarity, and sentence flow, just as they might work with a peer tutor or a writing center consultant. 

In this process, students are using AI not to write their papers, but to reflect on their revision choices. They are learning how to take ownership of their writing while leveraging AI for meaningful, critical support. So far, the response from students has been positive. Many say it feels like a genuine collaboration, not too different from working with a skilled (and egoless) tutor. And from my end, I can't complain: the grades have improved, but, more important to me, the quality of the writing has improved, and students are learning a new and vital professional skill.

Why This Approach Matters 

Teaching AI ethics isn’t just about preventing academic dishonesty (although that’s certainly important). It’s about preparing students for the reality of future workplaces, where AI assistance will be increasingly commonplace. I tell my students: In five or ten years, your boss isn’t going to care whether you used a grammar checker or a brainstorming bot to draft a report. They’ll care that you know how to use those tools thoughtfully, responsibly, and effectively. 

Students need to see that AI is a tool, not a magic wand. If they come empty-handed—expecting AI to generate ideas, structure arguments, and polish sentences without their own engagement—they’ll miss the point entirely. Worse, they’ll miss the chance to become better writers, thinkers, and problem-solvers. 

But when they show up prepared—with outlines, rough drafts, ideas, and honest effort—they can collaborate with AI to refine their work, strengthen their arguments, and deepen their learning. That’s not just ethical use; it’s empowered use. 

Final Thoughts: A Party Worth Attending 

At the end of the day, AI is like the gracious host of a great party: there to keep the conversation flowing, not to manufacture it from scratch. As educators, we have a responsibility to show students how to approach AI the same way they’d approach any meaningful collaboration—with preparation, respect, and a spirit of partnership. 

If we can instill that mindset early on, then AI doesn’t just become another technology to fear or ban. It becomes a tool that enhances creativity, encourages critical thinking, and invites students to take ownership of their own growth as writers. 

And that, in my opinion, is a party worth attending. 

MEET THE AUTHOR

Max Everhart has been an English instructor for 17 years and currently teaches at Florence-Darlington Technical College. He is the author of the Eli Sharpe mystery series, and his short stories and nonfiction have appeared in a dozen publications, including CutBank, Potomac Review, and OC87 Recovery Diaries. He recently completed a nonfiction book titled Talking to the Algorithm: What 30 Days of AI Conversations Taught Me About Being Human. He lives in South Carolina with his wife and son.