As the Director of Campus Writing at a small, private, liberal arts university, much of my administrative work over the past two and a half years has focused on helping my colleagues figure out how to respond to the emergence of generative AI tools. It didn’t take long to recognize that some of the most severe reactions to generative AI (such as suggestions that student writing should be assigned as in-class work in blue books, or calls for a new generation of AI-checking software to complement the existing arsenal of plagiarism checkers) seemed grounded more in faculty fears about student integrity than in principles of good writing pedagogy. When overstressed faculty assume that students are lazy, unmotivated, and just looking for the path of least resistance through required assignments, it is easy to cultivate attitudes of suspicion about student work—and about the students who produce it.
The first-year writing classroom is a key stress point where faculty fears about generative AI and student vulnerability are put to the test. In my university’s first-year composition program, students commonly struggle to negotiate the demands of writing in an unfamiliar environment, where many of the skills and genres that they were taught in high school—such as the five-paragraph essay or writing for AP exams—are no longer sufficient for addressing the wider expectations of audience, modes of inquiry, and flexible processes of writing in college.
Teaching Generative AI Literacy in the First-Year Writing Classroom
Last fall I made my first attempt at teaching a first-year composition course in which students would be permitted to use generative AI on most assignments. After spending two years supporting faculty in developing strategies to respond less reactively to student use of generative AI and crafting policies and guidelines meant to mitigate inappropriate uses of the tools, it was time for me to take a dose of my own medicine. My course readings focused on how generative AI tools work, what their affordances and limitations are, and some of the ethical and environmental implications of the large language models (LLMs) that generative AI tools run on. We used resources from The Little Seagull Handbook and They Say/I Say to learn how to read generative AI output critically, how to cite and acknowledge use of AI tools, and how to develop prompts that would clarify generative AI’s role as a supportive tool in the writing process without surrendering a writer’s agency.
My hope was that by bringing generative AI tools out of the shadows, my students and I could have frank discussions about what these tools can and cannot do for students who are learning academic writing. By allowing and even inviting students to experiment with the tools in a relatively consequence-free learning environment, I thought we could look together at what kind of role generative AI might play in drafting and revising student writing. We could also see where an overdependence on those tools leads to failures of communication or a breakdown in the writing process. In the spirit of wanting to create a writing classroom characterized by hospitality rather than suspicion, my goal was to help students develop crucial AI literacies that would continue to support them in future writing situations beyond my class.
Almost every formal writing assignment allowed students to use generative AI tools and provided detailed instructions for how to cite and acknowledge the tools when they were used. Two assignments explicitly asked students not to use the tools and offered a rationale for why, explaining that I wanted to see their own thinking and clarifying that they would not be penalized for messy thinking or writing. Even for these assignments, I gave students guidelines for citing the tools if they decided to use them anyway. I also explained that generative AI was off-limits for informal reflection writing; I do not believe that generative AI can reflect on behalf of the student. After attempting to create legitimate pathways where students could try out generative AI tools without repercussions, I was surprised to see the variety of ways students responded to the offer.
Encountering Prior Assumptions
Some students started the quarter by using generative AI tools on their first assignments—those that did not allow it and those that did—without acknowledgement or attribution. When I asked one student why she didn’t cite her use of a tool, since that was the minimum expectation for using the tools without consequences, she began to tear up. She explained to me that she knew she was allowed to use generative AI, but she was embarrassed to admit that she “needed” to do so. As a student writing in a language that was not her first language, she was worried that she did not understand our readings well enough and that she would not be able to write well enough to communicate the way she wanted to. Her writing had often been criticized in high school, and she had turned to ChatGPT in her final year before graduation. Now she was afraid that, in a university environment, the perceived shortcomings in her writing would be laid bare for all to see.
Other students staunchly resisted generative AI tools throughout the term, regardless of whether the assignment permitted their use or not. Since I wanted to help my students develop AI literacy, I had hoped that more students would at least be willing to experiment with the tools, even if they didn’t embrace them, but these resisting students seemed completely uninterested. When I asked my class to reflect on the usefulness of a specific set of guidelines for prompting tools and acknowledging their use, one student wrote that the guidelines were “impractical and unhelpful”: “This is mainly because I don’t plan on ever using AI for academic writing (or writing in general) unless I am forced.” Another student confidently told me that she would never allow herself to experiment with generative AI because she was afraid that after seeing what the tools could do, she would become “lazy,” and she never wanted to be “that kind of person.”
Both types of response have helped me see that even the most transparent and well-intentioned efforts to address generative AI in the classroom will fall short if they don’t make space for students’ prior assumptions about generative AI tools and what it means to use them. Many of these assumptions are shaped by messages students have absorbed elsewhere: the moral proclamations of authority figures (such as teachers and parents) about the “kinds of people” who use AI; testimonials on social media and from classmates about how AI tools can save students time by doing their work for them; and the marketing hype that presents all kinds of AI as a supposed technological “miracle” on the cutting edge of all future work and writing. So much of this communication carries a deep emotional charge of fear, anxiety, shame, hope, and excitement. It should be expected that students carry this mix of feelings about AI tools—as, of course, do their professors—regardless of whether we address the tools in our teaching directly.
Next Steps
As I look forward to the next time I teach this class, I plan to take new steps toward helping my students think about generative AI while giving more care to the way the technology intersects with the often-unspoken social and emotional aspects of students’ writing experiences.
- Introduce more opportunities for focused reflection early in the term: Informal reflective writing, far more than a quiz or survey, gives instructors a useful, low-stakes way to gauge student attitudes, presuppositions, and concerns about writing and to learn about their students’ individual histories. Even more importantly, it affords an opportunity for students to discover their own thoughts about writing, generative AI, and the values that they associate with each. While there is always a chance that writers might underreport their experiences or misrepresent their views about generative AI, my hope is that offering more chances for reflection early on will create a norm within which students become more comfortable externalizing their thoughts about the technology.
- Connect reflection to in-class conversations: The work of reflection becomes even more valuable when it bears a relationship to the shared work of the classroom community. In my previous attempt at teaching the class, I spent too much time posing spontaneous questions to wary students (concerned, perhaps, that I was looking for a “right” answer about generative AI). Many of these questions were met with little more than the sound of crickets. However, when students share their reflections with each other (as opposed to just me), they briefly decenter me and my agenda (whether real or imagined) and begin to learn from each other’s experiences. And, when students are asked to reflect after an in-class discussion, they can draw new connections between what they hear from classmates and their own thoughts about generative AI.
- Pay attention to how I express my own assumptions: Reflection is an important tool for instructors, too. This post is part of my reflective process. While I worked hard in my class to be transparent about my own thoughts, I now realize that I also telegraphed unexamined assumptions about why students might use generative AI and about the values they attached to the tools. I am becoming more mindful of the range of experiences students have and attitudes they hold. This is not to say I see generative AI as ethically neutral—indeed, one of my goals in teaching AI literacy is to help students develop wisdom about the complex choices involved with using generative AI—but I only work against my own pedagogical goals if my teaching about the tools reinforces feelings of shame, self-criticism, or judgment of other writers.
The first principle of “Generative AI and Policy Development,” a working paper from the MLA-CCCC Joint Task Force on Writing and AI, recommends that policies focus not only on academic integrity and learner outcomes but also keep the teacher-student relationship at their core. This seems like sound advice, and as I planned my class I thought I was following it. In practice, though, I had been focusing only on my own posture as a teacher. That focus is worth maintaining as faculty respond to the “threat” and “promise” of generative AI, but it attends to only one side of the relationship. While I trust that my course gave students some new ways of understanding generative AI, I think my first attempt ultimately put more strain on my relationship with this group of students, and it did little to move the needle on their attitudes about the tools. In my eagerness to address the challenge of generative AI head-on, and to help my students develop AI literacy, I failed to account for the complicated mix of emotional, intellectual, and ethical presuppositions that informed my students’ thinking about these tools. This is a reminder that teaching about generative AI and writing, just like teaching writing in general, should invite students to make connections with their own experiences even as they are being invited into new strategies of academic inquiry.
MEET THE AUTHOR

Traynor Hansen III, PhD, is Associate Professor of English & Writing and the Writing Program Administrator at Seattle Pacific University. His research is rooted in British literature of the long eighteenth century, focusing especially on nonfiction prose of the Romantic period. His teaching and administrative commitments as a WPA are concerned with helping students and faculty think more clearly about generative AI tools—especially when managing student writing anxiety and bridging the gap between first-year writing classes and more advanced disciplinary writing. He is also, unfortunately, a lifelong Seattle Mariners fan.