AI Ethics in the Classroom

Lewis Vaughn is an independent scholar and freelance writer living in Amherst, New York.

Artificial intelligence has become ubiquitous, with its bots and algorithms permeating every aspect of our lives. For several years now, it’s been too late to turn back; AI (often invisibly) powers our smartphones, vehicles, homes, and businesses. But with the more recent advent of widely available generative AI, educators now face the divisive challenge of how, or whether, to use this technology in the classroom. Some advocate for a ban, fearing that generative AI undermines critical thinking and threatens academic integrity. However, many teachers and institutions believe the best approach is to teach students how to navigate this reality, using AI productively and responsibly. For example, DePaul University’s policy toward AI reflects this attitude: 

Universities have a special challenge: on the one hand, we need to prepare our students for a world of work in which AI will certainly play a part, but on the other hand, we want our students to understand and practice integrity in the use of any sources, including those generated by Artificial Intelligence. DePaul faculty and staff have been charged [as the statement specifies elsewhere] “to create conditions for members of the University community to learn about both the benefits and dangers of AI and act responsibly.” (Center for Teaching and Learning)

Artificial intelligence refers to any computational system that performs tasks which, if done by humans, would be considered to require intelligence. AI systems can perceive and sense, predict outcomes, analyze vast amounts of data, translate languages, recognize images and voices, interpret medical tests, and operate tools and complex machines, such as autonomous vehicles. AI can also create. Generative AI, which includes large language models (LLMs) like ChatGPT, can produce videos, images, audio (including voices), and computer code. To the dismay of many teachers, it can also draft humanlike text for student essays, as well as poems, songs, jokes, and more. 

However, AI is flawed in ways that present technical and ethical challenges to both students and teachers. These challenges can be overcome by applying what AI lacks: critical thinking and a moral perspective. 

  1. Technical Challenges 

A serious misconception about AI in the classroom is that it can do students’ thinking for them, that it can provide them with ways to outsource their brains. This is a fantasy. According to Harvard’s AI Pedagogy Project, 

Large language models are deceptive, because they appear to understand, think, and reason. They do none of these things. They are designed to mimic humans, but they are not alive, do not have subjective experiences (despite their use of the term “I”), and cannot think or feel. (AI Pedagogy Project)

The main technical issue with AI is that it can sometimes provide inaccurate, misleading, plagiarized, or outdated information. It also has limited ability to assess the value of this information or specify its source. As the AI Pedagogy Project points out,  

Large language models, like all tools, are better at some things than others. Since they are designed to seem accurate rather than be accurate, there are many circumstances where large language models shouldn’t be used…. 

If you are in need of accurate information, do not use a large language model. Large language models are not designed to provide factual information. In fact, they can confidently state falsehoods! This is known as a “hallucination.” (AI Pedagogy Project)

The best way to combat such digital sloppiness is by using the same critical thinking skills that we want students to apply to any information, whether it comes from humans or chatbots. This involves developing a healthy skepticism toward AI-generated statements, using multiple independent sources to verify facts, checking the reasoning behind arguments and assumptions, and identifying any relevant viewpoints or data that may have been omitted. 

  2. Ethical Challenges 

For many teachers, AI’s technical problems are not as pressing as its threat to academic integrity and the moral norms that guide teaching and learning. A fundamental norm in the classroom is that students should not claim credit for work that isn’t theirs. This principle suggests acceptable and unacceptable uses of AI. For example, it seems acceptable to use AI for:  

  • Brainstorming ideas
  • Summarizing large amounts of information 
  • Checking grammar and logic 
  • Exploring aspects of a topic 
  • Fine-tuning research questions 
  • Drafting outlines 
  • Critiquing an essay (while deciding for yourself how to respond to the criticism) 

By contrast, it seems unacceptable to: 

  • Use AI-generated text in an essay without verifying its accuracy and providing citations (style guides, such as the Modern Language Association’s, now give guidance on how to cite AI material; see the sample citation after this list) 
  • Paraphrase AI-generated text without citation
  • Use AI to write discussion-board posts and responses
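
To make the citation point concrete: the MLA treats the prompt as the title of the cited material and the AI tool as its container, followed by the tool’s version, its developer, and the date of the exchange. A works-cited entry modeled on the MLA’s published guidance (the prompt, version, and dates below are hypothetical placeholders) would look something like this:

“Explain the role of fate in Romeo and Juliet” prompt. ChatGPT, 4 Mar. version, OpenAI, 15 Mar. 2024, chat.openai.com/chat.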

In every case, the underlying obligation is the same: students maintain academic integrity by properly citing AI-generated content and by never claiming its output as original work. Within these guidelines, they can use AI both productively and responsibly. 

Learning, of course, is a core value in education, and anything that impedes the learning process may be considered an educational injustice. It is, therefore, unacceptable to let AI write drafts of a paper or assignment, work through a tough reading for you, or simply hand you the answers.  

It can be tempting to use AI to write a first draft (or second or third). But this kind of personal outsourcing can undermine the very purpose of writing: to clarify thinking, engage with ideas, come to understand something for oneself — to learn. 

While AI can simplify tasks, particularly in writing papers, educational research indicates that genuine learning requires effort and struggle rather than passive absorption. According to journalist Jess Fong, educational researchers call this kind of challenging engagement “desirable difficulties”: practice that promotes learning precisely because it is demanding. If AI makes things too easy for students (for instance, by letting them avoid reading a challenging text), they will not learn effectively (Vox.com).

Another moral issue in this domain concerns bias. AI systems reflect the social biases in their training data and convey these biases in the information they provide to users. According to a report from Rutgers University, 

Biased AI can give consistently different outputs for certain groups compared to others. Biased outputs can discriminate based on race, gender, biological sex, nationality, social class, or many other factors. Human beings choose the data that algorithms use, and even if these humans make conscious efforts to eschew bias, it can still be baked into the data they select. Extensive testing and diverse teams can act as effective safeguards, but even with these measures in place, bias can still enter machine-learning processes. AI systems then automate and perpetuate biased models. (Brobeil)

The possibility of bias means that AI users must always ask: Who is represented in this data? Are all relevant perspectives included? What do other sources say on this topic? 

  3. Conclusion 

AI should be viewed as a tool and an assistant, not as an infallible researcher or a coauthor. While it can assist with tasks, it lacks the critical thinking and moral awareness we strive to instill in students. These human qualities are crucial for solving complex problems and making ethical decisions. Therefore, while AI can support us, it cannot replace the unique human attributes that drive true learning and responsible action. 

References

Center for Teaching and Learning. “Artificial Intelligence (AI) in Higher Education.” DePaul University, https://resources.depaul.edu/teaching-commons/teaching-guides/technology/artificial-intelligence/Pages/default.aspx.

AI Pedagogy Project. “AI Starter.” metaLAB (at) Harvard, https://aipedagogy.org/guide/starter/. Accessed 8 June 2024.

Vox.com. “AI Can Do Your Homework. Now What?” YouTube, uploaded by Vox, 12 Dec. 2023, https://youtu.be/bEJ0_TVXh-I. Accessed 12 June 2024.

Brobeil, Caroline. “Battling Bias in AI.” Rutgers University, https://stories.camden.rutgers.edu/battling-bias-in-ai/index.html. Accessed 12 June 2024.

Further Reading and Resources

Department of Education: Office of Educational Technology 

Hechinger Report: Teens are looking to AI for information and answers, two surveys show 

MLA Guide to Citing Generative AI 

MEET THE AUTHOR

Lewis Vaughn

He is the author of several leading textbooks, including Doing Ethics: Moral Reasoning, Theory, and Contemporary Issues and Beginning Ethics: An Introduction to Moral Philosophy.

