Why you should understand how to teach with AI tools — even if you have no plans to actually use them.
Large language models (LLMs) such as ChatGPT are sophisticated statistical models that predict probable word sequences in response to a prompt, even though they do not “understand” language in any human-like sense. Through intensive mining, modeling, and memorization of vast stores of language data “scraped” from the internet, these text generators deliver a few paragraphs at a time that resemble writing authored by humans. This synthetic text is not directly “plagiarized” from some original, and it is usually grammatically and syntactically well-crafted.
The public release of ChatGPT has provoked much debate as to the potential consequences of artificial intelligence (AI) for writing education. For second language educators and researchers in particular, AI-generated writing poses a major concern for student learning. However, it is also important to recognize the powerful affordances of ChatGPT for second language writers in their studies and careers. We argue that the value of ChatGPT and similar AI-based writing tools for second language learners must be understood through three contradictions: the “imitation” contradiction, the “rich get richer” contradiction, and the “with or without” contradiction. To most effectively address these contradictions, we propose a five-part pedagogical framework focusing on understanding, accessing, prompting, corroborating, and incorporating AI in the writing process. By facilitating conversations about how to use AI in the classroom, we can better support second language learners’ writing development and prepare these students for a world that increasingly values one’s ability to understand and use AI.
What do teachers who assign writing need to know about AI text generators? How should we change our pedagogical practices, given the recent advances in AI Large Language Models (LLMs) such as OpenAI's GPT-3, as recently covered in The New York Times, The New Yorker, The Chronicle of Higher Education, and Inside Higher Ed? How should teachers participate in shaping policies around these technologies in our departments, institutions, and society?
This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. Prompts are included for each of these approaches. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI’s output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementarity of AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop," the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.
Classroom project on the ethical use of artificial intelligence tools like ChatGPT addresses how students can use such tools and how faculty should grade the resulting work.
Students should learn how to use AI text generators and other AI-based assistive resources (collectively, AI tools) to enhance rather than damage their developing abilities as writers, coders, communicators, and thinkers. Instructors should ensure fair grading for both those who do and those who do not use AI tools. The GAIA policy stresses transparency, fairness, and honoring relevant stakeholders: students eager to learn and build careers, families who send students to the university, professors who are charged with teaching vital skills, the university that has a responsibility to attest to student competency with diplomas, future employers who invest in students because of their abilities and character, and colleagues who lack privileged access to valuable resources. To that end, the GAIA policy adopts a few commonsense limitations on an otherwise embracing approach to AI tools.
GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness. Addressing the biases in these detectors is crucial to prevent the marginalization of non-native English speakers in evaluative and educational settings and to create a more equitable digital landscape.
The technology promises to assist students with disadvantages in developing skills needed for success. But AI also threatens to widen the education gap like no tech before it.
Since January 2023, I’ve talked with hundreds of instructors at dozens of institutions about how they might incorporate AI into their teaching.
In this post, we are collecting the slides from some of our presentations. We’ll keep it updated over time, and also remove older presentations when the content becomes outdated or superseded.
Monash University supports the responsible and ethical use of generative AI. To equip our graduates with the skills that they need to develop with emerging technologies, we have a duty to explore and educate students on the benefits of the judicious use of technologies such as ChatGPT, while also ensuring they understand the risks and ethical considerations of such tools.
This assignment asks first-year undergraduate history and English students to use AI writing models to aid in accessing and understanding readings on specific topics, including the Declaration of Independence and readings on rhetorical analysis. Students asked AI questions about the texts and evaluated how AI created academic citations. In doing so, they used AI to understand the readings while also engaging in critical thinking about the use of AI itself.
Many of our assignments are designed to show off what ChatGPT can do, while others are intended to build various aspects of AI fluency in students. A third group suggests ways to leverage ChatGPT in writing assignments, recognizing that we might need to revisit our original reasons for assigning writing and re-examine the student learning outcomes we are attempting to achieve. In the future, will “writing” really mean “editing” rather than “drafting/composing”? Will future careers involve editing and refining AI output instead of starting writing, even analysis, from scratch?
The following guidance aims to help instructors think through the impact of generative artificial intelligence (AI) tools on teaching and learning. Specifically, these guidelines suggest that instructors:
Giving students guidelines for AI use on assignments—via a green, yellow, or red light—provides clarity around this powerful technology.
A guidebook for educators on the capabilities, limitations, and possibilities of generative AI in teaching and learning at McMaster University