While the innovation and creativity of generative AI are exciting, these systems come with significant limitations and ethical challenges. One of the most common criticisms levelled against GenAI tools is that they make things up. As probabilistic models, they are designed to generate the most likely response to a given prompt. Because these tools do not ‘know’ anything and are, in most instances, limited in their ability to fact-check, the responses they generate can include factual errors and invented citations or references. This phenomenon is known as ‘hallucination’.
You can learn more about general limitations and risks in the Generative Artificial Intelligence in Teaching and Learning at McMaster University Pressbook.
Thompson Rivers University developed a Critical AI Framework to help you weigh these limitations and risks when deciding whether a generative AI tool is right for your project, classroom context, or workflow.
Prior to (or instead of) using AI with your students
Ignoring the “problem” won’t make it go away. If you’re unsure about using AI, it can be helpful to make space for conversation and engage in collective knowledge building before you consider integrating these systems into your classroom.
Autumm Caines, an instructional designer at the University of Michigan-Dearborn, suggests several activities instructors can do with their students prior to, or instead of, using ChatGPT directly. These include:
- Socially annotating OpenAI’s Terms of Service and Privacy Policy
- Playing the data, privacy, and identity game with your students
- Discussing big issues around AI (e.g., labour, climate)
- Conducting a techno-ethical audit
References
Caines, A. (2023, January 19). Prior to (or instead of) using ChatGPT with your students. Is a Liminal Space. https://autumm.edtech.fm/2023/01/18/prior-to-or-instead-of-using-chatgpt-with-your-students/
Hao, K. (2019, June 6). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
Perrigo, B. (2023, January 18). Exclusive: The $2 per hour workers who made ChatGPT safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Satia, A., Verkoeyen, S., Kehoe, J., Mordell, D., Allard, E., & Aspenlieder, E. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/chapter/generative-ai-limitations-and-potential-risks-for-student-learning/
Society & Ethics. (n.d.). Diffusion Bias Explorer. Hugging Face. Retrieved October 26, 2023, from https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer
Thompson Rivers University. (n.d.). Critical AI Framework. AI in Education. Retrieved October 26, 2023, from https://aieducation.trubox.ca/critical-ai-framework/
Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge. https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data