TAs and Generative AI

According to the Provisional Guidelines on the Use of Generative AI in Teaching and Learning at McMaster, instructors decide whether, and to what extent, generative AI is acceptable in their course. Make sure you're aware of how generative AI is or isn't permitted for each assessment you're marking.

Recognizing AI-generated writing while marking 

Note: McMaster will be enabling Turnitin’s AI Detection Tool pending privacy impact and security assessments. However, care should be exercised when using such tools given issues with accuracy and reliability.  

Most AI writing tools are built on a large language model (LLM). LLMs work by predicting which words are likely to follow one another, based on recurring patterns in the training data and context cues from the surrounding text. This can make it challenging to tell the difference between human- and machine-written content. Here are some things to look out for.
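To make the idea of next-word prediction concrete, here is a toy sketch. Real LLMs use large neural networks over subword tokens, not simple word counts, but the core mechanism is the same: given the text so far, pick a likely next word from patterns seen in the training data. The corpus and function names below are purely illustrative.

```python
from collections import defaultdict, Counter

# A tiny "training corpus" (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" more often than "mat" or "fish" in this corpus,
# so the model predicts "cat".
print(predict_next("the"))
```

Because the output is driven entirely by which continuations were most frequent in the training data, the generated text tends toward the statistically "safe" choice, which is one reason AI-generated prose can read as uniform.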

Tone & style

Because LLMs use statistical association to determine the probability of word placement, the output often lacks transition words and tonal variety, making it read as uniform, almost robotic. AI-generated output also often includes a high frequency of keywords (e.g., words provided in the prompt), which can result in repetitive language. When people write, tone, style, and language usually vary throughout the text as our thought patterns shift.

However, tools like undetectable.ai can check content against AI detectors and "humanize" the output. Relying solely on tone and style also risks misclassifying writing by non-native English speakers as AI-generated, which raises equity concerns.

It can be helpful to look for evidence of opinions or reference to personal experiences, though this may not always be relevant and will depend on the type of assessment.   

Accuracy

“Hallucination” refers to AI-generated text that is not grounded in the training data or the input provided. It can occur when the model’s predictions are based on weak or incorrect patterns (e.g., reference to current events), which can lead to responses that seem plausible but are not accurate.  

Recognizing hallucinations in AI-generated text can be challenging, especially when the writing sounds authoritative. Here are some tips to help you identify potential hallucinations:

  • Verify the information: Cross-check suspect information with reliable sources.
  • Look for inconsistencies: Pay attention to inconsistencies in the response, such as contradictions or information that doesn’t align with your existing knowledge.  
  • Assess the relevance: Evaluate whether the response is relevant to the question/assessment.  
  • Be cautious with unfamiliar terms: If the writing uses unfamiliar terms or jargon, especially in technical contexts, take the time to research and verify their validity.  
  • Seek expert advice: When in doubt, consult with the course instructor. 

If you suspect generative AI was used in a way that’s not permitted, contact the course instructor. 

Using Generative AI as a TA 

According to the Provisional Guidelines on the Use of Generative AI in Teaching and Learning at McMaster, course instructors decide if generative AI use is permitted, required, or prohibited for teaching tasks. Make sure you’re aware if/how you can use generative AI in your duties as a teaching assistant.  

Required use

If the course instructor requires the use of generative AI tools, you may need training on the tool. If so, you should be compensated for the time taken to complete the training, and it should be accounted for in your Hours of Work form.

Permitted use

If the instructor has not required the use of AI to perform your duties, and you would like to use AI to complete your work, you must inform the instructor of your intended use of generative AI and receive approval before implementation. Once approved by the instructor, you should also disclose to your students how you are using generative AI, both in class and in the course outline.

You may wish to spend some time reviewing risks and limitations of using generative AI, some of which include perpetuation of biases, environmental and human costs associated with training and use, as well as concerns around privacy and copyright.  

The provisional guidelines permit the use of generative AI tools to provide formative feedback on student work, but not to provide summative evaluation; a "pass/fail" or "completion" grade may accompany formative feedback. There are some key considerations to keep in mind for this particular use:

  • Students must consent to having their work submitted to a generative AI tool and be able to opt out.  
  • Whenever inputting student work, you should ensure that data collection is turned off on generative AI tools (e.g., turn off chat history and model training in ChatGPT). 

Other ways in which you may consider using generative AI include generating discussion questions, examples, explanations, scenarios, and ideas for class activities. Further details and examples are provided in Using GenAI as an Instructor. 

Make sure you evaluate any teaching materials or formative feedback generated for accuracy before sharing them with students.