
Confronting Bias in Generative AI


This lesson plan was designed by Ronald E. Bulanda and Jennifer Roebuck Bulanda and based on findings in their literature review on assessing bias in large language models.

Instructor Notes

Before using this assignment with students, instructors should review this literature review on assessing bias in large language models and the suggested reading assigned to students in Step 1 below. We suggest three case study options for students to explore in groups. Instructors should review each of these and decide which to use as appropriate for their class. As additional articles become available on future iterations of generative AI programs, instructors may choose to replace or supplement them with different articles or case studies.

Student Learning Outcomes

After completing this assignment, students will be able to:

  • Explain why text generated by AI might be biased.
  • Discuss different forms of bias that might be present in AI-generated text.
  • Evaluate at least one consequence of bias in AI, and assess how that consequence may be avoided or mitigated.

Assignment Directions

Step 1: Before class, have students read a short article on concerns about bias in AI.

Step 2: During class (or outside of class in a pre-recorded mini-lecture), the instructor should provide 1) an overview of the reasons bias may exist in AI, and 2) the types of bias that may be present (e.g., racial, gender). See [insert link to our paired literature review] for an overview of both of these issues.

Step 3: Have students work in small teams of 5-6 to examine one of the case studies identified in the Instructor Notes above (instructors can choose to assign just one, or offer students a choice).

Step 4: While still in their small groups, have students open ChatGPT and Bard. Then, have them type the prompt, “What kinds of bias might appear in your output?” into both programs. Alternately, or in addition, you may have them type the prompt “Why might your output include [SPECIFY] bias?” (e.g., gender, age). Have students extend this dialogue with the systems through follow-up questions, including requests for examples of biased output each system could produce.

Step 5: Following group work, engage the full class in a discussion focused on applying what students have learned from their reading and the instructor’s overview to what they evaluated in their groups. Discussion questions may include:

  • What are some of the reasons that the bias documented in the case study may exist? 
  • What type(s) of bias resulted?
  • What are some of the potential individual and/or societal consequences of this bias?
  • To what degree do you think the general public is aware of potential bias in AI? To what degree were you aware of it?
  • What are some of the ways we can potentially eliminate or ameliorate bias in AI? Whose responsibility is it to do so? What recommendations would you make?

Step 6: End the lesson by having students complete a low-stakes writing assignment.

  • Either during or outside of class, have students write a 2-3 paragraph reflection that distills the reading and group discussion into their own brief assessment of the causes and consequences of bias in AI output. They may choose to consider the implications of these biases at the micro and macro levels. They may also choose to address if/how this exercise may affect their use and interpretation of AI systems inside and outside of the classroom.

Howe Center for Writing Excellence

The mission of the HCWE is to ensure that Miami supports its students in developing as effective writers in college, and fully prepares all of its graduates to excel as clear, concise, and persuasive writers in their careers, communities, and personal lives.

Contact Us

151 S. Campus Ave
King Library
Oxford, OH 45056
hcwe@MiamiOH.edu
513-529-6100


2022 Writing Program Certificate of Excellence

2022 Exemplary Enduring WAC Program