Potential Abstract: This paper explores the intersection of prejudice and large language models within educational contexts through a Foucauldian lens. As artificial intelligence plays an increasingly significant role in education, it is crucial to examine critically how biases are perpetuated and reinforced through these technologies. Drawing on Foucault’s concepts of power, discourse, and surveillance, this study examines the ways in which large language models contribute to the reproduction of prejudice and power dynamics in educational settings. By analyzing conversations between users and large language models, we uncover the subtle ways in which biases are encoded and normalized in educational interactions. This research sheds light on the socio-political implications of artificial intelligence in education and emphasizes the need for ethical considerations in the development and deployment of such technologies.
Potential References:
- Ethical and social risks of harm from language models
- Taxonomy of risks posed by language models
- Bias in word embeddings
- Technological Prejudice: Demonstrating the Ontological Challenge of Building a Critical Theory of Artificial Intelligence
- A Critical Discourse Analysis of ChatGPT’s Role in Knowledge and Power Production