Potential Abstract:
This research article investigates the manifestation and perpetuation of subtle stereotypes in large language models (LLMs) through a semiotic analysis of conversational biases. As LLMs become increasingly prevalent in educational settings, it is imperative to uncover and address the ways in which they may reinforce harmful stereotypes, particularly in student-teacher interactions. Drawing on semiotic theory, this study examines how specific linguistic patterns and representations in LLM-generated responses contribute to the reinforcement of stereotypical beliefs and biases. By dissecting the underlying semiotic structures of these models' outputs, we aim to develop a deeper understanding of the mechanisms through which stereotypes are encoded and reproduced in educational dialogue.
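As a purely illustrative aid, the minimal Python sketch below shows one hypothetical way a conversational-bias probe of this kind might be operationalized: comparing LLM-generated teacher replies to student personas that differ in a single marked attribute, and counting lexical markers of differential framing. The persona labels, replies, and marker lexicon are invented placeholders, not the study's actual method or data.

# Hypothetical sketch: surface-level lexical probe for stereotype markers in
# LLM-generated teacher replies. Personas, replies, and the marker lexicon
# below are illustrative placeholders only.
from collections import Counter
import re

# Teacher replies an LLM might produce for student personas that differ only
# in a single marked attribute (placeholder text for illustration).
replies_by_persona = {
    "persona_a": "You clearly have a natural gift for math; the proof should feel easy.",
    "persona_b": "Don't worry if the proof feels hard; maybe focus on the essay instead.",
}

# Tiny marker lexicon grouping words that may signal differential framing
# (ability attribution vs. discouragement). A real analysis would require a
# validated, much larger lexicon and statistical testing across many prompts.
marker_lexicon = {
    "ability": {"gift", "natural", "easy", "talent"},
    "discouragement": {"worry", "hard", "instead", "struggle"},
}

def marker_counts(text: str) -> Counter:
    """Count how often each marker category appears in a reply."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for category, words in marker_lexicon.items():
        counts[category] = sum(1 for t in tokens if t in words)
    return counts

for persona, reply in replies_by_persona.items():
    print(persona, dict(marker_counts(reply)))

Such a lexical count is only a first pass; the semiotic analysis proposed in the abstract would additionally interpret how these patterns function as signs within the student-teacher exchange.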
Potential References:
- Semiotics, Artificial Intelligence, ChatGPT. Research Lines, Analytical Perspectives and Potential Applications
- Marked personas: Using natural language prompts to measure stereotypes in language models
- Generative AI’s Family Portraits of Whiteness: A Postdigital Semiotic Case Study
- Linguistic analysis of gender stereotypes in the language of mass media
- Textual Analysis