Abstract: This research article explores the intersectional ecologies of learning through the lens of large language models (LLMs) within a Kristevan milieu framework. As artificial intelligence and machine learning technologies become increasingly embedded in educational contexts, it is essential to critically examine their impact on the diverse and complex social identities of learners. Drawing upon the theoretical insights from intersectionality, ecological systems theory, and the works of Julia Kristeva, this study investigates how LLMs interact with learners’ multiple dimensions of identity and how these interactions shape the learning experience.
The article proposes a conceptual framework that integrates the intersectionality perspective with ecological systems theory to analyze the dynamic interplay between LLMs and learners’ sociocultural, linguistic, and individual contexts. This framework highlights the importance of examining micro-, meso-, and macro-level influences to understand how LLMs can support or hinder the development of learners’ identities and knowledge construction.
Furthermore, this study employs qualitative research methods to explore the experiences of educators and students in educational settings where LLMs are implemented. Through interviews, observations, and document analysis, the research investigates the nuanced ways in which LLMs influence language acquisition, cognitive processes, and social interactions in diverse learning environments.
The findings of this research contribute to both the field of artificial intelligence in education and educational theory. By adopting an intersectional lens, this study acknowledges the complex and multifaceted nature of learners’ identities and challenges the assumption of homogeneous learner profiles embedded in LLMs. Additionally, the ecological perspective provides a holistic understanding of the impact of LLMs on learners’ educational experiences.