Abstract: This research article explores the potential of cognitive scales for improving the performance and interpretability of large language models, such as transformer-based architectures. Large language models have shown remarkable success in a wide range of natural language processing tasks, but their opacity and lack of interpretability hinder their application in educational contexts. By integrating cognitive scales into these models, we aim to enhance their transparency and understandability, making them more useful for educational purposes.
This study leverages stigmergic mechanisms, inspired by the collective intelligence observed in social insects, to incorporate cognitive scales into large language models. Stigmergy allows for indirect coordination among individuals through modification of a shared environment, enabling self-organizing structures to emerge. We employ directed acyclic graphs (DAGs) as the underlying framework for representing and managing these cognitive scales, providing a flexible and scalable representation.
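The stigmergic mechanism described above can be sketched in miniature. The following is an illustrative toy model, not the article's implementation: agents deposit "pheromone" on the edges of a small DAG, later agents read those environmental traces to bias their own routing, and shorter paths accumulate reinforcement faster, so a preferred structure emerges without any direct agent-to-agent communication. All names (`EDGES`, `simulate`, the deposit rule) are assumptions for this sketch.

```python
# Toy stigmergy on a DAG: the shared environment is a pheromone level per
# edge; agents coordinate only indirectly, by modifying and reading it.
EDGES = {
    "s": ["a", "b"],  # two competing routes from source "s" to sink "t"
    "a": ["t"],       # short route: s -> a -> t (2 edges)
    "b": ["c"],       # long route:  s -> b -> c -> t (3 edges)
    "c": ["t"],
    "t": [],
}

def all_paths(node="s", path=()):
    """Enumerate every s->t path as a tuple of edges."""
    if node == "t":
        yield path
    for nxt in EDGES[node]:
        yield from all_paths(nxt, path + ((node, nxt),))

def path_prob(path, pheromone):
    """Probability an agent follows this path, choosing each edge
    in proportion to its pheromone relative to sibling edges."""
    prob = 1.0
    for (u, v) in path:
        total = sum(pheromone[(u, w)] for w in EDGES[u])
        prob *= pheromone[(u, v)] / total
    return prob

def simulate(steps=50, evaporation=0.1, deposit=1.0):
    """Expected-value dynamics: each step routes a unit of agent mass
    across the paths, deposits pheromone per edge, then evaporates."""
    pheromone = {(u, v): 1.0 for u in EDGES for v in EDGES[u]}
    paths = list(all_paths())
    for _ in range(steps):
        masses = [path_prob(p, pheromone) for p in paths]
        for p, mass in zip(paths, masses):
            for edge in p:
                # Per-edge deposit shrinks with path length, so the
                # shorter route is reinforced faster (positive feedback).
                pheromone[edge] += deposit * mass / len(p)
        for edge in pheromone:
            pheromone[edge] *= (1.0 - evaporation)
    return pheromone
```

After a few dozen steps the pheromone on the short route's first edge dominates the long route's, illustrating how locally deposited traces let a global structure self-organize.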
To investigate the impact of cognitive scales on large language models, we propose a comprehensive experimental design. We will conduct a series of controlled experiments using benchmark datasets and established evaluation metrics. Our experiments will involve training and evaluating both traditional large language models and the modified models with integrated cognitive scales. Quantitative analyses will be performed to compare their performance across various language tasks, including language generation, summarization, sentiment analysis, and question-answering.
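The controlled comparison described above can be outlined as a small evaluation harness. This is a hypothetical sketch: the model interface (a callable from input text to output text), the metric, and the dataset shape are assumptions, not the article's protocol, and real experiments would use established benchmarks and metrics per task.

```python
# Hypothetical harness for comparing a baseline model against a
# cognitive-scale-augmented variant on the same dataset and metric.

def exact_match(pred: str, gold: str) -> float:
    """A simple illustrative metric; real tasks would use e.g. ROUGE or F1."""
    return float(pred.strip().lower() == gold.strip().lower())

def evaluate(model, dataset, metric) -> float:
    """Mean metric score of `model` (a text -> text callable) over
    (input, reference) pairs."""
    scores = [metric(model(x), y) for x, y in dataset]
    return sum(scores) / len(scores)

def compare(baseline, modified, dataset, metric=exact_match):
    """Run both models under identical conditions and report the delta."""
    base = evaluate(baseline, dataset, metric)
    mod = evaluate(modified, dataset, metric)
    return {"baseline": base, "modified": mod, "delta": mod - base}
```

The point of the sketch is the design, not the models: both systems see identical inputs and are scored by identical metrics, so any performance delta is attributable to the integrated cognitive scales.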
Furthermore, we will perform qualitative analyses to examine the interpretability of the cognitive scales in the modified models. We will explore how the inclusion of cognitive scales affects the ability to explain model predictions and provide insights into the decision-making process. Through these analyses, we aim to shed light on the ways in which cognitive scales can improve the transparency and interpretability of large language models, making them more suitable for educational settings.