Objectives
This project, led by Dr. Frederic Lemieux at the School of Continuing Studies (SCS), focused on developing AI-based tools for detecting and mitigating human cognitive biases in text and speech. Funded by IPAI and supported by CGI, a technology-based solutions company, the project's first phase established the groundwork for the AI model: the team collected a diverse dataset of open-source written materials and trained the model to identify common cognitive biases. With CGI providing access to tools for model training and data labeling, project members used prompt engineering with a large language model (LLM) hosted on Amazon SageMaker to guide the model in recognizing specific biases, a foundational step toward building robust AI literacy among students.
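As a rough illustration of this workflow, the sketch below shows how instructional tags might frame a bias-detection prompt sent to a SageMaker inference endpoint via boto3. The endpoint name, payload schema, and tag wording are assumptions made for illustration; the report does not specify the project's actual prompt template or deployment details.

```python
import json
import boto3

# Hypothetical endpoint name; the project's actual deployment is not described.
ENDPOINT_NAME = "bias-detection-llm"

runtime = boto3.client("sagemaker-runtime")

def detect_bias(text: str, bias: str) -> dict:
    """Ask the hosted LLM whether a specific cognitive bias appears in `text`."""
    # Instructional tags delimit the task, the target bias, and the passage,
    # steering the model toward a constrained PRESENT / NOT PRESENT judgment.
    prompt = (
        "<task>Decide whether the passage exhibits the named cognitive bias. "
        "Answer PRESENT or NOT PRESENT, then give one sentence of reasoning.</task>\n"
        f"<bias>{bias}</bias>\n"
        f"<passage>{text}</passage>"
    )
    # Payload schema assumed here; the real endpoint's input format may differ.
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 128}}),
    )
    return json.loads(response["Body"].read())

print(detect_bias(
    "Our theory must be right; every article I saved agrees with it.",
    "confirmation bias",
))
```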
This initial stage was crucial for equipping the AI system to recognize a broad range of cognitive biases across varied textual sources, so the team focused on assembling an extensive collection of open-source written materials to serve as the model's foundational training set. The team also selected AI frameworks and algorithms capable of analyzing substantial datasets and identifying bias patterns. Equally essential in this phase was establishing clear metrics to measure how accurately the model detects biases without overgeneralizing or missing subtle instances.
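A minimal sketch of such evaluation metrics, assuming annotations come as (human label, model verdict) pairs for one bias type: accuracy captures overall agreement, precision penalizes overgeneralizing (flagging bias that is not there), and recall penalizes missing subtle instances. The sample data is a stand-in, not the project's.

```python
from collections import Counter

# Illustrative only: each pair is (human label, model verdict) for one bias.
annotations = [
    ("present", "present"), ("absent", "absent"),
    ("present", "absent"),  ("absent", "absent"),
]

counts = Counter(annotations)
tp = counts[("present", "present")]   # correctly flagged bias
tn = counts[("absent", "absent")]     # correctly passed unbiased text
fp = counts[("absent", "present")]    # overgeneralized: flagged bias that is not there
fn = counts[("present", "absent")]    # missed a subtle instance

accuracy  = (tp + tn) / len(annotations)
precision = tp / (tp + fp) if tp + fp else 0.0
recall    = tp / (tp + fn) if tp + fn else 0.0
print(f"accuracy={accuracy:.2%} precision={precision:.2%} recall={recall:.2%}")
```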
Outcomes
The project gave students the unique opportunity to engage directly with a large language model (LLM) hosted on Amazon SageMaker, deepening their understanding of prompt engineering techniques. By interacting with the LLM, students learned to instruct it to identify cognitive biases in text using carefully crafted instructional tags. Students then contributed to the model's reinforcement learning by assessing the LLM's responses and offering corrections grounded in logical reasoning for each type of bias. The project highlighted the importance of refining prompts to guide the LLM more effectively, demonstrating how instructional techniques can act as guardrails that reduce hallucinations.
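A minimal sketch of how student assessments might be recorded for later training rounds, assuming a simple JSONL log. The record fields, file format, and example values are hypothetical; the report does not describe how feedback was actually stored.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record format for one reviewed model response.
@dataclass
class FeedbackRecord:
    sample_id: int
    bias: str
    model_answer: str      # the LLM's verdict and reasoning
    student_verdict: str   # "correct" or "incorrect"
    correction: str        # student's logical reasoning for this bias type

def append_feedback(record: FeedbackRecord, path: str = "feedback.jsonl") -> None:
    """Append one reviewed sample so it can feed into later training rounds."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_feedback(FeedbackRecord(
    sample_id=42,
    bias="anchoring bias",
    model_answer="PRESENT: the writer fixates on the first estimate given.",
    student_verdict="correct",
    correction="The initial figure anchors all later judgments in the passage.",
))
```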
The table below (accuracy results for each bias) benchmarks the project's model against Mixtral 8x7B and Llama 3 70B, detailing how often each model correctly identified the presence or absence of the specified bias in the sample texts. For example, the SageMaker-hosted model detected whether confirmation bias was present in the sample texts with 99.17% accuracy, showing that the project's prompt engineering template grounds the model well enough to detect bias reliably. In contrast, the other models, which received only a basic prompt template, had significantly more difficulty identifying bias presence accurately, and one of the research assistants found examples where they produced hallucinations, unlike the SageMaker-hosted model. In total, Georgetown University (GU) students annotated 2,160 sample texts.
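A minimal sketch of how such a per-bias, per-model benchmark could be tallied from the annotated samples. The CSV file name and column names are hypothetical; in practice, the rows would come from the 2,160 student-annotated texts paired with each model's predictions.

```python
import csv
from collections import defaultdict

def per_bias_accuracy(rows):
    """rows: dicts with 'model', 'bias', 'gold', and 'prediction' keys."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for row in rows:
        key = (row["model"], row["bias"])
        totals[key] += 1
        hits[key] += row["prediction"] == row["gold"]  # bool counts as 0/1
    return {key: hits[key] / totals[key] for key in totals}

# Hypothetical annotations file: one row per (model, bias, sample) judgment.
with open("annotations.csv", newline="", encoding="utf-8") as f:
    results = per_bias_accuracy(csv.DictReader(f))

for (model, bias), acc in sorted(results.items()):
    print(f"{model:>20} | {bias:<20} | {acc:6.2%}")
```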
Future Applications
By training the AI to recognize biases through prompt engineering, this research has set the groundwork for various impactful uses. Embedding bias detection into decision support systems could enhance objectivity in areas like healthcare, law, and intelligence, where subjective interpretation often affects outcomes. Bias-detection tools also promise greater AI transparency, crucial for explainability in regulated sectors like finance and public policy. The project’s educational approach to training students in AI literacy could be expanded to build essential AI skills and bias awareness across fields. In high-stakes environments like national security, real-time bias alerts could aid balanced decision-making under pressure, while collaborative platforms could prevent groupthink in corporate and governmental settings, encouraging diverse perspectives. Finally, in public discourse, bias-detection tools could help policymakers understand cognitive biases’ impact on public opinion, providing a means to counter misinformation and promote informed dialogue.
Frederic Lemieux
Ph.D., Professor of the Practice and Faculty Director for the Master’s in Applied Intelligence Management, Cybersecurity Risk Management, and Information Technology Management programs