Impact of AI in Instructional Assessment

Objectives

During the summer term of 2024, three sections of Leadership Communications, an MBA core course in the Management area, embarked on a pilot project to explore the integration of artificial intelligence in instructional assessment. The experimental initiative aimed to assess the comparative quality and impact of AI-generated feedback versus human-generated feedback. Conducted as a randomized control trial, the project sought to understand the “human-likeness” of AI feedback, evaluate student sentiments toward AI use in assessments, and explore how AI tools could influence perceptions of feedback quality. The ultimate goal was to uncover the limits, risks, and opportunities that AI integration presents for faculty in academic settings.

Outcome

Generative AI tools allowed students to receive high-quality, actionable feedback that was robust and closely mirrored human-written responses. This capability provided students with detailed insights that enhanced their learning experience. AI’s rapid feedback generation facilitated faster response times, enabling a more efficient feedback cycle. Students’ assessments revealed that AI feedback was often as comprehensive as human feedback and indistinguishable in its “human-likeness.”

The pilot project yielded notable efficiency gains, as AI tools could produce thorough, consistent feedback at a speed beyond human capacity. This increased efficiency underscored the potential for AI to streamline educational practices and deliver prompt responses that support timely student learning. However, students noted some trade-offs in terms of personalization: while AI feedback was informative and structured, human-generated feedback was often perceived as more coherent and nuanced.

The use of AI in this project enhanced students’ understanding of its applications and limitations in professional and academic settings. By interacting with AI tools and critically analyzing the feedback received, students developed a more sophisticated view of AI’s role in assessment, becoming more aware of potential areas where AI might lack the interpretive depth that human feedback provides. This engagement fostered greater critical thinking and prompted discussions about the ethical implications of relying on AI in evaluative processes. 

The findings from this pilot project suggest that a hybrid approach might be ideal, where AI is used to enhance and support the teaching process while human instructors continue to provide the personalized, nuanced insights that are essential for deeper learning and understanding. 

Team

Rachel Pacheco

Management Area, McDonough School of Business

Nicholas Lovegrove

Management Area, McDonough School of Business
