Into the Woods with Generative AI

Objectives

The “Into the Woods with Generative AI” project emerged as a comprehensive research initiative designed to critically explore the multifaceted landscape of generative AI in higher education. The project was driven by a desire to understand the deeper implications of AI technologies beyond their surface-level capabilities. By conducting an interdisciplinary workshop with participants from diverse academic backgrounds, Assistant Professor Akshaya Narayanan sought to unpack the complex value systems embedded within AI tools, examining not just their technological potential but also their broader social and ethical ramifications.

The research was guided by a set of nuanced questions that went beyond simple technological assessment. These included investigating which stakeholders stand to benefit from or be harmed by AI usage, understanding the decision-making processes behind AI tool development, and exploring how these technologies might serve as reflective mechanisms for uncovering institutional and personal biases. By creating a collaborative space for dialogue, the project aimed to move beyond simplistic narratives of technological determinism and instead foster a more holistic, critically engaged approach to understanding generative AI’s role in educational environments.

Outcomes

The research thoroughly explored generative AI’s role in education, highlighting both its potential and challenges. On the positive side, AI demonstrated enhanced accessibility for students with diverse learning needs, personalized tutoring that could democratize educational opportunities, and improved efficiency in both administrative and creative tasks. Participants particularly noted AI’s ability to provide immediate feedback, assist in brainstorming, and help students navigate communication barriers within traditional educational frameworks.

However, concerns emerged around AI’s propensity for producing inaccurate or biased information, raising critical questions about accountability and the integrity of information. The study also examined systemic issues, such as the often-overlooked labor of low-wage workers in AI training, significant privacy risks associated with data collection, and the environmental toll of AI model development. Additionally, participants voiced fears that AI could inadvertently perpetuate societal biases and deepen inequalities, particularly for marginalized communities.

A key contribution of the study was its framing of generative AI not as a neutral tool but as a complex socio-technical system deeply connected to human social structures. By emphasizing the values and intentions embedded in AI design, the research advocated for a thoughtful and critically engaged approach to technological adoption.

Team

Akshaya Narayanan

Assistant Professor of Practice, Ethics Lab, KIE
