Exploring AI: Implications for Education and Beyond


Date: Saturday, March 16th, 2024 

Time: 9:00 am to 4:30 pm

Location: Rooms 550 and 560

Join us for a one-day interdisciplinary workshop exploring the nature of AI technology and its implications for education and society. The event will be of particular interest to post-secondary educators, administrators, students, and others who wish to better understand recent advances in AI and what they might mean for education. It will cover a range of topics, including the nature and limits of LLM technology, placing current AI technology in historical context, evaluating AI’s potential role in the classroom, and grappling with the social and ethical implications of the technology. The goal of the event is to offer an informative and interactive experience, where participants can not only deepen their understanding of these rapidly developing technologies, but also engage with expert speaker panels in Q&A sessions. The event is also intended to serve as a platform for valuable networking, collaboration, and the fostering of supportive academic communities.

Thanks to the generous support of Columbia College, there’s no cost to attend. However, capacity is limited. Please RSVP in advance!

Event Agenda

9:00 am – 9:30 am: Check-in & Registration.

9:30 am – 12:30 pm: First Panel – Understanding and Framing The Technology

Panel Format:

Each speaker will deliver a 45-minute presentation, after which there will be a 40-45 minute session dedicated to audience questions directed at the panel.

Speakers:

  • Kent Schmor: Framing AI Debates (9:30-10:15 am)
  • Vered Shwartz: Debunking the Magic behind Large Language Models (10:15-11:00 am)
  • Sun-ha Hong: The History and Legacy of ‘Intelligence’ in Artificial Intelligence (11:00-11:45 am)
  • Panel 1 Q & A (11:45 am – 12:30 pm) 

12:30 pm – 1:30 pm: Lunch break.

1:30 pm – 4:30 pm: Second Panel – Perspectives on AI in Education and Society

Speakers:

  • Chelsea Rosenthal: AI and The Right To Explain (1:30-2:15 pm)
  • Elisa Baniassad (UBC, Computer Science): Concerns and Possibilities For Educational Applications of GenAI (2:15-3:00 pm)
  • Marc Champagne (Kwantlen, Philosophy): Steering the A.I. Bandwagon in a Better Direction (3:00-3:45 pm)
  • Panel 2 Q & A (3:45 pm – 4:30 pm)

Our Speakers

Kent Schmor is a Philosophy Instructor at both Columbia and Langara Colleges, and previously taught at UBC, SFU, and the University of Pittsburgh. He completed his MA at the University of Pennsylvania and his PhD at the University of Illinois Chicago. His research focuses on the history and philosophy of twentieth-century science, particularly the extent to which there’s a philosophically neutral perspective from which to understand and evaluate new developments in science.

Abstract: Recent debates about the future role of AI technology in education and society have frequently been marked by polarization. In this talk, I wish to challenge some of the terminology and analogies that have shaped our collective understanding of the technology, such as the name “artificial intelligence” and the comparison to calculators. I contend that many of the ways we’ve been framing these discussions obscure important philosophical questions or exacerbate existing polarization.


Vered Shwartz is an Assistant Professor of Computer Science at the University of British Columbia and a CIFAR AI Chair at the Vector Institute. Her research concerns natural language processing, with the fundamental goal of building computer programs that can interact with people in natural language. In her work, she teaches machines to apply the human-like commonsense reasoning required to resolve ambiguities and interpret underspecified language. Before joining UBC, Vered completed her PhD in Computer Science at Bar-Ilan University and was a postdoctoral researcher at the Allen Institute for AI and the University of Washington.

Abstract: Large language models (LLMs) like ChatGPT have become popular, reaching millions of users across the globe. The diverse range of capabilities of LLMs, from conversing and crafting fluent essays to coding and composing poetry, holds great promise for a variety of fields, including education. In this talk, I will debunk the magic behind LLMs and explain how they work. We will discuss their applications, current limitations, and the potential risks of using LLMs in general and in educational settings in particular.

Sun-ha Hong is Assistant Professor in Communication at Simon Fraser University, Canada, and was previously Mellon Postdoctoral Fellow in the Humanities at MIT. He is the author of Technologies of Speculation: The Limits of Knowledge in a Data-Driven Society (2020), and is working on his next book, Predictions Without Futures.

Abstract: To understand what is being called artificial intelligence, we must understand what we mean by intelligence. Yet the field of AI has never, in its entire history, produced a clear, agreed-upon definition of intelligence. This ambiguity was not an abstract intellectual oversight, but a political and marketing strategy. Today, chatbots speak in the first person, and spooked governments speak of ‘AI textbooks’. What does this ambiguity mean for education, a place where we have long pursued human growth and learning, and have long encountered reductive measures of that growth? I will discuss how we got here, and clarify some of the claims around contemporary AI tools.

Chelsea Rosenthal is an Assistant Professor in the Department of Philosophy at Simon Fraser University. Before joining the faculty at Simon Fraser, she was an Assistant Professor/Faculty Fellow in the Center for Bioethics at New York University. She received her Ph.D. from NYU’s Philosophy Department and a J.D. from the Law School there. Her research focuses on ethics, philosophy of law, and political philosophy, with current projects on moral uncertainty, privacy and AI ethics, and the ethical responsibilities of lawyers.

Abstract: Increasingly important decisions are being made by algorithms—decisions about who receives a loan, who is granted parole, or who is allowed to speak on the social media platforms where much of our public discourse takes place. Recent work has grappled with ethical questions about these procedures, including worries stemming from their potential biases and concerns about whether they provide sufficiently transparent explanations to those who are impacted. This talk describes an additional problem facing algorithmic decision-making: it can sometimes violate our right to give an explanation to decision-makers—our “right to explain” ourselves when decisions are being made about us. Collecting a great deal of data from someone is different from giving them a chance to provide an explanatory narrative showing how they think that information fits together, or to make a case that other relevant, explanatory information wasn’t asked for. This is a striking difference between decisions made by algorithms and decisions involving more personal, direct, human engagement—human decisions may not always be more accurate, or even less opaque (given the difficulty of knowing what other people are thinking), but at least under good conditions, engaging with other people provides us with an opportunity to explain ourselves, in a way that decision-making by algorithm does not. Ultimately, thinking about algorithmic decision-making and thinking about ordinary human decision-making can be mutually illuminating. Reflecting on these problems with the use of machine learning algorithms can also provide some guidance for the challenges of impersonal decision-making in large bureaucracies, which sometimes also violate the right to explain.

Elisa Baniassad is the Acting Academic Director of UBC’s Centre for Teaching, Learning, and Technology, where she oversees faculty professional development and technical support for generative AI in instructional settings. She sits on the Generative AI Steering Committee, and on the LLM and Teaching and Learning working groups. She also holds a faculty position as a Professor of Teaching in Computer Science.

Dr. Marc Champagne is a Regular Faculty member in the Department of Philosophy at Kwantlen Polytechnic University in Canada, where he teaches philosophy of technology and logic, along with occasional ethics courses for the Policy Studies and School of Business programs. Before coming to KPU, he taught at York University and Trent University. He has a PhD in Philosophy from York University and a PhD in Semiotics from the University of Quebec in Montreal (UQAM), and completed his postdoc at the University of Helsinki.

Abstract: Many companies and universities want to adopt A.I., but why? The reason often seems to be: because everybody else is doing it. Of course, when one reasons this way, one becomes part of the very trend deemed inevitable. Since a belief in inevitable trends can prompt decisions that leave us worse off, I will make suggestions (informed by game theory) that can help us respond to A.I. in a more level-headed manner.