2025-2026 Webinar Series
Webinar Descriptions
Webinar Leader: Justin Cary
Date of Webinar: September 26, 2025 from 12:00-1:00 pm EDT
This webinar has the following learning outcomes:
1. Co-Constructive AI Writing Partnerships
Participants will be able to apply AI as a metacognitive partner to co-construct online course content. This involves using the tool to generate and reflect on multiple perspectives, leading to an iterative process of inquiry and revision that enhances their understanding of digital and online content.
2. Metacognitive Course Design for Online Education
Participants will be able to analyze how an AI tool can aid in metacognitive online course design. They will use the tool to generate diverse pedagogical approaches and then evaluate the effectiveness of these online strategies for fostering student engagement and collaboration in a digital learning environment.
3. Fostering Critical Reflection with AI
Participants will be able to evaluate the outputs of an AI tool to foster students' critical reflection and online literacy. This includes identifying potential biases or misinformation within digital content and creating activities that require students to document and reflect on their interactions with the AI.
4. Developing a Process-Oriented Framework for Online Writing with AI
Participants will be able to design a personal framework for integrating AI into the online writing process.
Justin Cary is a Senior Lecturer in the Writing, Rhetoric and Digital Studies Department (WRDS) and has been teaching at Charlotte for over ten years. In October 2024, Justin served on the AI in Teaching and Learning Task Force as the College of Humanities & Earth and Social Sciences (CHESS) representative, working with colleagues across Charlotte to collect perspectives from CHESS administrators, faculty, and students on AI in teaching and learning. In Spring 2025, Justin served as a Center for Teaching and Learning (CTL) AI Faculty Fellow, building a database of AI Use Case Stories and collaborating on the CTL 3rd Annual AI Summit for Smarter Learning.
Currently, Justin is exploring new pathways for incorporating responsible, critical, and ethical AI literacy frameworks into First-Year Writing and Writing Studies. He is developing co-constructive, metacognitive applications that harness the potential of AI in writing to build on the foundational skills and habits of critical thinking, communication, reflection, process, and rhetoric cultivated across connected disciplines in the humanities and beyond.
Webinar Leaders: Stacy Wittstock, N. Claire Jackson, & Jennifer Burke Reifman
Date of Webinar: November 10, 2025 from 12:00-1:00 pm EST
Since November 2022, the landscape around Generative AI, particularly in educational contexts, has evolved in a number of directions. Many of our institutions are now investing heavily in GenAI technologies and mandating AI literacy curricula; at the same time, intense debates about the ethics and consequences of these products are challenging the morality of their use and raising serious questions about their potential impact on student learning. Given this context, Writing Program and Writing Center Administrators may question how to respond ethically, efficiently, and responsibly to institutional mandates related to these technologies.
In this workshop, three new and untenured WPAs/WCDs will discuss how they have taken up these efforts. The presenters will discuss emerging understandings of what “AI Literacy” is, including McIntyre, Fernandes, and Sano-Franchini’s (2025) “critical digital cultural literacies”; Söken and Nygreen’s (2024) situating of AI literacy in a broader framework of critical media literacy; and Thornley and Rosenberg’s (2024) bridging of AI literacy and information literacy, among others. Presenters will also consider how to balance the documented “learning loss” from the use of GenAI tools (Gerlich, 2025; Kosmyna et al., 2025) with the imperative to teach AI literacy, and explore pathways for instructor and student agency that do not assume, as we’ve been told, that resistance is futile.
After engaging with emerging work in this area, we will describe our own institutional contexts, where we have been mandated to integrate GenAI and must navigate how to do so. We will describe our efforts to meet these mandates by focusing on how GenAI integration may or may not align with course learning outcomes and course modalities at our institutions, considering issues of professional development and the overall fit of GenAI products with already established curricula. We will then invite participants to discuss institutional mandates that may be impacting their own programs and consider how the information presented in this webinar might help them think through potential approaches. More specifically, we will provide a heuristic aimed at encouraging participants to examine their own programmatic outcomes and consider the extent to which AI literacy does or does not align with existing outcomes and/or course modalities, discuss what it would mean to integrate AI literacy into these outcomes and courses, and develop materials for teaching AI literacy tied to their own outcomes. We will also provide space for participants to strategize ways to respond critically to institutional hype while ensuring their continued place in the conversation.
This webinar explores how data students generate is used to train GAI without informed consent, allowing Big Technology (BigTech) and Educational Technology (EdTech) companies to profit from student data and labor. Framing AI through a lens of surveillance and privacy shows students that the content they submit to AI generators, upload, and share is monitored and often not attributed as their own intellectual property. Intellectual property is a global topic that educators and students need to address within and beyond the classroom. Such conversations in the classroom can create space for discussions outside of it, shaping how public discourse surrounding GAI can and should include intellectual property concerns.
Whether or not they decide to implement GAI in the classroom, instructors should still have conversations with students about topics such as intellectual property and surveillance.
Please contact presenters for more information.
Webinar Leaders: Meghan Velez, Kara Taczak, & Alicia Lienhart
Date of Webinar: January 23, 2026 from 12:00-1:00 pm EST
Please contact presenters for more information.
Webinar Leader: Sydney Sullivan
Date of Webinar: April 3, 2026 from 12:00-1:00 pm EDT
This webinar explores a student-inclusive approach to AI policy design in online writing instruction. Rather than imposing static, instructor-written policies about generative AI, participants will learn how to co-author classroom guidelines with students—promoting critical literacy, rhetorical awareness, and shared responsibility in digital spaces.
We will explore a replicable, four-phase model for implementing participatory AI policy development in online settings. This includes activities such as anonymous polling, asynchronous forums, Padlet-based brainstorming, and collaborative syllabus clause drafting. Drawing from threshold concepts in Writing Studies—particularly authorship, the rhetorical situation, and writing as knowledge-making (Adler-Kassner & Wardle, 2015)—the session shows how policy writing can itself become a writing assignment that strengthens rhetorical thinking and digital agency.
The approach is further grounded in Asao Inoue’s labor-based assessment theory, which reimagines classroom authority through dialogic, values-based co-creation (Inoue, 2019). Additionally, the session draws on Sambell and McDowell’s (1998) insights about the “hidden curriculum” embedded in policy and assessment design—offering strategies to make implicit expectations visible and student-centered. Sambell, McDowell, and Montgomery (2013) extend this work into the online learning context, where assessment-for-learning practices can foster trust and transparency.
In light of recent institutional moves toward adopting enterprise AI tools like ChatGPT Edu (e.g., San Diego State University IT Division, 2025), which are often introduced without inclusive dialogue, the session addresses the need for critical AI literacy frameworks that resist top-down implementation. Rather than framing policy as punishment or risk mitigation, the webinar encourages attendees to treat it as a collaborative artifact that evolves in response to student voices, technological shifts, and institutional tensions.
Please contact the presenter for more information.