Generative Artificial Intelligence (AI) is transforming the educational landscape. One of its most promising roles for educators is that of the AI teaching assistant (AI-TA). This presentation will explore how AI-TAs can enhance educational practices across three critical dimensions: (1) course and content development, (2) student engagement, and (3) promoting equity and inclusion. The session will draw on faculty examples and interactive discussions to demonstrate how AI-TAs can be developed and integrated effectively to benefit diverse learners, including those from marginalized communities. Ethical challenges and strategies for equitable AI use will also be discussed.
Background/Purpose: Depressive symptoms affect 280 million people worldwide, yet the quality of GenAI translations of depression screeners across languages is unclear. We developed a Translation Validity Index (TVI) to evaluate PHQ‑9 translations produced by ChatGPT, Copilot, and Google Translate in nine languages. Method: Two bilingual evaluators per language (N=18) rated each translation’s cultural appropriateness, grammar, and semantic clarity; TVI scores ≥3 indicated acceptable quality. Results: ChatGPT and Copilot generally met acceptability in high- and medium-resource languages (TVI=3.11–3.66), while Google Translate met acceptability for Ewe (TVI=3.73). Conclusions: TVI provides a structured approach for assessing forward and backward translation quality, but bilingual expert review remains essential when developing accurate mental health measures.
This session introduces an assignment in which students critically evaluate AI to reveal its limitations. To demonstrate why human judgment must remain authoritative over AI output, students provide ChatGPT with a specific case study and prompt, then perform a forensic analysis of the output. Participants will see how this shift from "using" AI to "evaluating" AI moves students from passive consumers to critical experts. By identifying hallucinations, inaccuracies, and lack of nuance in the AI's response, students must rely on their own disciplinary knowledge to correct the record. This approach ensures that human thinking remains the primary tool for validation, making the "human-in-the-loop" a visible and graded component of the learning process.
Generative AI tools are rapidly changing how students approach programming and problem-solving, creating new challenges and opportunities for computer science education. In response, our Computer Science and Engineering department has begun adapting selected courses, assignments, and assessment strategies to address the growing presence of AI-assisted learning. This presentation describes our efforts to integrate AI tools into relevant courses while redesigning programming assignments and assessments to emphasize critical thinking, problem decomposition, and human judgment. We also discuss recent curriculum updates, including the addition of a required machine learning course to better prepare students for an AI-enabled computing landscape. Through examples from multiple courses, we will share practical approaches for incorporating AI into computer science teaching while maintaining meaningful learning and academic integrity. The session will highlight lessons learned and strategies that may be applicable across disciplines as educators navigate the evolving role of generative AI in higher education.
As artificial intelligence tools become embedded in students’ everyday learning practices, faculty must redesign courses to keep human judgment, disciplinary expertise, and ethical reasoning at the center of learning. This session presents a graduate course in Human Resources and Organizational Development focused on Digital Transformation and Artificial Intelligence in Organizations, which challenges students to critically evaluate AI-enabled practices across the employee lifecycle while designing responsible, human-centered organizational solutions. Rather than attempting to detect or prohibit AI use, the course employs authentic, discipline-specific assessments that require contextual analysis, organizational diagnosis, and ethical decision-making, tasks that AI alone cannot complete meaningfully. Participants will explore examples of assignments, project structures, and discussion strategies that prompt students to interrogate AI outputs, evaluate risk and bias, and apply professional expertise. The session raises broader questions about AI’s role in professional education while demonstrating how established pedagogical principles can guide responsible AI integration in graduate learning environments.