Background/Purpose: Depression affects an estimated 280 million people worldwide, yet the quality of GenAI translations of depression screeners across languages is unclear. We developed a Translation Validity Index (TVI) to evaluate PHQ‑9 translations produced by ChatGPT, Copilot, and Google Translate in nine languages. Methods: Two bilingual evaluators per language (N=18) rated each translation’s cultural appropriateness, grammar, and semantic clarity; TVI scores ≥3 indicated acceptable quality. Results: ChatGPT and Copilot generally met the acceptability threshold in high- and medium-resource languages (TVI=3.11–3.66), while Google Translate met it for Ewe (TVI=3.73). Conclusions: TVI offers a structured approach to assessing forward and backward translation quality, but bilingual expert review remains essential when developing accurate mental health measures.
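The abstract does not state how the three dimension ratings are combined into a TVI score; as a minimal sketch, assuming the score is a simple mean of all evaluator ratings (the rating scale, the averaging rule, and all names below are assumptions for illustration only), the acceptability check might look like:

```python
from statistics import mean

# Assumed threshold from the abstract: TVI >= 3 counts as acceptable.
ACCEPTABILITY_THRESHOLD = 3.0

def tvi_score(ratings):
    """Average all dimension ratings (cultural appropriateness, grammar,
    semantic clarity) across the two bilingual evaluators for one translation.
    The simple-mean aggregation is an assumption, not the published formula."""
    return mean(r for evaluator in ratings for r in evaluator.values())

def is_acceptable(score):
    return score >= ACCEPTABILITY_THRESHOLD

# Hypothetical example: two evaluators, three dimensions each.
ratings = [
    {"cultural": 4, "grammar": 3, "semantic": 4},
    {"cultural": 3, "grammar": 4, "semantic": 3},
]
score = tvi_score(ratings)
print(f"TVI = {score:.2f}, acceptable: {is_acceptable(score)}")
```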
This session introduces an assignment in which students critically evaluate AI output to expose its limitations. To demonstrate why human intelligence must remain in charge of AI, students give ChatGPT a specific case study and prompt, then perform a forensic analysis of the output. Participants will see how this shift from "using" AI to "evaluating" AI moves students from passive consumers to critical experts. By identifying hallucinations, inaccuracies, and missing nuance in the AI’s response, students must rely on their own disciplinary knowledge to correct the record. This approach keeps human thinking the primary tool for validation, making the "human-in-the-loop" a visible and graded component of the learning process.
As artificial intelligence tools become embedded in students’ everyday learning practices, faculty must redesign courses to keep human judgment, disciplinary expertise, and ethical reasoning at the center of learning. This session presents a graduate course in Human Resources and Organizational Development focused on Digital Transformation and Artificial Intelligence in Organizations, which challenges students to critically evaluate AI-enabled practices across the employee lifecycle while designing responsible, human-centered organizational solutions. Rather than attempting to detect or prohibit AI use, the course employs authentic, discipline-specific assessments that require contextual analysis, organizational diagnosis, and ethical decision-making. These are tasks that cannot be completed meaningfully by AI alone. Participants will explore examples of assignments, project structures, and discussion strategies that prompt students to interrogate AI outputs, evaluate risk and bias, and apply professional expertise. The session raises broader questions about AI’s role in professional education while demonstrating how established pedagogical principles can guide responsible AI integration in graduate learning environments.