Hiroyuki Obari

About

Dr. Hiroyuki Obari is Professor Emeritus at Aoyama Gakuin University. He now teaches part-time in the Faculty of Law at Waseda University and in the graduate school of the Tokyo Institute of Technology. He is a visiting researcher at the National Institute of Advanced Industrial Science and Technology (AIST). Born in 1953, he received a B.A. from the University of Oklahoma (Political Science), an M.A. from ICU (International Relations), a second M.A. from Columbia University, and a Ph.D. from the University of Tsukuba (Computer Science). He served as a visiting researcher at the University of Oxford (1998, 2007, 2018-2020). He specializes in CALL, TESOL, Worldview Studies, and EdTech. His recent publications include Obari, H., Lambacher, S., & Kikuchi, H. (2022). Exploring the impact of AI on EFL teaching in Japan. In J. Colpaert & G. Stockwell (Eds.), Smart CALL: Personalization, contextualization, & socialization (pp. 84-101). London: Castledown Publishers.

Sessions

Presentation: AI vs Human Assessment: A Hybrid Approach to English Language Evaluation

This presentation explores the integration of AI-powered and human-based assessment in English language education, emphasizing their respective strengths and limitations. AI-driven tools, including Progos for speaking assessment and Scribo for writing feedback, provide immediate, objective, and data-driven evaluations, enhancing student learning outcomes. Both the Progos Speaking Test and CASEC, a computer-adaptive English proficiency test, are utilized to assess vocabulary, grammar, and reading proficiency, demonstrating AI’s role in tracking linguistic progress. The mean CASEC score improved from 507 to 648 after 30 weeks of instruction. Additionally, PeerEval, a human-based evaluation system, is used for presentation assessments, underscoring the importance of qualitative, context-sensitive feedback that AI alone cannot fully provide. This study highlights the key differences between AI-driven and human assessment, arguing that while AI excels in efficiency, consistency, and scalability, human evaluation remains essential for assessing creativity, cultural nuances, and personalized feedback. The findings support a hybrid evaluation model, where AI enhances reliability and efficiency while human raters contribute pedagogical insight and qualitative depth. By examining AI’s role alongside human assessment, this presentation contributes to discussions on optimal evaluation frameworks in English education, advocating for a balanced, integrated approach that leverages the strengths of both AI and human expertise.
