
Hao-Jan Howard Chen
NTNU
About
Howard Hao-Jan Chen is a Distinguished Professor in the English Department at National Taiwan Normal University, Taipei, Taiwan. Professor Chen has published papers in the CALL Journal, the ReCALL Journal, and related language learning journals. His research interests include computer-assisted language learning, corpus research, and second language acquisition.
Sessions
Presentation: A Preliminary Study on AI-assisted Automated Essay Scoring
Automated Essay Scoring (AES) systems present an efficient solution for assessing writing proficiency in high-volume educational settings, though concerns about their accuracy and fairness persist. This study investigates the use of ChatGPT 4.0, a powerful large language model (LLM), as an AES tool for English as a Second Language (ESL) essays. Utilizing the English Language Learner Insight, Proficiency, and Skills Evaluation (ELLIPSE) corpus, which includes 6,482 essays across 44 prompts, we analyzed a subset of 1,154 essays to ensure robust statistical analysis. We developed a custom Python application to interface with the ChatGPT API, varying the temperature parameter (0.5 and 0.7) to assess its impact on scoring consistency and accuracy. Each essay was scored twice by ChatGPT, and these scores were compared to human ratings using Spearman's rank correlation and the Wilcoxon signed-rank test. Results showed a positive correlation between ChatGPT scores and human ratings, suggesting the model captures some aspects of essay quality; however, a consistent underestimation bias was noted. Correlation coefficients ranged from 0.509 to 0.656, highlighting limitations in the model's ability to reflect human judgment. Further research is needed to mitigate this bias and enhance the accuracy of LLM-based AES systems.
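The analysis described above centers on Spearman's rank correlation between ChatGPT scores and human ratings. A minimal pure-Python sketch of that comparison (the essay scores below are invented for illustration; the study's actual data come from the ELLIPSE corpus):

```python
# Hypothetical human ratings and ChatGPT scores for a handful of essays.
human = [3.5, 4.0, 2.5, 4.5, 3.0, 5.0]
gpt = [3.0, 3.5, 2.0, 4.0, 3.0, 4.5]

def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

print(round(spearman(human, gpt), 3))
```

The same paired score lists would also feed the Wilcoxon signed-rank test used in the study to detect the systematic underestimation bias.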

Presentation: Assessing ESL Speaking Skills with the Support of AI Chatbots and Azure Speech Services
Language educators increasingly focus on developing communicative competence in language learners, but assessing speaking skills in large student groups remains a challenge. This study explores how AI and automatic speech recognition can enhance speaking skills assessment through chatbots built on Azure Speech Services and Microsoft OpenAI Services. The AI chatbot engages learners in conversations on various topics, evaluating pronunciation accuracy, fluency, and completeness using advanced algorithms. It also provides immediate feedback and scores on both pronunciation and content, helping students identify their strengths and areas for improvement. We tested this service with a group of 20 EFL students from a university in Taiwan and gathered their feedback through post-session surveys and interviews. We collected data after a 30-minute chatbot session, focusing on learners' interaction experiences with the chatbot across a range of tasks. Students responded positively to the AI chatbot, particularly noting its accuracy in assessing pronunciation. However, they observed that its evaluation of content was less precise. While the system excels in pronunciation assessment, its ability to evaluate learner-produced content requires further refinement. This innovative approach highlights the potential of AI-powered tools in language education, offering a promising solution for efficient and effective speaking skills assessment in ESL/EFL contexts.
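The per-dimension feedback described above (pronunciation accuracy, fluency, completeness) corresponds to the JSON shape that Azure Speech pronunciation assessment returns. A minimal sketch of extracting overall scores and flagging weak words from such a payload; the field names follow the Azure documentation, while the sample values and the 80-point threshold are hypothetical:

```python
import json

# Hypothetical payload in the documented shape of an Azure Speech
# pronunciation assessment result (score values invented for illustration).
sample = json.dumps({
    "NBest": [{
        "PronunciationAssessment": {
            "AccuracyScore": 86.0,
            "FluencyScore": 91.0,
            "CompletenessScore": 100.0,
            "PronScore": 88.4,
        },
        "Words": [
            {"Word": "language",
             "PronunciationAssessment": {"AccuracyScore": 78.0}},
            {"Word": "learning",
             "PronunciationAssessment": {"AccuracyScore": 94.0}},
        ],
    }]
})

def summarize(result_json, threshold=80.0):
    """Return the overall scores and the words below an accuracy threshold."""
    best = json.loads(result_json)["NBest"][0]
    overall = best["PronunciationAssessment"]
    weak = [w["Word"] for w in best.get("Words", [])
            if w["PronunciationAssessment"]["AccuracyScore"] < threshold]
    return overall, weak

overall, weak = summarize(sample)
print(overall["PronScore"], weak)  # 88.4 ['language']
```

In the chatbot setting, a summary like this is what lets the system give learners immediate, word-level pointers alongside the overall pronunciation score.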
