#4426

Presentation

Assessing ESL Speaking Skills with the Support of AI Chatbots and Azure Speech Services

Time not set

Language educators increasingly focus on developing communicative competence in language learners, but assessing speaking skills in large student groups remains a challenge. This study explores how AI and automatic speech recognition can enhance speaking skills assessment through chatbots built on Azure Speech Services and the Azure OpenAI Service.

The AI chatbot engages learners in conversations on various topics, evaluating pronunciation accuracy, fluency, and completeness. It also provides immediate feedback and scores on both pronunciation and content, helping students identify their strengths and areas for improvement.
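The accuracy, fluency, and completeness scores mentioned above match the metrics returned by the pronunciation assessment feature of the Azure Speech SDK, which the abstract suggests underpins the chatbot. The following is a minimal Python sketch of how such scores can be retrieved for a recorded learner response; the subscription key, region, and audio file name are placeholders, and the authors' full chatbot pipeline, including the content-scoring step, is not shown.

```python
# Minimal sketch: scoring a learner's recorded answer with Azure Speech
# pronunciation assessment. Key, region, and file name are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY",
                                       region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="learner_response.wav")

# For open-ended speaking tasks the reference text is left empty, so the
# service scores whatever the learner actually said.
pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config,
                                        language="en-US")
pron_config.apply_to(recognizer)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    scores = speechsdk.PronunciationAssessmentResult(result)
    print("Transcript:   ", result.text)
    print("Accuracy:     ", scores.accuracy_score)
    print("Fluency:      ", scores.fluency_score)
    print("Completeness: ", scores.completeness_score)
    print("Overall:      ", scores.pronunciation_score)
```

The recognized transcript returned alongside these scores is what a system like the one described could pass on to a language model for the separate content evaluation.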

We tested the service with a group of 20 EFL students at a university in Taiwan. Students completed a 30-minute chatbot session covering a range of tasks, after which we gathered feedback on their interaction experiences through post-session surveys and interviews.

Students responded positively to the AI chatbot, particularly noting its accuracy in assessing pronunciation. However, they observed that its evaluation of content was less precise. While the system excels in pronunciation assessment, its ability to evaluate learner-produced content requires further refinement.

This innovative approach highlights the potential of AI-powered tools in language education, offering a promising solution for efficient and effective speaking skills assessment in ESL/EFL contexts.

  • Hao-Jan Howard Chen

Howard Hao-Jan Chen is a distinguished professor in the English Department at National Taiwan Normal University, Taipei, Taiwan. Professor Chen has published papers in the CALL Journal, the ReCALL Journal, and several other language learning journals. His research interests include computer-assisted language learning, corpus research, and second language acquisition.