Sessions / Location Name: Room E410


Examining the Novelty Effect of VR in CLIL-Based Intermediate Japanese Courses #4340

Sat, Jul 19, 09:00-09:25 Asia/Tokyo | LOCATION: Room E410

Content and Language Integrated Learning (CLIL) emphasizes learning subject content through the target language. A key aspect of CLIL is engaging in meaningful activities with authentic materials that have real-world relevance. To ensure meaningful learning, appropriate situational and contextualized settings are essential. Immersive technologies such as Virtual Reality (VR) can be valuable tools for supporting learners’ cognitive processing, and VR has been increasingly integrated into language education. However, research on VR in Japanese language education remains scarce. Previous studies suggest that learners may lose interest in VR over time as the ‘novelty’ of the technology wears off. This presentation reports on a year-long implementation of CLIL, supplemented with VR as an instructional tool to enhance contextual learning, in intermediate-level Japanese communication and presentation courses. The primary objective of these courses was to develop students’ understanding of the Sustainable Development Goals (SDGs). At the end of each semester, students reflected on how the CLIL-based activities incorporating VR influenced their learning and shared their impressions of these activities. The textual data were analyzed using text mining to investigate whether VR’s ‘novelty’ affected learners’ perceptions and whether learners developed more negative opinions over time.
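The abstract mentions text mining of learner reflections across semesters. As a generic illustration of that kind of analysis (not the authors' actual pipeline; the sample reflections and stopword list below are invented), one might compare frequent terms in each semester's reflections:

```python
# Minimal sketch of term-frequency comparison across two semesters of
# learner reflections. Sample texts and stopwords are illustrative only.
from collections import Counter
import re

semester1 = ["The VR activities were exciting and new.",
             "Using VR made the SDG topics feel real."]
semester2 = ["VR was useful but no longer felt new.",
             "The VR activities supported my presentations."]

STOPWORDS = {"the", "and", "was", "were", "my", "but", "no",
             "felt", "feel", "made", "using"}

def term_frequencies(texts: list[str]) -> Counter:
    """Tokenize, lowercase, drop stopwords, and count remaining terms."""
    tokens = re.findall(r"[a-z]+", " ".join(texts).lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

freq1, freq2 = term_frequencies(semester1), term_frequencies(semester2)
print("Semester 1 top terms:", freq1.most_common(3))
print("Semester 2 top terms:", freq2.most_common(3))
```

A real analysis of Japanese-language reflections would require a morphological tokenizer rather than this whitespace-based sketch.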

Digital Literacy and AI Chatbot Performance in Virtual Reality Public Speaking Training #4327

Sat, Jul 19, 09:35-10:00 Asia/Tokyo | LOCATION: Room E410

Despite the increased adoption of artificial intelligence (AI) chatbots in second language acquisition contexts, empirical examinations of learner perceptions of virtual AI-driven audiences remain sparse. This mixed-methods study analyzes undergraduate students' experiences interacting with AI chatbot-driven virtual audiences during public speaking tasks in a virtual reality (VR) environment at a Japanese public university. Data collection comprised open-ended surveys comparing students' AI chatbot interactions with traditional peer audiences, triangulated with self-reported measures of digital literacy. Preliminary findings reveal pronounced variation correlated with students' digital literacy competencies: some students reported reduced public speaking anxiety, enhanced self-efficacy, and favorable appraisals of chatbot-mediated VR interactions, while others reported increased anxiety and diminished comfort, largely attributing these reactions to the AI chatbots' perceived lack of emotional responsiveness. The study underscores the need for differentiated instructional interventions to mitigate digital literacy disparities and optimize AI integration in immersive VR language learning environments. Recommendations include targeted digital competency training and iterative pedagogical scaffolding. Further research should systematically explore intervention strategies for bolstering learner digital literacies, thereby facilitating meaningful engagement with emerging AI technologies in language instruction contexts.

Merits and Implications of Adopting Virtual Reality into Self-access Language Learning #4326

Sat, Jul 19, 10:10-10:35 Asia/Tokyo | LOCATION: Room E410

Virtual reality (VR) is gaining attention in foreign language learning research for its ability to create immersive environments that enhance engagement and reduce anxiety. While most studies focus on classroom settings, VR’s potential for self-access language learning (SALL) remains underexplored. Self-access language centers (SALCs) provide essential independent study opportunities, but they also face challenges in providing convenient access and fostering social interaction. Recent research has examined online synchronous consultations, yet VR's role as an alternative has received little attention.

This study examines VR-based SALL communication sessions with Japanese EFL university students. A mixed-methods approach measured learners’ preconceived sentiments about VR for SALL and their experiences after a treatment, focusing on foreign language anxiety, perceived learning opportunities, and learner preference. Results indicate that using VR in SALL sessions reduced communication anxiety, provided ample learning opportunities, and became the preferred method for most participants. These findings suggest that VR can enhance learners' willingness to engage in conversation and create a supportive environment for language practice. However, challenges such as technological limitations and the need for structured facilitation emerged. The presentation will discuss these findings, explore practical considerations for implementing VR in SALCs, and propose directions for future research.

GenAI Literacy - Why It's Relevant More Than Ever #4261

Sat, Jul 19, 11:35-12:00 Asia/Tokyo | LOCATION: Room E410

Generative AI in teaching and learning is here to stay; current trends point only towards further integration and use by both students and teachers. GenAI literacy refers to the understanding and ability to choose and use GenAI tools in an appropriate, ethical, and responsible way. This presentation draws on the experience of creating a GenAI literacy module at a Hong Kong university in response to a university-wide policy allowing students to use GenAI tools such as ChatGPT and Microsoft Copilot (formerly Bing Chat) almost without restriction. The literacy module is a mandatory online self-directed module embedded in all language courses, meaning that over 1,000 students complete it annually. The module, now in its second iteration, has evolved from teaching students how to use the tools to focusing more on exploring ideas within GenAI literacy.

The presentation will argue that for students to gain the most from GenAI tools, they need help becoming GenAI literate, especially when it comes to assignments and understanding what constitutes appropriate, ethical use of GenAI tools in their own work. A shift in mindset is needed to understand that using GenAI is not equivalent to plagiarism.

From Page to Screen: Adapting Literature to Enhance EFL Learners’ Literary Competence in the Digital Age #4227

Sat, Jul 19, 12:10-12:35 Asia/Tokyo | LOCATION: Room E410

For EFL learners, traditional reading and writing approaches often fail to effectively cultivate literary skills. This study examined the use of digital literary adaptations in developing literary competence—the ability to interpret messages conveyed in literary texts. Literary adaptation involves reinterpreting and reshaping works, such as novels or short stories, into different mediums, including digital media. Participants were undergraduate EFL students in Taiwan, who collaboratively created digital video adaptations as a creative response to science fiction and fantasy (SFF) literature. These projects required them to address social issues or provide social critiques. Using a mixed-methods approach, data from surveys, adaptation videos, and interviews were analyzed through an adapted literary competence framework. Findings revealed that participants developed literary competence to varying degrees, with SFF adaptations fostering creative and critical engagement. Educational benefits included enhanced literary knowledge, enriched text interactions, stimulated creativity, and nuanced perspectives on social and cultural dialogues. This study contributed to EFL pedagogy by demonstrating the effectiveness of computer-assisted, adaptation-oriented learning in cultivating creativity and critical engagement with literature. It also expanded the theoretical framework of literary competence to encompass digital adaptations, showcasing their potential as a valuable and innovative tool for literature education in EFL settings.

Paddling the Rapids: Developing Student Scholars with Information Literacy and GenAI #4279

Sat, Jul 19, 15:10-15:35 Asia/Tokyo | LOCATION: Room E410

In higher education, we face an age-old problem: how to develop student scholars so they can seek, evaluate, and use good information thoughtfully, efficiently, and ethically. At the same time, GenAI tools present new challenges and opportunities in language teaching and learning. By integrating information literacy concepts with thoughtful deployment of GenAI tools, CALL can help teachers and students succeed. In this session, a university librarian and an English lecturer will share their experiences and collaborations in newly developed first-year university-level English language courses focusing on academic literacy. They will also share ideas for incorporating GenAI tools into language teaching approaches and CALL resources, based on examples from English courses and an environmental management course. These include potential applications of resources such as the Association of College and Research Libraries’ “Framework for Information Literacy for Higher Education”.

DynaWrite: Computerized Dynamic Assessment for L2 Writing #4253

Sat, Jul 19, 15:45-16:10 Asia/Tokyo | LOCATION: Room E410

This presentation introduces DynaWrite, an online web application we designed to leverage large language models to provide language learners with real-time feedback on their writing, and reports the results of its use. The tool delivers feedback that begins implicitly and gradually increases in explicitness if the learner is unable to identify and correct errors, thereby drawing on the benefits of dynamic assessment. While those benefits are well documented empirically, implementation in real-life classrooms has been limited by a lack of scalability. Computerized dynamic assessment (CDA) offers a scalable solution, yet previous CDA systems have focused on the receptive skills of reading and listening. To the best of our knowledge, DynaWrite is the first CDA system to address grammatical errors in extended writing, filling an important gap. In this presentation, we first briefly explain how the tool is used and the educational theories underpinning the approach. We then present examples of the tool tracking the development of adult Japanese learners of English as they use it over the course of a program of English language study.
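The graduated feedback described above, moving from implicit to explicit prompts as a learner fails to self-correct, can be sketched as a simple escalation loop. The tier wording below is a hypothetical example for one error type, not DynaWrite's actual prompts or implementation:

```python
# Illustrative sketch of graduated (dynamic-assessment) feedback tiers for a
# single subject-verb agreement error. Tier wording is invented for this example.

FEEDBACK_TIERS = [
    "There is an error in this sentence. Can you find it?",   # most implicit
    "Look at the verb form in this sentence.",
    "The verb does not agree with its subject.",
    "Change 'have' to 'has' to match the singular subject.",  # most explicit
]

def next_hint(attempt: int) -> str:
    """Return progressively more explicit feedback as failed attempts accumulate."""
    tier = min(attempt, len(FEEDBACK_TIERS) - 1)
    return FEEDBACK_TIERS[tier]

# A learner who fails repeatedly moves step by step toward the explicit correction.
for attempt in range(4):
    print(f"Attempt {attempt + 1}: {next_hint(attempt)}")
```

In a dynamic-assessment framing, the tier at which the learner succeeds indexes how much mediation they needed, which is what makes the approach diagnostic as well as corrective.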

AI vs Human Assessment: A Hybrid Approach to English Language Evaluation #4202

Sat, Jul 19, 16:20-16:45 Asia/Tokyo | LOCATION: Room E410

This presentation explores the integration of AI-powered and human-based assessment in English language education, emphasizing their respective strengths and limitations. AI-driven tools, including Progos for speaking assessment and Scribo for writing feedback, provide immediate, objective, and data-driven evaluations, enhancing student learning outcomes. Both the Progos Speaking Test and CASEC, a computer-adaptive English proficiency test, are utilized to assess vocabulary, grammar, and reading proficiency, demonstrating AI’s role in tracking linguistic progress. The mean CASEC score improved from 507 to 648 after 30 weeks of instruction. Additionally, PeerEval, a human-based evaluation system, is used for presentation assessments, underscoring the importance of qualitative, context-sensitive feedback that AI alone cannot fully provide. This study highlights the key differences between AI-driven and human assessment, arguing that while AI excels in efficiency, consistency, and scalability, human evaluation remains essential for assessing creativity, cultural nuances, and personalized feedback. The findings support a hybrid evaluation model, where AI enhances reliability and efficiency while human raters contribute pedagogical insight and qualitative depth. By examining AI’s role alongside human assessment, this presentation contributes to discussions on optimal evaluation frameworks in English education, advocating for a balanced, integrated approach that leverages the strengths of both AI and human expertise.

Empowering Multilingual Learners with AI: Enhancing A1-Level Engagement through ChatGPT in Bilingual Health Education #4330

Sat, Jul 19, 17:00-18:00 Asia/Tokyo | LOCATION: Room E410

This workshop explores the transformative potential of AI tools like ChatGPT in engaging A1-level English learners through bilingual Health Education. Aligned with JALTCALL’s mission to advance technology in language learning, the session will demonstrate how AI can support differentiated instruction, enhance student engagement, and promote culturally responsive teaching.

Attendees will learn to design AI-powered lesson plans and assessments tailored to A1-A2 learners. Key themes include:

* AI-Powered Lesson Planning: Creating adaptable, engaging lessons to meet individual student needs.
* Student Engagement: Using AI-driven interactive activities, such as chants, games, and assessments.
* Empowering Learners: Promoting student agency through reflective prompts, personalized AI feedback, and culturally relevant materials.

This hands-on session will guide participants through AI-generated tools and strategies drawn from my classroom experience in Taiwan. Attendees will practice integrating AI into lesson design and discuss challenges and opportunities in language education, keeping their own classes in mind.

Participants will leave with actionable skills to:

* Implement AI-generated lesson plans and assessments.
* Increase student engagement using AI tools like ChatGPT.
* Create inclusive, culturally responsive materials.

This session introduces a novel blend of TESOL and CLIL principles, bridging technology and pedagogy to meet the diverse needs of multilingual learners.

Developing an AI Use Checklist for Formative Assessment in EMI #4288

Sun, Jul 20, 09:00-09:25 Asia/Tokyo | LOCATION: Room E410

As Generative AI becomes widespread in English Medium Instruction (EMI), there is a growing need for formative assessment tools that can track students' learning processes rather than relying solely on traditional summative evaluation. This study develops and implements an AI Use Checklist as a formative assessment method to monitor and support students’ engagement with AI during assignment preparation. Drawing on Bloom’s Taxonomy and TPACK, the checklist helps students reflect on their use of AI for language support, brainstorming, research, drafting, and revision. Forty students (20 Japanese and 20 international) in an EMI semantics course completed the checklist after each assignment, documenting when and how they used AI, and reflecting on its impact on their understanding and output. This process-oriented tool provides ongoing feedback to both learners and instructors, making student learning more transparent while promoting academic integrity. Preliminary findings show that while AI can aid conceptual understanding and revision, meaningful engagement depends on guided reflection. By focusing on students' processes and reflections, this study offers a model for AI-integrated formative assessment in EMI classrooms that supports independent learning and critical use of technology.

A Preliminary Study on AI-assisted Automated Essay Scoring #4209

Sun, Jul 20, 09:35-10:00 Asia/Tokyo | LOCATION: Room E410

Automated Essay Scoring (AES) systems present an efficient solution for assessing writing proficiency in high-volume educational settings, though concerns about their accuracy and fairness persist. This study investigates the use of ChatGPT 4.0, a powerful large language model (LLM), as an AES tool for English as a Second Language (ESL) essays. Utilizing the English Language Learner Insight, Proficiency, and Skills Evaluation (ELLIPSE) corpus, which includes 6,482 essays across 44 prompts, we analyzed a subset of 1,154 essays to ensure robust statistical analysis.

We developed a custom Python application to interface with the ChatGPT API, varying the temperature parameter (0.5 and 0.7) to assess its impact on scoring consistency and accuracy. Each essay was scored twice by ChatGPT, and these scores were compared to human ratings using Spearman's rank correlation and the Wilcoxon signed-rank test.

Results showed a positive correlation between ChatGPT scores and human ratings, suggesting the model captures some aspects of essay quality; however, a consistent underestimation bias was noted. Correlation coefficients ranged from 0.509 to 0.656, highlighting limitations in the model's ability to reflect human judgment. Further research is needed to mitigate this bias and enhance the accuracy of LLM-based AES systems.
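The statistical comparison described above can be sketched as follows. The score arrays here are invented for illustration; the actual study compared ChatGPT scores obtained via the API against human ratings of ELLIPSE essays:

```python
# Sketch of the Spearman / Wilcoxon comparison between LLM and human essay
# scores. The eight score pairs below are illustrative, not study data.
from scipy.stats import spearmanr, wilcoxon

human = [3.0, 3.5, 4.0, 2.5, 4.5, 3.5, 2.0, 5.0]  # human ratings
model = [2.7, 3.0, 3.4, 2.3, 3.8, 3.1, 1.9, 4.2]  # LLM scores (systematically lower)

# Rank correlation: do the two raters order the essays similarly?
rho, p_rho = spearmanr(human, model)

# Paired signed-rank test: is there a systematic (e.g., underestimation) bias?
stat, p_w = wilcoxon(human, model)

print(f"Spearman rho = {rho:.3f} (p = {p_rho:.4f})")
print(f"Wilcoxon statistic = {stat:.1f} (p = {p_w:.4f})")
```

In this invented example the correlation is high while the signed-rank test is still significant, mirroring the pattern reported above: the model ranks essays much like humans do but scores them consistently lower.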

ChatGPT as a Writing Peer: Enhancing University Students’ Writing Skills Beyond Genres Through AI-Assisted Formative Assessment #4273

Sun, Jul 20, 11:40-12:05 Asia/Tokyo | LOCATION: Room E410

Formative assessment has proven effective for evaluating academic writing at the tertiary level (Anderson & Ayaawan, 2023; Maphapatra, 2024) thanks to self- and peer feedback. Since its debut in 2022, ChatGPT has drawn attention among language scholars for supporting formative assessment of in-class writing at universities. It helps students not only with idea generation (Lingard, 2023) but also adjusts feedback according to their English proficiency level (Barrot, 2023), serving as a reliable peer. However, existing studies predominantly focus on writing genres (e.g., argumentative essays) rather than essential skills like writing strong thesis statements, leaving students unprepared for new writing styles. Therefore, focusing on university students in Japan, this study integrates ChatGPT into the writing curriculum through lectures to enhance students’ writing skills and confidence when tackling diverse genres. The presentation outlines the research design and data collected through in-class pre-tests and post-tests with approximately 40 students over four weeks. Each week focuses on a specific writing skill, with ChatGPT assisting students’ weekly assignments. Surveys are also conducted to corroborate the test results and assess whether ChatGPT is perceived as a useful peer in the writing process. The findings provide insights into integrating generative AI tools to cultivate effective writing skills beyond specific genres in higher education.

Enhancing EFL Learning: Generative AI vs. Human Interactions in Willingness to Communicate and Grit #4354

Sun, Jul 20, 12:15-12:40 Asia/Tokyo | LOCATION: Room E410

Advancements in generative artificial intelligence (GenAI) have opened new avenues for interactive, adaptive, and personalized language learning experiences, yet limited research has compared the effects of GenAI chatbots and human interlocutors on elementary English as a Foreign Language (EFL) learners’ willingness to communicate (WTC) and grit. In this presentation, I will share findings from a 12-week study exploring how GenAI chatbots compare with human interlocutors in enhancing elementary EFL learners’ WTC and grit. Fifty-seven third-grade students in Taiwan participated. They engaged in weekly 10-minute communication activities during their English class and were divided into two groups: the Bot Group (N=30), who interacted with CoolE Bot, and the No-Bot Group (N=27), who participated in peer interactions. The results demonstrated that CoolE Bot significantly improved learners’ WTC and grit compared to peer interactions. Qualitative analysis highlighted five key benefits of CoolE Bot: (1) providing authentic communication opportunities, (2) facilitating dynamic and coherent interactions, (3) offering contextually appropriate and personalized responses, (4) adopting multi-interactive roles, and (5) fostering low-pressure environments with personalized feedback. These features enhanced participants’ WTC and grit. The findings suggest that GenAI chatbots can complement conventional human-mediated instruction by providing flexible and personalized language practice.