Content related to the keyword

Automated Writing Evaluation (AWE)


1.

Efficacy of Instatext for Improving Persian-English Freelance Translators' Language Quality: From Perception to Practice

Keywords: Automated Feedback Program (AFP); Automated Writing Evaluation (AWE); InstaText; Language Quality; Perception; Persian-English Freelance Translators

Views: 218 · Downloads: 147
There is growing agreement among researchers on the advantages of using automated feedback programs (AFPs), but most previous studies have evaluated better-known AFPs such as Grammarly and Ginger in English writing classes. No previous study on AFPs has evaluated the effectiveness of InstaText or users' perceptions of it. This study therefore examined the effects of InstaText on improving the language quality (i.e., grammar, spelling, and style) of Persian-English freelance translators who used the tool to edit their English translations of Persian academic papers, which are considered technical translations. It also investigated how these users perceived the tool. This quantitative study was conducted in two phases: a one-group pretest-posttest phase, in which the effect of using InstaText on the language quality of translated technical texts was examined with 75 participants, and a survey phase, in which the participants' perceptions of InstaText were measured using the Usefulness, Satisfaction, and Ease of Use (USE) questionnaire. InstaText did not help the participants make significant progress in grammar or spelling, but its effect on improving their style was significant. Further, the participants perceived the tool as intuitive, user-friendly, efficient, time-saving, and satisfactory.
2.

Formative Assessment Feedback to Enhance the Writing Performance of Iranian IELTS Candidates: Blending Teacher and Automated Writing Evaluation

Keywords: Automated Writing Evaluation (AWE); Blended Feedback; Formative Assessment; IELTS Writing; Learners' Perception

Views: 124 · Downloads: 147
With the incremental integration of technology into writing assessment, technology-generated feedback has moved steadily toward replacing human correction and rating. Yet further investigation is needed into its potential use as either a supplement to or a replacement for human feedback. This embedded mixed-methods study investigated three groups of Iranian intermediate IELTS candidates who received automated, teacher, or blended (automated + teacher) feedback on different aspects of writing while practicing for the IELTS writing test. In addition, a structured written interview explored learners' perceptions (attitude, clarity, preference) of the feedback mode they received. Findings revealed that students who received teacher-only and blended feedback performed better in writing. The blended feedback group also outperformed the others in task response, the teacher feedback group in cohesion and coherence, and the automated feedback group in lexical resource. Analysis of the interviews showed that learners held a high opinion of the clarity of all feedback modes and that their attitudes toward them were positive; however, they strongly preferred the blended mode. The findings suggest new ways to facilitate the learning and assessment of writing and recommend that teachers provide comprehensive, accurate, and continuous feedback as a means of formative assessment.