Content related to the keyword: writing assessment
Specialized areas:
The present study attempted to investigate the impact of electronic portfolio assessment on Iranian EFL learners’ writing performance. To do so, 30 advanced EFL learners who participated in a TOEFL preparation course were selected as the participants of the study. After administering a truncated version of the TOEFL proficiency test, they were randomly assigned to control and experimental groups. The experimental group received a treatment including electronic portfolio assessment, while the control group received a placebo. To collect the required data, two instruments (a writing pre-test and a writing post-test) were administered to both groups during the experiment. Subsequently, the learners’ scores were collected and statistically analyzed. Inter-rater reliability, a matched t-test, and an independent t-test were calculated. The findings revealed that the participants in the experimental group outperformed those in the control group; it was therefore concluded that electronic portfolio assessment can improve writing ability and can be considered a motivating assessment strategy.
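The study's actual scores are not reported in the abstract, but the independent-samples t-test it mentions can be sketched in plain Python. The two score lists below are hypothetical, and the pooled-variance form assumes roughly equal group variances:

```python
from statistics import mean, variance

def independent_t(group_a, group_b):
    """Pooled-variance independent-samples t statistic (equal-variance form)."""
    na, nb = len(group_a), len(group_b)
    # Pooled sample variance, weighting each group's variance by its df.
    sp2 = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical post-test writing scores (illustrative only; not the study's data).
experimental = [85, 88, 90, 84, 87, 91, 86, 89]
control      = [78, 80, 82, 79, 81, 77, 80, 83]
t = independent_t(experimental, control)  # large positive t favours the experimental group
```

A value of t well beyond the critical value for 14 degrees of freedom would, as in the study, indicate that the experimental group outperformed the control group.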
Score Generalizability of Writing Assessment: The Effect of Rater’s Gender (Ministry of Science accredited scholarly article)
Source: Applied Research on English Language, Vol. 6, No. 4, 2017, pp. 411-434
The score reliability of language performance tests has attracted increasing interest. Classical Test Theory cannot examine multiple sources of measurement error. Generalizability theory extends Classical Test Theory to provide a practical framework for identifying and estimating the multiple factors that contribute to the total variance of a measurement. Using analysis of variance, generalizability theory partitions the variance into its corresponding sources and estimates their interactions. This study used generalizability theory as a theoretical framework to investigate the effect of raters’ gender on the assessment of EFL students’ writing. Thirty Iranian university students participated in the study. They were asked to write on an independent task and an integrated task. The essays were holistically scored by 14 raters. A rater training session was held prior to scoring the writing samples. The data were analyzed using the GENOVA software program. The results indicated that the male raters’ scores were as reliable as those of the female raters for both writing tasks. A large rater variance component revealed low score generalizability when only one rater is used. The implications of the results for educational assessment are elaborated.
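As a rough illustration of the variance partitioning the abstract describes, the sketch below estimates the variance components of a fully crossed persons × raters design and the resulting G coefficient. The tiny score matrix is invented for illustration; GENOVA's actual algorithms and output are not reproduced here:

```python
def g_study(scores):
    """Variance components for a fully crossed persons x raters (p x r) design.
    scores[p][r] is rater r's score for person p."""
    n_p, n_r = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]
    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((scores[p][r] - grand) ** 2
                 for p in range(n_p) for r in range(n_r))
    ss_pr = ss_tot - ss_p - ss_r
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))
    # Expected-mean-square solutions; negative estimates are clamped to zero.
    var_p = max((ms_p - ms_pr) / n_r, 0.0)   # universe-score (person) variance
    var_r = max((ms_r - ms_pr) / n_p, 0.0)   # rater main effect
    var_pr = ms_pr                           # person-x-rater interaction + residual
    return var_p, var_r, var_pr

def g_coefficient(var_p, var_pr, n_raters):
    """Relative G coefficient for a decision study with n_raters raters."""
    return var_p / (var_p + var_pr / n_raters)

# Toy data: 3 examinees scored by 2 raters.
vp, vr, vpr = g_study([[4, 5], [2, 3], [6, 5]])
g1 = g_coefficient(vp, vpr, 1)  # generalizability with a single rater
```

A large interaction/residual component relative to the person component drives the single-rater G coefficient down, which is the pattern behind the abstract's point about low score generalizability when only one rater is used.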
Reciprocal Contribution of Writing Attributes to One Another (Ministry of Science accredited scholarly article)
Formative writing assessment can help writing instructors explore the weaknesses and strengths of language learners’ writing performance. The current research aimed first to identify writing attributes and second to examine their reciprocal contribution to one another. To achieve this objective, the participants (N=200) were asked to write about two different topics. One writing sample, produced before the treatment, was considered the pre-test, and the other, produced after the treatment, was considered the post-test writing sample. Having scrutinized the pre-writing samples, five raters extracted the writing attributes that appeared in the pre-test and post-test writing samples. The results indicate a statistically significant difference among the participants’ performances in terms of their use of writing attributes. The results can be advantageous for both instructors and syllabus designers in providing pedagogical materials that identify particular weaknesses and highlight the more troublesome points to concentrate on in the classroom so as to arrange effective instruction.
Celebrating Mistakes: The Alignment of Assessment for Learning (AfL) and Motivational Strategy (MotS) in a Constrained Context (Ministry of Science accredited scholarly article)
Source: Applied Research on English Language, Vol. 12, No. 4, 2023, pp. 71-102
In education, the terms “assessment” and “motivation” seem paradoxical. However, a closer examination leads to the understanding that the two can conceptually be aligned. Assessment for Learning (AfL) and teachers’ Motivational Strategy (MotS) can be synergized using AfL pedagogical principles that purportedly foster students’ motivation. The dearth of studies juxtaposing both constructs prompted us to examine the AfL practices of seven higher-education teachers in Indonesia, with the aim of providing empirical data on the convergence between AfL and MotS. Set against the backdrop of a low-motivation context, namely emergency remote learning and teaching English as a Foreign Language (EFL) writing, the teachers were interviewed regarding their AfL practices, and the data were examined using principally deductive qualitative analysis. The results showed that the greatest alignment occurred in the “maintain” stage of MotS, where teachers provided a supportive classroom environment in which mistakes were treated as a natural part of learning and involved students in self- and peer-assessment. On the other hand, the constrained context resulted in divergent conceptions among the teachers of what they perceived as motivating for their students. This implies the need for EFL writing teachers to integrate AfL and various stages of motivational strategies to foster more engagement and help students improve their writing achievement.
Perceptual (mis)matches between learners’ and teachers’ rating criteria in the Iranian EFL writing self-assessment context
As a formative assessment procedure, self-assessment aims to converge learners’ and teachers’ views in assessment. Hence, reducing the perceptual mismatches between learners’ and teachers’ assessments would positively affect the learning process. To this aim, the present study investigated to what extent the learners’ assessment of their writing, before and after being provided with a list of rating criteria, agrees with that of their teachers. Six EFL writing teachers and 27 EFL learners participated in this study. The learners were asked to rate their writing before and after being provided with rating criteria developed by the researchers. The teachers also rated the students’ writings following the same criteria. The results showed a significant difference between the students’ scores on the first and second assessment occasions. The teachers’ and the students’ assessments on the second occasion were also found to correlate significantly. Moreover, the analysis of the students’ comments showed that while they rated their writing on a few limited aspects of writing on the first rating occasion, they assessed their essays using more components in the second assessment phase. Overall, the findings revealed that providing learners with rating criteria would not only reduce the perceptual mismatches between the students’ and the teachers’ assessments but, by involving the students’ voices in their own assessment, would also promote democratic classroom assessment. Pedagogical implications of the study are discussed.
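The agreement between teachers' and students' second-occasion ratings that the abstract reports can be illustrated with a plain Pearson coefficient. The paired score lists below are hypothetical, not the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

# Hypothetical second-occasion ratings out of 100 (illustrative only).
student_self = [70, 65, 80, 75, 60, 85]
teacher      = [72, 63, 78, 77, 62, 84]
r = pearson_r(student_self, teacher)  # values near 1 indicate strong agreement
```

A coefficient close to 1, as in this invented pair of lists, is the kind of pattern that would correspond to the significant teacher-student correlation the study found after the rating criteria were introduced.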