Articles related to the keyword

Rasch model


1.

Construction, Validation, and Application of a Teacher Status Scale (TSS): A Case of Iranian Junior High School Teachers (scientific article accredited by the Ministry of Science)

Keywords: validation, teacher status, Rasch model, English language teacher

The role of teachers in the educational context can go beyond simply teaching the subject matter. It is not uncommon for some students to be greatly influenced by certain teachers and even to consider them role models. An interesting and novel way of inferring the impact a teacher has on students is to reveal the teacher's status as perceived by those students. The present study pursued two goals: first, to construct and validate a teacher status scale (TSS); and second, to reveal the relative status of English language teachers, as compared to other school teachers, in students' perceptions. Regarding the first goal, an 18-item teacher status scale was designed and, using the data collected from 200 students, its construct validity was substantiated through the Rasch model. As for the second goal, 650 junior high school students rated their 300 teachers, and the data were then analyzed using a Chi-square test. In addition, 135 students participated in short interviews, and a total of 530 minutes of recorded interviews constituted the qualitative data. Based on the results, English teachers were found to have the highest status of all school teachers as perceived by the students. Finally, the statistical results were discussed, and implications were provided for English language teaching in the formal context of education.
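For reference, the dichotomous Rasch model that underlies this kind of construct validation expresses the probability that person n endorses or answers item i correctly as a function of the person measure \theta_n and the item difficulty \delta_i:

P(X_{ni} = 1) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}

Validation then checks how well the observed responses match this expectation, typically through item fit statistics and dimensionality analysis.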
2.

Investigating the Impact of Response Format on the Performance of Grammar Tests: Selected and Constructed (scientific article accredited by the Ministry of Science)

Keywords: response format, multiple-choice item, constructed-response item, Rasch model

When constructing a test, an initial decision is choosing an appropriate item response format, which can be classified as selected or constructed. In large-scale tests where time and cost are of concern, the use of the selected-response format, known as multiple-choice items, is quite widespread. This study aimed at investigating the impact of response format on the performance of grammar tests. A concurrent common-item equating design was used to compare multiple-choice items with their stem-equivalent constructed-response counterparts in a test of grammar. The Rasch model was employed to compare the item difficulties, fit statistics, ability estimates, and reliabilities of the two tests. Two independent-samples t-tests were also conducted to investigate whether the differences between the item difficulty estimates and the ability estimates of the two tests were statistically significant. A statistically significant difference was observed in item difficulties. However, no significant difference was detected between the ability estimates, fit statistics, and reliabilities of the two tests.
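A minimal sketch of the difficulty comparison described above, assuming hypothetical Rasch difficulty estimates (in logits) for the two formats rather than the study's actual data:

```python
# Illustrative only: hypothetical item difficulty estimates (logits) for the
# multiple-choice and stem-equivalent constructed-response versions of the items.
import numpy as np
from scipy import stats

mc_difficulties = np.array([-1.2, -0.5, 0.1, 0.4, 0.9, 1.3])
cr_difficulties = np.array([-0.6, 0.2, 0.7, 1.1, 1.6, 2.0])

# Independent-samples t-test on the two sets of difficulty estimates
t_stat, p_value = stats.ttest_ind(mc_difficulties, cr_difficulties)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```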
3.

Translation Quality Assessment Rubric: A Rasch Model-based Validation

Keywords: polytomous data, Rasch model, rubric, translation quality assessment, validity

The present study aimed to examine and validate a rubric for translation quality assessment using Rasch analysis. To this end, the researchers interviewed 20 expert translation instructors to identify the factors they consider important for assessing the quality of students' translations. Based on the commonalities found across the interviews, a 23-item assessment rubric was constructed on a four-point Likert scale. More specifically, this study used the Rasch rating scale model for polytomous data to investigate the psychometric properties of the rating scale in terms of dimensionality, reliability, use of response categories, and sample appropriateness. Then, a translation exam was administered to 60 BA-level translation students at Iranian universities, and the rubric was employed to assess the quality of their translations. The results revealed that the Rasch model fits the data well. Thus, the findings of the study indicated that the rubric is potentially valid and useful, and can be used as a measure of translation quality assessment in the Iranian context.
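For reference, the rating scale model applied to the four-point rubric categories is usually written in adjacent-category form: for item i with ordered categories k = 0, ..., m,

\ln\frac{P(X_{ni} = k)}{P(X_{ni} = k - 1)} = \theta_n - \delta_i - \tau_k, \qquad k = 1, \dots, m,

where \theta_n is the person measure, \delta_i the item (criterion) difficulty, and \tau_k a threshold shared by all items; category functioning is judged by whether the estimated thresholds advance monotonically.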
4.

Constructing and Validating an EFL Hidden Curriculum Scale Using the Rasch Model (scientific article accredited by the Ministry of Science)

Keywords: hidden curriculum, validity, Rasch model, scale adaptation, EFL teachers

Whether acknowledged or neglected by educators, the hidden curriculum is present in every institute. Studying the hidden curriculum is therefore essential to understanding how it functions within an English language institute and among those within it. The purpose of this study was to design and validate a scale measuring language teachers' perspectives on the English as a Foreign Language (EFL) hidden curriculum, using the Rasch model. The review of the literature indicated a lack of research investigating the components of the EFL hidden curriculum from the viewpoint of EFL teachers. To fill this gap, a 40-item questionnaire was devised and validated, and 164 Iranian EFL teachers teaching at different language institutes were asked to respond to it. In this study, the hidden curriculum components were based on Saylor, Alexander, and Lewis' (1981) perspective. Accordingly, the items were classified into three constructs: the social atmosphere (15 items), the organizational structure of the English language institute (14 items), and the interaction between teachers and learners (11 items). The results showed that the questionnaire items fitted the Rasch model after six items were removed from the scale, and the scale showed suitable reliability. This suggests that the questionnaire is potentially valid and can be used as a measure of the EFL hidden curriculum. One implication is that the questionnaire can serve as a research tool to inform the decisions of policymakers, materials designers, institution administrators, and language teachers in future decision making and materials design. It can also be used to measure the relationship between the EFL hidden curriculum and other variables in future research.
5.

A Brain-Friendly Teaching Inventory: A Rasch-based Model Validation (Islamic Azad University research article)

Keywords: brain-friendly teaching, EFL teachers, Rasch model, scale development, validity

When teachers teach according to how brains naturally learn, not only do their learners learn, retain, and recall more easily, but teaching also becomes more enjoyable. The role of the brain in learning and teaching has received increased attention in recent years. Due to the lack of a valid scale for estimating teachers' awareness of brain-friendly teaching, the current study set out to construct and validate a 54-item brain-friendly teaching inventory using the Rasch model. The inventory was administered to 200 Iranian EFL teachers from different educational contexts. The results revealed that all 54 items of the scale fitted the Rasch model well: infit and outfit values were within the acceptable range, indicating the unidimensionality of the scale. Furthermore, the inventory showed suitable reliability. This demonstrates that the Brain-Friendly Teaching Inventory is valid and can be applied as a scale for assessing teachers' awareness of brain-friendly teaching.
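A minimal sketch of how infit and outfit mean-square statistics are computed for dichotomous Rasch items; the abilities, difficulties, and responses below are simulated stand-ins, not the study's data:

```python
# Illustrative only: simulated persons-by-items data for computing item fit.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(0, 1, size=200)                  # person ability estimates (logits)
delta = np.linspace(-2, 2, 54)                      # item difficulty estimates (logits)
P = 1 / (1 + np.exp(-(theta[:, None] - delta)))     # model-expected scores
X = (rng.random(P.shape) < P).astype(int)           # simulated 0/1 responses

W = P * (1 - P)                                     # model variance of each response
Z2 = (X - P) ** 2 / W                               # squared standardized residuals

outfit = Z2.mean(axis=0)                            # unweighted mean square per item
infit = ((X - P) ** 2).sum(axis=0) / W.sum(axis=0)  # information-weighted mean square
print(outfit[:3], infit[:3])                        # values near 1.0 indicate good fit
```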
6.

Psychometric Evaluation of Cloze Tests with the Rasch Model

Keywords: cloze test, local item dependence, Rasch model, unidimensionality, validity

Cloze tests are gap-filling tests designed to measure overall language ability and reading comprehension in a second language. Because they are easy to construct and score, cloze tests are widely used in second and foreign language testing, and research over the past decades has supported their reliability and validity in different contexts. However, due to the interdependent structure of cloze test items, item response theory models have rarely been applied to analyze them. In this research, we apply a method to circumvent the problem of local dependence when analyzing cloze tests with the Rasch model. Using this method, we applied the Rasch model to a cloze test composed of eight passages, each containing 8-15 gaps. Findings showed that the Rasch model fits the data, and thus it is possible to scale persons and cloze passages on an interval, unidimensional scale. The test had high reliability and was well targeted to the examinees. Implications of the study are discussed.
7.

Psychometric Evaluation of Dictations with the Rasch Model

Keywords: dictation, partial credit model, Rasch model, reduced redundancy tests, validation

Dictation is a traditional technique for both teaching and testing overall language ability and listening comprehension. In a dictation, a passage is read aloud by the teacher and examinees write down what they hear. Due to the peculiar form of dictations, their psychometric analysis is challenging: there is no clear boundary between items, and every word in the text is potentially an item. This makes the analysis of dictations with classical and modern test theories rather difficult. In this study, we suggest a procedure that makes dictations analyzable with psychometric models. Our strategy entailed using several independent short passages instead of a single long passage. The number of mistakes in each passage was counted and entered into the analysis, and the Rasch model was then applied to these passage scores. Our findings showed that dictations fit the Rasch model very well and that it is possible to measure examinees' ability on an interval scale using dictations.
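A minimal sketch of the passage-scoring step described above; the passage boundaries and the word-level error matrix are hypothetical stand-ins for the study's data:

```python
# Illustrative only: turning word-level dictation mistakes into passage-level
# scores that can then be analyzed with a Rasch-family (partial credit) model.
import numpy as np

rng = np.random.default_rng(1)
# errors[n, w] = 1 if examinee n wrote word w incorrectly, else 0 (simulated)
errors = (rng.random((150, 60)) < 0.2).astype(int)

# Hypothetical boundaries: four independent short passages of 15 words each
passages = [slice(0, 15), slice(15, 30), slice(30, 45), slice(45, 60)]

# The mistake count per passage becomes one polytomous "item" per passage
passage_scores = np.column_stack([errors[:, s].sum(axis=1) for s in passages])
print(passage_scores.shape)   # (150 examinees, 4 passage-level items)
```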
8.

Distractor Analysis in Multiple-Choice Items Using the Rasch Model

Keywords: distractor analysis, item response theory, multiple-choice items, Rasch model

The multiple-choice (MC) item format is commonly used in educational assessments due to its economy and effectiveness across a variety of content domains. However, numerous studies have examined the quality of MC items in high-stakes and higher education assessments and found many flawed items, especially in terms of their distractors. Such faulty items lead to misleading conclusions about students' performance and distort the final decisions based on them. Distractor analysis is therefore typically conducted in assessments with multiple-choice items to ensure that high-quality items are used as the basis of inference. Item response theory (IRT) and Rasch models, however, have received little attention in distractor analysis. For that reason, the purpose of the present study was to apply the Rasch model to a grammar test in order to analyze the distractors of its items. To achieve this, the study investigated the quality of 10 instructor-written MC grammar items used in an undergraduate final exam, using the item responses of 310 English as a foreign language (EFL) students who had taken an advanced grammar course. The results showed acceptable fit to the Rasch model and high reliability, and malfunctioning distractors were identified.
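A minimal sketch of one common distractor check, comparing the mean ability of examinees who select each option; the option labels, abilities, and choices are hypothetical, not the study's data:

```python
# Illustrative only: a well-behaved item attracts the highest-ability examinees
# to the key, while distractors draw lower-ability examinees.
import numpy as np

rng = np.random.default_rng(2)
theta = rng.normal(0, 1, size=310)             # person ability estimates (logits)
options = np.array(["A", "B", "C", "D"])       # "C" is assumed to be the key
choices = rng.choice(options, size=310, p=[0.15, 0.20, 0.50, 0.15])

for opt in options:
    picked = choices == opt
    print(opt, int(picked.sum()), round(float(theta[picked].mean()), 2))
# A distractor whose mean ability approaches (or exceeds) the key's is flagged
# as malfunctioning and reviewed.
```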
9.

Evaluating Measurement Invariance in the IELTS Listening Comprehension Test

Keywords: differential item functioning, IELTS, measurement invariance, Rasch model

Measurement invariance (MI) refers to the degree to which a measurement instrument or scale produces consistent results across different groups or populations. It essentially shows whether the same construct is measured in the same way across groups, such as different cultures, genders, or age groups. If MI is established, scores on the test can be compared meaningfully across groups. MI is most often established with confirmatory factor analysis methods; in this study, we examine MI using the Rasch model instead. The responses of 211 EFL learners to the listening section of the IELTS were examined for MI across gender and across randomly selected subsamples. The item difficulty measures were compared graphically using the Rasch model. Findings showed that, except for a few items, the IELTS listening items exhibit MI. Therefore, comparisons of IELTS listening scores across gender and other subgroups are valid.
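A minimal sketch of the group-wise difficulty comparison described above, assuming hypothetical item difficulties calibrated separately for two groups and an illustrative 0.5-logit flagging band:

```python
# Illustrative only: items whose difficulty estimates differ markedly between
# the two calibrations are candidates for lack of invariance (DIF).
import numpy as np

delta_group_a = np.array([-1.4, -0.8, -0.3, 0.2, 0.6, 1.1, 1.7])
delta_group_b = np.array([-1.3, -0.9, -0.2, 0.9, 0.5, 1.2, 1.6])

for i, d in enumerate(delta_group_a - delta_group_b, start=1):
    status = "check for DIF" if abs(d) > 0.5 else "invariant"
    print(f"item {i}: difference = {d:+.2f} logits -> {status}")
```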
10.

Examining Local Item Dependence in a Cloze Test with the Rasch Model

Keywords: cloze test, local item dependence, Rasch model, residual correlations

Local item dependence (LID) refers to the situation in which responses to some items in a test or questionnaire are influenced by responses to other items. This can be due to shared prompts, similarity of item content, or deficiencies in item construction. LID due to a shared prompt is highly probable in cloze tests, where items are nested within a passage. The purpose of this research is to examine the occurrence and magnitude of LID in a cloze test. A cloze test was analyzed with the Rasch model, and locally dependent items were identified through residual correlations. Findings showed that three pairs of items were locally dependent. When these items were removed from the analysis, test reliability dropped, but item fit and unidimensionality improved. Removing the three locally dependent items did not, however, affect the mean and standard deviation of person ability. The findings are discussed in terms of detecting and modeling LID in the context of cloze tests and language testing.
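A minimal sketch of residual-correlation (Q3-style) screening for locally dependent item pairs; the abilities, difficulties, responses, and the 0.2 cut-off are all hypothetical:

```python
# Illustrative only: correlate the Rasch residuals of item pairs and flag pairs
# whose residual correlation is unusually large.
import numpy as np

rng = np.random.default_rng(3)
theta = rng.normal(0, 1, size=300)
delta = np.linspace(-1.5, 1.5, 20)
P = 1 / (1 + np.exp(-(theta[:, None] - delta)))    # model-expected scores
X = (rng.random(P.shape) < P).astype(int)          # simulated 0/1 responses

residuals = X - P                                  # observed minus expected
Q3 = np.corrcoef(residuals, rowvar=False)          # item-by-item residual correlations

flagged = [(i + 1, j + 1, round(Q3[i, j], 2))
           for i in range(Q3.shape[0]) for j in range(i + 1, Q3.shape[1])
           if abs(Q3[i, j]) > 0.2]
print(flagged)                                     # candidate locally dependent pairs
```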
11.

Investigating Gender DIF in the Reading Comprehension Section of the B2 First Exam

Keywords: fairness, gender, Mantel-Haenszel, Rasch model, reading comprehension

Construct-irrelevant variance is a major threat to validity: it indicates that additional, unrelated variables distort the meaning of test scores and cause the test to be biased. Differential item functioning (DIF) analysis is an important technique for examining the validity and fairness of educational tests. Given the importance of test fairness in large-scale exams, this study aimed to (1) detect gender DIF in the reading comprehension section of the B2 First exam using the Rasch model and the Mantel-Haenszel method, and (2) investigate the comparability of the results from the two DIF detection techniques. To this end, the reading section of the B2 First exam was administered to 207 undergraduate students of English as a foreign language (EFL). After checking the fit of the data to the Rasch model, the Rasch-based DIF analysis identified two items showing DIF, whereas the Mantel-Haenszel method flagged three gender-DIF items.
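A minimal sketch of the Mantel-Haenszel common odds ratio for a single item, stratifying examinees by total score; the counts and the ETS delta conversion shown are illustrative, not the study's results:

```python
# Illustrative only: each row is one total-score stratum with counts of
# [reference correct, reference incorrect, focal correct, focal incorrect].
import numpy as np

strata = np.array([
    [12,  8,  9, 11],
    [20, 10, 15, 15],
    [25,  5, 20, 10],
    [18,  2, 16,  4],
])

N = strata.sum(axis=1)                         # examinees per stratum
num = (strata[:, 0] * strata[:, 3] / N).sum()  # sum of A_k * D_k / N_k
den = (strata[:, 1] * strata[:, 2] / N).sum()  # sum of B_k * C_k / N_k
alpha_mh = num / den                           # common odds ratio (1 = no DIF)
delta_mh = -2.35 * np.log(alpha_mh)            # ETS delta scale
print(round(alpha_mh, 2), round(delta_mh, 2))
```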
12.

Modelling Local Item Dependence in Cloze Tests with the Rasch Model: Applying a New Strategy

Keywords: cloze test, conditional independence, partial credit model, Rasch model

Cloze tests are commonly used in language testing as a quick measure of overall language ability or reading comprehension. A problem for the analysis of cloze tests with item response theory (IRT) models is that cloze test items are locally dependent, which violates the conditional or local independence assumption of IRT models. In this study, a new modeling strategy is suggested to circumvent the problem of local item dependence in cloze tests. The strategy involves identifying locally dependent items in the first step and combining them into polytomous items in the second step. Finally, the partial credit model is applied to the resulting mixture of dichotomous and polytomous items. Our findings showed that the new strategy yields a better model-data fit than the dichotomous model in which dependence is ignored, although with lower reliability. Results also indicated that the person and item parameters from the two models correlate highly. The findings are discussed in light of the literature on managing local dependence in educational tests.
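For reference, the partial credit model applied in the final step is usually written in adjacent-category form: for a combined item i with possible scores k = 0, ..., m_i,

\ln\frac{P(X_{ni} = k)}{P(X_{ni} = k - 1)} = \theta_n - \delta_{ik}, \qquad k = 1, \dots, m_i,

where \theta_n is the person ability and \delta_{ik} is the step difficulty governing the transition from score k - 1 to k on item i; a dichotomous item is simply the special case m_i = 1.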