Score decision-making remains a largely undocumented process in performance assessment. Conducting an in-depth cognitive study of scoring requires asking whether the processes underlying raters' decisions can be identified efficiently and objectively. To this end, the present study sought to shed light on how Iranian English teachers, as untrained raters, rate learners' speech samples and how their cognition operates in the decision-making process behind the scores they assign. A set of monologues was elicited from a group of language learners, and English language teachers were then asked to rate them. The raters were asked both to assign a score and to comment on why they had assigned it. After rating the samples, the raters were interviewed individually. The recorded interviews and the written comments on the scores were subjected to qualitative analysis, including coding and the extraction of both idiosyncratic and shared features of the raters' cognition. The results revealed that some of the factors the raters attended to were linguistic and relevant to the speaking proficiency construct, such as fluency, accuracy, and complexity. Other factors influencing the raters were non-linguistic and not directly related to the construct, such as tone of voice and the test-taker's personality. The untrained raters appeared to lack a clear definition of the oral proficiency construct. Implications of the study for rater training programs are discussed.