Corpus-Based Insights into Modeling a Level-Specific Persian Language Proficiency Test (PLPT): Development and Factor Structure of the PLPT Listening Tasks

Article type: Research article


1 Full Professor, University of Tehran

2 Post-doctoral Researcher, University of Tehran





Authors [English]

  • -- -- 1
  • --- --- 2
Abstract [English]

This study examined the factor structure of the listening section of a Persian Language Proficiency Test (PLPT) developed and used for academic purposes. Structural equation modeling (SEM) was carried out in AMOS (v. 18) on the responses of 120 Persian language learners who participated in the first piloting phase of the test in 2014. To examine whether the listening factor corresponds to the test's hypothesized structure, three models (unitary, correlated, and uncorrelated) were postulated on the basis of the literature. Model testing suggested that the correlated model (i.e., correlated receptive skills of listening and reading) fitted the data best, supporting the reporting of distinct listening and reading factors. The results of this pilot study thus provide empirical evidence for reporting and interpreting PLPT listening scores separately from reading scores. Implications for Persian language teaching, learning, and assessment are discussed.
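The model comparison described above can be illustrated outside AMOS. The sketch below simulates the correlated two-factor case: two latent abilities (listening and reading) with an assumed latent correlation, each measured with error, for a sample of the study's size (n = 120). The latent correlation of 0.7 and the error standard deviation of 0.5 are illustrative assumptions, not values from the study; the point is that distinct-but-correlated factors predict a substantial, but clearly imperfect, correlation between observed subtest scores.

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

N = 120   # pilot sample size reported in the study
r = 0.7   # assumed latent listening-reading correlation (illustrative)

listening_scores, reading_scores = [], []
for _ in range(N):
    # Two correlated latent abilities, built from a shared component
    z_listening = random.gauss(0, 1)
    z_reading = r * z_listening + math.sqrt(1 - r**2) * random.gauss(0, 1)
    # Each observed subtest score = latent ability + measurement error
    listening_scores.append(z_listening + random.gauss(0, 0.5))
    reading_scores.append(z_reading + random.gauss(0, 0.5))

obs_r = pearson(listening_scores, reading_scores)
print(round(obs_r, 2))
```

Under a unitary model the observed correlation would approach the subtests' reliabilities; under an uncorrelated model it would approach zero. SEM model comparison formalizes exactly this contrast by testing which covariance structure best reproduces the observed data.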

Keywords [English]

  • Persian Language Proficiency Test (PLPT)
  • Academic Version (AV)
  • Persian language learners
  • Written Corpus
References

Alavi, S. M., Kaivanpanah, Sh., & Nayernia, A. (2011). The factor structure of a written English proficiency test. Iranian Journal of Applied Linguistics, 3(2), 27-50.
Alderson, J. C. (1996). Do corpora have a role in language assessment? In J. A. Thomas & M. H. Short (Eds.), Using corpora for language research (pp. 248-259). London: Longman.
Alderson, J. C. & Banerjee, J. (2001). Language testing and assessment. Language Teaching, 34, 213-236.
Alderson, J. C., Clapham, C. M., & Wall, D. (1995). Language test construction and evaluation. Cambridge: Cambridge University Press.
ALTE. (2002). The ALTE can do project. Articles and can do statements produced by the members of ALTE 1992-2002. Retrieved from
Bachman, L. F. (2004). Statistical analyses for language assessment. New York, NY: Cambridge University Press.
Bachman, L. F., & Palmer, A. S. (1981). The construct validation of the FSI oral interview. Language Learning, 31(1), 67-86.
Bachman, L. F. & Palmer, A. S. (1982). The construct validation of some components of communicative proficiency. TESOL Quarterly, 16, 449-465.
Bachman, L. F. & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.
Bachman, L. F. & Palmer, A. S. (2010). Language assessment in practice. Oxford: Oxford University Press.
Barker, F. (2004). Corpora and language assessment: trends and prospects, research notes. Cambridge: UCLES.
Barker, F. (2010). How can corpora be used in language testing? In A. O'Keeffe & M. McCarthy (Eds.), The Routledge handbook of corpus linguistics (pp. 633-646). New York: Taylor & Francis.
Biber, D., Conrad, S., Reppen, R., Byrd, P., Helt, M., Clark, V., Cortes, V., Csomay, E., & Urzua, A. (2004). Representing language use in the university: Analysis of the TOEFL 2000 spoken and written academic language corpus (Report No. RM-04-03; TOEFL-MS-25). Princeton, NJ: Educational Testing Service.
Bijankhan, M., Sheykhzadegan, J., Bahrani, M. & Ghayoomi, M. (2011). Lessons from building a Persian written corpus: Peykare. Language Resources and Evaluation, 45, 143-164.
Brooks, L. (2001). Converting an observation checklist for use with the IELTS speaking test. Research Notes, 11, 1-20.
Canale, M. & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1(1), 1-47.
Council of Europe. (2001). Common European framework of reference for languages: Learning, Teaching, Assessment. Strasbourg: Language Policy Unit.
Davies, A. (1989). Communicative competence as language use. Applied Linguistics, 10(2), 157–170.
Ghonsooli, B. (2010). Development and validation of a PLPT. Foreign Language Research, 57, 115-129.
Granger, S., Hung, J., & Petch-Tyson, S. (2002). Computer-learner corpora, second language acquisition, and foreign language teaching. Philadelphia, PA: John Benjamins.
Hale, G. A., Stansfield, C. W., Rock, D. A., Hicks, M. M., Butler, F. A., & Oller, J. W., Jr. (1988). Multiple-choice cloze items and the Test of English as a Foreign Language. TOEFL Research (Rep. 26). Princeton, NJ: ETS.
Harrington, D. (2009). Confirmatory Factor Analysis. New York: Oxford University Press.
Harsch, C., & Rupp, A. A. (2011). Designing and scaling level-specific writing tasks in alignment with the CEFR: A test-centered approach. Language Assessment Quarterly, 8(1), 1-33.
Hawkey, R., & Barker, F. (2004). Developing a common scale for the assessment of writing. Assessing Writing, 9, 122-159.
Hawkins, J. A., & Filipović, L. (2012). Criterial features in L2 English. Cambridge: Cambridge University Press.
Hughes, G. (2008). Text organization features in an FCE reading gapped sentence task. Research Notes, 31, 26-31.
Hymes, D. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.), Sociolinguistics: Selected readings (pp. 269-293). Harmondsworth: Penguin.
In’nami, Y., & Koizumi, R. (2011). Factor structure of the revised TOEIC® test: A multiple-sample analysis. Language Testing, 29(1), 131-152.
Kennedy, C. & Thorp, D. (2007). A Corpus-based Investigation of Linguistic Responses to an IELTS Academic Writing Task, in L. Taylor and P. Falvey (Eds.), IELTS Collected Papers: Research in Speaking and Writing Assessment (Studies in Language Testing vol. 19, pp. 316-77). Cambridge: UCLES and Cambridge University Press.
Kollias, C. (2012). Standard setting of the Basic Communication Certificate in English (BCCE™) examination: Setting a Common European Framework of Reference (CEFR). Hellenic American University, Office for Language Assessment and Test Development (OLATD).
Milanovic, M. (2009). Cambridge ESOL and the CEFR. Research Notes, 37, 2-5.
Oller, J. W. Jr. (1983). Evidence for a general language proficiency factor: An expectancy grammar. In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 3–10). Rowley, MA: Newbury House.
Pae, H. K. (2012). A model for receptive and expressive modalities in adult English learners’ academic L2 skills. Retrieved December 11, 2015, from Pearson Language Test. Documents/ ResearchNoteexpressivefinal2012-10-02GJ.pdf.
Park, B. (2014). Cognitive and affective processes in multimedia learning. Learning and Instruction, 29, 125-127.
Sang, F., Schmitz, B., Vollmer, H. J., Baumert, J. & Roeder, P. M. (1986). Models of second language competence: A structural equation approach. Language Testing, 3(1), 54-79.
Sasaki, M. (1996). Second language proficiency, foreign language aptitude, and intelligence: Quantitative and qualitative analyses. New York: Peter Lang.
Sahraei, R. M., & Jalili, S. A. (2012). Theoretical foundations for development of a Persian proficiency test. Research Notes of AZFA, 1(1), 123-150.
Schedl, M. A., Gordon, P. C., & Tang, K. (1996). An analysis of the dimensionality of TOEFL reading comprehension items. TOEFL Research (Rep. 53). Princeton, NJ: ETS.
Shin, S.-K. (2005). Did they take the same test? Examinee language proficiency and the structure of language tests. Language Testing, 22(1), 31–57.
Spolsky, B. (1989). Conditions for second language learning: Introduction to a general theory. Oxford: Oxford University Press.
Taylor, L., & Jones, N. (2006). Cambridge ESOL exams and the Common European Framework of Reference (CEFR). Research Notes, 24, 2-5.
Wilson, K. M. (2000). An exploratory dimensionality assessment of the TOEIC test (TOEIC Research Report). (Publication No. RR-00-14), Princeton, NJ: Educational Testing Service.