Managing Classroom Assessment Quality: Evidence on Teacher Test Construction Competence, Test Quality KPIs, and Pupils' Performance in Western Area Rural, Sierra Leone

Author(s)

Issa John Gbla

Pages: 01–05 | DOI: 10.5281/zenodo.17722760

Volume 14 - November 2025 (11)

Abstract

This study repositions classroom assessment as a management system by testing whether teachers' test-construction competence (TCC) operates as a process KPI that improves teacher-made test quality and, indirectly, pupil performance in junior secondary mathematics. The study covers Western Area Rural, Sierra Leone (30 schools; 60 teachers; 300 pupils). Tests were audited for blueprint coverage, item difficulty and discrimination, distractor performance, and internal consistency (KR-20/α) to build a Test Quality Index (TQI). Multilevel models with school random intercepts and a mediation specification assessed TQI as an indirect pathway between TCC and achievement.
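The item-level statistics behind an index of this kind are standard classical-test-theory quantities. As a minimal sketch (not the study's actual audit procedure), the snippet below computes item difficulty, an upper-lower discrimination index, and KR-20 reliability from a 0/1 item-response matrix; the 27% upper/lower split is a common convention, assumed here for illustration.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for a 0/1 item-response matrix
    (rows = pupils, columns = items)."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                      # item difficulty (proportion correct)
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of pupils' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

def discrimination(responses: np.ndarray) -> np.ndarray:
    """Upper-lower (27%) discrimination index per item:
    proportion correct in the top group minus the bottom group."""
    totals = responses.sum(axis=1)
    order = np.argsort(totals)                      # pupils sorted by total score
    cut = max(1, int(round(0.27 * len(totals))))
    lower, upper = order[:cut], order[-cut:]
    return responses[upper].mean(axis=0) - responses[lower].mean(axis=0)
```

How these statistics are weighted into a single TQI is a design choice of the study and is not reproduced here.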
Findings: TCC correlated positively with achievement (β≈0.22–0.28). Introducing TQI attenuated the direct TCC path (β≈0.17; p≈.065) and produced an indirect effect of about 0.08 SD (bootstrap 95% CI ≈ [0.02, 0.14]).
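A bootstrap indirect effect of the kind reported above can be sketched as follows. This is a simplified single-level OLS version (a·b product of the TCC→TQI path and the TQI→achievement path controlling for TCC) with a percentile confidence interval; the paper's multilevel specification with school random intercepts is omitted for brevity, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(tcc, tqi, score):
    """a*b indirect effect: a = TCC -> TQI slope,
    b = TQI -> score slope controlling for TCC (plain OLS)."""
    X1 = np.column_stack([np.ones_like(tcc), tcc])
    a = np.linalg.lstsq(X1, tqi, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(tcc), tcc, tqi])
    b = np.linalg.lstsq(X2, score, rcond=None)[0][2]
    return a * b

def bootstrap_ci(tcc, tqi, score, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect."""
    n = len(tcc)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample cases with replacement
        stats.append(indirect_effect(tcc[idx], tqi[idx], score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```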
Practical implications: Treating TCC and TQI as KPIs supports a low‑cost continuous quality improvement (CQI) cycle: blueprint checks, item‑writing clinics, marking moderation, and termly dashboard review embedded in district QA governance.
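The termly dashboard step of such a CQI cycle reduces to a simple threshold check. The sketch below is purely illustrative (the threshold value and school names are hypothetical, not taken from the study): it flags schools whose mean TQI falls below a review cutoff for follow-up at the district QA meeting.

```python
# Hypothetical termly dashboard check: flag schools whose mean Test Quality
# Index (TQI) falls below a review threshold. The 0.60 cutoff is illustrative.
THRESHOLD = 0.60

def dashboard_flags(school_tqi: dict[str, list[float]],
                    threshold: float = THRESHOLD) -> dict[str, str]:
    """Return a per-school status ('ok' or 'review') for the termly QA review."""
    flags = {}
    for school, tqis in school_tqi.items():
        mean_tqi = sum(tqis) / len(tqis)            # mean TQI across the term's tests
        flags[school] = "review" if mean_tqi < threshold else "ok"
    return flags
```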
Originality/value: By recasting psychometric indicators as management devices, we offer a pragmatic blueprint for assessment quality management (AQM) in low‑resource public education systems, linking process control to measurable gains in information quality and student outcomes.

Keywords

quality management; KPIs; continuous improvement; assessment quality management; performance dashboard; public sector; Sierra Leone

References

Gronlund, N. E. (1981). *Measurement and evaluation in teaching* (4th ed.). Macmillan.

Crocker, L., & Algina, J. (2008). *Introduction to classical and modern test theory*. Cengage Learning.

Government of Sierra Leone, Ministry of Basic and Senior Secondary Education. (2022). *Sierra Leone Education Sector Plan 2022–2026*. https://www.unicef.org/sierraleone/reports/sierra-leone-education-sector-plan-2022-2026

Government of Sierra Leone, Ministry of Basic and Senior Secondary Education. (2018). *Free Quality School Education (FQSE) initiative*. https://mbsse.gov.sl/fqse/

Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. *Applied Measurement in Education, 15*(3), 309–333. https://doi.org/10.1207/s15324818ame1503_5

Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. *Applied Measurement in Education, 2*(1), 37–50. https://doi.org/10.1207/s15324818ame0201_3

Holland, P. W., & Thayer, D. T. (1986). Differential item functioning and the Mantel–Haenszel procedure. *ETS Research Report Series, 1986*(2), i–24. https://doi.org/10.1002/j.2330-8516.1986.tb00186.x

Kamata, A. (2004). An introduction to differential item functioning analysis. *Applied Measurement in Education, 17*(1), 101–114.

Zwick, R. (2012). A review of ETS differential item functioning assessment procedures: Flagging rules, minimum sample size requirements, and criterion refinement. *ETS Research Report Series, 2012*(1), 1–30. https://files.eric.ed.gov/fulltext/EJ1109842.pdf

Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. *Psychometrika, 2*(3), 151–160. https://doi.org/10.1007/BF02288391

Kelley, T. L. (1939). The selection of upper and lower groups for the validation of test items. *Journal of Educational Psychology, 30*(1), 17–24. https://doi.org/10.1037/h0057123

Kissi, P., & Adomako, E. (2023). Teachers’ test construction competencies in examination-oriented schools: Evidence from Ghana. *Frontiers in Education, 8*, 1154592. https://doi.org/10.3389/feduc.2023.1154592

Nunoo-Quansah, F., et al. (2025). Multiple-choice test development competencies of junior-high mathematics teachers in Ghana: A triangulation methodology. *Frontiers in Education, 10*, 1658971. https://doi.org/10.3389/feduc.2025.1658971

Osei, K., & Mensah, I. (2022). Using classical test and item response theories to evaluate psychometric quality of a teacher‑made test in Ghana. *European Scientific Journal, 18*(1), 147–165. https://eujournal.org/index.php/esj/article/view/15098

Quansah, F., Amoako, I., & Ankomah, F. (2019). Teachers’ test construction skills in senior high schools in Ghana. *International Journal of Assessment Tools in Education, 6*(1), 1–8. https://files.eric.ed.gov/fulltext/EJ1246259.pdf

Deming, W. E. (1986). *Out of the crisis*. MIT Press.

Juran, J. M., & Godfrey, A. B. (1998). *Juran's quality handbook* (5th ed.). McGraw-Hill.

Kaplan, R. S., & Norton, D. P. (1996). *The balanced scorecard: Translating strategy into action*. Harvard Business School Press.

Sanger, M. B. (2008). From measurement to management: Breaking through the barriers to state and local performance. *Public Administration Review, 68*(S1), S70–S85.

Mintrom, M. (2019). *Public policy: Efficiencies, equity, and evidence* (2nd ed.). Oxford University Press.

Bryson, J. M., Crosby, B. C., & Bloomberg, L. (2014). Public value governance: Moving beyond traditional public administration. *Public Administration Review, 74*(4), 445–456.
