Carol M. Myford, PhD

Associate Professor Emerita

Educational Psychology

About

Carol Myford is an Associate Professor Emerita in the Department of Educational Psychology. Her program of research focuses on scoring issues in large-scale performance and portfolio assessments. She has published studies on rater training, rubric design, quality control monitoring, the improvement of rater performance, and the detection and measurement of rater effects.

During her career, Myford has held assessment- and measurement-related positions in government, business and industry, and higher education. Prior to coming to UIC, Myford was a Senior Research Scientist at the Educational Testing Service (ETS). While at UIC, she designed and taught face-to-face and online courses in assessment, measurement, and program evaluation and advised doctoral students in the MESA (Measurement, Evaluation, Statistics and Assessment) program. In semi-retirement, she continues to provide assessment- and measurement-related training and consultation, both in the U.S. and abroad. Her training-related travels have taken her to Australia, Thailand, Russia, South Africa, Honduras, Japan, and Costa Rica.

In 2009, and again in 2018, she was named a Fulbright Specialist. Working through the Fulbright Specialist Program, Myford provides training and consultation services to institutions overseas that are facing assessment and measurement challenges. She has authored or co-authored numerous publications and has served as an advisory editor for the Journal of Educational Measurement and as a member of the editorial board of the Journal of Applied Measurement.

Selected Publications

Knoch, U., Fairbairn, J., Myford, C. M., & Huisman, A. (2018). Evaluating the relative effectiveness of online and face-to-face training of new writing raters. Papers in Language Testing and Assessment, 7(1), 61-86.

Till, H., Ker, J., Myford, C. M., Stirling, K., & Mires, G. (2015). Constructing and evaluating a validity argument for the final-year ward simulation exercise. Advances in Health Sciences Education. doi: 10.1007/s10459-015-9601-5

Winke, P., Gass, S., & Myford, C. M. (2013). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing, 30(2), 231-252.

Myford, C. M., & Wolfe, E. W. (2009). Monitoring rater performance over time: A framework for detecting differential accuracy and differential scale category use. Journal of Educational Measurement, 46(4), 371-389.

Notable Honors

2020, Benjamin Drake Wright Senior Scholar Award, American Educational Research Association, Rasch Measurement Special Interest Group

2018, Fulbright Specialist, Fulbright Specialist Program, U.S. Department of State, Bureau of Educational and Cultural Affairs

2013, University of Illinois at Chicago Award for Excellence in Teaching, University of Illinois at Chicago

2009, Fulbright Specialist, Fulbright Specialist Program, U.S. Department of State, Council for International Exchange of Scholars, Institute of International Education

2006, Teaching Recognition Award, University of Illinois at Chicago

1995, ETS Scientist Award, Educational Testing Service

Education

1989 - PhD, University of Chicago, Measurement, Evaluation, and Statistical Analysis
1980 - MEd, University of Georgia, Educational Psychology
1973 - BA, Hiram College, Psychology/Music Education

Research Currently in Progress

Myford's program of research focuses on scoring issues in performance and portfolio assessments. She has conducted studies on training raters, designing scoring rubrics, monitoring quality control, improving rater performance, detecting different types of rater errors, devising statistical indicators of rater drift, and understanding the rater cognitive processes that underlie unusual or discrepant rating patterns. Myford has devised rating scales for evaluating complex performances and products and has analyzed rating data using many-facet Rasch measurement models. Her work blends qualitative and quantitative approaches to examining rating processes, illustrating how the interplay of statistical and qualitative analyses can help in developing, monitoring, and continually improving large-scale performance and portfolio assessment systems.
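For readers unfamiliar with the analytic framework named above, a many-facet Rasch measurement model extends the Rasch rating scale model with additional facets such as rater severity. The display below is a generic textbook form of the model with person, item, and rater facets, offered only as an illustrative sketch; it is not reproduced from Myford's own publications.

% Illustrative many-facet Rasch model: P_{nijk} is the probability that
% rater j awards examinee n a rating in category k (rather than k-1) on item i;
% theta_n = examinee ability, delta_i = item difficulty,
% alpha_j = rater severity, tau_k = difficulty of the step into category k.
\[
  \log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right)
  = \theta_n - \delta_i - \alpha_j - \tau_k
\]

In this parameterization, a rater with a larger severity term awards systematically lower ratings across examinees and items, which is how rater severity and related drift indicators are typically quantified in this line of work.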