Carol Myford, PhD
Carol Myford is an associate professor in the Department of Educational Psychology. Her program of research focuses on scoring issues in large-scale performance and portfolio assessments. She has published studies related to rater training, rubric design, quality control monitoring, improving rater performance, and the detection and measurement of rater effects. During her career, Myford has held assessment- and measurement-related positions in government, business, and industry. Prior to coming to UIC, Myford was a senior research scientist at the Educational Testing Service (ETS).
In 2009, she was named a Fulbright Specialist. Working through the Fulbright Scholar Program, Myford consults with institutions overseas that are facing assessment and measurement challenges. She has authored and co-authored numerous publications, and serves as an advisory editor for the Journal of Educational Measurement and as a member of the Editorial Board for the Journal of Applied Measurement.
1989 - PhD, University of Chicago, Measurement, Evaluation, Statistical Analysis
1980 - MEd, University of Georgia, Educational Psychology
1973 - BA, Hiram College, Psychology/Music Education
Research & Teaching Interests
Myford's program of research focuses on scoring issues in performance and portfolio assessments. She has conducted studies related to training raters, designing scoring rubrics, quality control monitoring, improving rater performance, detecting different types of rater errors, devising statistical indicators of rater drift, and understanding rater cognitive processes that underlie unusual or discrepant rating patterns. Myford has devised rating scales to evaluate complex performances and products and has analyzed rating data using many-facet Rasch measurement models. Her work blends qualitative and quantitative approaches to examining rating processes, illustrating how the interplay of statistical and qualitative analyses can help one develop, monitor, and continually improve large-scale performance and portfolio assessment systems.
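As a sketch of the kind of model this work draws on, the many-facet Rasch measurement model in its common rating-scale form can be written as follows (the notation here is the standard textbook formulation, not taken from any specific paper of Myford's):

```latex
% Many-facet Rasch model (rating-scale form):
% the log-odds of examinee n receiving category k rather than
% category k-1 from rater j on task i is modeled as
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right)
  = \theta_n - \delta_i - \alpha_j - \tau_k
% where
%   \theta_n  = ability of examinee n
%   \delta_i  = difficulty of task (or item) i
%   \alpha_j  = severity of rater j
%   \tau_k    = threshold between rating categories k-1 and k
```

Because rater severity enters the model as its own facet, estimates of each rater's leniency or harshness can be separated from examinee ability and task difficulty, which is what makes this family of models useful for the quality-control and rater-monitoring work described above.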
Winke, P., Gass, S., & Myford, C. M. (2013). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing, 30(2), 231-252.
Boyle, M., & Myford, C. M. (2013). Pharmacists’ expectations for entry-level practitioner competency. American Journal of Pharmaceutical Education, 77(1), Article 5.
Esfandiari, R., & Myford, C. M. (2013). Severity differences among self-assessors, peer-assessors, and teacher assessors rating EFL essays. Assessing Writing, 18(2), 111-131.
Till, H., Myford, C. M., & Dowell, J. (2013). Improving student selection using multiple mini-interviews with multifaceted Rasch modeling. Academic Medicine, 88(2), 1-8.
Myford, C. M. (2012). Rater cognition research: Some possible directions for the future. Educational Measurement: Issues and Practice, 31(3), 48-49.
Engelhard, G., & Myford, C. M. (2009). Comparison of single- and double-assessor scoring designs for the assessment of accomplished teaching. Journal of Applied Measurement, 10(1), 52-69.
Houston, J. E., & Myford, C. M. (2009). Judges' perception of candidates' organization and communication, in relation to oral certification examination ratings. Academic Medicine, 84(11), 1603-1609.
Myford, C. M., & Wolfe, E. W. (2009). Monitoring rater performance over time: A framework for detecting differential accuracy and differential scale category use. Journal of Educational Measurement, 46(4), 371-389.
Iramaneerat, C., Myford, C. M., Yudkowsky, R., & Lowenstein, T. (2008). Evaluating the effectiveness of rating instruments for a communication skills assessment of medical residents. Advances in Health Sciences Education. [Published online November 5, 2008.]
Iramaneerat, C., Yudkowsky, R., Myford, C. M., & Downing, S. (2008). Quality control of an OSCE using generalizability theory and many-faceted Rasch measurement. Advances in Health Sciences Education, 13(4), 479-493.
Honors & Awards
2009 - Fulbright Senior Specialist, Council for International Exchange of Scholars, Institute of International Education
2006 - Teaching Recognition Award, University of Illinois at Chicago, Council for Excellence in Teaching and Learning
1995-1996 - ETS Scientist Award
Professional Affiliations & Activities
National Council on Measurement in Education
American Educational Research Association
Rasch Measurement SIG (Special Interest Group) of AERA
Journal of Applied Measurement, member of the Editorial Review Board
Member, Technical Advisory Committee, Renaissance 2010 Initiative, Chicago Public Schools
Consultant, The American Board of Orthopaedic Surgery
American Evaluation Association
Member, Technical Advisory Committee, ACCESS for ELLs (English Language Learners) Testing Program, WIDA Consortium, University of Wisconsin
Member, Technical Advisory Committee, Gwinnett County Public Schools Assessment Program, Suwanee, Georgia
Consultant, The Accreditation Council for Graduate Medical Education
Consultant, Higher School of Economics, Program in Psychometrics, Moscow, Russia
360 Assessment in the Urban Elementary Classroom I
- EPSY 255
- Junior standing or above
- Admission to the Bachelor of Arts in Urban Education, Concentration in Elementary Education.
Beginning concepts in the design, administration, and scoring of assessments useful in urban elementary classrooms for measuring different types of learning outcomes, from simple to complex. The focus will be on achievement assessments. 2 hours. Extensive computer use required. Thirty hours of fieldwork required.
507 Approaches to Analyzing Rating Data
- ED 503 or the equivalent; or consent of the instructor.
- Recommended background: EPSY 504, EPSY 505, EPSY 506, EPSY 512, EPSY 546, and EPSY 547
An introduction to various statistical approaches for detecting rater effects and monitoring rater performance. Extensive computer use required.
553 Assessment for Teachers
EPSY 421 and EPSY 422; or consent of the instructor.
Plan, construct, administer, score, and report on classroom assessments that measure a wide variety of learning outcomes, from simple to complex; select and use standardized achievement tests; develop defensible grading procedures.
560 Educational Program Evaluation
EPSY 503; or consent of the instructor.
An introduction to concepts, approaches, techniques, and practices of educational program evaluation. Students work toward acquiring knowledge and skills to plan and conduct evaluations of programs, projects, curriculum and institutions.
561 Assessment for Measurement Professionals
ED 421 and ED 422; or consent of the instructor.
Plan, construct, administer, score, and report on classroom assessments; select and use standardized achievement tests; develop defensible grading procedures; measurement issues in classroom assessment; validity and reliability of classroom assessments.
562 Large-Scale Testing
- EPSY 501 or the equivalent; or consent of the instructor.
- Recommended background: EPSY 503 or EPSY 553 or EPSY 561. Prior experience in designing, administering, scoring, and/or reporting on large-scale tests.
An introduction to large-scale assessments, including planning, constructing, administering, scoring, and reporting on large-scale tests.