Child Care Data Analysis Leads to Policy Changes

Early child care is a largely unregulated field in Chicago. While early childhood center teaching positions now require state licensure, in-home child care and day care providers operate free of any educational preparation requirements.

That gap means the way the city assesses the quality of child day care providers is a key barometer for building a level picture of what quality and success look like in day care. And that’s where College of Education alumnus Ken Fujimoto, PhD Educational Psychology, steps in.

Fujimoto, assistant professor at the Loyola University Chicago School of Education, was part of a team as a UIC doctoral student that assessed an instrument used across the nation for high-stakes evaluations of day care centers. His investigation determined that the scale in use could generate inaccurate scores and should not be used for funding decisions that can have major impacts on urban communities.

“That work actually influenced some of Mayor Rahm Emanuel’s decisions on how to set up a child tracking system in the City of Chicago,” Fujimoto said. “It’s helping shape the system going forward for kids in Chicago.”

Fujimoto’s assessment came about after the day-to-day practitioners running child day care centers raised numerous objections to how the scale was rating their efforts. Fujimoto’s work provided statistical verification to back up the practitioners’ qualitative critiques. The scale had been in place for 30 to 40 years and rated quality from 1 to 7, with five representing decent quality and seven representing high quality; Fujimoto’s research determined that the highest ratings earned did not necessarily reflect the highest quality.

His work as a doctoral student and as a professor at Loyola builds on his research interest in item response theory, a measurement framework that treats the difficulty of each prompt as information to be incorporated into the rating scale itself. Item response theory is frequently applied to analyze testing data. In this application, the probability of a correct answer is modeled as a function of person parameters (such as a test-taker’s ability) and item parameters (such as a question’s difficulty, how steeply the chance of success varies with ability, and, in some models, the chance of answering correctly by guessing).
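The article does not say which item response model Fujimoto used. As an illustration only, the relationship between person and item parameters described above can be sketched with the common three-parameter logistic (3PL) model; all parameter values below are made up for the example.

```python
import math

def irt_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) item response model.

    theta: person ability
    a: item discrimination (how steeply success probability rises with ability)
    b: item difficulty
    c: pseudo-guessing parameter (chance of a correct answer by guessing)
    Returns the probability of a correct response to this item.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability matches difficulty (theta == b), the probability sits
# midway between the guessing floor c and 1: here 0.2 + 0.8/2 = 0.6.
p_mid = irt_3pl(theta=0.0, a=1.5, b=0.0, c=0.2)  # 0.6
```

Because each item carries its own difficulty and discrimination, two respondents with the same raw score can have different estimated abilities, which is the kind of nuance a flat 1-to-7 rating scale can miss.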

“We do not realize how many assumptions we make and how easily they can alter or lead to misleading conclusions,” Fujimoto said. “That’s why I have focused on working on these methods, seeking to overcome limitations by relaxing those assumptions.”

At Loyola, Fujimoto teaches courses on statistics, linear modeling and item response theory. His ongoing research examines how clustered data affects item response theory and explores multiple-response models.