The Camera-Off Problem
A provider joins a professional development session. Camera off. When the facilitator asks a question, there’s a long pause, and then someone else’s voice answers. Meanwhile, other providers in the system are racking up 10+ college units per semester. No one is checking whether a single concept made it from the screen into the classroom.
I’ve seen this pattern enough times that I’ve started calling it the camera-off problem: a system that counts hours instead of measuring competency. The gap between what gets measured and what actually matters.
What I’m seeing in the field
Training hours and credentials accumulate on paper. They show up in licensing files and QRIS ratings. They satisfy regulatory requirements. But they don’t tell you whether a provider can hold a meaningful conversation with a four-year-old, or notice when a child is struggling before the behavior escalates.
The field lacks reliable, valid assessments of early educators’ knowledge, skills, and ability to implement research-based practices in day-to-day work. That measurement gap isn’t anyone’s fault, but it’s real, and it creates a space where compliance can look identical to competency.
What I’m seeing in the administration
San Francisco’s Department of Early Childhood is piloting quality assurance activities to monitor the programs it funds. This is a significant moment. When you’re investing public dollars in early care and education, the question of what you measure — and what you don’t — shapes everything downstream.
I’ve sat in DEC and QRIS meetings where the conversation gravitates toward additional training mandates. More hours, more units, more certificates. I understand the impulse. But I keep thinking about the provider with the camera off. We can add another 20 hours to the requirement. That provider will complete them. Will anything change?
What the research points toward
Some approaches seem to get closer to what matters:
Observation-based tools like the Classroom Assessment Scoring System (CLASS) and the Environment Rating Scales assess what actually happens between teachers and children: the back-and-forth dialogue, emotional responsiveness, depth of interaction. These tools are harder to implement and more expensive than tracking hours, but they measure practice, not paperwork.
Practice-based coaching (individualized, sustained over time, embedded in the actual work setting) has a stronger research base for changing educator behavior than traditional professional development. Some states have recruited coaches who speak educators’ home languages, which matters when providers are navigating professional development in their second language.
Reflective supervision shows up in programs that require genuine engagement: cameras on, active participation, a multi-session commitment. Programs like NAPA demonstrate what high-expectation professional development looks like in practice. The research suggests that the benefits require time and trust, and that adoption happens faster where trusting relationships already exist.
Leveled competency frameworks — like Michigan’s Bloom’s Taxonomy-based model or California’s 12-area ECE competency structure — try to distinguish between someone who attended a class and someone who can apply what they learned.
None of these are simple to implement. All of them cost more than counting hours.
The knowing-doing gap
In business strategy, there’s a concept called the knowing-doing gap — the distance between what people learn and what they actually implement. Rich Schefren, whose work on AI-powered business systems I’ve followed through his Zenith Mind program, frames it this way: most training produces knowledge without changing behavior. The real leverage isn’t in teaching more — it’s in building systems that close the gap at the point of practice.
Early childhood education has the same problem. Providers know the concepts. They’ve sat through the workshops. But knowing that responsive interactions matter and consistently doing them under the pressure of a 12-hour day with six children — those are two different things. Our professional development system measures the knowing. Almost nothing measures the doing. I explore why the system was built this way — and what it reveals — in a follow-up.
The question I’m sitting with
I don’t have a clean answer. But the question feels important:
Are we measuring what providers actually do with children — or what they did to satisfy a requirement?
If we’re honest, I think most of our current systems lean toward the second. That doesn’t make them useless. But it means we should be careful about assuming that more of the same will produce different results.
SF DEC’s quality assurance pilot is a chance to ask this question openly. I’m curious what others in the field are seeing — providers, coaches, administrators, funders. What does meaningful quality look like in your experience? What are we getting right? What are we missing?
I’d welcome the conversation.