The Overmeasurement Fallacy, or Why We Shouldn't Measure That Much

In my psychology studies, I had three stats courses and one experimental psychology course. That knowledge was underrated then but super useful today.

I learned the difference between qualitative and quantitative research, hypotheses, confirmation bias, well-chosen variables, and experiment design. And the key idea: we don't measure everything "just in case," hoping to stumble upon something brilliantly new.

Confession time: I did this too for years after graduation. I collected tons of unused data, hoping to dig up insights. The insights were there, but nothing came of them, because finding them was never the research goal.

Why does this matter for business metrics?

No matter which metric we pick: flow, delivery, engineering, product, survey-based, health checks, maturity, etc., it's good to remember that metrics exist to verify hypotheses, not to prove a point or to scan casually.
The clearer your hypothesis, the easier it is to create the measurements.

It's so tempting to add "just a few more" things to check.

To resist it, it helps to know what it usually sounds like:

— "Just want a broad view. I'll see the big picture first and then decide"
— "No specific hypothesis, but wanna learn about the team... so everything."
— "Need these metrics eventually anyway, let’s start collecting all of them"
— "Saw cool new metrics, let's try!"
— "It takes too much time to talk to people, do a survey instead, and check all I need."

Why is it worth resisting the urge and avoiding extra data gathering?

  1. It forces clear goals and hypotheses upfront
    Not "collect everything first, figure it out later"
    Avoids cherry-picking, missing data bias, and other traps

  2. We seek data for specific reasons
    Shiny alarming numbers may seem important, but if it wasn't your main worry, why jump?
    Attention shifts to minor "alarms" instead of real issues

  3. Don't waste time and energy on unused data
    Especially when asking people to fill out surveys. If they see no action taken, their trust drops, and next time the pushback will be "What's the point?"

  4. Metrics can't replace real talk
    We end up with second-hand info and interpretation errors

Want some actionable advice?

  1. State research goals first (What problem are we solving?)

  2. Define the key variables (What affects the goal?)

  3. Pick how to measure them (metrics + method)

  4. Try the easiest, least energy-consuming way first
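The four steps above can be sketched as a tiny guard structure. Everything here (the `ResearchPlan` class, its fields, the example goal and metrics) is a hypothetical illustration of the hypothesis-first discipline, not a real tool:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchPlan:
    """A hypothesis-first measurement plan: every metric must trace back to the goal."""
    goal: str                  # 1. What problem are we solving?
    variables: list[str]       # 2. What affects the goal?
    metrics: dict[str, str] = field(default_factory=dict)  # 3. variable -> how we measure it

    def add_metric(self, variable: str, method: str) -> None:
        # Refuse metrics that don't map to a declared variable:
        # this is the guard against "just a few more" measurements.
        if variable not in self.variables:
            raise ValueError(f"No hypothesis links '{variable}' to the goal; don't collect it.")
        self.metrics[variable] = method

# Hypothetical example plan
plan = ResearchPlan(
    goal="Releases slow down because review queues grow",
    variables=["review wait time", "PR size"],
)
plan.add_metric("review wait time", "median hours from PR open to first review")
plan.add_metric("PR size", "lines changed per PR")
# plan.add_metric("story points", "velocity")  # would raise: no hypothesis behind it
```

The point of the guard is step 4: the cheapest way to avoid wasted measurement energy is to make it structurally impossible to collect data that no hypothesis asked for.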

Less noise, more insight. Science done right.
