Why is measuring the S in ESG still so hard?
There is a thermometer on the wall of nearly every commercial building in America. It tells you exactly what the temperature is, to the degree. You can argue about whether 68 is the right number, but you cannot argue about what the number is. The measurement is clean, reliable, and replicable.
Now try measuring whether the people in that building feel like they belong there.
That's not a rhetorical provocation. It's the actual challenge at the center of social sustainability, and it explains — more honestly than most people will say out loud — why the S in ESG keeps getting treated as the complicated cousin no one quite knows how to seat at the table.
The category error hiding in plain sight
For the better part of a decade, the sustainability field has tried to measure social impact using the same tools it developed for environmental performance. On the surface, this seems reasonable: both involve data, both involve targets, both involve third-party verification.
But things and people are not the same kind of subject.
When you measure a building's carbon footprint, you're working in physical science. The variables are knowable, the relationships stable, and the studies replicable because the underlying reality doesn't change much year to year. A kilogram of CO2 is a kilogram of CO2 in 2015 and in 2025.
Social science doesn't work that way. People change. The conditions that shape what a particular intervention means to a particular group of people shift constantly: across time, across cultures, and sometimes across a single city block. A program that reduces economic displacement in one neighborhood might have no effect, or even a negative one, two miles away.
When you apply physical-science thinking to social questions, you get proxy metrics. You count diverse vendors and believe it signals inclusion. You tally volunteer hours and believe it signals community investment. None of those numbers are lies. But a proxy is an informed guess about a relationship between activity and outcome that hasn't been proven yet. Treated as an outcome, it's a placeholder pretending to be an answer.
This is the honest reason the S remains hard to measure. Not because the people working on it aren't trying. But because the field largely inherited a measurement philosophy built for a different kind of problem.
Taking a different approach
When SEAM developed its framework, the organization knew it needed an approach rooted in social science methodology. The logic model — developed in social program evaluation and international development — offered a proven way to measure human outcomes rigorously.
A logic model maps the chain of causation between what you do and what actually changes. It has five stages:
1. **Inputs:** The resources you commit
2. **Activities:** What you do with them
3. **Outputs:** Direct, countable products (e.g., number of community participants, percentage of contracts awarded to diverse suppliers)
4. **Outcomes:** Changes in conditions resulting from activities (e.g., reduced displacement, improved economic mobility, stronger community trust)
5. **Impact:** The portion of changes actually attributable to your project, rather than what would have happened anyway
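The five stages can be sketched as a simple record type. This is purely illustrative: the field names and example values below are hypothetical, not part of the SEAM Standard's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    # Illustrative sketch of the five-stage chain; all names are hypothetical.
    inputs: list[str]                 # resources committed
    activities: list[str]             # what is done with them
    outputs: dict[str, float]         # direct, countable products
    outcomes: dict[str, float]        # measured changes in conditions
    impact: dict[str, float] = field(default_factory=dict)  # attributable portion only

# A hypothetical certified project, recorded stage by stage
project = LogicModel(
    inputs=["$250k community fund", "2 FTE staff"],
    activities=["job-training cohort", "local-vendor outreach"],
    outputs={"participants": 120.0, "diverse_contract_pct": 0.18},
    outcomes={"employment_rate_change": 0.14},
)
# The impact field stays empty until attribution analysis fills it in.
```

Note that outputs and outcomes live in separate fields by design: conflating the two is exactly the proxy-metric mistake described above.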
That last distinction separates substantive impact measurement from preliminary reporting. Claiming positive outcomes is straightforward. Attributing them honestly requires isolating your contribution, accounting for baseline conditions, and specifying precisely how much change you can claim credit for. That is the only rigorous version of impact measurement.
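One standard social-science way to isolate a project's contribution from the baseline trend is a difference-in-differences comparison. This is a minimal sketch with made-up numbers, not a method the SEAM framework itself prescribes.

```python
def difference_in_differences(treat_before: float, treat_after: float,
                              control_before: float, control_after: float) -> float:
    """Attributable change: the treated group's change minus the control
    group's change, i.e. minus what would have happened anyway."""
    return (treat_after - treat_before) - (control_after - control_before)

# Hypothetical median household income: project neighborhood vs. a comparison area
impact = difference_in_differences(42_000, 46_500, 41_500, 43_000)
# The project can honestly claim 3,000 of the raw 4,500 change;
# the remaining 1,500 is baseline drift that happened in the control area too.
```

The point is the subtraction itself: without a counterfactual, the raw 4,500 would be reported as impact when a third of it isn't.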
Where we are, and where we're going
SEAM acknowledges current limitations: the field cannot yet guarantee specific, quantifiable outcomes for every context, because social science does not yet have the comprehensive evidentiary datasets that kind of certainty would require.
What SEAM Certification does accomplish is establish foundations. Every certified project generates structured, comparable data. Applications document inputs, activities, and outputs in standardized formats, allowing correlations across projects and building the evidence base for future precision.
This mirrors environmental certification's development trajectory — the field didn't arrive with perfect evidence initially but accumulated data project-by-project over years.
The S in ESG is hard because it should be. We're trying to measure what happens to people, and people deserve that level of care. If you're looking for a practical starting point, the ROSSI Calculator can help you quantify the financial return on social equity investment, and the SEAM Standard provides the full framework.