“Scale”—too often this is the bane of a grantee’s or evaluator’s existence when it comes to assessing the social impact of media projects. Why are funders so stuck on this concept, and what other impact models might matter more? As we’ve been reviewing materials we’ve gathered on media impact over the past several years, these questions have popped up time and again.
First, a note on the term itself. What does it mean for a media project or outlet to “scale”? According to Merriam-Webster, the definition has evolved in the context of business jargon, and now means to “grow or expand in a proportional and usually profitable way.” Two other terms are often bandied about without much context: “getting to scale” and “at scale.” Merriam-Webster is mute on these, but the less definitive yourdictionary.com defines “at scale” as “at the required size to solve the problem”—a meaning echoed by other online sources.
In the philanthropic context, these dueling definitions raise a few clear tensions. First, a longstanding debate: Should funders tasked with filling market gaps really be adopting commercial frameworks and goals for the projects they support? And second: How do we know when a project is big enough to solve a problem? This question is particularly difficult to answer given the proliferation of content and the fragmentation of media distribution channels.
In the case of evaluating the impact of media projects, “scale” tends to be equated with reach, which in turn relates to profitability. This corresponds to the advertiser-driven, mass media model that has increasingly come under threat over the past few decades. But measuring audience size does not always indicate whether a goal has been met. Of course, nobody wants to support media that reaches no one at all. And sometimes reach is central to social mission—if the goal is persuasion, amplification, or awareness. But all too often, reach metrics fail to capture the nuanced change funders and mission-driven makers are seeking.
This mismatch has implications not only for setting key performance indicators, but also for designing editorial and engagement strategy. Tracking audience is important and should not be ignored. But doing so is not the same as setting “scale” up as the be-all, end-all.
A few recent articles have explored these points. In April, Comment Magazine Editor-in-Chief Anne Snyder wrote a post for the Center for Effective Philanthropy titled “Does Love Scale?” In it, she suggests alternative success indicators, such as intentional pluralism, healthy relationships, and generativity. In March, the Democracy Fund’s Local News Lab republished a post by Damon Kiesow, Knight Chair at the Missouri School of Journalism, who writes, “To succeed, local media have to abandon scale and refocus on community. Advertising remains part of the equation, but reader revenue, donations, foundation funding — yard sales if necessary — are all in the mix.” The piece kicked off this Twitter debate among journalism funders and grantees.
Several tools and frameworks MIF has collected reveal the importance of other dimensions of impact for philanthropically supported media projects. For example, the Active Voice Lab Horticulture framework suggests that the impact of media projects might be assessed by how well they help movements to grow, or how successfully they transport audiences via narrative. The Impact Field Guide and Toolkit, produced by Doc Society, breaks down evaluation into four buckets with corresponding indicators and methods: changing minds, changing behaviors, building communities, and changing structures.
Our newly updated Impact Pack, which was developed with support from MIF, has evolved in concert with the discussion about media impact goals over the last few years. It also identifies outcomes other than scale that funders might do better to assess. These include:
- Political or legal change
- Individual behavior change
- Equity
- Empowerment
- Trust
- Adoption
- Internal capacity building
Learning to think about what constitutes success beyond scale opens up new possibilities. It also requires very different tools and methodologies. You will find many examples of these in the curated collections in the Impact Assessment section of the MIF website.