While the media and tech landscapes are changing rapidly, requiring nimble and adaptive business practices and new models, the larger social sector funding landscape can be stuck in a model built for a previous age: planning out discrete interventions and then collecting metrics to examine how many people were touched by them. And media funders, despite often being at the cutting edge of impact evaluation, still struggle with these tensions.

For the past six years, Media Impact Funders has been following and reporting on trends in the media impact landscape. As funders began to emphasize the value of tracking media impact, they invested in systems that made collecting and analyzing data easier for grantees. A few years ago, we saw a big push toward developing dashboards, many of which we have cataloged in our database of impact tools. But some grantees, especially those with limited staff capacity and resources, can find funder-required impact tracking onerous, particularly when it comes to applying metrics to experimental new media projects for which benchmarks don’t exist.

But there’s a new impact approach that’s picking up momentum, one that relies less on advance planning and traditional metrics collection and more on rapid experimentation.

This approach is laid out in the new book, Lean Impact, in which author Ann Mei Chang, formerly the chief innovation officer at USAID, argues that “we need nothing short of a revolution in how social good is funded.” She writes: “Rather than adopting an investor’s mindset to take calculated risks in pursuit of growth and impact, philanthropic investments tend to be risk averse and prefer to deploy well-known interventions over and over again.”

Unfortunately, that’s true: Foundation funding structures and habits are often at odds with what is required for long-term growth and true social impact. Grants often support short-term projects and focus on short-term instead of long-term impact indicators. But projects designed to achieve social good need the ability to iterate and continuously evolve based on feedback, and to pivot when new solutions are warranted. Having to plan out projects in advance, adhere to rigorous reporting requirements and seek approval from funders to make adjustments stifles innovation. When funders, with the best of intentions, require grantees to comply with a particular evidence-based program model, they may actually end up thwarting impact by leaving no room for continuous adaptation and improvement.

So what is lean impact?

The model is adapted from the methodology laid out in the bestselling Lean Startup, a movement among entrepreneurs around the world to get products to market faster, with more innovation. Much like Lean Startup, Lean Impact’s goal is to “find the most efficient path to deliver the greatest social benefit at the largest possible scale.” And its orientation is an adaptive one, centered on learning rather than compliance. The Lean Impact Model includes the following practices drawn from the Lean Startup Methodology:

  1. Identify assumptions: What are the assumptions underlying the project? Often, in the case of media projects, it is that people will interact with a specific piece of media and then be provoked to change their behavior. But change isn’t usually so simple. Identifying assumptions means considering what could go right—and what could go wrong—at the outset.
  2. Build a Minimum Viable Product (MVP): The concept of a minimum viable product is frequently used in software development to refer to an early version of a product with the bare minimum of features to ensure use and collect data on how to improve in future iterations. Releasing an MVP will allow grantees to test riskier assumptions earlier in the process. What could this look like in nonprofit media? Maybe it’s releasing pieces of a long-form documentary as a web series first and gathering audience feedback to inform the longer piece. Maybe it’s holding off on some aspects of a project, such as a social media campaign, until after validating the assumption that social media is the most useful way to reach a particular target audience.
  3. Use validated learning: This means gathering the data from the release of the MVP to figure out what’s working and what’s not.
  4. Build, measure, learn: This is a cycle of continuous iteration. It entails continuously building upon the project, measuring results, learning from findings, and making improvements.
  5. Pivot or persevere: This means that if something isn’t working, it’s time to either pivot and try a new solution or persevere by continuing with more experiments.

Chang suggests that this model is in sharp contrast to the traditional funding model, in which the needs of users and the priorities of funders are not always in perfect alignment, and grantees reporting to multiple funders can end up spending inordinate amounts of time trying to collect and present data to satisfy many different, and sometimes competing, priorities.

Grantees are “typically expected to articulate compelling answers in detailed proposals that imply more confidence than is warranted,” Chang writes. Programs are “confined to a predetermined timeframe and budget, rather than being designed to persist and expand over time.” This makes success difficult to achieve, and transformational impact nearly impossible. And let’s be honest: In such an uncertain environment, no one really knows what the best solution is. Experimentation is desperately needed.

What’s more, much of the data that funders ask grantees to collect is not directly related to actual impact. Traditional metrics such as dollars raised or number of people reached rarely tell the whole story, as they only “reflect activity and don’t equate with having made a substantive difference.” Chang notes these “vanity metrics” have “spread through the social sector like a communicable disease.”

For example, imagine a million people see a documentary film about global warming. The filmmakers and their funders celebrate: They achieved their goal of bringing a particular message to a large number of people. But imagine that, in this instance, everyone who saw the film walked away feeling helpless, convinced that global warming was an insurmountable issue, and took no further action. Now, imagine only 10 people saw the same documentary, but those 10 people happened to be U.S. senators who had previously resisted addressing global warming, and after seeing the film they immediately worked to craft legislation to address the topic. Which scenario has greater impact?

We’re not saying reach is an entirely useless metric for communications projects designed to amplify and spread a message, but it is only one of several dimensions to consider when assessing impact. A traditional funder evaluating a grantee, however, may focus too much on reach instead of the bigger impact story: lessons learned, sustainable growth and more. “In the absence of any data on the costs entailed and ensuing impact achieved, [metrics of reach] give no indication of whether an intervention is working or better than another alternative,” Chang writes.

So what does this mean for media funders and their grantees?

While Chang is focused on the overall social sector, we think many of these ideas can be applied to media funders. Here are 10 tips for funders who want to move toward a lean impact approach:

  1. Fund research and experimentation. Funding iteration over time rather than one-off projects is a big step forward. But this requires a massive mindset shift for both funders and organizations: looking at the size of the need rather than anticipated progress within existing constraints.
  2. Fund general operating costs. As Chang puts it, funders need to fundamentally reimagine their relationship with grantees, moving from “suspicion and micromanagement to one of trust and reward.” This means funding long-term, general operating costs rather than one-off projects. As MIF board member Molly de Aguiar and MIF consultant Jessica Clark wrote in their open letter to media funders, “patient, long-term support is rare.”
  3. Move more quickly. Clark and de Aguiar also make this point: Foundation decisions take a very long time. This is especially frustrating for media entrepreneurs who may come up with an innovative idea, apply for an innovation grant, and then wait months, or even a year, to find out whether they’ve received modest funding. By the time these media entrepreneurs receive pilot funding, it’s likely they’ve moved on, or the media landscape has changed so much in the meantime that their original idea is no longer relevant.
  4. Fund the middle. It’s encouraging that media funders are putting more weight behind hackathons, innovation funds, and prototypes, but there is little support for the next phase of iteration: testing assumptions, refining the project, and so on. When foundation interests shift after the initial pilot, it’s unlikely any media intervention will achieve lasting change.
  5. Fund evaluation. Onerous reporting requirements take away from grantees’ ability to do the actual work. If funders require detailed evaluation, they should fund it and/or provide an external evaluator. Note that while rigorous academic research has a place in evaluating changes in large systems over time, it relies on a different timeline that is often at odds with rapid experimentation, identifying new solutions, and pivoting. Developmental Evaluation and Participatory Action Research are examples of models that may be more in line with Lean Impact.
  6. Encourage iterative program design. Don’t wait for a year-end grant report. Check in with grantees at regular intervals and ask them: What’s working? What experiments have you tried? How can you pivot? Ask grantees to share their assumptions at the outset and test them in the process. Encourage pivoting when something doesn’t work.
  7. Encourage innovation metrics. Instead of asking grantees to track numbers in a vacuum, funders can encourage innovation grantees to consider innovation metrics instead: What’s changing? What action is being taken? What’s the rate of adoption and engagement? What’s the actual value of the project? How can the program be continuously tested and improved through feedback and experimentation? Ask for lessons learned, including failures as well as successes.
  8. Fund the infrastructure for long-term collaborations among media grantees. Rather than funding isolated projects, take a look at cohorts of grantees and assess systems-level change. Funders can also put funding behind developing shared resources and leveraging them across cohorts of grantees rather than encouraging grantees to reinvent the wheel. This may include considering shared reporting requirements and evaluations by getting together with other funders of the same project to converge on shared metrics or co-fund evaluation.
  9. Fund hybrid structures. Much of the innovative work in media is happening outside of the nonprofit landscape. Consider funding partnerships between nonprofits and innovative for-profit media organizations that advance a larger field.
  10. Know when to let go. One potential pitfall of this model is that funding the same organizations over a long period of time may end up shutting out other worthy organizations that are less established or don’t have an existing relationship with the foundation, including community organizations that may be making real change on the ground. There are instances in which such organizations are overlooked as funding keeps flowing to larger organizations making only incremental changes. In these instances, it’s important for funders to take a step back, assess whether real change is actually happening, and, if not, consider whether there are other places where their dollars could make a more meaningful impact.
About the Author
Katie Donnelly

Research Consultant

Katie is a research consultant for Media Impact Funders and associate director for media strategy and production firm Dot Connector Studio. She formerly served as associate research director at American University’s Center for Social Media (now the Center for Media and Social Impact), and as senior research associate at the University of Rhode Island’s Media Education Lab. Katie has led impact evaluations for many media organizations including PBS, Working Films, and the National Association for Latino Independent Producers. She has conducted extensive impact research, particularly on the power of documentary film, and has written about the power of media to make change for numerous academic and journalistic publications. Katie has created many educational toolkits that use media to dig into social issues, including curricula addressing youth and gender, substance abuse, and gender-based violence.