
Common pitfalls when using IRIS indicators (and how to avoid them)

IRIS indicators serve many functions as impact investment metrics, principally letting you know how well you are serving your beneficiaries.
Category: Standards
Written by: Unmesh Sheth
Published on: June 26, 2018

IRIS indicators are standardized metrics for assessing the social, environmental, and financial impacts of investments, developed by the Global Impact Investing Network (GIIN).

Understanding IRIS Indicators

IRIS indicators are a powerful tool used to measure and track the performance of social impact investments. They provide a standardized set of metrics that allow investors to evaluate their investments' social and environmental impact. IRIS indicators cover many impact areas, including education, health, and economic empowerment.

Common Pitfalls When Using IRIS Indicators

Despite their many benefits, using IRIS indicators can be challenging. Below are some common pitfalls that users often encounter when working with IRIS indicators:

  • Misinterpreting Metrics: One of the most significant challenges of working with IRIS indicators is the risk of misinterpreting the metrics. IRIS indicators cover a wide range of impact areas, and it can be easy to misinterpret the meaning of specific metrics or compare them inappropriately.
  • Data Quality Issues: Data quality issues are another common pitfall when using IRIS indicators. Users must ensure that the data they use to measure impact is accurate and up-to-date. Failure to do so can lead to inaccurate measurements and misinformed decisions.
  • Limited Availability: Finally, IRIS indicators are not yet widely adopted, which means that limited data may be available for certain impact areas. This can make it challenging for investors to measure and track their investments' impact accurately.
  • Misalignment of outcomes:
    It may seem trivial, but it’s worth looking at how we define impact metrics to understand how we often fall short in establishing the right ones for our programs. Impact metrics form a defined system or standard of measurement to track the progress of change by your organization. In the impact space, there are standard metrics and custom metrics. Standards are written by research and evaluation organizations and generally exist around focus areas or organization types.

    For example, the IRIS Metrics Catalogue is a vast database of metrics often used by impact investors and their investees.
    A common pitfall in the social sector regarding IRIS indicators is that investors frequently demand these metrics from their investees even when the indicators do not provide a relevant assessment of the outcomes sought by that change-making organization.

Sometimes, custom metrics (created by an organization and designed around its own use case) are necessary.

  • Mistaking IRIS Indicators for impact indicators:

Social impact metrics fail when we mistake the metrics themselves for the change we seek to create. Consider a famous example from the social impact sector: PlayPump. The idea was ingenious: a merry-go-round that pumped underground water as it spun, so the more local children played on it, the more water it pumped.

Hailed as an ingenious solution to the lack of access to quality drinking water, it received millions in backing and enjoyed public praise from global leaders. That praise was short-lived. The failures of PlayPump have been well-documented (what if there aren’t enough children to play or they don’t want to?) – as have the organization’s laudable efforts to learn from their mistakes.

Let’s look at the possible indicators that could have led to the continued implementation of the PlayPump even if it was not improving the lives of its targeted marginalized communities.

Disclaimer: This is a hypothetical breakdown of metrics and does not mean that PlayPump used these metrics and/or failed because of it.

Metric 1: # of communities where PlayPumps are installed

Metric 2: % of children using PlayPumps

Metric 3: Liters of water pumped

At first glance, these seem like positive things to track the impactful progress of PlayPump in the rural communities where it aimed to do good. PlayPump should probably track all of these metrics. But they do not communicate impact.

They are outputs that communicate the implementation of a product, not whether that product positively affects the lives of the target beneficiaries.

However enticing it may be to say that 500 pumps have been installed and a million liters of water pumped, as social impact metrics, these three would fail.

Many IRIS metrics fall into this same trap, encouraging us to count outputs while overlooking measures that would help us assess the actual progress of change.

Outcome Metrics

Outcome metrics more accurately tell us whether we are having an impact or not as they describe the intended medium-term consequences of a program. They are the second level of results associated with a project (after outputs like those mentioned above) and more closely relate to the project goal or aim. Here are some impact metrics that might be more relevant to the social impact PlayPump seeks to generate:

Outcome metric #1: Water-borne illness rate

Outcome metric #2: School attendance rate

Outcome metric #3: Mortality rate

These metrics home in on the change that is potentially occurring in the communities where PlayPump has a presence, change that can more readily be attributed to the introduction of the pump. Furthermore, they can (and should) be measured against a baseline to understand the pump's comparative value over time.
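To make the baseline comparison concrete, here is a minimal sketch in Python. All metric names, figures, and the improvement directions are invented for illustration; they are not PlayPump's actual data or any IRIS tooling:

```python
# Hypothetical tracking of outcome metrics against a pre-intervention
# baseline. Every number here is invented for illustration.

baseline = {
    "waterborne_illness_rate": 0.24,  # share of population affected per year
    "school_attendance_rate": 0.61,
    "mortality_rate": 0.012,
}

# Follow-up measurements taken after the pumps were installed (invented)
follow_up = {
    "waterborne_illness_rate": 0.21,
    "school_attendance_rate": 0.63,
    "mortality_rate": 0.012,
}

# For each metric, whether an increase (+1) or a decrease (-1) counts
# as improvement -- a lower illness rate is good, higher attendance is good
good_direction = {
    "waterborne_illness_rate": -1,
    "school_attendance_rate": +1,
    "mortality_rate": -1,
}

def change_vs_baseline(baseline, current):
    """Absolute change of each outcome metric from its baseline value."""
    return {name: round(current[name] - baseline[name], 4) for name in baseline}

deltas = change_vs_baseline(baseline, follow_up)
for metric, delta in deltas.items():
    if delta == 0:
        verdict = "unchanged"
    elif delta * good_direction[metric] > 0:
        verdict = "improved"
    else:
        verdict = "worsened"
    print(f"{metric}: {delta:+.4f} ({verdict})")
```

The point of the sketch is the structure, not the numbers: outcome metrics only become meaningful when each measurement is compared against a baseline, with an explicit view of which direction counts as improvement.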

How to Avoid Common Pitfalls When Using IRIS Indicators

To avoid these common pitfalls when using IRIS indicators, we recommend taking the following steps:

  • Familiarize Yourself with the Metrics: 
    Before using IRIS indicators, take the time to familiarize yourself with the metrics and their definitions. This will help you avoid misinterpreting the metrics and ensure that you are comparing them appropriately.
  • Ensure Data Quality: 
    To ensure data quality, it is essential to establish robust data collection and management processes. This may include investing in data quality tools, conducting regular data audits, and establishing clear data governance policies.
  • Consider Alternative Metrics:
    If limited data is available for certain impact areas, consider using alternative metrics to measure impact. For example, you may be able to use proxy metrics or qualitative data to gain insights into the impact of your investments.
  • Begin with the theory of change and align with IRIS+ only if it makes sense:
    It’s worth reiterating that the initial failures of PlayPump were more design-based (they could have benefited from more profound engagement with beneficiaries from the outset). Still, the point here is that if they had focused on measuring the project's desired outcomes, they might have discovered sooner that the pumps were not having the desired positive effect on the lives of the beneficiaries. A course correction at an earlier stage might have been possible.

So how might those in the impact sector establish the right metrics from the outset? We can first go through a step-by-step process to connect our mission, vision, and program structure to a set of metrics relevant to a specific program.

And in the process, we should clearly understand the differences between outputs vs. outcomes (A Theory of Change model would help). Then, by holding ourselves accountable to various metrics, we can better understand where our true impact lies.

Finding the right balance with IRIS Indicators 

Even when a funder demands that we use IRIS metrics, effective use of those indicators must still include assessments built on the outcome-oriented approach.

Considering the discussion above, we can negotiate how to combine IRIS indicators with custom or other standard metrics, or define our own social impact metrics framework.

For a primer on ensuring that your impact metrics genuinely serve you and your beneficiaries, check out Volumes I & II of our Actionable Impact Management guides!
