EP 21: From Charity to Change: Mastering Effective Program Evaluation

May 4, 2023

Show Notes

How can you ensure your nonprofit is genuinely creating positive change rather than unintentionally causing harm? 

Nonprofit leaders should aim to “do no harm”, and evaluating programs is a crucial step in ensuring that nonprofits are making a positive impact instead of causing unintended negative consequences.
Unfortunately, it can be challenging to evaluate your programs effectively, especially with limited resources and expertise. 

It’s time to shift from charity to change.

In our latest episode of THRIVERS, Tucker and Sarah dive deep into the challenges of evaluation and how to overcome them. They discuss the importance of understanding the pains of evaluation, such as:

  • The lack of proper training in conducting evaluations
  • Balancing necessary resources
  • Making evaluations accessible for community-based nonprofits
  • Aligning good intentions with good science

They also highlight the importance of avoiding common pitfalls in program evaluation: 

  • It’s essential not to expand programs without evaluating the existing ones, and 
  • Always resist generalizing impact from one segment to another (e.g., assuming success with high schoolers translates to middle schoolers).

Join us for an insightful discussion on overcoming evaluation challenges and embracing practical advice on how to transform your nonprofit’s approach to evaluation.

Looking for ways to increase your impact in your communities and causes?
We’ve created a modular series of workshops focused on creating impact from the inside out. Explore details and schedule a discovery session at thriveimpact.org/insideout


Transcript

Tucker: Welcome to THRIVERS: Nonprofit Leadership for the Next Normal. I’m your host, Tucker Wannamaker, the CEO of THRIVE IMPACT, and our mission is to solve nonprofit leader burnout because burnout is the enemy of creating positive change, and that’s why we’re all here: to create positive change. We want to connect you with impactful, mission-driven leaders and ideas so that you can learn to thrive in today’s nonprofit landscape, because it’s a tough one out there. And, I’m joined here today, as usual, with my co-host, Sarah Fanslau, our Chief of Impact. Sarah, good to be on the show with you today.

Sarah: Great to be here.

Tucker: And I’m actually really excited to interview you because, you know, there are so many topics in the nonprofit world, and we are not experts on all of them, for sure. But one area where we do have pretty deep and clear expertise is through you, Sarah, in things like impact evaluation. Because like I said earlier in the opening, we’re not all here to just do nice things in the world. Hopefully not. We’re not here to just be charitable and nice. We are here to create positive change – like, actually create positive change. That’s why we’re here. In some ways, that’s why we’re judged, or how we’re judged, in terms of what makes us a great nonprofit. So, Sarah, first of all, before we get into the pains around evaluation – which is really our next normal topic – tell us a little bit about your background: what course you’re going through, some of the master’s degrees you already have, the master’s degree that you’re getting right now, and just some of your history around that. And then let’s hop into some of the pains right after that.
Sarah: Awesome, so I have a Master of Science already from the London School of Economics in Social Policy and Development Studies. But I’ve been interested in research and evaluation for a long time, as part of my undergraduate degree, which I got from Emerson College. I studied abroad in Brazil and conducted research looking at how women living in the northeast of Brazil – I was in a place called Fortaleza – understood themselves as citizens based on their access to resources and where they were located geographically in the city. So, I leveraged interviews and worked with the university there, the geography department. Then I went to LSE and got my master’s, and while working there and afterwards, I worked at The Young Foundation, a think tank and center for social innovation in London. My job there was as a researcher, and I worked with teams of folks to really look at the impact of social policies from the lens of people who were experiencing them. We used ethnography, which has been called “deep hanging out” – you literally go live somebody’s life with them for a while.

Tucker: And deep hanging out. I love that. That’s a technical term, right?
Sarah: It’s, you know, to get to really see, to observe and to talk with and see people’s lives. And so, I used interviews and surveys and focus groups and ethnography in that context, and published a number of papers for central and local government. And then I came back to the States and did work in the social determinants space and continued to do research. You know, while I was a program manager at Health Leads, I was working out of Bellevue Hospital here in New York City. And our work was around the social determinants of health, which is about breaking that link between poverty and poor health. And you know, we realized – this was like 12 years ago now – the social determinants were not as well known as they are now. And so part of what I did, working with the head of pediatrics and the chief resident at the time, was design a study – we got IRB, or institutional review board, approval to conduct it in the hospital environment – to better understand what physicians actually knew about psychosocial need, what they didn’t know about it, and what they understood about how to refer for it. And that really helped the organization as a whole have a set of data to really prove the need for their intervention and then to help us better design a program around it. And then I moved into the civic engagement space, which we’ve talked about, and continued to do evaluation work. So I have a lot of experience here, but you know, my goal a while ago was to go get my PhD, and I got accepted into a fully funded program and then had to decline it because I had a baby and a brain tumor at the same time. And you know, two benign growths at one time, just not enough time in the world. So, you know, I just recently was able to get back into academia via this master’s program that I’m doing at Claremont Graduate University, which is around evaluation and applied research. And, you know, it feels really great to be back there. I love going to school.
I know not everybody does, but that’s a little bit of my history here.
Tucker: Well, and I always do love how much energy you get, like after you go through one of the classes. I mean, not only is it obviously incredibly relevant to our work at THRIVE IMPACT and to the nonprofits that we work with in our community, but you just have so much energy around it, which I love. It’s like you get this big smile on your face, and that’s such an important piece. Well, you know, Sarah, you’ve obviously been in this space of impact evaluation for quite a while, and you’ve gone even deeper, of course, in this master’s program through Claremont. Let’s hop into some of the pains. What have you been noticing – and maybe you’ve noticed this not just in this last master’s program that you’re in, but even before – what are the pains or the issues that nonprofit leaders are really experiencing regarding evaluation of their impact?
Sarah: I mean, I think first and foremost, most nonprofits don’t have staff trained to do this work. And so because of that, many nonprofits just don’t do it. And 10 or 15 years ago, that was okay. It’s not anymore. Before, it was okay if people were just measuring outputs, which is like the number of kids or the number of hours and things like that.
And funders now, especially the bigger funders and the more serious foundations, are really asking people to go beyond outputs, to short and medium-term outcomes, and then potentially long-term outcomes or impact. And so the problem, especially for small community-based nonprofits, is that most of them don’t have staff with research or evaluation skills.
And then it takes time and money to hire external folks to do it. And so because of that, a lot of people just aren’t doing it. They just aren’t doing it or aren’t doing it well. And that certainly is a real challenge.
Tucker: It feels like almost an injustice on nonprofits: what we are judged by is not what we are given support for. Whether it’s professionalization – and we know this in our work around professional development and leadership development, where there’s a drastic lack of funding – or just the literal time and money to evaluate and understand what that means. And yet, that’s what we’re judged on. So it’s like this massive problem, it sounds like, inside of nonprofits, especially small community-based ones.
Sarah: For sure. And then, I mean, I think one of the things I really got a better sense of through Claremont is just, you know, the professionalization of the field of evaluation itself.
You know, one of the things that I really learned, especially last semester in my theory-driven evaluation course, was that evaluation is really booming as a field. And so, for example, in the late seventies, there were two professional evaluation societies, and in 2018, there were more than a hundred. More than a hundred.
So, like, there’s been a huge explosion in evaluation as a field and as a career field. And, you know, what’s happening as a result of that is that there are a lot of people doing it who, quite frankly, haven’t been trained to do it. And so that’s, I think, another pain that a lot of folks have out there. People put up a plaque that says, “I’m an evaluator. I do program evaluation or policy evaluation,” without really having been trained in what that means.
And there is a really specific science here, around what evaluation looks like, and there’s a number of different branches or approaches to evaluation that one can take. And, you know, many evaluators just, you know, haven’t received the formal education to understand that or to base their approaches on that science because they haven’t been able to get the education.
So, I think that’s another pain. And which isn’t to say that evaluators who haven’t been professionally trained are not bringing value. You know, just to be completely clear, I’m sure many do bring a lot of value, but for small community-based nonprofits who may have to rely on folks who are less expensive, there’s just a real variation in the field that is partially born out of the fact that it has so rapidly expanded and is still, you know, really in its infancy.
Tucker: Well, and Sarah, I want to dive just a little bit deeper into this pain. Because of these things – a lack of training, a lack of understanding of what positive change, or change, I should say, we’re having – what’s really happening inside of a nonprofit, underneath the surface? It’s like our favorite question, which we’ll ask later, around what’s made possible on the positive side. But what ultimately happens on the negative side because of these pains? What’s really happening here?
Sarah: Well, I think, you know, at the most basic level it means that a lot of programs are designed based on good intentions, but not good science, is what it comes down to. And so, you know, again, like I said, good intentions are nothing to laugh at. That’s really important that we care about what we’re doing. But fundamentally, so many nonprofits are not actually constructing their programs and the theory of change, which is really, you know, about how all of the program components work together in support of the ultimate outcomes or impact. Many of those are just not based on social science research or theory.
And so, for example, when we just finished or wrapped up a six-month program with the Pikes Peak Community Foundation where we developed a leadership collaborative and many other things, one of the things that I did as part of my graduate program was to create a theory of change for our leadership collaborative. And the theory of change is different from a logic model. A logic model, right, says how do the inputs and the activities line up with all of these other things? It’s a lower altitude, kind of connect-the-dots between the inputs, the activities, the outputs, and the impacts. A theory of change is at a higher level – it can still be a visual diagram, but it’s really representing the change that happens in the participants as a result of the program.
And so it’s really mostly focused on the short-term, medium-term, and longer-term outcomes. And so as I was doing the evaluation proposal for this class that I was in, I was really focused on that leadership collaborative. What that means is, I looked at the pieces of the program – and I’m going to take the initial outcomes as an example for a minute here. You know, part of what we wanted to do when folks first got in the program was to introduce them to concepts of leadership. To increase their awareness around leadership, which then, you know, we hoped or hypothesized would translate into a commitment to working on their leadership, which would then lead to an understanding of the core concepts of conscious leadership, which would then transition into other things. But all of these pieces around awareness of skills or awareness of concepts, I put together in a set of hypotheses, which I then took to the social science research, and I said: based on existing theories, is this plausible? Does it hold water? Does it make sense?
And so, for example, one of the things we were working on with folks in the collaborative was awareness of opportunities for internal and external leadership growth. So we had them take an assessment and then set goals. So from that, we wanted to have them work on goal setting and commitment, which would then lead to goal progress, which then was incorporated with goal reflection. And so I went to the social science research and I leveraged the work, the theory called cognitive dissonance theory, which is a theory that suggests that once we’re confronted with information about how or whether our actions align with our self-belief, we work to reduce dissonance, which drives goal commitment. Which is a fancy way of saying, for example, if I took that leadership survey and I thought that I was a really great self-reflective leader, but actually upon taking that survey I was like, you know what? I’m not so good here. That dissonance between what I thought and what was true would drive me to say, you know what? I want to get better at what I thought I was already good at, and then it would help make me commit to that goal. So that’s an example of how social science theory can help back up or give validity to the ways in which a program is suggesting it makes change.
Tucker: You know, Sarah, you’re hitting on a couple things here. One, going back to that piece around pains, which is speaking a little bit into what you talked about with our THRIVERS program, is something you say a lot, which is the Hippocratic Oath, do no harm. And if we don’t know, if we don’t have any form of data, which is difficult to find sometimes, and we’ll get into that here in a little bit, around what is that next normal and what are some practical steps for all who are listening. But if we don’t know, then we may very well be doing harm. I mean, there are definitely books out there like Toxic Charity is a good example of that, where good intentions not met with good science can actually lead to negative change for people when what we had set out to do was actually positive change. And so if we don’t have that objective, then that becomes an issue for us. The very thing that we set out to do, we’re doing the exact opposite. I know we thought about this actually, we’ve wrestled with this from our work at THRIVE of how do we create the conditions in a community that aren’t perpetuating shame and guilt inside of a nonprofit leader who already feels somewhat beat down? And if we just have the power dynamic of a quote-unquote expert coming in, and it’s not that experts are bad, I mean, I just said that you’re an expert in impact evaluation, but if we have workshops where all we have is an expert, but we don’t allow that to go in a more accessible way to the nonprofit leader, we may actually be perpetuating more guilt or shame. Like, “Oh look, well that expert has all their stuff figured out. Why don’t I?” and maybe perpetuating those cycles of reactivity. And it’s like, “Oh, we need to make sure that we are not actually perpetuating burnout versus preventing burnout.”
Sarah: Totally. And really, you know, the social science theories that our program design or methodology is based on, you know, is action or experiential learning, and that is in and of itself a theory that has been proven by research to be effective in helping people learn skills as well as connect with their peers and have a higher sense of satisfaction. So 100 percent, I think what we’re both getting to here is that nonprofits, one of the real challenges, if you don’t have something like a theory of change that’s validated or backed up by social science research, is that a lot of times the pieces of the puzzle just don’t make sense. They don’t actually fit together, and so that’s why it’s so important not just to say, “What do we hypothesize happens?” but then, “How does the existing research support that?” And then to your point, what does it look like when we measure it? Like, we could have a great hypothesis of change, but then when we measure it, we may find that it’s not backed up by social science research, or we may measure it in different populations and realize, “You know what? Part of the hypothesis didn’t prove true for a certain group.”
And I’ll give you an example there. You know, at my last organization, we had a theory of change and logic model. It was backed up by social science research. We created a pre- and post-assessment using validated scales for use with young people, and we gave it to the young people in our program. And one of the things we found was that lower-income middle school students were actually seeing decreases in some of the scales that we were hoping they would see increases on. And that was because the program was built for high school students, and it had naturally evolved to middle school students because everyone was like, “This is so great, we have to bring it to middle school.” But this is where the concept of generalizability is so important. Oftentimes we just think, “Oh, because this thing worked here, it’ll work there.” And that is so often not true. And so that’s just another reason why we have to build our program based on logic, validate it with social science research, test it using real scales and real measurement, and see who it works for and who it doesn’t. And then we need to make adjustments or changes. But if you’re a nonprofit leader hearing this, you’re probably like, “Oh my gosh, that sounds like so much work. I do not…”
Tucker: That’s exactly what I was thinking. I was like, oh man, here I am. I’m sitting here, a small nonprofit leader. I’ve got multiple people pounding on my door.
I’m in the human services work, you know, with a mother who I’m helping to recover from addiction. Or I’m sitting in the space between the community and the cops, helping reduce youth violence. And I get phone calls at one in the morning to come out and support.
I’m thinking of the literal nonprofit leaders that I’m talking about right now with these stories. If I’m that person, what is my next normal around this, so that I can take the steps that I need to take? What is the next normal around this work, and what are some steps that would make sense around it?
Sarah: I think there are some dos and some don’t-dos. And I’m gonna start with the don’t-dos, because we see this a lot. The first thing to not do, especially if you don’t have a research base and a measurement tool, is do not expand your programs and services, especially to groups you’re not sure the intervention works in. We see a lot of nonprofits expanding services – like it happened in my nonprofit – because somebody asked you to, or because it sounds good in the moment, but you haven’t done the thinking about what change we will generate for this group that’s different from the one that’s currently using the services, and how the intervention needs to change in order to best serve that new population. So the first don’t-do is: do not expand your services or your activities if you aren’t first measuring the impact of the ones you already have. Just don’t do it. Don’t do it. And especially don’t do it if you’re working with sensitive populations. I think that’s just the other piece that I would add there.
Tucker: Sarah, that reminds me of an analogy that I’ve thought about with my own house – and we’re so guilty of this – which is, for everything we bring into the house, we need to take something out of the house. Whereas instead, what happens, of course, is everything that comes in just keeps compiling. We get this overwhelm of stuff, and then we’re feeling bogged down and heavy. But if we have a discipline around it: don’t add a program unless you’re already evaluating something and know what’s going on in the first place. You know, and this gets back to the shift that we use: stop saying yes, and double down on your unique value. Well, know what your unique value is, get into it and go deeper there, and then go into other services.
Sarah: Big time, yeah. And that just reminded me of my last organization. You know, well-intentioned program folks who are so passionate about the work would often come to me and say, oh, I wanna add this, or I wanna add that. And I had to be the holder of No a lot. And I would use an analogy, which is to say, you know, when we’re building an addition to the house, we don’t just like tack on, like let’s say it’s a blue house, we don’t just be like, oh, I’m gonna put a pink window over there because I need a window and just like add on without thinking about how it’s gonna fit in with the rest of the house.
We have to build an addition to the house that is built on the integrity of the existing house and matches right with that house. And so often, additions in the nonprofit space are just like that. It’s like, well, let me throw this new room up here and it’s gonna look different. And maybe the floorboards are higher, but who cares? And so we just, we have to be really intentional. We have to be really intentional. And that means saying no a lot, which is uncomfortable, straight up. It’s uncomfortable.
Tucker: And Sarah, in that – which I think is definitely in the next normal – sometimes saying no can be really difficult, because it becomes almost like a confrontation. It’s a little bit like the “what does it take?” question. Are there questions you can ask that help program staff, or a board member, or whoever, work through the work that needs to be done – to help them determine what it will take? What are the questions that somebody might ask?
Sarah: Well, for sure. I mean, I think if it’s a brand new thing that you want to add on, definitely think about what it takes and what both positive and negative implications it might have, and be really explicit that there may be negative implications as well. For example, a lot of times folks expand programming without expanding staff resources, which means the staff resources dedicated to the existing work get stretched thinner, which means the impact for the existing work gets thinner.

So that is one of the first things, but I would say there’s always a way to deepen and make richer existing programming, and that’s where I would point folks’ attention. So, for example, if you’re working with young people on service learning, how could you improve or enhance their training that they already get? How could you improve or enhance a part of the program that you’re already delivering in line with the change you want to make? And that’s where I really suggest most folks focus.

But going back to your original question on the dos and don’ts, I think what folks can do is start small. You can look at your existing program, and in the show notes, we can put a link to a logic model template. Start with a logic model. Have your whole program staff get together and think about together: what are your inputs, what are your activities, what are your outputs, outcomes, and impact? And just start there. It’s a great way to make explicit what might be implicit to folks. And then I would really focus on the latter half of that logic model and think about developing a theory of change, which is to say, how does the change actually take place in our program and what research might support it?

Now, the research I did was really deep, and I’ll say because I’m a student at Claremont, I have free access to all of the articles, all of the scholarly research I want. Many folks don’t have that access if they’re not in school right now. But you can certainly go online. Google Scholar has a ton of access to great academic research. Some of it is available for free. Do some research to look at other similar interventions that have similar outcomes to yours and find a few studies that prove or disprove that what you’re doing may actually work with the population you’re serving.

So that’s where I would suggest folks start. Do a logic model, just start somewhere, and find some research that might help you make the case that your intervention is going to actually have the intended outcomes with the population that you’re serving. And then the second thing I would say is get creative with finding resources for evaluation. There are two things to think about here. One is that a lot of universities have students who are looking for projects to do as part of their research. Like me, right now, I’m doing a research project with an organization. So, for example, at my last organization, we worked with Furman University and an AmeriCorps student who was getting a master’s degree, and she helped us create our theory of change, our logic model, and our pre- and post-assessment. It was really low cost because AmeriCorps dollars were paying for it. So, a lot of universities need case studies. Go to your local university and see what they have available. Their students are always looking for opportunities to do research in real-life organizations. And then, of course, interns – as summer comes up or whatever comes up, look for interns who are doing evaluation or research and have access to the scholarly databases that might help you as an organization get access to tools and resources you wouldn’t otherwise be able to get. So those are a few things that I think nonprofit leaders can do.
Tucker: Well, and you know, another thing that came up: when I was the head of fundraising at an organization, we explicitly put – I don’t remember, I think it was $20,000 – in the budget explicitly for impact evaluation, which fed, of course, very beautifully into the grant process that we were in. Which is, hey, we wanna make sure that your investment into our program is actually producing the results that you’re looking for – as are we, for that matter. We’re looking for those same results. But we put it directly in there, saying we need to continue to ramp this up. And that particular foundation was totally open to that.
You know, Sarah, you also mentioned another thing around a tactic – I’m going back to when you were talking about program staff. The phrase that always comes to mind for me is one from a dear mentor of mine, Bill Milliken, the founder of Communities In Schools. That nonprofit is based upon longitudinal data – it’s the best dropout prevention program in the country, and they have real longitudinal data. But Bill, back in the early eighties, used to say: we have to move from charity into change. We have to move from charity into change. And I was thinking about what you had shared around program staff. When you’re hiring for program staff, what should people be looking for? It may not be the professional evaluator per se, but what’s the mindset and the approach that nonprofit leaders need to be looking for in program staff, when it comes to having this bent towards not just being charitable and nice, but actually being geared towards change and understanding what that is?
Sarah: Absolutely. I mean, I think especially at mid to larger size nonprofits and even smaller, finding somebody with not just a programming background, but also ideally an impact and/or research background. And it doesn’t have to be explicitly evaluation, but somebody who’s done some research work before. I think it’s really important that folks come in with that mindset, that analytical mindset, where they respect and know how to use and want to collect and analyze data. And so I would really look for somebody with that skillset to lead your program. It’s really most important in terms of a leader of your program that they understand how to develop programs based on science and on previous research. And then I think for program staff, it’s really an opportunity for skill-building, is the way I think about it. Program staff who are actually implementing the program may or may not have to have a great facility with data, however, I think when you hire a VP or head of programs who does, they will bring it into the day-to-day with staff in order to help staff understand the importance of that data and then leverage it to make decisions.
So for example, at my last organization, when I came in, we were not collecting data much at all, and not in a consistent way, and we really weren’t using it to make decisions. And so one of the things that I did, as we were creating this logic model and theory of change, was to create a kind of spectrum of program implementation from low to high, with the idea being that if you are implementing the program at a high level – which meant, you know, the young people are taking advantage of most of the opportunities offered, they were doing all of the activities, all the program managers were checking in at a certain level – we could hypothesize that that was going to lead to higher increases on the pre- and post-test, because the young people were more engaged in the program. But guess what: if we’re not tracking implementation and using that as a discussion point, we can’t have that conversation about what needs to change. And so one thing I brought to that organization was a scorecard that I looked at with my program managers monthly, which is just to say: what does implementation look like right now? How much and to what degree are the young people in the program actively participating? How much and to what degree are you going out and teaching? So that we could make adjustments – not in a punitive way, but in a “what does it look like to increase engagement” way, in support of the overall aims and objectives.
So for me, I think the most important hire is that person leading your program. They need to know and understand and really have a passion and appetite around data. And then I think that person can work with the team and the staff to help other folks have that same passion.
Tucker: I love that. You know, what’s made possible if these things happen? If you put some of these pieces in place around your dos and your don’ts, starting small, putting a logic model together. I’m thinking about our THRIVER program that we just did with the Pikes Peak Community Foundation, that learning community, where we did an Impact and Story Energizer, which in our case was a three-workshop series, at least one of them about impact evaluation. What was made possible for those nonprofit leaders, or what’s made possible when you’re able to get into this space around hiring, starting small, putting logic models together? What does that enable in your organization?
Sarah: I mean, I think ultimately the goal is to make the positive change that you hope to see in the world. I think that’s the ultimate goal of that work, and so I think it makes that impact, that positive impact, possible. I also think what it makes possible is focus, and that is I think a big thing for nonprofits. A lot of nonprofits have so much duplication. There’s competition instead of collaboration, in part because people expand without thinking about, “Should I be actually doing this?” and, “What does it take to do it well?” And so I think if more nonprofits were doing this work of evaluating impact, they would be doing less. They would be doing less and maybe they’d be doing the less that they’re doing better. And so, that is ultimately what I see. I just see a tightening up of the nonprofit landscape in the best possible way. Right now, so many nonprofits are just doing… It’s a little bit like a rummage sale sometimes. It’s like all of the things, and I get it, you know? It’s hard to say no, but ultimately I think if our goal is change and not charity, then we have to start measuring because measuring’s gonna help us say no.
Tucker: You know, you also said something else that got me thinking from a “made possible” perspective: this gives you data to inform from an objective perspective. At the very beginning of this, you talked about that story from the hospital. You said something like, based upon the data that we were able to gather directly from the physicians, “This is an important program for us to now do.” And that gave buy-in, I’m guessing, to all the pieces. I’m thinking about the story I remember from Kevin Hagen, our co-founder at THRIVE IMPACT. He was the CEO of Feed the Children, and this program officer came to him with very clear data around the cost and the impact of a particular program at an orphanage in Kenya. It was something like $2,500 per kid to create the same level of impact as something else that was costing like $50 per kid. I can’t remember the exact numbers. But if his program officer hadn’t had the data to share the return on that impact and the return on that investment, there’s no way. I mean, that was an orphanage that Kevin held very close to his heart, based upon his own story, and he’s shared that before. Without that data, it just would’ve been a subjective conversation of, like, “I think this, you think that.” And there’s no way Kevin would’ve changed anything. Creating that objectivity, it sounds like what’s made possible is, frankly, better conversations, because you’re not in subjective arguments of what I think versus what you think. You’re actually having a conversation about the data. And I remember this happened for me too at my last organization. I got so tired of the subjective conversations because they just ultimately became arguments.
And then I was like, “Oh God, I need to get some form of data that I can grab onto.” And then my conversations with my CEO were literally conversations about the data. And when he would not agree with it, I wouldn’t feel like he was disagreeing with me. I would say, “Well, what does the data say? What is the data suggesting we do?” And so it allowed us to have, frankly, better conversations that were much less contentious around moving forward.
Sarah: And I will say, the conversations aren’t always about stopping. The answers can range from “let’s stop doing this” to “let’s create the conditions for this thing to work within the population that we’re serving.” So, in the example I gave earlier about low-income middle school students, ultimately there wasn’t enough political will to stop delivering that program to middle school students. But what we could do is say, “Here are the middle school students it works for, and here’s what they need in order for it to be effective for them.” And so it also helps to have that conversation about the resources necessary to produce the positive change, which again is a much easier conversation to have, because it’s not “I think they need this.” It’s based on the data: “Here’s the gap, and here’s the deficit we need to fill.”
Tucker: I loved how you said there wasn’t enough political will, and I’m just thinking about all the organizations out there and the leaders who are working with their boards, as an example, and how hard it is to say no to the sacred cows. That’s what you’re talking about. The political will at a higher level in the organization to say no to something that we so deeply care about, because maybe we started it, or whatever.
Sarah: Which is why I started with the don’t-dos. I started with the don’t-dos because it’s so easy to not start something that hasn’t been started yet. Once you start something and actual people are benefiting from it, even if it’s two or three, and they have beautiful stories to tell, the ability for people to take that away… I mean, it’s just so hard. It’s just so hard, which is why the don’t-do is out there for me as the first step. If you do not have evidence that your thing works, that’s your first step. And then if you want to bring something to another population in another context: if you don’t have data to suggest that your approach is generalizable to that population, don’t start it until you have that data, because stopping it is so hard.
Tucker: So hard, so hard. Well, I want to turn this into a part two, because Sarah, I know you have a lot more deep wisdom around qualitative research specifically. So maybe we’ll do another podcast going a little bit deeper into things like appreciative inquiry and qualitative research.

Sarah: Yeah, love it.
Tucker: Different methods of surveys, interview questions, sequencing of those questions. Because I think getting even more granular… I kinda wanna let you just geek out for a while and then let’s see how it goes. But I think that what you’re learning and have learned, not only through lived experience but also your first and second master’s programs, is deeply important to this space. And even if, you know, you would take people to like level 10 and they’re at a level two right now, and there’s a big gap in between, maybe that’s okay.

Sarah: For sure.

Tucker: I just kinda wanna let you go deeper and share some of this like, what was that word that you used earlier with me?

Sarah: Satisficing!

Tucker: Satisficing. I’m like, oh, let’s go into that. So anyway, thank you, Sarah, for sharing some of this wisdom, particularly for the small community-based nonprofits that we serve and work with. And I appreciate your level of heart. What I see in you, Sarah, is how deeply you care, and you’re like, “We have to care all the way to the level of deep impact evaluation.” It’s actually an indicator of care, that we actually care. Not just to do the nice thing, but to actually make sure that that nice thing is actually creating the positive change.

Sarah: Absolutely.

Tucker: And that’s a level of care that I appreciate how much you bring to this sector.

Sarah: Thank you.
Tucker: Awesome. Well, we’ll put a few things in the show notes, like Sarah mentioned, like a logic model. Maybe we’ll find some other things that we have around this. So take a look at those in the show notes. Otherwise, see you next time for part two. I’m just gonna tee it up right now: part two of Nonprofit Leadership for the Next Normal, around impact evaluation. Sarah, thank you so much for all your heart and your great work.

Sarah: Thank you.