June 14, 2023
Join us in this enlightening episode as we delve into the world of evaluation and innovation with two esteemed guests: James Stauch, Executive Director of the Institute for Community Prosperity at Mount Royal University, and Bethany White, Evaluation Manager at United Way of Calgary and Area. In this conversation, we explore the critical role of evaluation in the context of innovation, particularly in social and community development initiatives. Together with James and Bethany, we navigate the complexities of evaluating innovative projects and programs, uncovering the key principles and methodologies that drive effective evaluation practices.
Evaluation in Innovation
0:12 JAMES GAMAGE, HOST:
Welcome to this episode of the Responsible Disruption podcast. My name is James Gamage, Director of Innovation and the Social Impact Lab with the United Way of Calgary and Area. At the Social Impact Lab, we work almost exclusively on upstream projects with designs on the social future of our province. The problem we have is that it's often complicated to evaluate the direct impact of these projects, as the environments in which they play out are complex and interconnected. That's why I'm so excited for today's conversation on evaluation and innovation. I have with me two local experts, James Stauch and Bethany White, whose insights will shine some light on this often nebulous subject and hopefully leave you with some insights into how to approach evaluating and reporting impact.
So let's introduce our guests. James Stauch is the Executive Director of the Institute for Community Prosperity at Mount Royal University. During his time with MRU, he has developed social innovation, leadership, and systems-focused learning programs for both undergraduates and the broader community. He is a former foundation executive and philanthropy and social change consultant, including as Vice President and Program Director of the Gordon Foundation. James currently serves as a director on the board of Alberta Ecotrust, as an adviser to the nonprofit Resilience Lab, and on the editorial advisory board of The Philanthropist. He is the lead author of an annual scan of trends and emerging issues produced in partnership with the Calgary Foundation.
Bethany White is the Manager of Evaluation and Insights at United Way of Calgary and Area and a graduate of the University of Calgary with a Master's degree in sociology. She is a social research, evaluation, and impact assessment professional with extensive experience in the nonprofit, government, and private sectors, skilled in policy and social research, data analysis, data visualization, knowledge translation, and summative, formative, and developmental evaluation. Welcome to you both. So I'm really looking forward to this discussion with you because, I have to be honest, this is an area which is a bit of a black box for me. My background is in business innovation, and I'm used to projects where the outcomes are pretty obvious: new customers, additional sales revenue. But evaluating success in the social innovation sector is so much more complex, and I'm fascinated to discuss it with both of you experts. I'm going to start off today by asking you both to share the story of how you ended up working in your field. What drew you to this industry and what are you most passionate about? If I can ask that of you first, James.
03:01 JAMES STAUCH, GUEST 1:
So first of all, thank you for hosting this conversation and for providing space just to dive into social impact topics in depth, to really consider these topics and critically evaluate them, because they're really important, and how we think about them is really important. I came to this space of social impact learning, or social impact education, through a very circuitous route. I originally was trained as an urban planner, then realized I didn't want to be an urban planner, but I was exposed to a whole range of skillsets through that training: everything from public policy, to understanding philosophical and ethical concepts, to a range of technical competencies, to the important role of community, how communities evolve over time, and what that means for societies when you scale that up. I've been involved a lot in philanthropy as a grant maker, as a funder, so when I come to evaluation and talk about that, that's mainly the lens that I'll be drawing on. And I'll also talk a little bit about the role of post-secondary institutions, the role of universities, and the role of research cultures in that vein.
04:36 BETHANY WHITE, GUEST 2:
Thank you both, Jameses, for joining in this conversation today. I'll tell a little bit of the story about how I came to be here, which is similar to James' very circuitous route. I did my undergrad and graduate studies in sociology and actually focused on environmental studies at the time. I was very curious about the interactions of humans with the physical environment and how that shapes cultures and societies. As a result, I ended up working in environmental policy and then moved into environmental consulting, where I focused on socioeconomic impact assessments for major oil and gas and mining developments in Western and Northern Canada. I did that for the better part of a decade, and it's interesting in the consulting field, where you are given the purview of expert and yet learning is not as emphasized. So it's an interesting field to grow up in as a professional, and then to reflect on after having left it in the downturn of 2016. I didn't know where my skills might take me or how transferable they might be, but I was fortunate enough to find a position with the City of Calgary, working in their policy and research unit for a little while, getting more exposure to the government and nonprofit sectors, and meeting United Way folks at the time as well. So when my contract came up at the City of Calgary, I made some connections at United Way, and this position as an evaluator was available. By that time I was able to see the connection between the skills I'd grown in impact assessment and evaluation, so I moved in that direction over here and have since become manager of that team, with two evaluators.
I remember thinking when I first joined United Way that the purpose of evaluation in this sector was to make sure that the agencies we were funding were doing what we asked them to do and that they were achieving results. I soon came to realize that that was not my job. I very quickly learned that it's really about using evaluation as a means of learning how to improve what we're doing, learning about collaboration and how to do that better, and focusing less on accountability and more on progress, learning, and practice in the sector. So that's the mindset I've brought to my work since those early days of my role here at United Way, and I continue to be grateful for the opportunity and the space to grow my understanding. I often admit that this is the first time I've actually done evaluation in this context. So everything I know about theory of change and theories of transformation, developmental evaluation, et cetera, came out of my work here with the United Way and our funded agencies, as well as around Indigenous evaluation. I'm very grateful for the people and partners that I've come in contact with over the last five and a bit years. It's the most growth and learning I've ever done in my career, coming from consulting, where it's very much a "what do you know now that has value and I can be paid for in this moment." So I'll stop there and continue with the conversation. [Laughs]
08:23 JAMES: No, that’s absolutely the consulting world. [Laughs]
So you mentioned evaluation, and you mentioned social impact there, and I think you touched on something we were talking about before we came online with this podcast: the baggage associated with the word evaluation. Talk a little bit more about that, and maybe, James, you can step in as well.
08:49 BETHANY: Yeah, certainly. I think it has to do with the context in which evaluation happens, for the large part: as something that happens between a funder and a fundee, imposed on fundees by a funder and their priorities. So the value piece of evaluation is often not defined by those most affected, because of the power dynamics inherent in our sector and the way that resources flow through it. I've learned in many contexts to avoid the term evaluation and to speak primarily to: what do we need to know about what we're doing that can help us do it better and help us learn from this for the next time? Or, what is the value of what we've done, so that we can share it with others and they might learn from it as well? So that's generally the approach I take, and I often also consider it an important part of effective project management, even if you don't call it evaluation; that resonates with some folks who have more of a mind towards project management. But yeah, it's certainly a term that comes with a lot of expectations rooted in the traditional methods of program evaluation, which permeate and continue to influence the way that evaluation is done in a status quo kind of setting, where it's added on or imposed rather than embedded within a process.
10:19 STAUCH: I really like, Bethany, how you started off by saying it's fundamentally about values: about what we value and how we value. The concept of value is embedded in evaluation, but we sometimes substitute measurements or metrics for evaluation as if they're synonyms. Actually, evaluation is much more about how we come to an understanding: are we clear on what we value and what is important to us, and what tools do we need to discover that? Sometimes it's measurement, sometimes it's summative, sometimes it's formative, and sometimes it's about storytelling. All of these tools need to be mobilized, along with a culture of curiosity, a culture of inquiry, and a culture of constantly trying to be clear on the value of what you're delivering, whether it's a program, an entire organization, or a system. There's that exercise of even defining and figuring out, well, what is it that really matters to us? What is important? And a lot of those things you can't measure, of course. How do you measure identity or belonging? I mean, there can be crude proxies for some of these, and they're worth exploring. But I think that question of value is really important.
Mark Carney wrote a book a couple of years ago on this concept of value, and he started off by saying it's pretty ironic that you have very clear metrics around returns, around profitability, around return on investment with, say, a company like Amazon, which at the time I think was $1.7 trillion in valuation. And yet we have the world's lungs, the Amazon rainforest... We have no agreement on what the value of that is, and surely it's more than $1.7 trillion. But our entire culture, our entire ethos, has not set us up to value that. And that can scale down to a very local community context, where there are certain things that are complete blind spots in evaluation that are actually the really important bits. We end up measuring stuff because we can measure it, and then we sometimes program to the measurement, and funders kind of amp that up too: if we can measure it, then we'll program to it. You see it in all sectors. You see it in education, for example: we know we can measure math easily, so let's place a lot of value on math. But let's not place value on, I don't know, drama, right? On creativity. Even though we know those are really important 21st-century skills. The ability to communicate, the ability to collaborate, the ability to improvise, to adapt, to change, to think critically: these are all really important skills, but they're really tough things to measure. They shouldn't be exempt from evaluation, though, and they should be core, because those are what we value.
13:32 JAMES: I'm going to jump on something you said there about the pressures of funders and what they might bring to what we evaluate. Do you think we normally look for the easiest thing to evaluate because we're just trying to satisfy that need, as opposed to the real need of the program?
13:56 BETHANY: In some instances, yes. I mean, I don't think funders necessarily go into it with that as their purpose or their mindset. They might just be using tools that are somewhat embedded or ingrained with that idea. It's kind of like using the right tool for the right job. The only tool you have, or think you have available to you, is to measure how many people were served and what percent of them said X or Y, and then you use that to compare across programs or across initiatives, and to somehow say this one has a better return than that one. I think a lot of it is due to evaluation practice initially being built out of the world of performance management, rather than practice management or strategy management or community engagement. I learned that it initially was mainly about justifying people's investments in anything particularly social, as opposed to investing in a business, which has fairly clear rates of return attached. And we know the limitations of those kinds of measurements even in the business world; we know the limitations of GDP. The Amazon rainforest doesn't have a value unless you cut it down. But we're still stuck with, well, what's the alternative then? What do we do? Evaluation as a practice has certainly evolved since the '70s, when it first came to be, and I've read a lot about the development of utilization-focused or principles-based or developmental evaluation, which are fundamentally focused on what we are trying to achieve and how we are going to know that we've gotten there. You're centering that problem or that purpose, rather than centering what might be required of you in order to remain sustainable and to be able to speak to that in a way that gathers support.
I think too often evaluation only happens, and happens in different pockets, because it's required by a supporter or funder or donor. There's not enough conversation about how a funder or donor, who may have their own priorities, might recognize how many different ways that organization now needs to package what they've done for different audiences, and what that takes away from the work they're trying to do. What might be better is to ask those organizations what they consider to be meaningful and relevant representations of the impact of their work, and then leave it to the donors and funders to synthesize that and to consider some kind of fulsome value of it, rather than asking fundees to do that work.
16:51 STAUCH: The other thing that often happens, and this is a problem that is particularly acute in Alberta, is that we have a lot of one-year funding arrangements. The typical thing is you'll apply to a funder, whether it's government or sometimes a philanthropic funder, and you might get a third of what you asked for. So there's an assumption there about your capability to deliver, and did you inflate the numbers? You get a one-year grant when you were hoping for something more than that, and you are still required to track and report on metrics, indicators, outputs, outcomes. What happens at the end of that year period is that it's likely everyone moves on to something else. The numbers go into a black box; nobody really knows what happens with those numbers. Then you have to find somebody else to continue the investment in this program or project that's no longer new, and it's a tougher sell. That is not a recipe for innovation. Innovation does not happen in that environment, because you have a funder who's not even sure why they're asking for this. But they ask for it because everyone's asking for it. It's just the culture, right? So we don't really know what happens with it, and the people who are expected to collect the information are typically not evaluators. They're not often data scientists or researchers. They may be people who are actually better purposed doing the work. We don't ask nurses in hospitals, I don't think (and if we do, we should stop), to spend half their time, or even a tenth of their time, evaluating what they're doing. They are caregivers; that's what they're good at and what they're trained to do. So don't ask a frontline youth worker to evaluate. Don't ask somebody who's working with a senior citizen, or who's trying to build affordable housing, or whatever the case may be, to then turn around and spend time on evaluation when they're not really good at it for the most part. And it goes into a black box. There's no comparability. There's no sharing. The community isn't able to use the data. So we have a measurement and evaluation ecosystem that is disjointed and has no point to it. It doesn't lead to anything significant. I think there are a lot of issues there that funders need to revisit.
There are beacons of hope out there. The Ontario Trillium Foundation, a public provincial funder, which is notable, has just announced, under a Ford government, a program for systems innovation in the youth-serving sector. This is interesting. The minimum grant is two years, but they can grant up to five or six years, and it is constructed in a way that understands that it's a long journey where a relationship based on trust is fostered with the funder (still a public funder). They're much more interested in knowing: what are you learning on your journey? How are you rethinking, retooling, rediscovering emergent properties? They're not evaluating, "What did you say you would do? And at the end of the six years, how come you didn't do that?" You did something better, and more interesting, and more awesome, but you didn't do the thing that we funded you for. In Alberta, we're still obsessed with "you didn't do the thing we funded you for," which is absolutely bonkers. We need to get away from that culture, because it's the farthest thing from innovation.
20:40 BETHANY: And you brought up systems change there too. I think there's a really strong relationship between systems change and social innovation: social innovation as a vehicle for propelling systems change and for doing things differently in a way that adds value. That's sort of the definition I have for innovation, doing things differently to add value to the system. And I think traditional forms or structures of evaluation are not actually conducive to systems change at all. They privilege transaction over transformation, and they privilege performance over practice, and that continues to create barriers to real change. It eventually makes you wonder whether they're interested in change at all, or in addressing root causes, or in making things better, or just in justifying the further existence of anything that may be of particular interest to them. So seeking to actually find the right tools for that shift is something that I'm most interested in, in the work that we do day-to-day, especially as we've moved to being much more engaged in fairly ambitious and innovative and large-scale initiatives. The task of evaluation needs to follow suit with that purpose, rather than continuing to use program evaluation tools and requiring logic models of each fundee and things like that. Programs will never change the world, even in aggregate. There are things behind them and around them that need to be understood and contextualized in order to effect real change.
22:35 JAMES: So you've both talked about the disconnection between evaluation and the actual work, and that's sort of the curse of short-termism. But then you're touching on systems change and how to evaluate systems change. Talk more about that, Bethany. What tools could an organization use to effectively evaluate a systems change initiative?
23:03 BETHANY: When I think about tools for that, it's an emerging field. We've been doing a little bit of research on it more recently with PolicyWise, and we've come across things like the Water of Systems Change framework, which helps you understand and define what systems change is and in what domains it can occur, and to think about how change needs to occur at various levels, from mindsets to relationships to policy and resource flows. It's about employing tools and methods that help you surface those things and map them, or relate to them, in ways that do not put the onus on a particular single organization. It's a way of fostering collaboration as well. I think the traditional models of evaluation actually support a competitive environment, not a collaborative one. So there are tools available out there, like developmental evaluation as an approach in particular, or principles-based evaluation, that are geared towards the purpose of understanding: what have we done, the big "we"? What are we doing? What does it mean, and what does that mean in terms of the steps we might take next? And to engender a conversation at that level, where we're not saying, "James, you need to do this." It's tough, though, because you have these power dynamics at the same time. You can't ignore that they're there, and yet you still try to engender collaboration. It's one small step in that direction to actually look, at a very nitty-gritty level, at what questions and tools and methods you are employing and whether they are conducive to that kind of engagement and interaction.
24:50 JAMES: Yeah, I mean, you bring up that context is everything. When you're in an innovation project, they're never linear, and actually all you're doing is really proving the success or failure of the next step, which might define what the step beyond that is. So developmental evaluation gives you that flexibility, does it, to be able to...
25:16 BETHANY: Yeah, definitely. It's much more of an ongoing process. You're making space to stop and reflect on what we've done and what we do next, unlike something like programmatic evaluation, which tends to wait until the so-called end of whatever intervention and then look back. It's usually too late by then to inform anything, and what damage has been done in the meantime? You also have formative evaluation, which is similar to developmental evaluation, but I would say the difference in that case is that you're trying to test an existing model and your focus is on that: is it working in this context? Developmental evaluation, I think, is much more flexible, much more geared towards questions that are meant to get to community strategy management or engagement at that level. It's not at the level of the program. It's at the level of the community, or the issue, or the coalition or network involved, so there's that kind of appeal to it in that context. And it's that much further removed from those traditional models that are still dependent on tracing back to a logic model that may make zero sense given the actual complexity of the world. It's designed to acknowledge complexity and to work with complexity, as opposed to trying to simplify the world around us into, quite literally, a model.
26:47 JAMES: Linear logic model.
26:49 STAUCH: Yeah, and good evaluators are deeply consultative. They build trust. They go on a journey with you. It's often multi-month, multi-year, and all of the players start to see this evaluator as a kind of neutral, trusted party. It's so different compared to the McKinsey phenomenon, where you just kind of swoop in as a consultant and decree, based on your wisdom and your schooling, what ought to be or what the situation is. Developmental evaluation is deeply curious, and it takes time. It takes time for everyone to move away from, like you say, that competitive culture we have fostered with our older form of evaluation, our older culture of evaluation, which is: let's measure stuff and let's hoard that information. We'll share it with the funder if they need to hear it, but we're not going to share this with the world, because some of it's going to make us vulnerable, right? And we don't know how we compare with organization X, Y, or Z. So if you foster a really good developmental evaluation process and culture, people start to let their guard down. Part of the magic is they're getting real-time feedback through the process on how this is unfolding, and they're hearing their voice in that. That builds more trust, and more trust leads to much richer insight and shared learning than you would ever get from bringing in an outsider to look at it dispassionately from above and then report, after which it sits on a shelf and doesn't really stick.
28:36 BETHANY: Yeah, and the data itself is actually used for its intended purpose, as opposed to being sent to some external other for some unknown purpose. It's not about grading or performance. It's about understanding the value of what we all come together to do.
28:55 STAUCH: Well, the other thing with systems evaluation is that you're in a non-binary world, a non-binary space, a nonlinear reality. So it's useful, rather than binaries, to think of polarities. Often in systems you have two things that are both true when they should be contradictory. They are contradictory, but they're both true, and they're both happening. This happens quite frequently. Plus you have emerging, unpredicted, unpredictable phenomena that nobody saw coming. Nobody has the tools to measure them or had pre-planned for them, but wow, this has emerged as a thing. So the flexibility, the nimbleness, the creativity that you need to bring to systems evaluation, and that non-linear approach, is really, really critical. And not to take anything away from logic models, but a logic model will only go so far in a systems context.
29:59 BETHANY: Indeed, and I think, too, James, you mentioned earlier something about the burdens of evaluation. When we're talking about developmental evaluation, it is quite resource-intensive, and yet it may not feel like a burden, because it's something you've developed and embedded within your processes as something of value to the work. Whereas evaluation can feel imposed and burdensome depending on the context in which it's attached to a particular initiative. Where does it come from? Whose questions are we trying to answer here? Do we have ownership of this as a process?
30:35 STAUCH: That's so true. Yeah, evaluation feels to many people like an albatross around their neck. And developmental evaluation done well is actually such a rich, growing experience that you kind of forget you're even doing evaluation, and you're doing it right. It's like, no, this is organizational learning. This is how we learn and grow and become better at what we do, and that ought to be the purpose of evaluation anyway.
31:06 JAMES: How do you get traditional funders on board with that way of evaluating?
31:13 STAUCH: Well, they have their own systems challenges. We need funders that have... and this is going to be a terrible stereotype, but I'm gonna say it anyway. In Calgary, we need fewer funders with geophysicists and engineers and accountants on their boards, people who are numbers people, who love numbers and love measurement. And we need people on these boards who understand social systems, who actually do some of the work and understand some of these dynamics, who can bring a little more nuance. We need people who understand the power of narrative and storytelling. We sometimes delude ourselves that stories don't matter, that stories are kind of on the side. But stories are really, really critical. Let's go outside the nonprofit sector for a second and look at our storytelling around climate change and how we're acting on climate change in this province. I've heard lots of geophysicists and engineers say, you know what, it's carbon capture and storage. This is technology that is still in its infancy. Canada is not even a world leader in this technology, not anymore, anyway. And we are telling ourselves a story that this will save us, this will achieve our net zero targets, this will mean Alberta doesn't have to make harder choices. If you care about measurement and numbers and actual trends, you know that's a lie. You know that's not true. So I think even the people who have the most affinity for numbers and measurement are still telling themselves stories, and are not necessarily coming clean with that, right? It's a fascinating thing. We are made of stories, and at the end of the day, donors will fund things because the story is compelling, ten times more frequently than because the numbers are compelling. And they may say differently, but it is not true. It's the story.
33:22 JAMES: It's funny, I was reading an article about carbon capture this morning and thinking exactly the same thing. We appear to be kicking the can down the road based on a story, a belief about the future, which isn't necessarily the case. I want to talk to you about ethical considerations as well. It feels like the story you're telling about developmental evaluation is that ethical considerations around collection of data seem to be solved by the more collaborative approach, the more embedded nature of the evaluator in developmental evaluation. Is that the case? Is it a byproduct of developmental evaluation that it is more ethically sound?
34:18 BETHANY: By itself, no. You would still need to be intentional about your ethical considerations, whether it's a collaborative space or not. There are many different things to consider around confidentiality, privacy, and informed consent of those you are serving, the populations you're serving. And that does not come automatically from engaging them in the process. It comes with the relationship and with being very intentional about creating those protections in that system and structure. Respect for diversity, equity, and inclusion is also... I think that system is more conducive to it, but you still need to be intentional about it. You still need to figure out where you need to embed that within your stories and your thinking around what you're looking for, the questions you ask, who you're asking, who's at the table, who's not... There are all kinds of things you could do. The four of us could do a developmental evaluation and come up with some ideas and solutions. Will they work? Probably not, if we're not more fully engaged with what the reality is, with people with lived experience or different cultural sensitivities, and with where we're actually going to try this. Are we trying it in a neighborhood, or at a city level? Do we then say, now that we've had a success here in Sunalta, that it will work in Edmonton, or in a different neighborhood? There are all kinds of assumptions and biases that you need to consider. And I think that's something that can run in parallel to and support a proper and good developmental evaluation. But in and of itself, that's not what it is.
36:15 STAUCH: It's interesting, because there are certain ethical frameworks that lend themselves to measurement more easily, because they're about outcomes, they're about consequences, and sometimes they're even just about doing the math. The sort of utilitarian concept of pleasure versus pain has been adapted into effective altruism, where the idea is: if you're a really smart person, you could work in a socially purposeful career for a nonprofit and not make much money, and according to effective altruism, you'd be wasting your life. You're much better off to become a hedge fund trader and give 10% of your wealth to another nonprofit that will distribute the funds effectively, et cetera, et cetera. This is all based on numeric calculations, and it is an ethical framework of sorts that in some ways is very compelling. But there are other ethical frameworks that are much less easy to quantify, and you mentioned the word intention. How do you value moving towards an equity-based model that might require changes in how you operate that make you less efficient, or make your output less, but you're doing the right thing? You're doing something that, from a different justice framework, makes sense, but it doesn't lend itself to the same kind of measurement of outcomes. Those are really interesting things to dig into when you dive into this question of evaluation. Value implies there's an ethical framework that you're appealing to, consciously or not. And some of those ethics are really, really difficult to attach any kind of metric to.
38:17 JAMES: So we're drawing to a close on this discussion. Is there anything we haven't touched on that you think our listeners might like to know, or that might add value to the conversation about evaluation in general?
38:33 STAUCH: Yeah, I would like to talk about the role of post-secondary institutions, and universities in particular. In Canada we invest about $43 billion in R&D, and the vast majority of that is tech or commercial. About a third to a half of that goes directly to higher education. If you were to try and measure how much of that would qualify as social innovation or social R&D, it's microscopic: maybe one one-hundredth. I take that number because $430 million is the figure that the Social Sciences and Humanities Research Council granted this past year. Let's assume, and this is a generous and wrong assumption, that it's all for social action. Of course it isn't. But that should imply, OK, well, that's a pretty big chunk of change in and of itself, even though it's a fraction of R&D spending. It means we have this huge army of social science researchers who could be mobilized to help understand community dynamics, community metrics, social impact, and so on. But that's not really happening. There are some academics who are quite embedded in community, but in general there's a huge disconnect, a huge wall between academic research and social insight in the field, in the community. So why is that happening? Why are we not able to access the huge resources within academia to make social impact measurement flourish, and why is academia so seemingly disconnected from the community? I think that is a systems challenge in and of itself. That's no small challenge; the reasons for it are both ancient and modern, but it underpins a big part of our inability to get on top of understanding social impact and communicating new insights across the field in a rapid, useful way.
And just look at food security, for example. We have had a very public-facing research consortium focused on food security for at least a decade now. It's called PROOF. It's based at U of T, but there are researchers right across the country, including in Calgary. They have been better than the vast majority of research consortia at getting their information out into the world for free, in open-access journals and so on. And yet the community's uptake of that information, our ability as funders or frontline agencies, or governments for that matter, to absorb it and change how we approach food security... we seem to have almost perfectly ignored that information, or we're aware of it, but, you know what, we're just too busy feeding hungry kids, or whatever the excuse is. But if you really claim that you're changing the system, then you actually need to measure what it is you're doing to change the system. If we know that certain forms of intervention are more effective than others, and we know that certain policy changes are required, then part of your evaluation is: well, how are you changing policy? Who are you meeting with? How many elected officials? At what levels? What's the message you're trying to get across? What kind of public advocacy work are you doing? Those are things that many nonprofit boards still run from like the plague, and yet it's perfectly permissible, and it's charitable. If you're saying you're interested in systems change, you need to embrace that, not just measure how many widgets you're creating or how many hampers you're filling, or whatever the case may be. What are you actually doing to change the system? So those are a real collective herd of elephants in the room, I guess, when we talk about evaluation.
43:03 JAMES: Thanks, James. Bethany, anything to leave our listeners with?
43:04 BETHANY: Oh my goodness, I don't know how to follow that. I just want to say thank you, James, for your insights. It's always a pleasure listening to you speak and reading your insights. I might circle back to evaluating innovation and some of the things that can actually foster a conducive environment for it. These are things we've touched on over the course of the hour, but one of the big ones is shifting the evaluation mindset away from emphasizing accountability and judgment towards learning and adaptive approaches. That's a big one, and that's a mindset, not only in evaluation but in the sector, in philanthropy, et cetera, that we should be telling a story about and emphasizing as much as possible, in whatever circles you might have some small influence in. We've also talked about how, insofar as evaluation can be a barrier to collaboration, it can be a barrier to innovation as well, and so you need to determine what mechanisms might be best. I think there's maybe a disconnect between an evaluation tool and the mindset or the outcome or the mechanism through which you're trying to achieve change, where it's just an add-on rather than something that can really affect your process and outcomes, depending on the way that you do it. So while evaluation has a lot of baggage, it's not to be avoided, because avoiding it comes at the cost of perhaps the very purpose you intend to advance and achieve in the long term.
I think evaluation as a practice, and philanthropy as a practice, should come to terms with their risk aversion as well. There needs to be room for error. There needs to be room for practice. Insofar as you're able to pay attention to that, you're also better able to respond to potential risk to community, because if you're only focusing on what the successes were and glossing over what the challenges were, that could end up causing more damage in the long term and take you away from your intended purpose, especially around actually transforming and improving the world around us. Yeah, I think I'll stop there.
45:49 JAMES: No, it's a great way to round out the conversation. So thank you, James, and thank you, Bethany, for your time today. I found that conversation very insightful, and I'm sure our listeners will too. And thank you to our listeners for choosing to spend this time with us. Keep an eye out for the next episode, where we'll be continuing this discussion by bringing the theory we've talked about today into a practical space and looking at what evaluation and innovation look like in practice. So until next time, goodbye.
[Outro music]
That's all for today's episode of Responsible Disruption. Thank you for tuning in, and we hope you found the conversation valuable. If you did, don't forget to follow, rate, and share wherever you get your podcasts. To stay up to date on future episodes and show notes, visit our website at socialimpactlab.com or follow us on social, and until next time, keep on designing a better world.