Creating solutions with community using AI

In the second episode of the Working Title podcast, host Amina Mohamed interviews Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence at Ontario Tech University. The discussion explores how AI is transforming non-profit organizations, and the importance of data privacy.

Peter Lewis talks about his work with small organizations and non-profits, emphasizing partnerships with groups like the Canadian National Institute for the Blind and the Pamoja Institute to tackle issues such as food insecurity and accessibility in AI transparency tools. He addresses non-profits’ mixed feelings about AI, its potential benefits, and the critical need to manage AI-induced biases. Lewis advocates for a balanced approach – integrating AI while preserving the mission-driven essence of non-profits and ensuring that meaningful work remains for employees.


This is the second podcast episode from Amina Mohamed, one of five writing/podcast fellows working with The Philanthropist Journal. The fellowship is focused on the future of work and working and was made possible through funding and support from the Workforce Funder Collaborative.


Transcript

Amina Mohamed: Welcome back to Working Title, a podcast by The Philanthropist Journal. I’m your host, Amina Mohamed, and today we are continuing our two-episode deep dive into the intersecting worlds of non-profits and AI. In this episode, we’ll get to hear from Dr. Peter Lewis, who’s a Canada Research Chair in Trustworthy AI. And he’s going to talk to us about some projects that he’s working on over at Ontario Tech University. But before that, I just wanted to talk a little bit more about some non-profit-centred AI tools.

AM: Following up from our last episode, where we got to talk to Rob Hunt, who’s a PhD candidate at Concordia University – and for those of you who haven’t heard that yet, I would definitely go back and give it a listen; Rob is really, really insightful. We talked about bossware, and bossware is the kind of technology used by employers to oversee remote workers. A really popular example of that is actually a feature that’s enabled on Microsoft Teams. But alongside that, we also touched on the kind of technology and AI-enabled tools that are created to support unique industries.

One question that really tugged at my mind afterwards was which platforms alongside the big ones – obviously, like Microsoft and Google – were specifically targeting non-profits. So I did some research. And I discovered that they lined up pretty neatly with Rob’s initial idea in that they were mainly related to fundraising. So a good example is a product called Raise by Gravyty. Raise is the name of the product, Gravyty is the company that created Raise, and Raise itself is a subscribable tech product, meaning that you pay a monthly fee to use it. And when you go on their website, they really almost immediately greet you with a pop-up on the home page, promising that you’re going to earn at least two times the amount of your Raise annual contract price in the first year. And there’s something about the kind of infomercial-style marketing that is both really nostalgic here and gives me pause a little bit, because there’s a proof of concept that’s missing here, right?

Now, putting Raise aside, there are also additional tools like Grantable, which is an AI-powered grant-writing system. And what I find really interesting about Grantable – and this stands true for almost all of the AI apps, right, or at least the vast majority – is that they all integrate a really familiar large language model, a form of generative AI, you might even say, and that’s ChatGPT, which is kind of an unavoidable behemoth in this space. And this is important because OpenAI made it really clear that all of the information that is fed into ChatGPT is used to train its algorithm. Which means that for all ChatGPT products or GPT-powered platforms – so that’s ChatGPT and also DALL-E, which is the image-generating platform, also made by OpenAI – all of the work that is generated, all of the prompts, everything, none of that’s private. It all serves to train the product. So there’s something really important to think about here, especially for non-profits who may have some qualms about the really sensitive information that they share with these platforms.

Now, there are many more apps that have similar promises to give you more output with less work so long as you subscribe. DonorPerfect is one of them, iWave is another, and there are way more that promise you the easiest workflow ever when you use their products. But what’s important here are two things. One, that privacy component that we just discussed with GPT. So there are some really important considerations that non-profits need to make there. And second, that none of these platforms actually run independently. So in order to be used, they have to be monitored. So they don’t actually replace any worker entirely. They’re just a tool that you add to your workflow. And as many of us know, and I will gladly attest, Asana and Monday, Trello, Teams – none of these platforms or tools have ever really removed the need for any human worker. And in some cases, they don’t even make the work faster. But the platforms themselves, either because the AI revolution is really early or because they’re just not designed to run independently, still require oversight. And I myself have lost countless hours inputting data into various platforms that are supposed to make my life easier. They’re supposed to make my job easier. But then I just turned around and spent almost twice as much time correcting mistakes or inputting more data or overseeing the workflow. And so here Rob’s advice that organizations should be really aware of why they’re onboarding a platform is so true. Distinguishing business needs from tech FOMO is something that every organization just has to do for itself. Regular check-ins with our teams to see whether or not these platforms are actually helping and making the work easier and faster, as per all of those infomercial-like ads, are also really key to kind of hold us accountable and make sure that we’re focused on productivity, as opposed to keeping up with trends.

Moving us beyond platforms, subscribable tools, and the general productization of tech, I became really curious to learn whether there are any interesting things that might be merging a non-profit, mission-driven focus with the technical world of AI. Basically, like some form of tech-enabled philanthropy. So to learn more about that, I called up Dr. Peter Lewis, who’s a computer science professor at Ontario Tech University and Canada Research Chair in Trustworthy AI, to talk to us a little bit about some of the projects that he’s working on.

Peter Lewis: Hello, I’m Dr. Peter Lewis, I’m a Canada Research Chair in Trustworthy Artificial Intelligence at Ontario Tech University, which is in Oshawa, Ontario.

AM: Welcome to the show, Dr. Lewis. So from what I understand of your work, obviously, you’re an academic because you’re based at the university, but you also do a fair amount of community-oriented work. And I’m wondering if you could walk us through that just a little bit? What does your day to day look like? And also, what are your primary focuses?

PL: So yeah, I’m a professor, I’m an academic researcher, but I think it’s really important to work with companies and civil society, non-profit organizations, in order to really ground the work, especially in something like artificial intelligence that’s having the kinds of impact and raising the questions that it is today. So, you know, my academic background was on the, kind of, really in the technical parts of certain kinds of artificial intelligence, it seems like before it became cool, if you like. And so it’s been really interesting to kind of watch that transition, as it started to kind of find its way into all these different organizations.

I’ve worked in general with smaller organizations, I would say, and some of those have been non-profits, some of them have been smaller businesses. And the common theme really, I think, is organizations that don’t have the in-house expertise or the resources or ability to hire their own AI team, but could really benefit from some expert advice and some support in figuring out what AI means for them, what sort of opportunities and risks might be there. And so part of it has been around working with organizations just to try and figure out what this AI thing is and how it might impact them and what opportunities there are. I’ve also been particularly working with non-profits that advocate for various equity-deserving groups – such as, for example, people with disabilities, people who are food insecure, vulnerable newcomers – and trying to understand how AI impacts those groups, and then how that will in turn need to impact the advocacy and support work that they do.

AM: Oh, there’s just so much to unpack there. And I’m really excited to learn more about the projects that you’re working on, and what these collaborations and supports for community really look like. But before we get into that, one of the questions that I had for you, and one of the curiosities that I have just out of doing my own research, is around AI risks and opportunities, because I feel very much like in the past year or so, or in the past, like, two years maybe, the emergence of AI as a major area of interest has grown exponentially. And with that interest, there have been a number of conversations around opportunities and risks. There are folks who are really anxious about what the advent of AI on a popular level really means for them in their jobs. And then there are others who are really excited about opportunities, and you know, the research is incredibly mixed. And even when I speak to people, folks are generally of two minds; it really depends on how they feel at that moment. I’m wondering for you – who’s, you know, in the field and speaking to people and working on these projects – what you’re hearing?

PL: I think most of the conversations that I have are with organizations wanting to know, what is this AI thing? And is it really going to have a huge impact on me? I think this speaks to the – perhaps the sense of anxiety that you talked about earlier. There is a sense that if I don’t get up to speed with this, then it’s going to have a huge impact on the way we work that we’re not in control of. And thankfully, actually, the answer to that question is usually no, you’re going to be okay. And it does just represent a set of opportunities, if used in, what I might say, a careful and mindful way. But usually, I think that the questions that we get asked are fairly kind of broad and open-ended in terms of, you know, what does this mean for us, right, and how do we start to map that out?

And as I said, I think with some of the non-profits with a social mission, for example, I think often they have specific questions around the impact that AI in the world, generally, now – in society and government and companies – is having on their service users or the communities that they’re there to advocate for. So often they have quite specific questions about that, which depend on the particular group that we’re thinking about. For example, it’s well documented now that AI systems have what’s been called a kind of, quote, “structural propensity to systematize and propagate bias and discrimination.” And of course, if you’re advocating for an equity-seeking group, this is a really important question, right? I think non-profits should be aware of the issues around that, and how that works, and what it means, and to talk about it.

And then some really practical things, you know: what can we do to assess and mitigate these sorts of negatives that often come with this technology? We have a lot of conversations around things like that – how to do, say, bias audits for machine-learning models, for example; we’re working with some healthcare organizations around that at the moment. And this is really because I think people, the organizations, don’t want to accidentally sleepwalk into doing something that’s actually counter to their mission without even realizing it, because they adopted a particular piece of technology. But that’s the kind of thing that we’re working on with people and helping them with.
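For readers curious what a bias audit can look like in practice, here is a minimal Python sketch. The data, groups, and metrics are entirely hypothetical, and a real audit would use an organization’s own predictions, outcomes, and protected attributes, and would go well beyond two numbers, but it shows the basic idea of comparing a model’s behaviour across groups.

```python
# Minimal sketch of a bias audit: compare a model's behaviour across two groups.
# The data here is hypothetical; a real audit would use the organization's own
# predictions, ground truth, and protected attributes, plus domain context.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])   # model decisions
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])  # protected attribute

def selection_rate(pred, mask):
    """Fraction of people in a group who receive the positive decision."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Of the people in a group who deserved a positive outcome, how many got one."""
    positives = mask & (true == 1)
    return pred[positives].mean() if positives.any() else float("nan")

mask_a, mask_b = group == "a", group == "b"

# Demographic parity difference: gap in positive-decision rates between groups.
dp_diff = selection_rate(y_pred, mask_a) - selection_rate(y_pred, mask_b)

# Equal opportunity difference: gap in true-positive rates between groups.
eo_diff = true_positive_rate(y_true, y_pred, mask_a) - true_positive_rate(y_true, y_pred, mask_b)

print(f"Demographic parity difference: {dp_diff:+.2f}")
print(f"Equal opportunity difference:  {eo_diff:+.2f}")
```

Numbers close to zero suggest the model treats the two groups similarly on these particular measures; large gaps are a prompt for the kind of contextual investigation Lewis describes, not a verdict on their own.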

AM: Yeah, not just from my own research, but even just coming from the non-profit space, it makes so much sense. And I feel like one of the really key opportunities and unique features of the non-profit industry and sector is that kind of careful consideration around the “why,” around ensuring that things are mission-aligned, and that’s one of the things that I love about our sector so much. But I’m really glad that that level of care and consideration is also taking place in these partnerships. But speaking of partnerships, I know that you and your team at Ontario Tech have an interesting project or partnership going on with CNIB. And I was wondering if you could tell me a little bit more about what that’s about.

PL: We have a strategic partnership with the Canadian National Institute for the Blind, CNIB. And we’re currently working on a project with them around what explainable AI should be for people with various different types of sight loss. One of the key things in deploying an AI system into your organization is often trying to understand how it works. I mean, they’re notoriously black-box systems, which basically means that we don’t necessarily understand how they come up with the decisions that they come up with. And it’s not just that the users don’t; it’s often that the people who develop the system don’t either. This is really what that sort of, quote, “black box,” unquote, terminology means. And so explanations of AI systems have become a really important part of it. Now, one of the problems with these explanations is that the technologies today are usually fundamentally really visual: they’re heat maps, they’re complex infographics and things like this, which are not particularly accessible, not very helpful if you’re living with sight loss.

AM: That’s amazing. Immediately, I imagine that there haven’t been very many projects like this in the world. And so I wonder, as you’re approaching this, and as your team is approaching this, what does success look like for you? Or actually, to phrase it a better way, what does a useful deployment or use of AI look like on a project like this?

PL: Yeah, that’s a good question. And actually, one of the principles of the project we’ve got is that we’re not going into it with any preconceptions around what that needs to be. And co-design with the sight-loss community is a really important part of that. There are obvious traditional ways in which we do non-visual interactions with computers, such as haptic interfaces, such as verbal interfaces and things like that. All of those are possibilities. And we can have kind of conversations with an AI system, for example, in which it might be able to explain, in particular ways, some of the decisions that have been made and why. And there are also kind of audio descriptive versions of graphical things. You can, for example, have an audio description of a chart that might have been produced by one of the classical explainable AI tools. So those are all things that are possible. I think what we really want to understand here is what’s the best practice there, what’s really helpful for people, and then how do we codify that? There’s a few options, but I think we need, you know – one of the mandates of our project really is to be able to develop these best-practice guidelines so that all sorts of organizations can do explainable AI in a way that is more accessible.
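As a rough illustration of the kind of non-visual explanation being described here, and not the CNIB project’s actual approach, which is being co-designed, the hypothetical Python sketch below takes the feature-importance scores that classical explainable-AI tools usually draw as a chart and renders them as a plain-language sentence that a screen reader or voice interface could read aloud. The feature names and weights are invented for illustration.

```python
# Hypothetical sketch: render the kind of feature-importance output that
# explainable-AI tools usually draw as a bar chart into a spoken-word sentence.
# The features and weights below are invented for illustration.

def describe_importances(importances, top_n=3):
    """Turn {feature: importance} into a short, screen-reader-friendly sentence."""
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    parts = []
    for name, weight in ranked:
        direction = ("pushed the decision towards approval" if weight > 0
                     else "pushed the decision away from approval")
        parts.append(f"{name} {direction} (weight {abs(weight):.2f})")
    return "The main factors were: " + "; ".join(parts) + "."

example = {"household income": 0.42, "time on waitlist": 0.31, "postal code": -0.18, "age": 0.05}
print(describe_importances(example))
```

Whether a sentence like this is actually the most helpful form, versus a conversational, haptic, or audio-described interface, is exactly the question the co-design process is meant to answer.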

AM: I think it very easily goes without saying that this work with the CNIB is absolutely incredible. And I really can’t say – both in my research and in my own experience, and from my conversations with people – that I have heard of anything that is focused in quite this way on really rethinking not just the role of technology but also the opportunities, in a way that’s really community-first. And so I just wanted to start off by saying that I think this is so inspiring. Well, not start off. I think I said it earlier. But just really reiterate how cool this is. And to also ask you a little bit more about a project that I think you’re doing in Toronto, about food insecurity. And I know that you mentioned it at the top of this interview, when you were talking about kind of creating systems to help alleviate food insecurity. And I think that’s definitely top of mind for everyone right now. But I’m really curious about what that kind of project might look like, also regarding technical tools, and what the execution might be, as well.

PL: Yeah. So food security is a huge issue globally, as well as locally. And I think it’s kind of incumbent on all of us, when we have something we can do, to try and tackle these things. And I think one thing that I would argue is that the solution to food insecurity is not more food banks, right? It’s actually in solving the structural issues behind it. And so what we’re looking at doing at the moment is looking at how we can empower communities to connect in order to be able to solve their own food-insecurity challenges. We’re doing this with an organization called the Pamoja Institute. And essentially, the way the technology works is that, on the face of it, it’s a food-sharing app, right? It’s like an Uber Eats, but for food sharing and food security. But really, the clever thing is that we’re looking at the social network that underpins the food sharing, because actually, that tells us about how well connected the people are who are doing all that labour within those communities. Who are the people who are not connected? Who are the people who are very tenuously connected, who maybe sometimes are able to be supported by their community, and sometimes not? And then we’re actually repurposing or reimagining, if you like, recommender systems, in order to be able to make recommendations to people to engage in different ways and to connect in different ways, such that we build social capital within those networks, so that we build multiple links, if you like – kind of bonding social capital, where we can create resilience in the network.

And we’re also looking at how we can bridge between different communities. So for example, making recommendations that bring together two different communities in two neighbouring buildings that might not have been connected before. And then by creating those additional links, we create more opportunities for people to be able to support each other. And so really, what this is about is having a little bit of algorithmic work behind the scenes that models that social network and then intentionally targets how we can strengthen it in various different ways, so that people occupying that social network can support each other through things like food security. And it’s not just limited to that. For example, I know the team is also looking at how we can support people in the winter months as well, through, say, the sharing of coats and blankets and things like this. I think there’s all sorts of opportunities, if we can target this technology towards strengthening communities, for those communities to then be empowered to support themselves. So that’s really what this project is about.
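To make the idea of a recommender that strengthens a network a little more concrete, here is a minimal, hypothetical sketch using the networkx library. It is an illustration of the general technique, not the Pamoja/Zero Hunger system: people are nodes, past sharing interactions are edges, and the invented scoring rule favours new links that reach poorly connected people or bridge separate communities.

```python
# Minimal, hypothetical sketch of a "connection recommender" over a food-sharing network.
# Nodes are people, edges are past sharing interactions. We suggest new links that
# would connect weakly connected people or bridge separate communities.
import networkx as nx
from itertools import combinations

G = nx.Graph()
# Two small communities (e.g., two neighbouring buildings) plus one isolated person.
G.add_edges_from([("Ana", "Ben"), ("Ben", "Chloe"), ("Ana", "Chloe"),   # building 1
                  ("Dev", "Esi"), ("Esi", "Farah")])                    # building 2
G.add_node("Gord")  # not yet connected to anyone

def recommend_links(graph, k=3):
    """Score every missing edge by how much it helps poorly connected people
    and whether it bridges two otherwise separate communities."""
    communities = list(nx.connected_components(graph))
    def community_of(node):
        return next(i for i, c in enumerate(communities) if node in c)
    candidates = []
    for u, v in combinations(graph.nodes, 2):
        if graph.has_edge(u, v):
            continue
        # Reward links to low-degree people, plus a bonus for crossing communities.
        score = 1.0 / (1 + graph.degree[u]) + 1.0 / (1 + graph.degree[v])
        if community_of(u) != community_of(v):
            score += 1.0
        candidates.append((score, u, v))
    return sorted(candidates, reverse=True)[:k]

for score, u, v in recommend_links(G):
    print(f"Suggest introducing {u} and {v} (score {score:.2f})")
```

A real system would use richer signals than degree and connectivity, but the design choice is the same one Lewis points to: the recommender optimizes for the health of the network rather than for what an individual is likely to click on next.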

AM: All right, quick note from editing Amina with an update on the food insecurity app that Dr. Lewis was just describing. So, since recording, the Pamoja Institute have actually released more information about the app, which is now called Zero Hunger on their website. And if you’re interested, I really encourage you to go read more – it’s really cool. You can find them at pamojainstitute.org.

AM: Oh, man, you know, I’m really thoroughly enjoying our conversation. I just have to say that I am – I feel like I’m learning so much. And your perspectives are really, really unique. And I really appreciate you taking the time to talk with me today. But I’m very conscious of time. And I know that we’re nearing the end, and I have two regional questions that I’m just dying to ask you. And one of them is just a really broad-strokes question about the state of non-profits in Canada in general. And so I know that you’ve worked internationally, and now you’re working here fairly locally as well with some communities, and you’re creating bespoke products. And I’m curious if, in your perspective, you feel like the state of, you know, our non-profit industry is fairly healthy, where we’re focused on future opportunities in maintaining the work that we’re doing, or if you have found, in your experience, that there are actually some gaps that we might want to fill.

PL: Canada has a thriving non-profit sector. When you look at it in comparison with many other countries, it’s really exciting, all the different things that are happening here. And I think that is in part, you know, as we talked about earlier, due to the level of philanthropy that we have here and the community connectedness of some of those non-profits. So I think there are absolute opportunities. There are really great opportunities here for Canadian non-profits, if supported well, to be global leaders in this space and in how to really do this community-driven, public-interest AI technology. I think there’s a lot of really exciting opportunities there.

AM: Well, that’s extremely heartening. And I for one, I’m very, very happy to hear that. And I think I would very strongly agree. So my second question, also regional in nature, is about the state of tech in Canada, which is to say that, one of the things that I’ve been turning over in my mind for a very long time is the fact that – both within this episode as we’re talking about, you know, the advent of AI and its relation to possible implementation in non-profits – one of the things that also concerns me, and concerns so many people, is the fact that many of the largest players in tech, the ones who create major platforms, are companies that are formed in different nations, so just outside of Canada. And so we can look at “FANG” [an acronym for Facebook, Amazon, Netflix, and Google] for example – you know, Facebook, or Meta, or Amazon, or some of the platforms that we were talking about earlier today, in our episode as well, like Google and Microsoft – are fundamentally American companies. And so they’re built with just different values in mind. And so I’m always curious about the best ways to go about not recreating or retroactively creating something to suit us after the fact. But what does it mean to have a tech-enabled industry or, you know, partnerships with non-profits that are distinctly Canadian that focus on leveraging technology from a place that is uniquely reflective of our society and our values and the way that we live?

I’m very curious to ask you this, because I know that you’ve worked internationally, and now you’re here in Canada, doing things that are extremely bespoke. You’re very, very close to community. And so I always wonder, what is the best way to go about, or what are the things that we need to think about even when we are approaching, you know, this conversation from the vantage point of Canadians building meaningful guardrails that are reflective of the way that we live and we view the world?

PL: Yeah, you’re right. That’s a huge question. But it’s also a really important one. And it’s probably one that comes with a huge answer as well, to be honest with you. So I think that the concern is well founded, that if we’re not on the front foot with this, then we end up very much using technology that’s been developed with a set of values that we don’t recognize or that don’t align with the way that we would have done it. And so I think, you know, you talked a little bit earlier on about Uber and platform labour, and the kind of rise of, say, in this case, Californian tech companies with a particular worldview, and then we kind of have to go along with that, because that’s the way things have happened. And I’m not saying there’s not benefits to that, but it certainly has a range of impacts that we might want to be different. And I think in answer to your question about what’s the right way, I think there’s many different aspects to it.

So, you know, on the one hand, there’s things like regulation, so we absolutely should be encouraging and facilitating the creation of good quality legislation and regulation around this. And there’s been some progress on this around the world and within Canada. And then we also have, you know, guidelines, that can be very helpful for people who are looking for how to do that the right way. For example, the Government of Ontario have produced a set of what they call beta principles, which talk about how AI systems should be developed in order to be appropriate for use here. And one of those that I think is really important that they’ve really highlighted there is human-centricity. And I think this is perhaps something that, in my experience – having worked in Europe and in North America – I think this is something that I really see coming through in a Canadian attitude towards technology, is that human-centric development process, right? Regulation is important, but it’s not enough. It’s also important that we co-design things with people, we understand the way they perceive it, and we don’t just say, “Oh well, this passed the regulation, and it’s passed all the engineering checklist things that we decided were important to be able to verify this thing, therefore people will trust it.” Like, I think that’s not the right way to think about it. But instead, if we’re asking people, “What are your concerns about this? And how do you perceive this? And how would you use it? And what would the risks be to you?” And really trying to understand that diversity of perspectives that people would have about the development of a new technology at the design stage, and the conception stage. I think that’s something that we can do really well here in Canada, and I’ve seen examples of that with numerous organizations.

And things like privacy by design, you know, were piloted and championed in Canada. I think we have a real kind of equity-by-design AI approach that we could really lead with globally. And there’s a lot of evidence of that, for example. In summary, I think there’s the regulatory, legal kind of thing, there’s the technical correctness – so a lot of people who talk about, say, responsible and trustworthy AI, they’re actually talking about how to make reliable technical systems. And that’s a really important question. But it’s not the same question as, you know, how are people going to react to it, or what’s the impact on a community going to be of a particular technology, if it’s designed this way or designed that way? I think my starting principle would be involve people at the start and do it through co-design. And be prepared to be surprised, and be prepared to decide that it’s not the right solution at all. But also, I think if you do that you stand the best chance of getting it right and getting something that really will benefit people and people will legitimately trust, right? In a justifiable way, because they’ll be part of the process. To me, I think that’s part of what I see Canadian civil society doing, starting to do, really well with AI systems that’s maybe a little bit different from some of the examples you might see elsewhere.

AM: All right. To wrap up our lovely conversation today, I wanted to ask you if there are any big things that you’ve been thinking about or anything that’s been really exciting or really energizing to you that you might want to share with our audience. Basically, any final thoughts?

PL: One of the things that I’ve written about and I do think about quite a lot is, I feel like for too long we’ve been, essentially, on the defensive when it comes to things like AI technology. So for example, we look at a piece of technology that comes out, and then we try and analyze it and we realize there’s bias. And so we’re kind of defending against, say, gender bias or ethnicity bias in a tool that someone else has developed. And then we’re in a kind of defensive situation. And I think I’d much rather we moved to being on the front foot. And I think there’s a lot of opportunities to use AI-related technologies of various different kinds to do things like community building, strengthening resilience in organizations. I talked a little bit earlier on about the work we’re doing on building social capital and resilient communities to support things like food security. And I think there’s really great opportunities for us to reinvent, if you like, the way AI is even thought about as something that can do that, because it absolutely can. But we’re used to actually, instead, these technologies doing things like recommending something for us to buy. And it’s a very different kind of economic-driven interaction, rather than, say, a community-focused interaction.

So I think I’m really excited about all the opportunities there and the partnerships that come with the non-profits that will drive this. Because I think there’s all sorts of avenues for new AI-based technologies to enhance and support community action and act in people’s interests, support them to be able to be empowered and to be able to achieve their own ends, and things like that, that we haven’t quite seen yet, given the focus that we’ve had so far on the role of AI in much more economic interactions.

AM: I just want to say a really, really big thank-you to Dr. Peter Lewis. His work is so groundbreaking and also such a clear example of what’s possible when we work in partnership with one another. So as Dr. Lewis says, co-creation and co-design are way more than consultation; they are partnership. Canadian non-profits already rely really deeply on their relationships and longstanding connections with the communities that they serve. And that really unique awareness of service gaps, discrepancies, and opportunities is what makes them so special. Using and leveraging this knowledge and these relationships is so key, because it’s impossible to manufacture these relationships inorganically. Like, there’s no way that, without long-standing effort in a community, really good staffing, and genuine empathy and care, you could create such lasting bonds. And yet Canadian non-profits are so well known for doing exactly that. And these things don’t just support the growth of our sector either. They can also help, as we’ve seen through the work of Dr. Lewis, inform and positively influence the future of the tech industry too.

Now, that ONN report that I keep referencing talks about the importance of trustworthy AI that has proper governance and oversight, especially when it comes to non-profits, just given the complexity of our work. So without really carefully constructed ethical and legal frameworks that can guide AI, there’s a possibility that using it can cause inequities and uncertainties and harm that would kind of negate all of the positives and the efficiencies that it’s supposed to bring. But bearing today’s conversations in mind, I really believe that this ethical work is possible, and not just possible, but that it’s happening. And it’s evidenced by some of the really unique and bespoke work that’s happening at Ontario Tech, led by Dr. Lewis. Now, with greater investment in slower work and ethically motivated partnerships, the non-profit sector is in such a unique and wonderful position to support the growth and success of more conscientious and tech-enabled work.

That brings us to the end of the show. In our next episode, our journey into non-profit work is going to take a slightly new turn as we look into the state of work and working in non-profit: quite specifically, what it’s like to work at and clock in to a non-profit job. Until then, thank you so much for listening.
