In this first episode of the Working Title podcast, host Amina Mohamed explores AI’s transformative impact on the non-profit sector. In a conversation with communications scholar Rob Hunt, they grapple with the mixed results that the AI revolution has produced so far.
Through an AI-generated intro and a real conversation, host Amina Mohamed highlights the potential and pitfalls of artificial intelligence, emphasizing its role in enhancing efficiency and tackling global challenges. She delves into the Ontario Nonprofit Network’s report on AI, discussing algorithmic bias and the need for ethical AI use.
The episode features an interview with Rob Hunt, a PhD candidate at Concordia University researching AI in workplace management. Hunt discusses the concept of “bossware,” the implications of AI-driven employee monitoring, and the psychological effects of such technologies. The conversation underscores the importance of understanding and shaping AI’s role in the non-profit sector to ensure it supports rather than undermines human workers.
This is the first podcast episode from Amina Mohamed, one of five writing/podcast fellows working with The Philanthropist Journal. The fellowship is focused on the future of work and working and was made possible through funding and support from the Workforce Funder Collaborative.
Related links:
- Ontario Nonprofit Network report: The Impact of Artificial Intelligence (AI) on Canada’s Nonprofit and Charitable Sector
- The New York Times: “The ChatGPT Lawyer Explains Himself”
- Wired: “Who Is Mira Murati, OpenAI’s New Interim CEO?”
- MIT Sloan Management Review: “How to Monitor Remote Workers – Ethically”
- The Philanthropist Journal: “From Algorithms to Altruithms: The Fourth Social Purpose Revolution” (Note: Though the episode names only James Stauch, Alina Turner co-wrote this article.)
Transcript
Amina Mohamed: Hi, and welcome to Working Title, a podcast about work and working in non-profits brought to you by The Philanthropist Journal. I’m your host . . .
AI bots: . . . Amina Mohamed, and today we’re diving into a topic that’s reshaping the landscape of philanthropy and social change: the profound influence of artificial intelligence on the non-profit sector. From leveraging predictive analytics to optimize fundraising strategies, to harnessing the power of machine learning to revolutionize humanitarian aid, AI is proving to be a game changer for organizations striving to make a difference. But . . .
. . . of course, with great power comes great responsibility. How do we ensure that the use of AI in philanthropy aligns with our values?
Joining us today are thought leaders at the forefront of this AI-driven transformation in the non-profit sector. Together, we’ll unpack the stories, challenges, and triumphs that define this evolving landscape. So . . .
. . . whether you are a non-profit professional navigating these uncharted waters, a tech enthusiast curious about the humanitarian applications of AI, or simply someone passionate about making a positive impact in the world, you are in the right place.
Stay tuned, because the future of philanthropy is here. And it’s driven by pixels and algorithms. This . . .
. . . is Working Title. And we’re about to embark on a journey through the evolving frontier where artificial intelligence meets altruism. But first, a message from our team.
AM: Hey, real Amina here. That intro was pretty obviously presented by an AI voice. And it’s supposed to illustrate the growing but still imperfect strides that have been made in accessible AI voice technology lately. But what might be a little less obvious is the fact that the text itself was written entirely by ChatGPT.
So I prompted ChatGPT to create an opening monologue for a podcast that was investigating the impact of AI on the non-profit labour force. And what I got instead was a really, really long-winded speech – much, much longer than you heard – about how AI is uniquely poised to transform non-profits. That line about a revolution powered by pixels and algorithms? I mean, are you shocked that AI wrote that about itself?
In spite of ChatGPT’s thoughts on the matter though, I have a few questions about the future use of AI in the non-profit sector. There’s a really well-known saying in new media studies that we often overestimate the short-term impact of new technology and really underestimate its long-term effects. Think about the early rise of social media, for example. So, I grew up in the age of MSN Messenger and MySpace and Facebook. And those platforms were really exciting because being able to chat with our friends and share our lives was really novel at the time. And that change felt profound on its own. What I didn’t expect was the way that it would grow to fully redefine the way that I engage with the world, with news, my friends, my family, my job, and even myself.
Now with that in mind, I am really curious about the potential short-term effects that this new AI era can bring. As workers in non-profits, what exactly can we expect from this new tech revolution? And what role do we have to play in shaping a more equitable and safe future that is enabled but not controlled by AI? So to explore this further, I will look at the Ontario Nonprofit Network’s recent report on the impact of AI on the non-profit and charitable sector. And I will also explore ChatGPT just a little bit more. Then I’ll chat with Rob Hunt, a PhD candidate at Concordia University studying a new phenomenon called “bossware.”
A report on the impact of artificial intelligence on non-profits was released in late 2023 by the Ontario Nonprofit Network. It highlights the reality that non-profits, just like many other organizations, are already using AI to support some tasks, and that as the AI sector grows, more people will use it.
The report makes the case for considering the harms of AI as well as the good; as with all policy-making, the goal should be to make AI safe for everyone. One specific danger the report highlights is called algorithmic bias, which is when the makers of a tool fail to remove their bias or worldview from the tool itself. To give you an example, think about coming across the work of a painter who hates and never uses the colour red. That might not affect how colourful their work is, and it doesn’t stop it from being art, but it does mean that their preferences shape what you get to see in their work. Algorithmic bias is kind of similar. It can show up as echoing or platforming prejudicial, harmful, or dangerous ideas and language, either because the creator has built it in or because many users are engaging with that kind of content. It has the potential to affect the communities that non-profits serve. Writing in The Philanthropist Journal, James Stauch and Alina Turner identify those communities as women, youth, seniors, newcomers, racialized communities, and so on. They make the point that non-profits can’t afford to be the clean-up crew of the Fourth Industrial Revolution.
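To make that concrete, here is a toy sketch in Python of how a skew in training data becomes a skew in a tool’s output. The sentences, occupations, and pronoun lists below are invented purely for illustration; real systems learn the same way, just from vastly larger and messier data.

```python
from collections import Counter, defaultdict

# A deliberately skewed toy corpus: "engineer" only ever appears
# alongside "he," so that is the only association the tool can learn.
# (Invented data, for illustration only.)
corpus = [
    "the engineer said he would review the design",
    "the engineer explained he had fixed the bug",
    "the nurse said she would update the chart",
]

# Count which pronoun co-occurs with each occupation.
associations = defaultdict(Counter)
for sentence in corpus:
    words = set(sentence.split())
    for occupation in ("engineer", "nurse"):
        if occupation in words:
            for pronoun in ("he", "she", "they"):
                if pronoun in words:
                    associations[occupation][pronoun] += 1

# The tool can only echo the pattern it was fed:
for occupation, counts in associations.items():
    print(occupation, "->", counts.most_common(1)[0][0])
    # engineer -> he
    # nurse -> she
```

Nothing in the counting code itself is prejudiced; the skew rides in on the data, which is exactly why it is so easy to miss.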
Lastly, the report also makes the case that AI is not all bad but that we need to see it as a tool instead of a human substitute. According to the report, if government policies support and encourage AI to focus on what machines do better than humans, rather than focusing on replacing humans, we can grow opportunities and share wealth rather than concentrating wealth and limiting opportunities. This requires a realistic perspective so that we view AI tools as possible support systems in our work while remaining critical and cautious of their pitfalls. With this in mind, let’s now turn to the most popular AI tool in recent years, ChatGPT.
AM: When OpenAI launched ChatGPT in November 2022, it catapulted AI into every boardroom and future-planning session on Earth. Though AI tools existed before, ChatGPT shifted artificial intelligence from a research area in the tech world to a publicly accessible tool for anyone with a computer or smartphone. Most importantly, it became the background app for most other AI-marketed tools. Most chatbots, grant-writing tools, and other apps are really just ChatGPT-powered platforms. So you might think that you’re using a bespoke, non-profit-focused tool to help you write a report. But in reality, nope, you’re using ChatGPT.
So we’ll talk a little bit more about some of these examples later on. But this does raise the question, is ChatGPT the best chatbot on the market? To which I have to say, like, I don’t know, that’s a little bit complicated. We can say yes, in the sense that it works the best, but that doesn’t mean that it’s problem-free. Take the recent example of two American lawyers who famously used the tool to submit a brief in court, and that brief cited six cases. Unfortunately for them, those cases provided by ChatGPT just didn’t exist. The legal firm – Levidow, Levidow and Oberman, which is the one that employed those two ChatGPT-using lawyers – said in their statement that they, quote, “made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” unquote. Which is to say that they just didn’t believe that the tool could or would lie.
Now, this assumption got me thinking: if non-profit organizations rely on AI to support communications, so, like, to write emails or copy for their website or their reports, are we offloading the most valuable parts of our work to a tool that is known for being inaccurate? Is this really the best use of new tech? Or are there other avenues that might be worth exploring?
ChatGPT is a step toward something called artificial general intelligence, or AGI: an effort to get computers to reason on their own instead of following preset algorithms. ChatGPT, and the broader push toward AGI, poses a number of risks, especially to mission-driven organizations. And as we saw in the example of the Levidow lawyers, the tool is not exactly committed to accuracy. But it’s actually more than that. Researchers have also identified copyrighted content, misinformation, and harmful biases in generated text. ChatGPT creates answers to our questions by drawing on information scraped from the internet and pooling similar concepts together. It does not filter out harmful language, images, phrases, or racist tropes. And yet Mira Murati, CTO of OpenAI, said in a recent interview with Wired magazine that she believes AI tools are experiential and necessary. She claims we . . .
AI bot: . . . have to make sure that people experience what this technology is capable of. It’s futile to resist it. I think it’s important to embrace it and figure out how it’s going to go well. Initially, there was speculation that AI would first automate a bunch of jobs, and creativity was the area where humans had a monopoly. But we’ve seen that these AI models have the potential to really be creative. When you see artists play with DALL-E, the outputs are really magnificent.
AM: I just couldn’t help but include that quote, because the idea that resisting a new technology is futile is so dystopian. But in this case, it’s a really relevant point. In a 2020 article for The Philanthropist Journal, James Stauch [and Alina Turner] note that non-profit jobs are generally in the most AI-proof vocations but that such professions should not merely be AI-proof; they must be AI-ready. Which is to say that we can’t just be put in a position to respond to the outcomes of this new technology, or become victims of rapid change. We have to do our part in shaping its future. To explore some of the risks attached to AI, and to see how workplace platforms are inching more AI-powered services into our workflows, we also need to explore something called “bossware.”
AM: COVID ushered in the transition to fully remote work faster than anyone was prepared for. And with that transition, many large platforms like Microsoft Teams and Google released AI-enabled supports that were supposed to make remote work easier and help manage staff. Between 2020 and 2023, countless articles from every news outlet reported on the rise of worker surveillance and offered tips on how to avoid it. And some, like the MIT Sloan Management Review, provided tips on how organizations could monitor remote workers ethically. These tools are often referred to as “bossware.” The ONN report that I mentioned earlier lists AI-based apps that assist in recruitment, evaluation, and retention as a growing area. Now, as remote work continues, and as AI threatens to make further tech integrations a day-to-day affair, I felt it was necessary to look into what this might mean for non-profit work. Because although our sector is unique, it’s still vulnerable to larger labour trends. To do this, I spoke with Rob Hunt, a PhD candidate in communications at Concordia University who researches bossware. I was especially curious about his take on human capital management programs, which are tools mostly used in HR.
AM: Hi, Rob, thank you so much for joining me today and welcome to the show. I was wondering if we could start with you giving us a little bit of an introduction into both yourself and your research.
Rob Hunt: My name is Rob Hunt. I’m a PhD candidate in communication studies at Concordia University in Montreal. And yeah, I’m writing my dissertation on the use of AI as a tool for managing workers. And I’m primarily looking at office work, partially because there’s already a lot of really good work out there about gig-work platforms like Uber or TaskRabbit. There’s some good work about companies like Amazon that use a lot of algorithmic management in their warehouses. So I was sort of curious, like, how is this actually moving into what you might call white-collar work?
AM: All right. So Rob, let’s start at the top. Would you be able to help me understand a little bit more about what bossware is and what it might look like if I were to find it in my office?
RH: So, bossware. Yeah, “bossware” is a handy word because it captures one aspect of this very well, one that I think was especially acute when people were working remotely during the pandemic: if your boss isn’t there physically to watch you work, how do they know you’re working? Which is not really a concern for the workers, but for the bosses, obviously. So bossware is literally just software that managers can use to monitor their employees, remotely or in the office. And this can be pretty simplistic things, like counting how many times you tap on your keyboard in a day, or seeing that your mouse moves every five minutes, 10 minutes – whatever setting your boss might want to pick. It can be pretty invasive. There’s bossware that takes a screenshot of your desktop at timed intervals and just sends it back to your boss; sometimes freelance workers have that installed on their computers for clients, to make sure they’re not embezzling hours, I guess. Or you can even just turn on the camera on their computer and directly monitor the workers in their houses. And the big controversy that happened during COVID, I think, was that workers started to realize that your employer could install this on your workplace equipment – your computer, phone, tablet – without telling you. So there was a big kind of balloon of concern during COVID that basically your boss was spying on you in your house, or just monitoring you in these crude ways that felt undignified.
So yeah, that’s my take on bossware – it’s basically monitoring software. And there’s maybe a little bit of analysis in there; it might give you a kind of productivity score, or rank employees from top to bottom by who’s doing the best work. And I’m interested in that. But I’m interested more broadly in what gets called human capital management software. There are also human capital management platforms. Sometimes these terms change: I find the human resources world tends to be a bit into trends and gets excited about buzzwords. So yeah, basically, my research is about looking at the human capital management software industry, some of which is basically just platforms in the same way – platforms within a company. So a company might get, say, Microsoft’s HCM platform and use it across their whole enterprise. And there are actually some interesting similarities, I think, between that and what most people think of as platform labour – I think Uber is kind of the paradigmatic case – where, you know, you don’t really work directly for the company, your labour is entirely mediated through the platform, and you don’t necessarily have a human boss; you just have an app that tells you what to do. And I think we’re seeing that kind of thing happen in even more traditional white-collar office spaces, where you might have less contact with a human manager, even if you’re working in the office. There’s software I’ve looked at that I think of as automated micromanaging, where you might get these sort of automated nudges. You see it in other quotidian applications of AI in our normal lives, like when Gmail suggests how you reply to your email, or, you know, even just autocomplete on your phone.
We’re kind of seeing those things spread into work more generally. And this is even before ChatGPT-style generative AI, just kind of what the company might call nudges. They’re kind of just like, “Hey, you haven’t sent the quarterly report yet. And you normally would do that in April,” or things like that, that would just sort of pop up on your screen automatically, without having a human manager write you an email to that effect.
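For readers who want to picture what Rob is describing, here is a minimal sketch of crude activity counting plus an automated “nudge.” It uses the open-source pynput library for the keyboard and mouse listeners; the five-minute reporting interval and the April nudge rule are assumptions made for illustration, not features of any real product.

```python
# A minimal sketch of bossware-style monitoring plus an automated "nudge."
# Requires: pip install pynput. The interval and nudge rule are hypothetical.
import time
from datetime import date
from threading import Timer

from pynput import keyboard, mouse

keystrokes = 0
last_activity = time.time()

def on_press(key):
    """Every keystroke bumps the count and marks the worker 'active.'"""
    global keystrokes, last_activity
    keystrokes += 1
    last_activity = time.time()

def on_move(x, y):
    """Any mouse movement also counts as 'activity.'"""
    global last_activity
    last_activity = time.time()

def report():
    """Summarize 'activity' every five minutes. A real product would send
    this to a management dashboard; here we just print it."""
    global keystrokes
    idle_seconds = time.time() - last_activity
    print(f"interval summary: {keystrokes} keystrokes, idle for {idle_seconds:.0f}s")
    keystrokes = 0
    # An automated micromanaging nudge, fired by a rule rather than a manager:
    if date.today().month == 4:  # hypothetical "quarterly report is due" rule
        print("Nudge: you haven't sent the quarterly report yet.")
    Timer(300, report).start()

keyboard.Listener(on_press=on_press).start()
mouse.Listener(on_move=on_move).start()
report()
```

Notice how little the script actually knows: it reduces a workday to keystroke counts and idle seconds, which is precisely the crudeness Rob goes on to critique.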
AM: That was a fantastic, fantastic answer. I actually wanted to ask you a little bit about the term “human capital management.” And I ask because, in the non-profit space, what I’m hearing really frequently from the folks I’m speaking to is that the work is really only as impactful as the staff who are doing it. And that has a lot to do with, you know, the relationships that they build and the institutional knowledge that they have about the organization and about their clients, or the people they serve. But a big question across the board, as always, is retention: how can we keep people? How can we ensure that they’re working? How can we ensure that they’re happy in the office? And I’m curious whether human capital management platforms do anything to support that. And also whether you’ve come across, in your research, the kinds of responses workers are having to being subjected to, you know, working with some of these platforms?
RH: Yeah, definitely, the talent and retention thing is a big piece of it. That’s kind of why I expanded my focus to HCM platforms. Because, you know, the basic monitoring of workers, I think, is pretty straightforward, but also pretty easy to just call bad. Where I think the more interesting and complicated stuff is, is in the more – I’d say sort of touchy-feely things that HCM platforms claim they can do. A big, big part of my research, which builds on research I did on marketing software for my master’s, is about emotion detection. And kind of psychological surveillance, you could call it. This is everything from – Zoom, for example, recently announced that they have a tool that will monitor the emotional reactions of people in meetings and give the speaker feedback. Emotion detection has been around for a while. It’s highly controversial; the science behind it is debated even in psychology. But it’s interesting that there’s obviously an appetite for it – it keeps coming back. And then you also have things trying to sort of feel out the mood of employees, or morale. So yeah, one of the big things for HCM platforms right now is what they would call talent management, which is kind of the whole life cycle of an employee – it would even start with hiring. I mean, as you’ve probably heard, there’s a lot of use of AI now to do screening for initial rounds of hiring, whether it’s automatically analyzing resumés or, in my opinion, using somewhat dubious techniques to analyze video interviews. Again, this is the kind of emotion detection or personality profiling that will be applied to things like a video interview. I’m sorry, I’ve lost my train of thought.
AM: That’s okay. So is that, like, you’re doing an interview and maybe you look away a little too much, and the machine reads that and potentially flags it as suspicious? Or are you, like, unprepared?
RH: Yeah, exactly. I’m—
AM: —unprepared. That’s the word.
RH: Yes. Yes, exactly. I mean, again, that’s why I think of it as pretty dubious. It taps into a long tradition, I’d say, in management culture. You know, Myers-Briggs personality profiling has been very popular for decades, despite having no empirical evidence to support it as a useful tool. I’m sure anyone who’s ever worked in a corporation has experienced having to do some kind of team-building or personality-profiling exercise, and those are often based on these kinds of unsubstantiated psychological theories – or even on ones that are more rigorous. There’s one called OCEAN, a personality-profiling tool that has a bit more science behind it than Myers-Briggs.
But, you know, the degree to which they work, or are even appropriate to use at work, I think is debatable. But yeah, so to answer your question: yes. With the video analysis, they’ll run it through this AI that purports to rate people’s confidence and affability. Depending on the product, you can probably pick which kinds of metrics you want it to look for – trustworthiness, say. There is a whole part of this industry that’s very concerned with trust and safety. Some of the companies quite explicitly frame employees as threats, as risks. Some of that’s industry-specific: financial services and banks, you know, have to do a lot of compliance, and that’s where a lot of the AI comes in for them – doing things like automating legal compliance and making sure people aren’t stealing, essentially. So that’s where whether or not it’s surveillance becomes kind of a question, I think, because some of that stuff’s legally mandated. Part of it, too, is that everything has to be countable for these data analytics to work. So it often reduces your work to, like, how much time you spent in Microsoft Word that day, or how many emails you sent. It kind of turns everything into numbers in a way that I think can be dehumanizing and demoralizing, if you’re aware that that’s what your work is being evaluated on – just these brute countable qualities, like how many times you move your mouse.
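As a rough illustration of how those “brute countable qualities” get rolled into a single number, here is a hypothetical productivity score. The metrics and weights are invented for this sketch; real HCM vendors keep theirs proprietary.

```python
# A hypothetical productivity score built only from countable qualities.
# The metrics and weights are invented for illustration.
def productivity_score(emails_sent: int, hours_in_word: float, mouse_events: int) -> float:
    weights = {"emails": 2.0, "hours": 5.0, "mouse": 0.01}
    return (weights["emails"] * emails_sent
            + weights["hours"] * hours_in_word
            + weights["mouse"] * mouse_events)

# Two very different workdays collapse to the same number:
print(productivity_score(emails_sent=40, hours_in_word=1, mouse_events=1000))  # 95.0
print(productivity_score(emails_sent=5, hours_in_word=8, mouse_events=4500))   # 95.0
```

A day of inbox triage and a day of deep writing score identically, while everything relational – the mentoring, the phone calls, the trust-building – scores zero.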
AM: You know, something about that is just kind of creepy and inhumane. And it reminds me that one of the things I find so pervasive about remote work, even now – and this is across industries – is that it can strip away some of the softer, relational components of our work in a way that feels counterintuitive for productivity. And I don’t know about anyone else, but for me, at least, isolation and monitoring through third-party apps has just never made me feel better about what I do. So that’s definitely something I’m turning over in my mind right now. But that said, I also heard you say something about human capital management systems, and I’m personally quite interested in them. For the sake of this episode, I was looking into a report from the Ontario Nonprofit Network – it’s really about AI in the non-profit sector, specifically as it affects workers – and that report highlighted HR as one of the areas most prone to the adoption of AI tools.
But I’m wondering if you can give some examples of platforms that are part of this human capital management, this HCM bucket, so that people can see whether they may already be working with some of these platforms?
RH: Yes, that’s true. I mean, a big one right now is Microsoft. Microsoft has so many products, it’s a bit hard to define where one ends and another begins, because they kind of want to be, I don’t know – you could say they want to be the Amazon of workplace software, except there are way too many competitors; they’re not quite that dominant. But even at the university where I work and study, they adopted Microsoft for email and Teams, and I guess the whole 365 package, and it does do things like send you little reports once a month about how productive you were in the different apps. And you can turn that off, I think, at Concordia. But yeah, if you’re using Microsoft products at work, there’s a possibility that your manager has access to data on how you’re using them. So that’s a big one. And then there are other big ones – ADP is another. But then there are also small companies that will just offer one product, like that personality-profiling software I talked about before. There’s a Canadian company called Knockri that does video-interview assessments; they have an emphasis on equity, diversity, and inclusion. I mean, it’s actually such a growing sector that I’d be surprised if anyone working for an at-all-large corporation didn’t encounter these tools. But some other big ones would be ServiceNow – they’re a big one – and, as I said, ADP.
The thing is, this stuff is being rolled out – like I said, Zoom now offers this emotional-feedback software, and Google’s starting to move into this. And pretty much the big change for my work, one that was totally unpredictable when I started this five years ago, has been ChatGPT and other generative AI. So that’s something I’m having to figure out as it’s unfolding: how generative AI is being incorporated into management. And I don’t really have concrete answers for that yet. But I think you’re seeing a big push, as you said at the beginning of our conversation, to adopt it in lots of different ways. And you know, ChatGPT does not monitor you. It’s not bossware. But it’s getting incorporated into software tools that you might use normally, like, say, the Adobe suite, or even just Gmail or Outlook. It opens the door, I think, to having everything you do in those apps feed back into something, whether that’s feedback into the generative AI model or data going back to your boss.
AM: Yeah, I see what you mean. I moved back into the non-profit space about a year ago, as a bit of a break from academia and the corporate world. And we have Microsoft 365. And what I find particularly interesting about Microsoft – because I was working on Google products before – is that the email you just mentioned, where you get a bit of a summary of how you used the platform and all its various tools throughout the week, is for me incredibly imprecise. And that has a lot to do with the fact that I take a lot of calls, or I move between Outlook and Drive, mostly because I think I prefer the function of Drive, but then it’s the shareability of Outlook and OneDrive that people really rely on. But what I’ll get is “you spent zero hours doing this,” or “you had 15 uninterrupted days with no meetings,” even though in my Microsoft calendar there is a series of meetings, and even though I’m present in those meetings and probably using, you know, Teams to take them. I found it really interesting that there’s a bit of a disconnect between how I’m actually using the platforms and what the reporting structure is communicating. And I’m very lucky in the sense that my director does not micromanage me in that way, so it never really comes up. But it is a source of real curiosity for me, in the sense that I can clearly account for what I did, but it is never represented. I always find that so curious.
RH: Yeah, that’s a great example, right? I mean, you can already see how faulty these tools can be. I think in a smaller organization it’s probably less of a concern, as long as you can talk to your manager about what you were actually doing. I think it gets more concerning when you’re working at a large enough organization that that data just might get sent elsewhere without you even knowing. So yeah, I think you’re right, exactly. Some of these tools are really – they’ll call them AI, but they’re actually just really basic timekeeping technologies that don’t necessarily tell you anything. And even the flipside of your example is: what would it mean if you had a meeting every day for 15 days? I think there’s also a larger, more abstract issue of what we mean by productivity, and what we want out of our workers. Like, is 15 days of meetings being productive? I would say probably not. But it might depend on what kind of job you have, of course.
AM: Okay. I think my last question for you is specifically for our listeners – and, as I mentioned before, for those in positions of leadership in non-profits who might be thinking about using these platforms, or might be buying into the enthusiasm, or are worried, as you so rightly put it earlier, that they might be left behind by this new wave of technology. What do you think they should keep in mind regarding their use, and this issue in general?
RH: That’s a great question. Just be wary of chasing trends for the sake of chasing them. Be aware that a lot of the claims AI companies are making are marketing claims – they haven’t been tested; there’s nothing to test them against. There’s quite a lot of hype in the AI world. And I think there’s this very carefully manufactured pressure to make people think that – yeah, like I said, if you don’t adopt AI, your organization is irrelevant. So yeah, I think you just need to be more purposeful. Don’t think, “Oh, let’s get AI and figure out what we want to do with it.” Look at your organization, think about what problems you’re having, and then consider technology as one of the possible solutions to the problem. But I definitely wouldn’t default to it as the only one.
Be skeptical. Like I said, a lot of it is just marketing hype. But I think there’s also a decent amount of pseudoscience being peddled by some of these companies. I think emotion detection is very dubious – based on debatable psychological theories, and then on even more debatable technological techniques. Anything in the psychological arena, just be skeptical of it and think about why you think you need it. Even the Zoom tool I mentioned: why would you want automatic, unasked-for emotional feedback from everyone in a meeting? How useful is that, really? Maybe you could just ask people. I think my main advice in general is: talk to your workers. First of all, include them in the conversation about whether or not your organization should use these tools, especially if they are bossware. Don’t, you know, don’t spy on people. That’s just basically unethical, I think, even though it’s not illegal.
And yeah, I would just try to have an organization-wide conversation about what these tools are for and what the organization’s goals are in adopting them. You mentioned people being nervous about being automated out of jobs. I think that is a very real concern. There’s a tendency to just dismiss these things as knee-jerk Luddism, but as a lot of people are recognizing more and more, the original Luddites actually had a good point about their labour being made more tedious, less dignified – and paid less. So yeah, in the enthusiasm for AI, just keep in mind that you’re working with human beings, and they should be included in that decision.
AM: Perfect. Thank you so much, Rob. That was fantastic.
RH: Thank you.
AM: Thank you again to Rob for taking the time to chat with me. Now, that brings us to the end of today’s episode, but our deep dives into the intersections of AI and non-profits don’t end here. They continue next time with a conversation with Dr. Peter Lewis about some really, really cool real-time collaborations happening over at Ontario Tech University. So until then, thank you so much for listening.