From Algorithms to Altruithms: The Fourth Social Purpose Revolution

This is the first of two articles about artificial intelligence.

 

Although we recognize it is a loaded question, we asked Alexa, Amazon’s virtual assistant, “how can we solve homelessness?”  The boilerplate answer, not surprisingly: “Sorry, I don’t know that one.”

It is easy to conclude that machine intelligence is a long way from helping us solve social problems. The future, however, is nearer than we think. Moreover, we believe the role of artificial intelligence, or AI, in delivering social good and diminishing social harm is the most important topic that the social impact sector should be considering. The future of humanity quite literally depends on it.

As a chatbot, Alexa is an example of narrow, or specialized, AI. It is a reminder that AI is not a distant, far-into-the-future topic, but rather something that is present in the daily lives of most Canadians. From the predictive text of your Gmail replies, to your Facebook feed and Netflix viewing suggestions, to detecting and limiting the fraudulent use of your credit card, AI is increasingly omnipresent.

While popular culture is awash in dystopian applications of AI, from Terminator to Black Mirror, we are also seeing many socially beneficial applications of AI emerge, including in health care diagnosis, analyzing and summarizing research, soil and water conservation, and even in helping produce artistic works. As we have argued in our paper In Search of the Altruithm, “AI can be made to be generative, beautiful and not merely ethical, but rationally compassionate and just, enriching our lives beyond what we can currently imagine.”[i] But, in order for that to happen, we have some catching up to do.

Technological revolution and social innovation

As the World Economic Forum has signaled, the “fourth industrial revolution,” fueled by machine learning, big data, the Internet of Things (IoT), autonomous vehicles, 3-D printing, blockchain, gene editing, implantable devices, and, potentially, quantum computing, will profoundly alter the way we live, work, and relate to one another. It will also radically affect the ways in which we approach “social good.” For example, as philanthropy futurist Lucy Bernholz asks, as our individual and collective dependence on digital infrastructure intensifies, how much of that infrastructure sits in the public domain, and what does civil society look like when it takes a largely digital form?[ii]

If the last two centuries have taught us anything, surely it is that technological advancements are not equally distributed or universally beneficial. The three previous industrial revolutions were hugely disruptive, bringing important advancements for human prosperity and health, but also giving rise to significant social challenges and market externalities such as human-induced climate change (which began as we started burning coal), rapid urbanization, child labour, urban sanitation issues, air and water pollution, and industrial-scale farming and incarceration. Our social innovation has always played catch-up with our tech innovation.

The first industrial revolution, marked by steam power, the combustion of coal, and mechanization, ushered in revisions to the Poor Laws of England, facilitating new forms of charitable relief. Mutual societies and industrial non-profits emerged, as did social reformers such as Robert Owen, Elizabeth Fry, and John Howard. New fields such as sociology and psychology arose, alongside early forms of what we now call corporate social responsibility. Despite the emergence of these double-edged innovations, economic and social disparities intensified.

The second industrial revolution, led by electrification, mass production, and the internal combustion engine, saw the emergence of social work, urban planning, wilderness conservation, co-ops, and formal charitable organizations such as the YMCA, Salvation Army, and Red Cross. The Pemsel case, the 1891 decision of the British House of Lords, outlined the “four pillars” of charity still in use today: pillars we now recognize as wholly inadequate for a modern context.[iii] In Canada, we saw the establishment of peer-support bodies such as the Neighborhood Workers Association in Ontario and the Antigonish Movement in Nova Scotia, which blended adult education, co-operatives, microfinance, and community development. The seeds of what we now call the “social economy” were planted, profoundly changing how society thought about, and dealt with, social good, relocating it from an elite concern to a democratic one.

The post-war third industrial revolution, marked by electronics, automation, mass access to aviation, and, eventually, the internet, saw the rise (and partial fall) of the welfare state, and cemented the role of non-profit organizations as essential to alleviating the effects of both market and government failures. Mass media technologies facilitated the rapid spread of social movements, as resurgent Indigenous nationhood, anti-war, and anti-colonial movements the world over drew inspiration from Gandhi, the civil rights movement, and other sources. In Canada, the “third sector” ballooned: between 1974 and 1990, the number of registered charities grew by 80%, from 35,113 to 63,186.[iv]

While many areas of social need continue to grow, the combined value of government, charitable, and corporate support for the innovation and development of the social sector has not kept pace. As a result, we now see a structural social deficit, with intensified calls for investments in social innovation, social finance, social R&D, adaptive capacity, and systems leadership.

The social impact sector is a by-product of past industrial revolutions, but its role has been more about cleaning up the broken pieces rather than influencing social and economic policy and generating new futures. This can, and must, change.

We have already seen AI start to displace entire professions, including bank tellers, customer service representatives, telemarketers, and stock and bond traders. Oil sands and mining firms are “de-manning” operations or moving to “zero-manning” them altogether. Within the next two decades, AI is expected to outperform and displace translators, truck drivers, retail sales workers, and workers in many other occupations.[v] New jobs will doubtless appear, but some predict that virtually all human employment could be fully automated within a century, finally realizing economist John Maynard Keynes’ prediction of permanent structural “technological unemployment.”

As surely as day follows night, these transformations will upend society and the social sector. In thinking about collective impact, complex social challenges, and “wicked problems,” it is unimaginable not to consider AI. However, there is precious little discussion about AI in the philanthropic or non-profit sector. Canada is an emerging world leader in AI science and innovation, and increasingly recognized for having an incredibly supportive ecosystem for social innovation. Yet a yawning chasm remains between social and tech. We can’t afford to be the clean-up crew of the fourth industrial revolution. We are all passengers on an AI plane that is picking up exponential momentum with each metre of runway. Some of the passengers are asking for more.

It starts with a conversation, such as the ones that Ottawa-based Future of Good and Newspeak House in the UK are prodding us to have. Collaborative think tanks like the Centre for Collective Intelligence in the UK and the Center for Artificial Intelligence in Society at the University of Southern California are resourced to take a deeper dive. At these centres, social workers, artists, and health practitioners work alongside data scientists, software engineers, and machine-learning specialists.

What is AI?

AI technology is commonly separated into “weak” and “strong” forms. The former has been around as long as computers; in fact, computer scientist John McCarthy coined the term AI in the 1950s, defining it as “the science and engineering of making intelligent machines.” Weak AI, of which Alexa and Siri are advanced examples, is entirely the product of human programming. Strong AI, on the other hand, refers to machine learning that is largely unsupervised: algorithms that improve through computational trial and error (deep reinforcement learning) and autonomous optimization rather than through explicit human design. For example, the computer program AlphaGo used an advanced form of weak AI to defeat a human champion at the Chinese game Go in 2016, a significantly more complex feat than IBM Deep Blue’s defeat of Garry Kasparov in chess, already a generation ago.

A mere two years later, AlphaGo Zero, the fourth iteration of the “program” (if one can still call it that, now that it has freed itself from the constraints of human knowledge), presents an example of strong AI: it learned entirely through self-play, with no human examples to guide it. Interestingly, it also consumes a fraction of the power of its earlier predecessors.
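For readers who want a feel for what “learning through computational trial and error” means in practice, the following is a minimal, purely illustrative sketch in Python. It uses a standard technique called Q-learning on a made-up toy task (stepping along a five-cell line to reach a goal cell), which is vastly simpler than Go and only conceptually related to the methods behind AlphaGo Zero; the point is simply that the program improves its own behaviour from experienced rewards rather than from rules a human has written for it.

import random

# Illustrative toy example only (not how AlphaGo Zero actually works): a tiny
# Q-learning agent that discovers, purely by trial and error, that stepping
# right along a five-cell line is the way to reach the rewarded goal cell.

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what has been learned so far; occasionally explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            best_value = max(q_table[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q_table[(state, a)] == best_value])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Update the value estimate using only experienced outcomes;
        # no human ever tells the program which move is correct.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

# The learned policy now prefers stepping right (+1) from every position.
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})

Real systems replace this lookup table with deep neural networks and add sophisticated search, but the underlying loop of acting, observing, and adjusting is the same idea described above as computational trial and error.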

The Holy Grail of AI, to some, is artificial general intelligence (AGI), a machine that can understand or learn any intellectual task that a human being can. Such AI remains, for now, science fiction. What is more likely in the near future is that we will experience a number of strong AIs saturating every aspect of our lives (from facial recognition and natural language processing to data mining and predictive analytics). Either way, we should pay close attention to the advice and warnings of those who study this topic. Whether we are looking at a 15-year or a 50-year horizon, a massive disruption will likely set into motion the next epoch of human existence. Because of this, it is worth examining a range of possible 22nd century futures.

Why should we care?

We need to care about AI because the future of humanity will likely be shaped by machine learning more than by any other factor, and in ways that range from extinction or enslavement to enriching our collective well-being far beyond what we currently think possible. The possible futures we outline fall into four broad categories, only the last of which creates the conditions for an AI-human co-created future in which positive social outcomes flourish:

  1. Civilizational collapse: This is the one scenario in which AI has no future, or is set back by decades or centuries, owing to devastation from climate change, soil depletion, nuclear conflagration, or some other cataclysm. David Wallace-Wells, Bill McKibben, and Nathaniel Rich are among those thinkers querying, with disturbing evidence, whether the human story is on the cusp of its terminal chapter.
  2. Civilizational replacement: Elon Musk, Sam Harris, and Nick Bostrom are among those who warn of runaway super-intelligent machines, which may come to deem humans expendable or even threatening. Some have compared Bostrom’s book[vi] on super-intelligent AI with Rachel Carson’s Silent Spring because of its prescient, existential warning for the future of humanity. Henry Kissinger warns of the end of the Enlightenment, while the Campaign to Stop Killer Robots warns of the potential extinction of our species at the hands of powerful autonomous, self-learning weapons systems.
  3. Benign containment: AI in this scenario, aware of the biases and crueler tendencies of humans, will take steps to contain and circumscribe our power, so that we cannot bring undue harm to other people or to ecosystems. The algorithms may well ensure we are collectively well nourished and have equal opportunity for leisure, creativity, and fun, but, in such a scenario, humans will lack agency, political power, or any meaningful form of control. Think of it as a human zoo. Much as the welfare of other primates is currently entirely in our hands, ours might soon be in the hands of future AI.
  4. Co-creation: A human-machine co-created future is the one scenario that envisions humans at least in the co-pilot’s seat, if not the captain’s seat: developing AI with the very noblest democratic, rationally compassionate, and just human values and aspirations baked into the design of every algorithm possible. Such a scenario could see us eliminate homelessness, halt climate change, find cures or life-extending treatments for countless diseases, and put an end to violence as a legitimate means of solving disputes.

As one futurist frames it, “Some humans will struggle against the AI. Others will ignore it. Both these approaches will prove disastrous, since when the AI will become more capable than human beings, both the strugglers and the ignorant will remain behind. Others will realize that the only way to success lies in collaboration with the computers. They will help computers learn and will direct their growth and learning.”[vii]

Elizabeth Good Christopherson, the president and CEO of the Rita Allen Foundation, echoes this, with particular reference to non-profit work: “Used poorly, there is no doubt that artificial intelligence can serve to automate bias and disconnection, rather than supporting community resiliency. For the social sector, a values-driven, human-centred, inclusive process of development can help to mitigate the ethical risks of developing artificial intelligence.”[viii]

How do we “do really good things” with AI?

Setting aside the use of AI to target citizens for marketing and surveillance, many existing AI applications are about solving human problems. Even Facebook CEO Mark Zuckerberg, defending his company against Democratic presidential contender Bernie Sanders’s challenge that billionaires should be taxed at a far higher rate, asserts that most billionaires are simply “people who do really good things and kind of help a lot of other people.” Yet Facebook’s impact, following the Cambridge Analytica data scandal, may be more malignant than benign. As Amy Webb, author of The Big Nine, notes, the “future of AI — and by extension, the future of humanity — is already controlled by just nine big tech titans, who are developing the frameworks, chipsets, and networks, funding the majority of research, earning the lion’s share of patents, and in the process mining our data in ways that aren’t transparent or observable to us.”[ix]

Ironically, the people who actually do really good things and kind of help a lot of other people (i.e. those who work in the caring and sharing professions, though they command far less in compensation than Zuckerberg) are generally in the most AI-proof vocations. These are jobs that require high levels of compassion and/or creativity, including caregivers, counsellors, teachers, and artists. But such professions should not merely be AI-proof; they must be AI-ready: ready to step to the forefront of the debates on the future of humankind, bringing critical perspectives and insight.

Webb goes on to note that “safe, beneficial technology isn’t the result of hope and happenstance. It is the product of courageous leadership and of dedicated, ongoing collaborations.” Chinese venture capitalist Kai-Fu Lee calls the notion of “friendly AI” a “blueprint for co-existence,”[x] the kind of development that gets us to a co-created future and avoids the likelihood of those other, darker possible futures. So far, that kind of AI has largely eluded us.

Think of a historical analogy: When post-war cities were built, fueled by an exuberant modernism and a blithe embrace of the liberating promise of the future, we made the mistake of turning to “experts” to manage the city-building process. The transportation engineers and architects, overwhelmingly male and overwhelmingly white, basically said “leave it to us.” Then they overbuilt cities, trashed heritage, and warehoused people living in poverty in gleaming edifices that ironically served to intensify urban violence and decay, ignoring the nuances of city life at the neighbourhood level. They had blind spots around race and ethno-cultural diversity, and little regard for the ecological footprint of their creations. We must ask ourselves: to what extent is high tech following similar patterns? Is the development of algorithms relentlessly open and inclusive? How participatory are our social hackathons? Are we speaking across disciplines and including the voices of those closest to the problem we are trying to hack? Are social innovators and tech innovators in the same room? The answer to these questions is often “no.”

In the next article, we will highlight the more hopeful, innovative social applications of AI already underway in the domains of health, the environment, arts and creativity, democratic engagement, and other common good activities. We will also offer up institutional and public policy suggestions for how we might ensure common good AI, as well as explore further how citizens and social impact practitioners can be more engaged in AI technological and policy development.

 

 

[i] Stauch, J., Turner, A., and Escamilla, C. (2019). In Search of the Altruithm: AI and the Future of Social Good. Institute for Community Prosperity, Mount Royal University.

[ii] Bernholz, L. (2019). Philanthropy and Digital Civil Society: Blueprint 2019. Stanford Center on Philanthropy and Civil Society. Retrieved from https://pacscenter.stanford.edu/publication/philanthropy-and-digital-civil-society-blueprint-2019/

[iii] See, for example, Senate of Canada. (2019, June). Catalyst for Change: A Roadmap to a Stronger Charitable Sector. Report of the Special Committee on the Charitable Sector.

[iv] Elson, P. R. (2009, March). A Short History of Voluntary Sector-Government Relations in Canada. The Philanthropist. Retrieved from https://thephilanthropist.ca/original-pdfs/Philanthropist-21-1-358.pdf

[v] Grace, K., Salvatier, J., Dafoe, A., Zhang, B., and Evans, O. (2018). When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research.

[vi] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

[vii] Roy, T. (2017, March 3). Singularity: Explain It to Me Like I’m 5-Years-Old (blog post). Futurism. Retrieved from https://futurism.com/singularity-explain-it-to-me-like-im-5-years-old

[viii] Christopherson, E.G. (2018, November 26). The Future of Listening: How AI Can Help Us Connect to Human Need, in Nonprofit Quarterly.

[ix] Webb, A. (2019). The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. PublicAffairs.

[x] Lee, K.-F. (2018, April). How AI can save our humanity [Video file]. Retrieved from https://www.ted.com/talks/kai_fu_lee_how_ai_can_save_our_humanity?language=en#t-21678
