Potential Pitfalls for Humanity and Buddhism Amidst AI’s Ascent

At a speech dedicated to Vesak at the Indian consulate in Hong Kong last month, I made the far-from-original point that the world was changing extremely rapidly. Key to this global paradigm shift was the ascent of AI and robotics, which is quickly becoming the story of our era. AI is now changing the world before our eyes. On 22 March, an open letter posted on the Future of Life Institute’s website called for a six-month pause in the training of AI systems more powerful than GPT-4. Co-signatories included tech moguls like Elon Musk and Steve Wozniak, Turing Award winner Yoshua Bengio, and Rachel Bronson, president of the Bulletin of the Atomic Scientists. At my talk on the 25th, the discussion turned to a particularly striking and resonant question about the implications of AI for society. Several concerns were raised, which I have attempted to articulate here as potential pitfalls of AI’s astronomical rise.

If inventors and CEOs were accelerating efforts to make robots acquire consciousness, was the AI and digital tech industry ignoring, or overlooking, the possibility that the digital world was in turn making humanity lose “consciousness?” The word consciousness, in this context, means something broader than simply being aware of one’s inner and outer circumstances. It encompasses our relationships with real people; our grasp of the layers of truth and falsehood, which allows us to navigate the world productively rather than maladaptively; and our connection to the sublime and the good, qualities like wisdom and compassion. In its full breadth, consciousness simply means our potential to be fully human and enlightened – something that can be nurtured or undermined.

The Muddling of the Real and the False

When we can manipulate responses from other people in virtual reality – a capability proposed by advocates of the Metaverse – are we truly leading a “conscious” life?


If such a life denotes not being manipulated (and social media needs no help in manipulating us with its algorithms), AI is already misleading us through deepfakes: synthetic media that can fraudulently but convincingly replace one person’s likeness with that of another – in image, video, and sound. Like many, in late March I was duped by a viral image of Pope Francis in a puffer jacket. Far more serious is the creation of deepfake pornography, a nightmare for women who have not consented to their faces or bodies being used in this violating manner on the Internet.

With deepfakes, fake news assumes a more sinister meaning: it is no longer just disingenuous reporting or partial truths, but the outright fabrication of videos, voices, and other media that can consume someone in scandal before there is even a chance to debunk the deepfake. As AI grandee Geoffrey Hinton has noted, “we already have a technology that will be disruptive and destabilizing.” On top of this, “advanced language algorithms will be able to wage more sophisticated misinformation campaigns and interfere in elections.” (Wired)

The Decimation of Community and Alienation

Our discussion also explored whether “consciousness” is being stripped away as real interactions decrease. Virtual AI and AI in robotics – the kind that is making people fall in love with algorithms and nudging us ever closer to a future in which the movie her (2013), with Joaquin Phoenix, is a reality – is one manifestation of AI’s remaking of the world. Gen Z has grown up on 24/7 Internet culture, absorbed in memes and virality, with identities enmeshed in social media platforms like TikTok. Many younger people have shared concerns about increasing isolation from traditional avenues of belonging, such as sports associations, interest clubs, and religious institutions (church attendance, and formal religion in general, are on the decline across the West).

The famous doomer meme, now a staple of online culture. From medium.com

Many who are addicted to social media or video games, or lost in the world of Reddit and Discord and screen-to-screen interactions with little hope of ever meeting their conversation partners or collaborators “IRL” (in real life), will have felt this kind of unhappiness and social unease. AI chatbots potentially compound this societal problem by advancing so quickly that sentience is no longer simply “replicated” competently, but “reflected” by the AI or robot directly back at the human user. In Buddhist terms, AI is becoming capable of manifesting form (rupa), the experience of reacting with attachment or aversion (vedana), and perception or cognition (samjna) – that aspect of experience that apprehends things as true or false. In other words: an unenlightened being stuck in samsara.

There have been many reported cases of people falling in love with their AI chatbots. While it is too early to identify any lasting tendencies, these incidents of falling “out” of love with humanity while increasingly depending on AI for emotional fulfilment and relationships cannot bode well when paired with the trend observed above for Gen Z.

The Deadening of Wisdom and Compassion

Generative AI, a type of AI that can create a wide variety of data, is already raising questions about the future of entire industries, our relationship with work, and the global economy. As the open letter on the Future of Life Institute website notes:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

(Future of Life Institute)

While AI, with the support of huge computing power, can assimilate and draw on a wealth of knowledge and “experiences,” the majority of human beings are becoming increasingly distracted and unable to grasp the present experience in any true sense. This is no doubt a hidden danger of the digital age, and one that has only recently been seriously researched. In Buddhism, we are fully human when we realize that every single one of us, and all beings, possesses wisdom (prajna) and compassion (karuna). To lack focus on the present moment, to be unmindful, is the mark of a mind without solid foundations, one that is easily exploited. If AI is abused or deployed incorrectly, it could take more than just our jobs or industries away: it could mire us in truly unenlightened tendencies.

Progressive writer Naomi Klein, like many thinkers, considers both the positive potential of AI and its dangers. The picture can be a positive one, but only if other aspects of our politics change:

There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

(The Guardian)

The letter on the Future of Life Institute website proclaims: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” But how serious are today’s grandees of AI about using their considerable influence, wealth, and power to shape a future that is more in line with Klein’s hopeful scenario – one in which AI can flourish alongside humanity as a benefactor rather than a “threat” that could deny us control over our own destiny? The verdict is still out on whether the most influential people in AI truly appreciate the need for AI to have wisdom and compassion baked into its programming – indeed, a yearning for liberation.

AI-generated image

No Turning Back?

As we continue exploring the intersections of Buddhism and the frontiers of the digital world, there is a sense of inevitability about the need for governments, civil society, and religions to marshal a coherent response to the impending domination of AI. The genie is out of the bottle; the bell has been rung. What will define our future as a species is finally knowing what consciousness is, and perhaps AI’s role is to help us figure out this existential question.

And Buddhism’s part, as a religious tradition of the mind, is to help articulate such an answer – no small contribution. If we can eventually do so, it could usher humanity into the next phase of its evolution. That is a constructive prospect amid the many warnings. Since AI is here to stay, the ideal and the positive should be pursued.

See more

What Really Made Geoffrey Hinton Into an AI Doomer (Wired)
Pause Giant AI Experiments: An Open Letter (Future of Life Institute)
AI machines aren’t ‘hallucinating’. But their makers are (The Guardian)

Related features from BDG

In a World of Human Ignorance, Can Artificial Intelligence Help?
The Potential of Personhood: David Hanson on How AI and Human Beings Can Help Each Other

Related blog posts from BDG

The Meaning of Vesak in Our Time
