At the Intersection of Climate Change, AI, and Human Rights Law: Towards a Solidarity-Based Approach (Part 1)

November 17, 2023

This post forms part of a two-part series; Part 2 is accessible here.

Across the world, public attention has increasingly turned towards two challenges of global proportions: the catastrophic and unequal impacts of climate change and the rapid development and deployment of artificial intelligence (AI) technologies. Driven by an extractivist, growth-oriented economic system with roots traceable to the colonial encounter, climate change has left the world teetering on the edge of ‘irreversible’ breakdown, with marginalised communities particularly impacted by its inequitably distributed and existentially destructive effects. At the same time, fuelled by the extraction of vast amounts of raw materials and data, AI technologies have ushered in intensified forms of surveillance, control, and discrimination dominated by a small number of large technology companies, which have accumulated forms of ‘structural power’ that enable them to influence and circumscribe how communities, corporations, and States interact and relate to one another.

Despite the intersecting nature of climate change and AI technologies, policymaking has tended to remain remarkably compartmentalised. The EU’s Digital Services Act package, for example, is notable for neglecting to expressly confront the environmental and sustainability concerns of digital platforms. Where intersections are acknowledged, the relationship is often perceived to be harmonious – with AI invoked as a technological saviour for society’s ecological challenges. While amendments to the EU’s proposed AI Act signal some movement towards confronting the environmental concerns of AI technologies, tensions between the two fields tend to be defined in narrow technical terms focused on energy costs.

Although often discussed in isolation, the fields of climate and AI governance have each witnessed a human rights turn in recent years. Importantly, while many have put forward the merits of having recourse to human rights law (HRL) in these contexts, the rise of HRL as a vocabulary of governance has also been accompanied by several more critical currents. These currents have surfaced not only the technical and institutional challenges of applying HRL, but also its structural biases towards individualism, anthropocentrism, and Statism, as well as its orientation towards addressing symptoms over structures, its malleability to market-friendly co-option, and its tendency to legitimate existing hierarchies of power, including ongoing relations of coloniality.

Seeking to move beyond a siloed approach to climate and AI governance, and building on existing critical literature on the emancipatory promise and perils of HRL, this two-part post seeks to surface some of the challenges that have arisen at the intersection of climate change and AI technologies and to advance and critically reflect on the potential of a solidarity-based conception of HRL as one limited avenue for addressing them.

Intersections between Climate Change and AI Technologies

To identify intersections between climate change and AI technologies, it is important to move beyond narrow technical understandings of these concepts. Confronting climate change, for example, requires a frame that extends beyond its atmospheric and biogeochemical dimensions to encompass the structural inequalities that not only constitute significant drivers of the climate crisis, but also underlie its inequitable impacts. Similarly, while AI often conjures an image of a technical toolbox of algorithms, data, and cloud architectures, confronting the governance challenges posed by AI technologies requires a frame that also captures the human (labour) and material (resource) dimensions of its production.

Bearing these frames in mind, it is possible to identify at least four intersections between climate change and AI technologies.

First, AI technologies can be understood as climate consumers. This intersection encompasses the material and immaterial ways in which AI technologies leave significant carbon footprints through extractivism – whether through resource mining, energy consumption, and product obsolescence cycles, or the expansion of data-driven business models, the incentivisation of consumerism, and the marketing of AI services to coal, oil, and gas companies. Importantly, a growing number of scholars have drawn attention to the historical roots of contemporary forms of extractivism, recognising ‘continuities of colonial exploitation, extraction, and dispossession in the Global South, in the use of labour, material resources, and data in AI lifecycles’.

Although opacity remains an ongoing challenge in this context, an expert study commissioned by the OECD recently concluded that direct environmental impacts stemming from, for example, the physical extraction and consumption of natural resources to build AI hardware, the energy and water consumption of training and deploying AI models, and the recycling or disposal of electronic waste, have been ‘most often negative’, while indirect environmental impacts stemming from particular deployments of AI applications have sometimes also proven detrimental, for example, through nurturing ‘unsustainable changes in consumption patterns’.

Second, AI technologies can be understood as climate mitigators and adaptors. This intersection encompasses efforts to proactively harness AI technologies to reduce greenhouse gas emissions and slow the rate of global warming, as well as to improve the resilience of communities to the effects of the climate crisis. Examples of AI mitigation and adaptation initiatives include data-driven sensor and satellite technologies aimed at monitoring and reducing air pollution in smart cities, as well as improving the precision of farming practices as part of smart agricultural systems.

Whether such projects achieve their aims, however, tends to be contingent on the contextual circumstances of their design and implementation. Eric Nost and Emma Colven, for example, suggest that techno-fix ‘AI for Good’ initiatives risk (re)-producing social inequalities and injustices by neglecting ‘questions of social vulnerability and political economic structures’ and erasing ‘the important socio-spatial topographies that research on adaptation, vulnerability and climate justice has so extensively documented’. Critically examining Microsoft’s AI for Earth programme and the 100 Resilient Cities programme in New Orleans, Nost and Colven conclude that, rather than supporting climate adaptation, both initiatives ended up ‘bolstering technology companies’ reputation and technical prowess, furthering state surveillance at the expense of community adaptation, and fueling the climate crisis while diminishing adaptive capacity’.

Third, AI technologies can be understood as climate securitizers. This intersection encompasses the different ways in which AI technologies are relied upon to help frame and respond to climate change as a security issue – whether through the surveillance of climate activists or the establishment of digital borders as part of efforts to stifle climate-induced migration. Importantly, as Mirca Madianou explains, the logic of securitization that reduces activists and migrants to security threats tends to be driven by ‘ideological agendas that confirm the monopoly of the state as the provider of security while concealing “some of its own failures”’.

Amidst a rise in murders of environmental and land defenders, their portrayal as (eco)-terrorists, and the criminalisation of climate protests, climate activism has become increasingly dangerous in recent years. AI-based surveillance technologies, including facial recognition software and zero-click forms of spyware, have been deployed against human rights defenders in general and represent a threat to the ongoing activities of environmental defenders in particular. At the same time, many of the world’s highest-income States have devoted more time and resources towards constructing a ‘Climate Wall’ to keep migrants out than towards tackling the root causes that force communities from their homes in the first place. Increasingly, this Climate Wall has taken the form of a digital border, which subjects climate-induced migrants to various forms of AI-based technological experimentation, ranging from data-driven surveillance to automated forms of decision-making.

Finally, AI technologies can be understood as climate discourse shapers. This intersection concerns the role of AI in shaping the discourse around climate change – whether in the form of climate mis/disinformation, climate advocacy, or climate lobbying campaigns. Particularly important in this context are the AI technologies relied upon by the most societally dominant online platforms, whose ‘deep pockets’ enable them to conduct significant lobbying efforts and whose ‘systemic opinion power’ enables them to create dependencies and shape the structure of public discourse.

The 2023 synthesis report of the Intergovernmental Panel on Climate Change acknowledges how ‘public discourses of media and organised counter movements have impeded climate action, exacerbating helplessness and disinformation and fuelling polarisation, with negative implications for climate action’. AI technologies, particularly those that underpin the surveillance-intensive business models of today’s largest online platforms, have accelerated the spread of climate misinformation and disinformation through both user-generated content and online advertising. Beyond failing to effectively address tactics that aim to ‘distract and delay’ climate action, online platforms have also failed to counter various forms of online harassment directed towards those engaged in climate advocacy. At the same time, major online platforms, such as Amazon and Google, have also provided significant support to climate deniers and organisations that have campaigned against climate legislation.

Towards a Solidarity-Based Conception of Human Rights Law

To address the diversity of challenges that have arisen at the intersection of climate and AI governance, I advance a solidarity-based conception of human rights law. Solidarity in this context takes as its starting point not only an acknowledgement of co-dependence – whether between humankind and nature, colonizing and colonized nations, or the ‘globally interlocking economic systems that drive unsustainable modes of production and consumption’ – but also a recognition of the asymmetrical nature of these interconnections in terms of ‘how deprivation and privilege interrelate’. From this perspective, a solidarity-based conception of HRL is one oriented towards structural, strategic, and sub-altern mobilisation.

Structurally-oriented mobilisations of HRL are those that recognise that it is ‘at least as important to identify and seek to remove structural obstacles that lie at the root of many an injustice as it is to deal with their symptoms in the form of particular violations’. For example, reflecting on the emancipatory limits of litigation on the right to medicines, Amy Kapczynski has put forward ‘a vision of human rights that is anti-neoliberal, that seeks – whether dialogically or substantively – to intervene to construct a more just political economy’. Examining efforts to characterise domestic violence by private actors as a form of torture, Natalie Davidson has revealed how feminist campaigns relied on ‘a structural understanding of power relations as providing a basis for legal intervention’, in particular by advancing ‘structural inequality (between men and women) as a severe and substantive problem requiring urgent treatment, over and above conflicting rights such as the right to privacy and family life’. These structurally-oriented mobilisations of HRL may be understood as enacting a type of solidarity politics that strives to address the historically-rooted systemic inequalities that underpin particular forms of injustice.

Strategically-oriented mobilisations of HRL are those that seek to harness the vocabulary of HRL in pursuit of and framed by longer-term strategic objectives. In this context, the term ‘strategic’ may be understood in two senses. First, it refers to perspective. To mobilise strategically is to conduct a particular tactical intervention with a view to advancing a longer-term, structural goal that extends beyond the case or event at hand. Importantly, such interventions tend not to be conducted in isolation but as part of ‘emancipatory multilingualism’ – broader struggles that rely on a diversity of complementary and sometimes contradictory emancipatory languages beyond the frame of human rights. Second, ‘strategic’ refers to evaluation. To intervene strategically is to form a judgment about the relative merits of mobilising the vocabulary of HRL in any particular context, for example by evaluating the risk that HRL may prove redundant or even serve to legitimate interests to which the mobilisation is opposed. This will, of course, always be a prediction – there is no form of mobilisation that is completely immune to co-option or which is guaranteed to contribute towards strategic ends. In this regard, although HRL has often been critiqued for failing to adequately address longer-term systemic harms, it remains possible, as Carmen Gonzalez suggests, for social movements to ‘carefully parse the existing legal frameworks and identify cracks in the edifice’ that enable tactical human rights interventions in support of their longer-term strategic agendas.

Finally, sub-altern-oriented mobilisations of HRL are those that strive to centre the needs and interests of the communities most affected by particular injustices in any given context. Lorenzo Cotula, for example, identifies the primary emancipatory promise of HRL in ‘the agency of the social actors – indigenous peoples, agrarian movements, trade unions, non-governmental organisations (NGOs), grassroot groups – that have appropriated and in some cases reconfigured human rights from the bottom up’. At the very least, if HRL is to be oriented towards structural and strategic ends in addressing challenges at the intersection of climate and AI governance, it is important that the terms of human rights mobilisations are driven and informed by the communities most affected – a process that may sometimes result in HRL frameworks being marginalised or sidelined in favour of alternative emancipatory vocabularies depending on the context.

Inclusion of sub-altern voices has proven a particular challenge in the context of climate and AI governance. In his report on international solidarity and climate change, Obiora Okafor emphasises how marginalised groups tend to suffer disproportionately from climate change, yet are excluded from direct policymaking. In a similar vein, Kate Crawford observes how ‘[t]he voices of the people most harmed by AI systems are largely missing from the processes that produce them’. Moreover, even where processes of inclusion have been advanced, Marie-Therese Png reveals the existence of a ‘paradox of participation’ whereby ‘formal representation can be achieved without any improvement in substantive outcomes, and the distribution of resource, agenda-setting and decision-making power remains status quo’. To mitigate such paradoxes requires working towards meaningful forms of inclusion of communities most affected by challenges at the intersection of climate and AI governance, in ways that seek to confront power imbalances in the development and orientation of HRL mobilisations.

Reflecting on the decolonization of political theory, Adom Getachew and Karuna Mantena identify two strategies for developing theoretical insights from the experience of postcolonial politics: first, conceptual innovation, where ‘new concepts are generated out of the specific experiences of postcolonial politics’; and second, conceptual reanimation, where ‘existing concepts are reformulated and retheorized as a result of their circulation and instantiation in postcolonial contexts’. Applied to the field of human rights, the latter strategy reveals how HRL may be mobilised and reconfigured as a result of its circulation within and interaction with sub-altern contexts and experiences. Benjamin Weber, for example, recently revealed how anticarceral campaigns advanced by imprisoned Black radicals within US prisons have deployed expanded conceptions of human rights as a practice of worldmaking, in particular calling forth ‘a right to breathe in the face of state killings, a right to resist in individual and collective self-defense, and a right to repair in restorative and abolitionist terms’. As Weber observes, ‘this tradition of human rights activism has sought to pry open the very underpinnings of the unequal world system and ground an anticarceral Black human rights tradition in a global framework of antiracist, antisexist decolonization that seeks total transformation’.

*****

Having outlined some of the key characteristics of a solidarity-based conception of HRL, in my next post, I turn to consider some of the different registers through which HRL may be mobilised to address the challenges that have arisen at the intersection of climate change and AI technologies.

The Author


Barrie Sander, Assistant Professor, Leiden University Faculty of Governance and Global Affairs