Human rights must be at the core of generative AI technologies, says Türk

14 February 2024
Delivered by: Volker Türk, UN High Commissioner for Human Rights

Good afternoon,  

It is a pleasure to speak with you today. My warm thanks to the Center for Human Rights and International Justice and to all of the co-sponsors at Stanford University who are hosting this event.

It is fitting we are here: this university has long been one of the world’s front runners in the Artificial Intelligence (AI) field. Stanford’s AI Lab, to take just one example, has been spearheading innovative research since 1963.

Clearly, AI itself isn’t new. But we all know that today’s rapid and seismic shifts are presenting challenges none of us have ever encountered before. 

We have all heard the dystopian scenario: a world that hurtles towards, almost sleepwalks into, self-inflicted tutelage, a vortex of uncontrollable phenomena and possible extinction.

Alternatively, AI has the potential to unlock the secrets to curing cancer, ending global warming and feeding the hungry – the rescue scenario. Yet between these poles of AI killing or saving us lies the world of today – where the promise of AI in a wide range of areas is being realized, while unparalleled human rights impacts of advanced AI — including generative AI — are already being felt by vast numbers of people – today.

A couple of thoughts come to mind. Goethe’s Faust tells the fascinating story of Homunculus — an artificial human being representing the best of science and the Enlightenment while being more “human” in his desires than his creator. And in the poem “The Sorcerer’s Apprentice” Goethe described vividly what happens when you let the genie out of the bottle. In the end the old sorcerer returns, when all seems lost, and breaks the spell, with the lesson being that you should only invoke magic when you actually master it. So what helps us in the mastery of AI? Not surprisingly, I argue strongly that human rights are part of the mastery and the solution.

Human rights encapsulate an ancient wisdom that can guide us in the face of today’s challenges. They provide our navigational sensors in the fog of the new and the unknown. They tame the wild and avert chaos. Importantly, they represent a normative, legally binding framework that not only directs but also unlocks the freedoms of everyone, everywhere — hence the checks and balances it sets forth. They are ultimately about human agency, human dignity and our place in the ecosystem of the Earth. In short, they provide a governance model that is long-term and inter-generational, one that ensures our future.

Bringing human agency and human dignity back to the centre of this debate is therefore more urgent than ever.

Because right now, generative AI is still being developed and deployed without a clear picture of how to ensure safety. I would even go further and say without a clear mission of what is meant to be achieved. Limited transparency, unclear liability, and overreliance on private companies to “do the right thing” are just not good enough when we are dealing with this powerful technology. While regulators are jumping in to put guardrails around the tsunami, countries and companies have largely failed to embrace the one cross-cutting set of laws we have in place to address these colossal challenges: the international human rights framework.

The human rights framework provides the foundation we urgently need to innovate and harness the potential of AI.

At the same time, it can prevent and mitigate the plethora of risks and avoid generative AI becoming a conduit for widespread human rights violations and abuse, discrimination and exclusion.

Human rights embody the idea that people must be protected against certain abuses perpetrated by their governments, as well as by individuals, private entities, and corporations.

To illustrate what I mean, let me walk you through some of these rights of particular relevance to our subject.

The right to work.

Generative AI carries the potential to drastically alter economies and labour markets. We have already seen human creatives in various fields replaced by generative AI-created content. A new world of work, leisure and use of time will emerge. What will meaning in life look like for those made redundant, who feel excluded and hard done by? How do we prepare for this? What does it mean if we don’t have universal social protection, if inequalities rise as a result, if this were to lead to a dangerous disintegration of societies?

Or the right to non-discrimination.

Generative AI models are trained on material that almost inevitably contains the hateful and discriminatory ideas that infect our societies, and they can end up spouting deeply racist or misogynistic content, echoing the raft of misconceptions, inaccuracies and outright lies found in all societies, and fuelling hate. How will we protect public voices pushing for critical debates, those representing minorities, LGBTIQ+ people, women and children? If sufficient attention is not paid to the specific needs of these groups at risk, we will exacerbate fundamental ills in our societies.

Or the right to access information.

In this massive election year, where some 70 elections are taking place around the world, the risks of powerful propaganda cheaply produced at scale and organized disinformation campaigns are high. And the flow-on effects, including hate, discrimination or the delegitimization of political actors and institutions of the State, can deeply undermine the principles of functioning democracies. How will we ensure that the autocrats, the populists, the extremists — the enemies of an “open society” and of a human rights-based order — are not getting the upper hand, clouding the minds and hearts of people and turning everything into perfect, perpetual, auto-generated propaganda that only serves the interests of a few?

Or the right to privacy.

The ability to produce and disseminate high-quality deepfakes at scale with nominal investment is only just emerging, yet we have already seen how this phenomenon can disrupt elections, deceive people and spread misogyny and hate. How will we prevent the “Doppelgänger” phenomenon that Naomi Klein so eerily described? How will we escape an Orwellian future of mind control and ubiquitous surveillance? And how will we ensure that humanity doesn’t disintegrate further into the haves and have-nots, especially in light of the colossal digital divide?

Other rights come to mind, such as the rights to be free from hunger, to an adequate standard of living, and to social security and housing. It will be important to understand how AI can both help realise these rights and pose a threat to them.

Colleagues, friends,

Against this backdrop, what does this mean, concretely, for generative AI?

First, States and companies must use the human rights framework as the first point of reference in the regulation and governance of generative AI. 

And second, this framework must guide the design and development of generative AI technologies, infusing human rights throughout their entire life cycle.

The complexity of AI requires a thoughtful and cautious approach. We need to ask how it affects the rights of different individuals and communities. 

Not just here in California, but everywhere.

What do AI innovation and its human rights impacts mean for a teenager in Lagos, a teacher in Buenos Aires, or a child in Bangkok? Or for those caught up horribly in the 55 or so violent conflicts around the world?

Rights-respecting generative AI means listening to the voices of those potentially affected by it, for better or worse. This can refocus our approach to AI to ensure it works for all, without discrimination. Because just as people are at the centre of human rights, they must be at the centre of technology.

We must also keep asking the difficult questions. How do we safeguard freedom of expression and at the same time prevent hate speech and disinformation? How do we protect the rights and intellectual property of artists, musicians and writers without infringing upon our right to access information?

And while ethical considerations are important in this discussion, we need human rights — and their universal, binding framework — to take us one step further, to concrete solutions, and to avoid, in the worst case, criminal accountability for human rights abuses.

My Office directly engages with some of the leading companies producing generative AI. Our aim is to support them in translating human rights into reality as they develop their tools.

At the same time, we have also called for a pause in the use of AI in areas of high human rights risk such as law enforcement and the judiciary until sufficient guardrails are in place. We have urged greater integration of human rights in technical standard setting processes, where critical decisions will be made on AI applications and use.

What we know is that navigating the enormous unknowns of generative AI requires the following actions, as a matter of urgency:

We need responsible business conduct.

We need accountability for harms.

We need access to remedy for victims of such harms.

But above all, to mitigate these harms from occurring in the first place, we need sound governance of AI, anchored firmly in human rights.

Companies are responsible for the products they are racing to put on the market. They need to do much more to identify and address obvious immediate risks, and to enhance transparency both on safety approaches and on sources of training data.  

Companies, but also States, must systematically conduct human rights due diligence for the AI systems they design, develop, deploy, sell, obtain or operate. A key element of this needs to be regular, comprehensive human rights impact assessments.

The fact is, we already have the tools to guide governments and the private sector in advancing towards AI grounded in human rights.

Since 2011, the UN Guiding Principles on Business and Human Rights have provided robust principles for both States and companies, and can today lay the groundwork for responsible AI development. My Office’s B-Tech project has also produced a series of recommendations, tools and guidance, developed with the active involvement of companies and other stakeholders, on how to apply the Guiding Principles to prevent and address human rights risks relating to digital technologies. And the newly established UN High-level Advisory Body on Artificial Intelligence has made preliminary recommendations on AI governance. As we head for the Summit of the Future in New York in September, the Global Digital Compact will be another critical pathway forward.

Colleagues, friends,

Stanford University has made great strides over the last decades in harnessing the potential of AI. The AI4ALL and AI for Social Good programmes are working to increase diversity and train AI engineers to incorporate social justice objectives when building technology. The Center for Artificial Intelligence in Medicine and Imaging is benefitting patients through its groundbreaking work in health and innovation.

As students, you are building your future. And in the field of AI, you are also playing a part in constructing the future of humanity.

For technology to benefit humanity, we need deep mindfulness, the voice of reason and a profound sense of intergenerational responsibility.

Human rights are just that.

Your voices here in Silicon Valley – the power hub of digital and tech innovation – matter more than you may think in steering us away from the risks we are discussing today. Because progressing this sector towards responsible business conduct grounded in our rights and freedoms will have far-reaching, long-lasting dividends for everyone around the world.

With human rights principles firmly in place, I believe we can safely harness AI’s incredible opportunities and keep striving for creativity, innovation and the best that human beings can deliver.

I hope we can all commit to this ambition.

Thank you.
