Report

The right to privacy in the digital age: report (2021)

Issued by

Office of the United Nations High Commissioner for Human Rights (OHCHR)

Published

15 September 2021

Presented to

Human Rights Council - 48th session

Subject

Digital privacy

Symbol Number

A/HRC/48/31

Summary

This report focuses on the multifaceted impacts of the steadily growing use of artificial intelligence (AI) on the enjoyment of the right to privacy and associated rights. It stresses the urgent need for a moratorium on the sale and use of AI systems that pose a serious risk to human rights until adequate safeguards are put in place. It also calls for AI applications that cannot be used in compliance with international human rights law to be banned.

Background
Objectives

On 26 September 2019, the Human Rights Council adopted resolution 42/15 on “The right to privacy in the digital age”. Paragraph 10 of the resolution requested the United Nations High Commissioner for Human Rights “to organize, before the forty-fourth session of the Human Rights Council, an expert seminar to discuss how artificial intelligence, including profiling, automated decision-making and machine-learning technologies may, without proper safeguards, affect the enjoyment of the right to privacy [and] to prepare a thematic report on the issue”.

The expert seminar on the right to privacy took place on 27-28 May 2020. A detailed account of the proceedings of the seminar may be found at the following link. The Office of the United Nations High Commissioner for Human Rights now invites all relevant stakeholders to provide inputs for the preparation of the thematic report.

Key questions and types of input sought

The following list of issues, while not exhaustive, is intended to assist interested stakeholders in preparing their submissions:

  1. Specific impacts on the enjoyment of the right to privacy caused by the use of artificial intelligence, including profiling, automated decision-making and machine-learning technologies (hereinafter “AI”), by governments, business enterprises, international organizations and others. Of particular interest is information concerning:
    1. relevant technological developments; the economic, political and social factors driving the use of AI; and the main actors in, and beneficiaries of, the deployment and operation of AI (developers, marketers, users);
    2. ways in which AI can help promote and protect the right to privacy;
    3. challenges posed by the use of AI for the effective exercise of the right to privacy and other human rights, including features and capabilities of AI that present existing or emerging problems;
    4. discriminatory impacts of the use of AI;
    5. the interlinkages between the promotion and protection of the right to privacy in the context of the use of AI and the exercise of other human rights (including the rights to health, social security, an adequate standard of living, work, freedom of assembly, freedom of expression and freedom of movement);
  2. Legislative and regulatory frameworks, including:
    1. information on relevant existing or proposed national and regional legislative and regulatory frameworks and oversight mechanisms;
    2. analysis of related human rights protection gaps, ways to bridge those gaps, and barriers to advancing effective, human rights-based regulation of AI;
    3. assessments of the need to prohibit certain AI applications or use cases (“red lines”).
  3. Other safeguards and measures to prevent violations of privacy when using AI, and to address and remedy them where they occur, including:
    1. self-governance approaches adopted by business enterprises to regulate AI applications that meet the companies' responsibility to respect the right to privacy;
    2. human rights due diligence in the context of the use of AI by governments, business enterprises and international organizations;
    3. data governance models, such as data trusts, that provide effective protection to the right to privacy in data-intensive environments;
    4. technological applications that help, or could help, to protect the right to privacy adequately when AI is applied, and the limits of such applications.
