On this page you can find definitions of words and terms that are frequently used throughout the Digital Rights Check.
While there is no shared definition of AI, it generally concerns the science and engineering of making intelligent machines, especially intelligent computer programs. It includes the likes of voice and face recognition and self-driving cars, as well as machine learning algorithms that can help predict weather patterns, droughts or even criminal activity.
Automated decision-making refers to the making of decisions based on automated processing. Examples include a lending algorithm that analyses a person’s credit risk and decides whether to grant a loan and on what terms, or an AI tool that diagnoses lung cancer by analysing the CT scans of patients.
Automated systems might be relied on to make decisions entirely without human oversight, or might be subject to human intervention (for example, when an automated system suggests a decision but a human decision-maker ultimately makes the final call).
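The distinction above can be illustrated with a minimal sketch. This is a hypothetical rule-based lending decision invented for illustration, not a real credit-scoring system: clear-cut cases are decided automatically, while borderline cases are referred to a human reviewer.

```python
# Illustrative sketch only: a hypothetical automated lending decision
# with a human-in-the-loop escape hatch. All thresholds are made up.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float
    existing_debt: float

def automated_decision(a: Applicant) -> str:
    """Return 'approve', 'reject', or 'refer_to_human'."""
    debt_ratio = a.existing_debt / a.income if a.income > 0 else float("inf")
    if debt_ratio < 0.2:
        return "approve"          # decided entirely by the system
    if debt_ratio > 0.6:
        return "reject"           # decided entirely by the system
    # Borderline cases are flagged for human intervention rather than
    # being decided by the automated system alone.
    return "refer_to_human"

print(automated_decision(Applicant(income=50_000, existing_debt=5_000)))
# → approve
```

Whether the middle band exists at all, and how wide it is, is a design choice: narrowing it moves more decisions out of human hands.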
Data processing is defined broadly under the GDPR as any operation performed on personal data, including collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction (see https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-constitutes-data-processing_en).
A grievance mechanism is a channel available to individuals who have concerns or complaints, providing a way for them to submit those complaints to the entity managing the mechanism. Grievance mechanisms can be constructed in many different ways; what matters is not the design but the effectiveness (see further resources for more information on the effectiveness of grievance mechanisms). They can take the form of telephone hotlines, basic email accounts, chat services or physical mailboxes, among other things. As with previous questions about users and non-users, it is important that a wide range of rights holders can access and use the grievance mechanism, so that no one is barred from submitting their concerns or complaints.
Grievance mechanisms are important for a variety of reasons. They can assist in identifying potential human rights risks and impacts, they can function as an early-warning mechanism, and they can provide access to remedy for rights holders whose human rights have been adversely impacted by the digital components in question.
Human Rights Impact Assessment (HRIA) can be defined as a process for identifying, understanding, assessing and addressing the adverse effects of a business project or business activities on the human rights enjoyment of impacted rights holders.
An HRIA involves several phases or steps, all of which need to be included to ensure a comprehensive assessment. The phases can be divided into:
1. Planning and scoping
2. Data collection and context analysis
3. Analysing impacts
4. Impact prevention, mitigation and remediation, and
5. Reporting and evaluation.
While HRIA can be divided into different phases, it is important to recognise that the assessment is an iterative process and should facilitate continuous learning and analysis throughout.
See more in Danish Institute for Human Rights’ Guidance on HRIA of Digital Activities.
Machine learning makes use of statistical algorithms that find patterns in enormous amounts of data. The underlying data can include many different types of information, such as numbers, words, dates, geographic locations and much more.
See more here: MIT Technology Review, “What is machine learning?“
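To make "finding patterns in data" concrete, here is a minimal sketch of one of the simplest statistical learning procedures: fitting a straight line y = a·x + b to data points by ordinary least squares, computed from scratch. The data points are invented for illustration.

```python
# Minimal sketch of "finding a pattern in data": an ordinary
# least-squares fit of a line y = slope*x + intercept.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates of slope and intercept.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]   # noisy observations of roughly y = 2x
slope, intercept = fit_line(xs, ys)
print(round(slope, 1))  # the "learned" pattern: y grows by about 2 per unit of x
```

Real machine learning systems use far more complex models and vastly more data, but the principle is the same: parameters are estimated from observations so the model can generalise to new inputs.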
A privacy impact assessment is a process to identify risks to data privacy caused by the processing of personal data, to evaluate the impact and likelihood of those risks, and to address them.
It can be undertaken either for an entire project or for a specific digital solution.
It involves asking questions like: “What personal data are you processing? How is it being processed? What are the existing measures for data protection? What aspects of processing can potentially cause harm to concerned individuals, the organization, or the public? How can the risks of harm be addressed?”
See more here: Rappler, “Data Privacy 101: What is a Privacy Impact Assessment?“, Oct 30, 2018
A ‘rights holder’ is an individual whose rights can be impacted and who can claim those rights from a ‘duty bearer’. Duty bearers are generally states, but businesses and other entities also have a responsibility to respect human rights.