
Automated decision making and Art. 22 GDPR


When a machine makes a decision about an individual, known as Automated Decision Making (ADM), Article 22 GDPR applies under certain conditions:

- the decision is made solely by automated means (i.e. with no human intervention); and
- it has a legal or similarly significant effect on the individual.
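
Purely as an illustration (and not legal advice), this two-limb test can be sketched in code. Every name and field below is hypothetical, invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical record of how a decision about an individual was made."""
    human_reviewed: bool         # did a human meaningfully review the outcome?
    affects_legal_rights: bool   # e.g. refusal of a statutory benefit
    similarly_significant: bool  # e.g. automatic loan refusal, e-recruiting filter

def article_22_applies(d: Decision) -> bool:
    """Art. 22 engages only when BOTH limbs hold: the decision is made solely
    by automated means AND it has a legal or similarly significant effect."""
    solely_automated = not d.human_reviewed
    significant_effect = d.affects_legal_rights or d.similarly_significant
    return solely_automated and significant_effect

# An automatic loan refusal with no human review falls within Art. 22:
print(article_22_applies(Decision(human_reviewed=False,
                                  affects_legal_rights=False,
                                  similarly_significant=True)))  # True
```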

If any human intervention is involved (for example, a person considers the results of the automated decision before applying it to the individual), then the activity will in principle not qualify as automated decision-making, unless it can be demonstrated that the human input was restricted to data handling while the decision-making process itself was solely automated.

As for the "legal effect", it covers anything affecting an individual's legal status or legal rights.

How should a "similarly significant effect" be understood?

- an automated decision on a loan application, or e-recruiting practices with no human intervention (such as using psychometric testing to filter out candidates).

Guidance requires a significant impact on individuals' circumstances, behaviour or choices, with a prolonged or permanent effect.

1. Lawful grounds for processing

EDPB guidance says that organisations cannot conduct ADM unless it is:

- necessary for entering into, or the performance of, a contract between the individual and the data controller (for instance, a loan application between a bank and a borrower which requires an automatically generated credit score). This is often an option and/or standard practice;
- authorised by EU or Member State law (for example, a bank undertakes profiling to identify fraud in order to comply with its regulatory obligations). This is usually fairly sector- or industry-specific; or
- based on the individual's explicit consent (see the consent requirements below).

2. Consent requirements

Consent needs to be explicit, resulting in an active opt-in: a freely given, specific, informed and unambiguous affirmative indication of the individual's wishes, given in an express form (such as sending a confirmation email).
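
As a sketch only, a consent record for ADM might capture evidence that the opt-in was explicit, specific and informed. All field names below are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExplicitConsentRecord:
    """Hypothetical evidence that an explicit, affirmative opt-in was obtained."""
    subject_id: str
    purpose: str         # the specific ADM purpose consented to
    mechanism: str       # e.g. "signed form", "confirmation email"
    notice_version: str  # which privacy notice the individual was shown
    given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

consent = ExplicitConsentRecord(
    subject_id="subject-123",
    purpose="automated credit scoring for a loan application",
    mechanism="confirmation email",
    notice_version="2021-03",
)
print(consent)
```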

3. Obligation to carry out a Data Protection Impact Assessment (DPIA)

The ICO considers ADM to be high-risk processing; a DPIA to assess the risks to individuals and the ways to mitigate those risks is therefore obligatory.

4. Obligation of transparency

This includes providing meaningful information about the logic involved and the likely consequences for individuals. A privacy notice has to explain the means and process of ADM at the point of data collection.
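
A hedged sketch of what "meaningful information about the logic involved" might look like when surfaced at the point of collection; the structure and wording are invented for illustration, not a regulatory template:

```python
# Illustrative only: the kind of notice an organisation might surface at the
# point of data collection for an ADM process.
adm_notice = {
    "decision": "automated credit scoring",
    "logic_summary": "Applications are scored against repayment history, "
                     "income and existing debt; scores below a threshold "
                     "are refused automatically.",
    "data_used": ["repayment history", "declared income", "existing debt"],
    "likely_consequences": "A low score means the loan is refused without "
                           "human review.",
    "your_rights": ["request human intervention", "contest the decision",
                    "express your point of view"],
}

for key, value in adm_notice.items():
    print(f"{key}: {value}")
```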

5. Data subject rights for a review of the ADM

If an individual is unhappy with the outcome of an automated decision, they can ask for it to be reviewed (an appeal process has to be in place). ICO guidance recommends that this is explained at the point the decision is provided.
This includes providing an explanation of how and why the decision was reached, being able to verify the results and explain the rationale behind the decision, and delivering an audit trail showing key decision points that formed the basis for the decision.
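
One way to make such a review possible in practice is to log each key decision point as the automated decision is made. The sketch below is a hypothetical pattern, not an ICO-mandated format; all names are invented:

```python
import json
from datetime import datetime, timezone

class DecisionAuditTrail:
    """Hypothetical audit trail: records each key decision point so the
    rationale can be explained and verified on appeal."""

    def __init__(self, subject_id: str):
        self.subject_id = subject_id
        self.steps = []

    def record(self, rule: str, inputs: dict, outcome: str) -> None:
        self.steps.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "rule": rule,
            "inputs": inputs,
            "outcome": outcome,
        })

    def export(self) -> str:
        """Serialise the trail for a reviewer handling an appeal."""
        return json.dumps({"subject": self.subject_id, "steps": self.steps},
                          indent=2)

trail = DecisionAuditTrail("subject-123")
trail.record("affordability_check", {"income": 28000, "debt": 12000}, "fail")
trail.record("final_decision", {"failed_checks": ["affordability_check"]},
             "refused")
print(trail.export())
```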


As Vice reported, last month Prime Minister of the Netherlands Mark Rutte, along with his entire cabinet, resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families had been wrongly accused of social benefits fraud, partially due to a discriminatory algorithm.

Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims. After relentless investigative reporting and a string of parliamentary hearings both preceding and following the mass resignations, the role of algorithms and automated systems in the scandal became clear, revealing how an austere and punitive war on low-level fraud was automated, leaving little room for accountability or basic human compassion.

ADM Systems in the COVID-19 Pandemic: France - AlgorithmWatch

By Nicolas Kayser-Bril. Slow start for Stop Covid, the contact-tracing app: in early April, the French government announced an automated contact-tracing app.

The project, Stop Covid, is headed by the National Institute for Research in Computer Science and Automation (Inria), a public organization. It designed its own centralized, pseudonymized Bluetooth-based protocol, ROBERT.
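
The ROBERT specification itself is beyond the scope of this note, but the general shape of a centralized, pseudonymized design can be sketched: phones broadcast short-lived identifiers over Bluetooth, and the exposure matching happens on a central server that sees only random registrations, never names. Everything below is a toy illustration of that general approach, not the actual ROBERT protocol:

```python
import secrets

class CentralServer:
    """Toy sketch of a centralized, pseudonymized contact-tracing backend
    (illustrative only; not the ROBERT specification)."""

    def __init__(self):
        self._ephemeral_to_app = {}  # ephemeral broadcast id -> pseudonymous app id
        self._at_risk = set()

    def register_app(self) -> str:
        """A phone registers and receives a random pseudonymous app id."""
        return secrets.token_hex(8)

    def issue_ephemeral_id(self, app_id: str) -> str:
        """Short-lived identifier the phone broadcasts over Bluetooth."""
        eph = secrets.token_hex(8)
        self._ephemeral_to_app[eph] = app_id
        return eph

    def report_positive(self, observed_ephemeral_ids: list) -> None:
        """A diagnosed user uploads the ephemeral ids their phone heard;
        the server resolves and flags the corresponding apps as at risk."""
        for eph in observed_ephemeral_ids:
            if eph in self._ephemeral_to_app:
                self._at_risk.add(self._ephemeral_to_app[eph])

    def poll_risk(self, app_id: str) -> bool:
        """Each app periodically asks the server whether it is at risk."""
        return app_id in self._at_risk

server = CentralServer()
alice = server.register_app()
alice_eph = server.issue_ephemeral_id(alice)  # Alice's phone broadcasts this
# Bob's phone recorded alice_eph nearby; Bob tests positive and uploads it.
server.report_positive([alice_eph])
print(server.poll_risk(alice))  # True -> Alice's app shows an exposure warning
```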

France: Parcoursup

Automated racism: How tech can entrench bias

Nani Jansen Reventlow is founding director of the Digital Freedom Fund, which supports strategic litigation on digital rights in Europe.

She is adjunct professor at Oxford University and Columbia Law School and an adviser to Harvard Law School's Cyberlaw Clinic. In the run-up to parliamentary elections in the Netherlands this month, center-right and extreme-right parties are outdoing one another in calling for a surveillance state that will come down on marginalized and minority groups with all its might. This should set alarm bells ringing in Brussels and beyond. The party of Prime Minister Mark Rutte, projected to emerge as the election winner, doesn't appear to have learned any lessons from last year's benefits scandal, which a parliamentary report called "unprecedented injustice" and a violation of "fundamental principles of the rule of law." Over the course of two decades, as many as 26,000 parents were wrongly accused of having fraudulently claimed child care allowances.

AI, algorithms, and how to understand the A-Levels Ofqual debacle

The coming war on the hidden algorithms that trap people in poverty

Credit scores have been used for decades to assess consumer creditworthiness, but their scope is far greater now that they are powered by algorithms: not only do they consider vastly more data, in both volume and type, but they increasingly affect whether you can buy a car, rent an apartment, or get a full-time job.

Their comprehensive influence means that if your score is ruined, it can be nearly impossible to recover. Worse, the algorithms are owned by private companies that don’t divulge how they come to their decisions.

Official guidance and related topics

- Public sector equality duty
- Principle of proportionality
- Where else did AI/algorithms get it wrong?
- France: anti-discrimination platform
- Algorithm fairness, bias and ethics
- Autonomous cars decision making
- Automated decision making rules: Art. 22 GDPR
- Art. 22 GDPR – Automated individual decision-making, including profiling

ADM Systems in the COVID-19 Pandemic: A European Perspective

By Fabio Chiusi. The COVID-19 pandemic has spurred the deployment of a plethora of automated decision-making (ADM) systems all over Europe. High hopes have been placed by both local administrations and national governments in applications and devices aimed at containing the outbreak of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) through automation, thus providing a much-needed alternative to lockdown measures that limit personal freedoms and strain economies. Smartphone apps have been launched to help speed up and complement the manual contact-tracing efforts put in place by health authorities, causing a heated international debate about how best to balance privacy and human rights with the urgent need to monitor and curb the spread of the disease.

The public don't trust computer algorithms to make decisions about them, survey finds

8 September 2020. The majority of people do not trust computers to make decisions about any aspect of their lives, according to a new survey.

Over half (53%) of UK adults have no faith in any organisation to use algorithms when making judgements about them, in issues ranging from education to welfare decisions, according to the poll for BCS, The Chartered Institute for IT. The survey was conducted in the wake of the UK exams crisis, in which an algorithm used to assign grades was scrapped in favour of teachers' predictions. Just 7% of respondents trusted algorithms to be used by the education sector, joint lowest with social services and the armed forces. Confidence in the use of algorithms in education also differed dramatically between age groups: among 18-24-year-olds, 16% trusted their use, compared with only 5% of over-55s. Trust in social media companies' algorithms to serve content and direct user experience was similar, at 8%.

What if Artificial Intelligence Decided How to Allocate Stimulus Money?

If, like me, you're worried about how members of Congress are supposed to vote on a stimulus bill so lengthy and complex that nobody can possibly know all the details, fear not: the Treasury Department will soon be riding to the rescue.

But that scares me a little too. Let me explain. For the past few months, the department's Bureau of the Fiscal Service has been testing software designed to scan legislation and correctly allocate funds to various agencies and programs in accordance with congressional intent, a process known as issuing Treasury warrants. Right now, human beings must read each bill line by line to work out where the money goes. Alas, there's a big challenge: Treasury's ambitious hope is that its software, when fully operational, will be able to scan new legislation in its natural-language form, figure out where the money is supposed to go, and issue the appropriate warrants far more swiftly than humans could.
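
How the Fiscal Service's software actually works has not been published. Purely as a toy illustration of the underlying idea (scanning bill text for amounts and recipients), one could imagine something like the regex sketch below; every pattern and example sentence is invented, and real legislation is far messier:

```python
import re

# Toy illustration only: extract (agency, amount) pairs from appropriation-style
# sentences. This is not the Treasury's system.
APPROPRIATION = re.compile(
    r"\$(?P<amount>[\d,]+(?:\.\d+)?)\s*(?P<unit>million|billion)?[^.]*?"
    r"to the (?P<agency>[A-Z][A-Za-z ]+?)(?:,|\.|$)"
)

UNITS = {"million": 1_000_000, "billion": 1_000_000_000, None: 1}

def scan_bill(text: str):
    """Yield each recipient agency with its appropriation in whole dollars."""
    for m in APPROPRIATION.finditer(text):
        amount = float(m.group("amount").replace(",", "")) * UNITS[m.group("unit")]
        yield m.group("agency").strip(), int(amount)

bill = ("There is appropriated $25 billion to the Department of the Treasury, "
        "and $500 million to the Small Business Administration.")
for agency, dollars in scan_bill(bill):
    print(agency, dollars)
```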

Can I be subject to automated individual decision-making, including profiling?

Profiling takes place when your personal aspects are evaluated in order to make predictions about you, even if no decision is taken. For example, if a company or organisation assesses your characteristics (such as your age, sex or height) or classifies you in a category, you are being profiled. Decision-making based solely on automated means happens when decisions are taken about you by technological means and without any human involvement.

Related guidance and reports:

- Guide to the GDPR: profiling and automated decision-taking
- T-PD(2020)03rev4 final Guidelines on Facial Recognition
- Automating Society Report 2020
- Mutant Algorithms Are Coming for Your Education

The Coming Revolution in Intelligence Affairs

For all of human history, people have spied on one another. To find out what others are doing or planning to do, people have surveilled, monitored, and eavesdropped, using tools that constantly improved but never displaced their human masters. Artificial intelligence (AI) and autonomous systems are changing all of that.

BCS: Scientific modelling codes should meet independent standards

The computer code behind the scientific modelling of epidemics like Covid-19 should meet independent professional standards to ensure public trust, BCS, The Chartered Institute for IT, has argued. In a new policy paper, BCS calls for professional software development standards to be adopted for research that has a critical impact on society, such as health, criminal justice and climate change.

The underlying code should also be made open source. The organisation has also argued that a lack of widely accepted software development standards in scientific research has undermined confidence in computational modelling, including in high-profile models informing Covid-19 policy. Bill Mitchell, director of policy at BCS, said: "The politicisation of the role of computer coding in epidemiology has made it obvious that our understanding and use of science relies as much on the underlying code as on the underlying research."