
Automated decision making and Art 22 GDPR


When a machine makes a decision about an individual, known as Automated Decision Making (ADM), Article 22 GDPR applies when two conditions are met:

- the decision has been made solely by automated means (i.e. with no human intervention); and
- it has a legal or similarly significant effect on an individual.



If any human intervention is involved (for example, a person considers the results of the automated decision before applying it to an individual), then in principle the activity will not qualify as automated decision-making, unless it can be demonstrated that the human input was restricted to data handling while the decision-making process itself was solely automated.

As for the "legal effect" it will anything affecting an individual's legal status/rights .
How to understand "similarly significant effect" ?
* an automated application for a loan or e-recruiting practices with no human intervention (such as using psychometric testing to filter-out candidates).
Guidance require a significant impact on individuals behaviour, selections or choices with a prolonged or permanent impact.
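Purely as an illustration (not taken from the guidance, and not legal advice), the two-part test above can be encoded as a simple check. The Decision record and article_22_applies helper below are hypothetical names invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical record of one decision made about an individual."""
    human_reviewed: bool        # did a person meaningfully review the outcome?
    human_role_data_only: bool  # was the human input limited to data handling?
    legal_effect: bool          # affects the individual's legal status/rights
    significant_effect: bool    # prolonged or permanent impact on behaviour,
                                # circumstances or choices

def article_22_applies(d: Decision) -> bool:
    """Art 22 applies when the decision is solely automated AND has a
    legal or similarly significant effect on the individual."""
    solely_automated = (not d.human_reviewed) or d.human_role_data_only
    return solely_automated and (d.legal_effect or d.significant_effect)

# Example: an automated online credit refusal with no human review.
print(article_22_applies(Decision(False, False, False, True)))  # True
```

Note the second flag: token human involvement that is limited to data handling does not take the activity outside Article 22.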

1. Lawful grounds for processing

EDPB guidance says that organisations cannot conduct ADM unless it is:

- necessary for entering into, or the performance of, a contract between the individual and the data controller (for instance, a loan application between a bank and a borrower which requires an automatically generated credit score). This is often an option and/or standard practice;
- authorised by EU or Member State law (for example, a bank undertakes profiling to identify fraud to comply with its regulatory obligations). This is usually fairly sector- or industry-specific; or
- based on the individual's explicit consent (see the consent requirements in section 2 below).

2. Consent requirements

Consent needs to be explicit, resulting in an active opt-in: a freely given, specific, informed and unambiguous affirmative indication of the individual's wishes, given in an express form (such as sending an email).
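As a hedged sketch only (the LawfulGround enum and may_run_adm function are invented for illustration, not prescribed by the GDPR or EDPB guidance), the three lawful grounds and the explicit opt-in requirement might be gated like this:

```python
from enum import Enum, auto

class LawfulGround(Enum):
    CONTRACT_NECESSITY = auto()  # Art 22(2)(a): needed for a contract
    AUTHORISED_BY_LAW = auto()   # Art 22(2)(b): EU or Member State law
    EXPLICIT_CONSENT = auto()    # Art 22(2)(c): explicit consent

def may_run_adm(ground: LawfulGround, explicit_opt_in: bool = False) -> bool:
    """Permit ADM only on one of the three Art 22(2) grounds; consent
    counts only as an explicit, affirmative opt-in (e.g. a confirmation
    email), never a pre-ticked box or inactivity."""
    if ground is LawfulGround.EXPLICIT_CONSENT:
        return explicit_opt_in
    return True  # contract necessity or legal authorisation

# Without an active opt-in, the consent ground fails:
print(may_run_adm(LawfulGround.EXPLICIT_CONSENT))  # False
```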

3. Obligation to carry out a Data Protection Impact Assessment (DPIA)

The ICO considers ADM to be high-risk processing; a DPIA to assess the risks to individuals and the ways to mitigate those risks is therefore obligatory.

4. Obligation of transparency

This includes providing meaningful information about the logic involved and the likely consequences for individuals. The privacy notice must explain the means and process of ADM at the point of data collection.

5. Data subject rights for a review of the ADM

If an individual is unhappy with the outcome of an automated decision, they can ask for it to be reviewed (an appeal process has to be in place). ICO guidance recommends that this is explained at the point the decision is provided.
This includes providing an explanation of how and why the decision was reached, being able to verify the results and explain the rationale behind the decision, and delivering an audit trail showing the key decision points that formed the basis for the decision.
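A minimal sketch of the kind of audit trail that could support such a review, assuming a simple append-only log of key decision points; the log_decision_point helper and its field names are hypothetical, not mandated by the ICO:

```python
import json
from datetime import datetime, timezone

def log_decision_point(trail: list, step: str, inputs: dict, outcome: str) -> None:
    """Append one timestamped decision point to the audit trail."""
    trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,        # which stage of the automated process ran
        "inputs": inputs,    # the data that stage relied on
        "outcome": outcome,  # what it decided, and why
    })

# Hypothetical automated loan decision:
trail: list = []
log_decision_point(trail, "credit_score", {"score": 480, "threshold": 600},
                   "declined: score below threshold")
log_decision_point(trail, "final_decision", {"rules_triggered": ["credit_score"]},
                   "loan refused; human review available on request")

# The serialised trail can be handed to the reviewer or the appellant.
print(json.dumps(trail, indent=2))
```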


ADM Systems in the COVID-19 Pandemic: A European Perspective

By Fabio Chiusi. The COVID-19 pandemic has spurred the deployment of a plethora of automated decision-making (ADM) systems all over Europe. High hopes have been placed by both local administrations and national governments in applications and devices aimed at containing the outbreak of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) through automation, thus providing a much-needed alternative to lockdown measures that limit personal freedoms and strain economies. Smartphone apps have been launched to help speed up and complement the manual contact-tracing efforts put in place by health authorities, causing a heated international debate around how best to balance privacy and human rights with the urgent need to monitor and curb the spread of the disease. QR codes have been issued to enforce quarantine orders and log check-ins in shops and public places.

What if Artificial Intelligence Decided How to Allocate Stimulus Money?

If, like me, you're worried about how members of Congress are supposed to vote on a stimulus bill so lengthy and complex that nobody can possibly know all the details, fear not: the Treasury Department will soon be riding to the rescue. But that scares me a little too. Let me explain. For the past few months, the department's Bureau of the Fiscal Service has been testing software designed to scan legislation and correctly allocate funds to various agencies and programs in accordance with congressional intent, a process known as issuing Treasury warrants. Right now, human beings must read each bill line by line to work out where the money goes. Treasury's ambitious hope, however, is that its software, when fully operational, will be able to scan new legislation in its natural language form, figure out where the money is supposed to go and issue the appropriate warrants far more swiftly than humans could. Alas, there's a big challenge.

The public don't trust computer algorithms to make decisions about them, survey finds

8 September 2020. The majority of people do not trust computers to make decisions about any aspect of their lives, according to a new survey. Over half (53%) of UK adults have no faith in any organisation to use algorithms when making judgements about them, in issues ranging from education to welfare decisions, according to the poll for BCS, The Chartered Institute for IT. The survey was conducted in the wake of the UK exams crisis, where an algorithm used to assign grades was scrapped in favour of teachers' predictions.

From viral conspiracies to exam fiascos, algorithms come with serious side effects

Will Thursday 13 August 2020 be remembered as a pivotal moment in democracy's relationship with digital technology? Because of the coronavirus outbreak, A-level and GCSE examinations had to be cancelled, leaving education authorities with a choice: give the kids the grades that had been predicted by their teachers, or use an algorithm. They went with the latter. The outcome was that more than one-third of results in England (35.6%) were downgraded by one grade from the mark issued by teachers.

This meant that a lot of pupils didn't get the grades they needed to get into their university of choice. More ominously, the proportion of private-school students receiving A and A* grades was more than twice as high as the proportion at comprehensive schools, underscoring the gross inequality in the British education system. What happened next was predictable but significant.

Mutant Algorithms Are Coming for Your Education

Bad algorithms have been causing a lot of trouble lately. One, designed to supplant exam scores, blew the college prospects of untold numbers of students attending International Baccalaureate schools around the world. Then another did the same for even more students in lieu of the U.K.'s high-stakes "A-level" exams, prompting Prime Minister Boris Johnson to call it a "mutant" and ultimately use human-assigned grades instead. Actually, I would argue that pretty much all algorithms are mutants. People just haven't noticed yet. The foibles of algorithms usually go unseen and undiscussed, because people lack the information and power they need to recognize and address them.

The Coming Revolution in Intelligence Affairs

For all of human history, people have spied on one another. To find out what others are doing or planning to do, people have surveilled, monitored, and eavesdropped, using tools that constantly improved but never displaced their human masters. Artificial intelligence (AI) and autonomous systems are changing all of that. In the future, machines will spy on machines in order to know what other machines are doing or are planning to do. Intelligence work will still consist of stealing and protecting secrets, but how those secrets are collected, analyzed, and disseminated will be fundamentally different. Military futurists have recognized a similar sea change, and some have dubbed the rise of AI and autonomous weapons systems a "revolution in military affairs."

BCS: Scientific modelling codes should meet independent standards

The computer code behind the scientific modelling of epidemics like Covid-19 should meet independent professional standards to ensure public trust, BCS, The Chartered Institute for IT, has argued. In a new policy paper, BCS calls for professional software development standards to be adopted for research that has a critical impact on society, such as health, criminal justice and climate change. The underlying code should also be made open-source. The organisation has also argued that the lack of widely accepted software development standards in scientific research has undermined confidence in computational modelling, including in high-profile models informing Covid-19 policy. Bill Mitchell, director of policy at BCS, said: "The politicisation of the role of computer coding in epidemiology has made it obvious that our understanding and use of science relies as much on the underlying code as on the underlying research."

Can I be subject to automated individual decision-making, including profiling?

Answer: Profiling takes place when your personal aspects are evaluated in order to make predictions about you, even if no decision is taken. For example, if a company or organisation assesses your characteristics (such as your age, sex or height) or classifies you in a category, you are being profiled. Decision-making based solely on automated means happens when decisions are taken about you by technological means and without any human involvement. Such decisions can be taken even without profiling.