
Deep Dive Into Automated Decision-Making: Implications & Benefits

written by FuturisticLawyer
September 17, 2021

Introduction

How the GDPR[1] directly regulates the application of artificial intelligence (AI) might not be obvious to most people. For better or worse, it does, under Article 22 (1), which in effect prohibits purely automated systems, AI included, from being placed in positions of significant decision-making power over individuals. That will hold true at least until scientists, perhaps, develop general AI with humanlike thinking capabilities whose decisions can no longer be defined as “automated”.

According to Article 22 (1) GDPR: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Automated decision-making and profiling carry with them some benefits, but also substantial risks of unfair treatment of individuals. In a future post, I will write much more about the individual’s rights when faced with an algorithmic decision, as well as the scope and application of Article 22 (1). But before we get into that, let’s look at what an algorithm is, how automated decision-making and profiling are defined, and what implications and benefits they bring.

What is an Algorithm?

An algorithm is a set of instructions or rules designed to solve a problem. It could be as simple as an “if A then B” rule: push button A, and the algorithm executes action B.
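
To make this concrete, here is a minimal sketch of such a rule in code (Python; the button and action names are invented purely for illustration):

```python
# The entire "algorithm" is a single "if A then B" rule.
def handle_button(button: str) -> str:
    if button == "A":               # input: which button was pushed
        return "execute action B"   # output: the action the rule prescribes
    return "do nothing"

print(handle_button("A"))  # -> execute action B
```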

To use a metaphor to describe how an algorithm works: If we imagine the computer as a chef who has to cook a meal, the ingredients would be the input data, the recipe would be the algorithm, and the output data would be the finished meal.[2] Algorithms today are used to make sense of huge data sets and produce an output such as a decision. AI, machine learning, and deep learning are all based on complex algorithms.

Most automated decision systems rely on machine learning algorithms that can search for patterns and correlations in a data set without requiring the analyst to specify in advance which factors to use. They may discover surprising or unexpected connections that are non-obvious to a human analyst. For example, the algorithm of a travel company may establish statistically that people want to travel to a certain place on a certain date, and then automatically charge a higher price for that destination on that specific date.[3]
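
A heavily simplified sketch of that travel-pricing idea might look as follows (Python; the bookings, destinations, demand threshold, and markup are all invented for illustration, not taken from any real system):

```python
from collections import Counter

# Invented historical booking data: (destination, date) pairs.
bookings = [
    ("Rome", "2021-12-24"), ("Rome", "2021-12-24"), ("Rome", "2021-12-24"),
    ("Oslo", "2021-12-24"), ("Rome", "2021-11-03"),
]

# The "pattern" the algorithm finds: how often each pair was booked.
demand = Counter(bookings)

def price(base: float, destination: str, date: str) -> float:
    # Invented rule: above a demand threshold, charge a 20% markup.
    if demand[(destination, date)] >= 3:
        return round(base * 1.20, 2)
    return base

print(price(100.0, "Rome", "2021-12-24"))  # -> 120.0 (high demand)
print(price(100.0, "Oslo", "2021-12-24"))  # -> 100.0
```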

What is Automated Decision-Making and Profiling?

Automated decision-making can be defined as “the process of making a decision by automated means without any human involvement.”[4] Examples could be:[5]

  • Loan approvals via smartphone apps: An algorithm carries out a credit assessment based on the amount the customer wishes to borrow and their answers to a number of questions about their financial situation. The customer’s financial history and information from credit agencies are automatically evaluated as well. Finally, depending on the score, the customer receives an offer or the application is denied (a minimal sketch of this flow follows after the list).
  • Recruiting: Job applicants are very commonly judged by software that evaluates the candidate’s personality and competencies based on cognitive and logical tests. The test results are factored into the employer’s overall decision on whether to hire the candidate. Companies that receive a large number of applications can also use CV filtering systems; in these cases, applicants who do not achieve a certain score are automatically denied progression to the next round of the selection process.
  • Personalized Pricing: Online vendors can gather information about the customer – either with or without the customer’s prior consent – to create a profile of the target and adjust prices accordingly. Vendors can capture the customer’s location, computer type, and site account, track via cookies how many times the customer has visited the site, or obtain information about the customer from third parties (e.g., advertising networks). On this basis, customers can be classified as “budget-conscious” or “affluent”, and the purchase price can be “personalized”.
  • COMPAS (Correctional Offender Management Profiling for Alternative Sanctions):[6] COMPAS is a risk-assessment tool used by the courts in certain U.S. states[7] to predict a defendant’s risk of committing another crime. The COMPAS software uses an algorithm that considers the answers to a comprehensive questionnaire, partly answered by the defendant and partly pulled from his or her criminal record. Based on the input data, COMPAS predicts the defendant’s risk of recidivism (new arrests). Judges consider the risk score in their sentencing.
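
To make the loan example concrete, here is a minimal sketch of a rule-based credit assessment (Python; the questions, weights, threshold, and applicant data are all invented and not taken from any real lender):

```python
# Invented scoring rules: each factor adds or subtracts points.
def credit_score(applicant: dict) -> int:
    score = 0
    score += min(applicant["monthly_income"] // 500, 10)   # income (EUR)
    score -= applicant["existing_loans"] * 2               # debt burden
    score += 5 if applicant["clean_credit_history"] else -5
    # Requested amount relative to income lowers the score.
    score -= applicant["requested_amount"] // (applicant["monthly_income"] + 1)
    return score

def decide(applicant: dict) -> str:
    # Fully automated decision: no human sees the application.
    return "offer" if credit_score(applicant) >= 8 else "denied"

print(decide({"monthly_income": 3000, "existing_loans": 1,
              "clean_credit_history": True, "requested_amount": 5000}))  # -> offer
```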

In all the situations mentioned above, the automated decision-making includes profiling. Profiling takes place whenever someone gathers and evaluates your personal information in order to make predictions about you; it aims to put people into categories based on data.[8] It can be done manually, for instance when you answer a survey, or automatically, for instance when an algorithm pulls data from your social networking profiles.[9]

The most well-known example of profiling is perhaps when service providers make a customer profile on you for marketing purposes. But profiling could also be related to anything regarding your performance at work, economic situation, health, personal preferences or interests, reliability or behavior, location, or movements.[10]
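
As a toy illustration of such categorization (Python; the collected signals, thresholds, and segment names are invented), profiling boils down to mapping data points about a person to a category:

```python
# Invented profile data a vendor might have gathered about a customer.
profile = {"device": "macbook", "visits_last_month": 14, "avg_basket_eur": 180}

def segment(p: dict) -> str:
    # Invented rule: expensive device plus a high average basket -> "affluent".
    if p["device"] in {"macbook", "iphone"} and p["avg_basket_eur"] > 100:
        return "affluent"
    return "budget-conscious"

print(segment(profile))  # -> affluent
```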

When companies engage in automated decision-making, profiling is often part of the process, but it does not have to be. Automated decision-making without the use of profiling could be when:

  • An automated system is preprogrammed with the correct answers to a multiple-choice exam and automatically assigns the students’ grades (a minimal sketch of this follows after the list).[11]
  • Your smart fridge is automatically set up to buy certain food or drinks on a given day of the month.
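
A minimal sketch of the exam-grading example (Python; the answer key, grading bands, and student answers are invented):

```python
# Preprogrammed answer key: the same rule applies to every student,
# so the decision involves no profiling.
ANSWER_KEY = {1: "B", 2: "D", 3: "A", 4: "C"}

def grade(answers: dict) -> str:
    correct = sum(answers.get(q) == a for q, a in ANSWER_KEY.items())
    share = correct / len(ANSWER_KEY)
    if share >= 0.9:   # invented grading bands
        return "A"
    if share >= 0.7:
        return "B"
    if share >= 0.5:
        return "C"
    return "F"

print(grade({1: "B", 2: "D", 3: "A", 4: "B"}))  # 3/4 correct -> "B"
```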

Profiling is something each of us does every day – consciously or unconsciously – whenever we meet a new person, or whenever we edit or add content to our social media profiles to appear in a certain light. There is nothing wrong with profiling in itself. The real problem arises when algorithms make significant decisions about us based on these profiles – whether we have made the profiles ourselves or they have been made by others.

Implications of Automated Decision-Making

Undesirable consequences of automated decision-making include:[12]

  • The algorithms embody a considerable potential for discrimination and unfair treatment.
  • Humans become objectified and are not able to express their motives or values.
  • The processes behind the decisions are often highly complex and not transparent to the individuals.

These points are intertwined. To illustrate, we can use COMPAS as an example. As mentioned above, COMPAS is used by a number of U.S. courts to assess a criminal defendant’s risk of recidivism.

An investigation by ProPublica from 2016 revealed that the COMPAS algorithm exhibited racial bias. Based on extensive data analysis conducted by the ProPublica team, black defendants were 77% more likely than white defendants to be pegged by COMPAS as at higher risk of committing a future violent crime.[13] Conversely, white defendants who re-offended within two years of their initial release were mistakenly labeled low risk almost twice as often as black re-offenders.[14]

COMPAS’ racial bias shows how difficult it can be to “train” algorithms with past data. Risk scores may become exaggerated over time for defendants from neighborhoods with high crime rates. To avoid building discriminatory systems, developers even have to take into account factors that could cause indirect discrimination, such as distance from home to work or criminal records, both of which can correlate with a defendant’s racial background.[15] And as the algorithm collects more and more data, it becomes increasingly harder to predict which factors could lead to discrimination or bias down the line.
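
The mechanism of indirect discrimination can be shown with a small synthetic experiment (Python; the population, the group labels, and the strength of the correlation are all invented purely to illustrate the point): even though the scorer never sees the protected attribute, a correlated proxy reintroduces it.

```python
import random

random.seed(0)

# Synthetic population: group membership is never given to the scorer,
# but "distance to work" is, by construction, correlated with it.
people = []
for _ in range(1000):
    group = random.choice(["X", "Y"])
    distance = random.gauss(20 if group == "Y" else 10, 3)  # invented correlation
    people.append({"group": group, "distance": distance})

def risk_score(person: dict) -> float:
    # The scorer uses only the seemingly neutral factor ...
    return person["distance"] * 0.1

for g in ("X", "Y"):
    scores = [risk_score(p) for p in people if p["group"] == g]
    print(g, round(sum(scores) / len(scores), 2))
# ... yet average scores differ sharply by group (roughly 1.0 vs. 2.0):
# the proxy has smuggled the protected attribute back into the decision.
```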

It should also be remembered that humans decide which criteria are used for the decision-making and what weight is assigned to them.[16] What you put into the algorithm is basically what you get out, and that holds true even when the decisions are based on huge data sets. Automated decision-making may seem more objective on the surface, but in most cases the algorithms will inevitably reflect the personal values or biases of their creators.

Then there is the fact that automated decision-making objectifies humans. People are categorized into groups and judged on common characteristics. As I mentioned earlier, there is nothing wrong with profiling in itself, but in my opinion it is wrong to make significant decisions about individuals based on profiles that they have no way of influencing with their free will.

The fact that individuals are not able to express their motives or values in the decision-making process goes hand-in-hand with another important point: the profiling, the factors the algorithm assigns weight to, and all the remaining steps that lead to the final decision remain hidden from the individual.[17]

There are several reasons why individuals are only presented with the results while the decision-making process itself remains non-transparent. Data controllers that use the algorithm may fear that individuals could learn how to “game” the system.[18] Additionally, data controllers would lose a competitive advantage if the public could examine their algorithm, as this would expose trade secrets and could put intellectual property rights in jeopardy. However, it is also very possible, and increasingly likely as AI continues to develop, that humans simply will not be able to explain or understand the logic of the algorithm’s decision-making process.[19]

Along with the latter point comes the fact that individuals have no way to properly challenge a decision if they don’t know the underlying presumptions and rationale behind it. I plan to write more about that in an upcoming post about the individual’s rights under GDPR in connection with automated decision-making, where I will also go more in-depth on the scope and application of Article 22.

Benefits of Automated Decision-Making and Exceptions to the Prohibition in Article 22 (1)

Automated decision-making offers benefits to organizations in virtually any sector. It leads to quicker and more consistent decisions, particularly in cases where a very large volume of data needs to be analyzed and decisions made very quickly.[20]

For the same reason, the individual’s right to not be subject to automated decisions under GDPR is not absolute. The prohibition in Article 22 (1) does not cover situations where:

  • the decision is necessary for entering into, or the performance of, a contract between the data subject and a data controller (Article 22 (2) (a)),
  • the decision is authorized by Union or Member State law under certain specified conditions (Article 22 (2) (b)), or
  • the decision is based on the data subject’s explicit consent (Article 22 (2) (c)).

In some situations, routine human involvement can be impractical or impossible due to the sheer quantity of data being processed.[21] For example, if a business receives thousands of applications for an open job position, it may use CV filtering methods in order to make a shortlist of possible candidates.[22] Such a decision may be covered by Article 22 (2) (a), since the CV filtering method could be necessary for the data subject to enter into a contract with the data controller. However, if other effective and less intrusive means to achieve the same goal exist, it would not be “necessary”.[23]

Automated decision-making can also take place if Union or Member State law authorizes its use. Additionally, the relevant law has to lay down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests (Article 22 (2) (b)). In practice, this exception could include automated decision-making for monitoring and preventing fraud and tax evasion.[24]

Finally, automated decision-making can be allowed if it is based on the data subject’s explicit consent (Article 22 (2) (c)). The consent has to be freely given, specific, informed, and unambiguous.[25] The data subject has to understand the consequences of his or her consent, and the consent has to be given “freely” to be valid. This implies that the data subject has a genuine choice: if the data subject is forced to either accept the provided conditions or not make use of the service at all, the consent is not freely given.[26]

Sensitive information, such as personal data that reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as genetic or biometric data and data concerning health or sexual orientation, is known as special categories of personal data under GDPR.[27] As a general rule, processing of special categories of personal data is prohibited (Article 9 (1)).

Automated decision-making that involves special categories of personal data is only allowed under two conditions (Article 22 (4)). First, one of the three exemptions in Article 22 (2) (a)-(c) mentioned above has to apply. Secondly, the processing has to be based either on the data subject’s explicit consent (Article 9 (2) (a)) or on necessity for reasons of substantial public interest (Article 9 (2) (g)). In both cases, the controller must put in place suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests.[28]

Conclusion

Automated decision-making and profiling are used in an array of different sectors, with economic benefits for organizations: they save working hours and lead to quicker, more consistent decisions. In some areas of business, automated decision-making is critically needed to structure and quickly analyze huge amounts of data. However, automated decision-making comes with risks too, and the risks are especially high when a decision can have a significant influence on an individual’s life. It is hard to design an “objective” algorithm, as the creators decide which factors it should assign weight to in its decision-making. Moreover, in the decision-making process, human individuals are not able to present their motives or values. Finally, because the decision-making process is often highly complex, individuals are only presented with the final result, without knowing the underlying presumptions or rationale behind the decision they are faced with.

*******************************

[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[2] See more at: https://courses.cs.duke.edu//summer04/cps001/labs/plab2.html.

[3] Hänold, Stefanie (2018), “Profiling and Automated Decision-Making: Legal Implications and Shortcomings”, in Corrales et al., Robotics, AI and the Future of Law – Perspectives in Law, Business and Innovation, pg. 126.

[4] https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/#id2 (04-09-2021).

[5] Hänold (2018), pg. 127-128.

[6] https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/ (05-09-2021).

[7] Kirkpatrick, Keith (2017), “It’s not the algorithm, it’s the data”, Communications of the ACM, 60 (2): 21–23.

[8] https://www.youtube.com/watch?v=7-MNbzv8lAA

[9] Ibid.

[10] GDPR Recital (71).

[11] https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/#id2 (08-09-2021).

[12] Hänold (2018), pg. 123 and 130.

[13] Angwin, Julia & Larson, Jeff (2016), “Machine Bias”, ProPublica.

[14] https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (14-09-2021).

[15] Hänold (2018), pg. 130.

[16] Ibid. pg. 129.

[17] Ibid. pg. 130.

[18] Ibid.

[19] Clifford Chance, (2017), Me, Myself and AI: When AI meets personal data.

[20] Ibid.

[21] Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, As last Revised and Adopted on 6 February 2018, pg. 23.

[22] Ibid.

[23] Ibid.

[24] GDPR Recital (71).

[25] GDPR Article 4 (11).

[26] Hänold (2018), pg. 137.

[27] GDPR Article 9 (1).

[28] Article 29 Data Protection Working Party (2018), pg. 24.
