
POPIA and Automated Decision-Making, including Profiling


I’m currently working on an app to find a new best friend. My old one kicked me to the curb when I fell asleep to The Prisoner of Azkaban. So far on the app, I have a quick survey to separate the Rons from the Malfoys by asking them a few simple questions about their interests, likes, loathes, current living situation, past mistakes, fortes, flaws and how they pronounce the word GIF. Once completed, the app will inform them whether their decisions have earned them a one-way ticket to Loner Crescent or a lifetime of moderate adventures and movie references with me. At which point I show up at the given address and we help curate each other’s Insta grids, then ugly cry to The Notebook.

My completely true and not made-up crazy app journey opened my eyes to something that, although at times concealed, is also extremely prevalent. And that is profiling and automated decision-making. These somewhat mysterious processes are used in an increasing number of sectors, both private and public. Banking and finance, healthcare, taxation, insurance, marketing, and advertising are just a few of the fields where profiling is being carried out more regularly to aid decision-making.

Profiling and automated decision-making can be useful for individuals and organisations as well as for the economy and society, delivering benefits such as increased efficiency and resource savings…not forgetting instant best friends. However, profiling and automated decision-making can pose significant risks for individuals’ rights and freedoms, which require appropriate safeguards. These processes can be opaque. Individuals might not know that they are being profiled or understand what is involved. Hey, at least my app users knew what they were in for! But wait…

What does POPIA say?

71. (1) Subject to subsection (2), a data subject may not be subject to a decision which results in legal consequences for him, her or it, OR which affects him, her or it to a substantial degree, which is based solely on the basis of the automated processing of personal information intended to provide a profile of such person including his or her performance at work, or his, her or its credit worthiness, reliability, location, health, personal preferences or conduct.

Wait, you had me at “subject to subsection 2”. Once more – read, rinse, repeat! Let’s break it down.

…a data subject…that’s us, coz we’re the people to whom personal information relates

…may not be subject to a decision…Ok, so a decision is key to this discussion

…which results in legal consequences… if it pleases the court, my lady

…or…take note, there’s an OR

…which affects him, her, or it…so, not only legal consequences!

…to a substantial degree…if the degree is in doubt, should we use the Scoville scale?

…which is based solely on the basis of the automated processing…of course, no human involvement

…of personal information…naturally, POPIA’s not talking about any other types of information

So far, so good?

I put it to you that an organisation cannot allow a machine (that’s a computer, not a leaf blower) to make decisions using our personal information where those decisions are going to have legal consequences for us or affect us to a substantial degree.

Moving on…

…intended to provide a PROFILE of such person…now there’s the potentially nasty stuff

…including his or her performance at work, or his, her or its credit worthiness, reliability, location, health, personal preferences, or conduct…wow! I never knew we could be profiled in so many ways.

But wait…there’s more

Remember subsection 2? Here it is:

None of the above applies – let me repeat that, none of it – if:

  • the decision has been taken in connection with the conclusion or execution of a contract, and either the request of the data subject in terms of the contract has been met or appropriate measures*** have been taken to protect the data subject’s legitimate interests, OR
  • the decision is governed by a law or code of conduct in which appropriate measures are specified for protecting the legitimate interests of data subjects.

***Appropriate measures? Your organisation must provide the data subject with the opportunity to make representations about the decision, and with sufficient information about the underlying logic of the automated processing of the information relating to him or her, so that he or she can actually make such representations.

What else does POPIA say?

A Code of Conduct must be published that specifies ‘appropriate measures’ for protecting the legitimate interests of data subjects insofar as automated decision-making is concerned. To date, I’m not aware of any having been published.

What would life be without examples?

Profiling

I discovered that profiling involves gathering information about an individual (or group of individuals) and analysing their characteristics or behaviour patterns to place them into a certain category or group, and/or to make predictions or assessments about, for example, their ability to perform a task, their interests, or their likely behaviour. Sounds familiar… but wait, let’s unpack it a bit.

So, we see that profiling in this discussion comprises three elements (sketched in code right after the list):

  • it has to be an automated form of processing, and
  • it has to be carried out on personal information, and
  • the objective of the profiling must be to evaluate personal aspects about us
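
To make that three-part test concrete, here’s a minimal sketch. Everything in it – the ProcessingActivity fields and the is_profiling helper – is my own invention for illustration, not anything defined in POPIA or any library:

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    """Hypothetical description of a processing activity."""
    is_automated: bool                # element 1: an automated form of processing
    uses_personal_information: bool   # element 2: carried out on personal information
    evaluates_personal_aspects: bool  # element 3: aims to evaluate personal aspects

def is_profiling(activity: ProcessingActivity) -> bool:
    # All three elements must be present; remove any one and the
    # activity falls outside this working definition of profiling.
    return (activity.is_automated
            and activity.uses_personal_information
            and activity.evaluates_personal_aspects)

# My friend-finder survey ticks all three boxes. Oops.
app_survey = ProcessingActivity(True, True, True)
print(is_profiling(app_survey))  # True
```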

Yep, this definitely has the instant friend app stink all over it. My experience aside though, what could be a more serious and, dare I say, realistic example of profiling?

Example

A data broker collects data from different public and private sources, either on behalf of its clients or for its own purposes. The data broker compiles the data to develop profiles on the individuals and places them into segments. It sells this information to companies who wish to improve the targeting of their goods and services. The data broker is in fact carrying out profiling by placing a person into a certain category according to their interests.
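
For a toy picture of what that segmentation step might look like under the hood, here’s a sketch. The segment names, fields, and thresholds are all invented for illustration, not taken from any real broker:

```python
def assign_segment(person: dict) -> str:
    """Place an individual into a marketing segment based on
    collected attributes. This categorisation step is the profiling."""
    if person.get("monthly_income", 0) < 5000 and person.get("missed_payments", 0) > 2:
        return "financially stretched"
    if "hiking" in person.get("interests", []):
        return "outdoor enthusiasts"
    return "general consumers"

# Data compiled from different public and private sources.
profile = {"monthly_income": 4200, "missed_payments": 3, "interests": ["hiking"]}
print(assign_segment(profile))  # financially stretched
```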

Profiling may be unfair and create discrimination, for example by denying people access to employment opportunities, credit, or insurance, or by targeting them with excessively risky or costly financial products. This actually happens, so hold onto your hats for the details, because they are not pretty.

Example

A data broker sells consumer profiles to financial companies without the consumers’ permission or knowledge of the underlying data. The profiles sort consumers into categories (with titles such as “Rural and Barely Making It,” “Ethnic Second-City Strugglers,” and “Tough Start: Young Single Parents”) or “score” them, focusing on consumers’ financial vulnerability. The financial companies offer these consumers payday loans and other “non-traditional” financial services (high-cost loans and other financially risky products). Quite the reality check, right?

Further processing – profiling can involve the use of personal information that was originally collected for something else (Section 15 of POPIA).

Example

Some mobile applications provide location services allowing the user to find nearby restaurants offering discounts. However, the data collected is also used to build a profile of the data subject for marketing purposes – to identify their food preferences or lifestyle in general. The data subject expects their data will be used to find restaurants, but not to receive adverts for pizza delivery just because the app has identified that they arrive home late.

This further use of the location data may not be compatible with the purposes for which it was collected in the first place and may thus require the consent of the individual concerned. And yes, I am one concerned individual! Okay, let’s look at the close friend of profiling…

Automated Decision-Making

Automated decision-making has a different scope and may partially overlap with or result from profiling. As we saw earlier, solely automated decision-making is the ability to make decisions by technological means without human involvement.

Automated decisions can be made with or without profiling; profiling can take place without making automated decisions. However, profiling and automated decision-making are not necessarily separate activities. Something that starts off as a simple automated decision-making process could become one based on profiling, depending upon how the data is used.

Example

An individual applies for a personal loan online. The website uses algorithms and automated credit reference searches to provide an immediate yes/no decision on the application. A decision was made, but no profile was created.
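
A minimal sketch of such a rule-based decision, with made-up cut-offs (real lenders’ rules will of course differ):

```python
def decide_loan(credit_score: int, requested_amount: float) -> bool:
    """Solely automated yes/no decision: a fixed rule applied to the
    application at hand, with no profile built or stored."""
    # Hypothetical cut-offs for illustration only.
    return credit_score >= 600 and requested_amount <= 50_000

print(decide_loan(credit_score=640, requested_amount=20_000))  # True (approved)
```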

Example

Imposing speeding fines purely on the basis of evidence from speed cameras is an automated decision-making process that does not necessarily involve profiling.

It would, however, become a decision based on profiling if the driving habits of the individual were monitored over time and, for example, the amount of the fine imposed is the outcome of an assessment involving other factors, such as whether the speeding is a repeat offence or whether the driver has had other recent traffic violations. Is there a place to check this? Asking for a friend.
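
The contrast is easy to show in code. In this sketch (fine amounts invented), flat_fine is pure automated decision-making, while profiled_fine evaluates the driver’s monitored history and so becomes a decision based on profiling:

```python
def flat_fine(measured_speed: int, limit: int) -> int:
    """Automated decision, no profiling: the same fine for
    everyone caught at the same speed."""
    return 0 if measured_speed <= limit else 1000  # hypothetical amount

def profiled_fine(measured_speed: int, limit: int, prior_offences: int) -> int:
    """The driver's history now feeds the assessment, which
    makes this a decision based on profiling."""
    base = flat_fine(measured_speed, limit)
    return base * (1 + prior_offences)  # repeat offenders pay more

print(flat_fine(140, 120))         # 1000
print(profiled_fine(140, 120, 2))  # 3000: same speed, bigger fine
```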

Where a contract is at play

POPIA mentioned contracts though. Subsection 2, is that you? You have the floor.

Example

An insurance company uses an automated decision-making process to set motor insurance premiums based on monitoring customers’ driving behaviour. To illustrate the significance and envisaged consequences of the processing, it explains that dangerous driving may result in higher insurance payments, and provides an app comparing fictional drivers, including one with dangerous driving habits such as fast acceleration and last-minute braking. It uses graphics to give tips on how to improve these habits and, consequently, how to lower insurance premiums.

The customer must be informed of the existence of automated decision-making, and the insurance company must ensure the above-mentioned ‘appropriate measures’ are in place.
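
To make that concrete, here’s a sketch of the premium logic with made-up weights. It returns the ‘underlying logic’ alongside the premium, in the spirit of the appropriate-measures duty to give the customer enough information to make representations (POPIA doesn’t prescribe any particular format; the structure here is my own):

```python
def motor_premium(hard_brakes_per_100km: float, fast_accels_per_100km: float) -> dict:
    """Telematics-based premium plus a plain-language explanation
    of how it was calculated."""
    base = 800.0  # hypothetical base premium
    loading = 15.0 * hard_brakes_per_100km + 10.0 * fast_accels_per_100km
    return {
        "premium": round(base + loading, 2),
        "logic": (f"base {base} + 15.0 per hard brake ({hard_brakes_per_100km}) "
                  f"+ 10.0 per fast acceleration ({fast_accels_per_100km})"),
    }

# Dangerous habits (fast acceleration, last-minute braking) raise the premium.
print(motor_premium(hard_brakes_per_100km=4, fast_accels_per_100km=6))
```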

Before you go

If any planned processing is likely to result in high risk to individuals, you need to perform a Privacy Impact Assessment (PIA). As well as addressing any other risks connected with the processing, a PIA can be particularly useful if you are unsure whether the proposed activities fall within POPIA’s Section 71 and, if they are allowed under one of the exceptions, what safeguarding measures must be applied.

PrivIQ can help with all that, like they helped me. So, my app was unethical and violated a couple hundred POPIA provisions. But at least I learned from my mistakes and can shed some light for you out there. But most importantly, it’s pronounced GIF! Not JIF!

This is why I don’t have a best friend; I hear it now.
