The Need for Boundaries: Respecting Privacy in Financial Consumer Data Practices

Abstract

If we are serious about improving the lives of low-income customers, data privacy cannot be an afterthought. Financial consumer data privacy is not the icing on the cake for financial inclusion efforts, or an interesting philosophical question to be addressed at a later stage if time permits. It must be central to the planning of financial services providers, and to the development of best practice principles, from the very beginning.

This blog post does two things: it explains why data privacy matters, and why global developments – particularly changes in the law – are going to make it matter even more.

Misuses of financial consumer data

Modern data practices constantly penetrate the boundaries between our public and private lives. As we journey through various online transactions and interactions, often in a setting of apparent privacy, we are in fact being tracked and monitored. Our activities are recorded and shared with data aggregators who create individual files, recording name, address, age, gender, ethnicity, occupation, travel, purchasing preferences, sexual preferences, relationships, and diseases.

One of the undesirable uses of this information is the profiling and targeting of individuals for marketing purposes, based on their actual or suspected vulnerabilities. As Professor Frank Pasquale has pointed out, lists have been compiled of real people who suffer from depression, impotence or sexually transmitted diseases, people who are facing financial difficulties, and people who are victims of sexual assault. Such lists have been used to exploit people in their most vulnerable moments for financial gain.

Lenders in China have targeted students in financial hardship with exploitative loans, leading to a spate of suicides when the students defaulted. In Kenya, digital lenders have published the names of defaulting customers on Facebook. In Australia, an insurer relied on genetic testing data to exclude a woman from one kind of insurance cover and to increase her premium on another, on the basis of a breast cancer gene, even after she had undergone a preventative double mastectomy.

These uses are unfair to consumers, but they may also make the data itself less useful. Recent surveys indicate that many consumers do not trust companies (or governments) with their personal information. If people withhold information or give inaccurate answers because they do not trust providers with their data, the data collected will be inaccurate and incomplete, undermining the very objectives of data analysis.

Should we regulate data use alone and leave data collection unrestricted?

Numerous scholars argue that we should regulate against the misuse of data, but that, especially in the age of big data, it is no longer realistic to place boundaries on the collection of personal data. Some in this camp have argued that being concerned about the mere collection and digital storage of your personal data is like being worried that your dog has seen you naked. They say no harm actually flows from either of those events (although your dog may beg to differ).

However, as Bruce Schneier points out, your dog seeing you naked and a computer monitoring and storing your personal information are in fact quite different things. Your dog cannot understand or process the fact that you are naked. Nor can your dog store a description of you in that state for the rest of time, or pass that information on to a third party. But computers can do all these things.

The simple truth is that the more personal data we collect, and the longer we store it, the more opportunities we create for that data to be illegally accessed, stolen and misused, and for supposedly anonymous data to be re-identified. We have seen this truth play out on a grand scale in recent times. Equifax, one of the largest credit reporting agencies in the US, provides a service that allows consumers to check whether they have been victims of identity theft. Last year, Equifax's own systems were hacked and the sensitive financial information of 143 million people was stolen. Hundreds of thousands of sensitive health records have also been exposed in the US and the UK.

Even researchers acting with the very best motives can inadvertently cause substantial harm through the collection and storage of data, as the Harvard Signal Program revealed.

The need for balance

Of course, this does not mean that we should stop collecting personal data. We can draw analogies with other risky activities. Consider surgery. Like surgery, collecting personal data is potentially very beneficial and potentially very harmful. Just as we would not avoid surgery altogether, we should not avoid collecting personal data altogether; but, as with surgery, we should have a very good reason before we do it.

We should limit when we do it and put in place safeguards from the very beginning, according to the potential risks and benefits of the case in question. Privacy should not be an optional “bolt-on” at the end of a provider’s project planning. It must be the foundation of our data practices. Fortunately, financial consumer data privacy is not a zero-sum game. It is not a choice between privacy and efficiency. Both can be had when privacy is built into systems from the outset.

These ideas reflect the principles of Privacy by Design, developed by Dr. Ann Cavoukian, a former Information and Privacy Commissioner of Ontario, Canada. Its seven principles can be summarised as follows (a short code sketch of the second principle follows the list):

  1. Proactive, not reactive – preventative, not remedial, protection of privacy.
  2. Privacy should be the default setting.
  3. Privacy should be embedded into the design – “baked in” from the beginning.
  4. Full functionality – positive-sum, not zero-sum.
  5. End-to-end security – full life-cycle protection of personal data.
  6. Visibility and transparency – keep it open.
  7. Respect for user privacy – keep it user-centric.

These principles have been endorsed and adapted by privacy regulators around the world.
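
To make the second principle a little more concrete, here is a minimal sketch in Python of what "privacy as the default setting" might look like in a provider's customer records. All names and fields here are hypothetical illustrations, not a reference implementation of Privacy by Design.

```python
# A minimal sketch of "privacy as the default setting" (principle 2),
# combined with data minimisation. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Protective defaults: nothing is shared unless the user opts in.
    share_with_partners: bool = False
    behavioural_tracking: bool = False
    marketing_contact: bool = False

@dataclass
class CustomerRecord:
    # Collect only what the stated purpose requires; sensitive
    # attributes are simply never fields on this record.
    customer_id: str
    name: str
    settings: PrivacySettings = field(default_factory=PrivacySettings)

# A new customer is fully protected without taking any action at all.
customer = CustomerRecord(customer_id="c-001", name="A. Sample")
assert customer.settings.share_with_partners is False
```

The point of such a design is that a customer who does nothing is protected by default; any sharing happens only after a deliberate opt-in.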

Global developments in data protection regulation

There is currently a great deal of debate about the best regulatory response to the threats that new data-driven analysis and innovation pose to consumer data privacy. The Office of the Privacy Commissioner of Canada has recognised the problems with supposedly obtaining informed consent from consumers in the age of big data, and has made proposals for improving consent, and for replacing it where appropriate. The UN Special Rapporteur on Privacy last year released a draft report on the right to privacy in the age of big data, including the concept that privacy is necessary as part of a right to the unhindered development of our personalities.

Australia is following the UK's lead and putting in place an Open Banking regime. The idea is that a consumer will be able to go to her existing bank and instruct it to send her data – years of her transaction history, for example – to a new bank, lender or financial app, with the aim of increasing competition in financial services and improving outcomes for consumers. A critical issue currently being considered is how the law should protect consumers and their data in this process.
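
By way of illustration, here is a hypothetical Python sketch of the kind of scoped, time-limited data-sharing instruction such a regime contemplates. The registry, field names and function are assumptions made for this post; they do not reflect the actual UK or Australian specifications.

```python
# A hypothetical sketch of a consumer-directed data-sharing instruction.
# Names and fields are illustrative only, not any official API.

from datetime import date, timedelta

# Hypothetical registry of accredited data recipients.
ACCREDITED_RECIPIENTS = {"new-lender-123"}

def build_sharing_request(customer_id, recipient_id, data_scope, months_valid=12):
    """Build a scoped, time-limited instruction to share named data."""
    if recipient_id not in ACCREDITED_RECIPIENTS:
        raise ValueError("recipient is not an accredited data recipient")
    return {
        "customer_id": customer_id,
        "recipient_id": recipient_id,
        # Share only the named data, not the whole customer file.
        "data_scope": data_scope,
        # Consent lapses automatically rather than lasting forever.
        "expires": (date.today() + timedelta(days=30 * months_valid)).isoformat(),
    }

request = build_sharing_request(
    "c-001", "new-lender-123", data_scope=["transaction_history_24m"])
```

The safeguards sketched here – scoping, accreditation and automatic expiry – are the kinds of protections under discussion: the consumer shares only named data, with a vetted recipient, for a limited time, rather than handing over an open-ended feed.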

In this post, I will focus on two particularly important global developments: the developments in India following the judgment of the Supreme Court of India in the Puttaswamy case, and the European Union's General Data Protection Regulation, which comes into effect this May and will change the world of data privacy.

India: The Puttaswamy Case

In August 2017, the Supreme Court of India delivered its ground-breaking judgment in Justice K S Puttaswamy v Union of India in which it decided for the first time that there is a fundamental right to privacy under the Indian Constitution.

The plurality opinion delivered by Justice Chandrachud in Puttaswamy provides a compelling answer to anyone who believes that privacy is a first-world issue, or a luxury that should wait until more pressing economic needs have been met. As he stated:

The refrain that the poor have no need of civil or political rights but are concerned only with economic wellbeing has been used throughout history to wreak the most egregious violations of human rights. …

The pursuit of happiness is founded on autonomy and dignity. Both are essential attributes of privacy, which makes no distinction based on the birthmarks of individuals.

Another important consequence of the Puttaswamy judgment is that the Indian government plans to enact data protection legislation, which is likely to create substantial data privacy obligations for financial services providers operating in India, and possibly for those processing the personal data of residents of India. There is hope that this law will set an example for data protection regulation globally.

EU General Data Protection Regulation

While the EU General Data Protection Regulation (GDPR) was passed by the European Parliament, its consequences will stretch far beyond the EU. The GDPR will apply directly not only to providers established in the EU, but also to those outside the EU who monitor the behaviour of individuals in the EU or offer them goods or services.

The GDPR comes into effect on 25 May 2018. By global standards, it sets very high standards for data privacy and imposes onerous penalties for breaching them – fines of up to €20 million or four percent of global annual turnover, whichever is higher.

The GDPR is likely to affect the businesses of financial services providers around the world, even where it does not apply directly to those businesses. Already the majority of countries with privacy laws follow a more EU-style approach (as opposed to the more “hands-off” US approach) to data privacy regulation. This will continue to be the case, especially for countries that wish to do outsourced work for EU companies. The GDPR is also likely to lead to an increased focus on the design and production of Privacy Enhancing Technologies (PETs).

Here I will mention just one example of the increased obligations created by the GDPR, namely the stricter requirements for “consent” as a justification for the use of personal data. Under the GDPR, a data subject’s consent must be explicit (it cannot be implied), active (no pre-ticked boxes) and unbundled (it cannot be tied to other purposes or types of information), among other requirements. Some IT companies have proposed examples of GDPR-compliant consent requests for the tracking of consumers’ online behaviour; these would include a list of hundreds of entities with whom the data would be shared, and an extensive list of categories of personal and sensitive information to be collected.
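
As a rough illustration of those three requirements, here is a minimal Python sketch of how a provider might check a stored consent record. The record layout is a hypothetical simplification for this post, not a compliance tool.

```python
# A minimal sketch of checking a consent record against the three GDPR
# requirements mentioned above: explicit, active and unbundled.
# The record layout is hypothetical and deliberately simplified.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    purpose: str             # exactly one purpose per record (unbundled)
    explicit_statement: str  # the wording the user actually agreed to
    user_action_taken: bool  # True only if the user ticked the box
    pre_ticked: bool         # was the box ticked before the user saw it?

def consent_is_valid(record: ConsentRecord) -> bool:
    return (
        bool(record.explicit_statement)  # explicit, not implied
        and record.user_action_taken     # actively given
        and not record.pre_ticked        # no pre-ticked boxes
    )

# A pre-ticked box fails, however explicit the wording.
bad = ConsentRecord(
    purpose="marketing emails",
    explicit_statement="I agree to receive marketing emails.",
    user_action_taken=False,
    pre_ticked=True,
)
assert consent_is_valid(bad) is False
```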

Interestingly, according to these companies’ own surveys, the vast majority of consumers, given a genuine choice, would refuse to provide such consent. This reinforces the view that the supposed “informed consent” consumers are currently assumed to give is not consent at all. Consumers are effectively forced to accept corporations’ open-ended uses of their personal data.

If we want to prioritise the needs of low-income customers, customer privacy must be a first-order concern. We need to understand the threats to customer privacy and build appropriate protections into our systems from the beginning.

This post is based on Dr. Kemp’s keynote presentation at the “Customer Centricity: Enabling Financial Choices and Positive Outcomes for Low-Income Customers” Learning Event, Mamallapuram, India, 22 February 2018.
