New Challenges in Data Protection: Artificial Intelligence and Big Data

Published on 21 December 2022

One of the biggest challenges that privacy and data protection face today is the use of artificial intelligence (AI) and big data, which, if not handled carefully, can result in substantial penalties for the organisations involved.

As is widely known, AI is a set of technologies that enable computers to perform tasks involving a simulation of human intelligence, including decision-making and learning, by collecting and analysing large datasets (big data). These datasets may include personal data.

Information is one of the main drivers of societal progress; for this reason, it is essential to ensure that the collection, storage and processing of data are performed in accordance with current data protection legislation.

In particular, the following principles should be observed in order to avoid penalties:

  • Legality and legitimacy: Personal data are processed in accordance with current legislation and the processing of personal data is carried out with full respect for the fundamental rights of the data subjects. This means that the collection of personal data by unlawful means is prohibited. Furthermore, data processing must be based on one of the six legal bases set out in Article 6 of the GDPR.
  • Consent: The processing of personal data requires the data subject’s consent.
  • Purpose: Personal data must not be processed for a purpose other than that stated at the time of collection.
  • Proportionality: Any processing of personal data must be suited to and sufficient for the purpose for which the data were collected, and must use only the information considered indispensable.
  • Quality: The personal data that are processed must be truthful, accurate and adequate. They must be stored in such a way that their security is ensured and only for the time necessary to fulfil the purpose of the processing.
  • Security: The data controller and the data processor must take the necessary measures to ensure the security and confidentiality of the personal data they manage.
  • Appropriate level of protection: For cross-border transfer of personal data, an adequate level of protection must be ensured, at least equivalent to that provided for in the GDPR.
  • Data minimisation: Personal data must be adequate, relevant and limited to what is necessary for the purposes for which they are processed.
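To make the principles of purpose limitation and data minimisation concrete, here is a minimal Python sketch of how an organisation might enforce them in code. The field names and processing purposes are hypothetical, chosen only for illustration; they do not come from the article or from any particular law.

```python
# Hypothetical sketch: for each declared processing purpose, record the
# minimal set of fields that purpose actually needs, and strip everything else.
ALLOWED_FIELDS = {
    "newsletter": {"email"},
    "shipping": {"name", "address", "email"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields indispensable for the declared purpose."""
    if purpose not in ALLOWED_FIELDS:
        # No purpose was declared at collection time, so processing is refused.
        raise ValueError(f"no declared purpose: {purpose!r}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

user = {
    "name": "Ana",
    "email": "ana@example.com",
    "address": "Main St 1",
    "birth_date": "1990-01-01",  # collected, but never needed below
}

print(minimise(user, "newsletter"))  # only the email survives
```

The point of the sketch is that minimisation becomes auditable: the allow-list documents, per purpose, exactly which data are considered indispensable, and any attempt to process data for an undeclared purpose fails explicitly rather than silently.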

In this context, automated decisions are particularly relevant, as under Article 22 of the GDPR every “data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Consequently, data processing using AI must be fair: the objective is for AI systems not to generate biased results.

Compliance and accountability are paramount. It is essential for those who process personal data to establish a comprehensive organisational framework, i.e., policies and procedures that ensure personal data are processed in accordance with the GDPR, and to be able to demonstrate those policies and procedures in order to avoid sanctions.

Thankfully, data protection authorities have turned their attention to AI and are providing specific guidance on its responsible use.

As an example, the district court of The Hague ruled in February 2020 that the “SyRI” system, which the Dutch government was using to combat tax fraud, breached the European Convention on Human Rights because, in using more personal data – processed with AI – than was necessary for the purposes it sought to achieve, it was disproportionate.

Similarly, in February 2020, a French court ruled that schools using facial recognition technology were processing data in breach of the GDPR as such processing was disproportionate.

Finally, it is worth mentioning that the US Federal Trade Commission imposed a $5 billion penalty on Facebook for its management of user privacy in the wake of the Cambridge Analytica scandal. Through apparently harmless surveys on Facebook, users were asked to grant a series of permissions without being informed of what data would be collected, how it would be processed, for what purpose or by whom, and information about the users’ activity, location and contacts on the network was collected. Some 270,000 users took this online survey, which resulted in the collection of information from 50 million profiles.

This case is a clear example of a violation of the principle of purpose limitation that governs the processing of personal data: although Cambridge Analytica had obtained consent to process personal data for certain purposes, the information was in fact diverted to other ones. As this case shows, AI and big data, when not managed correctly, can be used to manipulate users.

The problem with big data and artificial intelligence is that their value lies precisely in the unexpectedness of the findings they produce. It may therefore occur that consent is sought to process data for one purpose, but once the data are processed, the result differs from what was originally intended, and the purpose the data are useful for changes.

This, of course, creates a risk of the data being re-used for purposes other than those for which consent was given. A potential solution is to anonymise the data, although, given the technologies available today, this does not preclude the possibility of the data being re-identified despite their previous anonymisation.
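The re-identification risk mentioned above can be illustrated with a short Python sketch. A common but insufficient approach is to replace an identifier with its cryptographic hash; this is pseudonymisation, not anonymisation, because anyone holding a plausible list of identifiers can recompute the hashes and link the records back to individuals. All names and data here are invented for illustration.

```python
import hashlib

def pseudonymise(email: str) -> str:
    """Replace an identifier with its SHA-256 hash (pseudonymisation only)."""
    return hashlib.sha256(email.encode()).hexdigest()

# A "de-identified" dataset released for analysis: hashed email -> attributes.
released = {pseudonymise("ana@example.com"): {"age": 33, "city": "Madrid"}}

# An attacker who can guess or obtain candidate identifiers simply hashes
# each candidate and looks it up, re-identifying the record.
candidates = ["bob@example.com", "ana@example.com"]
reidentified = {
    e: released[pseudonymise(e)]
    for e in candidates
    if pseudonymise(e) in released
}
print(reidentified)  # → {'ana@example.com': {'age': 33, 'city': 'Madrid'}}
```

This is why hashing alone does not take data outside the scope of data protection law: as long as re-identification remains reasonably possible, the data are still personal data.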