What’s next for Data Retention Law in the EU?

Data retention defines the policies of persistent data and records management for meeting legal and business data requirements. A central concern, however, is that data retention enables mass surveillance: by analysing the retained data, governments can identify individuals’ personal information, such as their location.

The Madrid bombings in 2004 and the London Underground bombings in 2005 prompted the creation of harmonised data retention law in the EU. Unsurprisingly, this move met with significant resistance from human rights organisations, privacy advocates, and citizens, who challenged a data retention law’s compatibility with their rights to privacy and data protection.

Nonetheless, the Data Retention Directive was adopted in 2006, placing an obligation on providers of publicly available electronic communications services and of public communications networks to retain specific communications data for law enforcement purposes. Notably, the Directive requires Member States to ensure that communication providers retain the data specified in the Directive in order, among other things, to trace and identify the source of a communication, to determine the date, time, and duration of the communication and to identify the location of mobile communication equipment. Crucially, the data must be available to “competent” national authorities in specific cases “for the purpose of the investigation, detection, and prosecution of serious crime, as defined by each Member State in its national law.”

Member States’ dissatisfaction with implementing this data retention policy was highlighted by the European Commission in 2011 in a report evaluating the Directive. Although the Commission recognised that data retention is a valuable tool for ensuring criminal justice and public protection, it also noted service providers’ concerns about compliance costs and civil society organisations’ argument that the Directive infringed the fundamental rights to privacy and the protection of personal data. Since the Directive’s introduction, the Court of Justice has been asked to answer preliminary questions referred by Member States’ courts, further clarifying the nature of data retention law.

 

Joined Cases C-511/18, C-512/18 and C-520/18 La Quadrature du Net and Others [2020]

In this case the Court was referred preliminary questions from France and Belgium; the former questioned the legality of surveillance techniques introduced since 2015 to combat terrorist attacks, and the latter questioned the legality of its respective data retention regimes. Both referrals asked whether the general retention of communications data in Member States could be justified as a measure for safeguarding national security under Article 15(1) of the e-Privacy Directive, when measured against Article 7 (Respect for Private and Family Life) and Article 8 (Protection of Personal Data) of the Charter. Due to the similar nature of both queries, the cases were joined and judgment was given in them at the same time.

In its judgment, the Court laid out conditions for general and indiscriminate retention of data in the case of a ‘serious threat to national security’. Such retention should be limited to ‘strictly necessary’ situations, be subject to safeguards and not be ‘systematic in nature’. In these cases, EU law was held to apply when national governments force telecommunications providers to give access to data, including where this is done for the purposes of national security. Doing so, even as a preventative measure, was not permitted under EU law, especially where there was no link between the conduct of the individual whose data is affected and the objective pursued by the legislation. In its interpretation of Article 15(1) of the Directive, the Court highlighted that the retention measure must be ‘strictly’ proportionate to its intended purpose and must be subject to review either by a court or by an independent body with binding authority.

 

Case C-623/17 Privacy International [2020] 

This case, ruled on the same day as La Quadrature du Net, concerned the collection of bulk communications data by security and intelligence agencies. The facts of the case date back to 2001 (GCHQ) and 2005 (MI5), running up until the passing of the Investigatory Powers Act in 2016. During these years, the Secretary of State issued directions to electronic service providers under s.94 of the Telecommunications Act 1984, requiring them to provide the intelligence agencies with bulk communications data. Privacy International, an NGO, argued before the Investigatory Powers Tribunal (IPT) that such actions were contrary to EU law. The IPT’s provisional judgment stated that the matter fell outside the scope of EU law, as it touched upon national security.

The case was then referred to the CJEU, which was asked to rule on two questions: first, whether such a situation falls within the scope of EU law and, second, if so, whether the Tele2 judgment should apply. The CJEU held that the matter did indeed fall within the scope of EU law and that the actions taken were unlawful. In paragraph 81 of the judgment, the CJEU also recalled the importance of the principle of proportionality and of doing only what is “strictly necessary”, highlighting that the United Kingdom had exceeded such necessity in its actions. Following this statement, the CJEU proceeded to hold that appropriate safeguards must be observed in such situations.

 

The future of data retention in the EU 

The issue of data retention has long been pushed back and forth between privacy advocates and human rights activists on one side and governments invoking national security or similar “shields” on the other. Police and government requests for information are usually made for investigative purposes, but the slightest overreach can constitute mass surveillance in privacy advocates’ eyes. Nonetheless, retaining and accessing personal data in the field of electronic communications to safeguard national security and deter crime has become common practice among national security agencies throughout the European Union.

The series of cases C-623/17 (UK), C-511/18 (France), C-512/18 (France) and C-520/18 (Belgium) since 2015 indicates that data retention is warranted where there is a serious threat to national or public security, and that the measure must be ‘strictly’ proportionate to its intended purpose. The doctrine of ‘strict’ proportionality arguably becomes an endorsement of national security grounds, and the proviso leaves a clear loophole, even though the rulings start from the position that general and indiscriminate retention rules are incompatible with EU law. More recently, the decision of 2 March 2021 in Case C-746/18 HK v Prokuratuur shows the EU’s consistent attitude: in the context of criminal law enforcement, access to data such as traffic and location data, which can be intrusive to an individual’s private life, is permitted only to combat serious crime or to prevent serious threats to public security. In that ruling, the CJEU largely confirmed its previous judgment in La Quadrature du Net.

The ‘serious threat’ and ‘strictly necessary’ provisos obviously give some countries an interesting reason, and time, to push back against the CJEU’s view. For example, France is unwilling to follow the decision in La Quadrature du Net, in which the CJEU set a high threshold for retaining and accessing telecommunications data for law enforcement and national security purposes. France has therefore acted proactively, trying to circumvent such issues on the grounds of “constitutional identity” and national authority. [1]

Meanwhile, this issue may also have a knock-on effect and place the EU in an embarrassing double-standards position. After all, through Schrems II the EU has shown deep distrust of data transfers to the United States on account of US surveillance practices.

So, in the future, will national security grounds continue to be scrutinised repeatedly in the EU and remain a crack in legal policy? Will it remain controversial who is best placed to carry out such scrutiny in the law enforcement context? [2] For example, must the reviewing body be independent of the authority requesting access to the data? At the very least, we can expect data retention to keep the CJEU busy for the foreseeable future.

 

[1] Laura Kayali, ‘France seeks to bypass EU top court on data retention’, https://www.politico.eu/article/france-data-retention-bypass-eu-top-court/, accessed on 24 March 2021. 

[2] Thomas Wahl, ‘Conditions of Access to Retained Telecommunications Data for Law Enforcement’, https://eucrim.eu/news/ag-conditions-access-retained-telecommunications-data-law-enforcement/, accessed on 24 March 2021.

Comment on the ‘Right to be Forgotten v Right to Freedom of Expression, Who Wins?’

We noticed that the blog on the ‘Right to be forgotten…’ takes a comparative approach in discussing the balancing of competing rights. This made it an even more interesting read. We will now go on to comment on several points raised below.  

Firstly, the blog makes a prudent analysis of the EU’s jurisprudence and astutely comments on how the case law has furthered the conceptualisation and understanding of the right to erasure, doing so in a fine comparative manner. It is interesting that the authors are in general agreement with the case law and take a pragmatic approach to the competing rights, diplomatically calling it a draw between the two values, a balance which they agree the EU’s jurisprudence has correctly articulated.

Secondly, by highlighting the right to be forgotten versus the right to freedom of expression, the blog reminded us of the broader debate around competing rights which was discussed in our first and second lectures (see the articles by Solove [2006] and Fuster [2014] for reference). The reality is that several privacy rights will often have to be balanced against other competing rights and interests. Sometimes the balance will tilt in favour of privacy and other times it will not. As the blog highlighted, the decision in the Google Spain [2014] case noted the limits to the right to be forgotten. This reminds us of the comment Dr Veale made in Lecture 5 on the right to erasure, suggesting that the right should be narrower than it currently is.

Lastly, the comparative approach adopted in this blog is worthy of some further comment. It is interesting to see how different jurisdictions are likely to take different approaches to the balancing act between privacy and other competing rights. As the blog pointed out, this may be based on the values to which such jurisdictions give more weight. The blog gives the example of prevailing US and EU values and how these affect the treatment of data protection in those jurisdictions. This contrast takes us down memory lane to Lecture 2, when we discussed ‘The Emergence and Structure of Data Protection’. While we acknowledge that the Omnibus and Sectoral regimes appear to sit at two contrasting poles, Lynskey highlighted that the nuances of their practical application make the contrast less obvious. We wonder whether Lynskey’s suggested ‘extraterritorial impact of the EU regime’ will rub off on the USA in a significant way.

In conclusion, we loved the simple and easy-to-understand style of writing adopted in the blog. The authors highlighted some interesting conflicts between rights and jurisdictional approaches. We hope this comment has done justice in responding to those issues by drawing on the knowledge gained in lectures thus far.

Right to be Forgotten v Right to Freedom of Expression, Who Wins?

In the past, forgetting was the norm for humans because it was not easy to keep data. Nowadays, thanks to the rapid development of digital technology, the cost of storing data has fallen and fallen rather than rising. Information that enters the internet cannot easily be forgotten and can spread unnoticed through search engines or social media platforms, with unpredictable consequences for those involved. For example, on October 10, 2012, 15-year-old Amanda Todd took her own life after being bullied over scandalous images circulated on the internet.

The need for a right to be forgotten has created much debate, especially as a huge chasm has opened over whether the right to be forgotten infringes the right to freedom of expression. Proponents see it as a landmark in the development of individual privacy rights. Opponents see it as a heavy regulatory burden that will have a direct detrimental effect on general legal freedoms in the internet sphere. So is there really an irreconcilable contradiction between protecting the right to freedom of expression and the right to be forgotten?

The right to freedom of expression can be found under Article 11 of the Charter of Fundamental Rights of the European Union and under Article 10 of the European Convention on Human Rights [1]. The CJEU defines freedom of expression as including “the expression of opinions and the freedom to receive and impart information” [2]. In the debate between the right to be forgotten and the right to freedom of expression, it is the freedom to receive and impart information that is more often discussed. The EU has developed detailed balancing principles based on the idea that the right to be forgotten is as important as the right to freedom of expression [3].

However, the EU tends to favour privacy in its judgments compared with other countries and regions, especially the US [4]. This is because the cornerstones of privacy and freedom in Europe and the US are different. Europe sees human dignity as central to the realisation of individual liberty, and privacy as an expression and safeguard of human dignity; the US considers freedom of expression a tool for realising democratic values, and its conception of privacy is more focused on countering government intrusion. Nonetheless, different perceptions of the ranking of rights cannot be the main basis for evaluating the merits of those rights.

Furthermore, Article 17 of the GDPR (2016) [5] is intended to allow the removal of potentially damaging private information about individuals. Instead of a “right to be forgotten”, however, a more limited “right to data erasure” is implemented. Article 17 provides that the data subject has the right to request erasure of personal data relating to him or her on a number of grounds.

Major criticisms stem from the idea that the right to be forgotten would restrict the right to freedom of expression. The US, with its strong domestic freedom of expression laws, maintains that it would be challenging to reconcile these with the right to be forgotten. There are also concerns about the requirement to take down information that others have posted about an individual; the definition of personal data in Article 4(1) includes “any information relating to the individual” [6]. Some argue that this would require companies to take down any information relating to an individual, regardless of its source, which would amount to censorship and result in big data companies eradicating important information.

Nevertheless, the GDPR balances this. It introduced an exception under Article 17(3) that sets out limits to the right to be forgotten, including for the exercise of the right to freedom of expression and information [7]. Ultimately, as Jef Ausloos observed, the goal of the right to data erasure is not to silence the media but to place more effective limitations on the commercial holding of data [8]. Freedom of expression could, of course, be threatened if we allowed anyone who wished to erase their personal history.

A landmark EU case that touches on the issue of balancing these two vital rights, and the extent of the protection of personal data, is the Google Spain case [2014] [9]. In this case, Mr Costeja González brought a complaint before Spain’s Data Protection Agency against La Vanguardia, a Spanish newspaper, Google Spain and Google Inc. Mr González’s request was twofold: first, that the Spanish newspaper remove or alter the record of his 1998 proceedings so that the information would no longer be available through internet search engines and, second, that Google Spain and Google Inc. remove this data as well. Mr González’s main argument was that the proceedings had been fully resolved and should no longer appear online.

The European Court ruled that all European citizens have a right to request that commercial search firms such as Google, which gather personal information for profit, remove links to private information when asked, provided that the information is no longer relevant. The Court did not, however, rule that newspapers should start removing articles. It affirmed the judgment of the Spanish Data Protection Agency, which supported press freedoms and rejected the request to have the relevant article removed.

Why is this case so significant? The decision establishes a precedent that covers the right to be forgotten and the limits of this right [10]. It preserves individuals’ rights to privacy and the protection of personal data while also holding that there are limits to those rights.

In conclusion, the right to freedom of expression and the right to be forgotten are of fundamental importance for all EU citizens. What Article 17(3) of the GDPR (2016) and the Google Spain case [2014] manage to do is strike a proper balance between the two rights. Any threat to the right to freedom of expression from the right to be forgotten may have been greatly exaggerated, as these two mitigating factors defend the coexistence of the two rights and minimise as far as possible the interference of one with the other. Hence, it is appropriate to say that the Right to be Forgotten v the Right to Freedom of Expression is a draw and, as a result, a win for all EU citizens after all.

Footnotes:

[1] Charter of Fundamental Rights of the European Union (2012/C 326/02) and Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR) (1950).

[2] Case C-73/07 Tietosuojavaltuutettu v. Satakunnan Markkinapörssi E.C.R. I-9831 [2008]. Singleton S, Balancing a Right to Be Forgotten with a Right to Freedom of Expression in the Wake of Google Spain v. AEPD (2016), 44 GJICL 165, p: 178.

[3] Kulk S and Borgesius FZ, “Privacy, Freedom of Expression, and the Right to Be Forgotten in Europe” in Evan Selinger, Jules Polonetsky and Omer Tene (eds), The Cambridge Handbook of Consumer Privacy (Cambridge University Press, 2018), p: 11.

[4] US Courts Should Not Let Europe’s Right to Be Forgotten Force the World to Forget (2017), 89 TLR 609, p: 611.

[5] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[6] Ibid, Article 4(1).

[7] Ibid, Article 17(3).

[8] Ausloos J, From Individual Rights to Effective Protection (OUP, 2020).

[9] Case C-131/12 Google Spain SL and Google Inc v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González ECLI:EU:C:2014:317 [2014].

[10] Google Spain SL and Google Inc v Agencia Española de Protección de Datos [2014], Columbia Freedom of Expression, Columbia University. <https://globalfreedomofexpression.columbia.edu/cases/google-spain-sl-v-agencia-espanola-de-proteccion-de-datos-aepd/>

The Defective Neuralyzer in the GDPR

The right to erasure (‘right to be forgotten’) was introduced by Article 17 of the General Data Protection Regulation (GDPR) to enhance the level of legal certainty in terms of privacy for data subjects. Even though this regulation came as an attempt to better balance obligations between data subjects and controllers, it remains questionable whether such a right can be fully exercised in the digital era, where information spreads at the speed of light and technology does not evolve in a friendly way for most of its users. The right to be forgotten is one of the mechanisms brought in by the GDPR to give data subjects more control over their personal data and privacy, but questions have arisen as to how it affects both data subjects and controllers.

Before the GDPR, data subjects’ control over personal data was regulated by the European Data Protection Directive, and the right to be forgotten was implicitly derived from an interpretation of Articles 6.1(e) and 6.1(d), known among scholars as the “passive right to be forgotten”. In summary, these provisions required that personal data be kept by controllers only for the time necessary to achieve their purposes and be deleted, erased or rectified once the purpose for which the data had been collected was achieved. This kind of regulation, however, left data controllers a wide field in which to keep the personal data they had collected, drawing regulators’ attention to the need for different treatment under the GDPR’s provisions. Hence, what regulators came to value most in relation to personal data, privacy and the right to erasure (to be forgotten) was the data subject’s consent.

The ex-post empowerment approach to providing for the ‘right to erasure’ in the GDPR has prompted discussion of its mode of enforcement. The claims concern the range of responsibilities that the approach places upon both the data subject and the controller. While there is no doubt that the right is an important tool in the hands of data subjects, the ex-post approach can also be argued to devalue the right, since the paramount responsibility lies with individuals who are mostly unwilling to exercise it. Potential reasons for this include the “very limited individual impact (or at least the difficulty in measuring it) and the apparent high threshold for taking legal action”. Against this, there is also the controller-centric view that the modalities of enforcing the right hinder the economic operations of the controller.

In a typical scenario, controllers are usually multinational corporations with asymmetric bargaining power over data subjects. While they are capable of negotiating terms, the CJEU’s decision in the Google Spain case, followed by the GDPR, imposes an obligation on controllers to effect erasure if so requested. Controllers are now burdened with hundreds of thousands of requests for erasure of personal data, and the internal handling of such requests imposes substantial costs on internet service providers. The expanding volume of requests poses such challenges that no judiciary has the resources to handle that volume of work. With no meaningful appellate system in place, controllers are likely to erase links or data in case of doubt. This has an adverse impact on freedom of expression and the right to information. There is also the question of controllers compromising security in order to fulfil the GDPR objective of allowing data erasure. An example can be found in Apple’s defence that once a user’s voice recording on Siri is uploaded to Apple’s server, its link to the user’s account is cut off and cannot be located thereafter.

The construction of the right to be forgotten is often considered an inadequate solution. As is now the position after Google LLC v CNIL (France) [2019], de-linking has to be undertaken only on those versions of a search engine (the data controller) that correspond to the Member States of the EU. This vitiates the suggestion put forward by the Article 29 Working Party in 2014 with regard to an extra-European effect. The problem is further compounded by differences in the domestic law of various countries, which make it practically impossible to bind nations to an overarching data protection regime.

The internet has an unforgiving memory and works more like quicksand. The claimant in the Google Spain case, Mario Costeja González, in asserting his claim to be forgotten, is now remembered by the internet: more than 70,000 search results come up for his name alone, almost always associated with his case and the debts he tried so hard to make the internet ‘forget’. What is more, the original information about Mr González was never deleted; it was only the link from the search engine that was removed. Although the Regulation gives individuals the chance to claim this right, it can remedy matters only to the extent of disclosure in the search engine.

The right to be forgotten tries to tackle an important problem, but with a very blunt instrument which could also have a chilling effect on access to information and ultimately amount to censorship. Looking at UK government requests to Google to remove content over the past 12 years, a total of 118,991 items have been removed since 2009, for reasons including but not limited to privacy, national security, defamation, violence and trademark. The decisions to remove these items are made by Google itself and are entirely controlled by its own internal decision-making process. This marks an uncomfortable shift: the Regulation has moved the responsibility for protecting privacy and implementing censorship from the judiciary to private businesses, which regulate themselves and the information that we receive. There is no regulatory obligation or process requiring the search engine, when handling such a request, to take into consideration any response from the party that disclosed the information, making these decisions quite arbitrary.

This brings us to the question of what it means to ‘forget’ in an era of artificial intelligence and social media, where personal data is remembered in a multitude of forms and has become a currency of the internet. It is the “panopticon beyond anything Bentham ever imagined”. Memory in technology is vastly different from remembering in the human sense, and hence the right to be forgotten, while seemingly straightforward on paper, does not translate in the same manner for machines and technology.

In machine learning environments, current methods of implementing data privacy through data anonymisation, minimisation and even deletion come at the cost of a loss of functionality in the technology itself. While the right under the data protection framework focuses on legitimacy grounds, purpose and our right to object, much of our data remains and continues to be utilised through many other means, and it may well be impossible for individuals to track, identify and remove all of it. With such a reactive measure, the current legal framework does not provide enough safeguards but puts the burden on individuals to claim and request to be forgotten in a realm that is beyond the jurisdiction of the framework itself. Ultimately, the aim should be to integrate legal and technical approaches to enable the right to be forgotten. The solution is not to regulate technology to promote privacy, but to promote privacy by using technology to regulate itself.

 

References

  1. Hans Graux, Jef Ausloos and Peggy Valcke, ‘The Right to Be Forgotten in the Internet Era’ (12 November 2012), ICRI Research Paper No. 11, available at SSRN: https://ssrn.com/abstract=2174896 or http://dx.doi.org/10.2139/ssrn.2174896.
  2. Jef Ausloos, ‘The Right to Erasure in Practice’ in The Right to Erasure in EU Data Protection Law (OUP 2020), retrieved 21 February 2021 from https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198847977.001.0001/oso-9780198847977-chapter-8.
  3. Michael Douglas, ‘Questioning the Right to Be Forgotten’ (2015) 40 Alternative LJ 109.
  4. Jeffrey Rosen, ‘The Right to Be Forgotten’ (2011-2012) 64 Stan L Rev Online 88.
  5. Urs Gasser, ‘Recoding Privacy Law: Reflections on the Future Relationship Among Law, Technology, and Privacy’ (2016) Law, Privacy & Technology Commentary Series.
  6. Tiffany Li, Eduard Fosch Villaronga and Peter Kieseberg, ‘Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten’ (2017).
  7. Rolf H Weber, ‘The Right to Be Forgotten: More than a Pandora’s Box’ (2011) 2 J Intell Prop Info Tech & Elec Com L 120.

Are Digital Immunity Passports the next Must-Haves?

Covid-19 has truly challenged the perseverance and durability of our society; more importantly, it has triggered an unprecedented boom in innovation aimed at meeting our needs and addressing novel issues. A key debate since the start of global lockdowns in March 2020 has been how best to manage the infection rate and thereby gradually bring the virus under control.

To address this, besides age-old methods such as social distancing and isolation, digital tracing applications have been introduced which use a proximity system to notify smartphone users of potential exposure to the virus. This further evolved into the concept of Digital Immunity Passports (DIPs): applications that manage an individual’s Covid-19 test results and health certificates. The generally favoured DIP model leans towards a decentralised system of data collection, in which only the data owner is able to use or share their data, which is end-to-end encrypted and held only on the mobile device. In this post, we will continue the tracing debate, specifically discussing the use of DIPs and whether their domestic and international use would face challenges in privacy law and other areas.

Decentralised DIP systems have been lauded by the airline industry and by countries such as Chile, Germany and Italy. In November 2020, after the three leading vaccine developers, i.e. Pfizer, Moderna and Oxford-AstraZeneca, reported strong efficacy results for their vaccines, the International Air Transport Association immediately announced that its DIP was in the final phase of development, implying that it should be used to revive air travel. The CommonPass Project (which collaborates with the World Economic Forum) also has a DIP model designed to streamline vaccine information across borders. DIPs are marketed as a global standard necessary to help society move away from costly and difficult social distancing and blanket isolation measures. While DIPs may succeed in certain parts of the globe, some governments struggle to deliver a confident and robust system on account of public trust. The main concerns are how such data is used (‘mission/feature creep’) and how it is protected (decentralised or centralised data collection, ‘sunset provisions’, etc.).

In the UK particularly, we have seen and can compare both the centralised and decentralised systems used in tracing. Evidently, the latter (the Google-Apple tracing system based on the principles of the DP-3T protocol) was more successful because of its privacy-friendly model and has replaced the government’s centralised system since autumn 2020.[1] The tussle between the need to assist public healthcare and the need not to compromise privacy lies at the heart of these two models. While DIPs are presented as privacy-friendly in theory, widespread utilisation of DIPs may also pose other issues, such as infringing human and civil rights.[2]
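To make the decentralised idea concrete, the sketch below is a highly simplified illustration of our own (not taken from the DP-3T specification or the Google-Apple API, and with all names and parameters hypothetical) of how such a protocol keeps matching on the device: phones broadcast rotating pseudonyms derived from a local secret, record what they hear, and only the keys of users who test positive are ever published.

```python
# Highly simplified, illustrative sketch of a decentralised exposure-notification
# flow (hypothetical code, not the real DP-3T protocol or its cryptography).
from typing import List, Set
import hashlib
import secrets

def pseudonyms(daily_key: bytes, slots: int = 96) -> List[str]:
    """Derive rotating pseudonyms for one day from a device-local secret key."""
    return [hashlib.sha256(daily_key + bytes([i])).hexdigest()[:16] for i in range(slots)]

class Phone:
    def __init__(self) -> None:
        self.daily_key = secrets.token_bytes(16)      # never leaves the device
        self.broadcast = pseudonyms(self.daily_key)   # what this phone emits
        self.heard: Set[str] = set()                  # pseudonyms observed nearby

    def observe(self, other: "Phone", slot: int) -> None:
        """Record the pseudonym another phone broadcasts in a given time slot."""
        self.heard.add(other.broadcast[slot])

    def check_exposure(self, published_keys: List[bytes]) -> bool:
        """Matching happens locally: re-derive pseudonyms from published keys."""
        return any(self.heard & set(pseudonyms(key)) for key in published_keys)

# Alice and Bob meet; Bob later tests positive and uploads only his daily key.
alice, bob = Phone(), Phone()
alice.observe(bob, slot=10)
print(alice.check_exposure([bob.daily_key]))   # True, computed on Alice's phone
```

The design point is that the central server only ever sees the published keys of infected users, never a contact history or location trail, which is why this model is described as privacy-friendly.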

Domestically, the UK’s Prime Minister, Boris Johnson, announced in February 2021 that the government will not introduce any domestic vaccine passports and will instead rely on mass vaccination and rapid lateral flow testing. The vaccines minister, Nadhim Zahawi, has also expressed the view that the use of domestic vaccine passports would be ‘wrong’, as it may very well lead to discrimination against those who cannot or will not take the jab.[3] This is, however, merely a public stance, as the government does not explicitly prohibit the private use of immunity passports and instead urges companies that want to use vaccination passports to make sure they still comply with discrimination and privacy law. For example, law firms and other large companies are considering mandatory vaccination of their existing and future employees by amending existing employment contracts and/or including such a clause in future ones. This would require employees to disclose sensitive medical data and would open up the risk of discrimination claims from those who cannot get vaccinated because of pregnancy, religious beliefs or certain health conditions.[4]

Furthermore, in order to use a DIP effectively for international travel, there would also need to be international collaboration and consensus. The first question to ask ourselves, then, is which vaccine(s) (if any) would be considered reliable enough to reach international consensus. Politically, this may raise certain issues (for example, states have refused vaccines that many others have accepted, such as the Swiss regulator declining to approve the AstraZeneca vaccine[5]). Practically too, while countries are actively developing vaccines, no vaccine provides complete immunity, and therefore the certainty a DIP purports to offer may be non-existent, making it arguably ineffective.

From a legal and technical perspective, if a centralised database system were applied to the DIP, data transferred internationally would have to adhere to data transfer regulation, which may complicate matters, particularly in regard to data leaving the EU under the protection of the GDPR. In GDPR terms, the decentralised database system (which is key to the coveted DIP) is consistent with the data minimisation and purpose limitation rules. This weighs very strongly in favour of the DIP. In a decentralised system, the data is stored locally on the device and remains fully anonymous without the need for third-party involvement. For the foregoing reasons the decentralised system appears the more robust option, yet some argue that it may not be successfully lobbied for, since governments lose the ability to observe and track data for certain public health policy measures and businesses do not stand to profit from such models, as the data is not harvested centrally. Alternatively, the centralised system, which collates (identifiable) data at one point, is seamless for government surveillance but would result in massive data mining by companies, ultimately leading to abuse and misuse of data.

In conclusion, the guiding principle of vaccination is to facilitate, not to impose restrictions on those who refuse to take the vaccine. Therefore, primarily on the basis of possible discrimination against those who cannot or refuse to vaccinate themselves, there is a strong case for avoiding the use of DIPs. On the other hand, since social distancing and isolation measures are a cost to society, the economy and our mental wellbeing, the utilitarian approach may be to return gradually to ‘normal’ day-to-day life as much as possible, through decentralised DIPs.

After all, utilisation of the DIP should be treated as an interim measure until the virus is fully contained and/or vaccination is successful; as such, deviating from the most privacy-preserving option may be permissible with appropriate safeguards such as ‘sunset’ mechanisms. Whichever approach is adopted, it is sure to set a precedent for future pandemics and should therefore be decided upon with that in mind.

[1] Leo Kelion, ‘UK virus-tracing app switches to Apple-Google model’ BBC (London, 18 June 2020) <https://www.bbc.co.uk/news/technology-53095336> (Accessed 18 Feb 2021).

[2] Chris Hicks*, David Butler*, Carsten Maple, Jon Crowcroft. SecureABC: Secure AntiBody Certificates for COVID-19. CoRR, abs/2005.11833. 2020. <https://www.turing.ac.uk/research/publications/secureabc-secure-antibody-certificates-covid-19> (paper currently under review) (Accessed 18 Feb 2021).

[3] Kate Beioley, George Parker, Delphine Strauss, Alice Hancock and Siddharth Venkataramakrishnan, ‘UK companies look to make Covid-19 vaccinations mandatory’ Financial Times (London, 16 February 2021)  <https://www.ft.com/content/965dfaf0-f070-4dae-93a6-28bedbdb75da> (Accessed 18 Feb 2021).

[4] ibid.

[5] Sam Jones and Donato Paolo Mancini, ‘Swiss medical regulator rejects Oxford/AstraZeneca Covid vaccine’ Financial Times (London, 3 February 2021) <https://www.ft.com/content/a6a6d64c-a337-4af4-9525-d194571c7887> (Accessed 18 Feb 2021).

 

Comment on ‘Attack of the Cookies’

Hi guys! Well done on your article ‘Attack of the Cookie Monster’. We thought you did a great job and really highlighted some of the key issues surrounding cookies and tracking in the online context. 

We think you did a really good job of explaining why it is so hard for online users to avoid subscribing to cookies. On this subject, we discussed several key reasons why cookies are so easily consented to. 

Firstly, you made an excellent point that users don’t necessarily fully comprehend what a cookie is, what it does and therefore what they are subscribing to. We thought this was a particularly interesting point, especially in the context of the GDPR and the requirement for informed consent. Can consent really be informed and sufficient if users do not understand what exactly they consent to? Although websites may overcome this argument by providing links to the full terms and conditions, studies show a user is unlikely to take the time to read them. Additionally, even if a user did go through the effort of reading the lengthy terms and conditions provided, doing so is unlikely to give them a greater understanding of what a cookie is and what they are signing up to. In providing terms and conditions websites often go to great lengths to word them in a way that can be deemed inaccessible to the average user. When discussing possible solutions to this, it was suggested that widening the GDPR to require websites to make the terms and conditions accessible to the average person might be a way of ensuring more informed consent in the future. However, we acknowledge that this might not be a simple solution given the technical aspects involved in cookies and the varying degrees of understanding users would present, which would differ depending on factors such as age, technical background etc. 

Secondly, websites that use cookies and rely on users consenting to them convince users to do so through the use of dark patterns. What we found most interesting, and possibly concerning, about your explanation of dark patterns was that none of us had heard of or been aware of them before. Until reading the article, none of the group had realised that these dark patterns were being used to ‘push’ users to consent to cookies; after reading your explanation, however, we all remembered similar tactics being used on us during our time online. This highlighted to us just how effective the use of dark patterns is and how easily users can essentially be coerced into consenting to cookies. When discussing this, we drew comparisons between dark patterns and the use of advertising and marketing techniques designed to draw a consumer in. We also thought that the use of dark patterns presented a further problem for ensuring consent is informed, as this technique could easily be used to deter users from reading the terms and conditions via the link provided and instead push them directly towards the ‘I agree’ button.

In addition to dark patterns, we also discussed how consenting to cookies is used as an entry requirement to access the full website. In some situations, users do not have the option to refuse consent to the cookies if they wish to continue to access the website. We argued this method goes a step further than the use of dark patterns as it essentially forces the user to consent. In addition, we felt like this could be contrasted with websites that require a subscription in order for you to access them e.g. you can’t access Netflix unless you log in and have a subscription. However, in this situation, you are not paying for the subscription with money, but with your own personal data. 

A further reason we discussed as a group was the lack of enforcement in situations of non-compliance with the GDPR in the context of cookies. In the EU, in order to conform with the GDPR, especially in its requirement for consent in the collection of personal data, websites use Consent Management Platforms (CMPs). However, these platforms present their own issues as vendors of CMPs have been known to turn a blind eye to obviously illegal configurations of their systems. Hence, as a group, we felt that more enforcement in this area is needed to ensure greater protection of personal data. In particular, we argued that regulators should work further upstream in the data collection chain, rather than only focussing on the downstream companies involved such as the website owners themselves. 

We thought it should also be mentioned that the GDPR only came into force in 2018, and as such the current issues could be a consequence of businesses adjusting to the new requirements. When new legislation comes into force there is often a transition period as everyone adapts to the new situation. As such, the extent of the issues highlighted in the article may lessen over time as more and more companies change the way they comply with the GDPR. Conversely, as the GDPR is new legislation, it will likely have to be adapted to cover situations that were not foreseen during its drafting. Hence, as a group, we thought it would be interesting to see how this situation develops over the next few years.

Comment on Joint Controllers and the Household Exemption

Hi guys, nice job on the article “Alexa, am I a data controller?”! We loved reading it and felt as though you truly walked us through the complexities arising from using smart speakers… it would dissuade more than one person from buying them!

After a thorough reading, we questioned who exactly the owner is. Some of us believed it was powerful companies like Google and Amazon, others felt it was the average consumer buying the speaker. These different perspectives led us to discuss controllership as well as the household exemption. 

The first point we discussed was that of controllership and the extent it should have.  

A lot of focus is placed on the idea that smart speaker owners have little control over their device. Arguably, since the owner can turn off the device/microphone and choose what information is ‘listened’ to by the device (i.e. through trigger words), they may be found to have significant control over it. This could engage the owner’s liability, for example where a third party visits the owner’s house, unaware of the device’s existence, and has their data processed. Another example is where the owner uses the device in a way that provides information about other individuals. For instance, saying ‘Alexa, call John from the XYZ law firm’ includes information about John’s profession, while ‘Alexa, remind me to buy a gift for Nick’s wedding’ provides information about Nick’s marital status.

This element of control may have the same meaning as ‘controller’ under the GDPR. The concept of owner entails someone using a product for their own use. Surely, in a situation like this, they cannot be responsible for the processing of their own data; they are the data subject rather than the controller. They are, however, as you discuss, a data controller under the GDPR definition where other people’s data is concerned. As mentioned above, users may be regarded as “controllers” because they have collected information from friends or other people and “indirectly” submitted it to the manufacturer (i.e., Amazon). In that case it becomes inevitable to discuss the household exemption, but more on this later.

We did, however, question an argument you put forward in the blog. You write that users should not be identified as controllers because it would dilute effective protection: “Making everyone responsible means that no-one will in fact be responsible”, but this is open to debate. Firstly, responsibility should not be evaded; giving users some responsibility can make them more cautious and careful when collecting information from others. Secondly, we agree that users’ responsibilities should be limited in situations of a data breach, but they should be held liable in proportion to their assistance. It is not suitable to use the concept of controller, as described under the GDPR, to hold them liable to the same extent as global tech giants like Amazon or Google.

The second point in the blog that sparked our interest was the household exemption and your position on its breadth.  

According to Article 2(2)(c) of the GDPR, the household exemption applies where the processing of data is carried out in the “course of a purely personal or household activity”. It is questionable whether the scope of the provision should be extended to protect individuals who enable third-party data processing through smart speakers.

It is true that in the event of a data breach, the injured party cannot retrieve anything from the owner. However, the concept of controllership does not stem from whether something can be retrieved but rather from whether there was assistance in obtaining the data. Owning a smart speaker and being negligent in restricting the collection of data (e.g., failing to turn the microphone off) puts the owner in a position of potential liability.

Logically, this facilitation should not enable an owner to be protected under the household exemption. They are liable for holding “decisive influence” (Fashion ID) over the collection of data, especially in their own house. The transmission of data which is not their own is neither a personal nor a household activity.

Furthermore, the household exemption in relation to smart speakers should continue to be narrow due to the wide scope of information these devices can gather. While most people will use these devices to manage their daily lives (grocery shopping lists, etc.), some might also use them for professional reminders. Such use of smart speakers would, technically, fall under the household exemption but would, in practice, cover data that is not purely personal. This could be taken even further, leading one to wonder what happens when a smart speaker user divulges professional information to the device that is covered by a non-disclosure agreement.

It is for all these intricacies related to the household exemption, and the implications it has for data, that, unlike you, we believe the scope of the exemption should remain narrow.

Attack of the cookie monster

Picture this, you arrive at a website and the cookie policy pops up. The intrusive, ‘no browsing the site until you’ve dealt with me’ kind of pop-up. Without thinking you click the green box and it disappears. Hurrah! Off to browse the site you go and you think no more about it. But let us pause for a second to unpick what has just happened. When you clicked the green ‘Accept All’ box you apparently gave your consent for various tracking programs to be placed onto your computer which will monitor your activity and report back to their digital masters. If you are suddenly thinking this seems like something you shouldn’t be able to consent to so carelessly you aren’t alone, the GDPR agrees with you.

“But wait,” you say, “wasn’t the GDPR meant to ensure companies couldn’t do this anymore? Didn’t you get a million emails a few years ago from every company you’ve ever known begging you to stay on mailing lists and promising they would respect and protect your data? Big promises were made about how the GDPR would usher in a new age of digital transparency – so why is it still business as usual?”

Well, it’s not quite business as usual. Before the GDPR you probably didn’t get the pop-up in the first place. Businesses have made a big song and dance about how they are ‘GDPR compliant’ and these intrusive, often annoying, pop-up consent boxes are part of it. Under the GDPR, companies must have a lawful basis for processing your personal data (the data in question here is the information the cookie sends back about your browsing habits). One of these bases is consent, which is why you see the pop-up in the first place. If you agree to having your personal data processed then that’s between you and the data controller (that’s the person who is receiving and using your personal data).

“Sure,” you say, “but everyone knows that nobody reads the small print. If the GDPR was meant to be a game-changer shouldn’t it have predicted businesses would do this? What help is the GDPR if you can sign away all your digital rights without realising it?”

Actually, the GDPR was a bit smarter than you give it credit for. If someone wants to rely on your consent to process personal data the GDPR requires that your consent be (1) an indication of your wishes which is (2) specific and informed, and (3) freely given. Number 1 is pretty easy; you clicked that big green box, and that’s a valid way to express your wishes under the GDPR. Number 2 is a bit more of an issue though – you didn’t know what you were saying yes to. You didn’t read the small print and, even if you had, you wouldn’t have been much the wiser about what was actually going on. As an example, a very standard term in a cookie policy states, “We and our partners store or access information on devices, such as cookies and process personal data, such as unique identifiers and standard information sent by a device”. What are these unique identifiers and standard information? When they access your data, what are they doing with it? Unless you have a keen interest in data processing you’re unlikely to be able to truly understand what is happening to your data.

Okay number 2 looks like a bit of a barrier but what happens if you actually read the policy and had understood it? Nobody made you click on the big green ‘Accept All’ box, surely number 3 isn’t a problem? Not so fast. Studies have shown that over 50% of all sites use ‘dark patterns’ to get you to consent. What are these dark patterns you say, suddenly very nervous? Don’t worry, it’s not a global conspiracy, dark patterns are built on the concept of ‘nudges’, very minor changes to an interface or system designed to get the user to choose a particular outcome. One big nudge is the box that you clicked on being green – subconsciously we all know green means go, carry on, no problems here. The position of the pop-up is important too. It’s often on the bottom of the screen, perhaps off to the left, which makes you less likely to pay it much attention. After all, how important can something be that’s not in the middle? “Hmm,” you say, “this sounds like a bit of a stretch, surely people aren’t so susceptible to very minor formatting changes?” Perhaps they are. When sites use these techniques acceptance rates jump from 0.16% to 83.55%, which adds some weight to the idea that your consent might not have been as freely given as you thought.

“Well clearly I didn’t actually consent to these cookies!” I hear you say. That’s almost certainly true, you didn’t properly consent to your data being processed. “Well, I want them to stop digitally stalking me then!” you demand. Again, slow down. Consent isn’t the only lawful basis to collect and process your data. If processing your data is absolutely necessary to pursue a legitimate interest a business can still do this even without your consent.

“How is digitally stalking me just to be able to advertise a lawnmower I once clicked on by accident legitimate?!” you howl. You may have a point. Not only is it far from certain that this type of behavioural advertising is only possible by collecting massive amounts of personal data, but your right to privacy might outweigh a business’ right to conduct highly targeted marketing campaigns (in this case for lawnmowers). And I say might because the courts haven’t yet resolved the point, not because I personally think businesses have an absolute right to sell you any old tat.

“If I didn’t consent to it and they don’t have a proper reason to do it without my consent, how are they getting away with it?” you righteously enquire. Good question. First, the GDPR isn’t that old. It came into force in 2018 and everyone is still coming to terms with it. Courts have yet to hear cases that will resolve important questions. For their part, although data protection laws existed before 2018 they were nowhere near as stringent, so businesses have only recently started to take it seriously. Therefore, you can expect practices to become more GDPR compliant as time goes by. Secondly, the public bodies who are meant to ensure compliance with data protection laws are generally underfunded and understaffed. The reason they haven’t straightened out businesses’ cookie policies is because they have a lot on their plate. Over the last few years, we have seen massive data loss cases that have dominated data protection agencies’ attention. Cookie policies seem to come further down the list of priorities for them. Thirdly, and perhaps most importantly, who cares? You didn’t before we had this discussion. You blindly clicked on that big green box and wouldn’t have given it a second thought. Businesses are unlikely to reform until there is pressure on them to do so. If internet users are unconcerned about what happens to their personal data why would businesses voluntarily stop doing something which makes them money?

“Well I’m mad and I’m not going to take it any more!” you bellow. One, stop quoting the 1976 classic Network, and two, what are you going to do personally? You won’t stop using the web and you are unlikely to sue them because that’s going to cost you an arm and a leg (and take a long time). You could try and avoid the problem by blocking cookies on your browser but that may stop you visiting certain sites and it doesn’t get at the wider problem. Perhaps the best thing is to raise awareness and encourage other people to be more careful about their personal data going forwards.

“Excellent idea,” you say “I’ll write a Facebook post immediately!”. I see you’ve learned nothing.

Anonymisation in the Panopticon, or Being Naked in the Cyber Agora

Can we ever live without the fear that what we share online will affect us negatively offline?

Data Never Sleeps 8.0
Amount of data generated every minute in 2020

What Luciano Floridi coined ‘onlife’ [1] has become truer than ever in the past year. Our self-conception, our mutual interactions, the way we conceive reality and how we interact with it have all been heavily influenced by our interaction with technology. It has become impossible to separate our lives offline from our lives online. Everything gets done either exclusively through technology or is undoubtedly facilitated by it. In that sense, we are all living ‘onlives’ and generating constant data, to the point of having our actions and thoughts fully documented in the cyber agora.

In 2018, Forbes magazine claimed that “Every day, we create 2.5 quintillion bytes of data – so much that 90 per cent of the data in the world today has been created in the last two years alone”. [2] Meanwhile, IBM predicts that by 2025 the world will store 250 zettabytes (250 trillion gigabytes) of data. [3]
Not that we would know how to count to that…

One of the tools we have created to protect ourselves and our data from misuse and abuse is the GDPR. However, it can be argued that the GDPR’s scope has become too broad [4]: its attempt to be all-encompassing is backfiring. Whilst well intended, the broadness creates confusion and means the Regulation will end up applying to everything, as all data will at some point be personal (or can at least be argued to be). This will lead to a system overload and, ultimately, to the failure of the GDPR. It is thought that in such a scenario the GDPR will be largely ignored and its enforcers unable to close the floodgates it opened.

“It knows too much.” – Barron’s Cartoon, Kaamran Hafeez

As technology advances, the means of (re-)identification advance with it; anonymisation must therefore be irreversible. ‘Information’, while seemingly uncontroversial, is the exact problem: personal data can be anything, regardless of the data’s nature or content. Whether something can be considered personal data should, in theory, be based on whether it can be used for the purpose of influencing individuals.

Case law relating to this notion follows the WP29 approach.

The first data protection case in which the meaning of ‘personal data’ was discussed by the Court of Justice was Lindqvist. [5] It constitutes the defining case for interpreting the scope of the household exemption. The case concerned a catechist, Mrs Lindqvist, who set up a website containing information about herself and 18 of her colleagues, including names, hobbies, telephone numbers and even personal injuries. All of this was done without the consent of the people concerned.

The case was discussed with regard to the EU Data Protection Directive 95/46/EC, and the questions referred for a preliminary ruling included, among others, (a) whether mentioning a person in the manner described above falls within the Directive’s scope and (b) whether such information, placed on a private home page accessible to anyone who knows its address, could fall within the exception in Article 3(2) of the Directive. Under the Regulation (now the GDPR), “household activity” is addressed in Recital 18 as non-commercial activity, along with a description of what it ‘could include’; this creates uncertainty as to the scope of application.

“Remember when, on the Internet, nobody knew who you were?” – New Yorker Cartoon, Kaamran Hafeez

The Court suggested that both the scope and the nature of the processing fell within the limits of the Directive and that the activities of Mrs Lindqvist could not be considered exclusively personal household activity. Addressing the same exemption, the Court in Ryneš [6] considered a CCTV system which, despite being attached to a single household, was nevertheless monitoring a public space. For that reason, the activity could not be considered purely “personal or household”.

When addressing the scope of the household exemption, it is important to note how strictly the courts have construed it: its extremely narrow nature is reinforced by the fact that, to date, no claim has successfully fallen within the exemption.

In contrast, in Breyer the Court took a very broad view of the identifiability criterion – ‘identification measures reasonably likely to be taken’. [7]

“Cloud Data in the West” – Chris Slane

Breyer concerned the dynamic IP addresses of visitors to the websites of German federal institutions and whether such addresses, given the possibility of identifying the visitors behind them, constituted personal data.

The Court addressed the notion through analysing “whether the possibility to combine a dynamic IP address with the additional data held by the Internet service provider constitutes a means likely reasonably to be used to identify the data subject.”

Since website providers were found to have the means likely to identify website visitors through third parties (i.e. internet providers), dynamic IP addresses were considered personal data.

The case reaffirmed the WP29’s broad reading of “all the means likely reasonably to be used either by the controller or by any other person”. Here, the Court explicitly stated that it is not necessary “that all the information enabling the identification must be in the hands of one person”. Thus, in ruling that a legal ban on identification would make the means of identification not reasonably likely to be used, the Court followed the absolute approach.

It has also been suggested that a functional approach to anonymisation would be the most favourable way to preserve the possibility of anonymous data. [8] This approach focuses on the relationship between the data and the environment within which the data exists. Anonymisation has become an important part of the data-sharing toolkit, both as an ethical safeguard and as a procedure that adds value to a business. It can be thought of as an algorithm that takes a privacy-breaching dataset as input and produces a dataset from which individuals can no longer be identified.
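As a purely illustrative sketch of that idea (the field names, records and generalisation rules below are invented for the example, not drawn from any real system or dataset), such an ‘algorithm’ might drop direct identifiers and coarsen the remaining quasi-identifiers:

```python
# Illustrative only: anonymisation as an "algorithm" that removes direct
# identifiers and coarsens quasi-identifiers. All fields are hypothetical.

def anonymise(records):
    """Take a privacy-breaching dataset; return a reduced-risk version of it."""
    anonymised = []
    for person in records:
        decade = (person["age"] // 10) * 10
        anonymised.append({
            # direct identifiers (name, phone number) are dropped entirely
            "age_band": f"{decade}-{decade + 9}",          # 47 -> "40-49"
            "area": person["postcode"].split()[0],          # keep only a coarse area prefix
            "injury_reported": bool(person.get("injury")),  # keep a yes/no flag, not the detail
        })
    return anonymised

dataset = [
    {"name": "A. Parishioner", "phone": "070-1234567",
     "age": 47, "postcode": "541 30", "injury": "sprained ankle"},
]
print(anonymise(dataset))
# [{'age_band': '40-49', 'area': '541', 'injury_reported': True}]
```

Whether the output is actually anonymous, of course, depends on what other data exists that could be linked back to it – which is precisely the point of the functional approach discussed below.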

‘Accidental Data Release’ – Chris Slane

However, criticism and several market failures – such as the 2013 release of journey details of New York City cabs – show that a naive application of anonymisation can leave data open to re-identification, turning it back into personal data: information relating to an identified or identifiable natural person. It is also argued that the risk has grown in recent years, given both the evolution of technology and the increasing financial rewards of attacking systems.
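The taxi episode is frequently cited as an example of naive pseudonymisation: identifiers were hashed, but because the space of possible identifiers was small and structured, every hash could simply be enumerated. The sketch below illustrates the general idea with an invented licence-number format; it is not the actual dataset or the code involved in that release.

```python
import hashlib

# Illustrative only: hashing a small, structured identifier space is not
# anonymisation. The licence-number format here is invented for the example.

def pseudonymise(licence: str) -> str:
    return hashlib.md5(licence.encode()).hexdigest()

# An attacker who knows the identifier format can hash every possible value...
lookup = {pseudonymise(f"LIC{n:05d}"): f"LIC{n:05d}" for n in range(100_000)}

# ...and reverse any "anonymised" value found in the published dataset.
published = pseudonymise("LIC04217")
print(lookup[published])  # -> LIC04217: the record is re-identified
```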

Due to these market failures, a new approach to anonymisation was developed: the functional approach. On this view, one cannot tell from the data alone whether a dataset is anonymous.

“I changed my privacy settings” – Matt Percival

Certain additional issues must be taken into account, e.g. the motivation of an adversary wishing to attack anonymised data in order to re-identify somebody, the potential consequences of disclosure, and how a disclosure might happen even without malicious intent.

The importance of taking into account the ‘data environment’ has been highlighted, as it is “the set of all possible data that might be linked to a given dataset”. This consists of four elements: other data, data users, governance processes and infrastructure. 

The notion of functional anonymisation therefore ties together the ideas of disclosure risk and the data environment. By applying it, we reduce the risk of re-identification through controls on both the data and its environment until that risk is at an acceptably low level. The approach offers a practical framework that delivers the benefits of data sharing without compromising the concept of information privacy.
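To make the idea concrete, one can imagine a toy model in which disclosure risk depends on both the level of detail in the data and the strength of the four environment elements, with release proceeding only if the estimated risk falls below an agreed threshold. The factors, weights and threshold below are entirely hypothetical and serve only to illustrate the reasoning; they are not taken from Elliot et al.’s framework.

```python
# A toy model of functional anonymisation: disclosure risk depends on the data
# AND its environment. All numbers here are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Environment:
    other_linkable_data: float    # 0 (none available) .. 1 (rich auxiliary data)
    user_vetting: float           # 0 (open release) .. 1 (trusted researchers only)
    governance: float             # 0 (no controls) .. 1 (strong contractual controls)
    secure_infrastructure: float  # 0 (raw download) .. 1 (safe-haven access only)

def disclosure_risk(data_detail: float, env: Environment) -> float:
    """Crude illustration: detailed data in a weak environment is high risk."""
    mitigation = (env.user_vetting + env.governance + env.secure_infrastructure) / 3
    return data_detail * env.other_linkable_data * (1 - mitigation)

ACCEPTABLE = 0.05  # hypothetical threshold set by the data custodian

env = Environment(other_linkable_data=0.8, user_vetting=0.9,
                  governance=0.9, secure_infrastructure=1.0)
print(disclosure_risk(data_detail=0.6, env=env) <= ACCEPTABLE)  # True: release may proceed
```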

 

Footnotes

[1] Luciano Floridi (2015) The Onlife Manifesto, Springer, Cham, DOI https://doi.org/10.1007/978-3-319-04093-6

[2] https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/?sh=1bfe9edc60ba

[3] https://www.ibm.com/blogs/services/2020/05/28/how-to-manage-complexity-and-realize-the-value-of-big-data/

[4] Nadezhda Purtova (2018) The law of everything. Broad concept of personal data and future of EU data protection law, Law, Innovation and Technology, 10:1, 40-81, DOI: 10.1080/17579961.2018.1452176

[5] Case C-101/01 Lindqvist EU:C:2003:596

[6] Case C‑212/13 Ryneš ECLI:EU:C:2014:2428

[7] Case C-582/14 Breyer ECLI:EU:C:2016:779

[8] Mark Elliot, Kieron O’Hara, Charles Raab, Christine M. O’Keefe, Elaine Mackey, Chris Dibben, Heather Gowans, Kingsley Purdam, Karen McCullagh, Functional anonymisation: Personal data and the data environment, Computer Law & Security Review, Volume 34, Issue 2, 2018, Pages 204-221, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2018.02.001.

“Alexa, am I a data controller?”

Over the years, developments in technology led to the adoption of the General Data Protection Regulation (GDPR). The GDPR was designed to harmonise data protection laws across EU member states and to provide a more solid foundation for the protection of personal data, fit for the 21st century.

The Regulation lays down rights for individuals and scrutinises the processing of their personal data. It does so by limiting data collection, controlling extraterritorial data flows and governing data processing.

The rise in popularity of ‘smart speakers’ raises interesting and important questions for data protection, given their ability to collect large amounts of very personal data.

Producers of smart speakers, such as Amazon or Google, come under the GDPR as data controllers. A controller is defined as “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data”.[1] Seeing as voice recordings are a form of personal data, it is important that data controllers are governed by laws broad enough to cover the issues such recordings can raise.

The role of a data controller is to determine how personal data is to be processed; controllers are, in effect, the managers of the personal data. They must implement “data protection by design and by default”, meaning that appropriate technical and organisational measures must be in place at all times – both when the means of processing are determined and during the processing itself.

This definition leads to two interesting questions with regard to smart speakers: first, can the owner of a smart speaker be considered a joint controller? Second, if so, does the household exemption apply, and to what extent?

The degree and extent of control is central to determining the “responsibilities of each controller” and whether a party is a controller at all, whether jointly or individually. For the purposes of the GDPR, “control” is understood as the “truthful representation of factual control”.[2]

Arguably, an owner of a smart speaker has very little control over their device. They are limited to activating the device, installing apps and deleting information. By comparison, Amazon and Google have sole control of the processing cloud and extensive decision-making powers.

On this basis, one might credibly assert that owners of smart speakers are not de facto data controllers. Instead, owners might be better seen as facilitators when they activate the smart speaker.

So, if the owner were the data controller, could the household exemption apply? Article 2(2) of the GDPR excludes the processing of data “in the course of a purely personal or household activity”. This is further clarified by Recital 18, which explains that a “purely personal or household activity” is one falling within the management of a house, family or personal life, excluding all professional or commercial activities regardless of whether they take place domestically. In the context of smart speakers, the vast majority of tasks performed will likely fall under the personal, family or household exemption. However, the CJEU has interpreted this exemption very narrowly: owners of CCTV cameras recording parts of a public space have been held to fall outside it.[3]

However, in this context, owners as data controllers would arguably bear responsibility for the information of guests, for example. In that instance, although the third party is within a private sphere, the information recorded by the smart speaker would arguably not constitute personal, family or household information, since it is about the third party themselves.

In assessing whether owners of smart speakers should be liable, it is important to go back to first principles: what is the point of the data protection regime? Who is it trying to protect and why? Is there anything to be gained by considering private individuals jointly liable for data breaches along with companies like Amazon or Google?

In Wirtschaftsakademie, Attorney General Bot argued that an imbalance of power between potential joint controllers does not prevent the less powerful party from being classed as a controller, as a “substantive and functional approach” must be taken to assessing controllership.[4] In this vein, the Article 29 Working Party says that “a broad variety of typologies for joint control should be considered and their legal consequences assessed, allowing some flexibility in order to cater for the increasing complexity of current data processing reality”.[5]

However, we argue that it is inappropriate to construe the concept of controller so expansively, and the Article 2(2) exemption so narrowly, as to risk making owners of smart speakers liable. It is possible to distinguish this position from the decision in Wirtschaftsakademie – a business running a Facebook page has a commercial interest in collecting and processing the data of its visitors, even if it cannot negotiate with Facebook over how this is done. By contrast, owners of smart speakers have no such interest; they are users of a product for purely personal benefit.

It is unclear how increasing the number of liable parties would increase the effectiveness of the GDPR. In the case of a data breach, injured parties would be unlikely to recover anything from the owner of the smart speaker. In any case, as De Conca points out, if the owner were in some sense the ‘real culprit’ behind the breach, other law already exists to protect the injured party (negligence, etc.).[6] A loose analogy to product liability may be drawn here, where the manufacturer, not the intermediary, will usually be liable.[7]

Additionally, this would fragment liability and potentially open loopholes whereby powerful companies like Apple are able to hide behind users who are nominally ‘controllers’ but who in fact have no influence over the product. AG Bobek has argued that this would dilute effective protection: “Making everyone responsible means that no-one will in fact be responsible”.[8] Therefore, we submit that private users of products should not count as controllers; if they do, they should usually fall under the household exemption. The law, especially if it is based on fundamental rights,[9] should protect individuals from those actually wielding power – in this case, big tech companies, not Alexa owners.

In sum, there is some ambiguity over whether owners of smart speakers are controllers under the GDPR – Wirtschaftsakademie suggests they are. Meanwhile, the household exemption is very narrow. We argue that they should not be controllers, or at least that the exemption should be expanded, so that those with de facto power are held accountable.

Footnotes

[1] Article 4(7) GDPR

[2] Ibid, Art 25

[3] Case C‑212/13 Ryneš

[4] Case C‑210/16, opinion of AG Bot, para 76.

[5] ibid.

[6] Silvia De Conca, ‘Between a rock and a hard place: owners of smart speakers and joint control’, 2020 Scripted 17(2), 266.

[7] ibid.

[8] Case C-40/17, opinion of AG Bobek, para 92.

[9] Preamble to the GDPR.