Professor Suzanne Rab is a barrister at Serle Court Chambers in London, where she specialises in regulatory, data protection and competition law. She combines her professional practice with academic roles, including as a law lecturer at the University of Oxford, Visiting Professor at Imperial College London, Professor of Commercial Law at Brunel University London and associate lecturer at the Open University (United Kingdom). Professor Rab is the author of chapters on Telecommunications and Competition Law in Kerrigan, C. (ed), Artificial Intelligence: Law and Regulation (Edward Elgar Publishing, 2022), which inspired this work. The insights presented in this work are drawn from both her professional and academic roles. The views expressed are personal.

Over the past two decades, the field of artificial intelligence or “AI”, founded on the idea that machines can be used to simulate human intelligence, has been reshaping virtually every industry and human endeavour. This presents challenges for regulation when knowledge is still emerging about the underlying phenomena, and raises questions about whether existing legal tools are adequate to address the risks of harmful outcomes without unduly constraining the benefits that AI can unleash. Insofar as data is an input to AI models, the intersection with data protection and data privacy merits close inquiry.
A number of existing legal regimes operate at a horizontal level to address AI (for example, competition law, intellectual property and contract, which are largely sector and technology neutral). However, there has yet to emerge a comprehensive legal regime that focuses squarely on AI, still less on the relationship between AI and data protection and data privacy. This finding is unsurprising, not least because the development of AI is still at an early stage, with much of its potential yet to be realised. In this dynamic market and regulatory environment, this article is intended, at a modest level, to contribute to the existing and emerging global knowledge about the regulation of AI, here with a focus on the interaction with data protection and data privacy. The EU AI Act, which came into force on 1 August 2024, is a significant development in this area. However, my focus in this article is on the other pieces of legislation that deal with AI and data protection and privacy.
The focus reflects the professional interests of the writer, as a UK-based EU lawyer, professor of commercial law and policy adviser and is intended for those multiple audiences. It does not purport to be comprehensive but instead to contribute a distinct viewpoint based on the writer’s real-world and interdisciplinary experience.
AI and the General Data Protection Regulation (GDPR)
The EU has been a pioneer in the development of sector-neutral or “horizontal” data protection laws. A number of features of EU data protection law under the General Data Protection Regulation (“GDPR”)[1] already contribute to ensuring the accountability of AI systems. These include: (1) a preventative, process-oriented approach which restricts the way in which personal data are collected and processed in order to prevent ‘data power’ in the hands of the data controller, (2) strengthening of the rights of individuals to obtain information about how their data are being processed and for what purpose, and to object to such processing under certain circumstances, and (3) enhancement of sanctions against data breaches.
Article 22 of the GDPR provides a limited right for a data subject not to be subject to fully automated decisions based on profiling. In an AI environment of data analytics and personalised customer experiences, an important question is whether Article 22 will increase the strength of data protection law over the generation and application of algorithms where businesses make decisions based on customer profiles.
Although the greater emphasis on data processing ‘harm’ under the GDPR does represent a move towards a more substantive approach, it remains to be seen whether this will be effective, as it depends on two elements. First, data controllers need to embrace the importance of protecting individual rights. Second, individuals need to be aware of, and motivated to exercise, their rights. While there are indications of progress in both these areas, concerns over the use of personal data by Big Tech have called into question whether these instruments provide a workable and effective mechanism to secure the protection of individual rights in a rapidly evolving technological environment where the actual or perceived harm has already occurred.
Data, AI and other EU law policy initiatives and instruments
In parallel with its reform of EU data protection law and adoption of the GDPR, the European Commission developed a reform package on ePrivacy regulation focused on electronic communications, in the form of a proposed ePrivacy Regulation.[2] This merits separate consideration given that it affects the use of communications data (that is, both the content of a communication and the metadata surrounding it) and sets out very specific rules.
Separately, the EU has introduced a new Data Act[3] which entered into force on 11 January 2024 and from 12 September 2025 will apply directly in the Member States throughout the EU. Its main objective is to improve access to and use of data, especially data generated by connected devices, commonly referred to as the “Internet of Things” (“IoT”).
The Data Act makes provision for claims by IoT users against the data holder for the provision of data generated through the use of IoT devices (Article 4). The user can also request the data holder to transfer data to a third-party data recipient (Article 5). The data access claims in the Data Act are limited to data generated by IoT devices. While the Data Act has no direct connection with the use of AI, AI models are important to IoT devices and it is possible to envisage a number of scenarios where these claims may arise which directly relate to AI models.
If an IoT device is AI-based, the user’s right to data provision also includes the data generated by the AI model. A practical example would be voice assistant systems that use AI models to make user conversations more natural. The data generated by an IoT device may also be of interest to a third party seeking to train its own AI models on that data. For example, a manufacturer of a domestic appliance such as a refrigerator with a ‘smart’ expiry date feature may be able to create an AI-based application offering suggestions on food ordering and recipes.
Outlook and conclusions
Given that the experience of GDPR implementation is still relatively recent, it is probably not realistic to expect a wholesale revision in the near future. At the time of writing, the final scope and timing of implementation of dedicated ePrivacy regulation in the electronic communications sector, or more generally, is unclear, but other policy instruments, including the Data Act, have moved to enactment.
There are some overarching points that emerge. First, the Data Act takes a global approach to compliance, so operators outside the EU will need to monitor and plan for its implementation. Second, where a business’s core service is the provision of communications services and where it processes an individual’s content or metadata, the proposed ePrivacy Regulation appears to entail stricter requirements for consent than under the GDPR. For example, outside the use of metadata for billing purposes, communications providers may want to develop new services through the use of AI technologies. The ePrivacy Regulation and the Data Act may present operational challenges where anonymisation or consent are not feasible. Whereas a provider might want to rely on (its) legitimate interests as a lawful basis for processing under the GDPR (Article 6(1)(f)), there is no such provision in the ePrivacy Regulation or the Data Act. Third parties may also be interested in such content and metadata to leverage the data, for example to detect patterns showing the location and intensity of (mobile) users. It is not clear that anonymisation of such data would be possible in all such deployments without compromising the rationale for seeking the data in the first place. Where such data is shared with another communications provider (for example, to inform better network optimisation), consideration will need to be given to the antitrust implications to the extent that such data sharing would involve sharing competitively sensitive information with a competitor.
The new data access rights in the EU Data Act create a tension with the interests of data subjects and holders of trade secrets. The text attempts to strike a balance between these conflicting interests. It remains to be seen whether this balance can be achieved in practice and whether this model can be used for AI regulation elsewhere. In the EU at least, the practice of regulators and courts, with the Court of Justice as final arbiter, will be decisive in ensuring a consistent approach.
Notwithstanding that the experience of AI and GDPR implementation is still relatively recent, useful work could be done to inform appropriate and targeted policy making in relation to the following issues around the intersection with data protection and data privacy:
- Clarification of the goals of data protection regulation through a survey of its origins and implementation in different countries and emerging enforcement trends so as to discern best practices against those goals or determine whether other goals may be more appropriate (e.g. focusing more on data processing ‘harm’ as opposed to data procedures).
- Identification of real-world problems in AI use cases which are not caught and whether these are consistent with the underlying purpose of the regulation and, if not, whether the regulation needs to be changed or improved.
- Identification of possible initiatives that may be pursued voluntarily by industry to address any such gaps. This aspect is particularly important in balancing the risk of regulatory overreach against the need to win over the hearts and minds of individuals and regulators.
- Exploration of the scenarios in which content or metadata is useful to communications providers, or to third parties who might want to leverage such data, outside the area of billing the communications provider’s own customers, and how workable it is in practice to seek consent for such usage from the customer.
The aims of this article were modest and focused; namely, to explore the interaction between AI and data protection and data privacy through the lens, primarily, of EU legal and regulatory developments and proposals. In so doing, it is hoped, and even anticipated, that it will inspire others to undertake more focused analysis, including comparative analysis of interventions in other jurisdictions, at a time when there is yet to emerge a global consensus as to the right balance between horizontal and sector-specific regulation of AI, and as to whether AI requires its own bespoke regulatory regime and sector regulator.
[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). OJ L 119, 4.5.2016, p. 1–88.
[2] Proposal for a Regulation of The European Parliament and of The Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications) COM/2017/010 final – 2017/03 (COD).
[3] Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act). OJ L, 2023/2854, 22.12.2023.