Dr Tanya Lokot and Dr Eileen Culloty will appear before the Joint Committee on Justice, Home Affairs and Migration later today. The committee has invited experts to discuss the EU's Digital Omnibus package.
The Digital Omnibus Package introduces substantial changes to the rules governing data protection and AI systems. The ostensible reason is to simplify complex rules and thereby increase European competitiveness. Simplification is a legitimate goal, but critics warn that the proposals represent a significant weakening of safeguards designed to protect the public.
The Commission has not provided evidence that the proposed changes address disproportionate burdens on companies that comply with existing EU rules. Nor did it conduct a dedicated impact assessment on fundamental rights. This is surprising given that the purpose of the GDPR and the ePrivacy framework is the protection of fundamental rights. Alberto Alemanno, Jean Monnet Professor of European Union Law at HEC Paris, observes that a pattern is emerging whereby the Commission employs the omnibus as a “legislative technique ... to systematically avoid impact assessments, bypass consultation periods, and skip the evidentiary requirements increasingly required to satisfy the principle of proportionality.” Similarly, Edoardo Celeste, Associate Professor of Law, Technology and Innovation at DCU, points out that such regulatory streamlining risks “inflating the level of relativity of personal data” while “restricting the possibility for the data subject to exercise the[ir] rights”.
With these considerations in mind, the FuJo researchers highlight the following issues in their written submission:
- The proposals weaken the definition of personal data under the General Data Protection Regulation (GDPR). The GDPR currently relies on an objective test: if a person can be identified by anyone using reasonable means, the information qualifies as personal data. This approach avoids subjectivity and ensures a consistent baseline of protection. The Omnibus Package moves towards a controller-specific interpretation. Under this approach, companies could argue that data is not personal if they believe they cannot identify individuals, even if others could. This would allow the same dataset to be treated as personal data by one company and non-personal by another, undermining legal certainty and weakening protections for data subjects. NOYB notes that this would “go far beyond a targeted modification of the GDPR, a ‘technical amendment’ or a mere codification of CJEU jurisprudence”.
- The proposals also restrict the ability of data subjects to exercise their right of access to their own data, stipulating that this right may only be used for “data protection purposes”. This effectively treats all other requests as abusive and would likely exclude journalistic, research, political, economic, legal and many other purposes for accessing one’s own personal data. Such a restriction would undermine existing data subject access rights, depriving individuals of a right that is routinely used to support legal disputes.
- The Package expands the legal bases for AI development by allowing developers to rely on “legitimate interest” rather than consent when processing personal data for AI training. In practice, this could further normalise the broad use of personal data without sufficiently robust necessity and proportionality checks. Many individuals are unaware that their data is incorporated into AI training pipelines, raising significant concerns about transparency and consent.
- The proposals weaken protections for sensitive data. Special category data could remain within AI training datasets where removal is considered by companies to be too “difficult”, with little detail as to how such an assessment would be carried out. This risks normalising the retention of highly sensitive personal information in systems whose downstream uses may be difficult to predict or control.
- The Omnibus expands the definition of research activities under the GDPR. Existing provisions provide flexibility for scientific research because it serves a clear public interest. These safeguards depend on maintaining a clear distinction between scientific research and commercial product development. By broadening the definition to include a wider range of industrial and commercial activities, the Omnibus blurs this boundary. This could allow firms to justify extensive reuse of personal data under the banner of “research” without meeting the methodological, ethical and accountability standards typically associated with rigorous scientific work. In doing so, the proposal weakens the principle of purpose limitation and creates new opportunities for large firms to repurpose data.
- The Package also delays certain obligations for high-risk AI systems and removes the requirement for public registration of some of these systems. These changes reduce transparency and weaken mechanisms for external scrutiny and oversight.
- The decision to downgrade AI literacy from a direct legal duty on AI providers and deployers to a matter of encouragement by the Commission and Member States risks hollowing out existing expectations around meaningful explanation and accountability. Without clear legal obligations, commitments to improving public understanding of AI systems may remain largely aspirational. Moreover, this proposal is concurrent with a lack of financial support for bodies capable of promoting AI literacy at the national level such as Safer Internet Centres.
Read the opening statement that will be presented to the committee below.
Publications
Type: Reports
Published in:
Authors: Tetyana Lokot and Eileen Culloty
Year: 2025
URL: Resource

