How Does the AI Act Relate to Data Protection Laws?
The General Data Protection Regulation (GDPR) regulates the handling of personal data within the European Union (EU) and extends its reach to companies outside the EU if they process data of individuals within the EU. Similarly, the EU AI Act will have global implications for organizations involved in AI systems. While the GDPR targets data controllers and processors, and the EU AI Act focuses on AI system providers and users, there are notable intersections between the two regulations that organizations need to navigate carefully.
One key area of overlap is the treatment of special category data, which includes sensitive information such as racial or ethnic origin, political opinions, and health data. The GDPR generally prohibits processing this type of data unless specific exceptions apply. Recent case law has clarified that special category data that can be inferred or deduced from other data must be protected to the same standard as data collected directly. In machine learning contexts, this means that proxy variables, i.e. features from which sensitive attributes can be inferred, may themselves count as special category data, bringing models that use them within the GDPR's scope.
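Whether a feature acts as a proxy is ultimately a statistical question. The snippet below is a minimal, purely illustrative way to screen numerically encoded features by correlating each against a known sensitive attribute; the column names, the 0.5 threshold, and the choice of Pearson correlation are assumptions for this example, not anything prescribed by the GDPR.

```python
# Illustrative sketch: flag features that may act as proxies for a
# sensitive attribute. Assumes numerically encoded columns.
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, sensitive_col: str,
                        threshold: float = 0.5) -> list[str]:
    """Return feature names whose absolute Pearson correlation with the
    sensitive attribute exceeds `threshold`, i.e. likely proxies."""
    sensitive = df[sensitive_col]
    proxies = []
    for col in df.columns:
        if col == sensitive_col:
            continue
        corr = df[col].corr(sensitive)  # Pearson correlation
        if pd.notna(corr) and abs(corr) >= threshold:
            proxies.append(col)
    return proxies

df = pd.DataFrame({
    "ethnicity_encoded": [0, 1, 0, 1, 1, 0],  # hypothetical sensitive attribute
    "postcode_band":     [0, 1, 0, 1, 1, 0],  # a perfect proxy in this toy data
    "basket_value":      [10, 10, 12, 12, 9, 9],
})
print(flag_proxy_features(df, "ethnicity_encoded"))  # ['postcode_band']
```

A flagged feature would then need to be handled under the same special category rules as the attribute it reveals.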
In contrast, the EU AI Act allows for exceptions to this prohibition, permitting the processing of special category data for purposes like monitoring and correcting biases in high-risk AI systems, provided appropriate safeguards are in place. This creates a potential conflict between the two regulations, as organizations may face differing requirements regarding the handling of sensitive data.
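To make the bias-monitoring exception concrete, the hedged sketch below computes one common fairness measure, the demographic parity gap, using a sensitive attribute. All names and data are illustrative, and the legally required safeguards (access controls, purpose limitation, deletion after use) are obligations this snippet does not implement.

```python
# Sketch of the kind of bias monitoring the AI Act exception targets:
# measuring the gap in positive-prediction rates between two groups.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray,
                           sensitive: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between the two
    groups encoded in `sensitive` (0/1). A large gap signals potential bias."""
    rate_a = predictions[sensitive == 0].mean()
    rate_b = predictions[sensitive == 1].mean()
    return abs(rate_a - rate_b)

preds = np.array([1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 1, 1, 1])  # the special category attribute
print(demographic_parity_gap(preds, group))  # ≈ 0.33
```

The tension is that computing such a metric requires processing the very data the GDPR restricts, which is why the AI Act's exception is conditioned on appropriate safeguards.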
To illustrate, consider the following example:
Imagine an AI system that uses purchasing behavior and online activity to predict an individual’s health condition. Even though the system doesn’t directly process health data, the predictions could reveal sensitive health information. Under GDPR, the organization must handle these predictions as if they were actual health data, ensuring all relevant data protection measures are in place.
In summary, under GDPR, both direct and inferred special category data must be protected. For AI and machine learning, this means ensuring that any data or predictions that can reveal sensitive information are managed in compliance with GDPR requirements. This creates additional obligations for organizations, increasing compliance costs.
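One hedged way to operationalise this in practice is to tag inferred outputs as special category at the point of creation, so that downstream systems apply the same controls as for directly collected health data. The sketch below is an illustration built on assumed names, not a prescribed pattern.

```python
# Purely illustrative: mark predictions that can reveal sensitive
# information and route them to restricted handling.
from dataclasses import dataclass

@dataclass(frozen=True)
class Prediction:
    subject_id: str
    label: str
    special_category: bool  # True if the output can reveal e.g. health data

restricted_store: list[Prediction] = []  # encrypted, access-controlled in practice
general_store: list[Prediction] = []

def store(pred: Prediction) -> None:
    # Inferred health information gets the same handling as collected health data.
    (restricted_store if pred.special_category else general_store).append(pred)

store(Prediction("user-42", "elevated_health_risk", special_category=True))
print(len(restricted_store))  # 1
```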
Additionally, the GDPR requires data controllers to conduct impact assessments when their processing activities present a high risk to individuals' rights and freedoms. This requirement, known as a Data Protection Impact Assessment (DPIA), is essential for identifying and mitigating potential risks related to personal data.
The EU AI Act acknowledges that providers of AI systems may not always be able to anticipate all potential uses of their technologies. Therefore, even if a provider assesses their system as not high-risk, users of the system might still need to perform their own impact assessments. This could result in differing risk management requirements under each regulation.
Given that many AI models process data protected under GDPR, organizations will likely need to conduct both DPIAs and AI Conformity Assessments. DPIAs are established practices for evaluating data protection risks and will be useful for compliance with the AI Act, despite differences in technical and documentation requirements between the two assessments. Even if an organization is not explicitly required to perform a DPIA, it is advisable to conduct both assessments. This approach ensures thorough risk management and adherence to both GDPR and the AI Act, addressing both data protection and AI-specific compliance issues effectively.
Lastly, the GDPR grants individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The EU AI Act similarly emphasizes the need for human oversight of high-risk AI systems. This means an AI system deemed low-risk under the AI Act might still engage in automated decision-making that falls under GDPR restrictions.
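As a purely hypothetical sketch of how an organization might avoid "solely automated" decisions where significant effects are at stake, the snippet below routes such decisions through a human reviewer. The 0.8 threshold and the callback are assumptions for illustration only.

```python
# Sketch: gate significant decisions behind human review so they are not
# based solely on automated processing.
from typing import Callable

def decide(score: float, significant_effect: bool,
           human_review: Callable[[str, float], str]) -> str:
    automated = "approve" if score >= 0.8 else "reject"
    if significant_effect:
        # Meaningful human review must be able to override the model;
        # a rubber stamp would not satisfy GDPR Article 22.
        return human_review(automated, score)
    return automated

decision = decide(0.65, significant_effect=True,
                  human_review=lambda outcome, s: outcome if outcome == "approve"
                  else "escalate_to_officer")
print(decision)  # escalate_to_officer
```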
Navigating these regulations requires careful consideration of how the GDPR and the EU AI Act interact, particularly in terms of data protection, risk assessment, and automated decision-making. In a data-driven world, these questions are unavoidable.