EU Offers Guidance on How AI Devs Can Obey Privacy Laws

EDPB Opinion on AI Models and GDPR Principles

Introduction

The European Data Protection Board (EDPB) has published an opinion addressing data protection in AI models. It covers how to assess whether an AI model is anonymous, the legal basis for processing personal data, and measures that tech companies operating in the bloc can take to mitigate the impact on data subjects.

The opinion was published in response to a request from Ireland’s Data Protection Commission, the lead supervisory authority under the GDPR for many multinationals.

What were the key points of the guidance?

The DPC sought more information about:

  1. When and how an AI model can be considered “anonymous”, meaning one that is very unlikely to identify the individuals whose data was used in its creation and is therefore exempt from privacy laws.
  2. When companies can say they have a “legitimate interest” in processing individuals’ data for AI models and, therefore, don’t need to seek their consent.
  3. The consequences of the unlawful processing of personal data in the development phase of an AI model.

When an AI model can be considered ‘anonymous’

An AI model can be considered anonymous if the chance that personal data used for training will be traced back to any individual — either directly or indirectly, as through a prompt — is deemed “insignificant.” Anonymity is assessed by supervisory authorities on a “case-by-case” basis and “a thorough evaluation of the likelihood of identification” is required.

However, the opinion does provide a list of ways that model developers might demonstrate anonymity (two of which are sketched in code after the list), including:

  • Taking steps during source selection to avoid or limit the collection of personal data, such as excluding irrelevant or inappropriate sources.
  • Implementing strong technical measures to prevent re-identification.
  • Ensuring data is sufficiently anonymised.
  • Applying data minimisation techniques to avoid unnecessary personal data.
  • Regularly assessing the risks of re-identification through testing and audits.
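
To make a couple of those measures more concrete, here is a minimal Python sketch of data minimisation plus a simple re-identification risk check over quasi-identifier combinations. The field names, records, and the k-anonymity-style grouping are illustrative assumptions, not anything prescribed by the EDPB opinion.

```python
# A minimal sketch (illustrative assumptions only, not prescribed by the EDPB)
# of two measures from the list above: data minimisation and a simple
# re-identification risk check over quasi-identifier combinations.
from collections import Counter

# Hypothetical training records; field names are made up for illustration.
records = [
    {"email": "a@example.com", "age": 34, "zip": "10115", "text": "..."},
    {"email": "b@example.com", "age": 34, "zip": "10115", "text": "..."},
    {"email": "c@example.com", "age": 51, "zip": "80331", "text": "..."},
]

NEEDED_FIELDS = {"age", "zip", "text"}  # data minimisation: keep only what training needs
QUASI_IDENTIFIERS = ("age", "zip")      # combinations that could still single someone out

minimised = [{k: v for k, v in r.items() if k in NEEDED_FIELDS} for r in records]

# Count how many records share each quasi-identifier combination; a group of
# size 1 is unique in the dataset and therefore easiest to trace to a person.
groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in minimised)
unique = sum(1 for count in groups.values() if count == 1)
print(f"{unique} of {len(groups)} quasi-identifier combinations are unique")
```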

Kathryn Wynn, a data protection lawyer from Pinsent Masons, said that these requirements would make it difficult for AI companies to claim anonymity.

When AI companies can process personal data without the individuals’ consent

The EDPB opinion outlines that AI companies can process personal data without consent under the “legitimate interest” basis if they can demonstrate that their interest, such as improving models or services, outweighs the individual’s rights and freedoms.

This is particularly important to tech firms, as seeking consent for the vast amounts of data used to train models is neither trivial nor economically viable. But to qualify, companies will need to pass these three tests:

  1. Legitimacy test: A lawful, legitimate reason for processing personal data must be identified.
  2. Necessity test: The data processing must be necessary for its purpose. There must be no less intrusive way of achieving the company’s goal, and the amount of data processed must be proportionate.
  3. Balancing test: The legitimate interest in the data processing must outweigh the impact on individuals’ rights and freedoms. This takes into account whether individuals would reasonably expect their data to be processed in this way, such as if they made it publicly available or have a relationship with the company.

Even if a company fails the balancing test, it may still not be required to obtain the data subjects’ consent if it applies mitigating measures to limit the processing’s impact. Such measures, two of which are sketched in code after the list, include:

  • Technical safeguards: Applying safeguards that reduce security risks, such as encryption.
  • Pseudonymisation: Replacing or removing identifiable information to prevent data from being linked to an individual.
  • Data masking: Substituting real personal data with fake data when actual content is not essential.
  • Mechanisms for data subjects to exercise their rights: Making it easy for individuals to exercise their data rights, such as opting out, requesting erasure, or making claims for data correction.
  • Transparency: Publicly disclosing data processing practices through media campaigns and transparency labels.
  • Web scraping-specific measures: Implementing restrictions to prevent unauthorised personal data scraping, such as offering an opt-out list to data subjects or excluding sensitive data.
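
As a rough illustration of two of these measures, the sketch below pseudonymises an identifier with a keyed hash and masks an email field with a placeholder. The key handling, field names, and record layout are assumptions made for the example, not part of the EDPB’s guidance.

```python
# A minimal sketch (assumptions only) of pseudonymisation and data masking.
import hashlib
import hmac

SECRET_KEY = b"store-and-rotate-separately"  # hypothetical key management

def pseudonymise(value: str) -> str:
    """Replace an identifier with a keyed hash so records stay linkable
    for processing without exposing the original value."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_email(_email: str) -> str:
    """Substitute a fake placeholder when the real value is not essential."""
    return "user@masked.invalid"

record = {"user_id": "alice.smith", "email": "alice@example.com", "plan": "pro"}
safe_record = {
    "user_id": pseudonymise(record["user_id"]),
    "email": mask_email(record["email"]),
    "plan": record["plan"],
}
print(safe_record)
```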

Consequences of unlawfully processing personal data in AI development

If a model is developed by processing data in a way that violates the GDPR, this will affect how the model is allowed to operate. The relevant supervisory authority evaluates “the circumstances of each individual case”, but the opinion gives examples of possible considerations:

  1. If the same company retains and processes personal data, the lawfulness of both the development and deployment phases must be assessed based on case specifics.
  2. If another firm processes personal data during deployment, the EDPB will consider whether that firm carried out an appropriate assessment of the model’s lawfulness beforehand.
  3. If the data is anonymised after the unlawful processing, subsequent processing of the non-personal data is not subject to the GDPR. However, any subsequent processing of personal data would still be subject to the regulation.

Why AI firms should pay attention to the guidance

The EDPB’s guidance is crucial for tech firms: although it is not legally binding, it influences how privacy laws are enforced in the EU.

Indeed, companies can be fined up to €20 million or 4% of their annual turnover — whichever is larger — for GDPR infringements. They might even be required to change how their AI models operate or delete them entirely.
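
As a quick worked example of that cap (the turnover figures below are made up):

```python
# The maximum fine is the larger of €20 million and 4% of annual turnover.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(300_000_000))    # 4% is €12M, so the €20M floor applies
print(max_gdpr_fine(2_000_000_000))  # 4% is €80M, which exceeds the €20M floor
```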

Conclusion

The EDPB’s opinion provides a comprehensive framework for AI companies to ensure the lawful processing of personal data. By understanding the key points of the guidance, AI firms can avoid potential legal issues and maintain the trust of their users.

FAQs

Q: What is the main purpose of the EDPB’s opinion on AI models?

A: The main purpose is to provide guidance on how AI companies can ensure the lawful processing of personal data in the development and deployment of AI models.

Q: What are the three tests that AI companies must pass to process personal data without consent?

A: The three tests are the legitimacy test, necessity test, and balancing test.

Q: What are the mitigating measures that AI companies can apply to limit the impact of processing personal data?

A: The mitigating measures include technical safeguards, pseudonymisation, data masking, mechanisms for data subjects to exercise their rights, transparency, and web scraping-specific measures.

Q: What are the consequences of unlawfully processing personal data in AI development?

A: The consequences include the impact on how the model will be allowed to operate, potential fines, and requirements to change or delete the model.
