Trustworthy AI extends well beyond just privacy, spanning security and ethical considerations as well. But in order to get AI right, you need to first get data privacy right. Learn more about three common privacy pitfalls in AI adoption, and how you can avoid them.
Bex Evans
Senior Product Marketing Manager
June 10, 2024
Gartner® predicts that “By 2026, more than 80% of enterprises will have used generative artificial intelligence (GenAI) application programming interfaces (APIs) or models, and/or deployed GenAI-enabled applications in production environments, up from less than 5% in 2023.”1
At the same time, according to Gartner Hype Cycle Methodology, “Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.”2
Research from Forrester identifies data privacy and security concerns as the top barrier to generative AI adoption. The promotion of trustworthy AI extends well beyond the privacy domain, spanning security and ethical considerations as well. But in order to get AI right, you first need to get data privacy right.
That’s because AI is an amplifier of existing privacy gaps – a single misconfigured access point can get exponentially more problematic when it’s exposed to an AI system. When navigating the role of data privacy for AI systems, there are three privacy pitfalls to be aware of:

1. Purpose limitation: using personal data for purposes beyond those it was originally collected for
2. Data proportionality: collecting too little (or too much) sensitive data to keep AI systems both private and fair
3. Business continuity: honoring withdrawn consent when business-critical AI systems were trained on that data
Here, we’ll explore each pitfall in more detail and look at some common practices for addressing the risks associated with each.
A common scenario that illustrates the importance of responsible data usage is the collection of birth dates for identity verification. In contexts such as two-factor authentication or banking, verifying a person's identity is crucial. This process often involves collecting sensitive information – like their birth date.
However, possessing this data for identity verification does not automatically grant permission to use it for other purposes. For instance, if a marketing team wants to use birth dates to send out birthday promotions, they must first obtain explicit consent from the individuals involved. Without such consent, using birth dates for marketing is a violation of data privacy principles.
The advent of large language models (LLMs) and generative AI adds another layer of complexity to this issue.
Providing clear and detailed information upfront is essential for obtaining informed consent: individuals must understand, in plain language, how their data will be used before they can meaningfully agree to it.
One significant challenge organizations face is striking a balance between offering enough context and avoiding overwhelming individuals with lengthy terms and conditions that they might – and often do – simply skip. Effective communication in the audience's language is vital to ensure that consent is truly informed and not just a formality.
Consider the use of a resume scanning tool designed to streamline hiring practices. Historically, organizations might exclude sensitive information such as race, gender, and ethnicity from resumes to minimize privacy risks and limit exposure if the application were to experience a breach. However, excluding these data points can also prevent the identification and mitigation of bias within the hiring process.
Bias can persist even when sensitive data is omitted, as other indirect factors might contribute to biased outcomes. To accurately analyze and ensure fair representation in a dataset, it’s necessary to document and record sensitive data points. This allows for proactive monitoring of the system for fairness and the detection of potential biases.
A common challenge faced by organizations today is that they may not have collected race, gender, or ethnicity information initially due to privacy concerns or the potential discomfort it might cause applicants. Consequently, they lack the data needed to perform thorough fairness assessments.
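For teams that have collected these attributes responsibly, the fairness assessment itself can start simply. The sketch below is purely illustrative (hypothetical column names and data): it compares the rate at which each group advances past a resume screen, a check that is only possible because the sensitive attribute was recorded alongside the outcome.

```python
import pandas as pd

# Hypothetical screening outcomes; "gender" can only be analyzed here
# because it was deliberately recorded for fairness monitoring.
outcomes = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "M", "F"],
    "advanced": [1, 1, 0, 1, 0, 1, 1, 0],  # passed the resume screen
})

# Selection rate per group, and each group's rate relative to the best-off group.
# Ratios well below 0.8 (the rough "four-fifths" heuristic) warrant a closer look.
rates = outcomes.groupby("gender")["advanced"].mean()
impact_ratio = rates / rates.max()

print(rates)
print(impact_ratio)
```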
To address these challenges, privacy-enhancing technologies (PETs) can be employed. PETs such as differential privacy, synthetic data, homomorphic encryption, and multi-party computation help protect sensitive inputs while enabling the necessary analysis. These technologies allow for the safeguarding of individual privacy during data processing and model training.
However, it is important to recognize that there is no one-size-fits-all solution. The choice of PET depends on the specific use case and the infrastructure in place. In many scenarios, a combination of multiple PETs may be required to adequately protect privacy while maintaining the utility of the data.
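To make one of these techniques concrete, here is a minimal sketch of differential privacy, assuming a simple aggregate query over hiring data (the count and the epsilon value are illustrative, not recommendations): Laplace noise calibrated to the query’s sensitivity is added before the statistic is released, so no single individual’s record can be reverse-engineered from the output.

```python
import numpy as np

def noisy_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon.

    A smaller epsilon means stronger privacy and a noisier result.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g., how many applicants from a given group advanced to interviews
print(noisy_count(true_count=42, epsilon=0.5))
```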
For consent to be truly lawful, it must be freely given and able to be withdrawn at any time. This principle poses a significant challenge when a consumer requests the deletion of their data from a business-critical system. If those business-critical processes rely on AI systems trained on personal data, removing that data can disrupt business continuity.
AI models, much like human brains, can’t simply forget information once it has been learned. In practice, the most reliable remedy is to roll back to a previous version of the model, trained before the data in question was included, and then retrain the model without it.
This necessitates robust documentation on model versioning, dataset versioning, and detailed tracking of data categories and identifiers to ensure the data can be accurately removed.
The complexities associated with enforcing data governance and model retraining highlight the importance of thorough documentation and precise version control. This involves maintaining detailed logs of model versions, datasets, and the identifiers used to track individual data points. When a data subject revokes consent, these records allow for the targeted rollback and retraining of models.
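In practice, this bookkeeping can be as lightweight as a registry that maps each model version to the dataset version it was trained on and the data subject identifiers that dataset contained. The sketch below is a hypothetical illustration of that idea (not a prescribed schema or product functionality): given a subject who withdraws consent, it returns every model version that needs to be rolled back and retrained.

```python
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    model_id: str          # e.g. "resume-screener-v7"
    dataset_version: str   # e.g. "applicants-2024-05-01"
    subject_ids: set = field(default_factory=set)  # identifiers present in the training data

registry: list[ModelVersion] = [
    ModelVersion("resume-screener-v6", "applicants-2024-03-01", {"u-001", "u-002"}),
    ModelVersion("resume-screener-v7", "applicants-2024-05-01", {"u-001", "u-002", "u-003"}),
]

def models_affected_by_erasure(subject_id: str) -> list[str]:
    """Return every model version whose training data included this subject."""
    return [m.model_id for m in registry if subject_id in m.subject_ids]

# If u-003 withdraws consent, only v7 must be rolled back and retrained without that record.
print(models_affected_by_erasure("u-003"))  # ['resume-screener-v7']
```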
Given these data governance challenges, there is a compelling case for employing retrieval-augmented generation (RAG). RAG involves retrieving facts from an external knowledge base to ground LLMs in the most accurate and up-to-date information, and it offers a distinct privacy benefit: personal data stays in the knowledge base rather than in the model’s weights.

By utilizing RAG, organizations can maintain control over the data supplied at the point of the prompt rather than continuously retraining models. This approach helps ensure business continuity and compliance with data privacy regulations, even when individual data points are removed due to withdrawn consent.
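As a minimal sketch of that pattern (hypothetical data, with a naive keyword retriever standing in for a real vector store and embedding model), the personal data lives only in the retrievable knowledge base, so honoring a deletion request means removing the relevant documents rather than retraining anything.

```python
# Hypothetical, simplified RAG flow for illustration only.
knowledge_base = {
    "doc-1": "Refund policy: customers may return items within 30 days.",
    "doc-2": "Customer u-003 prefers email contact and lives in Austin.",  # personal data
}

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval standing in for vector similarity search."""
    scored = sorted(
        knowledge_base.values(),
        key=lambda doc: sum(word in doc.lower() for word in query.lower().split()),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # In a real system, this prompt would be sent to an LLM for generation.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Honoring a deletion request: remove the document; no model retraining is required.
knowledge_base.pop("doc-2", None)
print(build_prompt("What is the refund policy?"))
```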
Webinar
This webinar will explore the key privacy pitfalls organizations face when implementing GenAI, focusing on purpose limitation, data proportionality, and business continuity. Attendees will gain insights into how to navigate these challenges through strong data governance, version control, and detailed model documentation to ensure compliance and mitigate risks.