This article is the first in our three-part series focused on data privacy considerations related to the use of Artificial Intelligence (AI) and machine learning. This first article highlights privacy topics related to the collection of personal information via AI applications, transparency, and the challenges associated with regulating AI. Our second article will focus on considerations for customer-facing AI applications (i.e., external facing) whereas our third article will focus on data privacy topics related to employees’ use of AI (i.e., internal facing).
Many online companies utilize AI and machine learning technologies for targeted advertising, profiling, customer service, data analysis, and other routine data processing activities. AI processes data in real time and mimics human adaptability: it analyzes and predicts much as humans do, but at far greater scale. The use of these technologies, and the range of sectors deploying them, expands every day.
Economic and labor opportunities drive the growth of these technologies. A 2017 study by PricewaterhouseCoopers predicts that AI technology will drive a rapid rise in global gross domestic product (GDP) by 2030: "our research also shows that 45% of total economic gains by 2030 will come from product enhancements, stimulating consumer demand… this is because AI will drive greater product variety, with increased personalization, attractiveness and affordability over time."[1] This rapid growth raises concerns about the collection of data and the transparency of processing for consumers, vendors, and employees. AI technologies that collect or process personal information and personally identifiable information stand at the forefront of these concerns.
Data Privacy Implications
AI technology presents significant data privacy implications that need careful consideration. As AI systems become more advanced and pervasive across domains, they often rely on collecting and processing large amounts of personal data. This raises concerns about the potential misuse of, or unauthorized access to, personal information. Privacy implications arise at different stages of the AI lifecycle, including data collection, data storage, data processing, and AI systems' output or decision-making processes.
Collection of Personal Information
Data collection is fundamental to AI training, as algorithms require diverse and comprehensive datasets to learn from. However, collecting personal data introduces risks such as data breaches, unauthorized access, and the potential for re-identification of individuals. Providers of AI tools must establish robust data protection measures, including data anonymization and encryption, to ensure the privacy of individuals whose data is used in AI training.
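As a concrete illustration of one such measure, the sketch below shows pseudonymization via keyed hashing, one common building block of the anonymization mentioned above. This is a minimal example, not a complete anonymization program; the function name and the sample email address are hypothetical, and the only libraries used are from the Python standard library.

```python
import hashlib
import hmac
import os

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier (e.g., an email address) with a keyed hash.

    Unlike a plain hash, HMAC with a secret key resists dictionary attacks
    against the relatively small space of plausible identifiers.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The same input always maps to the same token, so records can still be
# joined for analysis without exposing the raw identifier.
key = os.urandom(32)  # in practice, a securely stored key, not a fresh one
token = pseudonymize("jane.doe@example.com", key)
```

Note that pseudonymized data can still be personal data under many privacy laws, since the key holder can re-link tokens to individuals; full anonymization requires further safeguards.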
AI Learning Models
Moreover, AI algorithms themselves can raise privacy concerns. For example, deep learning models can extract personal information from seemingly innocuous data inputs. A report from the Brookings Institution states, "As artificial intelligence evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests by raising analysis of personal information to new levels of power and speed."[2] This presents risks related to unintended inference, where AI systems may draw conclusions or make predictions about individuals that go beyond the explicit data provided. It becomes crucial to implement techniques like differential privacy to protect individuals' privacy while maintaining the utility and effectiveness of AI models.
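To make the differential privacy idea concrete, the sketch below implements the classic Laplace mechanism for a count query: calibrated random noise is added to the true answer so that no single individual's presence in the dataset materially changes the output. This is an illustrative sketch only; the function names and the epsilon value are assumptions for the example, not part of any specific product.

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) draw equals the difference of two independent
    # exponential draws with mean `scale`.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count of records matching `predicate`.

    A count query has sensitivity 1 (one person changes the count by at
    most 1), so Laplace noise with scale 1/epsilon yields epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of users under 30 in a hypothetical age list.
ages = [22, 35, 28, 41, 19, 53, 27, 30, 24, 46]
noisy = dp_count(ages, lambda age: age < 30)
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy, which is exactly the utility-versus-privacy trade-off the paragraph above describes.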
Transparency
Another privacy concern lies in the transparency and explainability of AI systems, many of which operate as "black boxes" whose internal decision-making is opaque even to their developers. This lack of transparency can be problematic, particularly when AI systems are used in highly regulated domains such as healthcare or finance. Efforts are underway to develop explainable AI methods that can shed light on the decision-making processes of AI systems, enabling individuals to understand and contest decisions that impact their privacy. A Harvard Business Review article examines the Transparency Paradox[3] and cites a Cornell study on the transparency risks of black-box models, which highlights the "potential negative impact of explaining machine learning models, in particular, it shows that offering model explanations may come at the cost of user privacy."[4] Providing transparency into machine learning algorithms may therefore introduce risk to consumers by creating exploitable information. Even so, transparency remains fundamental to data privacy: consumers who better understand how their data is used gain the opportunity for greater control over it.
Regulatory Challenges
Additionally, deploying AI technology must align with legal and regulatory frameworks, such as data protection and privacy laws. These frameworks set guidelines for collecting, processing, and storing personal data, and organizations need to comply with these regulations to safeguard individuals' privacy rights. Cameron Kerry of the Brookings Institution states that "the challenge for Congress is to pass privacy legislation that protects individuals against any adverse effects from the use of personal information in AI, but without unduly restricting AI development or ensnaring privacy legislation in complex social and political thickets."[5] Striking the right balance between utilizing the benefits of AI technology and protecting individuals' privacy is an ongoing challenge that requires close collaboration between technology developers, policymakers, and privacy advocates. Robust privacy measures, data protection strategies, explainability, and compliance with legal frameworks are vital for fostering trust in AI systems and ensuring the privacy rights of individuals are upheld in the era of AI.
[1] PricewaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
[2] Kerry, Cameron F. “Protecting Privacy in an AI-Driven World.” Brookings, 10 Feb. 2020, www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/.
[3] Burt, Andrew. “The AI Transparency Paradox.” Harvard Business Review, 13 Dec. 2019, hbr.org/2019/12/the-ai-transparency-paradox.
[4] Shokri, Reza, et al. “On the Privacy Risks of Model Explanations.” Feb. 2020.
[5] Kerry, Cameron F. “Protecting Privacy in an AI-Driven World.” Brookings, 10 Feb. 2020, www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/.
© Copyright 2023. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.