
Data Privacy Considerations for Customer-Facing AI Technologies

This is the second article in our three-part series on data privacy considerations related to using Artificial Intelligence (AI) and machine learning. Our first article highlighted privacy topics related to the collection of personal information via AI applications, transparency, and the challenges associated with regulating AI. This second article focuses on considerations for customer-facing (i.e., external-facing) AI applications. Our third article will focus on privacy topics related to employees’ use of AI (i.e., internal-facing).

Customer-Facing AI Applications

In an era where customer-facing AI technologies are increasingly prevalent, safeguarding privacy has become a critical concern. Companies like OpenAI have been taking steps to address these privacy challenges and implementing privacy controls to protect user data. Regulatory authorities, such as the Italian Data Protection Authority (DPA), have also closely scrutinized AI technologies like ChatGPT for their privacy implications. This section examines the privacy controls implemented by OpenAI and discusses the Italian DPA's focus on ChatGPT, including the initial "ban" and its subsequent reversal. Together, these developments illustrate possible approaches to the customer-facing privacy implications of AI technologies, and underscore companies' inherent responsibility to provide privacy-compliant experiences to consumers.

OpenAI’s Enhanced Privacy Controls

OpenAI, a leading AI research laboratory, has recently introduced privacy controls aimed at enhancing user privacy and data protection. One key area of focus has been the retention period of user data. TechCrunch examines these practices, reporting that “the company [OpenAI] is implementing a 30-day data retention policy for API users with options for stricter retention ‘depending on user needs,’ and simplifying its terms and data ownership to make it clear that users own the input and output of the models.”[1] OpenAI plans to significantly reduce the retention period for data processed by its AI systems, including ChatGPT. By shortening the duration for which user data is stored, OpenAI aims to minimize the risks associated with prolonged data storage and reduce the chances of unauthorized access or data breaches.
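To make the mechanics of a fixed retention window concrete, the 30-day policy described above can be sketched as a simple purge routine. This is a hypothetical illustration only: the record schema (a `created_at` timestamp per record) and the function name are assumptions for the sketch, not OpenAI's actual implementation.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # retention window, per the policy described above


def purge_expired(records, now=None):
    """Return only the records still inside the retention window.

    `records` is assumed to be a list of dicts, each with a
    timezone-aware `created_at` datetime -- a hypothetical schema.
    Anything older than RETENTION_DAYS is dropped.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["created_at"] >= cutoff]
```

In practice a job like this would run on a schedule against the actual data store; the point of the sketch is simply that a shorter `RETENTION_DAYS` value directly shrinks the volume of stored personal data exposed to any breach.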

Additionally, OpenAI has introduced an incognito mode for its customer-facing AI technologies. This feature allows users to interact with AI systems while keeping their personal information anonymous and untraceable. It arose in response to scrutiny of AI models, which suggested that consumers lose control of their data when it is used to further train those models. By using incognito mode, users can engage with AI technologies like ChatGPT without their data being ingested into the training model, thereby preserving their privacy and reducing potential privacy concerns.

Italian Data Protection Authority

The Italian Data Protection Authority has been closely monitoring the privacy implications of AI technologies, particularly ChatGPT. In a notable development, the Italian DPA initially imposed a ban on the use of ChatGPT due to concerns about data protection and privacy. The decision stemmed from worries that ChatGPT may not adequately safeguard user data and could potentially infringe upon individuals' privacy rights.

However, the Italian DPA recently reversed this ban, allowing the use of ChatGPT in Italy again. The decision follows OpenAI's efforts to address the privacy concerns raised by the authority. The Italian DPA states: “The Authority expresses satisfaction with the measures taken and hopes that OpenAI, in the coming weeks, will comply with the further requests given to it with the same provision of 11 April with particular reference to the implementation of a system that verifies the age and the planning and implementation of a communication campaign aimed at informing all Italians of what happened and the possibility of opposing the use of their personal data for the purpose of algorithm training.”[2]

The related press release also details the steps that OpenAI has taken to increase the transparency, choice, and overall privacy of Italian and EU citizens. The collaboration between OpenAI and the Italian DPA demonstrates the significance of regulatory scrutiny and the importance of adopting robust privacy measures in AI technologies.

As governments collaborate on the interpretation of privacy jurisprudence, many privacy laws already offer practical tools, such as the General Data Protection Regulation's (GDPR) guidance on Data Protection Impact Assessments (DPIAs). A DPIA is an internal assessment in which a company evaluates its processing of personal data and the impact of that processing on individuals. DPIA-like assessments are also required under many of the emerging U.S. state privacy laws.

As AI technologies like ChatGPT become more prevalent in customer-facing applications, protecting user privacy is of paramount importance. OpenAI's implementation of privacy controls, such as shortened data retention periods and the introduction of incognito mode, reflects its commitment to safeguarding user data. The Italian DPA's initial ban and subsequent reversal highlight the regulatory scrutiny faced by AI technologies and the need for continuous improvement in privacy protection. By combining responsible data handling practices, enhanced privacy controls, and collaboration with regulatory authorities, companies can strike a balance between the potential of AI technologies and the protection of individual privacy rights.

 

[1] Wiggers, Kyle. “Addressing Criticism, OpenAI Will No Longer Use Customer Data to Train Its Models by Default.” TechCrunch, 1 Mar. 2023, techcrunch.com/2023/03/01/addressing-criticism-openai-will-no-longer-use-customer-data-to-train-its-models-by-default/. Accessed 18 May 2023.

[2] “ChatGPT: OpenAI Riapre La Piattaforma in Italia Garantendo Più Trasparenza E Più Diritti a Utenti E Non Utenti Europei” [ChatGPT: OpenAI Reopens the Platform in Italy, Guaranteeing More Transparency and More Rights to European Users and Non-Users]. www.gpdp.it, Apr. 2023, www.gpdp.it/home/docweb/-/docweb-display/docweb/9881490#english. Accessed 18 May 2023.

© Copyright 2023. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.

