


Online Chats with ChatGPT Remain Accessible Despite Attempts to Remove Them from Public View

Multiple conversations between individuals and ChatGPT have been found on the internet, despite attempts to delete them permanently.

In a recent incident, conversations between users and ChatGPT, the popular AI chatbot developed by OpenAI, were inadvertently made accessible to search engines such as Google. If tied back to individuals or organizations, the exposed chats could lead to reputational damage, competitive disadvantage, regulatory scrutiny or, depending on the jurisdiction, personal safety risks, according to San Francisco-based adviser Pradeep Sanyal.

The wider incident raises concerns about AI literacy among the rapidly growing number of people engaging with the technology. Barry Scannell, an AI law and policy partner, said he was shocked to see original ChatGPT conversations surfacing in routine Google searches. He compared the situation to the early days of electronic communication, when people were often careless about what they put in emails, a carelessness sometimes laid bare in legal discovery.

Examples of questionable conversations include an Italian lawyer discussing land acquisition for a hydroelectric facility in the Amazon, an Egyptian user seeking AI help to write critically about their country's authoritarian regime, and a researcher documenting academic fraud. Sanyal pointed to one exchange as especially concerning: a lawyer discussing a strategy to displace indigenous communities for the lowest possible price, a conversation with clear legal and ethical implications.

The incident occurred when thousands of ChatGPT users inadvertently made their conversations visible to search engines: ChatGPT's link-sharing feature included an opt-in setting that made shared chats discoverable, and therefore indexable, by search engines. The feature, which OpenAI described as a "short-lived experiment", was quickly disabled once the exposure came to light, and some 50,000 conversations were "scrubbed" by the company. OpenAI has attempted to erase over 100,000 indexed conversations in total, but some remain accessible through internet searches. The majority of the conversations, as reported by Digital Digging, are harmless, but some are not.
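How a shared chat ends up in search results comes down to standard web-crawling signals: if a public URL is reachable and nothing tells crawlers to stay away, it is fair game for indexing. The sketch below is a minimal illustration of those signals, not OpenAI's actual implementation; the share URL is a hypothetical placeholder. It checks the three things a search engine typically consults before indexing a page: robots.txt, the X-Robots-Tag response header, and a noindex meta tag.

```python
import requests
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def indexability_report(url: str) -> dict:
    """Summarize the standard signals a crawler consults before indexing a page."""
    parsed = urlparse(url)

    # 1. robots.txt: may a crawler fetch this path at all?
    rp = RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()
    crawl_allowed = rp.can_fetch("Googlebot", url)

    # 2. Page-level opt-outs: X-Robots-Tag header and <meta name="robots">.
    resp = requests.get(url, timeout=10)
    header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
    # Crude substring check; a real crawler parses the HTML properly.
    body = resp.text.lower()
    meta_noindex = 'name="robots"' in body and "noindex" in body

    return {
        "crawl_allowed_by_robots_txt": crawl_allowed,
        "noindex_via_header": header_noindex,
        "noindex_via_meta_tag": meta_noindex,
    }

if __name__ == "__main__":
    # Hypothetical share URL, for illustration only.
    print(indexability_report("https://example.com/share/abc123"))
```

A page that is crawl-allowed and carries no noindex signal can be indexed, and once indexed, taking the page down does not immediately remove the cached search result, which is why "scrubbing" required a separate effort.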

Potential implications for privacy and security include unintended or indefinite data exposure, leakage of sensitive personal or confidential information, inconsistent privacy treatment across platforms, security risks from public file URLs or third-party integrations, and regulatory compliance challenges. Users are advised to avoid submitting personally identifiable or sensitive data to AI platforms, to use privacy settings such as opting out of training data and incognito modes where available, to actively delete data when possible, and to keep abreast of evolving platform privacy policies and legal landscapes. Strong data lifecycle management, including access controls, encryption, backup and recovery, is critical to safeguarding privacy and security in AI data handling.
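The first of those recommendations, keeping personally identifiable data out of prompts, is easiest to enforce on the user's side before anything is transmitted. Below is a minimal sketch of that idea: a local redaction pass over a prompt using a few illustrative regular expressions. The pattern list and the redact function are hypothetical examples, nowhere near the coverage a production PII filter would need.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage
# (names, addresses, account numbers, free-form identifiers, ...).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before the text
    leaves the user's machine."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Email me at jane.doe@example.com or call +1 (555) 012-3456."
print(redact(prompt))
# -> Email me at [REDACTED-EMAIL] or call [REDACTED-PHONE].
```

The design point is simply that redaction happens locally: even if a chat is later shared, indexed or retained, only the placeholders persist.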

Scannell emphasized the importance of companies having clear processes and policies governing employees' use of AI technology. Data made publicly accessible by AI platforms typically remains available for a period ranging from around 24 hours to 30 days, although some services or use cases involve longer, or even indefinite, retention for legal reasons. Retention varies by platform: ChatGPT keeps temporary chats for about 30 days, with some data held longer for safety review; Google's AI services can retain data from 72 hours up to 18 months; and Perplexity AI holds data for roughly 24 hours. Enterprise and specialized security platforms, by contrast, retain data far longer: Microsoft Sentinel, for example, can keep logs in interactive storage for up to two years and in long-term, low-access storage for up to 12 years.

The Digital Digging article underscores how difficult it is to erase online content once it has been made public. The enduring issues raised by the incident highlight the nature of the information people are sharing with AI tools, some of it deeply commercially sensitive and some of it deeply personal. As AI continues to permeate our lives, it is crucial that users understand the risks and take steps to protect their privacy and security.

  1. The incident highlights concerns about AI literacy among users: some inadvertently shared confidential information through tools like ChatGPT, exposing themselves to risks such as reputational damage, legal consequences and security breaches.
  2. Protecting privacy and security requires individuals to understand the implications of using AI technology, including managing the data lifecycle, withholding sensitive data and reading platform privacy policies, and requires companies to establish clear processes and policies for employee AI use.
