On March 31, OpenAI’s ChatGPT platform was temporarily limited by the Italian Data Protection Authority (IDPA) over concerns of unlawful data collection and a lack of age verification.
Data Collection Concerns
According to the agency, on March 20 ChatGPT suffered a data breach that exposed users’ conversations and the payment information of people who had purchased the paid version of the chatbot. Following the breach, the IDPA opened an inquiry and found that OpenAI had failed to provide adequate information to users or establish a proper legal basis for the large-scale collection and processing of personal data used to train its algorithms. The IDPA also found that ChatGPT’s processing could yield inaccurate personal data, with the chatbot sometimes providing information that did not match factual circumstances.
Absence of Age Verification
Another major concern raised by the IDPA was the absence of any age verification mechanism on ChatGPT, which exposes minors to responses inappropriate for their age. Although OpenAI’s terms of service state that the platform is intended for users aged 13 and above, nothing on the platform prevents younger users from accessing it or receiving age-inappropriate content.
Compliance Measures
OpenAI is not based in the European Union, but it has a designated representative in the European Economic Area. The company must notify the IDPA within 20 days of the measures it has taken to comply with the order, or face a fine of up to EUR 20 million or 4% of its total worldwide annual turnover, whichever is higher.
This decision by the IDPA highlights the importance of companies ensuring that their data processing practices are both lawful and ethical. It also serves as a reminder to users to be aware of the risks associated with providing personal information online.
Overall, the IDPA’s swift action aimed to protect Italian users’ privacy rights by limiting ChatGPT’s data processing until OpenAI demonstrates compliance with legal requirements. OpenAI must implement adequate data protection measures and age verification mechanisms to safeguard the privacy and safety of all its users.