Key Takeaways
- LinkedIn is set to resume using personal data from Hong Kong users to train its generative AI models starting November 3rd.
- Hong Kong’s Privacy Commissioner for Personal Data has urged users to review their privacy settings and understand the implications for AI training.
- LinkedIn has given assurances that data used for AI training will comply with local privacy laws and will not include private messages.
- Users wishing to opt out of data usage for AI training can do so via their account’s Data privacy settings.
LinkedIn Resumes AI Training with User Data, Prompts Privacy Watchdog Advisory
LinkedIn has announced its intention to resume using personal data from users to train its generative AI models, with the practice scheduled to begin on November 3rd. This move has prompted Hong Kong’s privacy watchdog to issue a reminder to LinkedIn users, urging them to review their privacy settings.
The Office of the Privacy Commissioner for Personal Data said LinkedIn has confirmed it will resume using personal data for AI model training. The advisory specifically urges users to pay attention to changes in LinkedIn’s privacy policy, particularly the sections covering the use of personal data to train generative AI, so that they can make an informed decision about consenting to such use.
LinkedIn announced its plans to use member profiles, posts, resumes, and public activity for AI model training in September 2025. The platform confirmed that data from members in the United Kingdom, the European Union, the European Economic Area, Switzerland, Canada, and Hong Kong would be included.
Hong Kong Privacy Watchdog’s Intervention and LinkedIn’s Commitments
Hong Kong’s privacy watchdog had previously intervened, halting LinkedIn’s use of this data in late 2024. That action followed concerns about the platform’s revised privacy policy and the default opt-in setting applied to Hong Kong users. Following the intervention, the watchdog engaged with LinkedIn from October 2024 to April 2025.
During these discussions, LinkedIn committed to ensuring that Hong Kong users would retain control over how their data is utilized for AI training. The company also assured that all such data processing activities would adhere to the Personal Data (Privacy) Ordinance. The data slated for AI training includes detailed information from user profiles and publicly shared content on LinkedIn. Importantly, LinkedIn has confirmed that private messages will not be included in this data pool. Users under the age of 18 will also be excluded from the AI training process.
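In data-engineering terms, these commitments amount to an eligibility filter applied before any record reaches a training pipeline. The sketch below is purely illustrative and is not LinkedIn’s actual implementation: the `MemberRecord` fields and the `is_training_eligible` helper are hypothetical, chosen only to mirror the stated rules (public content only, no private messages, no users under 18, and no one who has opted out).

```python
from dataclasses import dataclass

@dataclass
class MemberRecord:
    # Hypothetical record shape, used only to illustrate the stated policy.
    age: int
    content_type: str   # e.g. "profile", "post", "private_message"
    is_public: bool
    opted_out: bool     # the "Data for Generative AI Improvement" toggle

def is_training_eligible(record: MemberRecord) -> bool:
    """Mirror the stated commitments: exclude private messages,
    users under 18, anyone who opted out, and non-public content."""
    if record.opted_out:
        return False
    if record.age < 18:
        return False
    if record.content_type == "private_message":
        return False
    return record.is_public

# Example: a public post from an opted-in adult would pass the filter.
sample = MemberRecord(age=30, content_type="post", is_public=True, opted_out=False)
assert is_training_eligible(sample)
```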

For users who wish to opt out of their data being used for AI training, the privacy watchdog provided specific instructions. Users should navigate to the “Data privacy” section within their account settings. From there, they need to select “Data for Generative AI Improvement” to locate the relevant toggle switch. Users can then disable the option titled “Use my data for training content creation AI models.”
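For readers who prefer to verify the setting programmatically, the same path can be checked with a browser-automation tool. The sketch below uses Playwright as one possible approach; the settings URL and the saved `state.json` login session are assumptions, and only the toggle label comes from the opt-out path described above. It reads the toggle’s current state rather than changing it; flipping the switch manually remains the simpler route.

```python
# Minimal sketch, assuming Playwright is installed and a login session
# was previously saved to state.json. The settings URL is an assumption;
# only the toggle label is taken from the documented opt-out path.
from playwright.sync_api import sync_playwright

SETTINGS_URL = "https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement"  # assumed

with sync_playwright() as p:
    browser = p.chromium.launch()
    # Reuse a stored authenticated session rather than scripting a login.
    context = browser.new_context(storage_state="state.json")
    page = context.new_page()
    page.goto(SETTINGS_URL)
    toggle = page.get_by_role(
        "switch", name="Use my data for training content creation AI models"
    )
    print("Opted in to AI training:", toggle.is_checked())
    browser.close()
```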

The watchdog has affirmed its commitment to continuously monitor the situation to ensure the personal data privacy of Hong Kong users remains protected. This proactive stance mirrors a broader trend observed across various social media platforms. Notably, Meta previously resumed using user data for AI training on Facebook and Instagram following a regulatory review, as reported by Cryptopolitan.
The Growing Challenge of Training Data for AI Models
LinkedIn plans to share user data with its parent company, Microsoft, and Microsoft’s affiliates, leveraging Microsoft’s significant investments in AI, including its partnership with OpenAI, the developer of ChatGPT. This development occurs against a backdrop of growing concern about the availability of training data for AI models.
Leading figures in the tech industry have highlighted the potential scarcity of training data as a significant hurdle. Neema Raphael, the chief data officer and head of data engineering at Goldman Sachs, suggested that AI models like ChatGPT and Google’s Gemini may have already exhausted their primary training data. He warned that this limitation could impede the further development of artificial intelligence technologies.
The lack of new, high-quality data could compel AI companies to explore alternative training methodologies, potentially shifting focus towards more agentic AI systems. These autonomous systems are designed to make decisions and execute tasks online without direct human oversight, and are already under development by major AI firms.
While agentic AI offers advanced capabilities, such as real-time adaptation in cyber defense, it also presents potential risks, particularly regarding autonomous cyberattacks. The capacity for these systems to learn and evolve tactics could represent a significant challenge in cybersecurity.
Expert Summary
Hong Kong’s privacy authority has advised LinkedIn users to review their data usage settings as the platform prepares to resume training its AI models with personal data. LinkedIn has assured compliance with local privacy laws and provided an opt-out mechanism for users concerned about their data.
This situation highlights the ongoing challenge of data availability for training advanced AI models, a concern voiced by industry leaders and potentially signaling a shift in AI development strategies toward more autonomous systems.