Why Meta Is Being Sued Over AI Smart Glasses
Meta is facing a class action lawsuit in the United States over allegations that its AI-powered smart glasses collected and reviewed sensitive footage in ways users did not reasonably expect. The case raises new questions about privacy, transparency and the human labour behind modern AI systems.
Meta’s Ray-Ban Smart Glasses
The product at the centre of the controversy is Meta’s Ray-Ban smart glasses, developed in partnership with eyewear manufacturer EssilorLuxottica.
The glasses look similar to ordinary frames but include built-in cameras, microphones and an AI assistant that can take photos, record video, answer questions and analyse what the wearer is looking at. Users activate the assistant with a voice command such as “Hey Meta”.
The system works by sending captured data such as images, voice queries and video to Meta’s cloud infrastructure, where AI models interpret the information and generate responses.
These smart glasses are part of a growing category of wearable AI products designed to act as hands-free digital assistants integrated into everyday life.
What (Allegedly) Happened?
The legal case was filed in the US by two consumers who claim Meta misled customers about how the glasses handle personal data.
The lawsuit argues that Meta marketed the devices using statements such as “designed for privacy, controlled by you” and “built for your privacy”. According to the complaint, these claims gave users the impression that recordings captured through the glasses would remain private.
However, the case alleges that footage collected through the devices could be reviewed by human contractors involved in training Meta’s AI systems.
The complaint also names Luxottica of America, Meta’s manufacturing partner, and claims the companies violated consumer protection laws through misleading marketing.
The Investigation That Triggered The Case
The lawsuit follows an investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten.
Journalists interviewed workers at a Nairobi-based outsourcing company contracted to review data captured through the glasses. These workers act as data annotators, labelling images, video and transcripts so that AI systems can better understand real-world environments.
According to the investigation, the review queue sometimes included extremely private material captured by the glasses.
Workers said they encountered footage showing people undressing, using the toilet or engaging in intimate moments, alongside everyday scenes from homes and workplaces.
One worker described the scale of the material by saying: “We see everything – from living rooms to naked bodies.”
The Role Of Human Review
Human review is a common part of how AI systems are trained and improved.
When users interact with an AI assistant, some of those interactions may be reviewed by humans to check that the system is producing accurate results and responding appropriately.
Meta’s own AI terms state: “In some cases Meta will review your interactions with AIs… and this review may be automated or manual (human).”
According to the company, this process helps improve how the glasses interpret images, recognise objects and answer questions about the environment.
However, critics argue that users may not fully realise that recordings captured through wearable devices could enter a review pipeline involving human contractors.
Why Regulators Are Now Involved
The revelations have also drawn the attention of regulators here in the UK.
The UK’s Information Commissioner’s Office confirmed it is contacting Meta after the claims emerged. The regulator described the allegations as “concerning” and said organisations developing products that process personal data must clearly explain how that data is used.
A spokesperson said devices that collect personal data should “put users in control and provide appropriate transparency”, particularly where the data may be used to train artificial intelligence systems.
The issue also raises questions about international data transfers. The workers reviewing the footage were employed by a subcontractor in Kenya, meaning data could potentially be processed outside the jurisdictions where the glasses are sold.
What Meta Says
Meta says that media captured by the glasses normally stays on the user’s device unless it is shared with Meta services.
The company also says it uses filtering techniques, including face blurring, to reduce the risk of identifying individuals in reviewed material.
In a statement, the company said contractors may sometimes review content shared with Meta AI in order to improve the experience provided by the glasses.
Meta has also pointed to its privacy policies and terms of service, which describe the possibility of automated or human review of interactions with its AI systems.
Why The Issue Matters
The controversy highlights a broader challenge facing many AI-powered products.
While these systems are marketed as automated technology, they often depend on large networks of human workers who label and review data to train AI models.
This hidden workforce is essential to machine learning systems, yet its role is often invisible to consumers.
The Meta case also raises questions about how transparent companies should be when marketing devices that capture audio, video and environmental data throughout daily life.
As wearable AI becomes more common, the line between personal devices and surveillance technology may become harder to define.
What Does This Mean For Your Business?
For organisations adopting AI-enabled devices or platforms, the case highlights the importance of understanding how data is collected, processed and reviewed behind the scenes.
AI tools frequently rely on human review processes, particularly during training and quality assurance. Businesses deploying such technologies must consider whether users, customers or employees fully understand how their data might be used.
The case also demonstrates how privacy expectations can quickly become legal disputes when marketing claims appear to conflict with how systems actually operate.
For technology companies, the issue reinforces the need for clear communication about data practices. For organisations adopting AI tools, it underlines the importance of governance, transparency and careful evaluation of how AI systems handle sensitive information.