Workers Lose Jobs In Meta Smart Glasses Footage Controversy
Meta has terminated its contract with the outsourcing firm Sama, leaving more than 1,000 Kenyan workers without jobs after they revealed they had been reviewing highly sensitive footage captured by users of Meta’s AI-powered smart glasses. The episode raises fresh concerns about privacy, labour practices, and the hidden human layer behind AI.
What The Workers Reported Seeing
The controversy began in February when workers employed by Sama in Nairobi told Swedish newspapers that their role involved reviewing and labelling video footage captured by Meta’s Ray-Ban smart glasses. According to those accounts, the material included deeply private scenes, with one worker stating, “We see everything – from living rooms to naked bodies.”
The footage was reportedly not limited to staged or deliberately shared content. Instead, it reflected everyday life captured by wearable cameras, including people undressing, using the toilet, and handling sensitive personal information. The workers’ role was to annotate this material so that Meta’s AI systems could learn to interpret visual and contextual data more effectively.
Meta acknowledged that human review forms part of its AI training process, stating that “photos and videos are private to users” and that human reviewers are used to “improve product performance” with user consent. However, the scale and nature of the material described by workers has intensified scrutiny over how that consent is obtained and understood in practice.
Why Did Meta End The Contract?
Less than two months after the investigation was published, Meta moved to end its relationship with Sama, a US-based outsourcing company that provides data annotation services, employing workers to review and label images and video to train AI systems. The decision resulted in redundancy notices being issued to 1,108 workers with just days’ notice. Meta’s official explanation was that Sama “did not meet our standards,” although it did not specify which standards had been breached or when concerns were first identified.
Disputed By Sama
Sama has strongly disputed that characterisation, stating that it had “consistently met the operational, security and quality standards required” and had not been informed of any shortcomings before the contract was terminated.
The timing of the decision has prompted further questions. Labour groups and campaigners argue that the termination may have been linked to the workers speaking out rather than to performance issues. Naftali Wambalo of the Africa Tech Workers Movement suggested that the standards in question may relate less to quality than to confidentiality, describing them as “standards of secrecy,” a claim Meta has not publicly addressed.
The Human Layer Behind AI
The episode highlights a reality that is often overlooked in discussions about artificial intelligence. Before AI systems can recognise images, understand context, or respond to real-world inputs, large volumes of data must be manually labelled by human workers.
In this case, that process meant individuals in Kenya reviewing unfiltered footage captured by wearable devices used by people in entirely different parts of the world. The work sits at the intersection of privacy, labour rights, and technology development, with those carrying out the task often having limited visibility, protection, or influence over how the data is used.
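To make the annotation pipeline described above concrete, here is a minimal, purely illustrative sketch in Python. The record fields, consent flag, and filtering logic are hypothetical assumptions for illustration, not Meta’s or Sama’s actual system; the point is simply that captured media should only reach human reviewers after explicit consent and sensitivity checks:

```python
from dataclasses import dataclass

@dataclass
class CaptureRecord:
    # Hypothetical fields; real-world pipelines will differ.
    media_id: str
    user_consented: bool     # did the user opt in to human review?
    flagged_sensitive: bool  # pre-screened by an automated classifier

def queue_for_human_review(records):
    """Return only records eligible for human annotation:
    the user consented and nothing was pre-flagged as sensitive."""
    return [r for r in records if r.user_consented and not r.flagged_sensitive]

batch = [
    CaptureRecord("a1", user_consented=True,  flagged_sensitive=False),
    CaptureRecord("a2", user_consented=True,  flagged_sensitive=True),
    CaptureRecord("a3", user_consented=False, flagged_sensitive=False),
]

eligible = queue_for_human_review(batch)
print([r.media_id for r in eligible])  # ['a1']
```

Much of the controversy turns on whether checks of this kind exist, how the consent flag is obtained, and whether users understand what it permits.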
Not The First Time For Meta
This is not the first time Meta’s relationship with outsourced labour has come under scrutiny. Previous contracts involving content moderation have been linked to claims of psychological harm, low pay, and inadequate support, with some former workers reporting symptoms consistent with post-traumatic stress. Sama itself exited parts of that work in recent years, acknowledging the challenges involved.
Regulatory Pressure
The revelations have prompted regulatory attention in multiple jurisdictions. For example, the UK’s Information Commissioner’s Office described the reports as “concerning” and requested further information from Meta, while Kenya’s data protection authority has launched its own investigation into the handling of the footage.
Legal challenges are also emerging. A class action lawsuit in the United States alleges that Meta misrepresented the privacy protections of its smart glasses, while privacy groups in Europe continue to question how user data is processed and whether consent mechanisms meet regulatory standards.
The concern centres on a key distinction: while Meta’s policies may disclose that data can be used to train AI systems, it remains unclear how far users understand that their footage could be viewed by human reviewers, particularly when that footage includes sensitive or intimate situations.
What This Means For AI Development
The decision to end the Sama contract does not remove the need for human input in AI systems. Instead, it exposes the tension between rapid technological development and the practical realities of how that development is supported.
Training AI models at scale requires vast amounts of labelled data, and that requirement does not disappear as systems become more advanced. What changes is the level of scrutiny applied to how that data is collected, processed, and reviewed, particularly when it involves real-world human behaviour rather than curated datasets.
Smart glasses themselves represent a significant step forward in AI-enabled consumer devices, combining real-time image capture with on-device and cloud-based processing. However, their effectiveness depends on continuous learning, which in turn depends on the availability of human-labelled data.
What Does This Mean For Your Business?
This story illustrates how organisations adopting AI tools may need to look beyond the technology itself and consider the full data lifecycle, including how training data is sourced, handled, and reviewed, particularly where external providers or offshore teams are involved.
For UK businesses, this has clear implications for compliance and accountability. Under UK GDPR and data protection law, responsibility does not disappear when data is passed to a third party: organisations must be confident not only in how systems perform but also in how the underlying data is being processed, and by whom.
Reducing risk therefore means ensuring that suppliers and partners meet clear standards not only for technical performance but also for data governance, worker welfare, and transparency. Strong contractual controls, regular audits, and clear oversight of third-party processes are becoming essential, especially when sensitive or personal data is involved.
The broader lesson, and what may surprise many, is that AI systems are not purely automated but are built on human input at multiple stages, and any weakness in that chain can create reputational, legal, and ethical risk. Businesses that properly understand and manage that reality are far better placed to use AI responsibly while maintaining trust with customers, regulators, and stakeholders.