Meta has just released a new pair of AI-powered smart glasses with Ray-Ban, and now we have an idea of what Meta plans to do with the images and videos captured by the glasses.

For those who don't know, Meta's AI-powered Ray-Bans have a camera built into the front of the frames. The camera can be used to take photos and video, but it can also be activated when the user initiates an AI feature by saying a keyword such as "look" and then asking Meta AI to analyze what the wearer is seeing and provide an answer. An example would be looking at a mountain and asking Meta AI for the name of that mountain and its height.
When prompted, the Meta Ray-Bans capture a series of images that are scanned by Meta AI, and the answer is read out loud to the wearer via the speakers. But what happens to those captured images? TechCrunch queried Meta on this and initially found the company cagey about how captured images and video are used, though it has since provided more clarity.
According to Meta policy communications manager Emil Vazquez, in an email to TechCrunch, "[I]n locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy."
Meta had previously told the publication that it does not use photos and videos captured on Ray-Ban Meta glasses for training purposes if the user chooses not to submit them to AI. However, once the user asks Meta AI to analyze any images or video, that content falls under a completely different set of policies, meaning it is eligible for training purposes.
What does this mean? Anyone wearing Meta's glasses is helping the company accrue a mountainous stockpile of data, and what is perhaps more troubling is that buyers of the glasses may not realize the images and videos they share with Meta AI are being used to build more sophisticated Meta AI models. According to Meta, the guidelines for using its Meta AI features are made clear within the Ray-Ban Meta user interface. Still, a fair criticism is that Meta itself has offered little public explanation of the potential privacy concerns, which should come with assurances and transparency for customers.
The introduction of smart glasses that record the world around the wearer, with those recordings then used for further AI training, also raises the problem of consent for the people around the wearer, especially when Meta's AI glasses have already been hotwired into a device capable of revealing the name, address, and phone number of any person they are pointed at.