Smart glasses version 2.0
When you talk about smart glasses, many investors quickly return to the 2014 Google Glass disappointment. For those of you who don’t remember, Google Glass was a pair of “smart” glasses with a small prism display that presented information in the user’s field of view, voice-command control, a built-in camera for taking photos and videos, and on-lens turn-by-turn directions. However, limited applications, a $1.5k price tag, and first-of-a-kind status saddled it with social stigma, earning early adopters the nickname “Glassholes”. The product was discontinued in 2015 with an estimated sub-100k units sold. In short, it failed because of its “creep” factor and lack of utility.
Despite that cautionary case study, Meta and Google appear to be all in on a next generation of smart glasses that combines fashion, functionality, and generative AI into a compelling form factor. I believe their optimism comes from recognizing that consumers gravitate toward easy-to-use tech, combined with greater confidence in what generative AI can add to the equation.
The catalyst here is generative AI, which has allowed us to retrieve information faster and more easily through multimodal, centralized search. GenAI is paving the way toward complex wearable Ambient Computing. In other words, the current goal is making glasses that look and feel like “normal glasses”, but with generative AI access via voice activation, cameras, and sensors. This means your glasses will understand the physical world around you and stand ready to provide information about it, answering questions like “what kind of plant am I looking at?” or “did you see where I left my keys?”.
As evidence of this shift, The Information reported that Meta is ending development of its Vision Pro competitor, codenamed La Jolla.
Here’s where I see Google and Meta standing on the smart glasses topic:
Google: Project Astra (announced May 2024) – Currently in development, Project Astra is Google’s vision for the future of AI assistants. It will be integrated into any Google device with a camera, most notably phones and smart glasses, allowing computer vision and generative AI to inform the user about the world around them. The applications range from doing math on a whiteboard to remembering someone’s name to getting directions. It’s unclear how much Google is spending annually on Project Astra development; that said, I estimate it to be in the $2-4B-per-year range.
Meta: Smart Glasses (released 2021) – While the initial version shipped without AI, the new Ray-Ban and Meta AI combination creates a product similar to what Project Astra is pursuing. You can use voice commands to interact with Meta AI to perform various tasks like sending messages, making calls, controlling media (including live streaming), and even getting information from the web. I estimate more than half of Reality Labs’ ~$20B in annual losses relates to smart glasses. In other words, Meta is putting its money where its beliefs are in terms of the future of wearable computing.
In short, we’ve gone from completely immersive digital worlds (VR), to adding digital elements to our real world (AR), to blending the digital and physical seamlessly (Spatial Computing), and finally, to making technology so intuitive and integrated that it works in the background to enhance our everyday lives (Ambient Computing).