A new design dimension.
Orion is Meta’s first “true” augmented‑reality glasses, unveiled in September 2024 at Connect. They look like normal eyewear but project full‑color holograms through waveguide optics.
Strategic aim: replace the phone with always‑on, heads‑up computing that keeps you “present” instead of staring at rectangles.
How do we fit both novel AR experiences and everyday productivity into a device that looks good on your face?
When your whole world becomes a surface for digital content, and a neural interface lets you navigate that digital world seamlessly, a lot (A LOT) of design doors open.
Maybe a notification feels like a breeze on a warm day? Maybe we don’t use a model of “apps” at all, and instead delegate everything through your AI Assistant? Maybe your “AR Device” is actually your own body and mind, digitally augmented like some kind of software cyborg?
Back to Earth.
While these innovations were super fun, they ultimately made the experience more confusing without providing a ton of user value.
Familiarity and usability are important, so our visual design ultimately landed on the “safe” side: exciting, spatial, and novel, but also familiar, and it doesn’t require a PhD to use.
AI ❤️ AR
One aspect of the Orion demo I was really proud of was the AI Cooking experience. This was something I brought to life in my own kitchen late at night after a bolt of inspiration hit me. I would later livestream a presentation of this working in my kitchen to the entire Wearables org. (We all know how finicky Zoom can be on a good day. Now imagine streaming through an experimental AR Headset and a temperamental AI model – but it worked!).
While the demo as presented may appear, on the surface, to just make a smoothie, the concept of an AI seeing what you see and hearing what you hear runs deep. Just as our phones have become extensions of ourselves, AR glasses take this to a tremendous degree.
I was proud to find real user value here, all by sharing my world with an AI (and yes, I really made some of the recipes).
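If it helps to picture it, here is a rough sketch of the loop behind that idea, written purely for illustration: the class names, fields, and canned response below are hypothetical stand‑ins, not the actual Orion hardware or AI stack.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the glasses' camera/mic feeds and a multimodal
# model. None of these names come from the real Orion software stack.

@dataclass
class WorldSnapshot:
    frame_jpeg: bytes   # what the wearer is currently looking at
    transcript: str     # what the wearer just said

class MultimodalAssistant:
    """Placeholder for any vision+speech model with a chat-style interface."""

    def suggest(self, snapshot: WorldSnapshot, goal: str) -> str:
        # In a real demo this would call a hosted multimodal model;
        # here we return a canned response so the sketch runs on its own.
        return "I can see bananas, spinach, and oat milk -- blend for ~45 seconds."

def cooking_assist_step(assistant: MultimodalAssistant, snapshot: WorldSnapshot) -> str:
    """One turn of the 'AI sees what you see, hears what you hear' loop."""
    return assistant.suggest(
        snapshot,
        goal="help me make a smoothie with what's on the counter",
    )

if __name__ == "__main__":
    demo = WorldSnapshot(frame_jpeg=b"", transcript="What can I make with these?")
    print(cooking_assist_step(MultimodalAssistant(), demo))
```

The interesting design choice isn’t the model call itself, it’s that the assistant’s context is simply whatever you are looking at and saying in the moment, with no app to open or photo to take first.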
Interaction needs to be awesome
Early on, we had a problem: sure, virtual content looked great, but it was really hard to actually use. Watching leadership reach for an Oculus controller or other input during demos was our canary in the coal mine: if we were going to build the next generation of computing, we were going to need to define the next generation of interaction.
I started building all of my demos with our EMG input devices (which had only very recently graduated from research hardware to product).
Using such an early version of research hardware, I of course hit a lot of rough edges. But beyond the snags, we were building real design paradigms that worked well on this new generation of input devices. And when we brought these demos to leadership, rough edges and all, something amazing happened: they realized we were closer than we thought, and that the next generation of computing really could complement the next generation of input.
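To give a flavor of what a “design paradigm” on this kind of input can mean in practice, here is a toy sketch of the sort of abstraction a demo might sit on: discrete wristband gestures mapped onto UI intents. The gesture vocabulary and API below are entirely hypothetical, not the real EMG recognition stack.

```python
from enum import Enum, auto
from typing import Callable, Dict

# Hypothetical gesture vocabulary for an EMG wristband; the real device's
# event names and recognition pipeline are not public, so these are stand-ins.
class Gesture(Enum):
    PINCH = auto()         # thumb + index tap: "select"
    DOUBLE_PINCH = auto()  # quick double tap: "go back"
    THUMB_SWIPE = auto()   # thumb slide along the index finger: "scroll"

class InteractionLayer:
    """Maps low-level gesture events onto UI intents the interface understands."""

    def __init__(self) -> None:
        self._bindings: Dict[Gesture, Callable[[], None]] = {}

    def bind(self, gesture: Gesture, action: Callable[[], None]) -> None:
        self._bindings[gesture] = action

    def on_gesture(self, gesture: Gesture) -> None:
        # Unbound gestures are ignored rather than raised as errors, so an
        # accidental activation doesn't derail the experience mid-demo.
        action = self._bindings.get(gesture)
        if action:
            action()

if __name__ == "__main__":
    ui = InteractionLayer()
    ui.bind(Gesture.PINCH, lambda: print("select focused item"))
    ui.bind(Gesture.DOUBLE_PINCH, lambda: print("navigate back"))
    ui.on_gesture(Gesture.PINCH)  # -> "select focused item"
```

The point of a layer like this is that the interface designs against intents (select, back, scroll) rather than raw signals, which is what let the same demos keep working as the input hardware itself kept evolving.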