Real-Time Generative AI Is Coming to Snap's Phone AR Effects

One of the first companies to advance augmented reality and social media on phones is adding generative AI to the mix next. It looks like an early indication of where AR and VR are headed now that AI has become the buzz term of 2024.

Snap announced its new business strategy at the Augmented World Expo in Long Beach, California, a conference focused on the future of AR and VR technologies. Generative AI will show up in two ways: It'll power real-time AR experiences in the phone app, and it'll be used in developer tools to help make new AR experiences faster.

You might not use Snapchat, or maybe you don't use AR much at all. But the arrival of gen AI in AR feels like an inevitable move, and it's one we'll soon see in other apps and glasses, too.

Snapchat already has generative AI in its app: An AI chatbot launched last year, and now AI bots and assistants are everywhere. But the way Snap is using generative AI models to create 3D effects on the fly feels new, and it suggests how AR and VR might be ready to see more gen AI image creation in other apps.

While CNET hasn't seen any demos of the AI effects in AR lenses yet, the promise of conjuring up ideas on the fly raises questions about the expressive range of Snap's gen AI image creation tools. It also raises questions about how current AR lens developers — of which there are over 350,000, according to Snap — will be affected by these new creative tools.

While the tools could make AR effects easier to build for developers without coding experience, will there also be a point where people simply create AR effects for themselves using AI, without downloading or buying any developer-made lens effects at all?

Paige Piskin’s Zombie Girl Comic Lens, an existing AR lens that’s already using Snap’s gen AI creation tools.

Screenshot by Scott Stein/CNET

Plenty of people have already wondered about those types of disruptions from generative AI tools everywhere else, but they finally look ready to carry over into AR and VR apps and software, too.

The real-time aspect of the gen AI AR effects is the most interesting part of this. Creating generative AI effects usually involves some sort of wait. Will Snap's process feel instantaneous enough to be spontaneous, or are we still getting there?

Snap looks to be one of the first to dip into this space in AR. Apple's Vision Pro won't get Apple Intelligence, Apple's generative AI tools, when they arrive later this year. And while Meta's Ray-Ban glasses use generative AI for chats and to analyze photos, Meta's Quest VR headsets don't have gen AI in their OS yet. Meta CTO Andrew Bosworth told CNET earlier this year that the Quest could get asset-generating gen AI tools soon.

Snap's AR is currently phone-based, using cameras to add effects in real time on phone screens, but the company has been developing its own AR glasses for years. Like other companies such as Niantic, Snap might eventually make moves into mixed-reality VR headsets, too.

Scott Stein