Ray-Ban Meta smart glasses use artificial intelligence to see, hear and speak. What are they like?

In a sign that the tech industry keeps getting weirder, Meta plans to release a big software update soon that transforms its Ray-Ban glasses, best known as video-recording glasses, into a gadget seen only in sci-fi movies.

Next month, the glasses will be able to use new artificial intelligence software to see the real world and describe what you’re looking at, much like the AI assistant in the movie “Her.”

The glasses, which come in a wide range of frames starting at $300 and lenses starting at $17, have mainly been used for taking photos and videos and listening to music. But with the new AI software, they can be used to scan famous landmarks, translate languages and identify animal breeds and exotic fruits, among other things.

To use the AI software, users simply say “Hey, Meta,” followed by a prompt such as “Look and tell me what kind of dog that is.” The AI then responds in a computer-generated voice that plays through the glasses’ tiny speakers.
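The interaction pattern is worth making concrete: listen for a wake phrase, treat the rest of the utterance as a prompt, attach a camera frame when the prompt asks the assistant to “look,” and read the model’s answer aloud. Meta has not published how its software actually works, so the Python sketch below is purely illustrative, and every function in it is a hypothetical placeholder for the real speech, camera, model and speaker components.

```python
# A minimal, illustrative sketch of a wake-word assistant flow. Meta has
# not published its pipeline; every function here is a hypothetical
# placeholder standing in for on-device components (speech recognition,
# the camera, a vision-language model, text-to-speech).

WAKE_WORD = "hey meta"

def transcribe_microphone() -> str:
    """Placeholder: return one utterance transcribed from the microphone."""
    return "hey meta look and tell me what kind of dog that is"

def capture_photo() -> bytes:
    """Placeholder: grab a still frame from the glasses' camera."""
    return b"<jpeg bytes>"

def query_model(prompt: str, image: bytes | None) -> str:
    """Placeholder: send the prompt (and photo, if any) to an AI model."""
    return "A cute corgi is sitting on the ground with his tongue out."

def speak(answer: str) -> None:
    """Placeholder: play the answer through the glasses' speakers."""
    print(f"[speaker] {answer}")

def handle_utterance() -> None:
    utterance = transcribe_microphone().lower()
    if not utterance.startswith(WAKE_WORD):
        return  # the wake phrase gates everything else
    prompt = utterance[len(WAKE_WORD):].strip()
    # "Look and ..." prompts attach a photo; plain queries go without one.
    image = capture_photo() if prompt.startswith("look") else None
    speak(query_model(prompt, image))

if __name__ == "__main__":
    handle_utterance()
```

The detail worth noticing is that the wake phrase gates everything else: nothing is photographed or answered unless the transcribed utterance begins with the trigger.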

The concept of the AI software is so novel and bizarre that when we (Brian X. Chen, a technology columnist who reviewed the Ray-Bans last year, and Mike Isaac, who covers Meta and wears the smart glasses to produce a cooking show) heard about it, we really wanted to try it. Meta gave us early access to the update, and we have been testing the technology over the past few weeks.

We wore the glasses to the zoo, grocery stores and museums while peppering the AI with questions and requests.

The result: We were simultaneously amused by the virtual assistant’s blunders (it mistook a monkey for a giraffe, for instance) and impressed when it performed useful tasks like checking whether a package of cookies was gluten-free.

A Meta spokesperson said that because the technology is still new, the artificial intelligence won’t always get everything right, and that feedback would help improve the glasses over time.

Meta’s software also created transcripts of our questions and the AI’s answers, which we captured in screenshots. Here are the most memorable moments from our month of using Meta’s assistant.

BRIAN: Naturally, the very first thing I had to test Meta’s AI on was my corgi, Max. I looked at the plump pooch and asked, “Hey, Meta, what am I looking at?”

“A cute Corgi dog is sitting on the ground with his tongue sticking out,” the assistant said. That’s right, especially the part about being cute.

MIKE: Meta’s AI correctly identified my dog, Bruna, as a “black and brown Bernese Mountain Dog.” I half expected the AI software to think she was a bear, the animal her neighbors most often mistake her for.

BRIAN: After the AI correctly identified my dog, the logical next step was to try it on zoo animals. So I recently visited the Oakland Zoo in Oakland, California, where, for two hours, I gazed at over a dozen animals, including parrots, turtles, monkeys and zebras. I said, “Hey, Meta, look and tell me what animal that is.”

The AI was wrong more often than not, partly because many animals were caged and far away. Among its mistakes: it confused a primate with a giraffe, a duck with a turtle and a meerkat with a giant panda. On the other hand, I was impressed when the AI correctly identified a specific breed of parrot known as the blue-and-gold macaw, as well as zebras.

The strangest part of this experiment was speaking to an AI assistant around children and their parents. They pretended not to hear the only solo adult in the park as I seemingly muttered to myself.

MIKE: I also had a peculiar time grocery shopping. Being inside a Safeway and talking to myself was a bit awkward, so I tried to keep my voice low. I still got a few sideways looks.

When Meta’s AI worked, it was charming. I picked up a package of strange-looking Oreos and asked it to look at the packaging and tell me whether the cookies were gluten-free. (They were not.) It answered questions like these correctly about half the time, though I can’t say it saved any time compared with reading the label.

But the main reason I got these glasses in the first place was to start my own cooking show on Instagram, a flattering way of saying I record myself making food for the week while talking to myself. The glasses make that much easier to do than using a phone with one hand.

The AI assistant can also offer kitchen help. If I need to know how many teaspoons are in a tablespoon and my hands are covered in olive oil, for example, I can ask it to tell me. (There are three teaspoons in a tablespoon, just FYI.)
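Questions like that one come down to a lookup table and some multiplication. Here is a minimal sketch using standard US kitchen measures; the conversion table is ours, not anything taken from Meta’s software.

```python
# Minimal sketch of kitchen-unit conversion, the kind of arithmetic behind
# "how many teaspoons are in a tablespoon?" The conversion factors are
# standard US kitchen measures, not anything from Meta's software.
TEASPOONS_PER = {
    "teaspoon": 1,
    "tablespoon": 3,   # 3 teaspoons in a tablespoon
    "cup": 48,         # 16 tablespoons x 3 teaspoons each
}

def convert(amount: float, from_unit: str, to_unit: str) -> float:
    """Convert between kitchen units via a common base of teaspoons."""
    teaspoons = amount * TEASPOONS_PER[from_unit]
    return teaspoons / TEASPOONS_PER[to_unit]

print(convert(1, "tablespoon", "teaspoon"))  # 3.0
print(convert(0.5, "cup", "tablespoon"))     # 8.0
```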

But when I asked the AI to look at the handful of ingredients I had and come up with a recipe, it spat out rapid-fire instructions for an egg custard, which wasn’t very helpful for following directions at my own pace.

A few recipes to choose from might have been more useful, but that would have required some tweaks to the user interface, and maybe even a screen inside my lenses.

A Meta spokesperson said users can ask additional questions to get more detailed and useful answers from the assistant.

BRIAN: I went to the grocery store and bought the most exotic fruit I could find: a cherimoya, a scaly green fruit that looks like a dinosaur egg. When I gave Meta’s AI several chances to identify it, it made a different guess each time: a chocolate-covered pecan, a stone fruit, an apple and, finally, a durian. Close, but no banana.

MIKE: The new software’s ability to recognize landmarks and monuments seemed to be improving. Looking down a block in downtown San Francisco at a towering dome, Meta’s AI correctly responded, “City Hall.” That’s a neat trick, and perhaps helpful if you’re a tourist.

Other times were hit or miss. As I drove home from the city to Oakland, I asked Meta what bridge I was on while looking out the window in front of me (both hands on the wheel, of course). The first response was the Golden Gate Bridge, which was wrong. On the second try, it figured out I was on the Bay Bridge, which made me wonder whether it just needed a clearer shot of the tall white suspension towers on the newer span to get it right.

BRIAN: I visited the San Francisco Museum of Modern Art to see if Meta’s AI could do the job of a tour guide. After snapping photos of about two dozen paintings and asking the assistant to tell me about the piece of art I was looking at, the AI could describe the imagery and the media used to compose the art (which would be nice for an art history student), but it couldn’t identify the artist or the title. (A Meta spokesperson said a subsequent software update, released after my museum visit, improved this capability.)

After the update, I tried viewing images of more famous works of art, including the Mona Lisa, on my computer screen, and the AI correctly identified them.

BRIAN: At a Chinese restaurant, I pointed at a menu item written in Chinese and asked Meta to translate it into English, but the AI said it currently supports only English, Spanish, Italian, French and German. (I was surprised, because Mark Zuckerberg learned Mandarin.)

MIKE: It did do a pretty good job of translating a book title from English into German.

Meta’s AI-powered glasses offer an intriguing glimpse into a future that still feels far off. The flaws underscore the limits and challenges of designing this type of product. The glasses could probably do better at identifying zoo animals and fruit, for instance, if the camera had a higher resolution, but a nicer lens would add bulk. And no matter where we were, it was awkward to talk to a virtual assistant in public. It’s unclear whether that will ever feel normal.

But when it worked, it worked well, and we were delighted. The fact that Meta’s AI can do things like translate languages and identify landmarks through a pair of stylish-looking glasses shows just how far the technology has come.
