Comparison of All of the New Smart Glasses for the Blind

By emassey, 30 October, 2025

Hello, here is a comparison of the latest camera-equipped smart glasses that have been released recently or will soon be available. There are a lot of options now, with many similarities but some significant differences between them, and all of them are pretty affordable, at between $300 and $800. I will be comparing the Ray-Ban Meta 2nd Generation, Oakley Meta HSTN, Oakley Meta Vanguard, Meta Ray-Ban Display, Agiga EchoVision, Envision Ally Solos, and Solos AirGo V and V2. I have not tried any of these glasses myself (although I use the original Ray-Ban Metas and pre-ordered the EchoVision), and information is limited since most of these either have not been released yet or were released very recently, so some details may turn out to be wrong. However, I think it is important to compare all of them side by side so people can decide which pair of smart glasses is best for them.

Regarding the Ally Solos Glasses and Solos AirGo

The Envision Ally Solos glasses are the same as the Solos AirGo V, except that they have slightly modified firmware with better audio cues to make the glasses more accessible and to allow the user to change the Bluetooth name of the glasses. Envision stated that they may make more changes to the firmware in the future, but not what these changes might be. Neither the Ally Solos nor the Solos AirGo have an AI assistant that runs on the glasses themselves; instead, they connect to an app on your phone, which receives pictures from the camera, sends them to the AI service, and plays responses back over Bluetooth audio. The Ally Solos connect to the Ally app, while Solos has its own app for the AirGo V that provides its own AI assistant. On Access On, Envision stated that if you have the standard Solos AirGo V glasses, you can use them with the Ally app, while on Double Tap they said that you cannot use regular Solos glasses with the Ally app. Since the Ally Solos are about double the price of the Solos AirGo V, it may be better to buy the AirGo V from Solos and use them with the Ally app, as I am not sure the firmware changes are worth the increased price. Envision went into more detail on Access On, so I would expect that information to be more accurate, unless something changed between the two interviews.

Another consideration to be aware of is that the Solos AirGo V2 will be coming out near the end of this year, and they are a significant upgrade over the original AirGo V. It may be better to wait until those come out and see whether they work with the Ally app, since they are likely compatible with the protocol the Solos AirGo V uses. It is also possible that Envision may upgrade their Ally Solos to the AirGo V2, although they have not said anything about this yet.
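
To make the phone-app architecture described above more concrete, here is a rough Kotlin sketch of the kind of loop the phone app has to run. Every name in it is invented for illustration; this is not Envision's or Solos' actual code or API, just the general shape of a setup where the glasses only provide a camera, microphones, and speakers, and the phone app plus a cloud AI service do all the real work.

// Hypothetical sketch of the phone-side pipeline for camera glasses like the
// Ally Solos or Solos AirGo V. All names are made up for illustration.

// Stand-in for the Bluetooth link to the glasses.
interface GlassesLink {
    fun nextQuestionAudio(): ByteArray   // microphone audio captured on the glasses
    fun capturePhoto(): ByteArray        // JPEG from the glasses camera
    fun playAudio(speech: ByteArray)     // spoken answer played on the glasses speakers
}

// Stand-in for whatever cloud AI service the app talks to.
interface VisionAssistant {
    fun transcribe(audio: ByteArray): String
    fun describe(question: String, photo: ByteArray): String
    fun synthesize(text: String): ByteArray
}

// The glasses themselves do nothing "smart": the phone app receives the audio
// and picture, sends them to the AI, and streams the spoken answer back.
fun runAssistantLoop(link: GlassesLink, ai: VisionAssistant) {
    while (true) {
        val question = ai.transcribe(link.nextQuestionAudio())
        val photo = link.capturePhoto()
        val answer = ai.describe(question, photo)
        link.playAudio(ai.synthesize(answer))
    }
}

The practical upshot is that response speed and description quality depend almost entirely on the app, the AI service it uses, and your phone's internet connection, not on the glasses hardware.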

Similarities

All of these glasses have speakers and microphones, and support Bluetooth audio, so you can connect your phone to them and hear your screen reader through them, as well as audiobooks, GPS apps, etc. You can also make phone calls using the microphones on the glasses. All of these glasses have one or two cameras, and can use AI models to describe what the camera can see. The primary way of interacting with these AI models is by voice, asking a question via the glasses microphones and hearing a response from the speakers. All of these glasses either require being connected to a phone to use them, or at least require a phone for setting them up, and they all support both iOS and Android.

Cameras

All of the Meta glasses have a 12 MP camera (3024 by 4032 pixels), and can capture 3K video at 30 FPS or 1080p video at 60 FPS. The Ray-Ban Meta 2nd Generation, Oakley Meta HSTN, and Meta Ray-Ban Display all have a 100 degree field of view, while the Oakley Meta Vanguard have a 122 degree field of view, meaning that the camera on the Oakley Meta Vanguard can see a wider view of what is in front of you. The camera on the Oakley Meta Vanguard is also in the center, while on the other Meta glasses it is on the left, although the camera is angled so that its view is still centered. The Agiga EchoVision have a 13 MP wide-angle camera in the center, and Agiga has said that the view is 50% wider than the Meta glasses, probably meaning about a 150 degree field of view. The Ally Solos and Solos AirGo V have two 5 MP cameras (2048 by 1944 pixels). I am not sure how the two cameras are used together, but multiple sources claim that the Solos AirGo V have two cameras, so perhaps Envision either sends two pictures to the AI on each request or the glasses combine them somehow. The Solos AirGo V2 will have one 16 MP camera.

Battery and Charging

The Ray-Ban Meta 2nd Generation and Oakley Meta HSTN have 8 hours of battery life with normal use, plus 48 more hours with the charging case. They can be charged to 50% in 20 minutes in the case. The case is the only way to charge the glasses, and it is charged with USB-C. You can fully charge the case in 3.5 hours. The Oakley Meta Vanguard have 9 hours of battery life with normal use, and the charging case provides 36 more hours on a full charge. When continuously playing audio, the glasses last 6 hours. You can fully charge the glasses from the case in 75 minutes. The Meta Ray-Ban Display have 6 hours of battery life with the case providing 24 more hours, and the neural band has 18 hours of battery life. However, the amount of battery life you get will depend on how you are using the glasses; if you continuously record video, or continuously stream video to your phone for Be My Eyes, Aira, or Meta live AI, you may get as little as 40 minutes to an hour from any of these Meta glasses.

The Agiga EchoVision Pioneers Edition will have 6 hours of battery life with audio playback and average AI usage, although if you use live AI continuously, you could get as little as 30 minutes to an hour. The full release edition will have significantly better battery life (at least 50% greater battery capacity). The glasses will come with a charging case similar to the Meta glasses, although you can also charge them over USB-C even while you are wearing them, and there is a neck band with a battery that you can buy to charge them while wearing them.

The Ally Solos and Solos AirGo V have 15 or 16 hours of battery life. They charge using USB-C with no charging case, and they fully charge in 1.5 hours; you can get 3 hours of battery life from charging them for 15 minutes. If you continuously play audio the glasses will last 10 hours, while a continuous phone call will cause them to last 7 hours. I'm not sure which scenario most resembles continuously using Ally for the entire time. The battery life of the Solos AirGo V2 is not yet known.

AI Image Description

Meta AI usually responds in less than a second, although its descriptions are significantly less detailed than Be My AI's. However, there is a setting under accessibility that enables more detailed descriptions, and this makes the descriptions better, although they are still not as long and detailed as Be My AI's. You can also ask any questions you like about the image. When using Meta AI, I rarely notice any hallucinations or inaccurate descriptions. I have not tried reading documents with the glasses, but I have heard that by default it will summarize documents instead of reading all of the text, although you can ask it to read the entire document and that should work. Usually, to use Meta AI, you say "Hey Meta" and then ask a question, such as "What am I looking at" or "What doors do you see". There is also a live AI mode where you can have a continuous conversation with the AI without saying "Hey Meta", although it cannot continuously monitor the camera, and will only take a new picture when you ask it something. Live AI also often overhears and responds to what other people are saying, or what you say to others, although you can pause it by tapping the touch pad.

The Agiga AI gives very detailed descriptions, and specifically gives very detailed descriptions of people, much more detailed than how Meta AI describes them. The Agiga glasses can also read entire documents using OCR, I'm pretty sure using a custom model specifically for this. You can trigger the AI either by saying a wake word and asking a question, or by pressing one of the buttons on the glasses. There is also a live AI mode that can continuously describe what the camera sees without you having to say anything, and I think you can also ask questions in the live AI mode without saying the wake word. You can also mute the microphone on the glasses in live AI mode so the AI will not respond to your conversations.

Envision Ally can give very detailed descriptions, and you can customize how detailed its descriptions are as well as its personality. You can also give it custom prompts to tell it more information or have it respond in a certain way. Ally also uses specialized models for reading text and some other tasks. Ally works in two ways: you can call Ally, which starts a voice conversation where it is always listening for questions, or you can message Ally, sending text questions and getting text responses back. Both should work with the glasses. With Ally Pro, you can create shortcuts, where you type a prompt and choose whether to call or message Ally and which personality to use; you can trigger these shortcuts from the start screen of the Ally app, from Apple Shortcuts or Siri, and perhaps with a button on the glasses, although I am not certain about that last one. I do not think Ally has a wake word. Agiga AI and Ally both seem to have pretty good response times, and they have been improving, although they probably do not respond as quickly as Meta AI.

Third-Party Services

On the Meta glasses, you can make video calls with WhatsApp, Facebook Messenger, and Be My Eyes, and you can make Aira calls through WhatsApp. You can also send and read messages using Meta AI through WhatsApp, Facebook Messenger, Instagram, and your phone's messaging app. There is an AI service from NOA that you can use through WhatsApp, and if you send a picture or video to it from the glasses, it will send a very detailed description back. You can also ask questions about it or send audio messages. The Meta glasses will often read only part of the message, and it can take around 15 seconds or more for the whole process, although having WhatsApp open on your phone makes it significantly faster and also lets you read the full responses quickly from your phone. The NOA AI service was designed for the blind and to be good at tasks useful for orientation and mobility, like having a good understanding of left and right and finding things like doors and crosswalks. PiccyBot also has a WhatsApp AI service, where it will describe images or videos and you can ask questions as well. It will respond with a voice message, meaning that the Meta glasses should play the entire response. Meta is also creating an SDK for their glasses, where apps running on your phone can stream pictures and video from the camera, and the Seeing AI app will be adding support for them. Hopefully other AI and navigation apps for the blind that use the camera will add support as well. HumanWare is also writing a navigation app for the Meta glasses. Meta AI can also integrate with your calendar, with Audible, with several music services, and with some fitness tracking services like Garmin and Strava, at least on the Oakley Vanguard.

The Agiga glasses support Be My Eyes and Aira directly from the glasses, not using your phone or WhatsApp as an intermediary. The Agiga glasses will also support finding public transportation routes and describing them using the Transit API, although it will not do live bus tracking or give you directions along the way; it is meant more for choosing a route and preparing before you leave.

The Ally Solos do not claim support for any third-party services right now, although Solos sells an SDK that lets developers use the camera on their glasses from their apps for $1,999. Envision has also said that they plan to try to add support for services such as Aira and Be My Eyes and possibly others, with Aira being the most certain, I think. Ally can integrate with your calendar and has access to weather data for your current location.

Prices and Availability

The Ray-Ban Meta 2nd Generation start at $379, the Oakley Meta HSTN start at $399, the Oakley Meta Vanguard start at $499, and the Meta Ray-Ban Display start at $799. All of these glasses come in different styles and colors and with different lens types.

The Agiga glasses are not available yet, but can be pre-ordered for $599. When the glasses come out there will be a subscription for the AI, but if you pre-order you will get a free lifetime subscription. The Pioneers Edition ships on October 29, but registration for it closed on October 10. The full release will ship near the end of the year. The price of the glasses after the pre-order period ends, and the price of the subscription, are unknown.

The Ally Solos can be pre-ordered for $599 for a while longer, and the full price will be $699. If you pre-order you will get a free year of Ally Pro (a $200 value). They will start shipping sometime in October. Ally Pro costs $18 per month or $180 per year, although when I checked today it gave me an introductory offer for 50% off either option, and perhaps this is available for all first-time subscribers. Besides shortcuts, Ally Pro gives you longer conversations (the free plan has a limit of 10 minutes per conversation) and the ability to create your own personalities in addition to using and customizing the default ones. Envision has also said more Ally Pro features may be coming soon. The Solos AirGo V cost $299, and the price for the V2 is not known, although I would expect them to cost at most $100 or $200 more. They should come out around the end of the year.

The NOA AI that you can use over WhatsApp is free for 15 questions per week, or €9.99 per month for unlimited queries. You can use PiccyBot for free, although a subscription gets you access to more AI models and more customization. The subscription is either $2.99 per month, $14.99 per year, or $24.99 for a lifetime subscription.

Other Notes

The Meta glasses also allow you to take pictures and videos and transfer them to your phone. Envision has said this feature is also coming to the Ally Solos, although likely only for pictures since the Solos AirGo V cannot stream video. I am not sure whether the Agiga glasses can do this. The Meta Ray-Ban Display will have a screen reader, and will allow you to send and read messages without using your voice, by navigating the interface using gestures with the neural band. I am not sure what other useful things the Display glasses will let you do. Meta has said navigation features are coming, but I am not sure how useful they will be for the blind. On all of the Meta glasses, the assistant runs on the glasses themselves, although they use the connection with your phone to get an internet connection for AI. However, the voice recording, recognition, and playback are done on the glasses themselves, not through the phone. When you make a call, whether a phone call or a video call with one of the supported services, the glasses stream audio and/or video to your phone and the phone handles the call. I am not sure which device handles sending and receiving messages with WhatsApp, though.

The Agiga glasses are completely standalone. They connect to WiFi and handle AI requests and calls themselves without going through your phone at all. There will be an app to connect the glasses to WiFi and change settings, but this is all the app will do. The Agiga glasses also run Android (Android 11 or 12, I think), opening up many possibilities for the kinds of software they can run.

As far as I can tell, the Ally Solos and Solos AirGo V and V2 do nothing by themselves except send audio and pictures to your phone, where they are processed by the Ally app, the Solos app, or any other app that uses their SDK.

Comments

By MarkSarch on Thursday, October 30, 2025 - 21:05

Google Android XR glasses are also coming sooner rather than later; many sources mention the first quarter of 2026.
Google Android XR glasses will probably be released before the Agiga EchoVision glasses.
I will paste below some information showing the differences between Meta's Wearables Device Access Toolkit and Google Android XR.

Meta's Wearables Device Access Toolkit is a software development kit (SDK) that lets mobile apps access the camera and audio features of Meta's smart glasses. In contrast, Google's Android XR is a complete, standalone operating system designed to run native applications on mixed-reality (XR) headsets and smart glasses from various manufacturers.
Comparison of key differences

Platform type
• Meta Wearables Device Access Toolkit: a toolset (SDK) to extend mobile app functions to wearables.
• Google Android XR: a full, independent operating system for XR devices.

Where code runs
• Meta Wearables Device Access Toolkit: the main application logic runs on a connected Android or iOS phone.
• Google Android XR: applications run directly on the XR headset or glasses.

Hardware compatibility
• Meta Wearables Device Access Toolkit: works with Meta's family of AI glasses, including the Ray-Ban Meta glasses and, in the future, the Meta Ray-Ban Display.
• Google Android XR: designed for a wide range of devices from hardware partners like Samsung (Project Moohan headset), Xreal (Project Aura glasses), and Lynx.

Artificial intelligence
• Meta Wearables Device Access Toolkit: developers can access Meta's AI through existing voice commands but cannot create custom ones.
• Google Android XR: features deep, native integration with Google's Gemini AI for a richer, more conversational assistant experience.

Display functionality
• Meta Wearables Device Access Toolkit: currently does not offer developers access to display functions, even on glasses that have an in-lens display.
• Google Android XR: built to support devices with in-lens displays for heads-up information, maps, and notifications.

In-depth explanation 
Meta's strategy: Mobile-first integration 
Meta's approach uses its AI glasses as an extension of a phone. The toolkit creates a secure session between a user's mobile app and smart glasses. This allows the app to use the glasses' camera and audio for hands-free experiences. 
• Example use cases: A mobile app like Twitch could use the toolkit to let users livestream their first-person perspective, or a golf app could provide real-time yardage information without the user having to pull out their phone.
• Strengths: This strategy requires less computational power on the wearable, allowing for a lighter, more fashionable form factor like the Ray-Ban Meta glasses.
• Weaknesses: The experience is connected to and depends on the mobile app, which limits the wearable's ability to act as a fully independent computing device.
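
To show what this mobile-first model looks like in practice, here is a minimal Kotlin sketch. The names are invented for illustration and are not the real Wearables Device Access Toolkit API; the point is only that all of the logic runs in a phone app, with the glasses acting as a remote camera and microphone.

// Illustrative only: invented names, not the actual toolkit API.

// Stand-ins for the session the toolkit negotiates with the paired glasses.
interface WearableSession {
    fun startCameraStream(onFrame: (ByteArray) -> Unit)
    fun close()
}

interface WearableToolkit {
    fun requestSession(onReady: (WearableSession) -> Unit)
}

// A phone app opens a secure session with the glasses, then forwards each
// first-person camera frame to whatever backend it likes (a livestream, an
// AI describer, a navigation service, and so on).
fun streamFirstPersonVideo(toolkit: WearableToolkit, sendFrame: (ByteArray) -> Unit) {
    toolkit.requestSession { session ->
        session.startCameraStream { frame -> sendFrame(frame) }
    }
}

In an Android XR app, by contrast, the equivalent logic would run natively on the headset or glasses themselves, with no phone in the loop.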

Google's strategy: Native XR ecosystem 
Android XR is Google's effort to create an open ecosystem for extended reality devices. It is designed to be a complete operating system for headsets and glasses. 
• Example use cases: Users can fill their physical space with virtual app screens, get information from Gemini based on what they are looking at, and use Google apps like Maps and YouTube in an immersive, 3D environment.
• Strengths: As a native OS, it offers deeper functionality, including full AR and VR experiences, and is more independent from a paired phone. It also attracts a large existing developer community familiar with Android tools.
• Weaknesses: The devices are more complex, potentially requiring more computational power and leading to bulkier designs, at least initially. 
Which is right for developers? 
The choice depends on the developer's goal. 
• Choose Meta's Wearables Device Access Toolkit if the goal is to add a simple, hands-free extension to an existing mobile app using the camera and audio of Meta's hardware.
• Choose Google's Android XR if the goal is to build a rich, native, and independent XR application for a wider variety of XR hardware and have access to deep OS and AI features. 
Here is some information from Google.
Android XR accessibility overview
https://support.google.com/android-xr/answer/16659361
Also here how to use TalkBack on the android XR devices
https://support.google.com/android-xr/answer/16659712?sjid=4913087507726418488-NC

Use TalkBack on your Android XR devices
For hands-off device control, you can use TalkBack, a Google screen reader. This is included on Android XR devices.

Learn about TalkBack
TalkBack helps people who can't see on-screen content to use their Android XR devices.

When TalkBack is on:

A box outlines what content is on the screen.
To describe the content on the screen, your device speaks it aloud and plays sounds.
Activate TalkBack shortcut when you set up your device
When you set up your Android XR device, to turn on TalkBack, press and hold both Volume up and Volume down. After setup, to turn TalkBack on or off, you can use the volume buttons as a shortcut.

If you no longer want to use the volume button as a TalkBack shortcut, you can turn it off.

On your Android XR device, open the Settings app.
Select Settings and then Accessibility and then TalkBack.
Turn off TalkBack shortcut.
Learn how to use accessibility shortcuts.

Turn on TalkBack after you set up your device
You can turn on TalkBack through the device’s Settings app, shortcuts, and Gemini.

Use device settings
On your Android XR device, open the Settings app.
On the left, scroll to “Accessibility.”
Select Accessibility and then TalkBack and then Turn on TalkBack.
Use shortcuts
You can use a special button on your screen or press device buttons to turn TalkBack on or off.

On your Android XR device, open the Settings app.
On the left, scroll to “Accessibility.”
Select Accessibility and then TalkBack and then TalkBack shortcut.
Turn on TalkBack shortcut.
Once the shortcut is on, you can turn TalkBack on or off with any of these actions:
Tap Accessibility button
Press Top and Volume up buttons
Press and hold Volume up and down buttons for 3 seconds
Use voice with Gemini
If you have set up Gemini, you can just talk to your device.

Say “Hey Gemini.”
To turn TalkBack on or off, say “Turn on TalkBack” or “Turn off TalkBack.”
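
A developer-side aside: on standard Android, an app can check whether a spoken-feedback service such as TalkBack is enabled through the AccessibilityManager framework API. I am assuming the same API is available on Android XR, since it is an Android platform that ships TalkBack, but I have not verified this on an XR device.

import android.accessibilityservice.AccessibilityServiceInfo
import android.content.Context
import android.view.accessibility.AccessibilityManager

// Standard Android framework check for an enabled spoken-feedback service
// (TalkBack reports itself as FEEDBACK_SPOKEN). Assumed, not verified, to
// behave the same way on Android XR devices.
fun isSpokenFeedbackEnabled(context: Context): Boolean {
    val manager =
        context.getSystemService(Context.ACCESSIBILITY_SERVICE) as AccessibilityManager
    val spokenServices = manager.getEnabledAccessibilityServiceList(
        AccessibilityServiceInfo.FEEDBACK_SPOKEN
    )
    return manager.isEnabled && spokenServices.isNotEmpty()
}
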
Use your device with TalkBack
Select & move your focus
When TalkBack is on, it speaks aloud what you’re focused on. On each new screen, the focus starts at the top and moves down.

To move forward:
Turn your right palm out.
Pinch.
To move back:
Turn your left palm out.
Pinch.
To pinch:
Hold your hand upright and open.
Tap your index finger and thumb together.
To double pinch: Perform 2 pinches one after the other.
If your first attempt doesn’t work, try double pinching slower.
Tips:

For best results, imagine you hold a golf ball in between your fingers, then quickly tap your index and thumb to complete each pinch.
If your pinch isn’t registered, try a slower pinch and rotate your hand so the camera can find your gesture.
Navigate your Android XR device
You can move around your device with gesture navigation.

Turn your primary palm in towards your face.
Hold your pinch.
Move your hand left, right, or stay centered.
To complete the action, release your fingers.
By default, your right hand is the primary hand. You can change this in Settings under “Input.”

The list of possible gestures when you use gesture navigation includes:

Launcher:
To go to the Launcher screen, turn your primary palm in.
Pinch and hold.
Release your fingers.
Recents:
To find recently used apps, turn your primary palm in.
Pinch and hold.
Move your hand to the right.
Release your fingers.
Go back:
To return to a previous screen, turn your primary palm in.
Pinch and hold.
Move your hand to the left.
Release your fingers.
Manage TalkBack menu
You can adjust settings anytime from the TalkBack menu. From here, you can change your language, perform a screen search, and more.

To open or close the TalkBack menu, turn your secondary palm in, and then pinch.

Use reading controls
When you use TalkBack, you can change how you focus on text.

To cycle forward:
Turn your secondary palm in towards your face.
To swipe right, pinch and hold.
Move to the right.
Release your fingers.
To cycle backward:
Turn your secondary palm in towards your face.
To swipe left, pinch and hold.
Move to the left.
Release your fingers.
Tip: For these gestures, make sure your palm faces directly towards your face. After you find the reading control you want, keep your secondary palm in, then swipe up to move your focus back or swipe down to move it forward.

Use explore mode
You can use explore mode to spatially navigate XR. With this mode on, you can simply hold out your hand to aim your pointer then pinch to select.

To turn explore mode on or off:

Turn your secondary palm in.
Pinch and hold.
Adjust the volume & mute speech
The volume for TalkBack speech is called Accessibility volume, which is different from media volume.

To adjust the accessibility volume:
With either hand, pinch and hold.
While you hold your pinch, on your headset, press Volume up or Volume down.
Try these additional gestures
Swipe: To scroll content like a long web page or photos, you can use a swipe gesture. When you swipe, you can use either hand.
To swipe horizontally:
With your index and thumb, pinch and hold.
Drag to left or right.
To swipe and scroll vertically:
With your index and thumb, pinch and hold.
Drag up or down.
Auto-advance focus: Automatically move your focus forward and backward with auto-advance.
To auto-advance forward:
With your right hand, pinch and hold.
Drag your hand to the right.
Your focus moves forward through focus items until you release your pinch.
To auto-advance backward:
With your left hand, pinch and hold.
Drag your hand to the left.
Your focus moves backward through focus items until you release your pinch.
Type with TalkBack on
Enter text in boxes
When you select a text box, a virtual keyboard appears at the bottom of your view. To navigate the keyboard or set voice to type as your default typing method, use explore mode.

Aim your hand at the bottom half of your view.
TalkBack announces the key you aim at.
Pinch to select the key.
Use your voice to type
You can also switch the typing method to voice by selecting the voice button on the keyboard.

Aim at the keyboard until you hear the voice button announced.
At the top right, select Voice.
Say what you want to type.