Stop Waiting for AI to Tell You What to See. Start Exploring It Yourself.

By Stephen, 12 November, 2025


I'm about to show you something that breaks every rule about how vision AI is "supposed" to work.
And when I say breaks the rules, I mean completely flips the whole thing upside down.

Here's What's Wrong With Every Vision AI App You've Ever Used
You point your camera.
You wait.
The AI speaks: "It's a living room with a couch and a table."
Cool story. But where's the couch? What color? How close? What's on it? What about that corner over there? That thing on the wall?
Want to know? Point again. Wait again. Ask again.
The AI decides what you need to know. You're stuck listening to whatever it decides to tell you. You don't get to choose. You don't get to dig deeper. You don't get to explore.
You're just a passenger.
So I built something that does the exact opposite.

What If Photos Were Like Video Games Instead of Books?
Forget books. Think video games.
In a game, you don't wait for someone to describe the room. You walk around and look at stuff yourself. You check the corners. You examine objects. You go back to things that interest you. You control what you explore and when.
That's what I built. But for photos. And real-world spaces.
You're not listening to descriptions anymore.
You're exploring them.

Photo Explorer: Touch. Discover. Control.
Here's how it works:
Upload any photo. The AI instantly maps every single object in it.
Now drag your finger across your phone screen.
Wherever you touch? That's what the AI describes. Right there. Instantly.
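For the technically curious: under the hood, the AI hands back a list of labelled regions for the photo, and every touch is simply hit-tested against that map. Here's a rough sketch of the idea (the type and field names are made up for illustration, not the app's actual data model):

// Rough sketch: find the AI-labelled region under a touch point.
// Field names are illustrative; coordinates are normalised 0..1.
interface Region {
  label: string;    // e.g. "red beach umbrella, slightly tilted"
  x: number;
  y: number;
  width: number;
  height: number;
}

function describeAt(regions: Region[], touchX: number, touchY: number): string {
  // Collect every region containing the touch point, then prefer the smallest,
  // so a shell on the sand wins over the whole beach.
  const hits = regions.filter(r =>
    touchX >= r.x && touchX <= r.x + r.width &&
    touchY >= r.y && touchY <= r.y + r.height);
  if (hits.length === 0) return "Nothing detected here";
  hits.sort((a, b) => a.width * a.height - b.width * b.height);
  return hits[0].label;
}

Because the map sticks around after the photo is analyzed, going back to a spot is just another hit-test against the same data.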
Let's Get Real:
You upload a photo from your beach vacation.
Touch the top of the screen:
"Bright blue sky with wispy white clouds, crystal clear, no storms visible"
Drag down to the middle:
"Turquoise ocean water with small waves rolling in, foam visible at wave crests, extends to horizon"
Touch the left side:
"Sandy beach, light tan color with visible footprints, a few shells scattered about"
What's that on the right? Touch there:
"Red beach umbrella, slightly tilted, casting dark shadow on sand beneath it"
Wait, what's under the umbrella? Touch that spot:
"Blue and white striped beach chair, appears unoccupied, small cooler beside it"
Go back to those shells - drag your finger back to the beach:
"Sandy beach, light tan color with visible footprints, a few shells scattered..."
See what just happened?
The information didn't vanish. You went back. You explored what YOU wanted. You took your time. You discovered that cooler the AI might never have mentioned on its own.
You're not being told about the photo. You're exploring it.
And here's the kicker: users are spending minutes exploring single photos. Going back to corners. Discovering tiny details. Building complete mental maps.
That's not an accessibility feature. That's an exploration engine.

Live Camera Explorer: Now Touch the Actual World Around You
Okay, that's cool for photos.
But what if you could do that with the real world? Right now? As you're standing there?
Point your camera at any space. The AI analyzes everything in real-time and maps it to your screen.
Drag your finger - the AI tells you what's under your finger:
• Touch left: "Wooden door, 7 feet on your left, slightly open"
• Drag center: "Clear path ahead, hardwood floor, 12 feet visible"
• Touch right: "Bookshelf against wall, 5 feet right, packed with books"
• Bottom of screen: "Coffee table directly ahead, 3 feet, watch your shins"
The world is now touchable.
Real Scenario: Shopping Mall
You're at a busy mall. Noise everywhere. People walking past. You need to find the restroom and you're not sure which direction to go.
Old way? Ask someone, hope they give good directions, try to remember everything they said.
New way?
Point your camera down the hallway. Give it a few seconds.
Now drag your finger around:
• Touch left: "Store entrance on left, 15 feet, bright lights, appears to be clothing store"
• Drag center: "Wide corridor ahead, tiled floor, people walking, 30 feet visible"
• Touch right: "Information kiosk, 10 feet right, tall digital directory screen"
• Drag up: "Restroom sign, 25 feet ahead on right, blue symbol visible"
You just learned the entire hallway layout in 20 seconds.
Need to remember where that restroom was? Just touch that spot again. The map's still there.
Walk forward 20 feet, confused about where to go next? Point again. Get a new map. Drag your finger around.
But Wait - It Gets Wilder
Object Tracking:
Double-tap any object. The AI locks onto it and tracks it for you.
"Tracked: Restroom entrance. 25 feet straight ahead on right side."
Walk forward. The AI updates:
"Tracked restroom now 12 feet ahead on right."
Lost it? Double-tap again:
"Tracked restroom: About 8 steps ahead. Turn right in 4 steps. Group of people between you - stay left to avoid."
Zoom Into Anything:
Tracking that information kiosk? Swipe left.
BOOM. You're now exploring what's ON the kiosk.
• Touch top: "Mall directory map, large touchscreen, showing floor layout"
• Drag center: "Store listings, alphabetical order, bright white text on blue background"
• Touch bottom: "You are here marker, red dot with arrow, pointing to current location level 2 near food court"
Swipe right to zoom back out. You're back to the full hallway view.
Read Any Text
Swipe up - the AI switches to text mode and maps every readable thing.
Now drag your finger:
• Touch here: "Restrooms. Arrow pointing right."
• Drag down: "Food Court level 3. Arrow pointing up."
• Touch lower: "Store hours: Monday to Saturday 10 AM to 9 PM, Sunday 11 AM to 6 PM"
Every sign. Every label. Every directory. Touchable. Explorable.
Scene Summary On Demand
Lost? Overwhelmed? Three-finger tap anywhere.
"Shopping mall corridor. Stores on both sides, restroom 25 feet ahead right, information kiosk 10 feet right, people walking in both directions. 18 objects detected."
Instant orientation. Anytime you need it.
Watch Mode (This One's Wild)
Two-finger double-tap.
The AI switches to Watch Mode and starts narrating live actions in real-time:
"Person approaching from left" "Child running ahead toward fountain" "Security guard walking past on right" "Someone exiting store carrying shopping bags"
It's like having someone describe what's happening around you, continuously, as it happens.

The Fundamental Difference
Every other app: AI decides → Describes → Done → Repeat
This app: You explore → Information stays → Go back anytime → You control everything
It's not an improvement.
It's a completely different paradigm.

You're Not a Listener Anymore. You're an Explorer.
Most apps make you passive.
This app makes you active.
• You decide what to explore
• You decide how long to spend there
• You discover what matters to you
• You can go back and check anything again
The AI isn't deciding what's important. You are.
The information doesn't disappear. It stays there.
You're not being helped. You're exploring.
That's what accessibility should actually mean.

Oh Right, There's More
Because sometimes you just need quick answers:
Voice Control: Just speak - "What am I holding?" "Read this." "What color is this shirt?"
Book Reader: Scan pages, explore line-by-line, premium AI voices, auto-saves your spot
Document Reader: Fill forms, read PDFs, accessible field navigation

Why a Web App? Because Speed Matters.
App stores = submit → wait 2 weeks → maybe approved → users update manually → some stuck on old version for months.
Web app = fix bugs in hours. Ship features instantly. Everyone updated immediately.
Plus it works on literally every smartphone:
• iPhone ✓
• Android ✓
• Samsung ✓
• Google Pixel ✓
• Anything with a browser ✓
Install in 15 seconds:
1. Open browser
2. Visit URL
3. Tap "Add to Home Screen"
4. Done. It's an app now.

The Price (Let's Be Direct)
30-day free trial. Everything unlocked. No credit card.
After that: $9.99 CAD/month
Why? Because the AI costs me money every single time you use it. Plus I'm paying for servers. I'm one person building this.
I priced it to keep it affordable while keeping it running and improving.

Safety Warning (Important)
AI makes mistakes.
This is NOT a replacement for your cane, guide dog, or mobility training.
It's supplementary information. Not primary navigation.
Never make safety decisions based solely on what the AI says.

The Real Point of This Whole Thing
For years, every vision AI app has said:
"We'll tell you what you're looking at."
I'm saying something different:
"Explore what you're looking at yourself."
Not one description - touchable objects you can explore for as long as you want.
Not one explanation - a persistent map you can reference anytime.
Not being told - discovering for yourself.
Information that persists. Exploration you control. Discovery on your terms.

People are spending 10-15 minutes exploring single photos.
Going back to corners. Finding hidden details. Building complete mental pictures.
That's not accessibility.
That's exploration.
That's discovery.
That's control.
And I think that's what we should have been building all along.
You can try out the app here:
http://visionaiassistant.com


Comments

By Zoe Victoria on Thursday, November 20, 2025 - 21:52

One feature I would like to see in the future is the ability to ask questions while exploring photos. Even though the detail it gives is very rich, sometimes I find that it doesn’t give me all the specific details I am looking for so it would be nice to have some ability to ask for clarification from the AI about them.

By Stephen on Thursday, November 20, 2025 - 22:41

Hi Zoe,
Yeah, the share sheet thing is a real trade-off, not gonna lie.
Those couple extra taps to save and upload? It's because we're web-based instead of a native app. And that comes with a downside (you found it) and an upside.
The upside: I can ship you improvements basically instantly. When you give me feedback, I can turn it around same-day or next-day. No waiting for app store reviews, no version updates, none of that. You just reload and it's there.
Native apps with share sheets can't do that. When users report issues, they're waiting weeks or even longer. When I improve the AI, you get it immediately. They don't.
So yeah, it's a slightly clunkier upload flow. But it means you're always using the best version of the tool, and I can respond to what you're telling me in real-time.
That's the trade. Not perfect, but I think it's the right one for where we are right now. I do however want to implement that feature in the future.

Thanks so much for that feedback, I'll also look into adding the feature you had requested :).

By Ashley on Friday, November 21, 2025 - 08:13

Looking forward to testing this. Real-time, live reading is the part I'm most interested in, but I'm also curious to see how well the image exploration works with album art, on the front of a vinyl record sleeve for example.

By Stephen on Friday, November 21, 2025 - 16:05

Hello all,

I know there hasn't been an update in the last day. I've been working overtime, but I'm hoping to push updates out this weekend. It gets crazy at work around this time of year, it being the final quarter and all, but hopefully I'll have time to work on further improving some of the features tomorrow or Sunday.
Cheers!

By Gokul on Saturday, November 22, 2025 - 03:43

@Stephen sorry for not responding earlier; I was caught up in a medical emergency. About the realtime navigation feature, the thing seems to be stuck at my end. It asks for camera access and I grant it, and it's stuck there. I tried turning VoiceOver off and all, but nothing seems to happen. Is it me not doing something, perhaps?

By Stephen on Saturday, November 22, 2025 - 04:13

Oh no, I hope everyone is OK. Don't even worry about real-time nav, I'm overhauling the entire thing and putting something much more boss in its place. I'm getting rid of Room Explorer mode and real-time navigation, and I'm going to use a live camera feed so that you can explore the room, get descriptions, navigate to objects like doors, chairs, etc., and have contextual awareness of the object you're tracking... it's all coming in a very nice package shortly. I'm just putting the finishing touches on it now, then removing Room Explorer and real-time nav, and this big beast will take their place.

By Stephen on Saturday, November 22, 2025 - 04:14

Once I'm done playing D&D, I'll put the finishing touches on it and drop it for y'all.

By Stephen on Saturday, November 22, 2025 - 07:32

Hi everyone,
I hope this message finds you well. I wanted to take a moment to share some exciting updates with you all, and more importantly, to thank you for the incredible feedback you've been giving me. Every suggestion, every bug report, every feature request - I read them all, and they genuinely shape the direction of this app.
The Big News: Introducing Live Camera Explorer
After listening to your feedback over the past week, I realized something important: having Room Explorer Mode, Realtime Navigation, and the requested live AI narration as separate features was creating unnecessary complexity. You told me the navigation felt disjointed, and I heard you loud and clear.
So I've combined everything into one powerful feature: Live Camera Explorer. This brings together room exploration, live object tracking, real-time narration, text detection, and scene awareness into a single, cohesive experience. Instead of jumping between different modes on the homepage, you now have one unified tool that adapts to what you need in the moment.
To streamline the experience, I've removed the separate Room Explorer and Realtime Navigation buttons from the homepage. I've also added "Contact Developer" directly to the main screen because your feedback matters, and I want to make it as easy as possible for you to reach me.
What Live Camera Explorer Can Do:
• Real-time Object Exploration: Drag your finger across the screen and instantly hear what objects are in view with spatial audio cues and voice feedback
• Object Tracking: Double-tap any object to track it with a continuous audio tone that changes with distance and proximity
• Zoom Into Details: Swipe left while tracking to zoom in and explore what's ON or INSIDE an object (like items on a table or what someone is wearing)
• Text Detection Mode: Swipe up to detect and read all text in view - signs, labels, documents, everything
• Scene Summaries: Three-finger tap for a complete spatial overview of your surroundings
• Watch Mode: Two-finger double-tap for live action narration - the AI continuously describes significant events happening in real-time
How We Made It Fast Enough:
The big challenge was speed. Traditional AI analysis took 3-5 seconds per request, which made real-time exploration impossible. Here's how we solved it:
Instead of analyzing the entire frame from scratch every time, the system now captures lightweight frames and sends them for continuous analysis every second in the background. By optimizing the image resolution (1280x720 instead of full 4K), using JPEG compression at 70% quality, and structuring the AI prompt to focus only on object detection with minimal processing, we cut the round-trip time down to approximately 1 second. The AI now returns simple object labels and boundaries instead of lengthy descriptions, which dramatically reduces processing time.
This means when you're exploring with Live Camera, you're getting near-real-time updates. As you change directions or move the camera, give it about a second to refresh - the objects you hear are based on what was in view just a moment ago. It's not perfect, but it's the fastest I've been able to make it work with current AI technology, and I think you'll find it incredibly useful.
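For developers curious about the plumbing, the loop looks roughly like the sketch below. The "/analyze" endpoint and the updateObjectMap helper are placeholders, but the 1280x720 resize, the 70% JPEG quality, and the roughly one-second cadence match what I described above:

// Rough sketch of the once-per-second capture loop, assuming a <video> element
// already showing the camera feed. "/analyze" and updateObjectMap are placeholders.
function updateObjectMap(objects: unknown): void {
  // Placeholder: refresh the on-screen touch map with the latest labels and boundaries.
}

async function captureLoop(video: HTMLVideoElement): Promise<void> {
  const canvas = document.createElement("canvas");
  canvas.width = 1280;   // downscale from the camera's native resolution
  canvas.height = 720;
  const ctx = canvas.getContext("2d")!;

  while (true) {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // JPEG at 70% quality keeps each frame small enough to upload every second.
    const blob = await new Promise<Blob>(resolve =>
      canvas.toBlob(b => resolve(b!), "image/jpeg", 0.7));

    const form = new FormData();
    form.append("frame", blob, "frame.jpg");
    // The prompt asks only for object labels and boundaries,
    // which keeps the round trip to roughly one second.
    const response = await fetch("/analyze", { method: "POST", body: form });
    updateObjectMap(await response.json());

    await new Promise(r => setTimeout(r, 1000));   // about one frame per second
  }
}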
Tutorial Included:
Because Live Camera Explorer has so many gestures and capabilities, I've added a comprehensive tutorial that walks you through everything step-by-step. First-time users will see this automatically, and you can always replay it from Settings > Live Camera Tutorial.
AI Mirror Mode - Check Your Appearance Independently
I've also added AI Mirror Mode so you can check how you look before heading out. It uses your front-facing camera with gesture controls. Swipe up with one finger and the AI analyzes your outfit, hair, facial expression, glasses position (if wearing them), and overall presentation - giving you specific, practical feedback like "your hair is slightly messy on the left side" or "your shirt collar is wrinkled." Swipe down to check your framing quality percentage. Swipe right to toggle between full guidance mode and quiet mode. Three-finger swipe left to exit. That's it - straightforward appearance feedback using simple gestures.
Important Notes & Known Issues:
The Start Button: I'll be honest - the start button for Live Camera can be a bit finicky right now. You may need to double-tap it and hold for just a split second to activate. I'm actively working on making this more reliable, but in the meantime, if it doesn't start the first time, just try the double-tap-and-hold technique.
Live Scene Description (Watch Mode): The continuous live narration feature is still being refined. It works, but I'd love to hear your feedback on how well it's detecting and describing actions in real-time. Please let me know what works and what doesn't - your real-world testing is invaluable.
Loading Indicators: I'm trying to implement audio cues when things are loading (like entering Photo Explorer or Live Camera modes), but it's being a bit temperamental. As a workaround, if you tap the middle of the screen during loading, it should announce "loading" or "in progress." Don't worry though - the system will definitely let you know when it's ready for you to explore.
Language Support Fixed: Several of you reported the app randomly speaking in English even when you had another language set, then reverting back to your chosen language. I believe I've patched this issue. The app should now continuously speak in whatever language you've set in Settings > Language & Region. If you still experience this problem, please reach out so I can investigate further.
PWA Caching Issue Resolved: For those using the Progressive Web App version, you may have noticed it was stubbornly holding onto older versions even after updates were released. Here's what was happening: the browser's service worker was aggressively caching app files for offline performance, but wasn't checking for new versions frequently enough. I've updated the cache invalidation strategy to force-check for updates every time you open the app, and clear old cached versions immediately when new ones are available. The PWA should now push the most recent update without requiring manual cache clearing.
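For any PWA developers reading along, the pattern is roughly the one below; the cache name and file list are placeholders, not my actual service worker:

// sw.ts - illustrative update strategy: bump CACHE_NAME each release,
// activate the new worker immediately, and delete every older cache.
const CACHE_NAME = "vision-ai-v2";   // placeholder version label

self.addEventListener("install", (event: any) => {
  (self as any).skipWaiting();       // don't wait for old tabs to close
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(["/", "/index.html"])));
});

self.addEventListener("activate", (event: any) => {
  event.waitUntil(
    caches.keys()
      .then(keys => Promise.all(
        keys.filter(k => k !== CACHE_NAME).map(k => caches.delete(k))))
      .then(() => (self as any).clients.claim()));   // take over open pages right away
});

On the page side, the app can also call registration.update() on every launch so the browser checks for a new service worker immediately instead of on its own schedule.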
Dark Mode Fixed: Dark mode for low vision users should now work consistently across both the web browser version and PWA. The issue was that dark mode styles were being applied at the component level, but when navigating to certain pages (especially Web Browser mode), the iframe and container elements weren't inheriting the dark mode context properly. I've moved the dark mode implementation up to the root layout level so it applies globally across all views.
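In practice that just means toggling the theme once on the document root instead of inside each component; a tiny sketch (the class name and storage key are illustrative):

// Illustrative root-level dark mode toggle so every view inherits it.
function setDarkMode(enabled: boolean): void {
  document.documentElement.classList.toggle("dark", enabled);   // "dark" is a made-up class name
  localStorage.setItem("theme", enabled ? "dark" : "light");
}

// Restore the saved preference at startup, before any page renders.
setDarkMode(localStorage.getItem("theme") === "dark");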
Bugs & Feedback:
I'm sure there are still some bugs lurking around - that's just the nature of development, especially when trying to push the boundaries of what's possible with AI and accessibility. If you spot anything that doesn't work right, no matter how small, please reach out through the Contact Developer button on the homepage or message me directly or just yell at me on this thread. Every bug report helps make this better for everyone.
Your Feedback Matters:
I can't stress this enough - your suggestions and experiences directly shape this app. The Live Camera Explorer exists because you told me what you needed. The tutorial exists because you asked for better guidance. The language fixes, the dark mode improvements, the streamlined homepage - all of it came from listening to you.
So please, keep the feedback coming. Tell me what works, what doesn't, what you wish existed, what frustrates you. I read every message, and I'm constantly thinking about how to make this app serve you better.
Thank You:
Finally, I just want to say thank you. Thank you for your patience as I work through these complex features. Thank you for your detailed bug reports. Thank you for your enthusiasm and encouragement. Thank you for trusting me with something as important as your independence and accessibility.
You all are absolutely amazing, and knowing that this app makes even a small difference in your daily lives keeps me motivated to keep improving it. I'm working as hard as I can to make this the best AI vision assistant possible, and your support means the world to me.
Here's to making the world more accessible, one update at a time.
With gratitude, Stephen.
P.S. - As always, I'm here if you need anything. Don't hesitate to reach out.

By Guilherme on Saturday, November 22, 2025 - 09:07

Hello, I really liked the new features implemented; I'm just having a few small issues here. The first is that in Live Camera Explorer mode, the system is giving me descriptions in English instead of my language, which is Portuguese. The second is that when I double-tap with two fingers, it doesn't give me the real-time environment narration; it only says that this mode has been activated but doesn't return any description.

By Stephen on Saturday, November 22, 2025 - 09:29

In regards to your language, please try resetting Portuguese in your settings menu. It seems to be working on this end, so you might just have to reset it. As for the two-finger double tap, you're only supposed to get descriptions when something happens in front of you, like somebody picking up something or someone opening a door. It's really only supposed to react during some sort of action sequence; however, I'll give this feature more testing tomorrow and get back to you 😊. Thanks for the feedback.

By Stephen on Monday, November 24, 2025 - 07:05

I now have a dedicated support line for Vision AI assistant.
If you need help or want to give feedback you can reach me at the toll-free number below:
(866) 825-6177
Cheers! :).

By Brian on Monday, November 24, 2025 - 20:46

Hi Stephen,

I asked this a while back, but never heard back. So I thought I would try again. In your pwa, there was an option to sync Meta smart glasses to the AI, but the steps require a head mount. May I ask why?

By Stephen on Monday, November 24, 2025 - 21:05

Oh my, I'm so sorry if I missed that! So, a couple of reasons: web apps can't access external devices, and even if I were able to do a native iOS app, Meta has restrictions on what applications are allowed to access the glasses camera. I was looking into it earlier, and I knew that question was going to come up quite a bit, so that's why I put all of the information about the Meta glasses in the app. I'm hoping one day we are able to use the camera; I think a lot of it is going to depend on when and if Meta opens it up for third-party access.

By Brian on Monday, November 24, 2025 - 23:27

Fair enough. Hopefully the toolkit they are releasing, or have released, will allow for this.

By Stephen on Monday, November 24, 2025 - 23:53

Oh you better believe I’m keeping a close eye on that one! I have so many ideas for you guys if and when that happens!

By Stephen on Tuesday, November 25, 2025 - 16:33

I've gotten a lot of questions about when this will be turned into a fully native iOS app. The answer is I have no idea. I've been looking into it, and apparently I need a Mac… I've never used a MacBook; I've always used Windows. I was looking at purchasing a Mac and oh my word are they pricey lol. I did find some M1s for around 700 CAD, but they only have 8 GB of RAM. So I guess I have a few questions for my Mac users. How does the M1 chip hold up? Is it still supported with the latest upgrades? Is 8 GB of RAM even going to be enough? How accessible is Xcode with VoiceOver? I need to know all things MacBook lol. Is Xcode easy to learn? You guys are phenomenal. Thanks so much for the support. It looks like we're gaining just over 100 new users per week.

By Brian on Tuesday, November 25, 2025 - 20:53

Black Friday sales are already happening on Amazon, and you can find some really good deals on eBay as well for refurbished models, which I would recommend over simply getting a used device. 🙂

By Stephen on Tuesday, November 25, 2025 - 23:10

Thanks so much for those. I'm leaning towards the MacBook; would 8 gigabytes of RAM be enough? I'm also a little torn on that one because it's 160-some-odd dollars in shipping to Canada, but even still, it is the M2 chip, so it's not the worst deal.

By Brian on Wednesday, November 26, 2025 - 00:07

If you are asking about the MacBook Air I listed above, that particular model has 16 GB of RAM. You really don't want 8 GB of RAM these days. I mean, it is usable, but 16 or above is standard nowadays. On a side note, that Mini above would probably be a better workhorse for you, but in the end it all depends on your needs and use case.

HTH.

By Ashley on Friday, November 28, 2025 - 09:01

I've just briefly tested and this is great work. I never knew apps built with platforms like Base44 could be so powerful! My initial feedback is that the social media functions should be stripped out altogether; the very last thing the world needs is yet another social platform, and the last thing the blind community needs is a specialised, segregated social platform. I would remove all of those features. Also, having a setting to toggle the voice verbosity would be great, for example something that only announces results rather than functions. I'm testing on a Mac; I haven't tried on a touch-based device yet but will do so and report back.

By Stephen on Saturday, November 29, 2025 - 04:16

I will be dropping a new feature this weekend. Can anyone guess what it is? Happy Thanksgiving to all of my American friends!

By Stephen on Tuesday, December 2, 2025 - 13:51

Hey guys sorry for the delay. Been testing this feature and it’s fighting me lol. I will have to delay the feature for now and try to get it out by next Sunday :).

By Stephen on Tuesday, December 2, 2025 - 14:50

OK, so I have managed to implement a physical book reader. You should now be able to pick up any print book and read it. Best thing is, I was able to implement the ElevenLabs API so you can listen to the print book in your preferred voice! I've been reading my Dungeon Crawler Carl physical book set using this feature. It feels good lol. I will try to speed up the ElevenLabs response, but it is analyzing a whole page. Once you finish scanning a page and then double-tap with two fingers to read, ElevenLabs will take 40 to 50 seconds to load. Granted, it is not the most optimal; I'm definitely gonna work on speeding that up if I can.

By Devin Prater on Tuesday, December 2, 2025 - 15:37

I mean, I wouldn't mind Eloquence reading it if it shortens the wait time.

By Stephen on Tuesday, December 2, 2025 - 15:42

The option for system voices can be used as well in the settings of the book reader, which should decrease the wait time. I can probably decrease the wait time with the ElevenLabs API; I'm just gonna have to tinker with it a bit lol.

By Stephen on Tuesday, December 2, 2025 - 16:51

So I'm going to remove the ElevenLabs API and switch to the OpenAI API for the book reader. That ElevenLabs thing is super costly! OpenAI charges $15 where ElevenLabs would charge $300. So yeah, let's get that out of there lol.
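For anyone curious what the swap looks like, reading a scanned page is a single call to OpenAI's speech endpoint. A minimal sketch; the model and voice names are just examples, and in the real app the API key stays on the server:

// Minimal sketch of reading one scanned page aloud with OpenAI text-to-speech.
// Model and voice are example values; keep the API key server-side in a real deployment.
async function readPageAloud(pageText: string, apiKey: string): Promise<void> {
  const response = await fetch("https://api.openai.com/v1/audio/speech", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "tts-1", voice: "alloy", input: pageText }),
  });
  // The endpoint returns audio bytes (MP3 by default); play them in the browser.
  const audio = new Audio(URL.createObjectURL(await response.blob()));
  await audio.play();
}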

By Gokul on Wednesday, December 3, 2025 - 02:15

Both OpenAI and Gemini have some cool voices. I don't know; if you could have Gemini in there, I guess it'd be even more cost-effective.

By Brian on Wednesday, December 3, 2025 - 03:52

Hi,

There is an add-on for NVDA called AI Content Describer. I mention this because in the settings for this add-on is a list of LMs that we can use, including something called "Pollinations".

According to the literature it is a free LM to use without requiring an API key.

Just thought you might be able to look into this as a viable option. 😊

https://mcpmarket.com/server/pollinations-2

https://pollinations.ai/

By Stephen on Wednesday, December 3, 2025 - 06:29

Thank you both for your suggestions. I’m looking into them now 😊.

By JC on Wednesday, December 3, 2025 - 23:24

Could you integrate an audio recorder with AI noise removal running in the background? I know there are similar apps that do that, but it would be pretty cool if this app had an integrated audio recorder that records your audio well while automatically using AI to remove background noise as it records. Also, I'm a big fan of Google AI TTS voices such as Zephyr and Sulafat. These voices use the Google Gemini API, which can be integrated into any application, including, in your case, the book reader built into the web portal, just like ElevenLabs. It would be pretty cool if Google AI TTS were integrated so that I, or anyone, could use a variety of voices from Google such as those found in aistudio.google.com. Can that be done?

By Stephen on Wednesday, December 3, 2025 - 23:53

Absolutely I’ll take a look into adding something like that for you. :).

By JC on Wednesday, December 3, 2025 - 23:58

OK! Thanks! Let us know when it is integrated and I'll go ahead and give it a try when it's ready. Like I said in my previous comment, I know there are other solutions out there that let you do it, but some either give poor results or none at all. It depends on the specifications and the environment you're recording in. For your case, all that's needed is the web browser and either a microphone or an audio interface. Adobe has a similar feature called Enhanced Speech, and the quality is excellent! So maybe you could do something similar, but instead of manually recording the files and then processing them later, it would automatically apply the AI in the background while recording; once that's done, you could either play it back or download it to your local hard drive to share with others. Another question: I use Google for everything, including signing in to certain apps. When I sign in with Google, can I use that option instead of signing in with my own email address and password? Also, as of now, what voices can be used with the integrated screen reader?

By JC on Thursday, December 4, 2025 - 01:35

Hey! I'm all signed up! However, I did get a message that says a subscription has been active. Should I ignore it? And are you going to keep the basic stuff free, with optional subscriptions available for those who would like to subscribe? I know that some cannot afford a subscription and not everyone can spare the money, so I would like the basic stuff to be free, with the optional features being subscription-based.

By JC on Friday, December 5, 2025 - 01:12

Hi, I received the following error when composing a new message to a follower: invalid conversation. I think this is a bug.

By Stephen on Saturday, December 6, 2025 - 21:12

Hey JC! Thanks so much for signing up and for reaching out about the subscription. I really appreciate you taking the time to share your thoughts. I'm sorry it has taken me a couple of days to respond to you; unfortunately, I became ill for a few days, but I'm back at it.
I completely understand your concern about affordability and wanting to keep basic features accessible. Let me be really transparent with you about the reality of running this app. Right now, I'm personally covering around C$300 per month just to keep the backend infrastructure running. The core features you're using work great, but when users request additional integrations like Google APIs or other third-party services, those come with their own costs that add up quickly. Each new API integration I add means another service I'm paying for on the backend. I'm not a big company with venture capital or ad revenue; I'm one person trying to build something genuinely helpful for the blind and visually impaired community while also not bankrupting myself in the process :).
When I started this project, I was upfront that a subscription model would eventually be necessary to sustainably support the service and continue adding new features. I really want to make this accessible to everyone, and I'm working hard to keep the core features affordable. I'm absolutely taking notes on feature requests and exploring what's possible to implement, but I also need to be realistic about what I can sustain financially.
Regarding Google sign-in: you can absolutely sign in with Google! It's already available as an option when you sign up or log in.
I'm committed to building something valuable here and continuing to improve the service. I hope you can understand that the subscription isn't about greed at all; it's just about keeping the service running and being able to continue development. Thanks again for being an early supporter and for understanding!
Also, thanks for reporting that bug, I will look into that ASAP.

By JC on Sunday, December 7, 2025 - 03:13

Hi, thanks Stephen! I totally understand. So should I ignore the message that came up regarding the subscription? Also, when will the AI voice recorder feature be added? I also enjoy the social media hub. Very cool! It's like Instagram but for blind users.

By Stephen on Sunday, December 7, 2025 - 03:35

You do have the free 30 day trial. Also in regards to the AI voice recorder, I haven’t made any guarantees regarding that feature. I will however look into seeing what I can do about that. I’m not gonna give you any guarantees until I can confirm not only whether or not I can implement the feature, but whether or not the feature works successfully. :) but I will keep you updated ❤️.

By IPhoneski on Sunday, December 7, 2025 - 10:57

I was most interested in the 'Watch Mode' feature, which was supposed to inform me about changes in my surroundings, but it doesn't say anything at all. I intentionally made changes, removing objects from the camera's field of view—it didn't help. Additionally, I can't navigate back in any way. I have to close the entire 'Live Camera Explorer' mode using VoiceOver—the app gestures aren't working. Am I doing something wrong, or is there work in progress on this?

By Stephen on Sunday, December 7, 2025 - 11:08

Nope, you're doing everything right. I had spotted the bug on my end on Friday, but I fell unbelievably ill these past few days so I haven't been able to get to it. Feeling much better today though, and it is number 1 on my list to fix.

By Stephen on Sunday, December 7, 2025 - 12:21

The watch feature is fixed, and to make it more consistent with the gestures throughout the app, a two-finger swipe left should bring you back to the main live camera mode.

By IPhoneski on Sunday, December 7, 2025 - 13:59

It works perfectly now. I tested the feature on a ski jumping broadcast, and it narrated the whole process—from preparation, through flight, to landing. I might actually need to buy a dog just to use this feature more!
Just one small suggestion: I would change the frequency of the 'no action' message. Hearing 'No significant action detected' constantly was quite annoying.

By Stephen on Sunday, December 7, 2025 - 14:03

That is uber awesome!!!! Yeah I was thinking the same thing. I’ll take care of that annoying message after I finally go get some sleep lol. I really do love hearing how these features are being used! It’s really kind of heartwarming so I very much appreciate you.

By Stephen on Sunday, December 7, 2025 - 16:19

I do now haha. Wow. That was unexpected. I had no idea this project would become this huge when I originally started it. You all are phenomenal. I don’t have any words lol.

By Stephen on Sunday, December 7, 2025 - 16:21

I feel like Miley Cyrus: I came in like a wrecking ball! I will now see myself out.

By Stephen on Sunday, December 7, 2025 - 16:33

This must have been recorded before I made it faster than 2 seconds. That 2 seconds was driving me wild. You should now be able to walk down a gallery hall for example. There is still maybe a 1 second delay but it feels much faster. At least in live camera mode.

By Stephen on Sunday, December 7, 2025 - 16:59

Hey JC, after the 30-day trial ends, the core AI features require a subscription at $9.99 CAD/month. I'm paying $300/month for infrastructure. I'm one person running this, and I priced it to keep it affordable while covering what it actually costs. The trial gives you full access to everything so you can see if it's worth it for you. I'm trying to build something genuinely useful while keeping it sustainable. :).