r/Blind 2d ago

Can Google Gemini describe live situations?

I saw a video where someone used ChatGPT and asked what was in front of him. Does that work with Google Gemini as well?

3 Upvotes


u/1makbay1 2d ago

I have an iPhone, and it now has “Live Recognition” available through VoiceOver. You can set it to “Scene,” and it will continuously describe things in front of you as you move your phone around, but you can’t ask questions. I’m guessing this kind of constant live description will come to other AI interfaces as well, but I don’t know when an interactive element will be added to live description. It would be nice to be able to ask questions and get responses along with the Live Recognition feature.


u/janneroblind 1d ago

How do I do that with VoiceOver?


u/1makbay1 1d ago

The most reliable way to get to it is to go to Settings > Accessibility > VoiceOver, then go to Rotor and add Live Recognition to your rotor. When you turn the rotor to Live Recognition, swipe up or down and double-tap to choose what you want Live Recognition to find, such as doors, people, etc. The “Scene” setting describes everything the camera is pointed at. If you have a recent enough iPhone, such as a 12 Pro or later, the LiDAR sensor will also tell you how far away some of those things are.