A group of Chinese scientists has developed a wearable artificial intelligence (AI) system designed to help blind and visually impaired individuals move through the world with greater independence and confidence. Detailed in a recent study published in Nature Machine Intelligence, the system combines real-time video analysis, audio cues, and haptic feedback to guide users safely through their surroundings.
Developed by researchers from Shanghai Jiao Tong University, Shanghai Artificial Intelligence Laboratory, East China Normal University, Hong Kong University of Science and Technology, and the State Key Laboratory of Medical Neurobiology at Fudan University, the device represents a significant step forward in accessible navigation technology.
How It Works: Real-Time Feedback Without Overload
The wearable system consists of a small camera mounted between the user’s eyebrows, an AI processor, bone conduction headphones, and ultrathin artificial skin sensors worn on the wrists. As the camera captures live footage, the AI analyses the visual data locally and provides minimal, yet essential, audio prompts directly through the headphones without blocking out surrounding sounds.
At the same time, the skin-like wrist sensors track proximity to nearby objects. If an obstacle such as a wall or a piece of furniture is detected, the sensor on the corresponding wrist vibrates, gently nudging the user to change direction. This dual-sensory approach reduces reliance on lengthy verbal instructions and enhances environmental awareness through intuitive cues.
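The split between the two channels can be pictured as a simple arbitration rule: close-range obstacle warnings go to the wrists, while only essential route guidance is spoken. The sketch below is purely illustrative; the function names, the 40 cm proximity threshold, and the decision rules are assumptions, as the study does not publish its internal logic.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cue:
    channel: str   # "audio" or "haptic"
    message: str   # spoken prompt, or which wrist to vibrate

def select_cues(obstacle_side: Optional[str], obstacle_distance_m: float,
                turn_hint: Optional[str]) -> List[Cue]:
    """Route-level guidance goes to audio; close-range obstacle
    warnings go to the wrist sensors, keeping speech minimal."""
    cues: List[Cue] = []
    # Hypothetical threshold: vibrate the wrist on the obstacle's
    # side when something is within roughly 40 cm.
    if obstacle_side is not None and obstacle_distance_m < 0.4:
        cues.append(Cue("haptic", f"vibrate {obstacle_side} wrist"))
    # Only essential turns are spoken, to avoid overloading the user.
    if turn_hint is not None:
        cues.append(Cue("audio", turn_hint))
    return cues
```

Keeping the haptic and audio channels separate like this is one way to realise the "minimal output" goal Gu describes: speech is reserved for navigation decisions, while reflex-level obstacle avoidance stays non-verbal.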
A Thoughtfully Designed, User-Friendly System
Lead researcher Gu Leilei, an associate professor at Shanghai Jiao Tong University, stressed the importance of keeping the system lightweight, practical, and comfortable enough for all-day use.
“Lengthy audio descriptions of the environment can overwhelm and tire users, making them reluctant to use such systems,” Gu told the South China Morning Post. “Unlike a car navigation system with detailed directions, our work aims to minimise AI system output, communicating information key for navigation in a way that the brain can easily absorb.”
He added, “This system can partially replace the eyes.”
The team designed the system to allow users to move naturally, without feeling burdened. It was tested indoors with 20 visually impaired participants, most of whom learned to operate it comfortably within 10 to 20 minutes. Feedback highlighted the system’s reliability and ease of use.
Voice Commands and Object Recognition
Navigating with the device is straightforward. Users simply issue a voice command to set their destination, and the AI determines a safe, obstacle-free route, offering guidance only when needed.
The AI has been trained to recognise 21 categories of commonly encountered objects, such as beds, tables, chairs, doors, sinks, televisions, food items, and even people, across various angles and distances. The recognition database is expected to grow, improving the system’s versatility in different environments.
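Conceptually, a spoken destination only needs to be matched against the detector's known classes before a prompt is issued. The sketch below is a hypothetical illustration of that matching step: the class list is a partial stand-in for the 21 categories (the article names only some of them), and the detection format, command parsing, and prompt wording are all assumptions.

```python
from typing import List, Dict

# Partial, illustrative stand-in for the system's 21 object categories.
KNOWN_CLASSES = {"bed", "table", "chair", "door", "sink",
                 "television", "food", "person"}

def destination_prompt(command: str, detections: List[Dict]) -> str:
    """Match a spoken destination against current detections and
    return a terse prompt, speaking only when guidance is needed."""
    target = command.removeprefix("go to ").strip()
    for det in detections:
        # Each detection is assumed to carry a class label and a
        # bearing in degrees (negative = left of camera centre).
        if det["label"] == target and det["label"] in KNOWN_CLASSES:
            side = "left" if det["bearing_deg"] < 0 else "right"
            return f"{target} ahead, slightly {side}"
    return f"{target} not in view"
```

A real pipeline would sit between a detector and a route planner, but the principle shown here matches the article's description: the user states a goal, and the system answers with the shortest prompt that is still actionable.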
Beyond navigation, the wrist sensors also assist users in locating and reaching for objects by sensing the distance between the hand and target, offering subtle feedback to guide hand movements.
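One simple way to turn hand-to-target distance into a subtle closing-in cue is to ramp vibration intensity up as the hand approaches. The mapping below is a minimal sketch under assumed values; the actual sensing range and calibration curve used by the device are not published.

```python
def reach_feedback(hand_to_target_m: float, max_range_m: float = 0.5) -> float:
    """Return a vibration intensity in [0, 1] that grows as the hand
    nears the target, guiding the reaching movement.

    The 0.5 m range and the linear ramp are illustrative assumptions."""
    if hand_to_target_m >= max_range_m:
        return 0.0  # target outside haptic range: stay silent
    # Linear ramp: 0 at the edge of range, 1 at contact.
    return 1.0 - (hand_to_target_m / max_range_m)
```

A linear ramp is the simplest choice; a logarithmic curve would give finer resolution near the target, at the cost of a less predictable feel.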
Looking Ahead: From Indoors to Outdoors
While the current system has shown promise in indoor settings, Gu said the next phase of development will focus on adapting it for outdoor environments. Future upgrades may include more sophisticated object detection, real-time route adaptation, and integration with GPS to handle the unpredictability of streets, traffic, and open spaces.
“This research paves the way for user-friendly visual assistance systems,” the team wrote in the study, “offering alternative avenues to enhance the quality of life for people with visual impairment.”
With further refinement, this wearable AI system could bring newfound autonomy and mobility to millions of people with vision loss around the world.