Cutting-Edge Intel AI-Powered Backpack Could Replace A Guide Dog For Blind People
The past decade has seen a number of technical innovations designed to help visually impaired people better navigate the outside world.
These range from white canes and bands using sonar and haptic feedback to smartphone apps providing orientation and navigational assistance.
What remains common to all of these solutions is that they are merely intended to augment the blind user’s experience as they move through public spaces. They are certainly no substitute for the cherished and much-relied upon guide dog.
All of this may be about to change thanks to an innovative new portable solution powered by Intel’s advanced AI software and processors.
The ground-breaking system combines a laptop housed inside a small backpack, a vest jacket with a concealed camera and a waist pack containing a pocket-size battery pack. Audio notifications related to the wearer’s immediate environment are relayed via a Bluetooth earpiece.
The breakthrough device is the brainchild of Jagadish K. Mahendran, a Computer Vision/Artificial Intelligence Engineer from the University of Georgia’s Institute for Artificial Intelligence.
Mahendran’s invention was awarded the grand prize at the OpenCV Spatial AI 2020 competition, the largest competition of its kind in the world, which was sponsored by Intel. This year’s event will also feature Microsoft as a headline sponsor.
Addressing what stirred him to begin work on his supercharged system, Mahendran says, “Last year, when I met up with a visually impaired friend, I was struck by the irony that while I have been teaching robots to see, there are many people who cannot see and need help.”
Pushing the limits of hardware
On initial reflection, for the smartphone generation at least, for whom everything should be pocket-sized, a backpack-based solution may sound a little hefty, but a shift in mindset is required to understand the scaling.
If a smartphone is shrinking down the form factor of a regular home computer, Mahendran’s wearable is accomplishing this with something a little more akin to a supercomputer.
As Mahendran explains, “Without these neural compute sticks from Intel, the wearer would be looking at carrying something like five graphics processing units in the backpack, each one weighing around a quarter of a pound, and that’s not to mention all the fans and power sources that would be needed.”
“It would be unaffordable and impractical for users,” he continues.
“However, thanks to these neural compute sticks and the Intel Movidius processor, this huge GPU capacity is being compressed into a USB stick-sized hardware, so you can just plug it anywhere and you can run these complex, deep learning models.
“This is why the solution that we have developed is so simple because we can just put everything in a small backpack, and it’s portable, cheap and has a very simple form factor.”
The added ingenuity is that, as configured, the system doesn’t look like a piece of assistive technology at all.
However, rather than simply harnessing a single development, Mahendran has succeeded in combining several bleeding edge software and hardware innovations to construct a veritable symphony of AI-powered spatial guidance and notification technologies housed within one system.
These components include a Luxonis OAK-D spatial AI camera, positioned behind viewports in the vest, which is capable of running advanced neural networks and providing accelerated computer vision.
This is made possible by embedding an Intel smart video chip onto the camera itself, enabling super-fast response times and eliminating latency.
The system also uses the Intel OpenVINO toolkit for on-chip edge AI inferencing and OpenCV, a library of programming functions supporting real-time computer vision.
OpenCV has its historic roots within the Intel ecosystem and has served as the world’s largest computer vision community over the past 20 years.
Navigating the environment
When out and about, the system can detect crosswalks, changes in elevation at the curbside, traffic lights and signs and other pedestrians, in addition to a whole host of street furniture such as trash cans, hanging branches and flower baskets.
The wearer is then alerted to the presence and direction of these potential hazards through the Bluetooth earpiece.
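The detection-to-alert flow described above can be illustrated with a short sketch. Note that the `Detection` class, the three-metre alert range and the phrasing of the spoken message are all illustrative assumptions, not details of Mahendran’s actual system.

```python
# Hypothetical sketch: turning spatial (depth-camera) detections into
# short spoken alerts. Names and thresholds are assumptions for
# illustration, not taken from the real project.
from dataclasses import dataclass
from typing import Optional

ALERT_RANGE_M = 3.0  # assumed: only announce hazards within 3 metres

@dataclass
class Detection:
    label: str         # e.g. "person", "trash can", "hanging branch"
    distance_m: float  # depth reported by the stereo camera

def to_alert(det: Detection) -> Optional[str]:
    """Return a short phrase for a nearby hazard, or None if it is
    far enough away that no alert is needed."""
    if det.distance_m > ALERT_RANGE_M:
        return None
    return f"{det.label}, {det.distance_m:.0f} meters"
```

Under this sketch, a trash can two metres ahead would produce the alert “trash can, 2 meters”, while the same object eight metres away would be silently ignored.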
While the quantity of information passed through is customizable, the system always delivers non-negotiable “critical updates” related to safety, such as an approaching cyclist or a change in curb elevation.
Fascinatingly, Mahendran’s system, which uses certain techniques and models featured in self-driving vehicles, is interactive and users can ask it questions during the journey.
These include a “Describe” command to identify objects in the user’s immediate vicinity, to which the system can provide answers like “person, 10 o’clock” or “traffic light, 2 o’clock.”
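The clock-face phrasing in those answers can be reproduced with simple arithmetic. This minimal sketch assumes the camera reports a horizontal bearing in degrees relative to straight ahead; that interface is an assumption, not the project’s actual API.

```python
# Illustrative conversion of a horizontal bearing into a clock-face
# direction, as in the "person, 10 o'clock" style answers.
def bearing_to_clock(angle_deg: float) -> str:
    """Map a bearing (0 = straight ahead, positive = to the right,
    negative = to the left) to a clock-hour phrase."""
    # Each clock hour spans 30 degrees; 0 degrees is 12 o'clock.
    hour = round(angle_deg / 30.0) % 12
    if hour == 0:
        hour = 12
    return f"{hour} o'clock"
```

With this mapping, an object 60 degrees to the left reads as “10 o’clock” and one 60 degrees to the right as “2 o’clock”.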
The system also allows users to save their current location or send it to another person via an SMS message.
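The save-and-share feature might look something like the following sketch. The helper names and the in-memory store are hypothetical; in the real system the message would go out over SMS rather than into a Python list.

```python
# Hypothetical sketch of a "save location, then share it" command.
# save_location/share_location and the outbox list are illustrative
# stand-ins, not the project's real API.
from typing import Dict, List, Tuple

saved_locations: Dict[str, Tuple[float, float]] = {}

def save_location(name: str, lat: float, lon: float) -> None:
    """Remember the wearer's current coordinates under a spoken name."""
    saved_locations[name] = (lat, lon)

def share_location(name: str, recipient: str, outbox: List[str]) -> None:
    """Queue a message with a saved location; in the real system this
    would be delivered as an SMS."""
    lat, lon = saved_locations[name]
    outbox.append(f"To {recipient}: I am at {lat:.5f}, {lon:.5f}")
```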
In configuring the system settings and options, Mahendran encountered an issue common to all AI engineers seeking to improve assistive technology: what constitutes information overload for the user?
“If there is a continuous bombardment of information it can become overwhelming,” says Mahendran.
“This is why we wanted to ensure we provide critical updates but make the rest of the information customizable.”
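The critical-versus-customizable split Mahendran describes amounts to a simple priority filter. The event names and settings below are assumptions for illustration only.

```python
# Sketch of the filtering idea: critical safety events are always
# announced; everything else passes only if the user has enabled it.
# The category names are illustrative, not the project's actual config.
CRITICAL = {"cyclist approaching", "curb elevation change"}

def should_announce(event: str, user_enabled: set) -> bool:
    """Critical updates are non-negotiable; others are opt-in."""
    return event in CRITICAL or event in user_enabled
```

A cyclist warning would thus reach the earpiece even with every optional category switched off, while a flower basket would be mentioned only if the user had opted in.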
Hema Chamraj, director of Technology Advocacy and AI4Good at Intel, says, “What we are seeing with the type of innovation Jagadish and his team are bringing forward is the real democratization of AI.”
“For some time, AI has just been reserved for specialized systems and specialized skillsets but now it’s being brought into daily life,” she continues.
“We are bringing it down to the consumer level to say ‘here are things that you can just plug and play.’ The technology is democratizing opportunities that have never existed before.”
Under the umbrella of the MIRA Guidance System for The Blind, the next step for the initiative is to make the code, models and data sets open-source, enabling innovators to build further on the system infrastructure.
The development of a brand-new prototype is now at an advanced stage and the team is looking forward to undertaking further extensive testing with blind users.
Future funding will be required to ensure agile development but a key pressing question clearly remains.
Could the refined version of the system replace blind people’s canine companions, which have been providing the community with invaluable daily living assistance for over a century?
On a technical level at least, Mahendran is entirely unequivocal: “Guide dogs are extremely good at detecting things but obviously can’t communicate what the obstacle is, so it takes some time for the person to figure out the scenario,” he says.
“This is what is known as primitive perception and just scratches the surface of what our system can do.”
He does, however, offer a caveat, crediting what is unique to man’s best friend: “But, of course, the dog might well provide an emotional support system and you certainly can’t hug or play around with an AI engine.”
Though possibly not imminent, with the breakneck speed at which hardware components are miniaturizing, whilst retaining, and even enhancing, performance, it may not be long until the rucksack itself is banished to the back of the wardrobe.
The future generation form factor for the whole system could well end up being the most fitting of all, given the need it would be addressing.
According to Mahendran, “One day, we will be able to perform a lot more complex processing for the user on very simple hardware. It might even be inside smart glasses and eyewear, a wearable within which they can do everything.”