“Accessibility by design” is an important concept for Microsoft, and one that underpins many of its artificial intelligence-powered products, including Seeing AI.
Announced on Wednesday alongside a series of other AI tools, Seeing AI is a free mobile application designed to support people with visual impairments by narrating the world around them. The app, an ongoing research project bringing together deep learning and Microsoft Cognitive Services, can read documents, making sense of structural elements such as headings, paragraphs, and lists, as well as identify a product from its barcode.
It can additionally recognise and describe images in other apps, and even pinpoint people’s faces and provide a description of their appearance, though camera quality and lighting might influence its description.
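Seeing AI itself is a closed app, but the Cognitive Services it draws on are publicly callable. As a rough illustration, a caption for a photo can be requested from the Computer Vision REST API in a few lines of Python; the endpoint region and subscription key below are placeholder assumptions, not details from Seeing AI.

```python
# Minimal sketch of a Cognitive Services Computer Vision "describe" call,
# the kind of API Seeing AI builds on. The endpoint region and key are
# placeholder assumptions; substitute your own subscription values.
import requests

SUBSCRIPTION_KEY = "YOUR_COGNITIVE_SERVICES_KEY"  # hypothetical placeholder
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe"

def describe_image(image_path):
    """Send a local image and return the API's best caption."""
    with open(image_path, "rb") as f:
        response = requests.post(
            ENDPOINT,
            headers={
                "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    # Each caption comes with a confidence score; take the highest.
    best = max(captions, key=lambda c: c["confidence"])
    return best["text"], best["confidence"]

if __name__ == "__main__":
    text, confidence = describe_image("photo.jpg")
    print(f"{text} (confidence {confidence:.2f})")
```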
At the Microsoft Future of Artificial Intelligence event in Sydney, Kenny Johar Singh, a Melbourne-based cloud solutions architect at Microsoft, demonstrated Seeing AI, which he uses to help navigate the physical world.
Singh, who lost 75 percent of his vision to a degenerative retinal condition, said technology has been an “empowering force” that compelled him to pursue a career in the industry.
Where Singh once relied on his wife to bridge the information gap between him and the physical world, Seeing AI now lets him be more independent.
In front of guests, Singh used the app to scan a product, which it correctly identified as Bounce’s coconut lemon-flavoured natural energy ball.
“What it’s actually done for me is that, now that it has detected the ball, I have the calorie calibration — proteins, carbohydrates, fats, and so forth. So I know what I’m picking and using which basically means that I can be totally independent now and my wife absolutely loves it because she doesn’t need to be dragged into stuff like this,” he told guests at the Microsoft Summit.
Singh also pointed his phone’s camera at Jenny Lay-Flurrie, Microsoft’s chief accessibility officer, and the Seeing AI app described her as a “54-year-old woman with brown hair wearing glasses looking happy”. The hair, glasses, and expression were accurate, though the app was off the mark about her age.
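Attribute estimates of that kind, age, glasses, and emotion among them, are what the Cognitive Services Face API exposes. A minimal sketch of such a call follows, with the key and region as placeholder assumptions rather than anything from the demo.

```python
# Minimal sketch of a Cognitive Services Face API call returning the kind of
# attributes quoted in the demo. The key and region are placeholder assumptions.
import requests

SUBSCRIPTION_KEY = "YOUR_FACE_API_KEY"  # hypothetical placeholder
ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def detect_face_attributes(image_path):
    """Return estimated age, gender, glasses, and emotion for each face."""
    with open(image_path, "rb") as f:
        response = requests.post(
            ENDPOINT,
            params={"returnFaceAttributes": "age,gender,glasses,emotion"},
            headers={
                "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    response.raise_for_status()
    results = []
    for face in response.json():
        attrs = face["faceAttributes"]
        # The dominant emotion is the one with the highest score.
        emotion = max(attrs["emotion"], key=attrs["emotion"].get)
        results.append((attrs["age"], attrs["gender"], attrs["glasses"], emotion))
    return results
```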
“People who are blind are not the best at taking pictures, and we often get a lot of edge cases coming through with those pictures. Pictures that aren’t in the middle bracket of the high quality pictures that you can use to … adapt your algorithms and get some machine learning. They are in a lot of ways sometimes dirty pictures, but the overall quality of cognitive services will increase exponentially because you’re including this data sample,” Lay-Flurrie told guests at the Microsoft Summit.
Microsoft’s accessibility features are also being used by Cameron Roles, a law lecturer at the Australian National University, who said in a statement: “Now is definitely, in my view, the most exciting time in history to be blind.”
One of triplets, Roles, who is also a director with Vision Australia, was born three months early and the oxygen that saved his life left him blind. Of particular use to him are the latest accessibility features in platforms such as Office 365 and Windows 10.
For example, Microsoft has recently integrated its alternative-text engine into the core of Windows 10, which Lay-Flurrie said was a “serious step”. This means visually impaired users who rely on third-party screen readers or Microsoft’s own Narrator, which reads text aloud and describes events, will be able to get a description of an image’s contents, rather than simply knowing there is an image on screen.
The company also recently introduced Eye Control, a built-in eye-tracking feature for Windows 10, enabling people living with motor neurone disease (MND) and other mobility impairments to navigate their computers. The feature currently only works with Swedish eye-tracking vendor Tobii’s Eye Tracker 4C, though Microsoft is working to add support for other similar devices.
Within Eye Control is a capability called “shape writing”, aimed at speeding up typing by having the user look at the first and last letters of a word while “simply glancing at letters in between”. Microsoft said in August that a “hint of the word predicted will appear on the last key of the word”, and if the prediction is incorrect, the user can select other predicted alternatives.
The company has additionally introduced filters in Windows 10 for colour blindness, a condition that Lay-Flurrie said is more common than people think, affecting one in nine people.
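Colour filters of this kind are typically implemented as a per-pixel 3x3 colour matrix. The sketch below uses a widely circulated deuteranopia simulation matrix purely for illustration; Windows 10’s actual filter coefficients are not published in the announcement, so treat the numbers as an assumption.

```python
# Minimal sketch of a colour-blindness filter as a per-pixel 3x3 matrix.
# The matrix below is a widely circulated deuteranopia *simulation*
# approximation, used here purely for illustration; it is an assumption,
# not Windows 10's actual filter coefficients.
import numpy as np

DEUTERANOPIA = np.array([
    [0.625, 0.375, 0.0],
    [0.700, 0.300, 0.0],
    [0.000, 0.300, 0.7],
])

def apply_filter(image, matrix=DEUTERANOPIA):
    """Apply a 3x3 colour matrix to an (H, W, 3) RGB image in [0, 1]."""
    # Each output channel is a weighted mix of the input channels.
    return np.clip(image @ matrix.T, 0.0, 1.0)

# Example: a pure-red pixel is remapped toward yellow-brown tones.
pixel = np.array([[[1.0, 0.0, 0.0]]])
print(apply_filter(pixel))  # [[[0.625 0.7 0.]]]
```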
Lay-Flurrie, who has a hearing impairment, said that while we need to consider the potential implications of AI in areas such as privacy and security, we should also look at the positives. She said there are 1 billion people living with disabilities globally, and AI can empower these people both in their day-to-day lives and in workplace environments.
“I also look at my daughter, who has autism, she’s 10. With autism, you don’t always understand social cues and social language, and facial expressions are not obvious to her what they mean. She often misinterprets what we’re saying … So I love the potential and the power and beginning to see a wave of innovation in the area of cognitive and mental health where you’re understanding those social cues, you’re using that visual stimulus to give examples, you’re helping to prompt what would be the next step to your learning through real life as opposed to sitting there … some of the therapeutic applications [require you] to sit there and watch YouTube videos,” she said.
“I think there’s real-time applications that can change the lives and include in the same way you do in the workplace with PowerPoint Designer and position people as the geniuses that they are … And you need to be able to perform at the same level as anyone else. Cognitive services and some of these beautiful engines could give us that capability.”
Lay-Flurrie also said people with disabilities can lead or contribute significantly to innovation.
“People with disabilities have a unique lens on the world that could really give a massive input of innovation here and accelerate our path with AI,” she said at the Microsoft Summit.
Microsoft’s chief storyteller Steve Clayton communicated a similar sentiment at the event, saying innovations designed by and for people living with disabilities can prove to be useful more broadly.
“When I started to learn about inclusivity in design, the dropped kerb was originally invented for people who are in wheelchairs. It turns out that the dropped kerb on a sidewalk is also incredibly useful if you’re carrying groceries or if you’re on a skateboard. There are these serendipitous moments I think we’ve found where we said, ‘Hey, we’re going to create a piece of technology that is for people with visual impairment or other disabilities’ that actually turned out to be incredibly useful for the rest of the world,” he said.
Advances in computer vision, speech recognition, and natural language understanding have allowed Microsoft to integrate AI into its existing products while offering new AI-powered ones.
The company has developed technologies that can recognise speech with an error rate of 5.1 percent and identify images with an error rate of 3.5 percent.
Microsoft is also currently leading the Stanford Question Answering Dataset (SQuAD) challenge, a competition run by Stanford University that uses passages from Wikipedia to test how well AI systems can answer questions about text. The competition is expected to generate results that can be applied in areas such as Bing search and chatbot responses.
“This means that using AI’s deep learning, computers can recognise words in a conversation on par with a person, deliver relevant answers to very specific questions, and provide real-time translation,” the company said in an announcement on Thursday.
“It also means that computers on a factory floor can distinguish between a fabricated part and a human arm, or that an autonomous vehicle can tell the difference between a bouncing ball and a toddler skipping across a street.”
In Australia, the University of Canberra has developed the Lucy and Bruce chatbots to streamline support services for students and employees, using the Microsoft Bot Framework and Microsoft Cognitive Services Language Understanding Intelligent Service (LUIS).
Once launched, Lucy will connect to the university’s Dynamics 365 platform, allowing students to raise tickets when Lucy can’t find the answer. The university is also exploring the possibility of using Bruce to let staff log IT service tickets.
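The LUIS pattern behind a bot like Lucy is straightforward: send the user’s utterance to the model’s endpoint and route on the top-scoring intent. A minimal sketch follows, in which the app ID, key, region, and the escalation threshold are illustrative assumptions rather than details of the university’s deployment.

```python
# Minimal sketch of querying a LUIS model for an intent, the pattern a bot
# like Lucy builds on. The app ID, key, region, and threshold are
# illustrative assumptions, not the University of Canberra's actual model.
import requests

LUIS_APP_ID = "YOUR_LUIS_APP_ID"     # hypothetical placeholder
SUBSCRIPTION_KEY = "YOUR_LUIS_KEY"   # hypothetical placeholder
ENDPOINT = f"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{LUIS_APP_ID}"

def top_intent(utterance):
    """Return LUIS's top-scoring intent and its confidence for an utterance."""
    response = requests.get(
        ENDPOINT,
        params={"subscription-key": SUBSCRIPTION_KEY, "q": utterance},
    )
    response.raise_for_status()
    result = response.json()["topScoringIntent"]
    return result["intent"], result["score"]

if __name__ == "__main__":
    intent, score = top_intent("When is the census date for semester one?")
    if score < 0.5:
        # Low confidence: this is where a bot could raise a ticket instead.
        print("Escalating to a support ticket...")
    else:
        print(f"Handling intent: {intent}")
```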
Australian Securities Exchange-listed packaging manufacturer Pact Group has also worked with Microsoft, using its Cognitive Services Computer Vision for facial and object recognition to boost workplace safety.
Pact’s Workroom Kiosk Demo can recognise individual employees in a workshop environment, detecting whether they are wearing appropriate safety gear and monitoring their behaviour based on an understanding of the tasks each employee is authorised to perform. Team leaders are automatically alerted to potential issues, and an on-site trial of the system will be launched soon.
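Neither Microsoft nor Pact has published the kiosk’s internals, but one plausible building block is a tag query against the Computer Vision “analyze” endpoint to check a camera frame for protective equipment. In the sketch below, the required tag, key, and region are illustrative assumptions.

```python
# Minimal sketch of one building block of a safety-gear check: asking the
# Computer Vision "analyze" endpoint for tags and looking for protective
# equipment. The tag names, key, and region are illustrative assumptions;
# Pact's actual implementation has not been published.
import requests

SUBSCRIPTION_KEY = "YOUR_COGNITIVE_SERVICES_KEY"  # hypothetical placeholder
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"
REQUIRED_GEAR = {"helmet"}  # assumed tag of interest, for illustration

def missing_gear(image_bytes):
    """Return which required gear tags the API did not detect in the frame."""
    response = requests.post(
        ENDPOINT,
        params={"visualFeatures": "Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    detected = {t["name"] for t in response.json()["tags"] if t["confidence"] > 0.5}
    return REQUIRED_GEAR - detected  # non-empty set: alert the team leader
```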
Microsoft has also announced advancements to Translator, with expanded use of neural networks to improve both text and speech translations in all of Translator’s supported products.
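Those improvements surface through the Translator Text REST API, which can be called directly. A minimal sketch against the v3.0 endpoint follows, with the subscription key as a placeholder assumption.

```python
# Minimal sketch of a Translator Text API call, the service the neural
# improvements flow through. Uses the v3.0 REST endpoint; the key is a
# placeholder assumption.
import requests

SUBSCRIPTION_KEY = "YOUR_TRANSLATOR_KEY"  # hypothetical placeholder
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def translate(text, to_lang="zh-Hans"):
    """Translate text into the target language and return the result."""
    response = requests.post(
        ENDPOINT,
        params={"api-version": "3.0", "to": to_lang},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]

if __name__ == "__main__":
    print(translate("Accessibility by design"))
```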
For people learning Chinese, the company will “soon” release a new mobile application from Microsoft Research Asia that can act as an always-available, AI-based language-learning assistant.
The company has additionally announced Visual Studio Tools for AI, aimed at AI developers and data scientists, which it said combines Visual Studio capabilities such as debugging and rich editing with support for deep learning frameworks such as Microsoft Cognitive Toolkit, Google TensorFlow, and Caffe. Visual Studio Tools for AI leverages existing code support for Python, C/C++/C#, and supplies additional support for Cognitive Toolkit BrainScript, Microsoft said.
AI capabilities for Azure IoT Edge — which enable developers to build and test container-based modules using C, Java, .NET, Node.js and Python, and simplify the deployment and management of workloads and machine learning models at the edge — are also now generally available, the company said.
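An edge module’s core loop is simple: receive a message on a named input, run it through a model, and forward the result to a named output. The sketch below uses the azure-iot-device Python SDK for clarity; the SDK choice, the input and output names, and the scoring stub are all illustrative assumptions.

```python
# Minimal sketch of an Azure IoT Edge module's message loop: receive a
# message on one input, score it, and forward the result to an output.
# The SDK choice, input/output names, and scoring stub are illustrative
# assumptions, not a Microsoft reference implementation.
from azure.iot.device import IoTHubModuleClient, Message

def score(payload: bytes) -> bytes:
    # Placeholder for a deployed machine learning model's inference step.
    return payload

def main():
    # Connection details come from the IoT Edge runtime's environment.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()
    try:
        while True:
            message = client.receive_message_on_input("input1")  # blocking
            result = score(message.data)
            client.send_message_to_output(Message(result), "output1")
    finally:
        client.disconnect()

if __name__ == "__main__":
    main()
```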
“AI is about amplifying human ingenuity through intelligent technology that will reason with, understand, and interact with people and, together with people, help us solve some of society’s most fundamental challenges,” Clayton said in a statement.