
Artificial Intelligence Poised to Improve Lives of People With Disabilities

Are you looking forward to a future filled with smart cognitive systems? Does artificial intelligence sound too much like Big Brother? For many of us, these technologies promise more freedom, not less.

One of the distinctive features of cognitive systems is the ability to engage with us, and the world, in more human-like ways. Through advances in machine learning, cognitive systems are rapidly improving their ability to see, to hear, and to interact with humans using natural language and gesture. In the process, they also become more able to support people with disabilities and the growing aging population.

The World Health Organization estimates that 15 percent of the global population lives with some form of disability. By 2050, people aged 60 and older will account for 22 percent of the world’s population, with age-related impairments likely to increase as a result.

I’m cautiously optimistic that by the time I need it, my car will be a trusted independent driver. Imagine the difference it will make for those who cannot drive to be able to accept any invitation, or any job offer, without depending on another person or on public transport to get them there. Researchers and companies are also developing cognitive technologies for accessible public transportation. For example, IBM, the CTA (Consumer Technology Association) Foundation, and Local Motors are exploring how Watson technologies could be used to develop the world’s most accessible self-driving vehicle, one able to adapt its communication and personalize the overall experience to suit each passenger’s unique needs. Such a vehicle could use sign language with deaf passengers; describe its location and surroundings to blind passengers; recognize and automatically adjust access and seating for those with mobility impairments; and ensure all passengers know where to disembark.

The ability to learn and generalize from examples is another important feature of cognitive technologies. For example, in my “smart home,” sensors backed by cognitive systems that can interpret their data will learn my normal activity, recognize falls, and proactively alert my family or caregivers before a situation becomes an emergency, enabling me to live independently and more safely in my own home. My stove will turn itself on when I put a pot on it; I’ll tell it to “cook this pasta al dente,” then go off for a nap, knowing it will turn itself off and has learned the best way to wake me.
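To make the “learn my normal activity” idea a little more concrete, here is a minimal sketch, in Python, of how such a system might build a baseline from motion-sensor counts and flag hours that deviate from it. The sensor data, threshold, and function names are invented for illustration; real smart-home products use far richer data and models.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical motion-sensor log: hour of day -> number of movement events,
# recorded over several ordinary days. Purely illustrative values.
training_days = [
    {7: 12, 8: 20, 12: 15, 18: 22, 22: 8},
    {7: 10, 8: 18, 12: 14, 18: 25, 22: 6},
    {7: 14, 8: 22, 12: 16, 18: 20, 22: 9},
]

# Learn a "normal activity" profile: mean and spread of events for each hour.
profile = defaultdict(list)
for day in training_days:
    for hour, count in day.items():
        profile[hour].append(count)

baseline = {h: (mean(c), pstdev(c) or 1.0) for h, c in profile.items()}

def unusual_hours(today, threshold=3.0):
    """Flag hours whose activity deviates strongly from the learned baseline."""
    alerts = []
    for hour, (mu, sigma) in baseline.items():
        count = today.get(hour, 0)
        if abs(count - mu) / sigma > threshold:
            alerts.append((hour, count, mu))
    return alerts

# A morning with almost no movement might prompt a check-in with family or
# caregivers before the situation becomes an emergency.
print(unusual_hours({7: 0, 8: 1, 12: 15, 18: 21, 22: 7}))
```

The point of the sketch is only that “normal” is learned from the resident’s own routine rather than hard-coded, which is what lets the same system adapt to very different households.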

All of this may sound futuristic, but in the subfield of computer science known as accessibility research, machine learning and other artificial intelligence techniques are already being applied to tackle obstacles faced by people with disabilities and to support independent aging. For example, people with visual impairments are working with researchers on machine learning applications that will help them navigate efficiently through busy and complex environments, and even run marathons. Cognitive technologies are being trained to recognize interesting sounds and provide alerts for those with hearing loss; to recognize items of interest in Google Street View images, such as curb cuts and bus stops; to recognize and produce sign language; and to generate text summaries of data, tailored to a specific reading level.

One of the most exciting areas is image analysis. Cognitive systems are learning to describe images for people with visual impairments. Currently, making an image accessible requires a sighted person to write a description that a computer can then read aloud to people who can’t see the original image. Despite well-established guidelines from the World Wide Web Consortium (W3C), and legislation in many countries requiring alternative text descriptions for online images, such descriptions are still missing from many websites. Cognitive technology for image interpretation may, at last, offer a solution. Facebook is already rolling out an automatic description feature for images uploaded to its social network. It uses cognitive technologies to recognize characteristics of the picture and automatically generates basic but useful descriptions such as “three people, smiling, beach.”
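As a rough illustration of how a description like “three people, smiling, beach” could be assembled, here is a small Python sketch that turns high-confidence classifier outputs into a short alt-text string. The concept names, scores, and threshold are assumptions made up for this example; Facebook’s actual pipeline is not public in this form.

```python
# Hypothetical classifier output for one photo: concept -> confidence score.
detections = {
    "person": 0.97,
    "person_count": 3,
    "smiling": 0.91,
    "beach": 0.88,
    "dog": 0.12,
}

CONFIDENCE_THRESHOLD = 0.8  # only report concepts the model is fairly sure about

def basic_description(detections, threshold=CONFIDENCE_THRESHOLD):
    """Build a short alt-text string from high-confidence concepts."""
    parts = []
    if detections.get("person", 0) >= threshold:
        parts.append(f"{detections.get('person_count', 'some')} people")
    for concept in ("smiling", "beach"):
        if detections.get(concept, 0) >= threshold:
            parts.append(concept)
    return "Image may contain: " + ", ".join(parts) if parts else "No description available."

print(basic_description(detections))  # -> "Image may contain: 3 people, smiling, beach"
```

The confidence threshold is the key design choice: a description that omits a detail is merely basic, while one that asserts something false can actively mislead a person who cannot check the image.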

The possibilities for cognitive technology to support greater autonomy for people with disabilities are endless. We are beginning to see the emergence of solutions that people could only dream of a decade ago. Cognitive systems, coupled with sensors in our homes, in our cities, and on our bodies, will enhance our own ability to sense and interpret the world around us, and will communicate with us in whatever way we prefer.

The more that machines can sense and understand the world around us, the more they can help people with disabilities to overcome barriers, by bridging the gap between a person’s abilities and the chaotic, messy, demanding world we live in. Big Brother may not be all bad after all.


Source: Artificial Intelligence Poised to Improve Lives of People With Disabilities (https://www.huffingtonpost.com/entry/artificial-intelligence-poised-to-improve-lives-of_us_59662920e4b09be68c005698)
