AI technology helps students who are deaf learn

ROCHESTER, NY — As stragglers settle into their seats for general biology class, real-time captions of the professor’s banter about general and special senses – “Which receptor picks up pain? All of them.” – scroll across the bottom of a PowerPoint presentation displayed on wall-to-wall screens behind her. An interpreter stands a few feet away, rendering the professor’s spoken words into American Sign Language, the primary language used by people who are deaf in the US.

Except for the real-time captions on the screens in front of the room, this is a typical class at the Rochester Institute of Technology in upstate New York. About 1,500 students who are deaf and hard of hearing are an integral part of campus life at the sprawling university, which has 15,000 undergraduates. Nearly 700 of them take courses with students who are hearing, including several dozen in Sandra Connelly’s general biology class of 250 students.

The captions on the screens behind Connelly, who wears a headset, are generated by Microsoft Translator, an AI-powered communication technology. The system uses an advanced form of automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text. The removal of disfluencies and the addition of punctuation lead to higher-quality translations into the more than 60 languages that the translator technology supports. The community of people who are deaf and hard of hearing recognized this cleaned-up, punctuated text as an ideal way to access spoken language alongside ASL.
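The article doesn’t detail the programming interface behind the classroom captions, but the same speech translation capability is exposed publicly through the Azure Speech SDK, which underpins Microsoft Translator’s speech features. A minimal sketch, assuming a Python environment, a placeholder subscription key and region, and Spanish as an example target language:

```python
# Hedged sketch: real-time captioning and translation with the Azure Speech SDK.
# The subscription key, region, and target language below are placeholders.
import time

import azure.cognitiveservices.speech as speechsdk

# Configure speech translation: recognize spoken English, translate to Spanish.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_SPEECH_KEY",  # placeholder credential
    region="eastus",                 # placeholder Azure region
)
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("es")

# Capture audio from the default microphone (e.g., a lecturer's headset).
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config,
    audio_config=audio_config,
)

def show_caption(evt):
    """Print the finalized English transcript and its Spanish translation."""
    if evt.result.reason == speechsdk.ResultReason.TranslatedSpeech:
        print("Caption:", evt.result.text)
        print("Spanish:", evt.result.translations["es"])

# The 'recognized' event fires once per finalized utterance, with punctuation
# added by the service; partial results are available via 'recognizing'.
recognizer.recognized.connect(show_caption)

recognizer.start_continuous_recognition()
time.sleep(60)  # caption for one minute, then stop
recognizer.stop_continuous_recognition()
```

A live captioning display like the one behind Connelly would typically render the interim `recognizing` results word by word and then replace them with the cleaned-up `recognized` text shown here.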