Researcher Gets 2 NIH Awards to Study Speech and Technology
CINCINNATI—With new grants from the National Institutes of Health, University of Cincinnati (UC) researcher Suzanne Boyce, PhD, hopes to prove it's not what you say, but how you say it.
Boyce, professor in communication sciences/disorders at UC's College of Allied Health Sciences, has received $850,000 total in two NIH awards, both of which involve developing computer technology to help researchers and caregivers better understand the nature of speech. Boyce will work with a team of colleagues around the country to develop the software.
The first grant, for $650,000 over two years, is to develop speech articulation tools for neuroscience research. In this work, Boyce hopes to create software that will allow researchers unfamiliar with speech, a technically difficult field, to quickly screen patients.
She says changing speech patterns can hold clues to changes in conditions like autism, schizophrenia or sleep disorders. But speech's complexity poses barriers to many researchers, leading some to turn away from potential courses of study.
"Speech is a very, very specialized field with a very steep learning curve," says Boyce. "There are elements of acoustics, articulation … it's not the same as just the words people use. Technically, it's very daunting."
As a result, she says speech is "very understudied from a psychological standpoint."
"Our idea is to make it easier for those who don't understand speech to at least do a fast screening to see if there is something interesting that is worth pursuing," she says.
Boyce's second award, for $200,000 over one year, is to develop a device for monitoring speech directed at listeners with hearing impairments. The device will help family members of people with hearing loss learn how to communicate more effectively with their loved one.
"It turns out that people with hearing impairments really do a lot better if they are spoken to in a slightly clearer fashion than you would normally use," she says. "In a family, you know each other's voice so well that you don't have to speak too clearly to get your meaning across."
But those casual speech patterns can mean people with hearing impairments miss out on conversations.
"They end up being isolated. It’s a very well-known problem among people dealing with hearing impairments," says Boyce.
To remedy the problem, audiologists often counsel family members to speak more clearly, for example, as they would in a job interview. But years of habit are hard to change.
That's why Boyce and her colleagues want to help family members track their own progress in changing their speech.
Ideally, the proposed software will take two measurements—one of a person's casual speaking style and one of their more formal speech. Speakers can then use those measurements to learn how to switch between the two styles.
Boyce says she wants patients to think of switching between speech patterns as a skill they can learn, or "an extra muscle they know how to control." She cautions that the goal is not to evaluate speech or to drill out a family's dialect or accent.
"It has nothing to do with dialect," she says. "You can be maximally clear in any dialect."
"Every person who speaks a language has a normal range of being able to speak more clearly or less clearly," she says. "We want to train people to be aware of their own natural ability to change their speech."