Apple researching voice accessibility features for Siri, like stutter detection

Apple is working on making Siri and its other voice recognition technologies more accessible to users with atypical speech patterns.

For example, the company is researching ways to automatically detect if someone is speaking with a stutter, according to a report in The Wall Street Journal.

To that end, the company has amassed nearly 28,000 audio clips from podcasts featuring people speaking with a stutter. That data was published in an Apple research paper this week (PDF link), The Wall Street Journal added.
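Neither the report nor the paper summary here details Apple's modeling approach, but dysfluency detection of this kind is commonly framed as supervised classification over labeled speech clips. The sketch below is a hypothetical illustration of that general framing, not Apple's method: the feature vectors and labels are synthetic stand-ins for a real annotated corpus.

```python
# Hypothetical sketch: framing stutter detection as binary classification
# over per-clip acoustic features. This is NOT Apple's method; the
# features and labels below are synthetic stand-ins for labeled clips.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Pretend each of ~28,000 clips has been reduced to a 40-dim feature
# vector (e.g., MFCC statistics) with a 0/1 "contains dysfluency" label.
n_clips, n_features = 28_000, 40
X = rng.normal(size=(n_clips, n_features))   # placeholder features
y = rng.integers(0, 2, size=n_clips)         # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

In practice, a system like this would also have to distinguish dysfluencies from deliberate pauses so that repetitions or blocks aren't treated as the end of a command.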

Although an Apple spokesperson declined to comment on how the findings will be used, the company does plan to leverage at least some of the data to improve its voice recognition systems.

In the interim, Apple noted that its Hold to Talk feature, introduced in 2015, lets users control how long Siri listens. That helps prevent the assistant from interrupting users or timing out before a command is fully spoken.

Although the report doesn't mention it, Siri can also be activated and controlled using the Type to Siri feature on macOS and iOS.

Training for atypical speech patterns is just one area of Apple's research into improving Siri. The company is also developing systems that could help secure a device by locking it to a user's unique voice patterns.

The Wall Street Journal report also covers how other technology companies, like Amazon and Google, are training their digital assistants to understand users who may have trouble issuing voice commands.

In December, Amazon launched a fund that lets users with atypical speech patterns train algorithms to recognize their unique voices. Google is also collecting atypical speech data for use in Google Assistant.

