Apple has found a way to put AI to good use, introducing a feature that lets people who are losing the ability to speak have their iPhones speak typed messages aloud in their own voice.
The feature works in two parts. The first is Live Speech, which lets users type out whatever they want to say and have it spoken aloud to the other party during phone calls and FaceTime. Users can also save commonly used words and phrases to reuse across conversations. For a more personal experience, there is the Personal Voice tool, which has the spoken messages delivered in the iPhone owner’s own voice. Activating Personal Voice requires about 15 minutes of training: the phone presents a series of text prompts that the user reads aloud, and AI uses those recordings to create a synthetic version of the user’s voice, which is then made available within Live Speech.
These new features aim to bring a deeper level of inclusivity to each device. In particular, Apple wants to make life easier for people with conditions such as ALS (amyotrophic lateral sclerosis), which can gradually take away the ability to speak.
“Today, we’re excited to share incredible new features that build on our long history of making technology accessible so that everyone has the opportunity to create, communicate, and do what they love,” Apple CEO Tim Cook stated in a press release on Tuesday.
The user’s voice is saved directly to their device rather than to the cloud, giving customers some peace of mind that their voice data is less likely to end up in the hands of hackers. The new features will be housed in the Accessibility menu and are set to roll out later this year.