iOS 17 Will Let You Create a Voice That Sounds Like You in Just 15 Minutes
Apple this week previewed new iPhone, iPad, and Mac accessibility features coming later this year. One feature that has received particular attention is Personal Voice, which will allow those at risk of losing their ability to speak to "create a voice that sounds like them" for communicating with family, friends, and others.
Those with an iPhone, iPad, or newer Mac will be able to create a Personal Voice by reading a randomized set of text prompts aloud until 15 minutes of audio has been recorded on the device. Apple said the feature will be available in English only at launch and will use on-device machine learning to keep users' information private and secure.
Personal Voice will be integrated with another new accessibility feature called Live Speech, which will let iPhone, iPad, and Mac users type what they want to say and have it spoken aloud during phone calls, FaceTime calls, and in-person conversations.
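Apple's preview does not describe developer-facing APIs, but for readers curious how typed text could be spoken in a user's own synthesized voice, here is a rough, illustrative Swift sketch built on AVFoundation's speech synthesis. The Personal Voice authorization call and the `isPersonalVoice` trait shown below assume the iOS 17 SDK; the `TypedSpeech` class and its `speak` method are hypothetical names for this example, not part of the feature Apple announced.

```swift
import AVFoundation

/// Illustrative sketch only: speaks typed text, preferring the user's
/// Personal Voice when one exists and access has been granted.
final class TypedSpeech {
    // Keep a strong reference so playback isn't cut off when speak(_:) returns.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        // Ask permission to use voices the user created in Settings (iOS 17 and later).
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { [weak self] status in
            guard let self else { return }

            // Look for a Personal Voice among installed voices; otherwise
            // fall back to a standard system voice.
            let personalVoice = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }

            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = (status == .authorized ? personalVoice : nil)
                ?? AVSpeechSynthesisVoice(language: "en-US")

            self.synthesizer.speak(utterance)
        }
    }
}
```

In practice, an app would keep a single `TypedSpeech` instance around and call `speak(_:)` with whatever the user types; the system handles rendering the audio with the selected voice entirely on the device.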
Apple said Personal Voice is designed for users at risk of losing their ability to speak, such as those with a recent diagnosis of ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability. Like other accessibility features, however, Personal Voice will be available to all users. The feature will likely be added to the iPhone with iOS 17, which should be unveiled next month and released in September.
“At the end of the day, the most important thing is being able to communicate with friends and family,” said Philip Green, who was diagnosed with ALS in 2018 and is a member of the ALS advocacy organization Team Gleason. “If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world — and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary.”