Will iPhones be able to mimic your voice for conversations?

It was reported today that, later this year, Apple will roll out new accessibility features for iPhone and iPad users: Assistive Access, designed to help people with cognitive disabilities use their devices more easily and independently, along with voice tools that let nonspeaking people communicate in a synthesized version of their own voice, whether in apps or on voice calls.

The new features will enable nonspeaking individuals to type to speak during calls and conversations using Live Speech, while people at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for communicating with family and friends.

For users who are blind or have low vision, the Magnifier app's detection mode will offer Point and Speak, which identifies text that users point to and reads it aloud to help them interact with physical objects such as home appliances, Apple announced on Tuesday.

iPhones and iPads will learn the user's voice after about 15 minutes of on-device training. Live Speech will then use the synthesized voice to read the user's typed text aloud during phone calls, FaceTime conversations, and even in-person conversations. Users will also be able to save commonly used phrases for quick access during live conversations.
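Apple has not published how Live Speech is built, but the basic type-to-speak flow can be sketched with the public AVSpeechSynthesizer API from Apple's AVFoundation framework; the TypeToSpeak class and the sample phrases below are illustrative assumptions, not Apple's code.

```swift
import AVFoundation

// Illustrative sketch of type-to-speak, in the spirit of Live Speech.
// This is not Apple's implementation; it only shows how typed text can
// be voiced with the long-standing AVSpeechSynthesizer API.
final class TypeToSpeak {
    // Keep the synthesizer alive so speech isn't cut off mid-utterance.
    private let synthesizer = AVSpeechSynthesizer()

    // Read a line of typed text aloud.
    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}

// Saved phrases, like the ones Apple describes for live conversations.
let speaker = TypeToSpeak()
let savedPhrases = ["I'm on my way.", "Could you repeat that, please?"]
speaker.speak(savedPhrases[0])
```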

This feature is one of several aimed at making Apple devices more inclusive for people with cognitive, vision, hearing, and mobility disabilities. Apple said that people with conditions that can cause them to lose their voice over time, such as ALS (amyotrophic lateral sclerosis), stand to benefit most from the tools.

“Accessibility is part of everything we do at Apple,” Sarah Herrlinger, senior director of Global Accessibility Policy and Initiatives at Apple, said in a post on the company’s blog. “These features are designed with feedback from members of the disability community every step of the way, to support a diverse range of users and help people connect in new ways.”

The new features are scheduled to roll out later in 2023.

While these tools have the potential to fill a real need, they also come at a time when advances in artificial intelligence have raised alarms about bad actors using convincing fake audio and video — known as “deepfakes” — to defraud or mislead the public.

In the blog post, Apple said that the Personal Voice feature uses “on-device machine learning to keep users’ information private and secure.”

Other tech companies have experimented with using artificial intelligence to replicate voice. Last year, Amazon said it was working on an update to its Alexa system that would allow the technology to mimic any voice, even a deceased family member. (The feature has not yet been launched.)

In addition to the voice features, Apple announced Assistive Access, which distills some of its most popular iOS apps, such as FaceTime, Messages, Camera, Photos, Music, and Phone, into a streamlined interface, combining Phone and FaceTime into a single Calls app.

Apple is also updating its Magnifier app for blind and low-vision users with a detection mode to help people better interact with physical objects. The update will allow someone, for example, to point the iPhone’s camera at a microwave keypad and move a finger across it while the app labels and announces the text on the microwave’s buttons.
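Apple has not said how Point and Speak is implemented (the shipping feature also involves the LiDAR scanner and finger tracking), but the core step of recognizing text in a camera frame and reading it aloud can be sketched with the public Vision and AVFoundation frameworks; the announceText function below is an illustrative assumption, not Apple's code.

```swift
import Vision
import AVFoundation

// Keep a single synthesizer alive so speech isn't cut off when the
// function returns.
let synthesizer = AVSpeechSynthesizer()

// Illustrative sketch of the recognize-and-announce core of Point and
// Speak: run on-device text recognition on a frame, then speak the result.
func announceText(in frame: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Take the best candidate string from each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !lines.isEmpty else { return }
        synthesizer.speak(AVSpeechUtterance(string: lines.joined(separator: ", ")))
    }
    request.recognitionLevel = .accurate // favor accuracy over speed for small labels
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```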