Apple has unveiled a new feature that lets the iPhone and iPad digitally recreate the user’s voice.
According to Sky News, this Personal Voice feature, which is expected to be part of iOS 17, works in conjunction with the Live Speech feature, allowing users to record their voice and use it in phone calls or on platforms such as FaceTime to communicate with others.
To capture their voice, users read a randomized set of text prompts aloud for about 15 minutes on an iPhone or iPad. The Live Speech feature then lets them type messages on the device to be read aloud.
Users can also save frequently used phrases as shortcuts.
Phrases are read in the user’s own voice if a Personal Voice model has been created in the system; otherwise, everything is read, as before, by the voice of the device’s digital assistant, Siri.
This new feature is especially useful for people with conditions such as ALS (amyotrophic lateral sclerosis), who may lose the ability to speak as the disease progresses.
Another feature, called Point and Speak, will also be available: users can hold a finger in front of the camera, pointing at something such as the buttons on a microwave, and the device will read aloud the text in the indicated area. This feature only works on Apple devices with a LiDAR sensor, i.e. the more expensive iPhone and iPad models.
The news was announced on the eve of Apple’s Worldwide Developers Conference (WWDC) on June 5, at which the company is expected to unveil its first virtual reality headset.