The actor behind the voice of Star Wars villain Darth Vader has granted Disney permission to recreate his voice artificially for future productions of the franchise.
James Earl Jones's famous, ominous tones are likely to be replicated by Respeecher, an AI voice-cloning tool that trains speech-to-speech machine-learning models on licensed archival recordings of actors.
Jones first voiced Darth Vader in the original 1977 Star Wars film and went on to reprise the role in The Empire Strikes Back and Return of the Jedi. He continued voicing the iconic character in later films, including Rogue One, the first instalment of the Star Wars anthology series, and Star Wars: The Rise of Skywalker, the third instalment of the sequel trilogy.
Now, however, Star Wars supervising sound editor Matthew Wood has told Vanity Fair that the 91-year-old actor "was looking into winding down this particular character".
The character's voice has already been digitally recreated by the AI for the TV series Obi-Wan Kenobi. In the Vanity Fair story, the company explains how it recreated Jones's voice with its software even as its home country, Ukraine, was being invaded by Russia.
Wood said Jones's family had been very happy with the result, and he described Jones as "a benevolent godfather" for his advice on the performance of the role during production. With the test considered a success, Darth Vader's voice will continue to be produced by the AI company.
This is not the first time a well-known actor's voice has been recreated by a computer. Another notable recent example is Top Gun: Maverick, in which the voice of Val Kilmer (reprising his role as Iceman) was synthesised because of the actor's medical condition.
The Ukrainian startup also worked with Lucasfilm to create the voice of a young Luke Skywalker for the Disney+ series The Book of Boba Fett.