Virginia Congresswoman Jennifer Wexton used an artificial intelligence (AI) programme to address the House on Thursday. A year ago, the lawmaker was diagnosed with progressive supranuclear palsy, which makes it difficult for her to speak.
The AI programme allowed Wexton to make a clone of her speaking voice using old recordings of appearances and speeches she made in Congress. Wexton appears to be the first person to speak on the House floor with a voice recreated by AI.
This is a valid problem to solve with AI. I sure wish the CEOs of all the moron companies jumping on the AI buzzword bandwagon would take note that AI should be used to solve real problems, not just to hitch a ride on the hype train and hope your stock goes up.
That would require executives to be capable of generating actual value rather than burning it for short-term profits.
Is it following her real voice in real time, or is the script prepared in advance?
That’s a good question that none of the articles I found on the web answers. But Ms. Wexton also gave an interview using the device to the AP, which reports:
During the interview at her dining room table in Leesburg, Virginia, the congresswoman typed out her thoughts, used a stylus to move the text around, hit play and then the AI program put that text into Wexton’s voice. It’s a lengthy process, so the AP provided Wexton with a few questions ahead of the interview to give the congresswoman time to type her answers.
Source: A Neurological Disorder Stole Her Voice. Jennifer Wexton Took It Back With AI on the House Floor
At least Wexton supplied some of the data to make it all work.
I wonder where the data to develop the program came from?
Can AI be developed ethically? Or do the datasets have to be so large that the job requires pilfered data?

TTS voice models have been around for a while now and don’t require much more than a five-second sample of the target voice. Tortoise TTS, among many, many others, for example.
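For a sense of how little is involved, here is a minimal voice-cloning sketch using the open-source tortoise-tts package. It assumes you have dropped a few short WAV clips of the target speaker into a voice folder; the voice name "wexton" and the sample text are illustrative, and the calls follow the project’s README-style API, which may differ slightly between versions.

```python
# Minimal voice-cloning sketch with tortoise-tts (github.com/neonbjb/tortoise-tts).
# Assumes a handful of short WAV clips of the target speaker sit in
# tortoise/voices/wexton/ -- the "wexton" voice name is hypothetical.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()  # downloads the pretrained models on first run

# Load the reference clips and compute conditioning latents for the cloned voice.
voice_samples, conditioning_latents = load_voice('wexton')

# Synthesize new speech in that voice from arbitrary text.
gen = tts.tts_with_preset(
    "Thank you, Mr. Speaker.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset='fast',  # trades quality for speed; 'standard' is slower but cleaner
)

# The output is a 24 kHz mono waveform tensor.
torchaudio.save('cloned_voice.wav', gen.squeeze(0).cpu(), 24000)
```

With only a handful of reference clips the result is recognizably the speaker’s voice, which is exactly why the question of where the underlying training data came from matters.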