Virginia Congresswoman Jennifer Wexton used an artificial intelligence (AI) programme to address the House on Thursday. A year ago, the lawmaker was diagnosed with progressive supranuclear palsy, which makes it difficult for her to speak.
The AI programme allowed Wexton to make a clone of her speaking voice using old recordings of appearances and speeches she made in Congress. Wexton appears to be the first person to speak on the House floor with a voice recreated by AI.
At least Wexton supplied some of the data to make it all work.
I wonder where the data used to develop the program came from.
Can AI be developed ethically? Or do the datasets have to be so large that the job requires pilfered data?
TTS voice models have been around for a while now and don’t require much more than a 5-second sample of the target voice. Tortoise TTS, among many, many others, for example.