Synthesized media is a category surging with the latest developments in synthetic voices and text-to-speech technology. It's not new, but it is re-emerging with advances in machine learning algorithms. Synthesized media refers to media generated or manipulated by computers (e.g. text, graphics, and computer animation); it is computer-controlled by definition.
Synthesized media may still sound futuristic, but we are not that far away from it if we look at current examples of synthesized celebrities:
Lilmiquela on Instagram takes photos with influencers and has more than 1 million followers, who don't seem to mind that she isn't real.
Japanese pop star Hatsune Miku appears as a hologram in concert venues. Thousands pay to see her "live".
Other examples of the advances in synthesized media include:
Lyrebird, a company that created a service that listens to your voice for five minutes and can then make you sound like you're saying anything. Almost a year ago, Lyrebird published a video of a synthesized Donald Trump voice on its social media. Other companies offer this service (the banking of your voice) as well, and you can check them out in our text-to-speech services episode.
In another example, researchers at the University of Washington used AI to synthesize a video of President Barack Obama speaking, based on footage from his weekly addresses.
The World Federation of Science Journalists published a tweet earlier in May calling for the development of "robust processes for debunking of synthesized media". The article was based on the synthesized Obama video and warned against the dangers of deepfakes. But it also highlighted an opportunity for the media, and I quote:
The media itself is a simulacrum of reality, in which each selection, edit, highlight, or turn of phrase shapes the audience’s interpretation of events. What’s new here is that media-synthesis algorithms further fracture any expectation of authenticity for recorded media while enabling a whole new scale, pervasiveness, potential for personalization, and ease of use for everyone from comedians to spies. Faked videos could upset and alter people’s formation of accurate memories around events.
What will happen when nothing seen online can be trusted? What will happen while we ride out this authenticity awareness gap?
According to CJR, it's an opportunity for media to ramp up training in media forensics techniques. However, much of this technology is still far from the cheap availability that would make it practical for reporters.
Media forensics is new for me, as it probably is for a lot of you as well. There are some sources to read in the episode notes at voicefirstweekly.com/flashbriefing/82 (yes, this one). It's about following a rigorous process to ensure that what's published is authentic. However, as I was reading, I wondered: there is so much new media being created every day that this process of media forensics by journalists might just not be enough. What if we decide to accept the fact that we will have synthesized media? What measures will we take? Will we become more tribal, with a trusted authority discerning information for the group?
And what if our future news anchors are synthesized videos combined with text-to-speech services? The Hatsune Miku of news.
Thank you for listening. Remember to subscribe, like, comment, and share this episode. My name is Mari, and you can find me on Twitter as voicefirstlabs and on Instagram @voicefirstweekly. Have a great day!