Apr 29, 2020

The IR threat of deep-fake videos

Threat assessment and co-ordinated response plans should be in place, says digital disinformation expert

Disinformation attacks could pose a growing threat that investors and corporations may not be prepared to handle, a leading consultant tells IR Magazine’s podcast The Ticker. 

Antonio Ortolani, a global media analytics and measurement consultant at the Brunswick Group, says 2020 may be the year of the deep-fake and recommends that issuers have a response plan in place. 

Deep-fakes are videos edited to create the false impression that individuals said or did something. As artificial intelligence and video-editing technology have improved, so too have deep-fakes. Political experts warned last year that deep-fake technology could affect the 2020 US presidential election.

Ortolani says there is potential for doctored videos to emerge showing executives issuing false earnings statements, swearing or appearing in any number of other unsavory situations. He suggests a threat assessment and a co-ordinated response plan are crucial to deterring a digital disinformation attack.

‘What’s your protocol within your organization?’ he asks. ‘Who is taking the lead? Is it the CEO, the general counsel, the IRO? Whose role is it to do what? I think this is something not enough companies have thought about.’ 

Ortolani further cautions that deep-fake videos could easily cross over from politics to business, adding that the news cycle moves so fast that people can move on quickly with a fake story already in their heads. That means a targeted company has about half a day to respond.

When IR comes into play
The potential for such an attack gives IR professionals an even more elevated role. Deep-fake videos are often supported online by artificial social media accounts used to generate conversation and gain visibility. Ortolani suggests IR professionals should monitor spikes in company mentions on social media and investigate whether the increased activity is driven by artificial accounts.
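As a simple illustration of that kind of monitoring, the sketch below flags days where mentions jump well above the trailing average. It is a minimal example only, assuming daily mention counts have already been pulled from a social listening tool; the data and thresholds are hypothetical.

```python
# Minimal sketch of the spike monitoring Ortolani describes.
# Assumes a daily count of company mentions is already available
# (e.g. from a social listening tool); the figures below are illustrative only.
from statistics import mean, stdev

def flag_mention_spikes(daily_counts, threshold=3.0):
    """Return indices of days whose mention count sits more than
    `threshold` standard deviations above the trailing-week average."""
    spikes = []
    for i in range(7, len(daily_counts)):
        window = daily_counts[i - 7:i]              # trailing week as baseline
        baseline, spread = mean(window), stdev(window)
        if spread and (daily_counts[i] - baseline) / spread > threshold:
            spikes.append(i)
    return spikes

# Example: a quiet fortnight followed by a sudden burst of mentions
counts = [120, 135, 110, 125, 140, 118, 130, 122, 128, 115, 133, 126, 119, 900]
print(flag_mention_spikes(counts))  # -> [13]
```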

He also offers corporates some insight on how to differentiate artificial social media accounts from human ones, and when red flags should be raised: ‘If a Twitter account was created a week ago with 20 followers and is spinning out 750 tweets a day, that is a fake account. We have not seen many deep-fakes happening in the corporate space, but the technology is only getting better, so [it’s a matter of when, not if].’
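The red flags he lists translate into a straightforward check on account metadata. The sketch below is only an illustration of that heuristic; the field names and thresholds are assumptions loosely mirroring his example, not a real Twitter/X API schema or a Brunswick methodology.

```python
# Hedged sketch of the red-flag heuristic Ortolani quotes: a very new account
# with few followers posting at an extreme rate. Thresholds and field names
# are assumptions for illustration only.
from datetime import datetime, timedelta, timezone

def looks_artificial(created_at, followers, total_tweets,
                     max_age_days=7, min_followers=50, max_tweets_per_day=500):
    """Return True if the account profile matches the bot-like pattern."""
    age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
    tweets_per_day = total_tweets / age_days
    return (age_days <= max_age_days
            and followers < min_followers
            and tweets_per_day > max_tweets_per_day)

# Example matching the quoted pattern: week-old account, 20 followers,
# roughly 750 tweets a day
week_old = datetime.now(timezone.utc) - timedelta(days=6)
print(looks_artificial(week_old, followers=20, total_tweets=4500))  # -> True
```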

He adds that any disinformation attack against a company should be elevated to the boardroom for discussion. 

Investors’ view on deep-fakes
In late summer 2019, Ortolani helped to conduct a survey on investors’ awareness of deep-fake technology and how digital misinformation affects their decision-making. A surprising outcome of the study is that only 17 percent of respondents recognized what the term means. Given the series of political events since last summer, Ortolani is convinced that if the survey were run today, that number would triple. ‘You really cannot avoid the term these days,’ he explains.

The survey polled US retail investors with at least $250,000 in investments, looking at their level of awareness of the problem. A vast majority of respondents (88 percent) say deep-fakes are a threat, and a similar proportion see the problem getting worse in the years to come.

When asked where they would turn if they doubted a company story, about 70 percent say they go to the company’s media unit or to the CEO’s official Twitter account.

‘Even though [they are] staying on top of the news, they also realize it is pretty easy to be tricked, and it’s becoming harder to know what is fake and what is real,’ concludes Ortolani.

Listen to the full podcast here.
