Oct 05, 2022

The truth about ‘lie detector’ AI and machine-learning disclosure analysis

Prof Blake Steenhoven studies the ever-changing corporate disclosure space and reveals what concerned IROs can do in response

In most cases, effective communication is intuitive – you know it when you hear it. As an IR professional, you’re likely a natural communicator in your day-to-day life and investors are, after all, just people. But what does an effective message sound like when your audience is a machine?

Investors are increasingly adopting advanced tools that use AI and machine learning to analyze corporate disclosures, searching for hidden meaning in the numbers, the words and even non-verbal behavior. These tools are being used during your earnings calls, roadshow presentations and even private phone calls with investors. So how do you communicate effectively in this new disclosure landscape?

In this article, I’ll provide insights from academic literature to explain what some of these tools are, why investors are using them and what IR professionals can do about it.

Much of the content of corporate disclosure is also available in the financial statements, which suggests something most IR professionals know: how something is said can be as important as what is said. And investors looking for an edge over their competitors are using a variety of methods to analyze the words and vocal cues in disclosure.

Textual analysis

Methods of analyzing the language in a disclosure range from simple word counts to complex probabilistic modeling. Word counts can be used to measure the frequency of specific words, like ‘revenue’ or ‘risk’, as well as categories of words, like personal pronouns. Purpose-built dictionaries can also be used to measure characteristics like tone, comparing the number of positive and negative words in a disclosure.

While these approaches are simple to use, a limitation is that they can fail to capture the context of the disclosure. For example, consider the simple sentence: ‘Bad debt expense decreased.’ Although this clearly conveys positive information, a simple word count approach would likely classify each of the words as negative, resulting in a tone measure that doesn’t capture the meaning of the sentence.
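To make that limitation concrete, here is a minimal sketch of a dictionary-based tone measure. The word lists are tiny, made-up stand-ins for the much larger finance dictionaries used in practice:

```python
# Toy tone measure: net positive-word share, using small hypothetical
# word lists (real studies use finance-specific dictionaries with
# thousands of entries).
NEGATIVE = {"bad", "debt", "expense", "decreased", "risk", "loss"}
POSITIVE = {"growth", "increased", "profit", "strong"}

def tone(text):
    words = text.lower().replace(".", "").split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

# 'Bad debt expense decreased' is good news, yet every word in it is
# on the negative list, so the measure scores it as maximally negative.
print(tone("Bad debt expense decreased"))  # prints -1.0
```

The sentence-level meaning (a negative item shrinking is positive) is exactly what a bag-of-words count cannot see.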

More sophisticated approaches try to overcome these issues using tools like machine learning. These methods typically begin by collecting a large sample of historical data, such as the transcripts from a company’s previous earnings calls.

This data is used to ‘train’ a model to look for relationships between groups of words and stock movements. The model can then be used in real time to predict stock prices using the words in new disclosures.

Although machine-learning approaches have some advantages over simpler methods, they also have limitations. Machine learning involves complex computations and algorithmic analysis, which can be costly to implement. The complexity of these approaches can also increase the risk of user error. When combined with high-frequency trading strategies, the speed and efficiency of the analysis means these failures can be catastrophic.

And because they’re based purely on historical data, algorithms can fail when novel circumstances arise. Even in the best-case scenario, the complexity of statistical models can make it difficult or impossible for investors to know what the model is reacting to, unlike simpler metrics or qualitative analysis.

Acoustic analysis

In addition to the words you use, investors are also listening to vocal characteristics like pitch, volume and how quickly you talk. These cues can be informative about a speaker’s personality, emotional state and intentions, and research has shown that they can predict financial performance, stock returns and even CEO career outcomes.

Non-verbal communication has always mattered, but free, open-source tools like Praat and programming languages like Python have made it possible to quantify these cues with greater precision and speed than ever before. While it’s clear that there’s useful information in vocal cues, it’s important to note that this area is still new and investors are still learning how to use this data.
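As a rough illustration of how a vocal cue gets quantified, here is a simplified pitch estimator using only NumPy. It recovers the fundamental frequency of a synthetic tone by autocorrelation; this is a bare-bones version of the kind of analysis tools like Praat perform on real speech:

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) by autocorrelation."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]      # keep non-negative lags only
    lo = int(sample_rate / fmax)      # shortest plausible pitch period
    hi = int(sample_rate / fmin)      # longest plausible pitch period
    best_lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / best_lag

# Synthetic 'voice': a 120 Hz tone, roughly a typical male speaking pitch.
sr = 16000
t = np.arange(4000) / sr              # 0.25 seconds of samples
voice = np.sin(2 * np.pi * 120 * t)
print(round(estimate_pitch(voice, sr)))  # prints 120
```

Real speech is far messier than a pure tone, which is one reason practitioners track pitch alongside volume, speech rate and other cues rather than in isolation.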

Although there’s some evidence that individual cues like vocal pitch are associated with stock price movements, the general consensus is that these cues need to be looked at in combination with each other.

With little guidance on how to combine these measures, investors may rely on a purely statistical machine-learning approach or off-the-shelf software. These approaches are costlier but the larger issue, particularly for proprietary software, is that they’re opaque and difficult for users to understand. Without being able to see what’s happening under the hood, investors are left to rely on the developers’ claims.

For example, many voice analysis tools are marketed as effective lie detectors, and claim they can determine whether a chief executive is being deceptive. While this idea might sound appealing to investors, these claims should be treated with skepticism; the proprietary nature of these tools prevents independent verification, and the evidence overwhelmingly shows that vocal cues cannot reliably detect deception.

Even so, research consistently shows that stock prices and KPIs are associated with managers’ vocal characteristics, and investors are likely to continue analyzing and trading on these cues.

What we can do about it

The disclosure landscape is changing and a common concern I hear from IR professionals is not knowing how to respond. How do you communicate effectively when even subtle changes in vocal pitch can move stock prices? Before you cancel your earnings calls, here are a few recommendations.

  1. Don’t try to beat the algorithms: Although it’s important to be aware of how investors are analyzing your disclosures, you don’t need to become an expert in machine learning. In many cases, even the investors using these algorithms don’t know what’s happening under the hood, so it’s not worth trying to guess. Further, machine learning involves learning, so algorithms adapt over time. By the time you figure out what one was doing last quarter, it has likely already changed.
     
  2. Make your writing clear and concise: When algorithms process your disclosures, the goal is to extract additional information to gain an advantage over other investors. One of the best ways to narrow this advantage is to make your communications easy for core investors to understand, using plain English and simple formatting.
     
  3. Tailor non-verbal communication to your message: A common assumption in acoustic analysis is that speakers aren’t in control of their vocal cues, which act as ‘tells’. But that’s not entirely true. Effective speakers use their vocal pitch, volume and speech rate to communicate, clarify and persuade. When preparing to speak with investors, therefore, consider the specific audience and message: keeping these in mind can help you tailor your vocal cues to your goals.
     
  4. Practice (but not too much): Like any other skill, effective communication is something that can be learned and improved. Rehearsal and presentation coaching can help identify opportunities for improvement in your delivery. That said, recognize that everyone has his or her own style, and there are a lot of ways to communicate effectively. Although everyone can improve, don’t try to change too much – if your delivery doesn’t feel authentic to you, it won’t sound authentic to your audience.
     
  5. Ask for help: While new technology is changing how investors analyze your disclosures, the same technology is being developed to help companies improve their communication. Collaborating with experts and researchers in this area can help IR professionals maintain their edge and communicate effectively in this changing disclosure environment.

Blake Steenhoven is an assistant professor of accounting at the Smith School of Business at Queen’s University, Canada

This article originally appeared in the Fall 2022 issue of IR Magazine.
