The multibillion-dollar collapse of FTX – the high-profile cryptocurrency exchange whose founder is now awaiting trial on fraud charges – serves as a stark reminder of the perils of deception in the financial world.
FTX founder Sam Bankman-Fried’s lies date back to the company’s earliest days, prosecutors say. As part of the scheme, they allege, he lied to both customers and investors in what U.S. Attorney Damian Williams called “one of the largest financial frauds in American history.”
How could so many people apparently have been fooled?
A new study published in the Strategic Management Journal sheds some light on this question. In it, my colleagues and I found that even experienced financial analysts fall for CEOs’ lies – and that the most respected analysts may be the most gullible.
Financial analysts provide expert advice to help companies and investors make money. They predict how much a company will earn and recommend whether to buy or sell its stock. By steering money toward good investments, they help grow not just individual businesses but the economy as a whole.
Although financial analysts are paid for their advice, they aren’t oracles. As a professor of management, I wondered how often these professionals get duped by lying executives – so my colleagues and I used machine learning to find out. We developed an algorithm, trained on transcripts of S&P 1500 earnings calls from 2008 to 2016, that can reliably detect deception in 84% of cases. Specifically, the algorithm identifies distinctive linguistic patterns that occur when a person lies.
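Our paper doesn’t ship code, but the general shape of such a model is easy to sketch. The snippet below is a minimal, hypothetical illustration in Python – a simple bag-of-phrases classifier, not our actual algorithm – and the toy transcripts and labels in it are invented purely for the example.

```python
# Minimal sketch of a transcript-based deception classifier.
# Hypothetical: the actual study used richer linguistic features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data; the study used S&P 1500 earnings call
# transcripts labeled by whether misreporting later came to light.
texts = [
    "we delivered solid organic growth across all segments",
    "as everyone knows, the team is aware these numbers are fantastic",
]
labels = [0, 1]  # 1 = call later associated with misreporting

# Word and phrase frequencies stand in here for the deception cues
# (hedging, distancing language, extreme positivity) that linguistic
# research typically relies on.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score a new call: how closely its language resembles deceptive calls.
print(model.predict_proba(["our results speak for themselves"])[0, 1])
```

With two made-up examples, this obviously learns nothing meaningful; the point is only the pipeline – text goes in, a probability that the language resembles deceptive speech comes out.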
Our results were striking. We found that analysts were significantly more likely to issue “buy” or “strong buy” recommendations after listening to deceptive CEOs – by an average of nearly 28 percentage points – than after listening to their more honest counterparts.
We also found that highly regarded analysts fell for CEO lies more often than their lesser-known peers. In fact, analysts named “all-stars” by industry publisher Institutional Investor were 5.3 percentage points more likely than their lesser-known counterparts to upgrade habitually dishonest CEOs.
While we applied this technology to gain insight into this corner of finance for academic research, its broader application raises a number of difficult ethical questions about using artificial intelligence to measure psychological constructs.
Prone to trust
These findings may seem counterintuitive: Why would skilled financial analysts consistently fall for lying executives? And why would the most reputable analysts appear to perform the worst?
These findings reflect the natural human tendency to assume that others are honest – a phenomenon known as “truth bias.” This habit makes analysts just as susceptible to lies as anyone else.
Moreover, we found that greater status promotes greater truth bias. First, “celebrity” analysts often become overconfident and entitled as their prestige grows. They come to believe they’re unlikely to be deceived, which leads them to take CEOs at their word. Second, these analysts tend to have closer relationships with CEOs, and research shows that closer relationships increase truth bias. This makes them even more vulnerable to deception.
Given these vulnerabilities, companies may want to reassess the credibility of “all-star” designations. Our study also underscores the importance of accountability in corporate governance and the need for strong institutional safeguards to counter individual biases.
AI “lie detector”?
The tool we developed for this study could have applications well beyond the business world. We validated the algorithm using fake transcripts, retracted medical journal articles, and deceptive YouTube videos, and it could easily be applied in other contexts.
It’s important to note that this tool doesn’t directly measure deception; it identifies linguistic patterns associated with lying. That means that even though it’s highly accurate, it’s susceptible to both false positives and false negatives – and false accusations of dishonesty in particular could have devastating consequences.
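To see why false positives matter so much, consider the base rates involved. The arithmetic below is purely illustrative – the 5% share of deceptive statements is an assumption, not a figure from our study – and it treats the 84% accuracy as applying equally to honest and dishonest speech.

```python
# Illustrative base-rate arithmetic; all numbers are assumptions.
base_rate = 0.05    # assumed share of statements that are lies
sensitivity = 0.84  # lies correctly flagged (assumed)
specificity = 0.84  # honest statements correctly cleared (assumed)

true_flags = base_rate * sensitivity                # 0.042
false_flags = (1 - base_rate) * (1 - specificity)   # 0.152

# Chance that a flagged statement is actually a lie.
precision = true_flags / (true_flags + false_flags)
print(f"{precision:.0%} of flags would be real lies")  # about 22%
```

Under those assumptions, nearly four out of five flags would land on honest speakers – exactly the kind of false accusation that could do devastating harm.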
Moreover, tools like this struggle to distinguish socially beneficial “white lies” – which foster a sense of community and emotional well-being – from more serious ones. Indiscriminately flagging every deception could disrupt complex social dynamics and lead to unintended consequences.
These issues should be addressed before this kind of technology is widely adopted. But that future may be closer than many imagine: companies in fields such as investing, security and insurance are already beginning to use it.
Many questions remain
Widespread use of artificial intelligence to detect lies would have profound social implications – most notably, making it harder for the powerful to lie without consequence.
That may sound like an unambiguously good thing. But while this technology offers undeniable advantages, such as early detection of threats or fraud, it could also usher in a perilous culture of transparency. In such a world, thoughts and emotions could become subject to measurement and judgment, eroding the sanctuary of mental privacy.
This study also raises ethical questions about using artificial intelligence to measure psychological characteristics, particularly where privacy and consent are concerned. Unlike traditional deception research, which relies on human subjects who consent to being studied, this AI model operates covertly, detecting nuanced linguistic patterns without a speaker’s knowledge.
The implications are staggering. In this study, for example, we developed a second machine learning model to gauge the level of suspicion in a speaker’s tone. Imagine a world where social scientists could build tools to assess any facet of your psychology and apply them without your consent. Not very appealing, right?
As we enter a new era of artificial intelligence, advanced psychometric tools offer both promise and peril. These technologies could revolutionize business by providing unprecedented insight into human psychology. But they could also violate people’s rights and destabilize society in surprising and disturbing ways. The decisions we make today – about ethics, oversight and responsible use – will set the course for years to come.