
Fraud, Fakes & AI: The New Corporate Battleground

September 20, 2024 • ARTICLE BY Douglas S. Pasternak

The firm had an illustrious history and a sophisticated executive team. But its profits made it a prime target and, earlier this year, the victim of a new-age crime worth some attention.

The company, Arup, based in the United Kingdom, was founded nearly 80 years ago. Its more than 18,000 employees across 140 countries design buildings, bridges, electrical facilities, and water treatment plants, among other major projects, including the Sydney Opera House, the Oresund Bridge connecting Sweden and Denmark, and the Bird’s Nest stadium built for Beijing’s 2008 Olympics. It generated revenue of more than $2.7 billion in 2023, with nearly $70 million in profits.

Earlier this year, however, one of its Hong Kong employees was duped by fraudsters who used the newest tools in Artificial Intelligence (AI) to create deepfake video impersonating the company’s Chief Financial Officer (CFO) and other corporate employees, orchestrating the digital transfer of more than $25 million to cyber thieves.

According to press reports, the Arup employee first received an email, supposedly from the company’s UK headquarters, about quietly setting up the financial transactions. To his credit, the employee suspected a phishing attempt. The thieves assuaged those fears, however, by arranging an AI-generated deepfake video conference call between the Hong Kong-based employee and what appeared to be the company’s CFO and other senior corporate staff known to him.

The conversation and images on the video call looked authentic to the worker in the company’s Hong Kong finance department. In fact, he was the only real person on the call; the others were AI-generated likenesses of actual colleagues, all part of an elaborate scam.

At the direction of the AI-generated CFO, the worker then made fifteen separate transactions, wiring a total of $25 million to five distinct Hong Kong bank accounts, according to news reports. Only after the final transfer did he inquire about the transactions with the company’s headquarters in the United Kingdom and realize he had made an expensive mistake.

Hong Kong police told reporters that at least 20 similar AI-deepfake incidents have been reported to them, resulting in 90 fraudulent loan applications and 54 bank account registrations between July and September 2023. Six people have reportedly been arrested in connection with those incidents, though it is unclear whether they were also tied to the Arup case.

In May 2024, Warren Buffett warned that AI-related scams may become the “growth industry of all time.” Fast Company and CNBC reported on Buffett’s comments, and a video link to his remarks showed him relaying an incident that happened to him: he was sent an AI-generated likeness of himself, wearing his clothes and replicating his own voice, and he joked that the representation was so good it could have fooled him into wiring money to himself.

Although there is growing investment in AI-powered tools designed to identify and prevent fraud, sophisticated AI-related scams are occurring with increasing frequency. Corporate fraud is an old threat, but artificial intelligence is creating new and evolving challenges, making it harder to keep AI-enabled scams from deceiving a wide range of employees and corporate executives alike.

Like so many technologies, AI has become a double-edged sword, requiring corporations to take extra care in vetting transactions and partners. On the positive side, it has ushered in an era of great promise in medical research, national security, science, economic efficiency, education, and other fields. In February 2023, for instance, Mark Read, Chief Executive Officer (CEO) of WPP, the world’s largest marketing company, told The Guardian that the company had been investing in AI-generated advertising for several years, and that this work was helping WPP “win new business” and was “fundamental” to the company’s future success.

However, AI has also unleashed serious new perils in the corporate and financial arenas. One year after Read’s comments, YouTube footage and a voice clone of Read were used in a Microsoft Teams meeting with another senior WPP executive in a failed attempt to solicit money and personal information by urging him to set up a new business. “Fortunately the attackers were not successful,” Read wrote in an email divulging the scam. “We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes,” he wrote.

Last year, the World Economic Forum noted the increasing use of AI in cybercrimes, especially deepfakes. It said that in 2022/2023, “26% of smaller and 38% of large companies experienced deepfake fraud resulting in losses of up to $480,000.” Indeed, the number of AI-fraud cases continues to expand worldwide, as indicated by the chart below:

[Chart: worldwide growth in AI-fraud cases]

A recent study of 2,000 American adults found that nearly half (48%) believed the increased use of AI in fraudulent activities made them less likely to detect these scams, and more than two-thirds believed AI has had a large impact on financial scams. A separate survey conducted earlier this year by the consulting firm KPMG found that 95% of Canadian businesses that had previously experienced fraud expressed grave concerns about the growing use of AI in fraud targeting their companies.

These are not idle fears.

  • In June 2024, the Digital Trust Index, which looks at AI-related fraud, found that “76% of fraud and risk professionals believe their business has been targeted by AI fraud, with over half reporting this type of fraud happening daily or weekly.”
  • In its July/August 2024 issue, Fraud Magazine featured an article on the increasing use of deepfake technology to commit financial crimes. In June, The Banker published a similar article focused on the use of deepfakes against banks.
  • In May 2024, the Journal of Accountancy warned Certified Public Accountants (CPAs) about emerging AI threats. “Resourceful fraudsters can now use AI to create convincingly realistic documents and data such as invoices, contracts, reports, spreadsheets, and bank statements to support a fraud scheme,” the article said.
  • In March 2024, INTERPOL released a global financial fraud assessment highlighting that fraudulent criminal acts are now being conducted by organized criminal networks boosted by technology, including AI. One industry website has claimed that “the average cost of creating a deepfake is $1.33” while “the expected global cost of deepfake fraud in 2024 is $1 trillion.”
  • Last year, the financial auditing and consulting firm Deloitte said the first verified AI-related fraud case occurred in 2019 in the United Kingdom and resulted in payments of nearly $250,000 to a fictitious company in Hungary. Since then, a global survey has indicated that 37% of organizations have experienced some type of AI fraud involving deepfake voice-cloning technology.
  • The financial firm Vanguard also reported last year that a government official in Fuzhou, China, fell victim to AI-related fraud when he mistakenly made a payment of around $600,000 to an individual whom he believed to be a close friend but who was in reality an AI-generated imposter.

Last year, the Federal Trade Commission (FTC) issued an alert regarding the malicious use of voice-cloning technology to engage in illegal acts, and the Consumer Financial Protection Bureau (CFPB) has warned consumers about AI-enabled fraud that relies upon audio, video, and images to target individuals. The Financial Industry Regulatory Authority (FINRA) and the Securities and Exchange Commission (SEC) have also issued warnings about the growing use of AI-enabled fraudulent schemes.

Law enforcement officials have taken note: earlier this year, Deputy Attorney General Lisa Monaco delivered a speech to the American Bar Association’s National Institute on White Collar Crime and said, “Where AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences — for individual and corporate defendants alike.”

Regardless of the government’s actions on these various fronts, however, the best defense against AI-fueled fraud schemes is a heightened effort to verify and validate the credibility and authenticity of the companies and individuals that corporations partner with in the regular course of business, as well as their data.

Some protective steps can be taken internally, but others will likely require external assistance. In any conversation involving transactions or the transfer of sensitive information, employees should:

  • Verify the authenticity of anyone you are communicating with by phone, computer, or video conference. When in doubt, pause, step back, hang up, and initiate a new call to the known phone number of the individual who reached out.
  • Validate the accuracy of the information you are given in business transactions, particularly with new corporate partners or new senior hires, and validate the source of key information, including business, financial, and other records. Evolving Know Your Customer (KYC) protocols can help verify and validate corporate clients and collaborators through both technical solutions and basic investigative steps.
  • Authenticate the parameters of a transaction before hitting send. Use low-tech methods, such as phone calls, or reputable vendors to verify that the destination of any funds is legitimate; a simple sketch of one such callback control appears after this list.
  • Inform your colleagues and business partners of the known and growing threats posed by AI fraud. Most organizations now hold annual IT security training to remind employees of the vast range of cybersecurity threats that may be used against their business. Adding the evolving nature of AI fraud to that training can go a long way toward shielding businesses from these new threats.
  • Combat AI fraud with the emerging AI tools being developed to detect it. Many of these tools are in their early stages of development, however, and will need to evolve as the sophistication of the threat advances.
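
To make the verification and callback steps concrete, here is a minimal sketch in Python of how a payment-release control might enforce an out-of-band callback rule before a large wire goes out. It is an illustration only: the threshold, the directory-of-record lookup, and names such as release_wire and WireRequest are hypothetical, and any real control should be built into your payment platform in consultation with your security team.

```python
# Hypothetical sketch: hold large wires until an out-of-band callback is done.
# All names, thresholds, and data here are illustrative, not a real system.
from dataclasses import dataclass

CALLBACK_THRESHOLD_USD = 10_000  # example policy: verify anything above this

# Directory of record: contact numbers maintained independently of any
# email, chat, or video request (never taken from the request itself).
DIRECTORY_OF_RECORD = {
    "cfo@example.com": "+1-555-0100",
}

@dataclass
class WireRequest:
    requester_email: str
    beneficiary_account: str
    amount_usd: float
    callback_confirmed: bool = False  # set True only after a live call
                                      # to the number on file

def release_wire(request: WireRequest) -> bool:
    """Return True only if the request passes the callback control."""
    if request.amount_usd < CALLBACK_THRESHOLD_USD:
        return True  # below threshold: normal processing

    number_on_file = DIRECTORY_OF_RECORD.get(request.requester_email)
    if number_on_file is None:
        # Unknown requester: never wire funds on the strength of an email
        # or video call alone, however authentic it looks.
        return False

    # The human step: the employee calls number_on_file (never a number
    # supplied in the request) and confirms the amount and beneficiary
    # before callback_confirmed is set.
    return request.callback_confirmed

# A request resembling the Arup scenario is held until the employee
# independently calls the CFO's number on file.
req = WireRequest("cfo@example.com", "HK-ACCT-001", 5_000_000)
assert release_wire(req) is False  # blocked until the callback happens
```

The point of the design is that the confirmation channel is chosen by the recipient of the request, not the requester: a deepfake video call cannot supply the number on file, so a scam like the one in the Arup case would have to defeat the callback step as well.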

Experts can also help. Thorough, pre-transaction investigations can ward off costly mistakes and unintentional missteps. AI-related fraud has made combating fraudulent criminal endeavors more challenging, and taking proactive, diligent steps in professional interactions to vet the legitimacy and authenticity of your clients, partners, and collaborators is more important than it has ever been.

Douglas Pasternak is the Senior Director for Investigations at RosettiStarr LLC, a corporate research and investigations firm in Bethesda, Md. He is a former award-winning investigative reporter for U.S. News & World Report and NBC Nightly News and led investigations and oversight matters on three separate Congressional committees for more than 16 years.

