AIA Whitepaper 2025
When identity can be faked: Deepfakes, voice cloning & the legal risks implicating workplaces

As deepfakes and voice cloning become alarmingly convincing, organisations across Asia are confronting a new workplace reality where familiar faces and trusted voices can no longer be taken at face value. In conversation with Sarah Gideon, legal experts and HR leaders weigh in on the rising risks and what companies must do to redesign trust, governance and accountability in the digital age.

Artificial intelligence (AI) is becoming smarter by the day. Once niche, deepfakes and voice cloning have become increasingly accessible, making it easier than ever to create highly convincing synthetic audio, video and digital identities.

Deepfakes – AI-generated audio or video that misrepresents a person’s image or voice – open the door to malicious uses, including spreading misinformation and defrauding victims for financial gain.

In APAC, the threat is real: organisations have fallen prey to fake voice messages from senior leaders, convincing video calls, and emails triggering urgent financial transfers or sensitive disclosures. 

According to Jonathan Isaacs, Asia Pacific Chair of Baker McKenzie’s Employment & Compensation Practice, deepfakes can be used to create manipulated and compromising images of a fellow employee, which could then be distributed to others. Dissemination of a sexually compromising deepfake image of a colleague could amount to sexual harassment under applicable legislation.

"Aside from detrimental impacts on wellbeing and the workplace environment, harassment can also expose employers to liability since they can be vicariously liable for the discriminatory acts of their employees," he shares.

What makes these scams particularly effective is not carelessness, but trust. They exploit familiar voices, recognisable faces and established workplace hierarchies — environments where people are conditioned to act quickly and comply.

As a result, deepfakes and voice cloning are no longer purely technological concerns. They present a growing legal, organisational and human risk, particularly in the workplace, where decisions, approvals and confidential information are increasingly exchanged through digital channels.

With the above in mind, Sarah Gideon speaks to HR leaders and legal experts to find out how they are responding to a reality where seeing and hearing are no longer reliable indicators of truth — and what this shift means for governance, accountability and trust at work.

The legal perspective: Responsibility, evidence and regulatory limits

From a legal standpoint, deepfakes and voice cloning can be highly disruptive.

For Donovan Cheah, Partner and Head of Employment & Dispute Resolution at Donovan & Ho, the challenge is twofold.

“First, it becomes harder to determine responsibility when an employee is misled by a convincing fraudulent instruction.

“The usual assumptions about fault may no longer apply cleanly. When a fraudulent instruction looks and sounds authentic, an employee may be acting reasonably by following it,” he adds.

Second, Cheah highlights that evidence is becoming increasingly unreliable, given that audio and video recordings can now be realistically manipulated.

"This raises doubts about their reliability in investigations and disputes.”

In the workplace, Cheah elaborates, decisions are often made quickly based on trust and hierarchy rather than verification. Deepfakes and voice cloning therefore create grey areas around negligence, internal controls, and what counts as reasonable steps to prevent fraud and deception.

Celeste Ang, Principal, Employment and Dispute Resolution, Baker McKenzie Wong & Leow, adds that these technologies are particularly dangerous because they are built to blend in.

“Deepfakes and voice cloning present notable legal challenges, in part because they are designed to blend into the fast-paced environment where corporate decisions are made.

“Without adequate detection, reporting and monitoring mechanisms, fraudsters may go undetected long enough to extract payments or sensitive information before anything appears amiss.”

Identifying and pursuing perpetrators after a fraud adds another layer of difficulty.

Ang explains that syndicates often operate across borders and may use advanced encryption to hide their identities.

“In Singapore, whilst the Courts have civil jurisdiction to issue orders in appropriate cases against unknown persons who are sufficiently described, enforcing those orders against the unidentified fraudster remains a practical challenge if their whereabouts remain unclear.”

As a result, even when legal rights are clearly violated, companies may struggle to obtain remedies that fully compensate for their losses.

Misconceptions about exposure

Both lawyers highlight that organisations often underestimate their vulnerability.

“One misconception is that it only becomes a serious issue if there is a direct financial loss,” says Cheah.

“Exposure can extend to breaches of confidentiality, data protection obligations, employment duties, and even fiduciary responsibilities.”

For instance, an employee who discloses personal or confidential information in response to a fake instruction may trigger regulatory or contractual consequences, regardless of whether money changed hands.

Ang adds that synthetic identities are more sophisticated than many realise. “Synthetic identity scams manifest in different ways, with some scams using wholly fictitious details whilst other scams combine real and fake elements to create a new identity.”

Fraudsters may combine real personal details – such as names, identification numbers or addresses – with fabricated biometric data, which can limit the effectiveness of manual checks alone. Organisations may also overestimate the coverage their fraud or cybercrime insurance provides, as certain exclusions may apply to losses involving proprietary data, trade secrets, or other intellectual property.

Another misconception is that synthetic identity scams only affect financial institutions.

“A recent September 2025 information paper from the Monetary Authority of Singapore challenges this view, citing examples of deepfake and/or impersonation scam victims, which included a multinational corporation, a startup and even a security awareness tracking firm,” Ang explains.

Are existing laws enough?

Legal frameworks exist but are stretched. Cheah notes that in many jurisdictions, existing fraud, impersonation, and privacy laws can technically apply to deepfakes and voice cloning, though these concepts were not designed with AI-generated identities in mind.

In Malaysia, for example, misconduct involving deepfakes may already fall under existing frameworks such as cheating offences under the Penal Code or unauthorised use or disclosure of personal data under the Personal Data Protection Act 2010 (PDPA).

While entirely new legislation may not be required, he suggests that "refinement and clearer regulatory guidance would be helpful”, as courts are increasingly likely to assess what is “reasonable” based on available technology and known risks.

Ang agrees that legislative evolution is inevitable, pointing out that keeping laws current with rapidly evolving fraud techniques and technologies is an ongoing challenge for any jurisdiction.

She adds that in Singapore, the government has proactively expanded sentencing options for those found guilty of online scams.

Yet, she stresses, laws alone cannot provide complete protection.

Organisations must complement legal frameworks with robust detection, monitoring, and reporting mechanisms, as well as strong company policies, technological investment, and comprehensive employee training — all essential to addressing potential weak points in the defence against sophisticated fraud.

The HR perspective: When credibility becomes synthetic

If the legal risks are structural, the HR implications are cultural.

Teofilus Ponniah, an experienced HR and employee relations leader, takes the view that deepfakes expose a long-standing workplace bias.

"Deepfakes are increasingly forcing organisations to face something that we have quietly tolerated for decades in that we have on many occasions prioritised someone for “looking credible” over someone who does not.”

He notes that appearance has thus stopped being a reliable signal for capability and credibility. Instead, "with the advent of widespread video calls and working remotely where the only view is through the screen, the usage of deepfakes and voice alteration has the potential to be widespread and can cause a whole lot of damage and hurt ranging from managing relations at the workplace to hiring potential employees.”

Rather than tightening rules alone, Ponniah advocates shifting the focus from visibility to performance – towards "what an employee is capable of performing in their role rather than the 'visibility' that has often plagued the workplace."

He adds that deepfakes undermine a foundational workplace assumption – that seeing someone equates to knowing them. Noting that perceptions and biases form within three to five seconds of meeting someone – with rapid categorisation sometimes extending to the social group the person is perceived to belong to – Ponniah believes that humans “unconsciously correlate visual and vocal cues to intelligence and leadership ability.”

In tackling the matter, Ponniah proposes an outcome-first approach, urging organisations to “double down on areas that would better test output and competency and outcome as we slowly move away from appearance.”

This involves evaluating employees on evidence that matters:

  • What work can they produce? 
  • Can they perform in the role that they have been hired for?
  • Will they be able to work effectively with a team?

He points to growing evidence that skills and work samples predict job performance more accurately than appearance or first impressions. With a significant portion of employees working remotely — and even more active on platforms like TikTok or Instagram — regulating deepfakes may not be the most effective solution.

“The opportunity here is to have processes or systems in place that will allow for analytical and deep reasoning as to whether the person is actually displaying real capability.”

From an economic lens, the barrier to entry for leveraging deepfakes has become low, whereas the cost of thwarting such deception has escalated. A more plausible approach, he says, is therefore a risk-adaptive one.

Yet he is equally clear about the darker side of deepfakes. “Using deepfake technology to plaster the faces of colleagues on nude bodies and then distribute and pass off such pictures as being real creates a new form of horror at the workplace,” he highlights.

In such cases, Ponniah insists there should be zero tolerance, with immediate termination policies and safeguards to support victims.

Ultimately, he argues, deepfakes force organisations to confront an uncomfortable truth: workplaces have long rewarded appearance as a proxy for ability, and this must change. Companies now face a choice — introduce policies solely to prevent deepfakes, or redesign work around outcomes.

“An outcome based, risk-adaptive workplace model does not excuse fraud, nor does it alleviate zero-tolerance responses to the most abusive and harmful uses of deepfakes.

“Instead, it draws a sharper line: punishing deception and harm decisively, while stripping appearance of its unwarranted power and elevating demonstrable capability in its place.”

Adding a complementary perspective, Rodney Pereira, Senior HR Director at Medtronic, emphasises that policies and systems are essential to make this outcome-first approach practical.

Referencing Medtronic's approach, he tells us: "Our Code of Conduct and Global Anti-Fraud Policy set clear expectations for ethical behaviour, proactive reporting, and secure internal communications. In addition, we leverage the Medtronic AI Compass framework, which guides our responsible use of artificial intelligence and ensures we are vigilant about emerging digital threats.

“Employees are also provided regular training on recognising and escalating suspicious communications, and internal guidelines reinforce robust verification processes — especially for sensitive requests.

"These combined policies and resources help us foster a culture of vigilance, accountability, and trust, supporting a secure workplace environment for all.”

From policy to practice: A shared response

Legal and HR teams agree: this cannot be solved in silos.

“The most impactful step is for Human Resources, IT, and Legal to jointly redesign how approvals and verification work in practice, not just on paper,” says Cheah.

Ang emphasises the need for a “multi-prong, shared, organisation-wide response framework”, and for ensuring employees are familiar with it.

Pereira shares that at Medtronic, the most impactful step HR, IT, and Legal can take together is “to establish a unified, organisation-wide framework for digital identity verification and incident response.”

He adds that various measures have been put in place to address these risks, such as multi-factor authentication for critical systems, regular phishing simulations, and scenario-based training exercises that include deepfake awareness.

Cross-functional collaboration is further supported by “clear protocols for authenticating requests and reporting suspicious activity, as well as ongoing policy reviews to align with both technological advancements and evolving regulations.”

In Asia, where digital adoption and threat landscapes are rapidly changing, Pereira emphasises that this collaborative and proactive approach is imperative to maintain trust, governance, and accountability across all levels of the organisation.

Trust, redesigned

Caricature generators and AI-enhanced personas may seem like entertainment, but they highlight a deeper shift: trust is no longer earned solely through visibility. In the era of synthetic media, verification — not mere compliance — becomes the foundation of governance.

In many ways, the deepfake era forces organisations to confront a paradox: trust remains essential, but it must now be structured.

As synthetic media becomes more sophisticated, the future of workplace governance may become less dependent on how quickly employees comply — and more on how confidently they verify.

And perhaps more fundamentally, workplaces may need to move beyond rewarding appearance — and toward measuring what truly matters: capability, accountability and outcome.

