Your Face Is Now a Business Asset, and It Has No Legal Protection
April 17, 2026

In 30 years of practicing international intellectual property law across 35 jurisdictions, I have witnessed every conceivable permutation of identity fraud, brand impersonation, and reputational attack. Nothing — not counterfeiting, not domain squatting, not even coordinated defamation campaigns — has moved as fast, hit as hard, or exposed institutional investors and high-net-worth individuals to as much asymmetric risk as the rise of synthetic media fraud.


We are no longer in the era of deepfakes as a novelty. We are in the era of deepfakes as infrastructure: commoditized, automated, and deployed at scale against executives, investors, public figures, and corporations. The question facing every sophisticated actor in the global investment ecosystem is not 'will this happen to someone I know?' but 'when will it happen to me, and what legal standing will I have when it does?'


The answer, today, for the vast majority of people — including C-suite executives and board-level investors — is none. This article is my attempt to change that.

The Numbers Are No Longer Theoretical

Let me begin with data, because in the investor community, data commands attention. The scale of synthetic media fraud in 2025–2026 has crossed from alarming into systemic.


These are not hypothetical projections from academic papers. The figures cited throughout this article are being recorded by insurance underwriters, financial regulators, and corporate legal departments — right now.

The Case Files: When Deepfakes Meet Capital

The following incidents represent a cross-section of verified events that have reshaped how I advise clients on identity risk. Each case illustrates a different attack vector; collectively, they define the threat landscape of 2025–2026.

Arup Engineering — Hong Kong, 2024

In what remains the most extensively documented corporate deepfake case to date, a senior finance employee at global engineering firm Arup authorized transfers totaling $25.4 million following a video conference call in which every other participant — including the apparent CFO — was a real-time deepfake. The employee expressed no suspicion during the call. The fraud was discovered only after the wires cleared. Hong Kong police disclosed the case in February 2024, and Arup subsequently confirmed it publicly. No recovery has been reported.

WPP CEO Fraud via Voice Clone — 2024

Mark Read, the CEO of WPP — the world's largest advertising group — was the subject of a deepfake Microsoft Teams call in which fraudsters used AI-generated video of his likeness combined with a cloned voice to solicit money and personal information from a senior WPP executive. The attack failed, but only because the target became suspicious of an unusual request. The sophistication of the execution was confirmed by WPP's own security team.

UK Energy Firm CEO Fraud via AI Voice — 2019 (the benchmark)

This case established the template. A German subsidiary CEO received what he believed to be a phone call from his parent company's chief executive, requesting an urgent transfer of €220,000 to a Hungarian supplier. The voice was AI-generated. The transfer was executed within the hour. The case, reported by Euler Hermes, represents the first documented use of AI voice synthesis in wire fraud and remains the baseline that every subsequent case is measured against.

Multinational Corporation — Singapore, 2025

In a near-exact replication of the Arup methodology, a Singapore-based multinational suffered an $8.9 million loss in early 2025 following a deepfake video conference in which a fraudulent 'senior executive' instructed finance personnel to execute a series of international transfers. Singapore's Commercial Affairs Department confirmed the investigation. The perpetrators remain unidentified.

These four cases share a common architecture: they exploit the trusted visual and acoustic identity of real people. The attacks succeed not because the technology is flawless, but because human beings are evolutionarily predisposed to trust what they see and hear from a person they believe they recognize.

Why Existing Legal Frameworks Fail

Every investor and executive I brief on this subject asks the same question: 'Surely there is existing law that covers this?' There is not. And the gap is structural, not incidental.

Here is a systematic analysis of why every apparent legal avenue collapses under scrutiny:

• The Computer Fraud and Abuse Act (CFAA) — Requires unauthorized access to a computer system. A deepfake built entirely from publicly available images and audio involves no unauthorized access. The CFAA is silent.

• Defamation Law — Requires an identifiable defendant and the ability to establish that a specific false statement was made. Synthetic media distributed through anonymous networks, generated by AI systems without human authorship attribution, breaks both requirements.

• GDPR (EU) — Addresses the processing of personal data. If the images and voice recordings used to construct a deepfake were voluntarily made public — as is the case for virtually every CEO, politician, media figure, and public professional — GDPR provides no basis for a claim.

• Right of Publicity — Where it exists (primarily in US state law), protects against unauthorized commercial use of a person's likeness. Financial fraud via deepfake rarely falls within this definition; neither does reputational destruction through non-commercial synthetic media.

• Section 230 (US) — Platforms hosting deepfake content are almost universally immune from liability under this provision. The content remains; the legal remedy evaporates.

• EU AI Act, Article 50 — Mandates disclosure labeling for AI-generated content. Effective for compliant actors. Entirely ignored by fraudsters. The regulation is designed to govern legitimate industry participants, not criminal operators.

The conviction rate for deepfake financial fraud is, in practice, near zero — despite industry estimates of more than 400 CEO-impersonation attempts per day. The asymmetry between attack capability and legal recourse has never been greater in the history of financial crime.

A common question at this point is whether deepfake detection technology resolves the problem. It does not — and the distinction matters. Deepfake detection identifies synthetic media after it has been created and circulated. It cannot establish prior ownership of a person’s likeness, cannot activate legal enforcement, and cannot shift the burden of proof. Detection is a diagnostic tool. What the investment community requires is a legal instrument: something that transforms identity from a target into protected property before the attack occurs. That is a categorically different problem, and it requires a categorically different solution.

The Composite Copyright Work: A Structural Solution

After three years of legal research conducted across multiple jurisdictions in collaboration with INTEROCO (International Online Copyright Office, Berlin, Germany) and our consortium of 247 IP attorneys, litigators, and regulatory specialists, SANDJAR GROUP has developed and formalized a novel legal architecture that addresses this gap directly.

The mechanism rests on a single foundational insight: the problem with deepfake law is that it treats a person's digital identity as a collection of public data points. Our solution redefines it as private property.

"The critical shift is this: a person's curated digital identity — their face, voice, signature, name variants, public character profile, and cognitive expression corpus — does not belong to the public domain simply because it has been publicly observed. Under the Berne Convention, an original compilation of fixed expressions constitutes a protectable Composite Copyright Work the moment it is created and deposited. The deepfake becomes, by definition, an unauthorized derivative work." — Dr. Sandjar Muminov, SANDJAR GROUP

The Anti Deep-Fake Certificate (ADFC) operationalizes this framework. It is, in effect, an SSL certificate for human identity — a cryptographically verifiable, internationally registered declaration of ownership over one's digital likeness across six distinct dimensions:

• Visual Profile (Face-Print) — Biometric registration of facial geometry, enabling AI-based comparison verification and establishing the baseline for any derivative face-generation claim.

• Audio Profile (Voice-Print) — MFCC-based voice fingerprint registration, creating the foundational protected work against which any voice clone constitutes infringement. A minimal illustrative sketch of this kind of fingerprint follows this list.

• Signature Profile (Signature-Print) — Both personal and business signature dynamics, including pressure and velocity characteristics where applicable.

• Name Profile (Name-Print) — Registration of name variants, transliterations, and professional pseudonyms across relevant linguistic contexts.

• Public Profile (Character-Print) — A formal declaration of the holder's established public positions — and critically, of the 20+ categories of content they have never produced and would never produce. Any deepfake content outside this declared scope is immediately identifiable as inauthentic and legally actionable.

• Mental Profile (Cognitive-Print) — A registered corpus of the holder's authentic written and spoken expression: vocabulary, rhetorical structure, argumentative frameworks. This is the foundation for claims against AI systems trained on or misappropriating an individual's intellectual identity.
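To make the technical basis of these profiles concrete, here is a minimal sketch, in Python, of how an MFCC-based voice fingerprint of the kind described under the Audio Profile might be computed and compared. It assumes the open-source librosa audio library and illustrative file names; a production Voice-Print system would rely on far more robust speaker-embedding models and forensically calibrated thresholds.

```python
# Minimal illustrative sketch: summarizing a recording as an MFCC-based
# fingerprint and comparing a questioned recording against it.
# File names are hypothetical; this is not the registration pipeline itself.
import numpy as np
import librosa

def voice_print(audio_path: str, sr: int = 16000, n_mfcc: int = 20) -> np.ndarray:
    """Summarize a recording as the mean and spread of its MFCC frames."""
    signal, _ = librosa.load(audio_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprints (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

registered = voice_print("registered_voiceprint.wav")  # the deposited work
questioned = voice_print("suspected_clone.wav")        # the contested media
print(f"Similarity to Voice-Print: {cosine_similarity(registered, questioned):.3f}")
```

The design point is not the specific features but the existence of a fixed, deposited reference object: any contested audio can be measured against it, which converts a dispute about authenticity into a dispute about numbers.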

The Berne Convention Mechanism: Why This Works Globally

The Berne Convention — ratified by 181 countries — establishes that copyright protection arises at the moment of creation and fixation. No registration is required in most signatory jurisdictions for the right to exist. However, registration — particularly through a recognized international deposit authority — creates presumptive evidence of ownership and priority that courts in every member state are bound to recognize.

This is the precise function of the INTEROCO European Depository registration (HRB 198086, Berlin, Germany). Each ADFC certificate constitutes a formally deposited Composite Copyright Work. When a deepfake is created using a protected individual's face, voice, or cognitive signature, the legal calculus changes fundamentally:

• The burden of proof shifts — from the victim (who must prove harm and identify an attacker) to the alleged infringer (who must demonstrate their synthetic content does not derive from the protected Composite Work).

• The evidentiary standard is objective — pixel-by-pixel comparison between the deepfake and the registered Face-Print; frequency analysis against the Voice-Print. Courts are given a technical standard, not a philosophical argument about consent. A sketch of such a comparison follows this list.

• Platform liability is activated — hosting a derivative work that infringes a registered copyright is materially different, legally, from hosting content that violates a vague 'likeness' right. Platforms cannot claim ignorance of a registered, QR-verifiable certificate.

• Cease-and-desist becomes immediately actionable — in every Berne signatory jurisdiction without needing to establish jurisdiction-specific precedent from scratch.
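To illustrate the objective evidentiary standard described in the second point above, the following is a minimal sketch of a pixel-level comparison between a registered Face-Print image and a frame extracted from contested media. The file names are hypothetical, and genuine forensic comparison would involve face detection, alignment, and learned embeddings rather than raw pixels; the sketch only shows that the court is handed a number, not an argument.

```python
# Minimal illustrative sketch: pixel-level comparison of a registered
# Face-Print image against a frame taken from a suspected deepfake.
# File names are hypothetical; real forensic pipelines align faces first.
import numpy as np
from PIL import Image

def load_grayscale(path: str, size: tuple = (256, 256)) -> np.ndarray:
    """Load an image, resize it, and normalize to grayscale values in [0, 1]."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float64) / 255.0

registered = load_grayscale("face_print_registered.png")  # the deposited work
contested = load_grayscale("deepfake_frame.png")          # the contested frame

# Mean absolute pixel difference: 0.0 means identical, 1.0 maximally different.
difference = float(np.abs(registered - contested).mean())
print(f"Mean pixel difference from registered Face-Print: {difference:.4f}")
```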

The Regulatory Window: August 2026

There is a specific urgency to this moment that transcends the general risk environment. The EU AI Act, Article 50, mandates compulsory deepfake disclosure labeling effective August 2026. Non-compliance with these transparency obligations carries penalties of up to €15 million or 3% of global annual turnover, whichever is greater; for a company with €1 billion in annual revenue, the turnover-based cap alone is €30 million.

For organizations operating in European markets — which, given the extraterritorial reach of EU digital regulation, effectively means any organization with a European digital presence, user base, or business relationship — compliance infrastructure must be in place before that date.

The ADFC provides three compliance-adjacent functions:

• Provenance baseline — A registered, QR-verifiable identity record against which content authenticity can be assessed, in alignment with C2PA technical standards adopted by Adobe, Microsoft, Google, and OpenAI.

• Enforcement trigger — The moment a registered certificate holder discovers non-disclosed AI-generated content using their protected likeness, the legal basis for enforcement under both copyright law and the EU AI Act is immediately available.

• Counterparty verification — For financial institutions, legal firms, and corporate treasury teams: the ability to verify, via API query to the Digital ID Registry, whether a counterparty in a video or voice communication holds a registered ADFC — and whether the communication is consistent with their Character-Print and declared communication profile. A hypothetical sketch of such a query follows.
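As a purely hypothetical sketch of the counterparty-verification flow described in the last point above: the endpoint URL, certificate ID format, and response fields below are assumptions made for illustration, not the documented Digital ID Registry API.

```python
# Hypothetical sketch of a counterparty-verification query. The endpoint,
# ID format, and response schema are illustrative assumptions only.
import requests

REGISTRY_URL = "https://registry.example.com/v1/certificates"  # assumed endpoint

def verify_counterparty(certificate_id: str) -> bool:
    """Return True if the registry reports an active certificate."""
    response = requests.get(f"{REGISTRY_URL}/{certificate_id}", timeout=10)
    response.raise_for_status()
    record = response.json()
    return record.get("status") == "active"  # assumed response field

if verify_counterparty("ADFC-2026-000123"):  # assumed ID format
    print("Counterparty holds an active registered certificate.")
else:
    print("No active certificate found: escalate verification before transacting.")
```

In a treasury workflow, a check like this would sit alongside, not replace, existing call-back and dual-authorization controls.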

The Investment Thesis: Verified Identity as Risk Infrastructure

We address the VNTR investor community directly on this point, because it reflects both the commercial dimension of what we have built and the strategic opportunity that exists for early adopters.

The SSL certificate analogy is not rhetorical. In the mid-1990s, organizations that adopted SSL encryption early did not do so because they faced immediate existential threats. They did so because they understood that trust infrastructure, once it becomes standard, confers a lasting competitive advantage — and that those who establish it early set the terms for everyone who follows.

We are at a structural turning point for digital identity. Consider the trajectory:

• In 2024, 53% of financial professionals in the US and UK reported experiencing a deepfake attack attempt. (EY Financial Services Survey, 2024)

• In 2025, 88% of all documented deepfake fraud events targeted the cryptocurrency and digital asset sector. (Sumsub, 2025)

• By 2027, Gartner projects that 25% of enterprise authentication decisions will require synthetic media verification infrastructure.

The organizations that register their executive teams, board members, and key investor-facing personnel with verified Digital ID Certificates in 2026 will occupy a structurally differentiated position by 2027: they will be the counterparties that sophisticated institutions, regulators, and investors trust by default.

More concretely: a fund manager with a registered ADFC in a video call with an LP is a categorically different risk proposition than one without. A CEO whose Voice-Print is registered and verifiable is a categorically different counterparty in a treasury-authorization workflow than one who is not. This is not branding. This is risk architecture.

What VNTR and SANDJAR GROUP Built

VNTR’s role in this partnership is specific: as a global investor community operating across 40+ chapters, VNTR brings access to the people this certificate is designed to protect — fund managers, founders, board members, and capital allocators whose identities are both high-value and high-exposure. VNTR members gain access to the ADFC program at preferential tiers, with onboarding coordinated directly through the VNTR network. SANDJAR GROUP provides the legal architecture, jurisdictional coverage, and enforcement infrastructure behind each certificate.

The ADFC program, developed in partnership with INTEROCO European Depository in Berlin, represents the culmination of three years of cross-jurisdictional legal engineering. Our consortium, together with VNTR, brings:

• 247 specialists — IP attorneys, litigators, regulatory consultants, and technical advisors across 35 countries

• 30 years of international IP litigation experience, including 1,200+ cases with a 98.2% favorable outcome rate

• $816.4 billion in IP assets under advisory across our client base

• Active registration infrastructure in the EU, GCC, North America, and Asia-Pacific jurisdictions

Each certificate is issued as a formal legal document under UAE law and the Berne Convention, verifiable via QR code on interoco.com, and protected for 50 years from the date of registration. Senior certificate holders receive direct access to Dr. Sandjar Muminov for strategic consultation, with escalation to full litigation support in any of our 35 active jurisdictions.

The Bottom Line for VNTR Investors

The deepfake threat is not a future scenario. It is the present reality of every high-visibility individual in the global investment community — and the absence of legal recourse under existing frameworks is not a temporary gap waiting to be filled by legislation. It is a structural condition that requires a structural solution.

The Anti Deep-Fake Certificate does not promise to prevent attacks. Nothing can guarantee that. What it provides is the transformation of a person's digital identity from unprotected public data into registered private property — with the legal machinery of the Berne Convention, 181 national jurisdictions, and a 30-year track record in IP enforcement standing behind it.

In the language of the investment community: this is a risk mitigation instrument with a highly asymmetric payoff profile. The cost of a certificate is fixed and knowable. The cost of a single successful deepfake attack against a principal — reputationally, financially, legally — is potentially unbounded.

"The question is not whether your digital identity is worth protecting. Everything of value is worth protecting. The question is whether you will establish that protection before or after the attack." — Dr. Sandjar Muminov

VNTR members interested in exploring the Anti Deep-Fake Certificate — for personal registration, executive team coverage, or portfolio company risk advisory — can initiate the process directly through VNTR. Reach out via vntr.vc or drop a message to the VNTR team directly. VNTR Plus and VNTR Club members receive preferential pricing and priority onboarding.

Victoria Merli
Social Media & Marketing Manager