Sunday May 10th, 2026

Saudi Authority Issues Deepfake Guidelines to Address AI Risks

The framework outlines safeguards against fraud and misuse while supporting innovation across sectors.

Scene Now Saudi

The Saudi Data and Artificial Intelligence Authority has released new guidelines on the use of deepfake technology, outlining measures to reduce risks associated with synthetic media while supporting its use across industries.

Titled “Deepfakes Guidelines: Mitigating Risks While Fostering Innovation,” the document distinguishes between malicious and non-malicious applications across sectors including marketing, entertainment, retail, education, healthcare, and culture.

The guidelines identify key risks including impersonation scams, non-consensual manipulation of personal images or audio, and disinformation affecting public discourse. They also highlight emerging threats such as advanced AI voice cloning and fabricated virtual environments designed to simulate real-world interactions.

For developers, the framework requires compliance with national regulations including the Personal Data Protection Law and Anti-Cyber Crime Law, alongside international standards such as GDPR and CCPA. Recommended technical measures include privacy-by-design, anonymisation, consent management, and systems that allow individuals to request removal of their data from training sets.
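The consent-management and data-removal measures described above can be illustrated with a minimal sketch. This is a generic example, not code from the guidelines; the `ConsentRegistry` class and its method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    # Maps a data subject's ID to their current consent status.
    # A removal request revokes consent, so the subject's samples
    # are excluded from any future training runs.
    consents: dict = field(default_factory=dict)

    def grant(self, subject_id: str) -> None:
        self.consents[subject_id] = True

    def request_removal(self, subject_id: str) -> None:
        # Honours an individual's request to withdraw their data.
        self.consents[subject_id] = False

    def filter_training_set(self, samples: list) -> list:
        # Keep only samples whose subject still consents;
        # unknown subjects are excluded by default (privacy-by-design).
        return [s for s in samples if self.consents.get(s["subject_id"], False)]
```

In practice such a registry would sit in front of the data pipeline, so that withdrawn subjects are filtered out before each training run rather than only at collection time.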

Content creators are required to avoid uses involving fraud, impersonation, or defamation, to apply visible, tamper-resistant watermarks, and to secure explicit consent from the individuals depicted. The guidelines also recommend the use of blockchain and cryptographic tools to verify content origin.
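The cryptographic origin-verification idea mentioned above can be sketched with standard-library primitives. This is a simplified illustration, not the guidelines' prescribed mechanism: a creator-held key is used to produce a keyed digest of the media bytes, and any later alteration of the content breaks verification.

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    # SHA-256 digest acts as a tamper-evident fingerprint of the media bytes.
    return hashlib.sha256(data).hexdigest()

def sign(data: bytes, key: bytes) -> str:
    # HMAC ties the fingerprint to a secret key held by the content creator.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, tag: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign(data, key), tag)

original = b"example media bytes"
key = b"creator-secret-key"
tag = sign(original, key)

assert verify(original, key, tag)           # untouched content verifies
assert not verify(original + b"!", key, tag)  # any edit breaks the check
```

Production provenance systems (for example, those based on public-key signatures rather than a shared secret) follow the same principle: the published content must match, byte for byte, what the creator attested to.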

Regulators are advised to prioritise monitoring of high-risk areas such as financial fraud and identity misuse, and to introduce approval processes before commercial deployment. Additional measures include audits, training programmes, and public awareness campaigns.

For users, the guidance outlines steps to identify manipulated content, including verifying sources, checking for visual inconsistencies, and using detection tools. Victims of misuse are advised to document evidence, report content to platforms, and notify authorities through official channels.
