YouTube expands deepfake detection tool to politicians and journalists

YouTube is broadening its efforts to combat AI impersonation by introducing a deepfake detection tool to a select group of government officials, political candidates, and journalists. The move aims to safeguard the integrity of public discourse, according to Leslie Miller, vice president of government affairs and public policy at YouTube, who emphasized the heightened risks of AI impersonation for individuals in civic roles.

The platform has been developing its likeness detection tool since 2024 in collaboration with Creative Artists Agency, initially testing it with top creators like MrBeast and Marques Brownlee. Last year, access was extended to all creators, with Amjad Hanif, vice president of creator products at YouTube, noting that most of the tool's impact has been relatively benign or beneficial to creators' businesses.

YouTube CEO Neal Mohan highlighted AI transparency and protections as a top priority for 2026, including labeling AI-generated content and removing harmful synthetic media. The current tool focuses on facial likeness detection, but the company is also exploring voice impersonation. Additionally, YouTube is considering allowing creators to monetize their likeness in detected content, similar to its Content ID system.

The expansion comes as tech companies increasingly prioritize safeguards against AI-driven impersonation. YouTube plans to eventually extend the tool to all government officials, political candidates, and journalists. Congress and the Trump administration have also shown concern over deepfakes and AI impersonation. Last year, President Trump signed the TAKE IT DOWN Act, which addresses nonconsensual intimate images, including deepfaked content.

#youtube #no_fakes_act #leslie_miller #neal_mohan #take_it_down_act
Expanding Likeness Detection to Civic Leaders and Journalists

YouTube is enhancing its tools to protect the identities of individuals central to public discourse, including government officials, journalists, and political candidates. The platform previously introduced likeness detection for creators in the YouTube Partner Program, a first-of-its-kind feature for managing AI-generated content. Now the tool is being expanded to a pilot group of civic leaders and media professionals, aiming to address the risks posed by deepfakes and unauthorized AI impersonation.

The likeness detection system operates similarly to Content ID but focuses on identifying a person's likeness in AI-generated content. If a match is found, such as a deepfake of an individual's face, the affected person can review the content and request its removal if it violates YouTube's privacy guidelines. However, the tool does not guarantee removal: YouTube prioritizes free expression and preserves content like parody or satire, even when it critiques public figures. The platform will continue to evaluate exceptions to its policies when removal requests are submitted.

To ensure the tool is used responsibly, participants must verify their identity before enrolling in the likeness detection program. The data collected during setup is used solely for identity verification and to support the safety feature, not for training Google's generative AI models.

YouTube emphasizes that technology alone is not sufficient to address the challenges of AI-generated content. The company is advocating for legal frameworks like the NO FAKES Act, which would establish a federal right of publicity and could serve as a model for international adoption. This approach aims to ensure that technology complements, rather than replaces, human creativity and accountability.

#deepfakes #google #youtube #no_fakes_act #ai_generated_content
