YouTube expands deepfake detection tool to politicians and journalists

YouTube is broadening its efforts to combat AI impersonation by rolling out a deepfake detection tool to a select group of government officials, political candidates, and journalists. The move aims to safeguard the integrity of public discourse, according to Leslie Miller, vice president of government affairs and public policy at YouTube, who emphasized the heightened risks of AI impersonation for individuals in civic roles.

The platform has been developing its likeness detection tool since 2024 in collaboration with Creative Artists Agency, initially testing it with top creators such as MrBeast and Marques Brownlee. Last year, access was extended to all creators. Amjad Hanif, vice president of creator products at YouTube, noted that most of the tool's impact has been relatively benign or even beneficial to creators' businesses.

YouTube CEO Neal Mohan has named AI transparency and protections a top priority for 2026, including labeling AI-generated content and removing harmful synthetic media. The current tool focuses on facial likeness detection, but the company is also exploring voice impersonation. Additionally, YouTube is considering allowing creators to monetize their likeness in detected content, similar to its Content ID system.

The expansion comes as tech companies increasingly prioritize safeguards against AI-driven impersonation. YouTube plans to eventually extend the tool to all government officials, political candidates, and journalists. Congress and the Trump administration have shown concern over deepfakes and AI impersonation: last year, President Trump signed the TAKE IT DOWN Act, which addresses nonconsensual intimate images, including deepfaked content.