YouTube To Introduce AI Detection Tools, Safeguard Creators

YouTube said that the first new tool has already been developed and the company is “actively developing” the second.


YouTube has announced a new set of AI detection tools to protect creators, including artists, actors, and musicians, from generative artificial intelligence. The tools will prevent their faces and voices from being copied and used in other videos.

How YouTube Aims To Counter Deepfakes With AI Detection Tools

The new detection technology is part of YouTube’s existing Content ID system, which identifies copyright-protected material. This system will be expanded to include new synthetic-singing identification technology to identify AI content that mimics someone’s singing voice. Other technologies will be developed to detect when someone’s face is simulated with AI.


“As AI evolves, we believe it should enhance human creativity, not replace it. We’re committed to working with our partners to ensure future advancements amplify their voices, and we’ll continue to develop guardrails to address concerns and achieve our common goals,” stated YouTube.

YouTube also pledged to crack down on those who scrape content on its platform to build AI tools, a violation of its terms of use.

“When it comes to other parties, such as those who may try to scrape YouTube content, we’ve been clear that accessing creator content in unauthorised ways violates our Terms of Service and undermines the value we provide back to creators in exchange for their work. We’ll continue to employ measures to ensure that third parties respect these terms, including ongoing investments in the systems that detect and prevent unauthorized access, up to and including blocking access from those who scrape”, said the Google-owned company. 


The announcement of the detection technologies comes at a time when entities across the public and private sectors have been taking action to safeguard against generative AI models, particularly around the use of deepfakes.

For instance, California lawmakers are mulling over a new bill that would require AI companies to add watermarks to the metadata of AI-generated images, video, and audio. In Europe, the recently promulgated AI Act requires disclosures for AI-generated content.

Big tech companies, including Meta and TikTok, have also taken initiatives to ensure their users can distinguish between real and AI-generated content shared on their respective platforms.


YouTube’s decision follows the promise it made last November to figure out a way to compensate artists whose works were used to generate AI music.

YouTube has said that it is presently focused on refining the technology and plans to launch the pilot program early next year.

The video-streaming platform also said it is developing ways to give creators more choice over how third-party AI companies are permitted to use their content on the platform, and will share further details later this year.
