YouTube’s new policy on AI-generated content – what you need to know

YouTube has been making huge strides in AI, though much of it has gone largely under the radar. Artists can now request a takedown of music that mimics their voice, or of content that replicates their face.

Meta isn’t the only tech giant grappling with the implications of AI-generated content. In June, YouTube quietly introduced a significant policy update that allows individuals to request the takedown of AI-generated or synthetic content simulating their face or voice.

This policy change builds on YouTube’s broader approach to responsible AI, which was initially outlined in November. While AI offers clear benefits, it can also be extremely damaging: deepfakes are now so believable that misuse can spiral quickly and cause real harm.

Privacy-based takedown requests

Previously, AI-generated content that was misleading, like deepfakes, could be removed for violating YouTube’s guidelines against misinformation. The new policy, however, frames these takedown requests as privacy violations.

According to YouTube’s updated help documentation, the affected person must submit the request themselves unless specific exceptions apply, such as cases involving minors, deceased individuals, or people without computer access.

Evaluation criteria

Submitting a takedown request doesn’t guarantee the content will be removed. YouTube will evaluate each complaint on several factors:

  • Disclosure: Whether the content is labelled as synthetic or AI-generated.
  • Identification: If the content uniquely identifies an individual.
  • Nature of content: Whether the content could be considered parody, satire, or otherwise valuable and in the public interest.
  • Public figures: If the content features a public figure or well-known individual, especially if it portrays them in a sensitive context, such as criminal activity or political endorsements.

This last point is particularly critical during election years, when AI-generated endorsements could potentially influence voters.

Content removal process

If a takedown request is made, YouTube will give the content uploader 48 hours to address the complaint. If the content is removed within this period, the complaint is resolved. However, if not, YouTube will initiate a review.

Removal means the video will be completely taken down, and any personal information in the title, description, or tags will be deleted. Blurring out faces is also an option, but simply making the video private doesn’t comply, as it could be switched back to public at any time.

Disclosure and context tools

In March, YouTube introduced a tool in Creator Studio allowing creators to disclose when content is made with altered or synthetic media, including AI. Recently, YouTube also began testing a feature that lets users add crowdsourced notes to provide context.

These notes might indicate, for example, whether a video is a parody or is misleading. This will help the platform spot content that perhaps shouldn’t be on the site; after all, it can’t monitor every video or song closely, so it will rely on the help of users.

YouTube’s stance on AI

YouTube isn’t opposed to AI. It has experimented with generative AI tools, like a comments summariser and a conversational tool for video-related questions and recommendations. However, AI-generated content must still adhere to YouTube’s Community Guidelines.

Merely labelling it as AI-created doesn’t exempt it from removal if it violates these guidelines. YouTube’s new policy represents a proactive step in managing the impact of AI-generated content, and it could meaningfully strengthen protections for content creators and the people depicted in synthetic media.

By allowing privacy-based takedown requests, YouTube aims to balance the benefits of AI with the need to protect individual privacy and integrity. As AI continues to evolve, such measures will be crucial in maintaining trust and safety on digital platforms.
