Mandatory Labeling for AI-Generated Content
  • Reporter: 정민서
  • Published: June 24, 2024, 14:57

As digital media develops and becomes more common, artificial intelligence (AI) is also evolving. AI refers to computer systems that imitate human intelligence and are capable of learning, reasoning, and self-correction. AI has been applied to various fields such as medical care, finance, education, and entertainment. This in turn has created a need for labeling that identifies AI-generated content. Such labels indicate that content was created by AI rather than a human, increasing transparency and reliability. As the influence of AI has grown, so has the importance of identifying AI-generated products.

 What Are the Problems of AI?

 With the development of AI technology, deepfakes and fake news have emerged as global problems. In one example of fake news, AI-generated photos of a building next to the U.S. Pentagon engulfed in flames circulated on social media, causing widespread confusion. As a result of such incidents, the European Union passed a bill in December last year, marking the first time in the world that watermarks were mandated for AI-generated content. In July of last year, seven major AI companies in the U.S., including Google, Microsoft, and OpenAI, officially announced their collaboration with the U.S. government to implement watermarks*. They are actively pushing for legislation at the congressional level to support this initiative. South Korea is also advocating for the introduction of mandatory labeling on AI-produced content to prevent the misuse of AI.

* A watermark is a faint design, image, or text that is visible when paper or some other material is held up to a source of light. It is typically used in paper to identify the source or manufacturer, indicate authenticity, or prevent counterfeiting. In digital media, the term refers by analogy to markers, visible or invisible, embedded in content to identify its origin.

 

Copyright Infringement by Generative AI

Lately, there has been growing discussion around the unauthorized extraction of celebrity voices and the development of AI technology capable of learning them and generating cover songs. Legal experts believe that such AI cover songs may violate the publicity rights of music artists. Human voices are currently not protected by copyright law because they are not defined as creative works. But if AI is used to cause financial damage to an artist, that can be seen as infringing the artist’s right of publicity, an intellectual property right that protects against the misappropriation of a person’s name, likeness, or other indicia of personal identity for commercial benefit.

 In an example from South Korea, the South Jeolla Provincial Office of Education held a theme song contest to promote the Glocal Future Education Fair, where an AI-generated song was selected to receive the top prize. In an open letter distributed on April 3rd, the Alliance for Artists’ Rights argued that predatory uses of AI that steal the voices and portraits of professional artists and infringe on their rights should be prevented. The group also said that using artists’ works to train AI models and systems without the artists’ permission is an attack on human creativity.

 In a case of overseas copyright infringement, an Internet court in Guangzhou, China made a ruling on February 27 this year. The court stated that images created by the defendant using generative AI were similar to the Japanese character Ultraman, infringing on copyright and adaptation rights. As a result, the defendant was ordered to compensate the plaintiff, the copyright holder of Ultraman, 10,000 yuan (1.85 million won) in damages. According to the ruling, the defendant trained an AI using images of Ultraman to create similar characters. He harmed the plaintiff by earning profits from selling his AI-generated characters and from running paid memberships for content.

 

Korean Government’s Plan and Public Opinion Regarding AI Content

 On March 21st, the Korea Communications Commission (KCC) announced a work plan aimed at enhancing user protection in AI services through strengthened policy measures. In particular, the government will push for the introduction of an AI labeling system that makes it mandatory to mark AI-generated content when posting it online. The government will also open a dedicated reporting center to remedy AI-related damage. The government’s plan calls for curbing the harmful side effects of new digital services such as AI and establishing a normative framework suited to current realities. In addition, the government and related authorities are making continuous efforts to revise and strengthen media laws; expand social education and information-literacy efforts in educational institutions, civic groups, and the media; reinforce social media platform regulations; and establish a fact-checking and verification system.

 Meanwhile, content producers and industry insiders remain quietly divided over the mandatory disclosure of AI involvement in AI-produced content. Content producers have urged swift and active mandatory disclosure, but the industry has insisted that the method and scope of disclosure should be approached carefully.

 Kwon Hyuk-joo, president of the Webtoon Writers Association, expressed concern over AI-produced content and insisted on strengthening artists’ rights to data used for AI training. He said that although the flow of AI utilization cannot be stopped, he thinks it is necessary to regulate the ability of AI to learn from visual content, such as webtoons. According to the results of his survey on the issue, 50 percent of participants said that regulations should be made, 30 percent said that they do not know, and 20 percent said that artists should keep up with the times.

 At a public hearing at the National Assembly, Chairman of the Korea Music Copyright Association (KOMCA), Chu Ga-yeol, said, "I think it is a sign that AI and copyrights can coexist by making AI labeling mandatory and establishing a basic framework to distinguish between creators’ works and AI's." He also stressed that the public must play an active role in preventing AI misuse by strengthening transparency in AI use, heading off greater social problems before they occur.

 

How Is AI to Be Used in the Future?

 AI will continue to be used more often in our daily lives and in a wider range of fields. How we use this remarkable technology will shape the way people see AI in the future. There is confusion surrounding AI because the laws governing its use are still being written, but there is no doubt that AI will become an indispensable technology for humans. Rapid calculation and creativity beyond basic human skills will generate endless new possibilities. Until that era arrives, we need clear regulations and clearly defined boundaries between human and AI systems.
