AI Regulation Law: Is It Too Early in Korea?
  • Gyeongnam Times (Kyungnam University)
  • Approved 2024.03.22 16:25

By Kang Min-joo, Kim Ha-neul

KT Senior Reporters

 

 On December 8, 2023, the European Union (EU) reached a deal on the world’s first artificial intelligence (AI) law. The following are some of the law’s key provisions: facial recognition using personal biometric information is allowed for victim identification and criminal tracking, but the creation of fake videos or images is prohibited; autonomous vehicles and medical devices require strict testing and data disclosure; and generative AI such as ChatGPT is also subject to regulation. The law, now at its starting point, was first proposed in April 2021 and is still pending a vote in the European Parliament; it is expected to come into effect in 2025 or 2026. We are all facing an unprecedented change for humanity, and the discussion about AI must continue in order to prevent its risks and maximize its benefits.

 During the EU agreement process, human rights activists criticized the clauses allowing the collection of personal biometric information, and similar reactions from the Korean public and civic organizations are anticipated. Korea is currently in the early stages of AI development. Since the news of the EU proposal, there have been reports of movements in various countries to create regulatory guidelines, but some people in Korea question the need for regulation at such an early stage of AI development. Nevertheless, no one can deny the need to prepare for the future that is coming. The question is, how should we go about it?

 

Pros: Before It’s Too Late

- Kim Ha-neul

 Our country is currently in the early stages of AI development. Therefore, if laws are enacted in advance to restrict AI development, it could be seen as an infringement on the freedom to develop AI. Moreover, hastily creating regulatory legislation could slow the pace of AI development in Korea. However, AI remains unpredictable in terms of when and how it will impact humans. Systems carrying this kind of danger are referred to as ‘high-risk AI,’ a term that denotes the potential risks that can arise when artificial intelligence surpasses a certain threshold.

 These potential risks can manifest in various ways. For example, there is concern that AI could autonomously create systems similar to itself, surpass human control, and be exploited for malicious activities such as information manipulation and invasion of privacy. When interacting with humans, such AI systems may also conflict with human values and ethics.

 AI development is a field that we must continue to advance in as we consider the future era ahead. Moreover, the pace of advancement is progressing rapidly, often beyond what we can perceive. With the emergence of generative AI like ChatGPT, our future is poised to progress even further into a futuristic society. Therefore, it is necessary to establish guidelines from the early stages of development to prevent the misuse of AI and facilitate its development in a harmonious direction with humans.

 Given the current pace of AI development, one of the most necessary regulations is prioritizing the protection of personal information. Even widely used algorithms rely on vast amounts of big data built from individuals’ information. Because so much personal information is already incorporated into AI, regulations are needed to ensure the stability and security of AI systems. Additionally, transparency should be increased so that people can understand how AI systems operate, and developers and operators should be held accountable for the results those systems produce.

 Introducing too many regulations too early in the development of AI could hinder innovation and potentially violate the less restrictive alternative (LRA) principle under the Constitution. Therefore, I think it is important to first establish the regulations necessary for the current era, and then amend the laws in step with the pace of AI development.

 

Cons: The Most Important Thing

- Kang Min-joo

 It is important to regulate AI. However, there is something more crucial than regulatory laws: legislation for victims. Despite the occurrence of crimes involving fake images, videos, and voices, discussion of the resulting damages is insufficient. If the focus were put mainly on regulations against corporations, problem solving would turn toward ‘who did wrong?’ or ‘who is responsible?’ If so, what is actually needed to address the damage done to individuals?

 Firstly, there should be an institution where crimes such as AI-enabled identity theft or infringement of rights can be reported. It would be better still if individuals could be informed in advance about how their information is being used. For instance, the Financial Supervisory Service operates a ‘Personal Information Exposure Accident Prevention System,’ which can track credit cards and bank accounts opened by malicious agents. Similarly, services are needed that allow individuals to verify whether companies utilizing AI technology have used their personal information.

 Secondly, individuals should be granted the authority to refuse the use of their information at any time. Around every November, people in Korea receive notices of personal information usage from the platforms they subscribe to. However, documents sent without clear explanation can feel like junk mail. Therefore, instead of a simple notification about the use of personal information, there should be an officially recognized platform or service that allows individuals to easily understand and manage how their information is used.

 Thirdly, a supervisory facility for AI technology needs to be established both inside and outside of the companies. Issues can arise even without malicious intent, as machines inherently carry the potential for certain errors. Thus, introducing a supervisory institution to monitor whether machines are committing errors is essential.

 Crimes resulting from AI should be categorized as a new area, and institutions composed of AI experts are necessary to handle them. Rather than dwelling on vague fears about a future threatened by AI, addressing current problems is the best thing we can do for both the present and the future. After all, laws exist not only for punishment but also for protection.

