Among countless imaginings and narratives, perhaps no phrase resonates more profoundly today than “The world has truly changed. What will the future hold?” We are, of course, speaking of artificial intelligence (AI). New models that astonish us emerge almost daily, AI-related publications consistently feature in bookstore displays among new releases and bestsellers, and AI has long dominated newspaper articles, columns, and news broadcasts. AI has now permeated our daily lives so deeply that it has become an inseparable part of our existence.

On January 22, 2026, the Act on the Development of Artificial Intelligence and Establishment of Trust (the “AI Basic Act”) finally entered into force. Against the backdrop of global regulatory developments, including the entry into force of the EU’s AI Act, the world’s first comprehensive AI law, in August 2024, the AI Basic Act was enacted exactly one year earlier with the dual objective of promoting AI-related industries while establishing a foundational legal framework to prevent potential harm.

Six Principles of the AI Basic Act

  • (Why) To establish national-level AI governance, provide legal basis for industry development, and create a foundational regulatory framework for AI;
  • (Who) Targeting AI service providers (i.e., AI developers and AI deployers);
  • (When & Where) When AI service providers offer high-impact AI, generative AI, or high-performance AI;
  • (What) Applying regulations to ensure AI transparency and safety; and
  • (How) Requiring identification (by service providers themselves or through government agencies) and impact assessment.
    ※ Compliance is enforced through sanctions including administrative fines and corrective orders.

Grace Period: In conjunction with the enforcement of the AI Basic Act, the Ministry of Science and ICT (“MSIT”), as the competent authority, has announced a grace period of at least one year for fact-finding investigations (incl. corrective orders) and administrative fine dispositions, and has published related guidelines. Moreover, the so-called “AI Basic Act Help Desk” is currently in operation to provide guidance on the Act's enforcement and provisions.

Having actively participated in numerous discussions and deliberations from the Act's enactment through its enforcement, Shin & Kim LLC plans to further expand and develop its AI Center and, on this significant occasion of the Act's enforcement, to conduct a series of regular seminars addressing key related areas and issues. We expect these seminars to provide a valuable platform for personnel in industries and sectors with high AI and data utilization and concentrated regulatory risks to strengthen their substantive compliance capabilities.

The AI Basic Act and its Enforcement Decree entered into force on January 22, 2026. The Act establishes a government support and governance framework for AI development and promotion, AI ethics, and trust-building, while imposing specific obligations on AI service providers, including requirements to ensure transparency and safety.

The following provides an overview of the key provisions of the AI Basic Act and its Enforcement Decree, recommended corporate response measures, and our upcoming seminar plans.

 

1.  Key Provisions of the AI Basic Act and its Enforcement Decree

▶ Regulated Entities – AI Service Providers

  • The AI Basic Act imposes various obligations on AI service providers, with different requirements depending on the type and category of AI they provide.
Type
  • AI developer (Article 2(7)(a)): A person who develops and provides AI
  • AI deployer (Article 2(7)(b)): A person who provides AI products or services using AI provided by an AI developer

Category
  • High-impact AI service provider (Article 2(4), (7)): A service provider offering high-impact AI* products or services in specific areas such as recruitment and loan screening
    * AI systems that may have a significant effect on, or pose risks to, human life, physical safety or fundamental rights
  • Generative AI service provider (Article 2(5), (7)): A service provider offering AI products or services that generate content such as images and videos by learning and imitating the characteristics of input data

※ Companies may hold multiple classifications simultaneously depending on the characteristics of their services.

(Example: An entity that directly develops generative AI and provides loan screening services would qualify as both an AI developer and AI deployer, as well as both a generative AI service provider and high-impact AI service provider.)

  • Specifically, the AI Basic Act distinguishes users—defined as those who receive AI products or services—from AI service providers, and excludes users from its regulatory scope.

 

2. Key Obligations 

With the enforcement of the AI Basic Act and its Enforcement Decree, AI service providers are subject to the following obligations, which may apply cumulatively depending on their business type and the categories of products or services they provide.

▶ Transparency Requirements (Article 31 of the Act, Article 23 of the Decree)

  • Applicable to: High-impact AI service providers, generative AI service providers
  • Requirements: Advance notice and labeling

Advance Notice Obligation: Provide advance notice* to users that the product or service operates based on high-impact AI or generative AI
* Methods include documentation (in products, contracts, user manuals, terms of service, etc.), display (on service screens), or posting.

Labeling Obligation: Indicate* that the output was generated by generative AI
* For general outputs, either a human-recognizable labeling method or a machine-readable method such as a watermark may be used; for deepfakes, only labeling methods that users can clearly recognize are permitted.

  • Exceptions: All or part of the advance notice or labeling obligations do not apply (i) if the use of high-impact AI or generative AI is apparent* or (ii) if the AI is used solely for the AI service provider's internal business purposes.
    * This includes cases where it is apparent from the product or service name, text displayed on user screens, the product's exterior, or the output itself.
     
  • Sanctions: Corrective order* or administrative fine of up to KRW 30 million** (Articles 40(3) and 43)
    * For violations of labeling obligations
    ** For violations of advance notice obligations


▶ High-Impact AI Identification and Safety/Reliability Requirements (Articles 33-35 of the Act, Articles 25, 27, 28 of the Decree)

  • Applicable to: AI service providers
  • Requirements: Identify whether AI constitutes high-impact AI and (if applicable) ensure safety and reliability

High-Impact AI Identification: When providing AI or AI-based products or services, conduct prior review* to determine whether it constitutes high-impact AI
* Identification requests may be submitted to the Minister of Science and ICT (“MSIT Minister”) with supporting documents such as service overviews.

Safety and Reliability Requirements: Implement and post** safety and reliability measures*
* Measures include: (i) establishing and operating risk management plans; (ii) establishing and implementing AI explanation plans; (iii) establishing and operating user protection measures; (iv) human management and supervision of high-impact AI; and (v) preparing and retaining (for 5 years) documentation confirming safety and reliability measures.
** Post the following on business premises or website (trade secrets may be excluded): (i) key elements of risk management plans, (ii) key elements of AI explanation plans, (iii) user protection measures, (iv) names and contact information of persons managing and supervising high-impact AI.

  • Sanctions: Corrective order (Article 40(3) of the Act)
  • Other Provisions: AI service providers may conduct advance assessments of the impact of high-impact AI on fundamental human rights, and government agencies shall give priority consideration to products and services for which impact assessments have been conducted.

 ▶ Designation of Domestic Representative (Article 36 of the Act, Article 29 of the Decree)

  • Applicable to: AI service providers without a domestic address or place of business who either meet certain scale requirements* or have been subject to administrative fines for violating corrective orders
    * Previous year’s revenue of KRW 1 trillion or more, or AI service segment revenue of KRW 10 billion or more in the previous year, or average daily Korean users of AI products or services exceeding 1 million over the preceding 3 months as of the end of the previous year
     
  • Requirement: Designate a representative with a domestic address or place of business*
    * Domestic representatives shall represent and support the service provider in: (i) submitting large-scale AI safety measure results, (ii) requesting high-impact AI Identification, and (iii) implementation of high-impact AI safety and reliability measures.
     
  • Sanctions: Administrative fine of up to KRW 30 million (Article 43(1)(2) of the Act)

Meanwhile, for large-scale AI—defined as AI that uses cumulative computational resources of 10^26 floating-point operations or more in training, applies cutting-edge AI technology and is likely to have widespread and significant impacts on fundamental human rights—AI service providers are required to submit to the MSIT Minister the results of identification, assessment and mitigation of risks throughout the AI lifecycle as well as the establishment of an AI-related risk management system (Article 32 of the Act, Article 24 of the Decree).

 

3. Grace Period for Enforcement under the AI Basic Act and its Enforcement Decree (1 Year+)

The MSIT, the competent authority for the AI Basic Act, has decided to grant a grace period of at least one year for the application of regulations under the AI Basic Act and its Enforcement Decree, in order to minimize confusion for businesses and provide sufficient time for preparation. During this grace period, a guidance period will apply to fact-finding investigations and administrative fine dispositions, and fact-finding investigations will be conducted only in exceptional circumstances, such as cases involving serious social harm, including loss of life or human rights violations (please refer to MSIT Press Release, MSIT Supplementary Press Release).

Furthermore, to support businesses in achieving smooth compliance, the MSIT has established and is operating the “AI Basic Act Help Desk,” which provides specific and practical guidance on matters related to the AI Basic Act and its Enforcement Decree (AI Basic Act Help Desk).

 

4. Building a Comprehensive Governance Framework to Minimize Regulatory Risks

With the enforcement of the AI Basic Act, companies need to analyze the risks associated with developing and using the AI products and services they currently provide or intend to provide and, to minimize such risks, establish a comprehensive risk management system that addresses the following matters.

  • (Development of Comprehensive and Systematic Compliance Tools) First, companies should determine whether the AI products or services they currently provide or intend to provide are subject to regulation under the AI Basic Act, and if so, identify which specific regulations apply. Additionally, companies are advised to identify the various laws and regulations applicable to their respective industry sectors, such as finance, healthcare, telecommunications, and retail, as well as regulations governing the processing of data and personal information that inevitably accompanies AI service provision. To this end, companies need to develop systematic compliance tools, such as checklists, that enable them to identify and comply with the multi-layered regulatory framework.

    (Example: For a service that uses AI to transform user-submitted content, companies should prepare a comprehensive checklist addressing: how to fulfill labeling obligations under the AI Basic Act; how to establish a legal basis for processing if the user-submitted content constitutes personal information; and how to display both the labeling required under the AI Basic Act and the notice items required when collecting personal information under Article 15(1)(iv) of the Personal Information Protection Act (“PIPA”) on a single screen.) 
     
  • (Establishment of Proactive and Ongoing Compliance Governance) Companies are advised to establish compliance governance throughout the AI planning, development, and operation phases to effectively manage and supervise unforeseen regulatory risks.
    This requires first clarifying the roles and responsibilities of each internal organization in charge by conducting an organizational assessment and documenting necessary compliance measures. In particular, where personal information is processed in connection with AI service provision, companies need to map the flow of personal information processing throughout its lifecycle—including collection, storage, use, and destruction—and establish a legal basis for such processing.

    (Example: Identify and designate the organization responsible for AI use, clarify its roles through internal regulations, and prepare internal policy documents such as ethics guidelines and pledge forms, as well as standard contract templates for use when entering into agreements with external AI service providers. Additionally, identify and diagram the complete flow of personal information processing throughout its lifecycle to recognize potential risks under the PIPA that may arise when providing specific services, including AI services.) 

 

5. Our Future Plan

With the enforcement of the AI Basic Act and its Enforcement Decree, it has become critically important for companies to clearly understand the complex regulatory framework surrounding AI and to strengthen their preparedness and compliance capabilities.

Shin & Kim LLC has long provided comprehensive and systematic advisory services that go beyond simple statutory interpretation. Through our AI Center, we have offered organizational diagnostics, data processing status assessment and evaluation, incident response framework establishment, and development of comprehensive regulatory compliance tools. Building upon this advisory experience and accumulated expertise, we plan to hold regular seminars focused on the regulatory issues that arise in practice during AI implementation, with the objective of supporting companies in strengthening their substantive compliance capabilities. Seminars on the following topics are currently being planned, and we will continue to announce subsequent seminar topics sequentially, focusing on matters of primary practical concern while taking into account changes in the regulatory environment and legislative amendments.

  • (AI, Personal Information & Data) Beginning with the mandatory requirements under the AI Basic Act, we will conduct a seminar examining how data-related regulations, including the PIPA, apply in practice throughout the AI service planning, development, and operation processes, with emphasis on matters requiring attention and preparation. This seminar will also introduce practical measures to minimize personal information leakage and security risks that have recently emerged in the course of AI service provision.
     
  • (Industry-Specific AI Utilization) Additionally, we will examine how AI utilization operates in conjunction with existing regulatory frameworks in various industry sectors such as finance, healthcare, retail, and content, and provide guidance on key considerations for the respective industry sectors. We will also address considerations applicable to AI utilization in areas common across industries, such as recruitment.

These seminars are primarily designed for legal, compliance, and strategy personnel in related industries and sectors with high AI and data utilization and concentrated regulatory risks, including personal information, finance, healthcare, HR, gaming, and content. The seminars will feature practice-oriented presentations by external experts combined with in-depth discussions and Q&A sessions with senior advisors and attorneys from Shin & Kim’s ICT Group and AI Center. Companies seeking to proactively assess regulatory risks associated with AI adoption in their respective industries are expected to gain practical insights and strategic direction through this seminar series. We invite interested companies to register for participation according to seminar schedules to be announced.


About Shin & Kim’s ICT Group

Shin & Kim’s ICT Group provides comprehensive, one-stop legal services by leveraging the firm's distinctive expertise and extensive professional network in the ICT sector, having consistently earned the highest client recognition in recent years. Drawing upon our deep-seated capabilities in broadcasting, telecommunications, personal information protection, and internet IT, we deliver the highest level of legal advisory services encompassing regulatory trend analysis in broadcasting, telecommunications, and ICT; legislative improvement and legislative consulting; regulatory impact assessment; and corporate strategic planning. Furthermore, we possess extensive experience and exceptional expertise in personal information/AI compliance and risk management, providing holistic analysis and strategic responses to legal issues and regulatory risks associated with AI adoption and deployment across diverse industry sectors. Please feel free to contact us with any questions or if you require our assistance on any ICT-related legal matters.

 

[Korean version] AI 기본법 시행과 그 시사점 (Enforcement of the AI Basic Act and Its Implications)