On December 26, 2024, the National Assembly passed the Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act), legislation that establishes the fundamental matters necessary to promote the sound development of artificial intelligence and to create a trust-based framework.1 The bill has been sent to the government and, barring the President’s exercise of his veto power within 15 days, will become law upon promulgation. Once enacted, it will make Korea the second jurisdiction in the world, after the European Union, to establish a comprehensive legal framework specifically governing artificial intelligence.
Read on for more details: 

The AI Basic Act establishes a comprehensive governance framework to support the development and promotion of artificial intelligence while ensuring ethical standards and reliability.

Fostering Industry Development

Under the AI Basic Act, the Minister of Science and ICT (the “Minister”), in collaboration with relevant ministries, must establish a master plan every three years to promote AI technologies and industries and enhance national competitiveness. To strengthen the foundation of the AI industry, the AI Basic Act also establishes a legal basis for government support of initiatives aimed at advancing AI technology development and utilization, as well as promoting technology standardization. It outlines measures to drive AI adoption and facilitate industry growth, including through support for SMEs and startups, talent development, and global market expansion.

Notably, the AI Basic Act mandates separate initiatives to support the utilization of training data and data centers, recognizing the pivotal role of data – often referred to as the “new currency” of the AI era – in fostering AI innovation.

AI Ethics and Reliability

The AI Basic Act requires the government to formulate measures to minimize potential risks arising from AI and to foster trust in the safe use of AI. The government is also empowered to establish AI ethics principles covering AI safety, reliability, and accessibility.

Legal Foundation for the National AI Committee and More

The AI Basic Act provides a legal basis for the establishment and operation of the National AI Committee, launched in September 2024 to oversee the nation’s artificial intelligence policy. Under the AI Basic Act, the National AI Committee is to be chaired by the President, with a majority of members from the private sector to allow for industry-driven AI policies.

To ensure the professional and systematic development and execution of AI-related policies and initiatives, the AI Basic Act also sets forth the legal grounds for other key institutions, including: the AI Policy Center, the AI Safety Research Institute, and the Korea AI Promotion Association.

The AI Basic Act stipulates specific duties and responsibilities of AI companies to secure AI ethics and reliability

The types of AI systems governed by the AI Basic Act are: (i) high-impact AI (defined as AI systems that pose significant risks or impacts on human life, physical safety, or fundamental rights) and (ii) generative AI (defined as AI systems designed to mimic the structure and characteristics of input data to generate outputs such as text, sound, images, videos, and other creative content).

For the above categories of AI systems, the AI Basic Act assigns different sets of obligations to entities that develop and deploy such AI systems and to entities that use them to provide their own products and services.

Similar to the EU AI Act, the AI Basic Act takes a risk-based approach and regulates high-impact AI systems. However, unlike the EU AI Act, which categorizes AI risks into four levels (unacceptable risk, high risk, limited risk, and minimal risk) and explicitly bans AI systems deemed a clear threat to people’s safety, livelihoods, and fundamental rights, the AI Basic Act does not outright ban any specific type of AI system.


The table below summarizes the obligations the AI Basic Act imposes on in-scope entities:

Article 31 – Transparency Requirement (for high-impact and generative AI systems)
  • Businesses incorporating high-impact or generative AI systems into their products or services must provide users with advance notice that the products or services are AI-powered.
  • Where generative AI systems (including products or services based on such systems) are offered, businesses must indicate that the generated output is artificially created.
  • Businesses providing AI-generated outputs such as deepfakes must indicate that the applicable output is artificially created.

Article 32 – Safety Requirements (for large-scale AI systems)
  • Providers of large-scale AI systems must implement measures to identify, assess, and mitigate risks at each stage of the AI lifecycle. They must also build a monitoring and risk management system and submit the implementation results to the Minister.

Article 33 – Confirmation of High-Impact Status
  • Businesses must conduct a preliminary assessment to determine whether their AI systems, or the AI systems integrated into their products or services, qualify as high-impact AI. If needed, they may seek confirmation from the Minister.

Article 34 – Safety and Reliability Requirements (for high-impact AI systems)
Businesses providing high-impact AI systems, or incorporating such systems into their products or services, must:
  • develop and implement a risk management plan;
  • establish and execute measures to explain the AI system (including an overview of the training data);
  • develop and implement a user protection plan;
  • ensure human supervision and oversight of the high-impact AI system;
  • maintain documentation on safety and reliability measures; and
  • perform additional tasks as determined by the National AI Committee.

Article 35 – Impact Assessment (for high-impact AI systems)
  • Businesses incorporating high-impact AI systems into their products or services must endeavor to conduct a prior assessment of the potential impact on people’s fundamental rights.
  • For products or services intended for government use, priority must be given to those employing high-impact AI systems that have undergone such impact assessments.

Article 36 – Local Representative Designation
Foreign businesses exceeding the statutory threshold must designate and report a local representative, who is responsible for:
  • submitting the results of safety measure implementation for large-scale AI systems;
  • requesting ministry confirmation on high-impact AI systems; and
  • supporting the implementation of safety and reliability measures for high-impact AI systems.

Fact-finding Investigations and Sanctions for Non-compliance

Where the Minister becomes aware of an actual or suspected violation of the duty to secure transparency and safety for AI systems (and, for high-impact AI systems, reliability), or receives reports or complaints concerning such violations, the AI Basic Act authorizes the Minister to conduct a fact-finding investigation. The investigation verifies that in-scope entities: (i) mark and disclose that outputs are generated by AI systems (transparency requirements); (ii) implement measures to identify, assess, and mitigate risks, establish risk management systems, and duly submit the results of their implementation (safety requirements); and (iii) implement measures to secure the safety and reliability of high-impact AI systems through risk management, user explanation, and user protection (high-impact AI safety and reliability requirements).

To enforce the obligations set out in the law, the AI Basic Act allows administrative fines of up to KRW 30 million for noncompliance involving a failure to: (i) provide prior notice that the services provided are AI-driven; (ii) designate a local representative (in the case of foreign businesses); or (iii) comply with a cease-and-desist or corrective order issued following an investigation.

 

Key Takeaways:

Once it enters into force, the AI Basic Act is expected to reduce regulatory uncertainty surrounding AI technology development and the provision of AI-related products and services. By providing a framework of systematic support policies for promoting the AI industry, it is anticipated to foster a dynamic domestic AI innovation ecosystem and strengthen Korea’s competitive position in shaping AI norms and achieving technological leadership.

The AI Basic Act outlines policies to be implemented by the government and related agencies to foster the AI industry; however, specific details are yet to be determined. Further, matters relating to the exact scope of “high-impact AI” and detailed standards and procedures to ensure compliance, such as how in-scope entities must establish and implement a user protection plan and ensure human oversight of high-impact AI systems, will be addressed in subordinate legislation.

Businesses operating or planning to operate with high-impact or generative AI systems should thus closely track the development of subordinate legislation and government policies and proactively: (i) assess whether and how their operations are subject to the AI Basic Act, (ii) review the legal obligations they must fulfill, and (iii) determine the government support measures for which they may be eligible. In particular, businesses aiming to benefit from the government’s preferential procurement policies for high-impact AI systems that have obtained certifications or undergone an impact assessment should proactively prepare for certification and impact assessment procedures.  

1 The AI Basic Act consolidates 19 separate AI bills introduced to the National Assembly since May 2024, following the opening of the 22nd National Assembly.

 

[Korean version] AI 기본법 제정안 국회 본회의 통과 (“AI Basic Act Bill Passes National Assembly Plenary Session”)