
From Deepfakes To Loan Reviews, South Korea’s New AI Law Casts A Wide Net

Companies deploying high-risk AI tools are required to notify users and ensure strong safety standards in their operations.



South Korea on Thursday formally enacted a comprehensive law governing the safe use of artificial intelligence (AI) models, becoming the first country in the world to do so and establishing a regulatory framework against misinformation and other hazardous effects of the emerging technology.

The Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, or the AI Basic Act, officially took effect Thursday, according to the science ministry.

It marked the first time a government anywhere has adopted comprehensive guidelines on the use of AI.

The act centers on requiring companies and AI developers to take greater responsibility for addressing deepfake content and misinformation that can be generated by AI models, granting the government the authority to impose fines or launch probes into violations.

In detail, the act introduces the concept of “high-risk AI,” referring to AI models used to generate content that can significantly affect users’ daily lives or their safety, including applications in the employment process, loan reviews and medical advice.

Entities harnessing such high-risk AI models are required to inform users that their services are based on AI and are responsible for ensuring safety. Content generated by AI models is required to carry watermarks indicating its AI-generated nature.

“Applying watermarks to AI-generated content is the minimum safeguard to prevent side effects from the abuse of AI technology, such as deepfake content,” a ministry official said.

Global companies offering AI services in South Korea that meet any of the following criteria – global annual revenue of 1 trillion won (USD 681 million) or more, domestic sales of 10 billion won or more, or at least 1 million daily users in the country – are required to designate a local representative.

Currently, OpenAI and Google fall under the criteria.
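As a purely illustrative sketch (not part of the act or the ministry's materials), the local-representative test described above reduces to a simple "any one of three thresholds" check; the function name and parameters here are hypothetical:

```python
# Illustrative only: checks whether a foreign AI provider meets any of the
# AI Basic Act thresholds reported in the article for designating a local
# representative in South Korea. All figures are in Korean won or users.

def needs_local_representative(global_revenue_won: int,
                               domestic_sales_won: int,
                               daily_users_kr: int) -> bool:
    """Return True if at least one of the three reported thresholds is met."""
    return (global_revenue_won >= 1_000_000_000_000   # 1 trillion won global revenue
            or domestic_sales_won >= 10_000_000_000   # 10 billion won domestic sales
            or daily_users_kr >= 1_000_000)           # 1 million daily users in Korea

# A provider with 1.2 million daily Korean users meets the user threshold alone.
print(needs_local_representative(0, 0, 1_200_000))  # True
```

Because the criteria are disjunctive, a company below both revenue thresholds is still covered if its Korean user base alone crosses the 1 million mark.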

Violations of the act may be subject to fines of up to 30 million won, though the government plans to observe a one-year grace period before imposing penalties to help the private sector adjust to the new rules.

The act also includes measures for the government to promote the AI industry, with the science minister required to present a policy blueprint every three years.