Nov. 25, 2024, 2:02 a.m.

Technology

Deepfake technology sparks fresh controversy: South Korea's "Room N" case resurfaces, raising global concern over privacy and security


South Korea has once again come under the global spotlight for sexual crimes involving deepfakes. The case, dubbed the "Room N" incident by the media, not only shocked South Korean society but also prompted profound reflection in the international community on privacy protection and the abuse of AI technology.

As early as May 2024, South Korea disclosed a number of cases in which pornographic photos and videos were synthesized using deepfake technology. According to the Yonhap News Agency, two Seoul National University graduates, surnamed Park and Kang, were suspected of using face-swapping technology to create a large volume of pornographic material between July 2021 and April 2024, and of setting up nearly 200 chat rooms on the messaging app Telegram to distribute it. There were 61 victims, including 12 Seoul National University students. Although the case drew attention at the time, it failed to curb the abuse of deepfake technology.

In August 2024, South Korean police discovered a large number of social media groups linked to schools, hospitals, and the military that used deepfake face-swapping to create pornographic photos and videos and spread them on platforms such as Telegram. According to a report released by the Korean Women's Civic Association, some groups allowed paying users to upload photos of acquaintances and receive generated nude images within five seconds, and counted as many as 227,000 participants. The figures are shocking and have sparked widespread panic in South Korean society.

The victims of this wave of deepfake abuse are extremely varied, including students, teachers, and soldiers, as well as a large number of minors. According to the National Police Agency, of the 527 victims reported to the police between 2021 and 2023, 59.8 percent (315) were teenagers, and the number of underage victims rose from 53 in 2021 to 181 in 2023. The Korea Federation of Teachers estimates that more than 200 schools have been affected so far, and the number of women affected is unprecedented.

As the incident unfolded, South Korean women took to social media to express anger and unease, posting pleas for help on major platforms around the world, including Weibo, where the topic has trended repeatedly over the past two days. The Korean Women's Civic Association's report, which asks how long society will allow mass sexual violence involving some 220,000 participants to continue, has struck a wide chord. In a speech, South Korean President Yoon Suk-yeol called on the police to conduct a thorough investigation to eradicate such AI-enabled crimes.

The abuse of deepfake technology is not confined to South Korea. Internationally, public figures such as American singer Taylor Swift and entrepreneur Elon Musk have also been victims. According to data from Sensity, a company that monitors and detects deepfake videos, Musk's image appeared in nearly a quarter of more than 2,000 recorded deepfake fraud cases, making him one of the faces most often misappropriated for AI-driven scams.

As AI face-swapping technology matures and spreads, technical countermeasures and privacy protection have become urgent problems. FaceObfuscator, a facial privacy protection scheme jointly developed by Zhejiang University and Alibaba Security, offers one new approach: by obscuring or hiding facial features, it reduces the risk of facial privacy infringement. However, detection of face-swapping in real-time settings such as live-streaming rooms remains weak, so more technical solutions, integrated into social platforms, are still needed to fully counter the deepfake threat.
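FaceObfuscator's internal design is not described in this article. Purely as a loose illustration of the general idea of obscuring facial features, the sketch below pixelates a rectangular region of an image, assuming a face bounding box has already been found by some detector; the function name and interface are hypothetical, not FaceObfuscator's actual API.

```python
import numpy as np

def pixelate_region(image, box, block=8):
    """Coarsen a rectangular region (e.g. a detected face) by replacing
    each block x block tile with its mean colour, destroying fine facial
    detail. `image` is an H x W x C uint8 array; `box` is
    (top, left, bottom, right) in pixel coordinates."""
    top, left, bottom, right = box
    region = image[top:bottom, left:right].astype(float)
    h, w = region.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = region[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1))  # flatten this tile to one colour
    out = image.copy()
    out[top:bottom, left:right] = region.astype(np.uint8)
    return out

# Hypothetical usage: obscure a 64x64 "face" region in a synthetic image.
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
protected = pixelate_region(img, (32, 32, 96, 96))
```

Real schemes are far more sophisticated (and may aim to defeat recognition models rather than human viewers), but the principle is the same: remove or distort the features an attacker's model would harvest.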

The misuse of Deepfake technology has once again raised the alarm about privacy protection. While enjoying the convenience brought by AI technology, we also need to be alert to its potential risks. Governments, businesses, and individuals should work together to strengthen regulation, improve technological preparedness, and increase public awareness to protect our privacy. Only in this way can we move forward in the digital age with peace of mind.
