Photo credit: Wang, Y. et al. (2021). HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping. Zhejiang University. Source: arXiv (https://arxiv.org/pdf/2106.09965.pdf).
A recent survey conducted by The Voice of Mason Korea found that 55% of the 20 students surveyed are "extremely concerned" about deepfake crimes, and 30% plan to limit who can view their posts. Additionally, 35% were unfamiliar with the term "digital literacy," highlighting a gap in awareness of online safety and the risks associated with deepfake technology.
The term "deepfake" is a combination of "deep learning" and "fake." It refers to synthetic media technology that uses artificial intelligence (AI) to create fake but highly realistic content, often called "face swapping" when applied to altering faces in videos or images. A notable incident began with a 2021 investigation, which revealed deepfake crime victims among Seoul National University alumni. Their personal identities and images were obtained to create and distribute illegal deepfake videos via Telegram, and victims were later threatened with the videos' release. The perpetrator was arrested in May 2024.
Following this, on August 19th, the Korean media outlet Hankyoreh reported a rise in deepfake-related digital sex crimes. Over 100 Telegram channels, organized by Korean educational institution, were found to be distributing illegal deepfake pornography. Victims and perpetrators spanned all ages and genders. Investigations revealed that most of the perpetrators were acquaintances of the victims, gaining access to their social media accounts to obtain images and videos. In some cases, personal information was exposed through platforms like Telegram, causing further harm, as victims were blackmailed with the fake videos.
According to the "2023 Deepfake Production Status" report by U.S. cybersecurity firm Security Hero, South Korea is the country most vulnerable to deepfake crimes. Professor Kwak from Mason Korea highlighted the risks, especially for female K-pop artists, stressing that anyone can be targeted because of the widespread availability of images and videos on social media. She advocates for regular digital literacy education in schools, workplaces, and colleges to help people protect their online information and stay alert to threats like deepfakes, which are becoming more common as the technology advances.
To cultivate a stronger digital culture, institutions should offer regular digital literacy education to promote safe online practices and raise awareness about the risks of advanced technologies like deepfakes. Digital literacy is becoming increasingly important, as it helps individuals recognize potential threats.
The term "deepfake" first appeared on Reddit in 2017. Deepfake technology uses an AI method called a Generative Adversarial Network (GAN), which operates through two neural networks: a generator and a discriminator. The generator creates fake content while the discriminator compares it against real source material. By repeating this process, the system improves the quality of the generated content until it becomes so realistic that the discriminator can no longer distinguish it from the original.
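The generator-versus-discriminator loop described above can be illustrated with a deliberately tiny sketch. This is not how deepfake software is actually built (real systems use deep convolutional networks on images); it is a minimal, hypothetical example in which the "real data" is just numbers drawn from a bell curve around 4, the generator is a one-parameter-pair function, and the discriminator is a simple logistic classifier. The adversarial training pattern, however, is the same: the discriminator learns to tell real from fake, and the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: samples near 4.0.
real_mean, real_std = 4.0, 1.0

# Generator g(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c):
# the simplest possible adversarial pair (hypothetical toy models).
a, b = 1.0, 0.0          # generator parameters (starts producing samples near 0)
w, c = 0.0, 0.0          # discriminator parameters
lr, batch, steps = 0.01, 64, 4000

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(steps):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    xr = rng.normal(real_mean, real_std, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)               # noise input
    xf = a * z + b                                # fake samples
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # gradients of -[log D(real) + log(1 - D(fake))] w.r.t. w and c
    gw = -np.mean((1 - dr) * xr) + np.mean(df * xf)
    gc = -np.mean(1 - dr) + np.mean(df)
    w -= lr * gw
    c -= lr * gc

    # --- generator step: push D(fake) toward 1 (fool the discriminator) ---
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    gx = -(1 - df) * w            # gradient of -log D(fake) through xf
    a -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

# After training, the generator's output typically drifts toward the real mean.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(f"generator output mean after training: {fake_mean:.2f}")
```

In a real deepfake pipeline the same loop runs over face images instead of numbers, which is why the generated faces keep improving until they are hard to tell apart from genuine footage.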
With the advancement of AI software, the creation of deepfakes has become significantly easier. In today’s digital society, individuals need to be aware of the threat and be mindful about sharing data online.
Written by Heewon Yang | Staff Writer
Revised by Maddie Sailakkham | Editor-In-Chief