Chinese authorities are cracking down on political and
fraud cases driven by deepfakes, created with face- and
voice-changing software that tricks targets into believing
they are video chatting with a loved one or another trusted person.
; The Chinese government is cracking down on fraud that uses deepfake technology.
How good are the deepfakes?
Good enough to trick an executive at a Fuzhou tech company
in Fujian province who almost lost $600,000 to a person
he thought was a friend claiming to need a quick cash infusion.
The entire transaction took less than 10 minutes
from the first contact via the phone app WeChat to police
stopping the online bank transfer when the target called
the authorities after learning his real friend had never requested
the loan, according to Sina Technology.
; How effective is deepfake technology?
Effective enough to be a real danger: it was recently used in a large-scale fraud attempt.
Despite the public's outcry about such AI-driven fraud,
some experts say Beijing appears more concerned about the political challenges
that deepfakes may pose, as shown by newly implemented regulations on
"deep synthesis" management that outlaw activities that "endanger
national security and interests and damage the national image."
; Experts worry that deepfakes threaten not only economic fraud victims but also national security.
In a 2017 demonstration of the risks, a video created by University of
Washington researchers showed then-U.S. President Barack Obama saying
things he hadn't.
; The 2017 University of Washington video manipulating an Obama speech is a well-known
case. (It was not made for political purposes; it was a technical demonstration of
what deepfake technology can do.)
Two years later, Chinese smartphone apps like Zao let users swap
their faces with celebrities so they could appear as if they were in a movie.
Zao was removed from app stores in 2019 and Avatarify,
another popular Chinese face-swapping app, was also banned in 2021,
likely for violation of privacy and portrait rights, according to Chinese media.
; Zao, a Chinese smartphone app, let users swap faces in videos (removed in 2019),
e.g., replacing a movie lead's face with their own.
Avatarify, a similar Chinese app, was banned by the Chinese government in 2021.
"To create a deepfake, a fraudster uses a real person's document,
taking a photo of it and turning it into a 3D persona," he said.
"The problem is that the technology,
it is becoming more and more democratized.
Many people can use it. …
They can create many deepfakes, and they try to bypass these checks that
we try to enforce."
; Deepfake technology is becoming democratized, the techniques are diversifying,
and methods of evading detection are being developed. (It is questionable whether
government crackdowns will be effective.)
"Audio alone, you're more easily fooled, but audio plus video,
it would be little harder to fool you.
But apparently they're able to do it," Kambhampati said,
adding that it is harder to make a video that appears trustworthy.
; If the goal is to deceive people, audio alone is easier to fake than video.
Even so, convincing fake video is technically quite possible.
"We don't expect the problem to go away.
The biggest solution … is education, let people understand the days of
trusting your ears and eyes are over, and you need to keep that
in the back of your mind," Kambhampati said.
; Because of deepfake technology, people can no longer easily trust video.
Users need to keep this in mind.
To regulate "deep synthesis" content, China's administrative measures
implemented on January 10 require service providers to
"conspicuously mark" AI-generated content that
"may cause public confusion or misidentification" so that
users can tell authentic media content from deepfakes.
China's practice of requiring technology platforms to
"watermark" deepfake content has been widely discussed internationally.
; Countermeasures against deepfake-enabled crime are being discussed worldwide.
To regulate deepfake technology, the Chinese government requires a conspicuous
"watermark" on such content.