This week, we will practice English with an article asking whether AI, which is fast becoming a worldwide phenomenon, can learn to obey human law.
It is a fairly sophisticated piece of writing, so parts of it may not be easy to follow, but wrestling with prose like this will raise your reading comprehension considerably.
[The sentences in red are high-difficulty sentences. Think them through on your own before listening to the lecture; that is how your reading brain develops.]
[As you train your English, take time to appreciate the author's logic; it will strengthen your own reasoning as well.]
Can AI learn to obey the law?
If the British computer scientist Alan Turing’s work on “thinking machines” was the prequel to what we now call artificial intelligence, the late psychologist Daniel Kahneman’s bestselling “Thinking, Fast and Slow” might be the sequel, given its insights into how we ourselves think. Understanding “us” will be crucial for regulating “them.”
That effort has rapidly moved to the top of policymakers’ agenda. On March 21, the UN unanimously adopted a landmark resolution calling on the international community “to govern this technology rather than let it govern us.” And that came on the heels of the European Union’s AI Act, which more than 20 countries (most of them advanced economies) signed last November. Moreover, country-level efforts are ongoing, including in the US, where President Joe Biden has issued an executive order on the “safe, secure, and trustworthy development and use” of AI.
These efforts are a response to the AI arms race that started with OpenAI’s public release of ChatGPT in late 2022. The fundamental concern is the increasingly well-known “alignment problem”: the fact that an AI’s objectives and chosen means of pursuing them may not be deferential to, or even compatible with, those of humans. The new AI tools also have the potential to be misused by bad actors (from scam artists to propagandists), to deepen and amplify pre-existing forms of discrimination and bias, to violate privacy, and to displace workers.
The most extreme form of the alignment problem is AI-generated existential risk. Constantly evolving AIs that can teach themselves could go rogue and decide to engineer a financial crisis, sway an election, or even create a bioweapon.
(Abridged)........