UK Prime Minister Rishi Sunak's speech on AI
26 October 2023
Location: The Royal Society
Source: https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023
https://www.youtube.com/watch?v=emrHKQPQYQ4
Prime Minister Rishi Sunak makes a speech on how we have a global responsibility to understand and address the risks surrounding AI, in order to realise all its benefits and opportunities for future generations.
GLOSSARY
1 | Royal Society | the Royal Society of London |
2 | Moorfields Eye Hospital | Moorfields Eye Hospital, London |
3 | super intelligence | artificial superintelligence |
SCRIPT (568 words)
I’m delighted to be here at the Royal Society, the place where the story of modern science has been written for centuries.
Now, I’m unashamedly optimistic about the power of technology to make life better for everyone.
So, the easy speech for me to give - the one in my heart I really want to give would be to tell you about the incredible opportunities before us.
Just this morning, I was at Moorfields Eye Hospital.
They’re using Artificial Intelligence to build a model that can look at a single picture of your eyes and not only diagnose blindness, but predict heart attacks, strokes, or Parkinson’s.
And that’s just the beginning.
I genuinely believe that technologies like AI will bring a transformation as far-reaching as the industrial revolution, the coming of electricity, or the birth of the internet.
Now, as with every one of those waves of technology, AI will bring new knowledge, new opportunities for economic growth, new advances in human capability and the chance to solve problems that we once thought beyond us.
But like those waves, it also brings new dangers and new fears.
So, the responsible thing for me to do – the right speech for me to make – is to address those fears head on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.
Now, doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.
So, I won’t hide them from you.
That’s why today, for the first time, we’ve taken the highly unusual step of publishing our analysis on the risks of AI, including an assessment by the UK intelligence community.
These reports provide a stark warning.
Get this wrong, and AI could make it easier to build chemical or biological weapons.
Terrorist groups could use AI to spread fear and destruction on an even greater scale.
Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.
And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as ‘super intelligence’.
Indeed, to quote the statement made earlier this year by hundreds of the world’s leading AI experts:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
Now, I want to be completely clear:
This is not a risk that people need to be losing sleep over right now.
I don’t want to be alarmist.
And there is a real debate about this - some experts think it will never happen at all.
But however uncertain and unlikely these risks are, if they did manifest themselves, the consequences would be incredibly serious.
And when so many of the biggest developers of this technology themselves warn of these risks, leaders have a responsibility to take them seriously, and to act.
And that is what I am doing today – in three specific ways.
First, keeping you safe.
Right now, the only people testing the safety of AI are the very organisations developing it.
Even they don’t always fully understand what their models could become capable of.
And there are incentives, in part, to compete to build the best models quickest.