...way we live. But the rise of ChatGPT and the other fast-advancing systems has been accompanied these past months by a sharp increase in anxiety. It's the ability of AI systems to teach themselves, or to grow in ways we don't fully understand yet, that has researchers predicting a large-scale catastrophe.
That makes it the business of the United Nations Security Council, and today, for the first time, they began the dialogue about its potential impact on global peace and security. For the UK, which holds the rotating presidency of the Council, the challenge is to mitigate the risk AI presents through coordinated action, while making sure we all benefit from the tremendous things it could deliver.
Let's speak to our North America correspondent Nomia Iqbal, who is watching that session. So what has the Foreign Secretary, James Cleverly, been saying, Nomia?
Yeah, so Christian, this was a big deal for the UK, for the Foreign Secretary James Cleverly. The UK really wants to show that it can compete with the global giants in what is the most hyped-up area of tech, that it can be a leader, geographically and intellectually, when it comes to artificial intelligence. So, you know, you mentioned there that they are holding the rotating presidency this month, and they've wasted no time putting on this Security Council meeting. I spoke to the Foreign Secretary, Mr Cleverly, just before the meeting started, and I started off by asking him what was the point of this meeting, what was he hoping to achieve with it.
Well, AI is having, and will have, an amplifying effect, an accelerating effect, on all the things that we currently use technology for. And that can be used for good: for medical research, for research on climate change, for analysing big data sets. But there are also potential malign uses of AI. So what we are saying today at the UN Security Council is that we need to work internationally to understand the risks, to look to mitigate those risks, and also to put some structures and regulations in place. That can only be done internationally; that's why we discussed it.

Give me an example of what you think is malign.
Well, the use of AI, for example, to develop weapons. Just as AI can be used to develop drugs that could perhaps solve some of the disease challenges of the world, it could potentially be used to create bioweapons. So that's the kind of thing that we need to harvest the positives of, and protect ourselves from internationally.

Are there any countries that you're particularly concerned about in terms of their development of AI?

Well, I think one of the things we have to understand is that it wouldn't just be state actors that could potentially use AI for negative or malign purposes. So it is about thinking about how we develop it, what we release publicly, what is more tightly held. Now, I don't pretend to have all the answers, no one does, but working together we can analyse some of the challenges and look carefully at what we do, as I say, to protect ourselves internationally whilst also harvesting the benefits.

I mean, you've got countries like China that are behind the US and ahead of the UK. Does that concern you?

Well, one of the people briefing the UN Security Council today is a Chinese technologist; we also have a British technologist there. It is in everybody's interest, every nation's and every person's interest, to get this right, and better that we get it right together than in silos, isolated from each other. That's why the UN Security Council, I think, is a really good starting point. The UK is hosting an AI Safety Summit later this year, and we will play our part, working with the international community to harvest the benefits whilst protecting ourselves.
So that was the UK Foreign Secretary James Cleverly, chairing that meeting. And Christian, it was one of those meetings where there was a lot of talking, there were lots of statements, and no concrete goals were achieved at the end of it. But the UN Secretary-General, António Guterres, was in that meeting, and he did say that his vision is for the UN to create a sort of governing body to govern artificial intelligence, in the way it has bodies to govern the use of nuclear weapons, aviation, energy, and the challenges of climate change as well.

You talked to James Cleverly there about this Chinese technologist that was in the room giving evidence, but what does the Chinese camp think about this generally?
There was some tension there, because China is, you know, leading, sort of second just behind America in terms of the advancement of artificial intelligence. And a spokesperson for China said that if there are going to be these UN bodies or rules, they should reflect the views of developing countries; it shouldn't be the Western nations that decide how artificial intelligence is governed, and other countries should be able to establish their own regulations. There is some tension between China and America, of course; there always is. There are some reports that the US is looking to limit the flow of powerful artificial intelligence chips. And the US, in the meeting, didn't directly address China's concerns, but they did make a dig at China, accusing it of using technology to monitor ethnic minorities.
Thank you for that. Joining me from San Francisco, California, is Anthony Aguirre. He is the executive director of the Future of Life Institute, one of those people who knows everything there is to know about this issue. Let's talk about what was proposed today. The UN says it would like to form a governing body. What would that look like, in your opinion? What would it be tasked with doing?
Well, I think there are a lot of models that people have proposed for bodies like this; this is still very early days. I think the crucial thing is that we have a body that can address some of the urgent and profound risks that these advanced AI systems are starting to pose. These are broad and extreme, and I think we're really at a new stage, as we were at the beginning of the nuclear era, where we have to change the way that we do things internationally, and have new bodies that can put the safety of humanity as a whole first.
I asked ChatGPT today what risks it presents to humanity, and it came up with a fairly exhaustive list, actually. Let me put them on screen for you. It says there are cybersecurity vulnerabilities; there's the weaponisation of AI; there's misinformation and disinformation; data privacy and mass surveillance; bias and discrimination, which would be the unfair targeting of specific populations; the unemployment it could create, which of course leads to social unrest; and finally the AI arms race that Nomia talked about. That is a lot to get your head round, and it is developing so quickly. Do you subscribe to the theory that perhaps we're already too late to put a governing body around all those issues?
I don't think we're too late, but I think we have to act quickly. And I do think, you know, I was one of the initiators behind an effort to pause AI; we had a large open letter. I think we do need to slow down and take a little break from the breakneck and competitive speed of development, so that we can allow the governance, the regulation, and the creation of new institutions to catch up. So I think we need to act fast and slow down the race to get these super-powerful AI systems.
Just reflect for me, if you would, on the view of some within the Chinese camp that if there is a governing body, it needs to reflect the wishes and the demands of developing nations. You're sitting there in San Francisco, and it is true that in the social media space it's America that has commanded it, it's America that has led the way, to the exclusion of other countries around the world, sometimes.
Well, I think there are lots of different arenas that AI is going to play in. It's going to be incorporated into many parts of our economy, and different countries are going to have different ways that they want to regulate and, sort of, govern how AI plays out in their countries and in their societies. I think there are issues that transcend individual countries and governments, which are the ones that threaten humanity as a whole. So, in the same way that the US and the Soviet Union came together during the Cold War to make agreements about nuclear weapons, I think countries are going to have to come together at this very high level and make agreements about the most powerful AI systems and how we're going to keep them safe and under control.

But at the moment, China is lagging behind America in the development of artificial intelligence. You can see a scenario, though, where they might catch up, and they might see this as the shortcut to not only an economic advantage but also to China, you know, becoming the most powerful nation on Earth. If they were to go their own way and not subscribe to a rules-based system, would that undermine the rest of the club?
Well, I think one of the reasons we need to get together and coordinate internationally is that the alternative is this sort of pernicious race: the idea that somehow, by racing to these ever more powerful systems, delegating more to them, giving more capability and more decision-making to systems where we don't really understand how they work, we're somehow going to win. And I think getting into a race like that is a race that people are not going to win. You know, the human race is not going to win; the AI is going to win that race, at all of our expense.
Just on security and defence, which was part of the discussion today: there are concerns that AI could be used to set false targets, or to put satellites off targeting, and we might in the future have to think of very different ways of approaching defence. Is it incumbent, then, on NATO and the Western allies (and maybe we've learned some lessons during the war in Ukraine) to find their own solutions to some of these problems?
Well, I think, you know, countries are going to make their own decisions about how to incorporate AI into the various parts of what they do. One of the things we've been concerned about is the incorporation of AI into command-and-control systems; there's a big push for this. And one of the things we really want to be careful not to do is over-delegate, and definitely not, this seems sort of obvious, but definitely not incorporate AI into nuclear command-and-control systems. That just seems like a terrible idea. Our organisation actually put out a short film today illustrating the risk of over-delegating to AI, including in nuclear command and control. I think we could all agree this is a bad idea, and that we should keep certain high-stakes decisions really in human hands. So I'm hoping that some of the low-hanging fruit for international agreements can be things like: we're not going to incorporate AI into nuclear weapons; we are going to keep human hands on certain things, like the decision to take lives, and on making large decisions in command and control. I think there's low-hanging fruit where we can all agree on just a sort of base level of sanity in how we use AI. That's where I would start.
Anthony Aguirre, very interesting: non-proliferation of AI within nuclear command and control, a discussion no doubt for the future. Thank you very much indeed. Around the world and here in the UK, you're watching BBC News.