field of artificial intelligence.
Bloomberg is reporting
that Apple is developing its own chatbot
to rival OpenAI's ChatGPT.
Internal engineers are
reportedly calling the service Apple GPT.
Shares of Apple spiked moments
after Bloomberg published its report.
There is no release date
yet for the product.
Right now,
it can only be used by Apple employees.
But generative A.I.
has reportedly become a major push
for the tech giant.
While there's much excitement
around the new technology,
there's also concern about
the potential dangers A.I.-generated
images can cause:
whether people can spot
fakes in the real world, and the panic
that could be caused
by an artificial image.
Donie O'Sullivan tonight has more.
We downloaded the Pentagon
deepfake image
and uploaded it into the platform.
When a fake image
purporting to show an explosion
at the Pentagon
went viral on Twitter in May,
it led to a brief dip in the stock markets.
For a moment,
there was concern
America was under attack. Here we go.
And it was able to
flag the Pentagon image as 78% fake.
Wow.
Rijul Gupta is the CEO of DeepMedia,
a company
that has built technology
to detect deepfakes.
So if we look at what actually came
through as fake in the detector,
a lot of it, again, is about the cloud.
The lighting conditions on the smoke
aren't what a real-world explosion
would look like.
Gupta's company is working with the U.S.
Air Force as the U.S.
government prepares for what some fear
will be a deluge of disinformation
through deepfakes.
The generative A.I.
capabilities are just going to continue
to grow.
From a national security perspective,
what are the concerns here?
That photo
that claims
that there was an explosion
at the Pentagon is one example,
certainly, that could be used to target
the decision-making process of U.S.
leaders.
The Pentagon itself has been concerned
about deepfakes for some time.
Matt Turek and Wil Corvey
are part of the Pentagon's
DARPA program.
DARPA: Shaping the Future
DARPA was set up in 1958
amid concerns
that the United States was falling behind
in the space race.
And today
it is
still tasked
with keeping up
with the latest
cutting-edge research and technology.
So what
this algorithm is asserting
anyway is that
this is computer-generated.
DARPA has been working for more
than five years with American research
universities and other institutions
to develop technology to spot deepfakes.
You can kind of see some anomalies,
maybe, in the building in the background.
Right. It doesn't look quite real.
It's really hard to generate,
apparently, this fencing.
Nation states
have always had the ability
to manipulate media.
I think what is changing here
is the level of skill
and resources
needed to create
those media manipulations.
And as we're seeing,
that continues to come down.
Gupta demonstrated
just how easy it is
to create a fake image in seconds
using a tool freely available online.
And you can basically type in
anything here and it'll create whatever?
Yeah, pretty much
anything we try to come up with,
it can create today.
Well,
what if we create a fake image of
Anderson Cooper doing karaoke?
Sure. All right.
Let's see if this one gets it.
It's just remarkable
how quickly this generates, right?
Yeah. Yeah. Here we go.
Which one's your favorite?
I like number four.
They've let his hair grow out.
Gupta then ran the deepfake Anderson
through his deepfake detection system.
So here
we see the results for our deepfake of
Anderson Cooper singing karaoke.
Interestingly,
it flagged his face
as being synthetically manipulated.
I guess it's
picking up on the lighting
on the synthetic version
of Anderson Cooper's forehead and cheeks.
Again, it picked up on this person
over here
as being synthetically manipulated.
There's nothing synthetic
about Anderson's cheekbones.
And while some deepfakes
are clearly satire,
"Hey, baby, I'm an absolute ball of zest and flavor."
Like this Twitch account
that streams hours
of deepfake Trump and deepfake Biden
insulting one another.
There are very real concerns
that this technology
will be used to cause
chaos and confusion
in the 2024 election campaign.
Professor Hany Farid studies
deepfakes and disinformation.
I think that the campaigns
need to start thinking very carefully
about how they are going to combat
these disinformation campaigns
because they are absolutely coming.
And Donie joins us now.
But for the record,
I have actually never done karaoke
and never would. The idea of it just...
not in a million years.
It doesn't interest me.
Has anyone out there
found, like, a beneficial use of it
other than amusement?
I mean, look... Yeah.
You know,
you can have a lot of fun with it.
I mean, it's fascinating.
We're seeing it obviously
being used in Hollywood.
Some people are saying this is great.
You can do special effects much easier.
But obviously,
we also know that SAG-AFTRA,
the actors' union, has problems
with that.
And then, of course, you know,
it doesn't take a lot of imagination
to see how this could be used
in a bad way for elections.
I do want to show you,
we created a few more deepfakes of you
using this technology.
That's you there, looking quite dapper.
These are actually done by an artist,
but still using the same technology.
That's you, you know,
running away from the burning flames,
in case you want to try
to have the Hollywood career. Yes, exactly.
If this doesn't work out,
you don't need a stunt double.
Thank you very much. Appreciate it.