
Anthropic’s new AI model raises fears about high-tech risks
Clip: 4/9/2026 | 6m 19s | Video has Closed Captions
Anthropic announced that it has started a very limited test of its newest AI model called Mythos. It's a model deemed so powerful that the company warned it could cause widespread disruption if it were released to the public. Anthropic is giving some companies access to Mythos to test and identify vulnerabilities, a move that is raising concerns. Geoff Bennett discussed more with Gerrit De Vynck.
Major corporate funding for the PBS News Hour is provided by BDO, BNSF, Consumer Cellular, American Cruise Lines, and Raymond James. Funding for the PBS NewsHour Weekend is provided by...

Anthropic announced this week it has begun limited testing of its newest AI model, Mythos, one the company says is so powerful it could cause widespread disruption if released to the public.
It's just generally better at pursuing really long-range tasks, the kind of tasks a human security researcher would do over the course of an entire day.
Obviously, capabilities in a model like this could do harm if in the wrong hands, and so we won't be releasing this model widely.
For now, Anthropic is giving more than 40 tech companies, including some rivals, access to Mythos to test it and identify vulnerabilities across systems.
But even that move is raising concerns.
For a closer look at all of this and the implications, we're joined now by Gerrit De Vynck, who covers AI for The Washington Post.
Thanks for being with us.
Of course.
So help us understand the concern here.
What specifically makes this model different from other AI models, and why, frankly, is there so much fear around it?
The specific concern being called out here is that this model is really good at finding gaps in software that hackers could exploit.
Right now, all software has bugs, but software is pretty complicated, and you need to really know what you're doing to sift through all that code and find something you could then use to hack into a system.
And what Anthropic, and some of the independent cybersecurity experts they've also given access to this model, are saying is that it can essentially do that automatically.
It can sift through all sorts of code.
Something that might take even very skilled humans months to do, it can do in minutes or hours.
And so the concern here is that if this were out in the public, anyone who wants to hack into any kind of software, for whatever reason, would be able to do it using this technology.
And that's why the company is saying at least they're sort of keeping it under wraps for now.
Keeping it under wraps, but also giving, as we mentioned, some 40 other companies, including Microsoft and Nvidia, access, in part to strengthen their own cyber defenses.
What do we know about that decision?
Does sharing it more widely actually reduce the risk or potentially increase it?
Yeah, I mean there is a bit of a precedent here in cyber security.
Often, if one company finds some flaw in another company's software, instead of just disclosing it to the public and creating a situation where that other company could be hacked, they will go behind the scenes and say, hey, we found this, you might want to fix it before the rest of the world figures it out.
So I think it's sort of in that tradition that they're doing this.
But of course, some people are saying, hey, now we have all these powerful tech companies with access to this allegedly extremely powerful cybersecurity tool. Is it also powerful for other things, things they could use to grow their business and get an edge on other companies?
So there are some complaints that, if this thing is really so good, why not let the rest of the world actually see it for themselves, and then we can decide what to do with it.
Logan Graham, who's one of Anthropic's researchers, suggested that if this AI program were fully released, it could force widespread software updates, eventually exposing weaknesses everywhere.
Is that a realistic scenario, or is he in some ways overstating it?
Yeah, potentially.
I mean, it's difficult because, besides these companies, no one has really been able to get their hands on it.
And I think we always need to take these big AI companies' claims with a grain of salt.
It's not the first time an AI company has said, "Oh my goodness, our new technology is so powerful, we should be afraid of it."
You know, it's great marketing, right?
Because if something is so powerful that it could, you know, change the world or cause chaos, it's also very powerful for doing other things.
And so, I think we need to be careful.
You know, I'm not necessarily saying that Anthropic is lying or misleading the public here.
I'm sure these concerns are genuine, but I do think that we're already in a situation where cybersecurity is pretty atrocious.
I mean, everyone's personal data has been hacked at some point.
If anyone really wants to get into a software system, and they have the resources and the incentive, they will probably be able to do it.
We already live in a world where software is broken and needs to be updated constantly, right?
Every time you open your operating system, it's probably pinging you to update the apps on your computer, right?
That's because of the cyber security situation we have right now.
And in the same way that this Mythos technology could be used to hack into computers, it could also be used to defend against hacks.
And so a lot of the cybersecurity experts are saying, look, yes, this is concerning, but the good guys can also use this technology to protect us.
And so it doesn't necessarily completely change that balance of power that we have right now.
Well, say more about that, because there is this strange disconnect, where even the AI companies themselves are now warning about the potential dangers, even as they race to release more powerful systems at the same time.
What accounts for that?
Yeah, I mean, I think it's very easy to point at that and say, look, what's really going on here? And each AI company is slightly different.
They have different incentives but it's true.
I mean, they are all in this extremely competitive race to build the best AI system.
It's very expensive to train these things.
It costs hundreds of millions of dollars to develop each new version of this AI technology and very few companies are able to do it.
And the entire tech industry is in agreement that this is the most important technology to come out probably since the internet itself.
And so there's a huge amount of money that is incentivizing the development of this technology.
At the same time, a lot of the people who work at these companies do legitimately believe there are real risks, that it could be misused for cyberattacks, that it could be used for misinformation.
Some people even believe that it could become so smart in the coming years that humans will struggle to keep it under control.
And so I do think those are real beliefs held by some people at these companies, and yet they are locked in this competitive dynamic.
Gerrit De Vynck covers AI for The Washington Post.
Gerrit, thanks again for being with us.