United States Corrupt Twattery

SilverHood

FH is my second home
Joined
Dec 23, 2003
Messages
2,371
He didn't ask Denmark, haha. Anyway, the Danish Iver Huitfeldt air defence frigates can't take part in missions right now, since their anti-air systems don't work.
 

Overdriven

Dumpster Fire of The South
Joined
Jan 23, 2004
Messages
12,901
So out of all the "Allies" Trump asked to send ships to the Strait of Hormuz

Italy: Rejected
Spain: Rejected
Japan: Rejected
France: Rejected
Norway: Rejected
Canada: Rejected
Australia: Rejected
Germany: Rejected
China: No response
UK: Rejected
Netherlands: No response
South Korea: No response

It's almost as if constantly belittling your allies and starting an illegal war had consequences *scratches chin*

Lol, the US thinking China will go to war against/work against one of their own allies. That's funny.
 

Scouse

Giant Thundercunt
FH Subscriber
Joined
Dec 22, 2003
Messages
38,499
Also, a recent study shows that Claude was the best-performing AI model when it comes to preventing people from planning to cause harm to themselves or others. No wonder the US admin doesn't want to work with them.
Yes. So why would you let your military spend money on it?

I said it was a bit of a "dick move". And it is. But that's all it is. Woe is the 'poor' one of the richest companies in the world - worth about $380 billion.

The US doesn't want its military to work with a company that stops its AI models being used by the military.

Do you and @Gwadien need me to say it's obviously because Trump is a baby rapist or something?
 

Scouse

Giant Thundercunt
FH Subscriber
Joined
Dec 22, 2003
Messages
38,499
No it doesn't. I can't tell if you are being contrarian for its own sake or you're just wilfully ignorant. It's used to call out companies the country considers an enemy of the state or a nation-level security risk. It's a designation intentionally designed to harm Anthropic and its reputation.
I get where you're coming from - and already said it can be fought in the courts.

However, consider this: if their AI model is being used to write military code, yet has built-in restrictions on military use, then even if nothing malicious is happening, that could feasibly and legitimately create a supply chain vulnerability. If Claude is embedded in a workflow that generates, tests, or maintains military code and it refuses certain tasks because of vendor safety rules (i.e. Anthropic's admirable principles), then that could easily be seen by the DoD as a denial-of-service vector.

If the US military depended on Claude for code generation, reliability becomes a national security issue. And these are hallucinatory AI models we're talking about here.

According to Anthropic, Claude is explicitly designed to avoid autonomous targeting, weapons development, and 'harmful code'. All the stuff the US military wants.

I know we all hate Trump. But really, with your coder head on, do you not see this as a response to the US military saying "disable these safety protocols, or we can't use you", Anthropic telling them to jog on, and the US going: well, we simply can't allow your code generation facilities to produce code for US military applications. It's a clear point of operational risk.

Small issue. Multi-billion-dollar company, worth about $380 billion, backed by Amazon, Google and a load of venture capitalists. Boo, fucking, hoo tbh.

But also - kudos from me. I'm glad there's a company that doesn't want military using it to produce stuff that kills...
 
