Anthropic challenges Pentagon on AI ethics, security



Reading


Anthropic will not let the Pentagon use its AI technology freely. The company worries its AI could be used for surveillance or for weapons without human control.

The Pentagon told Anthropic to agree by Friday or lose its contract. It also threatened to label Anthropic a national security risk.

The Pentagon said they do not want to use AI for illegal things. They promised not to spy on Americans or to build weapons without human input.

Anthropic has rules against using its AI for certain harmful purposes. Other AI companies already provide their technology to the military.

Some leaders in government did not like the public dispute. They believe the Pentagon should not pressure important US companies.


Questions

Answer the questions about the text. You may speak or write your answers.

What does Anthropic not want the Pentagon to do with its AI technology?

Anthropic will not let the Pentagon use its AI technology freely.

What did the Pentagon threaten to do if Anthropic did not agree by Friday?

The Pentagon told Anthropic to agree by Friday or lose its contract.

What promise did the Pentagon make about using AI?

The Pentagon said they do not want to use AI for illegal things.




Discussion

Discuss this article with your AI tutor for practice. Your tutor will help you with vocabulary and grammar.

Read a new article every day and
discuss it with AI at lingolette.com
All content and tasks are generated by AI, inspired by a real publication.