The AI company Anthropic, known for its chatbot Claude, places a remarkable requirement in its job advertisements: applicants must guarantee that they will not use AI assistants to prepare their application documents. How does the company justify this?
“While we encourage our employees to use AI in their day-to-day work, we would like to get to know your personal motivation and non-AI-based communication skills during the application process,” the job advertisements state. Applicants must explicitly confirm that they agree to this requirement.
The requirement appears in almost all of the roughly 150 positions currently advertised, from software development and finance to communications and sales. Only a few technical roles, such as “Mobile Product Designer,” are exempt.
Paradoxical situation
The irony of the situation was pointed out by open-source developer Simon Willison (via 404 Media): the very company that develops one of the leading AI assistants, Claude, feels compelled to bar its use in its own recruitment process. The stated aim is to ensure independent thinking and authentic communication.
Whether the rule can be enforced in practice remains questionable, however: the language models developed by Anthropic and its competitors now produce text that is almost indistinguishable from human writing.
At the same time, Anthropic has faced criticism of its own: last year, the company’s data crawler systematically ignored access restrictions and scraped some websites millions of times, precisely to collect training data for the very AI models that are now off-limits in job applications.