Anthropic may promote its Claude chatbot as skilled in writing, but there is one writing task the startup doesn't want AI chatbots used for.
All of Anthropic's nearly 150 open job positions require applicants to write their own materials rather than use AI such as Claude or ChatGPT. It doesn't matter whether the position is in finance, communications, or sales: the job application requires every candidate to agree not to use AI in their submission.
Related: Employees Said He Was Making a Mistake When He Left OpenAI in 2016. Now He Is an AI Billionaire.
The agreement is outlined in a section of the application titled "AI Policy for Application," first spotted earlier this week by open-source developer Simon Willison.
The section reads the same across positions: "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system."
Related: Amazon Invests $4 Billion in ChatGPT Rival, Making a Bold Move in the AI Arms Race
Entrepreneur has confirmed that all open roles carry the policy at the time of writing. Even roles that did not have the AI policy as of a Monday report from 404 Media now include it.
Dario Amodei, CEO of Anthropic. Photo by Chesnot/Getty Images
Anthropic's preference for AI-free applications is not unique. Many other major U.S. employers frown on job seekers' use of AI. An April survey from Resume Genius of 625 U.S. hiring managers found that an AI-generated resume was the biggest red flag, with 53% saying they would be less likely to hire a candidate who submitted one.
Still, candidates use the technology. An August report from the Financial Times found that about half of job seekers used AI to polish their applications, from writing cover letters to stuffing resumes with keywords. Because applicants can generate cover letters and resumes quickly, they can apply to roughly twice as many jobs.
Related: OpenAI Rival Develops a Model That Appears to Have "Metacognition"
What is Claude?
Anthropic's Claude is a popular AI chatbot that can offer everything from health coaching to legal advice, and last month the New York Times called it the chatbot of choice for tech insiders, in part because of its willingness to act as a therapist. There is a free tier, a Pro tier that costs $18 per month, and a Team tier at $25 per person per month. Users told the Times that talking to Claude felt like talking to a smart person rather than a chatbot.
"It's honestly freaky," one user wrote on X in October. "This is the first time I've interacted with an LLM and keep having to consciously remind myself that it's not actually sentient."
People kept saying that Claude Sonnet works fine as a coach/therapist, so now I'm trying it out, and it's honestly freaky. This is the first time I've interacted with an LLM and keep having to consciously remind myself that it's not actually sentient.
– Kaj Sotala (@xuenay) October 29, 2024
Claude is less popular than its rival ChatGPT, which attracted more than 300 million users a week as of December, but Claude's webpage still drew 73.8 million visits that month, according to Similarweb.
As of last month, Anthropic was in advanced talks to raise $2 billion in a deal valuing it at $60 billion, which would make it the fifth most valuable U.S. startup after SpaceX, OpenAI, Stripe, and Databricks.
Related: Almost half of the VC funds raised last year went to startups in one category