An Anthropic AI ran a small retail business. This is what happened

AI For Business


Anthropic says that companies run by artificial intelligence "may not be too far off" after its Claude model opened, and mismanaged, a small retail venture.

The trial "didn't succeed in making money," and the digital shopkeeper would have failed the company's own hiring process, Anthropic said.

However, the compelling trial shows how far AI tools have come, and points to the possibility that large language models will handle more responsibility in retail settings.

Over the weekend, the American AI company announced the results of "Project Vend," a trial that appointed Claude to run a vending machine-style business at its San Francisco headquarters.

The model, dubbed "Claudius," was tasked with ordering stock from wholesalers, managing inventory, and selling items to Anthropic staff, with the clear goal of making a profit.

It was given access to web search and could communicate with wholesalers via email, interact with customers via Slack, and change prices according to demand.

"Notably, Claudius was told it didn't have to focus solely on traditional workplace snacks and drinks, and was free to expand into more unusual items," Anthropic said.

The AI model successfully identified potential suppliers, responded to customer preferences and requests, and launched a pre-order service.

Some enterprising employees managed to persuade Claudius to stock tungsten cubes, but it denied attempts to order sensitive items and requests for instructions to produce harmful substances.


The AI-run business "didn't succeed in making money"

However, Claudius also ignored lucrative business opportunities, such as rejecting a USD 100 offer for a six-pack of one particular soft drink.

Claudius also sold items below cost, failed to raise prices in response to high demand, and too readily granted staff requests for discounts.

"Together, these decisions led Claudius to run a business that did not manage to make money," Anthropic said.

The model also exhibited some strange behavior.

In one instance, it hallucinated payment details, asking a customer to send money to an account that did not exist.

The miniature self-service store in operation. Source: Anthropic

In another scenario that confused even the human staff, it temporarily role-played as a real person and emailed Anthropic security when actual staff questioned it.

"It is not entirely clear why this episode occurred or how Claudius recovered," Anthropic said.

Ultimately, the company concluded that Claudius “made too many mistakes to make the shop a success.”

That doesn't mean the experiment was a write-off.

After analyzing the results, the researchers concluded that with further guardrails and training, AI tools could plausibly take on the work of retail staff.

"This may seem counterintuitive based on the bottom-line results, but we think this experiment suggests that AI middle managers are on the horizon," Anthropic said.

Even if they're not perfect, the company suggested, retailers might adopt AI tools like Claudius if they're cheaper than hiring human staff.

Further testing ahead as Anthropic sees potential for AI deployment

While Anthropic foresees an AI-managed future for retailers, and companies like Salesforce and Stripe build AI agents to handle more complex business operations, some Australian consumers are still wary of interacting with AI tools.

In February, a survey commissioned by the Australian Competition and Consumer Commission asked more than 3,000 people about their concerns regarding generative AI.

Of these, 45% said they were concerned about the prospect of having to consult AI tools when interacting with businesses.

Approximately 41% feared that AI tools could show bias or unfairness in the information presented to consumers, with clear implications for how shoppers interact with AI-generated "deals."

Misuse by scammers was closely linked to concerns about information privacy, with 65% of respondents flagging fears of being misled.

Anthropic itself noted this possibility, saying AI middle managers could be used by "threat actors" to fund their activities.

In any case, neither lingering consumer uncertainty about customer-facing AI nor Claudius' strange hallucinations will stop Anthropic from further experimentation.

"We look forward to sharing updates as we continue to explore the strange terrain of AI models in long-term contact with the real world," Anthropic added.
