Should we really treat AI agents like employees?

Should we treat AI agents as colleagues? No.

To put it plainly: one has to wonder whether the people who say we should have actually thought about how human employees ought to be treated.

It is easy to see the many ways in which advice insisting that humans and agents be treated alike falls apart. Don't expect an AI agent built into your marketing workflow to buy Girl Scout cookies. It won't care if you forget its birthday. It isn't going to join the March Madness pool (and if it did, don't bet against it, because it would probably be very good). Agents don't care what we think of them. We are not dealing with Star Trek's Commander Data here.

I realize that most people who argue that AI agents should be treated as employees don't mean it completely literally. At least, I hope not. Even as an analogy, though, it is weak, so let's consider whether it holds up.


Today's best-known AI agents, such as ChatGPT, differ from other tools in a few important ways but resemble them in most. Like computers, they are general purpose: they can do many different things. That is the better analogy. What distinguishes an AI agent from a computer is that it is far easier to get an agent to do the one thing you want it to do. We now encounter them constantly in search tools like Google, Edge, and Mozilla: enter a search term and you may get an AI-generated response, whether you wanted one or not.

The right way to think about AI agents in the workplace is that they turn the employees who work with them into something like computer programmers, with the big advantage that no programming skills are required. Here is an example.

Say you work in a marketing department, and one of your jobs is to find out what kind of advertising your client's competitors are running. A generation ago, that meant combing through television, print, and radio by hand. The arrival of the internet, and with it search engines, made the task much easier, but still not trivial: you had to find the names of your client's competitors and hunt for their ads online, one by one.

With an AI agent, you can simply ask it to find the ads that Ford's truck competitors are running, and you will get a response. The catch is that the response probably won't be good, at least not on the first pass. That is true of virtually any tool. The difference is that with other tools, software included, when the tool doesn't give us the results we want, we throw up our hands, call the IT department, and have someone fix it. That doesn't happen with agents. With agents, we mostly fix the problem ourselves.
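The shape of that interaction can be sketched in a few lines. Everything here is a stand-in: `ask_agent` is a hypothetical function (the article names no real API), stubbed with a canned first-pass answer so the pattern is visible.

```python
# Sketch of the request described above, with a stand-in agent.
# `ask_agent` is hypothetical: in practice it would call whatever
# LLM service your workflow uses; here it returns a canned answer.

def ask_agent(prompt: str) -> str:
    """Stand-in for a real agent call; returns a canned first-pass answer."""
    return ("Found 3 television ads from competing truck makers; "
            "print and radio were not searched.")

first_pass = ask_agent(
    "Find recent ads for pickup trucks sold by Ford's competitors."
)
print(first_pass)

# The first pass is rarely the final answer: a person still has to
# read it, notice what is missing (here, print and radio), and refine
# the prompt rather than handing the problem to IT.
```

The point of the sketch is not the stub but the division of labor: the agent responds instantly, and the human judges whether the response is any good.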

Depending on the task, setting up an agent can require a large amount of data, for example when the answers you want from it can be clearly labeled "right" or "wrong." That takes real work, and someone else has probably done it for you; but if you have the data, modern tools let you do the setup yourself.
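In the clearly right-or-wrong case, the setup amounts to scoring the agent's answers against labeled examples. A minimal sketch, with a toy lookup standing in for the agent and three hypothetical labeled questions standing in for the "large amount of data":

```python
# Minimal sketch of the "clearly right or wrong" case: score an
# agent's answers against a small labeled dataset. The agent is
# stubbed; a real one would be an API call in its place.

def agent_answer(question: str) -> str:
    """Stand-in agent: a toy lookup instead of a real model call."""
    canned = {
        "Is the F-150 a truck?": "yes",
        "Is the Mustang a truck?": "no",
        "Is the Silverado a truck?": "no",  # a wrong answer, on purpose
    }
    return canned.get(question, "unknown")

# Labeled data: the ground truth the setup work produces.
labeled = [
    ("Is the F-150 a truck?", "yes"),
    ("Is the Mustang a truck?", "no"),
    ("Is the Silverado a truck?", "yes"),
]

correct = sum(agent_answer(q) == gold for q, gold in labeled)
accuracy = correct / len(labeled)
print(f"accuracy: {accuracy:.2f}")  # 2 of 3 answers match the labels
```

With labels like these, "is the agent working?" becomes a number you can track, which is exactly what makes this the easy case.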

In a case like the ad example, however, the responses are not clearly right or wrong; they fall somewhere on a continuum from good to bad. The employee then has to interact with the agent and find a way to spell out exactly what is wrong with the answer it gave. Perhaps a sharper definition of what counts as advertising is needed; perhaps internet ads were missed. With luck, the employee can revise the prompt and get a better answer. Over time, the quality of the answers may drift, either because the world changes (what currently counts as a "truck," say) or because what you need changes. Employees must keep monitoring and improving the output.
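The monitor-and-refine loop described above can be sketched as follows. Everything is a stand-in: `run_agent` fakes answers whose completeness depends on how specific the prompt is, and `score` rates an answer on a 0-to-1 continuum rather than right/wrong, mirroring the "good to bad" scale in the text.

```python
# Sketch of the monitor-and-refine loop: score the answer on a
# continuum, and if it falls short, clarify the prompt and try again.

def run_agent(prompt: str) -> str:
    """Stand-in agent: more specific prompts yield more complete answers."""
    channels = ["tv"]
    if "print" in prompt:
        channels.append("print")
    if "internet" in prompt:
        channels.append("internet")
    return "ads found in: " + ", ".join(channels)

def score(answer: str) -> float:
    """Continuum quality score: fraction of desired channels covered."""
    wanted = ["tv", "print", "internet"]
    return sum(c in answer for c in wanted) / len(wanted)

prompt = "Find competitors' truck ads."
refinements = [" Include print.", " Include internet advertising."]

quality = score(run_agent(prompt))
for extra in refinements:
    if quality >= 1.0:
        break
    prompt += extra  # the employee spells out what "ads" should cover
    quality = score(run_agent(prompt))

print(f"final quality: {quality:.2f}")
```

The loop never ends for good: if the world shifts (the `wanted` list changes, in this toy), the score drops and the refinement starts over, which is the ongoing supervision the article describes.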

Don’t invite AI agents to happy hour

Insofar as the idea of treating agents like employees makes any sense, it is only in this respect: agents must be supervised. But agents are like spectacularly dim employees. They don't know whether they have produced something good or bad, or how to improve, unless someone tells them (the machine learning models underlying agents can learn this, but not for the kinds of tasks most agents are assigned).

So in what sense are AI agents like employees? Only in the sense that their output needs to be monitored and corrected. We do not set goals for agents, though we may set goals for the employees who use them. We do not give agents performance reviews, though we may evaluate the employees who use them. We should not expect agents to get better on their own. We cannot switch one to a different task without considerable effort. And we should not get angry at them.

And you shouldn’t invite them to happy hour.




