Get AI For Work™: Are Your Tools Really AI? (Video) – Employee Rights/Labor Relations




Employers face a patchwork of federal, state, and local laws with unique definitions and requirements for AI technology in the workplace. Understanding these legal nuances and proactively evaluating the capabilities of each tool before implementation is essential to maintaining compliance and minimizing liability.


Transcript

Eric Felsberg

Long Island Principal

Hello, everyone. Welcome to the latest episode of “Get AI for Work.” My name is Eric Felsberg. As always, I’m joined by my colleague and friend Joe Lazzarotti.

Joe, we have an interesting episode today. It may seem a little basic at first glance, but it’s actually very important. The question we’re going to talk about today is: is my AI tool actually AI? It may seem simple, but it can have important implications when you’re putting together a compliance plan for using AI in the workplace.

Joe, any thoughts?

Joseph Lazzarotti

Tampa Principal

First of all, nice to see you, Eric. I hope all is well. There’s a lot to think about on this question, both in terms of what the technology is and isn’t, and in terms of what people in different parts of the organization believe it is. People in IT may think about it differently than people in human resources or marketing. And then the law determines what AI is, and in our position and in our world, that’s often what matters most. As you said, it’s something you really have to think through when answering that question, because it can have pretty significant consequences.

Felsberg

Much of the technology we use on a daily basis, perhaps without really thinking about it, is AI. The question is: is it the kind of AI we’re concerned about? Is it the kind of AI that regulators are trying to regulate? When we field AI questions from clients, that’s often what they want to know, at least in the current environment. You touched on it in your comments: what do I need to worry about from a legal standpoint? I’m using this tool, probably as part of an employee or applicant selection function. The AI promises to streamline processes, increase efficiency, and allow you to vet more applicants than you could without the tool. So what do you need to worry about from a legal perspective? Let’s start with the federal perspective.

From a federal perspective, one of the things I think about a lot is whether there are any potential disparate impact issues under Title VII of the Civil Rights Act that I have to worry about, and whether there is a duty to validate the tool. As I try to answer these questions, I think about things like the Uniform Guidelines on Employee Selection Procedures. They are still good law, but they were written in the 1970s. In short, whenever there is a selection mechanism, it needs to be monitored for disparate impact, and if statistical evidence of disparate impact is found, the tool should be validated. When the guidelines were written, the drafters were probably thinking about cognitive and physical tests, such as the ability to lift a certain amount of weight, as part of the selection process. They are still good law today.

The Uniform Guidelines apply when AI tools are used to make selections. If you look at how they define what they are trying to regulate from a selection perspective, it’s very broad: essentially any mechanism used to select employees. So at the federal level there is a very broad definition of what may qualify, and you have to keep that in mind when you’re talking with employers about using some of these technologies.

Joe, there are a number of state and local laws emerging that approach this from a slightly different perspective. Maybe you can comment on that.

Lazzarotti

There are many layers to this analysis. Often, when talking to clients or colleagues, the focus is on the generative AI tools that most people are using. We hear a lot about agentic AI today, and there are also more traditional machine learning systems that have been around even longer. That’s one way to categorize the different types of AI and give the term some definition.

Then, to your point, you start looking at the federal level and realize the concept is so broad that you need to see what other frameworks you’re working under. There’s the EU AI Act; Eric, you’ve done a lot of work on the New York City law; and there are the regulations under the California Fair Employment and Housing Act, which have been issued and finalized. The definition of AI in one of those may not be exactly what New York is looking at. If you’re a multi-state organization and you’re thinking about deploying so-called AI, or any of these tools, nationally, you’re going to have to really consider those definitions and make those determinations. For example, the Colorado law speaks in terms of a tool that supports a decision, while the California rules speak in terms of a tool that facilitates a decision. Do those terms mean different things? They’re largely undefined, so how do we know what they mean? Those are just some of the distinctions. Is it a significant decision? How involved is the tool compared to the human? All of these things bear on the question, “Do we need to follow that law?”

There are also other provisions in specific frameworks. Another area where California is having some influence on AI regulation is the California Consumer Privacy Act. That law applies only if the entity is a “business” under the statute: for example, it does business in the state and determines how personal information is handled, and it must meet one of three thresholds. One of them is that annual gross revenue must exceed a certain amount; the current threshold, based on the prior year, is $26,625,000. So a tool may be AI, but that law may not apply to you. As you deploy a tool, the concern with using it for a particular purpose in a particular jurisdiction is determining whether it is AI under the applicable law.

Felsberg

What you’re describing is exactly right. This is difficult for employers, because right now we face a patchwork of laws across the country when considering the use of AI tools in the workplace. So far it has been somewhat manageable: even though AI is getting a lot of attention, laws specifically addressing it have been enacted in only a handful of jurisdictions, so it is still relatively easy to keep track of which laws require what and whether our tools are subject to them. But that means you need to look at each tool against those laws to understand exactly what it does, what type of task it performs, and how it does it. You may need to speak with the vendor or developer to determine exactly how the tool helps you complete that task.

Again, the consequences can be significant. You mentioned New York City; my office is just outside of New York City, and we often speak with New York City employers about the city’s AEDT law. Why would you care whether you meet the definition? Because if you do, you could have to conduct a bias audit, and under the New York City law you may need to post that bias audit on your career site for the world to see. Certain notices must be issued, and then what goes in those notices, and what obligations flow from them? Then you have to think about what Colorado and California require, and the other states that have passed their own laws. We see some of the same themes crop up in many of these laws, but there are subtle nuances, and what may be considered AI under one law may not be under another.

As you say, these laws are so new that there isn’t a lot of litigation or precedent to draw on. We’re all in the same boat here, feeling our way and doing our best to get compliant and stay compliant. This is an important issue. As I said at the beginning, it may seem like a very simple question, “Is this tool really AI?”, but it is a critical decision point for employers as they consider implementing these technologies.

Lazzarotti

Yes, and there are two additional points. One is that the analysis can change depending on whether the organization is acting as an employer. When we put on the employer hat and are making significant employment-related decisions, a business may conclude that one of these laws applies. But applying the same technology to a different use case, such as deciding which products to sell or carry in a particular market, may not trigger the same law at all, because of how the law defines significant decisions: those activities are not the types of activities that Congress or the states have determined to be significant, such as health care or housing decisions, which are common categories in AI laws. That’s one thing.

The other thing is that we’re seeing a lot of clients that serve enterprise customers and may have contractually agreed to certain restrictions on the technology they use, and in some cases that technology is AI. What does that contract say? Does it prohibit the use of AI, or does it impose specific controls on tools you plan to use in performing the contract? The contract’s notion of AI can be very different from how New York City, California, or other state laws define it; it’s simply what you agreed to. If you think about the organization as a whole, there are all these different contexts in which “AI” can carry some meaning. It is very important to understand how the applicable circumstances affect when a particular technology may be used and whether there are rules governing its use.

Felsberg

Absolutely. One last note on this: you need to consider these issues before you go live. There’s often a lot of excitement, perhaps within a particular business unit, about the clear benefits AI can provide. They’re excited about the efficiencies and are often in a hurry to implement right away because of the potential business impact. Having a gatekeeper in place to consider these issues before go-live is critical, because it’s always easier to address them, understand the path forward from a compliance and liability perspective, and identify the types of risks involved through a proactive assessment before going live.

Joe, as always, great discussion with you. I think we could talk about this for a long time, but I won’t do that right now. I hope you, our listeners, found this discussion useful. If you have any questions or would like us to cover a specific AI-related topic, please feel free to contact us. We have a dedicated email address at AI@JacksonLewis.com. Thank you for your attention.

The content of this article is intended to provide a general guide on the subject. You should seek professional advice regarding your particular situation.


