New developments in artificial intelligence are unstoppable, despite calls by some experts to hit the brakes, said business attorney David Miller.
“AI is already embedded in so many businesses, from monitoring gas compression to medical advances,” Miller said. “The problem with suspension is that if we find a way to suspend in the United States, capital and labor will move elsewhere.”
Last month, an open letter published by the Future of Life Institute kicked off a national debate about whether AI is developing too fast to be safe.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. It had gathered more than 24,000 signatures as of Thursday.
“The problem with AI is that it’s hard to define, so it’s hard to regulate. Pausing it without defining it is impossible and not beneficial,” said Miller, a University of Oklahoma alumnus practicing in Dallas.
Legislation and regulation in specialized areas such as AI are typically shaped by the courts, Miller said. More than 100 AI-related lawsuits were filed in 2022, ten times as many as five years earlier. “Those cases set some parameters as they work through the system.”
Industry groups can also set standards for the use of AI, a much quicker route, but one that requires cooperation among competitors, Miller said.
“We are going through a very difficult time,” he said. “This is a fascinating question that we all have to deal with.”
A global survey released Wednesday found that 65% of business and IT executives believe there is data bias within their organizations, and 78% believe data bias will become a greater concern as the use of artificial intelligence and machine learning increases.
The report, “Data Bias: The Hidden Risk of AI,” was released by Progress, a company that helps customers use data intelligently to drive business outcomes. The study, conducted by research firm Insight Avenue, is based on interviews with more than 640 business and IT professionals (director level and above) who use data to make decisions and are using or planning to use AI and ML to support decision making.
When it comes to AI and ML, algorithms are only as good as the data used to create them. If the data set is flawed, or worse, biased, those erroneous assumptions will be incorporated into every resulting decision, the report states.
John Ainsworth, executive vice president and general manager at Progress, commented on the findings.
Business practices based on biased AI data can have serious consequences for those adversely affected, the study found, citing examples in retail, finance and healthcare.
A well-known retailer discovered a flawed hiring algorithm that selected only men for open technology roles, excluding otherwise qualified female candidates.
A financial institution found that it was incorrectly rejecting eligible loan candidates due to a flawed AI tool that discriminated by applicant’s zip code.
A company that uses AI to assign healthcare eligibility erroneously assigned black patients low health risk status, denying them the appropriate care they were entitled to, with adverse medical consequences.
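The mechanism the report describes can be shown with a minimal, hypothetical sketch (the data and scoring rule below are invented for illustration, not taken from the report): when historical records already reflect a skewed outcome, even a naive model trained on them will reproduce that skew.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (gender, years_experience, hired).
# The bias is baked in: no women in this data were ever hired.
history = [
    ("M", 5, True), ("M", 3, True), ("M", 2, False),
    ("F", 6, False), ("F", 4, False), ("M", 4, True),
]

# "Train" a naive model: hire rate per gender, a proxy that should never
# influence the decision but is learned directly from the skewed data.
counts = defaultdict(lambda: [0, 0])  # gender -> [hired, total]
for gender, _, hired in history:
    counts[gender][0] += int(hired)
    counts[gender][1] += 1

def score(gender):
    hired, total = counts[gender]
    return hired / total

# An equally qualified female candidate is scored lower purely because
# of the historical skew in the training data.
print(score("M"))  # 0.75
print(score("F"))  # 0.0
```

Real hiring or lending models are far more complex, but the failure mode is the same: the model faithfully learns whatever pattern, fair or not, the historical data contains.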
In the legal field, Miller sees the benefits of using AI to find relevant case law. But he also knows of six cases in which AI was used to draft wills and contracts that proved inadequate, raising the question: Who is responsible, the user or the provider of the document?
