Will artificial intelligence bring about the end of civilization?
The question itself sounds like theoretical hyperventilation; the sort of thing people will laugh at someday, along with predictions that the world would never develop a market for computers, or that every home would have a nuclear-powered vacuum cleaner by the end of the 20th century.
And yet, as much as I tend to scoff at the notion, I can't help noticing that even the most predictable uses of AI, when followed to their logical extensions, seem to end in disaster, at least in theory.
Dave Wright, co-founder and CEO of a Utah-based e-commerce startup that currently employs 1,400 people worldwide, is a big believer in AI. He also exudes an optimistic energy.
At the Silicon Slope Summit on Artificial Intelligence held at the Utah Valley University campus last week, he demonstrated how his company could use AI to create a 246-page product sheet for a ceiling fan, with content in eight languages, in about seven minutes, drawing on roughly 300 trillion data points for reference.
But when I asked him after his presentation what he feared about AI, the first thing he cited was a by-product of this kind of instant analysis, which would take other, more traditional companies months to produce: People who know how to use AI will be far more successful in business than those who don't.
“My biggest concern is the economic inequality that is starting to happen,” he said, before adding, “I think the biggest thing is the widening gap between the haves and the have-nots.”
As that chasm widens, one of two things can happen: the have-nots may rise up in revolutionary zeal, or (more likely in the U.S.) they may lobby governments to regulate the haves and keep them from reaping such advantages.
A third possibility, of course, is that we all adapt, as the economy did when automobiles replaced horses, or computers made typewriters obsolete. Using AI could become as commonplace in business as text messaging is today. This could lower barriers to entry and create a new generation of wealthy, job-producing entrepreneurs.
But ask him what he’s most excited about, and he points to health care. Not only will AI help develop treatments tailored to a particular patient’s DNA, it will also help doctors diagnose problems more quickly.
He then quickly jumps to the logical conclusion of that thread: “I think in a few generations we might start talking more about immortal humans.”
It’s interesting to ponder what immortality would mean for mortal beings. Among other things, it could eventually lead to an overpopulated world in which people kill each other to survive.
It seems we can never escape the end times.
The world is at an interesting crossroads when it comes to artificial intelligence and the potential of machine learning, for better or worse.
The UVU summit coincided with the European Union’s decision last week to advance legislation that could be a big step toward AI regulation. Known as the AI Act, it would restrict the use of facial recognition software and require makers of products such as ChatGPT to be transparent and disclose the data their programs use.
In the U.S., lawmakers worry they are falling behind. The New York Times notes that “policymakers everywhere from Washington to Beijing are now vying to control an evolving technology that worries even some of its early developers.”
But they act from different motives. China, for example, is concerned that chatbots will violate censorship laws.
Mr. Wright’s concerns about economic inequality and health care are tame compared with those of some others. Earlier at the same UVU summit, Utah Attorney General Sean Reyes spoke out about so-called deepfake videos and audio clones, which he said had already led to several fake kidnapping extortion crimes: perpetrators covertly record a person’s voice, use AI to create an audio file calling for help, and contact relatives to demand a ransom.
He wondered how hard it might one day be to prove one’s innocence in court against compelling fake video evidence. The answer may lie in both private-sector safeguards and careful government regulation.
“If we don’t have certain safeguards built into AI’s DNA, we will be far behind and constantly catching up,” he says.
But in a world where outlaws have access to the same technology, law has its limits. And if the U.S. passes laws that are too restrictive, countries with a smarter approach could gain an edge in job-creating technology.
I have serious doubts about man-made immortality and machines dominating the world. But Wright’s warning against rushed government regulation makes a lot of sense.
“If you regulate, you’re slowing down everyone who follows the rules,” he said, pointing out that even an entrepreneur and tech CEO like himself doesn’t know how to design effective regulations. “How would policymakers know?”
I also like his optimistic attitude. If civilization were to collapse, it would more likely be because of viruses or other destructive mechanisms, he said. “I think that’s much more likely than, say, AI.”