This week, senior officials from the world's two largest economies discussed how to manage a future dominated by artificial intelligence. It should be taken as a positive sign that the US and China can meet in Geneva for talks, but it is also worth pointing out that there is much work to do while we wait for governments and businesses to set the rules of engagement. There are things people can do to prepare for the inevitable moment when AI becomes fully ubiquitous.
Ask yourself how you should leverage AI. What are your own values regarding privacy, fairness, and the potential environmental impact of AI's myriad applications? As humans and professionals, we have a small window in which to begin defining those boundaries for ourselves.
This draws on the past decade, during which the benefits and costs of keeping digital devices constantly at hand have become clearer. These devices not only affect our well-being and mental health, but also promise to make our wishes come true, like the djinns and magic lamps of old tales. The moral of those stories is still very relevant today: never underestimate such power.
At the invitation of Google, I spent an interesting few days attending its Zeitgeist conference outside London. After listening to experts from both inside and outside the company, it became very clear that we are at the beginning of an era in which AI will be central to our lives: how we travel, receive health care, stay safe, plan the communities we live in, and communicate with one another. There is no going back now.
Ethical debates about how and when to use AI have been going on for decades. Engineers and experts working in the field are grappling with this question, but to chart a positive path forward, answers need to come from a broader segment of society.
The emergence of generative AI technologies has accelerated the debate and expanded its scope.
According to UK government data, “a third of people report using chatbots in their daily lives at least once a month. At the same time, self-reported awareness and understanding of AI is increasing across society, including among older people, people from lower socio-economic backgrounds, and those who are less digitally savvy.”
However, the same report suggests that “despite increased understanding, concerns related to AI persist.” An increasing proportion of the public believes that AI will have an overall negative impact on society, and words such as “scary,” “worry,” and “anxiety” are often used to describe emotions related to AI.
In the United States, a Pew Research Center survey shows similar results, with nearly all Americans recognizing the growing role of AI and a majority saying they are concerned about where the technology will take us. Fundamentally, as understanding increases, so does the need to ensure that public debates, whether for or against the use of AI, are based on facts rather than fear-mongering.
For example, while the potential of AI to make misleading and false information harder to detect has been repeatedly cited, little has been said about the risks to society from misinformation about the technology itself. A void in education about the realities of AI allows bad actors to exploit the general sense of unease surrounding the rise of this technology. Such malicious acts risk further polarizing and destabilizing communities.
Therefore, we need to discuss this issue in the most open and frank way possible so that the public can be more informed about the pros and cons.
Campaigns of the kind Google describes as “prebunking” are increasingly needed to help people identify and resist manipulative content before misinformation is placed in the public domain. But who should be held responsible for this? A mix of government and business? A better question might be how people can take control of their own destiny and educate themselves about the realities of AI, rather than waiting for governments and businesses to make the choices for them.
For example, during the COVID-19 pandemic we of course complied with public safety rules and regulations, but each of us also had to work out what staying safe meant, how to balance our physical and mental health, and how to navigate our daily lives, making decisions such as which vaccine to take and when. It was very stressful, to say the least, and because of the nature of the crisis, it all had to be done within a limited period of time.
At the moment, we have relatively more time when it comes to AI, but what is at stake is just as serious.
AI can help us do better, but it can also deepen the negative aspects of society. This is not just a regulatory issue; it is a practical subject as well as an ethical and moral one.
Published: May 17, 2024, 4:00 AM