Artificial intelligence (AI) is the most transformative technology of our time. Businesses use it to manage vast amounts of data and individuals use it to simplify daily tasks, but generative AI holds far greater power: its ability to create entirely new content raises questions about ethical applications and the need to regulate its capabilities. Balancing innovation and responsibility has become one of the main challenges for AI stakeholders, prompting even the Pope to get involved.
Key ethical issues in AI
If there were any doubt about the transformative power of AI, the record-breaking growth of the chatbot ChatGPT proved the demand for generative AI beyond question. ChatGPT gained 1 million users within five days of its release. Within two months, that number had grown to 100 million, setting a record for the fastest-growing consumer application of all time.
That record has since been beaten by Meta's Threads, but it remains impressive. The growth of generative AI isn't all good news, however. Most groundbreaking technological changes come at a cost, and AI is no exception. Its growth raises a series of questions about the ethics behind the content it produces.
Questions raised by skeptics and users alike revolve around bias and fairness, privacy, accountability and transparency. When it comes to ethics and AI, the debate begins long before users start posting their results on blogs or other outlets.
The training of AI chatbots such as ChatGPT has raised questions about copyright and intellectual property, with experts concluding that, in the race to dominate the AI applications market, companies like OpenAI (the maker of ChatGPT), Google, and Meta were all cutting corners to stay ahead of their competitors.
Balancing innovation and responsibility
The need to balance AI innovation with responsible business practices has become so urgent that the topic was a key consideration at the recent G7 summit of world leaders in Italy. In a session that included G7 leaders as well as representatives from other countries and the head of the Catholic Church, participants sought to move closer to creating a “regulatory, ethical and cultural framework” for AI.
Data bias is one of the key issues surrounding AI. An application is only as powerful as the information used to train it, which is why major players may be ignoring copyright law in order to improve their datasets. Disputes between authors and other creators and major technology players have prompted the U.S. Copyright Office to consider how copyright law can be applied to generative AI.
Transparency is also a concern: Few users understand how AI algorithms choose which information to present to them, or whether those algorithms disclose their sources. Without this information, it becomes nearly impossible for users to identify false information, enabling the spread of misinformation and disinformation.
There are also questions about possible invasions of privacy, particularly where facial recognition technology blurs the line between security and unjustified surveillance. Accountability raises further issues, for example when AI is used to make or assist with medical diagnoses.
Decisions made by self-driving cars are another area where AI-based technology raises questions: Who is to blame if a self-driving car fails to stop at a pedestrian crossing?
Frameworks and guidelines
As early as 2020, a few years before generative AI became publicly available, Harvard Business Review noted that AI was helping companies do more but also increasing risks. At the time, ethics around AI were shifting from a murky topic debated by academics to one that the world's largest technology companies needed to care about.
There is now general agreement that AI needs to be regulated if it is to be used for the benefit of humanity, but a framework agreed upon by the U.S. government and other major industry players has yet to be established.
In Europe, the European Parliament adopted the bloc's first AI law, the Artificial Intelligence Act, earlier this year, with requirements to be phased in over the next 24 months. Provisions vary with the risk level of each AI application, on a scale ranging from minimal to unacceptable risk. Transparency requirements for generative AI tools include disclosing when AI is used and designing models to prevent the creation of illegal content.
While these regulations may sound abstract, they apply to the daily operations of countless businesses. Already, businesses of all sizes are using ChatGPT to create content for their digital marketing channels. Tools like Stay Social use AI to streamline social media content creation, saving businesses time and money. Content generated by these and other tools will need to comply with the upcoming regulations.
Roles of stakeholders
Developers, major AI companies like OpenAI and Google, governments, and users all have important roles to play in ensuring that AI is developed and used ethically. Developers must start by resolving the copyright disputes surrounding AI training data and by ensuring that AI-based applications cannot create and distribute misinformation.
Governments need to find ways to harness the economic power of AI without inviting bias or limiting access for certain groups in society. Users and consumers of AI-generated information need a clear view of how that information has been produced and delivered to them.
Our goal
The pervasiveness of AI in our society will continue to grow as various stakeholders explore its potential. As governments and organizations like the Partnership on AI work to create fair and empowering versions of the technology, users need to hold AI providers accountable.
Generative AI tools like ChatGPT are early applications of a technology with the potential to change our lives more profoundly than the advent of the internet. Harnessing this power proactively is critical to the ethical development of AI.
Conclusion
Balancing the excitement of new AI developments with the ethical concerns surrounding their use has been a hotly debated topic for the past few years. As regulatory frameworks emerge, it remains critical to ensure that AI is used ethically for the benefit of all humanity.
Jessica Wong is a member of Grit Daily's Leadership Network and the founder and CEO of nationally known marketing and PR firms Valux Digital and uPro Digital. She is a digital marketing and PR expert with over 20 years of success in improving her clients' bottom lines through innovative marketing programs aligned with new strategies.
