Using AI in Business: Thought Leaders Explain Best Practices in Kenan Institute Q&A

AI For Business


Editor’s Note: Innovation Thursday – a deep dive into emerging technologies and companies like the Startup Spotlight – is a regular feature of WRAL TechWire.

+++

Chapel Hill – As the debate over artificial intelligence continues to gain momentum, experts continue to tout AI’s potential benefits. However, organizations and employers may wonder how best to introduce this nascent technology into the workplace. Specifically, what framework should organizations and employers use when considering this new technology?

The Kenan Institute of Private Enterprise put these questions to two individuals with extensive and unique experience in the world of artificial intelligence. Professor Mohammad Jarrahi of the UNC School of Information and Library Science studies not only the impact of AI on jobs, but also the broader impact of new information and communication technologies. Phaedra Boinodiris is a business transformation leader in IBM's responsible AI consulting practice and a member of the leadership team of IBM's Academy of Technology.

We asked Jarrahi and Boinodiris about potential concerns and issues surrounding AI adoption. Phaedra's answers draw on an excerpt from her recent book, AI for the Rest of Us, co-authored with Beth Rudden and released in May.

Large language models such as ChatGPT have ignited AI-related discussions, especially in the last few months. What is the current level of AI adoption across enterprises, and how do you see this changing in the next few years?

Mohammad Jarrahi: OpenAI's ChatGPT sparked conversations around AI, turning it from a background technology primarily of interest to tech enthusiasts and business leaders into a topic that captures the imagination of ordinary people. There is clearly renewed interest in AI, fueled largely by ChatGPT; a recent Accenture survey found that 63% of organizations are prioritizing AI over other digital technologies.[1]

Tech giants such as Amazon, Facebook, and Google have already invested billions of dollars in various AI systems, but I expect investment in AI technology to spread to all industries. I also expect that as competition over AI intensifies, the technology, which often hides in the background or (in the words of Jeff Bezos) stays "beneath the surface,"[2] will emerge from the shadows. Foregrounding AI will become a prominent strategy in product offerings and marketing.

As companies adopt AI tools, how can they engage employees in conversations about new technology integration, ethical considerations, and potential job changes?

Jarrahi: Adopting a "human-centered AI" mindset can help an organization incorporate diverse stakeholder perspectives into its AI adoption process. Employees and other stakeholders should be actively involved in strategic decisions about AI, and in setting this strategic direction the organization should emphasize that AI's purpose is to augment employees' work, not replace it. Employees can help organizations determine what I call the optimal symbiosis between humans and AI within various organizational processes,[3] and how to redesign those processes ethically and effectively. Years of research on IT implementation have shown that a narrow focus on efficiency, often a short-sighted approach that leads to automation and labor displacement, can cause long-term organizational problems, demoralizing employees and even inviting unethical or illegal behavior.

Phaedra Boinodiris: Positive reinforcement is one of the most powerful tools we have for encouraging how humans behave. Here are some examples of proactively reinforcing the behaviors needed to curate AI responsibly:

  • Ensure leaders consistently communicate the importance of managing AI responsibly and provide clear guidance on how to do so. Leaders can recognize and celebrate teams that actively follow ethical practices.
  • Clearly define and communicate responsibility for addressing and mitigating the various impacts caused by AI models. Reward individuals who take responsibility and actively work to minimize bias and promote fairness.
  • Encourage AI practitioners to actively involve a diverse and inclusive team in discussing potentially disparate impacts before completing a risk assessment. Recognize and appreciate the efforts of those who foster collaboration and seek out diverse perspectives.
  • Establish an AI ethics leader and give them sufficient authority to make important decisions, including the power to stop projects that raise ethical concerns. Recognize and support their role in advancing responsible AI practices.

Algorithms and AI systems have been shown to reinforce biases and structural inequalities, especially when trained on data that reflects those inequalities. Phaedra, in one of your videos you say that culture and auditing can be ways to combat bias in AI systems. How can companies implement these approaches in practice?

Boinodiris: An IBM Institute for Business Value study released in April includes statistics that illustrate this diversity gap on organizations' AI teams. Women, who make up 33% of the overall workforce, make up only 6% of AI teams. Black employees and other people of color make up 10% of all employees, but only 6% of AI teams. And LGBTQ+ team members, who make up 4% of employees, make up just 1% of AI teams. When a single homogeneous group chooses which data are used to train the AI models that directly affect our lives, you can rest assured that bias will be calcified into those systems and that systemic inequities will persist.

Figure 1: Differences between the proportions of women, people of color, and LGBTQ+ team members among company employees overall and on AI teams.

Practitioners who develop AI and work on AI governance must be diverse and inclusive to reduce the risk of harm. In addition, when working with data, these professionals must take responsibility for ensuring that the data adequately represents the underlying population, that it was collected with consent from ethical sources, and that it is correct. But we also know that people typically model what others do, not what they say. So how do we model the culture needed to curate AI responsibly? I believe an organization should prioritize three key elements as part of that culture:

1. Stay humble and maintain an open, growth mindset. Organizations need to recognize how much they have to learn. They must be open enough to look in the mirror and examine their inherent biases, scrutinizing which skills and competencies are valued more than others and why. Who is given unfair access to the megaphone? What does it mean to be considered "technical" in the organization, and what is the dominant paradigm behind that judgment? Prejudice is an emotional commitment to ignorance.

2. Prioritize diversity and inclusion. Organizations should make it a priority to ensure that the teams collecting the data used to train AI models are diverse, inclusive, and treated fairly and equitably. As organizations consider the composition of their data science teams, they should ask: How many women are on this team? How many minorities are on this team? How many worldviews are represented on this team? Team members can include experts in their fields who can credibly speak to different worldviews and experiences.

3. Aim for an interdisciplinary approach. Organizations should make it a priority to consistently communicate the need for teams to be interdisciplinary in nature and to provide opportunities for them to train together regularly. Earlier chapters emphasized the important role of social scientists; how is that value communicated to the development team?
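One widely used starting point for the kind of auditing Boinodiris describes is a disparate-impact check, which compares the rate of favorable model outcomes across demographic groups. The sketch below is illustrative only (the data and threshold are not from the interview); it uses the "four-fifths rule" common in US employment-law analysis as a flag for further investigation.

```python
# Minimal sketch of one common bias-audit check: the disparate-impact
# ratio (selection rate of a protected group divided by that of the
# reference group). Data below is hypothetical, for illustration only.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_outcomes, reference_outcomes):
    """Ratio of selection rates. Values below ~0.8 (the 'four-fifths
    rule') often flag a disparity worth investigating."""
    return selection_rate(group_outcomes) / selection_rate(reference_outcomes)

# Hypothetical model decisions (1 = favorable, 0 = unfavorable)
reference = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% favorable
protected = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]   # 40% favorable

ratio = disparate_impact(protected, reference)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
```

A check like this is only a first signal; as the interview stresses, interpreting and acting on it still requires a diverse, interdisciplinary team.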

How can users of AI systems, such as a company's employees, avoid becoming over-reliant on artificial intelligence at the expense of human intelligence?

Jarrahi: As AI integration accelerates, automation bias will become a major concern, making employees and organizations overly dependent on these systems. Addressing this problem requires what I call "AI literacy" at both the individual and organizational levels. AI literacy includes not only data-driven analytical skills but also a comprehensive understanding of machines' capabilities and limitations, so a key component involves distinguishing AI-friendly tasks from those that require human intervention. Moreover, deploying AI systems will require redesigning processes with a "human-in-the-loop" approach in mind, in which AI capabilities augment employees rather than replace them.

Are there specific areas or tasks within companies that should not be outsourced to AI systems and should remain within the bounds of human judgment?

Jarrahi: AI adoption will be a blend of automation and augmentation. AI systems can automate mundane, repetitive tasks previously performed by humans, but significant areas still require human involvement. For example, even if machines can make near-optimal decisions based on historical data, humans still need to participate directly in high-stakes decisions to weigh their ethical and organizational implications.

AI is still task-centric and focused on narrow domains. Organizational decision-making often requires a holistic view that incorporates insights across tasks, for example, understanding potential stakeholder reactions and how a particular decision will advance or inhibit the organization's long-term strategy. In this context, the human role is to contextualize AI's task-specific decisions. For instance, my own research has found that while algorithmic reasoning can inform a pathologist's diagnostic work, arriving at an informed, comprehensive diagnosis requires considering factors such as the patient's medical history, lifestyle, and overall health.[4]

Moreover, we are increasingly encountering what researchers call "agency laundering," in which organizations exploit the opacity of AI-driven decision-making to avoid liability. This is another area where human judgment is essential to ensure accountability and transparency.

Boinodiris: The purpose of AI is to augment human intelligence. AI models should actually do this well: perform better than humans at the task, be trained on good, representative data validated by subject-matter experts, avoid exacerbating inequities, and empower people in return. And, where applicable, models should earn people's trust by providing data lineage and provenance for their outputs, along with rigorous audits. If an AI model doesn't do these things for your company, or if it doesn't provide test/retest reliability, you're taking on risk.
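The test/retest reliability Boinodiris mentions can be checked with a simple experiment: run the same model twice on identical inputs and measure how often its outputs agree. The sketch below is hypothetical (the stand-in `predict` function and its numbers are illustrative, not IBM's method).

```python
import random

def predict(x, seed):
    """Hypothetical stand-in for a model with a stochastic component
    (e.g. sampling). Returns a 0/1 decision for input x."""
    rng = random.Random(seed + x)
    return 1 if rng.random() < 0.9 else 0  # mostly stable, occasionally flips

def retest_agreement(inputs):
    """Fraction of inputs on which two independent runs agree."""
    first = [predict(x, seed=1) for x in inputs]
    second = [predict(x, seed=2) for x in inputs]
    agree = sum(a == b for a, b in zip(first, second))
    return agree / len(inputs)

score = retest_agreement(range(1000))
print(f"Test/retest agreement: {score:.1%}")
```

A deterministic model would score 100% agreement; the further a model falls below that, the harder it is for users to trust and audit its individual outputs.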

(C) Kenan Institute

This story was originally published at: https://kenaninstitute.unc.edu/commentary/ai-in-practice-the-view-from-academia-and-industry/

Sources

[1] Wiggers, K. (March 20, 2023). Enterprises are investing more in AI, driven by the promise of the technology. TechCrunch. https://techcrunch.com/2023/03/20/corporate-investment-artificial-intelligence/

[2] Lauder, E. (April 19, 2017). Amazon CEO Jeff Bezos explains his approach to AI. AI Business. https://aibusiness.com/companies/amazon-s-ceo-jeff-bezos-explains-their-approach-to-ai

[3] Jarrahi, M.H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4): 577-586. https://doi.org/10.1016/j.bushor.2018.03.007

[4] Jarrahi, M.H., Davoudi, V., & Haeri, M. (2022). Keys to effective digital pathology powered by AI: Establishing a symbiotic workflow between pathologist and machine. Journal of Pathology Informatics, 13. https://doi.org/10.1016/j.jpi.2022.100156


