Three thresholds for using AI

The use of AI reveals a lack of, or disregard for, knowledge, integrity, and accountability. If we want to avoid AI making us ignorant, dishonest, and unaccountable to broader societal interests, we need to commit to raising and maintaining fairly high standards in these three respects.

Once, during a gathering of deans and other academic leaders, my university's faculty development center ran an interesting warm-up activity. They handed out green, yellow, and red stickers and asked participants to place one next to each of several task statements posted on the wall, in answer to the question: “How well do you think ChatGPT can perform this task?” After we had all placed green stickers where we thought ChatGPT could perform well, yellow where we thought it could perform moderately, and red where we thought it could not perform that kind of task, an open conversation ensued.

The conversation revealed a crucial reality about public perception of artificial intelligence (AI) tools: people, including professors, overestimate AI’s capabilities in areas outside their own fields, with serious consequences. For example, I was the only writing professor in the room, and I noticed that my philosophy colleagues were the only ones who put a red sticker under the statement that ChatGPT could draft the essay assignments students would receive. It was shocking to see computer scientists, economists, medical researchers, and business professors who believed that AI tools “could do writing just fine” (to borrow the words of a faculty trainer at another event). Similarly, while most participants put green stickers under ChatGPT’s ability to complete coding assignments, the computer science professors in the room were not convinced. They knew a lot about AI, just as I knew a lot about writing. The same pattern held in other areas.

Before the advent of AI, computer scientists had no machine assistants that wrote for them, writing teachers had no machine assistants that claimed to know everything about financial decisions, and financial managers had no machine assistants that “did computing no problem.” Seeking human assistance cost a lot of time, money, and effort. Now those costs are so low that it is easy, and problematic, to lower our standards on important matters. In this essay, I discuss how the use of AI is revealing people’s lack of, or disregard for, knowledge, integrity, and accountability, and how we might raise the threshold in these three areas. In other words, if we want to avoid AI rendering us ignorant, dishonest, and thus unable to contribute to the broader societal good, we need to commit to raising and maintaining fairly high standards in these respects.

Maintaining the thresholds

AI tools seem to encourage people to lower their standards for the accuracy and comprehensiveness of content, for reliability and trustworthiness in communication, and for honesty and accountability in work. Public, media, and academic debates increasingly suggest that scientists tolerate AI-generated arguments when applying for grants with serious societal impact, that communications professionals trust AI tools to make significant financial decisions on their behalf, and that financial managers are comfortable letting AI applications automate financial transactions on behalf of their companies and clients.

AI technologies are exposing the weaknesses of every profession and of society as a whole. They are exposing our ignorance: if we are impressed by merely plausible patterns of words, accepting them as facts or knowledge without checking their validity, our threshold of knowledge on that subject is lower than it should be. We use AI to pretend to know things we don’t. AI tools are exposing our dishonesty: if we spare ourselves the effort to research, read, think, and develop our ideas, we are asking others to spend their time on ideas we did not spend time on ourselves. We use AI to give the impression that we have skills we don’t have. And AI tools are exposing our irresponsibility toward the environment and the good of society. Every time we use AI “for fun,” we contribute to the inordinate amount of energy consumed by the systems behind the tools. Every time we use AI for work, we potentially endorse a knowledge system that does not represent society equally and fairly.

AI data sets will continue to underrepresent minority societies, cultures, and epistemologies. AI algorithms will continue to reflect the rhetorical thought patterns of the dominant societies that create and control the systems, and AI markets will continue to advance the interests of the wealthy and powerful, especially in societies that have colonized, marginalized, and often erased the epistemologies of others.

Of course, AI is a fantastic new development in that it enables new things for all professionals, just as fire, cars, and computers did in the past. But while this new technology is advancing at breakneck speed, knowledge, skill, and accountability among its users are not keeping pace. It is not enough for writing teachers to have reasonable expectations about AI writing. Insofar as their writing impacts the world, scientists and financial managers also need a reasonable threshold of knowledge, skill, and honesty about their writing. They should not use generated texts in interpersonal, legal, or financial interactions without considering all the potential harms to others. In fact, we all need to raise our thresholds as everyday users. We cannot afford to drift into a society in which voters are insensitive to AI-generated misinformation, in which neighbors communicate with each other in AI-created language, and in which public leaders lack the nuanced understanding and courage to prevent various AI harms.

Be aware of exposure

Every technological or social change not only brings out what is in us but also opens up new possibilities. When radio and television were developed, we could hear what people would say when they were not interacting directly with their audiences. The Internet, and social media in particular, brought about profound changes by enabling interactions between strangers. AI is revealing what people will do when they can say or write things they did not create or have not thought through. Further developments in AI will mediate all of our communications, texts, and ideas in ways we cannot currently imagine. One thing is clear: AI will reveal our ignorance, dishonesty, and irresponsibility toward society and the environment. It is up to us to decide how high we set our thresholds for knowledge, honesty, and ethical and social responsibility.

I had an eye-opening experience with these thresholds while hosting an AI workshop for graduate students. For our writing support group’s end-of-year lunch and discussion, with the help of a graduate assistant, I created several activities to let PhD and Master’s students see how they react to their own and others’ use of AI. In the first activity, we asked students to define complex terms in their field without the assistance of AI, and then to do the same with ChatGPT alone. When we asked them to grade the AI definitions, they gave them a 7 or 8 out of 10. However, when asked how many points they would give as teaching assistants if undergraduates submitted the same definitions, they said they would give no marks, or only a few, depending on how well the students understood the concepts. It was clear that they did not want to give marks without evidence of learning.

It was no surprise that the graduate students gave ChatGPT such high ratings, but they may not have realized whose time and effort the generated answers were saving. So, in the second activity, we asked them to imagine using ChatGPT to write an application for a research or faculty position in industry after their graduate degree. Would they be comfortable submitting that application with some revisions, given that the “quality of writing” was much better than their own? They answered yes. But if they were reading that application as members of a hiring committee, would they hire someone who submitted it? They answered no. This time, it became clearer that they did not care whether they actually had the qualifications or the skills to land the job. They framed the gap between their answers in terms of their qualifications, but those answers seemed to be driven by convenience and self-interest rather than honesty and accountability.

In the final activity, we asked students to imagine that they had worked at the same company for five years and had spent six months working with a team to create a SWOT analysis report. Would they enter the “data” into ChatGPT and have it “create the report”? Answers varied. And how would they feel if their company’s manager fed the report into ChatGPT to generate an email response praising their work? Many students said they would consider starting to look for a new job.

Opportunity

We need to keep in mind that behind this exposure of our ignorance, dishonesty, and irresponsibility lies great opportunity. If we learn how to use the assistance of AI to filter out unnecessary information and produce valid and valuable knowledge, we can improve our thought processes and advance and apply new knowledge for greater societal benefit. If we learn to be transparent and honest about our use of AI and to focus on the benefit of others as much as our own, we can make the world a better place for everyone. And if we reduce harm to the environment and mobilize AI to promote social justice, we may find new opportunities to amplify those efforts.

Unfortunately, the status quo is not inspiring. AI companies are releasing products that are not yet ready, dismantling their safety and ethics teams, growing more able to ignore or circumvent government regulations (where any exist), and increasingly listening to investors rather than attending to public safety or societal concerns. To make matters worse, as the use of AI permeates societies around the world, more and more of us are adopting AI tools with very low standards of knowledge, integrity, and accountability to the broader society.

Educators can start by raising and maintaining these standards, but how society as a whole can do the same for the public good is far less clear. There are no easy solutions to epistemic bias. Yet academics and other professionals must take ethical responsibility for countering the spread of disinformation, the exacerbation of injustice, and the reinforcement of irresponsibility toward society. They can do this by fostering critical AI literacy with a strong global DEI dimension and through public-facing research. Debate can and must shape practice.


