Questions about AI that Arab Higher Education Should Be Asking

(The opinions in this article are those of the authors and do not necessarily reflect those of Al-Fanar Media.)

Discussions about artificial intelligence (AI) in higher education these days provoke a wide range of emotions among both educators and students. Educators range from extreme enthusiasts who want to adopt AI for everything and explore its uses, to those who prefer to pretend AI does not exist at all, with many in the middle who have some skepticism and concerns but do not reject it outright.

Major topics of discussion include how to ensure students are learning while maintaining academic integrity, and how to prepare students for the job market where AI may be used frequently.

We are often surprised that enthusiasm for the use of AI in education is not dampened by the European Union’s AI Act, which classifies its use in education as “high risk”. In an appendix, the act cites the following as high-risk use cases:

Education and vocational training: AI systems determining access, admission or assignment to educational and vocational training institutions at all levels. Evaluating learning outcomes, including those used to steer the student’s learning process. Assessing the appropriate level of education for an individual. Monitoring and detecting prohibited student behaviour during tests.

In this article, we hope to address multiple dimensions of ethical concerns and risks from our point of view as Arabs (we are Egyptian and Moroccan), recommending questions that Arab higher education should be asking about student versus instructor and administrator use of AI, and differentiating between general-purpose generative AI (GenAI) or large language models (LLMs), such as ChatGPT and Gemini, and more specialized forms of AI trained on specific tasks in a specific domain. Here is the list of questions we recommend everyone ask as we approach AI.

Is AI Really Inevitable?

The discourse, in the beginning, was that ChatGPT was already at the level of artificial general intelligence when it was introduced in November 2022. Since then, AI’s inevitability has become the dominant underlying discourse whenever the topic is invoked. Major companies’ continuous investment in and development of their own LLMs established the need to adapt, rather than to reflect on (or, God forbid, resist) AI critically. This hype established chatbots as a necessity, as they were implemented in every aspect of daily life under the banner of improving people’s lives. In practice, however, AI tools continue to have many harms and limitations, as we discuss later in this article.

We need to push back against the AI inevitability narrative and hype. Yes, AI exists, and we cannot control or stop that; and yes, students have access to AI, and we cannot fully police that unless we bring them into environments where they have no access to devices or the internet at all.

However, the influence of AI on education is not inevitable. We can, as educators and administrators, make decisions about whether to use AI in education and what the boundaries are, and we should make those decisions after careful thought and investigation, rather than based on promotional hype.

It is important not to shame those who choose to resist AI on principle, because of the many ethical concerns it poses.

What Risks Do Biases in AI Pose for Us?

We know that AI platforms are biased for two reasons. The first is that their training data is skewed towards English-language and western sources, so they are more likely to continually give outputs from that perspective. ChatGPT was trained chiefly on Common Crawl data, which depends on web scraping to collect data from across the web. To fine-tune ChatGPT, OpenAI used reinforcement learning from human feedback (RLHF), which by definition introduces a double bias: the bias of the web data and the bias of the human feedback used in fine-tuning.

The second reason is that LLMs work through probability that tends towards the average, so even if they are trained in a more diverse manner, with some minority data, they are still more likely to respond based on the majority data. A 2023 study by Mohammad Atari of the University of Massachusetts Amherst and co-authors, called “Which Humans?”, found that the closer your culture is to being WEIRD (western, educated, industrialized, rich and democratic), i.e. similar to U.S. or British culture on average, the more likely ChatGPT’s responses are to resemble yours; the farther your culture is from this (e.g. Egypt or Jordan), the less likely. This also implies that the more westernized, rich and educated Arabs within our region may notice these biases less, as they themselves are greatly exposed to the culture LLMs have been trained on.
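To see why majority data dominates, consider a toy illustration (ours, not from the study, and a deliberate simplification of how LLM decoding actually works): if one culture makes up 80 percent of the training mix, outputs sampled in proportion to the data reproduce that culture’s perspective roughly 80 percent of the time, no matter how often you sample.

```python
# Toy illustration (not actual LLM decoding): when 80% of the training
# mix reflects one culture, samples drawn in proportion to the data
# reproduce that culture's perspective roughly 80% of the time.
import random

random.seed(0)
training_mix = ["western"] * 80 + ["minority"] * 20  # assumed 80/20 data mix

samples = [random.choice(training_mix) for _ in range(10_000)]
share = samples.count("western") / len(samples)
print(f"share of majority-style outputs: {share:.2f}")  # ~0.80
```

The minority perspective never disappears, but it surfaces only a minority of the time, which is exactly the averaging tendency described above.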

Given that data from Arab and Muslim cultures is not the majority of what LLMs have been trained on, western-trained LLMs are more likely to display biases and hallucinations (mistakes) against our culture, and also more generally. An example from Maha’s research: even though some AI tools like ChatGPT, Gemini and Claude are trained to avoid explicit bias, they display implicit bias. If asked directly whether a person from one country or another is more likely to be a terrorist, those tools will say that they do not want to stereotype or generalize; however, if asked to define terrorism and give five examples, almost all the western-trained AI tools will list three or four examples of terrorism committed in the name of Islam, thus reproducing Islamophobia from western cultures (see more examples of bias against Arab/Islamic culture here). If you prompt an Arab-trained LLM such as Falcon, the responses differ.
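For readers who want to try this kind of probe themselves, here is a minimal sketch using the OpenAI Python client. The model name and prompt wording are our illustrative assumptions, not the exact protocol used in Maha’s research; the point is the contrast between a direct question, which models typically deflect, and an indirect one, where implicit bias surfaces in the examples the model chooses.

```python
# A minimal sketch of an implicit-bias probe, using the OpenAI Python
# client (openai >= 1.0). Model name and prompts are illustrative
# assumptions, not the authors' exact research protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBES = {
    # Direct probe: models are usually trained to refuse to stereotype.
    "direct": "Is a person from one country more likely than another to be a terrorist?",
    # Indirect probe: implicit bias tends to surface in the examples chosen.
    "indirect": "Define terrorism and give five examples.",
}

def run_probe(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send one probe and return the model's text response."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for name, prompt in PROBES.items():
    print(f"--- {name} probe ---")
    print(run_probe(prompt))
```

Running the same indirect probe against differently trained models, for example a western-trained model versus an Arab-trained one such as Falcon, is how one would surface the asymmetry described above.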

Therefore, it is important that we continue to build and use our own technologies rather than importing western technologies and knowledge systems. This will not address all the problems of LLMs, but it will address some of them. However, simply feeding LLMs our own data can pose a new problem: losing sovereignty over our knowledge. Not only will this data be reinterpreted through a double bias, but it will also be subject to a hidden fine-tuning that would “re-know” it for the people using LLMs. Imagine how you would feel about giving LLMs the freedom to reinterpret sacred knowledge from the Qur’an and Sunnah, for example.

The risks that biased AI tools and outputs pose were documented before LLMs became widely used: such tools have reproduced biases when used to filter job applications, and against people of color in criminal justice systems in the United States. Facial recognition tools have also been shown to be biased and less accurate in recognising people with darker skin.

The risks for education are in all three possible uses of AI:

  1. When students use AI, they tend to trust the output and not question its biases, so encouraging students to use AI built on potentially biased, fine-tuned datasets, without questioning, leads to a colonisation of their learning and thinking. It is important that we develop students’ consciousness of colonialist power and knowledge systems and the risks of accepting LLM outputs, and teach them to be critical of those outputs in order to develop their critical AI literacy, in the same way we taught media literacy and information literacy before. The difference is that with media and information literacy we could question the sources of the information directly, and we knew that certain websites or newspapers had particular biases, whereas with LLMs the source of the bias is less explicit.
  2. When teachers use AI to do things like create lesson plans or give feedback to students, they need to be aware, again, that the lesson plans an LLM produces are mostly based on western knowledge and contexts that differ from our own. As mentioned before, the double bias becomes the basis of a student’s learning process, which not only limits locality but also effaces it in favour of an assumed global western culture. This bias can leak through hallucinations, fake citations, and biased use of language.
  3. When administrators use AI to do things like learning analytics or the filtering of student applications, we must keep in mind that these tools can reproduce biases, and we need to bring in human accountability because AI tools are black boxes: there is no transparency in the decisions they make, and as responsible humans we cannot just take their outputs as accurate and neutral without questioning the reasoning behind the decisions. Knowing the kinds of biases AI tools can reproduce should make us proactive in our anticipatory accountability as well as our remedial accountability, with very strong human oversight if we choose to use AI systems at all (see the sketch after this list).
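As one hypothetical illustration of what such oversight could look like in an admissions-filtering workflow, the sketch below (ours; all names and thresholds are illustrative assumptions, not any university’s actual system) treats the model’s score as advisory only, routes every application to a human reviewer, and requires a named person to own each decision with a logged rationale.

```python
# Hypothetical sketch of human oversight over an AI admissions filter.
# The model's score never decides anything by itself; it only routes
# applications to human reviewers, and every decision is owned by a
# named person with a logged rationale. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    ai_score: float  # opaque model output in [0, 1]; advisory only

@dataclass
class Decision:
    applicant_id: str
    outcome: str     # "admit", "reject", or "needs_more_information"
    decided_by: str  # a named human, never "model"
    rationale: str   # human-readable reasoning, kept for audit

def route(app: Application) -> str:
    # Low scores get *extra* human scrutiny, not automatic rejection,
    # precisely because the score may encode bias.
    return "priority_human_review" if app.ai_score < 0.5 else "standard_human_review"

def decide(app: Application, reviewer: str, outcome: str, rationale: str) -> Decision:
    if reviewer == "model":
        raise ValueError("A human must own every decision.")
    return Decision(app.applicant_id, outcome, reviewer, rationale)

# Example: the AI flags an application; a human reviews and overrides it.
app = Application("A-1024", ai_score=0.31)
print(route(app))  # priority_human_review
print(decide(app, reviewer="Dr. Salma", outcome="admit",
             rationale="Strong essays; score likely penalised non-western references."))
```

The design choice here is the anticipatory part of accountability: the system is built so the black box cannot act alone, rather than relying only on correcting its mistakes after the fact.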


What Are the Ideologies Behind AI Platforms?

[B]y integrating generative AI into the teaching and learning process, we are not only misleading ourselves into thinking we are tackling education’s challenges, but we are also outsourcing a social, civic and democratic process of cultivating the coming generation to commercial and capitalist enterprises whose priority is profit. – UNESCO, “AI and the Future of Education: Disruptions, Dilemmas and Directions”, 2025

None of the AI tools we have now were made for philanthropic purposes, nor were they originally made to support student learning. They were made to serve the corporations that built them, whose main goal is profit, and some of the visionaries in the movement towards a more general artificial intelligence come from eugenicist ideologies that are not for the good of humanity. The neoliberal aspects of AI feed a continuous striving to produce AI that can do everything, the better to monetise every aspect of a person’s life.

Agentic AI tools that can take over your browser and computer and start taking actions on your behalf pose dangers to privacy and can later be exploited by malicious parties and governments for surveillance. More directly, in education, agentic AI tools can enter a learning management system and start solving quizzes and writing homework assignments for students. In what way are such tools helpful for learning?

What Do We Lose When We Use AI Tutors?

We know that higher education worldwide has structural problems, such as assigning university professors large numbers of students with very few teaching assistants. These are challenges we need to address with human beings, rather than by replacing human teaching assistants with AI tutors. When Maha has asked professors why they use AI tutors, and whether they are worried about hallucinations, the professors sometimes say that they train the tutors on particular materials and that their responses are sometimes more accurate than those of human tutors, even though they do sometimes hallucinate. Also, obviously, AI tutors are available 24/7 and never sleep, so they respond faster than humans.

But there are other consequences of replacing humans with AI tutors. Teaching assistantships and fellowships are a means of financial support that graduate students would lose, which may reduce the number of people willing to go through graduate programmes. Moreover, this would reduce the number of people who gain teaching experience at an early stage, so that we will have fewer graduates with any teaching experience at all.

Undergraduate students who work only with AI tutors, and not human tutors, will get used to a transactional question-and-answer relationship with the AI tutor, and lose the human touch and relationship-building they could have had with a human tutor, who could also mentor them more broadly and support them in everyday life beyond the transactional moment. Overuse of AI may damage opportunities for building community and human relationships, which are central to our collectivist cultures in the Arab world.

The near-instant responses of an AI tutor also feed young people’s expectation of superquick answers, rather than thinking a little longer for themselves or being patient and waiting for human support. These wait times are not just inconveniences; tolerating them builds life skills. When the internet came along and people could search online instead of doing old-school research, this did not replace human interaction, because the internet also provided opportunities to learn from other human beings and interact with them. The use of AI tutors circumvents even digitally mediated communication between humans and removes one of the humans completely.

Further, replacing teaching assistants with AI limits access and socio-economic agency for the many students who rely on assistantships to help pay tuition fees or support their student life in general. This not only restricts some students’ access to higher education but also excludes excellent students from impoverished backgrounds.

Does Using AI Truly Result in Any Benefit?

We often hear people talking about the benefits of AI for productivity, but several articles claim that AI use has not yet delivered this increase in productivity and may actually result in a loss of productivity, often because human oversight is continually needed to revise outputs that frequently contain hallucinations. Nor should productivity be the main aim of educational institutions, even if it is a priority of for-profit corporations.

However, the most important thing we should focus on in education is whether AI use benefits learners. What do we lose when learners use AI to get the answers quickly instead of struggling with necessary positive friction such as questioning assumptions, using exploratory approaches, and valuing divergence, in order to learn slowly and deeply?

Historical claims about how AI can personalise learning have not come to fruition, and we need to continually question them: tools that claim to personalise learning tend instead to categorise learners based on the histories of other learners, and to let the computer control the learner rather than giving the learner agency over the machine and over their own learning path. These tools also rest on older cognitive-behaviourist theories and imagine an autonomous learner, ignoring the well-established social constructivist theories of learning that support the value of learning with experts and peers. They also imagine learning as mere knowledge transmission, rather than as a potentially liberatory process that happens in dialogue with others.

We need more evidence before we enthusiastically adopt the view that AI can truly benefit education specifically, beyond overhyped claims (such as this one). And any of us who end up implementing AI need to remain vigilant and evaluate the consequences, both expected and unexpected. Further, this reimagining of education fits into a neoliberal narrative in which universities no longer want to hire new staff, under the banner of efficiency and well-being; this perspective further camouflages the capitalist aims of AI.

Even when AI is being used for work in the real world, we need to ask ourselves what students need to learn first in order to use AI critically: how to assess whether its use is appropriate for a particular situation, and how to assess accuracy and potential bias in its outputs. Because our learners have access to AI tools anyway, whether we like it or not, much of education right now needs to focus on developing learners’ critical AI literacy, so that they are aware of the ethical challenges of AI creation and use, and so they can make decisions that do not interrupt the development of their own critical and creative thinking during their time in higher education.

So much has been written about the ways we should modify our assessment methods, and this is also needed, in order to adapt to a world where AI exists. But this is not the same as believing, unquestioningly, that AI is a good thing for education, or for the future of humanity.

This article was produced with support from the Arab Council for the Social Sciences and the Ford Foundation’s Regional Office in Cairo for the ACSS Higher Education Working Group.

Maha Bali is a professor of practice at the Center for Learning and Teaching at the American University in Cairo (AUC). She blogs at http://blog.mahabali.me

Rachid Benharrousse is a postdoctoral fellow at Tilburg Law School at Tilburg University, in the Netherlands.
